
JULY 2021

DETAILED REPORT

July 2021 Coordinated Inauthentic Behavior Report

We're constantly working to find and stop coordinated campaigns that seek to manipulate public debate across our apps.

PURPOSE OF THIS REPORT

Over the past four years, we've shared our findings about coordinated inauthentic behavior we detect and remove from our platforms. As part of our regular CIB reports, we're sharing information about all networks we take down over the course of a month to make it easier for people to see progress we're making in one place.

WHAT IS CIB?

We view CIB as coordinated efforts to manipulate public debate for a strategic goal where fake accounts are central to the operation. There are two tiers of these activities that we work to stop: 1) coordinated inauthentic behavior in the context of domestic, non-government campaigns and 2) coordinated inauthentic behavior on behalf of a foreign or government actor.

COORDINATED INAUTHENTIC BEHAVIOR (CIB)

When we find domestic, non-government campaigns that include groups of accounts and Pages seeking to mislead people about who they are and what they are doing while relying on fake accounts, we remove both inauthentic and authentic accounts, Pages and Groups directly involved in this activity.

FOREIGN OR GOVERNMENT INTERFERENCE (FGI)

If we find any instances of CIB conducted on behalf of a government entity or by a foreign actor, we apply the broadest enforcement measures, including the removal of every on-platform property connected to the operation itself and the people and organizations behind it.

CONTINUOUS ENFORCEMENT

We monitor for efforts to re-establish a presence on Facebook by networks we previously removed. Using both automated and manual detection, we continuously remove accounts and Pages connected to networks we took down in the past.

SUMMARY OF JULY 2021 FINDINGS

Our teams continue to focus on finding and removing deceptive campaigns around the world, whether they are foreign or domestic. In July, we removed two networks, from Russia and Myanmar. In this report, we're also sharing an in-depth analysis by our threat intelligence team into the network operated from Russia, to add to the public reporting on its activity across over a dozen different platforms. We have shared information about our findings with industry partners, researchers, law enforcement and policymakers.

We know that influence operations will keep evolving in response to our enforcement, and new deceptive behaviors will emerge. We will continue to refine our enforcement and share our findings publicly. We are making progress rooting out this abuse, but as we've said before, it's an ongoing effort and we're committed to continually improving to stay ahead. That means building better technology, hiring more people and working closely with law enforcement, security experts and other companies.

● Total number of Facebook accounts removed: 144

● Total number of Instagram accounts removed: 262

● Total number of Pages removed: 13

● Total number of Groups removed: 8

NETWORKS REMOVED IN JULY 2021:

1. Myanmar: We removed 79 Facebook accounts, 13 Pages, eight Groups, and 19 Instagram accounts in Myanmar that targeted domestic audiences and were linked to individuals associated with the Myanmar military. We found this activity after reviewing information about a portion of it shared by a member of civil society in Myanmar. Our investigation revealed some links between this operation and the activity we removed in 2018.

2. Russia: We removed 65 Facebook accounts and 243 Instagram accounts from Russia that we linked to Fazze, a subsidiary of a UK-registered marketing firm, whose operations were primarily conducted from Russia. Fazze is now banned from our platform. This operation targeted audiences primarily in India, Latin America and, to a much lesser extent, the United States. We found this network after reviewing public reporting about an off-platform portion of this activity.

  ‌

01

We removed 79 Facebook accounts, 13 Pages, eight Groups, and 19 Instagram accounts for violating our policy against coordinated inauthentic behavior. This network originated in Myanmar and targeted domestic audiences in that country.

The people behind this activity used duplicate and fake accounts (some of which were already detected and disabled by our automated systems) to post, comment on their own content and create Groups. Some of the accounts posed as protesters and members of the opposition and joined pro-democracy Groups, while others ran pro-military Pages that claimed to expose anti-Tatmadaw protesters. A handful of accounts used photos likely generated using machine learning techniques like generative adversarial networks (GAN). This network also amplified the content posted by Pages criticizing the National League for Democracy. They posted in Burmese about news and current events in the country, including content on both sides of the political debate in Myanmar: in support and criticism of anti-military protests, the opposition and the military coup.
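The report does not say how GAN-generated profile photos are identified. Purely as an illustrative aside, one heuristic that open-source researchers have applied to suspected StyleGAN-era faces is that the eyes tend to sit at nearly identical pixel coordinates across a batch of generated images. The sketch below is a minimal, hypothetical version of that check, assuming the third-party face_recognition library and a local folder of equally sized profile photos; it is not Facebook's detection method.

```python
# Hypothetical sketch: flag a batch of same-sized profile photos whose eye landmarks
# sit at near-identical coordinates, a known tell of StyleGAN-style generated faces.
# Assumes the third-party `face_recognition` library; this is NOT Facebook's method.
import os
import statistics
import face_recognition

def eye_center(path):
    """Return the mean (x, y) of both eyes for the first face found, or None."""
    image = face_recognition.load_image_file(path)
    landmarks = face_recognition.face_landmarks(image)
    if not landmarks:
        return None
    eyes = landmarks[0]["left_eye"] + landmarks[0]["right_eye"]
    xs = [p[0] for p in eyes]
    ys = [p[1] for p in eyes]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def looks_like_gan_batch(folder, max_spread=5.0):
    """True if eye positions across the batch barely vary (suspicious alignment).
    The 5-pixel threshold is an assumption, not a published constant."""
    paths = [os.path.join(folder, f) for f in os.listdir(folder)
             if f.lower().endswith((".jpg", ".jpeg", ".png"))]
    centers = [c for c in (eye_center(p) for p in paths) if c]
    if len(centers) < 3:
        return False  # not enough faces to judge
    spread_x = statistics.pstdev(c[0] for c in centers)
    spread_y = statistics.pstdev(c[1] for c in centers)
    return spread_x < max_spread and spread_y < max_spread

print(looks_like_gan_batch("profile_photos/"))  # hypothetical folder name
```

In practice a check like this only raises a flag for human review, since authentic portraits cropped to a template can also cluster tightly.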

We found this activity after reviewing information about a portion of it shared by a member of civil society in Myanmar. Our investigation revealed some links between this operation and the activity we removed in 2018. Although the people behind it attempted to conceal their identities and coordination, our investigation found links to individuals associated with the Myanmar military.

● Presence on Facebook and Instagram: 79 Facebook accounts, 13 Pages, eight Groups and 19 Instagram accounts.

● Followers: About 55,500 accounts followed one or more of these Pages, around 10,000 people joined one or more of these Groups, and about 1,700 accounts followed one or more of these Instagram accounts.

● Advertising: Around $650 in spending for ads on Facebook and Instagram, paid for primarily in US dollars.

  ‌

  ‌

  ‌

Below is a sample of the content posted by some of these accounts and Pages.

[Image: sample post by one of the network's Pages. Translated Page name: "Discovering the thieves".]
02

IN-DEPTH ANALYSIS
INFLUENCE THE INFLUENCERS: COVID VACCINE OPERATION FROM RUSSIA

By Ben Nimmo, Global IO Threat Intelligence Lead, and the IO Threat Intelligence Team

EXECUTIVE SUMMARY:

We removed 65 Facebook accounts and 243 Instagram accounts for violating our policy against foreign interference, which is coordinated inauthentic behavior on behalf of a foreign entity. This network operated across over a dozen platforms and forums but failed to build an audience. It originated in Russia and targeted audiences primarily in India, Latin America and, to a much lesser extent, the United States. Our investigation found links between this campaign and Fazze, a subsidiary of a UK-registered marketing firm, whose operations were primarily conducted from Russia. Fazze is now banned from our platform.

This campaign came in two distinct waves, separated by five months of inactivity. First, in November and December 2020, the network posted memes and comments claiming that the AstraZeneca COVID-19 vaccine would turn people into chimpanzees. Five months later, in May 2021, it questioned the safety of the Pfizer vaccine by posting an allegedly hacked and leaked AstraZeneca document. It is noteworthy that both phases coincided with periods when a number of governments, including in Latin America, India and the United States, were reportedly discussing emergency authorizations for these respective vaccines.

This campaign functioned as a disinformation laundromat. It created misleading articles and petitions on multiple forums including Reddit, Medium, Change[.]org, and Medapply[.]co[.]uk. It then used fake accounts on social media, including Facebook and Instagram, to seed and amplify this off-platform content, using crude spammy tactics. The crux of the campaign, though, appeared to be engaging influencers with pre-existing audiences on Instagram, YouTube and TikTok to post content and use particular hashtags without disclosing the origin of the posts. This use of influencers appeared to be in line with the firm's advertised services, which included access to "a large list of bloggers from Youtube, Instagram, and Facebook... [where] Accounts are ready to post your ads for reasonable pricing. Work with bloggers directly without any 3rd party."

The vast majority of this campaign fell flat among the audiences it targeted, with nearly all of its Instagram posts receiving zero likes. At the time of this writing, its English-language petition on Change[.]org had gained only about 550 signatures, and its Hindi-language petition had gained fewer than 900 signatures. Only the paid influencers' posts attracted some limited attention. However, this reliance on external influencers became the operation's undoing: in May, a handful of them exposed the anti-Pfizer efforts.

This continues the trend we highlighted in our recent Threat Report: influence operations increasingly target authentic influential voices to carry their messages. Through them, deceptive campaigns gain access to the influencer's ready-made audience, but it comes with a significant risk of exposure.

Another aspect of this campaign worth highlighting is its operations across multiple internet services at once, which makes it more challenging for any one platform to see the full picture and take action on the whole of the campaign. This is why a whole-of-society response to such disinformation campaigns is critical. The influencers who came forward in Germany and France played a key role in exposing the first clue to this activity. Open-source researchers and journalists were able to report on much of the operation's May activity. Our internal investigation led to uncovering the full scope of this network on our platform, including its activity in 2020, which we shared with our industry peers. Our analysis benefited from research by CNN, The Daily Beast and Graphika. This collaboration between different members of the community is vital to exposing influence operations and understanding their impact.

Still, some questions remain about aspects of this campaign, including who commissioned Fazze to run it, that would benefit from further research by the defender community. One feature of these for-hire influence operations is that they allow the ultimate beneficiary to obfuscate their involvement. Another question relates to how the "hacked and leaked" document came into Fazze's hands. As with any influence operation, understanding the motive behind leaks like these is key to putting the operation in context.

As part of disrupting this operation, we took down its accounts, including authentic assets of the people behind this network, and blocked domains associated with its activity. We also notified people who we believe may have been contacted by this network and shared our findings with law enforcement, independent researchers, policy-makers and our industry peers so they too can take action if they find violating activity.

  ‌

  ‌

  ____________________________________________________________________________________‌  ‌

TAKEDOWN BY THE NUMBERS

● Presence on Facebook and Instagram: 65 Facebook accounts and 243 Instagram accounts.

● Followers: About 24,000 accounts followed one or more of these Instagram accounts.

● Advertising: Around $200 in spending for ads on Facebook and Instagram, paid for primarily in US dollars, euros and Russian rubles. That includes the entirety of historic advertising activity by both inauthentic and authentic accounts removed as part of this network. We haven't seen ad spend associated with the vaccine-focused campaign described in this report.

  ____________________________________________________________________________________‌  ‌

  ‌

NOVEMBER-DECEMBER 2020: THE ASTRAZENECA PHASE

The operation's first phase centered around the false claim that the AstraZeneca vaccine was dangerous because it was derived from a chimpanzee adenovirus. It primarily targeted audiences in India and Latin America, with sporadic and unsuccessful attempts to reach audiences in the United States. This phase of the on-platform operation has not been previously reported.

The Fazze operation began with the creation of two batches of fake Facebook accounts in late 2020, which likely originated from account farms in Bangladesh and Pakistan. They posed as being based in India.

The accounts initially posted a small volume of non-COVID content, typically about Indian food or Hollywood actors. In late November, however, the operation began using some of them to post on blogging platforms and petition websites, including Medium and Change[.]org. These blogs and petitions, in English and Hindi, claimed that AstraZeneca manipulated its COVID-19 vaccine trial data and used an untried technology to create the vaccine.

  ‌

  ‌

  ‌

  ‌

  ‌

Image

Screenshot of a petition on Change[.]org, created by the operation and addressed to the World Health Organization on November 25, 2020.

In December, as the Indian government was discussing emergency authorization for the AstraZeneca vaccine, the operation began using fake accounts on both Facebook and Instagram to promote its blogs and petitions, together with a large stock of memes. All these memes featured the suggestion, or even the explicit claim, that the AstraZeneca vaccine would turn its subjects into chimpanzees. Many of the memes featured scenes from the 1968 movie "Planet of the Apes".
  ‌
Image

Post by the network.

The fake Facebook accounts focused on low-volume, targeted posting: each one typically posted three to six memes, usually in Hindi, together with a link to one of the operation's off-platform pieces of content and a brief comment in Hindi or English. These posts received few if any likes, and some were ridiculed by real people.

  ‌

  ‌

  ‌

Image

Meme posted by this network's fake accounts, together with a link to one of the operation's petitions.

Caption

"Walter, come on over, don't hesitate! AstraZeneca's vaccine is safe! Yesterday we took the vaccine ourselves..."

  ‌

Image

Conversation between two of the operation's fake accounts, responding to an unrelated post on a Page called "Texans for Vaccine Choice".

  ‌

  ‌
  ‌
Image

Comment from a real person mocking one of the operation's posts, followed by critical responses from three of the operation's fake accounts.
The Instagram activity was crude and spammy, and organized around a handful of hashtags. Two of them were in English: #AstraZenecakills and #AstraZenecalies, in addition to their equivalents in Portuguese: #AstraZenecamata and #AstraZenecamente. The last one was #stopAstraZeneca. The operation likely originated these hashtags: neither #astrazenecakills, #stopastrazeneca nor #astrazenecalies appeared to have been used on Instagram before. Between December 14 and December 21, about 10,000 posts that included the operation's hashtags were made, often with links to the operation's off-platform articles.

Most of the accounts posted these hashtags dozens or hundreds of times in quick succession, likely automated, with some detected and disabled by our automated systems. While the volume was high, the campaign saw minimal success. On one of the hashtags used by the operation, the top-performing post received five likes, whereas the vast majority got zero. Altogether, the operation's Instagram posts attracted around 1,000 likes combined, with most receiving zero.
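Counts like the ones above (hashtag volume over a date range, whether a hashtag had appeared before, likes per post, rapid-fire posting by single accounts) can be reproduced by outside researchers from an exported dataset of a network's posts. The following pandas sketch is purely illustrative and not our tooling; the file name and columns (a hypothetical posts.csv with account, timestamp, hashtags, likes) are assumptions.

```python
# Illustrative sketch (not Facebook's tooling): measure hashtag volume, first use,
# and burst-like posting cadence from a hypothetical export of a network's posts.
# Assumes posts.csv with columns: account, timestamp, hashtags (space-separated), likes.
import pandas as pd

posts = pd.read_csv("posts.csv", parse_dates=["timestamp"])
posts["hashtags"] = posts["hashtags"].fillna("").str.lower().str.split()

# Volume, first-seen date and total engagement per hashtag. A hashtag whose first
# appearance coincides with the campaign start suggests the operation originated it.
exploded = posts.explode("hashtags").dropna(subset=["hashtags"])
per_tag = exploded.groupby("hashtags").agg(
    n_posts=("timestamp", "size"),
    first_seen=("timestamp", "min"),
    total_likes=("likes", "sum"),
)
print(per_tag.sort_values("n_posts", ascending=False).head(10))

# Burst detection: accounts posting dozens of times with only seconds or a few
# minutes between posts are a crude but useful automation signal.
posts = posts.sort_values(["account", "timestamp"])
posts["gap_s"] = posts.groupby("account")["timestamp"].diff().dt.total_seconds()
bursty = (
    posts.groupby("account")
    .agg(n_posts=("timestamp", "size"), median_gap_s=("gap_s", "median"))
    .query("n_posts >= 20 and median_gap_s < 120")  # thresholds are assumptions
)
print(bursty)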

In another sign of the sloppy nature of this campaign, most of the posts that used Portuguese-language hashtags between December 14 and December 16 attached them to Hindi-language memes. From December 17 onwards, they accompanied the hashtags with the same memes, but translated into Spanish. ("AstraZeneca mata", "AstraZeneca kills", is the same in Spanish and Portuguese, whereas "AstraZeneca lies" is "AstraZeneca mente" in Portuguese, but "AstraZeneca miente" in Spanish.) Perhaps not surprisingly, this spammy amplification also failed to attract attention.

Image

Meme posted by one of the Instagram accounts on December 15, 2020. The meme was in Hindi, but the account had an English name. It was operated from Russia, and it used a Portuguese hashtag.
  ‌

Image

Two posts by the same fake account on December 14 and December 30, showing the same meme in Hindi and then in Spanish. The Hindi meme came with an English comment, the Spanish meme with a Hindi comment. Neither post gained reactions.

Upper post caption: "Now it's your turn to get vaccinated with AstraZeneca."

Lower post text: "The vaccine was based on a chimpanzee gene from the company AstraZeneca."

Lower post caption: "OK, it's your turn for the AstraZeneca vaccine."

  ‌

  ‌

While this spammy campaign was still running, a handful of health and wellbeing influencers posted Instagram stories that used the same hashtags, referenced the fact that the AstraZeneca vaccine was derived from a chimpanzee adenovirus and shared links to the petitions that the Fazze operation had created. While possible, it appears highly unlikely that these influencers shared the operation's work organically. Given the public reporting about this network's engagement with influencers in May, it is likely that the operation used similar tactics in December 2020 and asked unwitting people to amplify this campaign against AstraZeneca across social media platforms.

The Instagram spam activity ended on December 21. The Facebook accounts continued posting at a very low level, roughly a post a week, into early January. Between December 30 and January 18, the Argentinian, Indian, and Brazilian governments granted emergency authorization to the AstraZeneca vaccine. On January 6, the operation stopped posting.

  ‌

  ‌

  ‌

MAY 2021: THE PFIZER PHASE

After months of inactivity, the operation resumed in May 2021 to claim that Pfizer's COVID-19 vaccine had caused a much higher "casualty rate" than other vaccines. This time, the on-platform activity appeared to be limited to a few dozen Facebook posts in English that primarily targeted Pages and Groups in the United States, which received almost no reaction, and a single post by an influencer in Brazil. Open-source reporting confirms that Fazze also sent emails to influencers in France and Germany, who ultimately exposed it, and likely recruited a YouTube influencer in India. We did not see evidence of this phase of the operation targeting India, France and Germany on our platform.

This phase of the operation coincided roughly with a period when the European Medicines Agency and Brazil were discussing approving the Pfizer vaccine for adolescents, and came just after the US Food and Drug Administration approved it for adolescents on May 10. Pfizer was also reportedly in talks with the Indian regulator in early May over an "expedited approval pathway" for its vaccine.

At the core of this second phase of the operation was a 12-page document comparing the efficacy of different COVID-19 vaccines. The operation claimed that it had been hacked and leaked from AstraZeneca. It included a table which purported to show that the Pfizer vaccine caused a much higher casualty rate than other vaccines; the table has since been described by BBC investigative journalists as "cobbled together from different sources and taken out of context". It has not been confirmed how the document made it into this operation's hands.

The Pfizer-focused phase of the operation began on May 14 with three articles on Medium, Reddit and ethicalhacker[.]org, posted within roughly 90 minutes. Each article claimed that AstraZeneca had been hacked, and shared a low-quality screenshot of the "casualty rate" table.

As before, the operation did not immediately promote these fake articles on our platform. Instead, on May 18, one of its Facebook accounts was used to create an account on a UK-based medical forum for students and to post the entire "hacked" document there. The next day, a second wave of articles appeared on third-party websites to promote the "Pfizer casualty rate" narrative, referencing the first wave of articles as a source. The operation then used its Facebook accounts to post links to these articles in different health- and news-related Groups and Pages. This time, all the posts were in English and primarily focused on US-based audiences, but the volume of posting remained very low, typically two to four posts per account, which received almost no reaction.

  ‌

  ‌

Image

Screenshot of the post on the British medical forum claiming to leak the full AstraZeneca document, May 18.
Image

Screenshot of a comment by the same fake Facebook account, responding to a local news story in Florida on May 20, 2021. The account shared a link to one of the operation's early articles.

According to the French investigative publication Fact & Furious, the operation sent influencers on YouTube, Instagram and TikTok a detailed briefing containing the campaign arguments, articles, and instructions to add links in the influencers' bios. Interestingly, unlike in December, the operation itself did not rely on spammy tactics or use any hashtags. However, it reportedly instructed the influencers to use generic hashtags, such as #corona and #covid19, likely in an attempt to inject their content into the mainstream pandemic-related conversation online.

Image

Part of the tasking that the Fazze campaign was reported to have sent to influencers. (Source: factandfurious.com)
The attempt to reach influencers proved to be the operation's undoing. Rather than taking Fazze's money (reportedly 2,000 euros), one French and one German influencer exposed the campaign. This triggered a wave of open-source research that identified Fazze's corporate ownership and the influencers it targeted. In response, Fazze appears to have deleted most of its fake articles by the end of May, and its staff removed the references to Fazze from their social media bios. According to public reporting, when journalists approached the influencers in Brazil and India, they, too, deleted their content.

Image

Screenshot of the profile of one of the accounts run by the operation, subsequent to its exposure in May 2021.
