WHISTLEBLOWER AID
-- REDACTED FOR CONGRESS --
Report government and corporate lawbreaking.
Without breaking the law.
— ANONYMOUS WHISTLEBLOWER DISCLOSURE —
SEC Office of the Whistleblower
Via Online Portal & Fax
Re: Securities Law Violations by Facebook, Inc. (NASDAQ: FB), SEC TCR
Facebook misled investors and the public about the negative
consequences of its algorithms, which claim to prioritize
“meaningful social interactions” or “MSI” (e.g., reshares of
friends’ posts) but which actually promote virality of
polarizing misinformation and hate speech.
To the SEC Office of the Whistleblower:
1. The instant letter is one of multiple disclosures related to the above-captioned
matter. Our anonymous client is disclosing original evidence showing that
Facebook, Inc. (NASDAQ: FB) has, for years past and ongoing, violated U.S.
securities laws by making material misrepresentations and omissions in
statements to investors and prospective investors, including, inter alia, through
filings with the SEC, testimony to Congress, online statements and media stories.
2. Summary. Since 2018, Facebook (and in particular Mark Zuckerberg) has prioritized
“meaningful social interactions” or “MSI,” which means that its algorithms are more
likely to show content that is predicted to get reactions or “content” (e.g.,
comments, reshares, or “likes”) from friends or family. However, although Facebook
promotes “MSI” as being beneficial for relationships and wellbeing, the algorithm
increases divisive, hateful content.
BACKGROUND AND MATERIALLY FALSE AND MISLEADING STATEMENTS
3. As background, “meaningful social interactions” or “MSI” is defined as:
“[A]ll interactions between two users where the initiator is not the same as
receiver (e.g. a like on a friend reshare, or a comment reply to a user's
comment on a public post).”¹
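For illustration only, the definition above can be expressed as a simple tally over an interaction log; the field names and per-interaction weights below are assumptions made for this sketch, not values taken from the cited documents:

```python
# Minimal sketch of the quoted MSI definition: count only interactions where the
# initiator is not the receiver, weighted by interaction type.
# Weights and field names are illustrative assumptions, not Facebook's values.

ILLUSTRATIVE_WEIGHTS = {"like": 1.0, "comment": 4.0, "comment_reply": 4.0, "reshare": 5.0}

def msi_score(interactions):
    """interactions: iterable of dicts with 'initiator', 'receiver', and 'type' keys."""
    total = 0.0
    for event in interactions:
        if event["initiator"] == event["receiver"]:
            continue  # self-interactions are excluded by the quoted definition
        total += ILLUSTRATIVE_WEIGHTS.get(event["type"], 0.0)
    return total

# Example: a like on a friend's reshare, a comment reply, and an excluded self-like.
events = [
    {"initiator": "alice", "receiver": "bob", "type": "like"},
    {"initiator": "carol", "receiver": "alice", "type": "comment_reply"},
    {"initiator": "bob", "receiver": "bob", "type": "like"},  # excluded
]
print(msi_score(events))  # 5.0
```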
4. Facebook's public priority shifted to “MSI” because it was a way to increase
“content” on the platform (e.g., a reshare of a friend’s post is considered “content”)
when content was otherwise in decline in 2018.
5. “Downstream MSI” is the process by which:
A user posts content, then it gets shown to a viewer using an algorithm
(d_share_msi_score), who then reshares the content, which then creates
“downstream MSI" through likes/reactions, comments, comment
likes/reactions, and comment replies to and from the viewer's friends, who
then continue to reshare the content, and so on.²
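The mechanism described in paragraph 5 can be sketched, purely for illustration, as crediting interactions on a chain of reshares back to the original post; the simple sum below stands in for whatever d_share_msi_score actually computes, which the documents do not specify:

```python
# Hedged sketch of "downstream MSI": interactions occurring on reshares of a post
# (and on reshares of those reshares) are credited back to the original post.
# Treating this as a plain sum is an assumption made for illustration.

def downstream_msi(post_id, reshares_of, direct_msi):
    """reshares_of: dict post_id -> list of reshare post_ids;
    direct_msi: dict post_id -> MSI generated directly on that post."""
    total = 0.0
    stack = list(reshares_of.get(post_id, []))
    while stack:
        node = stack.pop()
        total += direct_msi.get(node, 0.0)       # MSI earned on the reshare
        stack.extend(reshares_of.get(node, []))  # ...and on reshares of the reshare
    return total

# Example chain: p1 is reshared as p2, which is reshared as p3.
reshares_of = {"p1": ["p2"], "p2": ["p3"]}
direct_msi = {"p1": 10.0, "p2": 6.0, "p3": 3.0}
print(downstream_msi("p1", reshares_of, direct_msi))  # 9.0 credited back to p1
```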
6. In 2018, Mark Zuckerberg announced a shift from prioritizing “time spent” on
Facebook to focusing on “meaningful social interactions,” emphasizing a
focus on showing friend/family content in news feeds:
“[T]he time we all spend on Facebook is time well spent... . we've always put
friends and family at the core of the experience. Research shows that
strengthening our relationships improves our well-being and happiness .
Since there's more public content than posts from your friends and family,
the balance of what's in News Feed has shifted away from the most
important thing Facebook can do -- help us connect with each other. . .
The research shows that when we use social media to connect with people
we care about, it can be good for our well-being. We can feel more
connected and less lonely, and that correlates with long term measures of
happiness and health. . . . I'm changing the goal I give our product teams from
focusing on helping you find relevant content to helping you have more
meaningful social interactions. . . The first changes you'll see will be in
News Feed, where you can expect to see more from your friends, family and
groups. . .. you'll see less public content like posts from businesses, brands,
and media. . .. the time you do spend on Facebook will be more valuable.
And if we do the right thing, I believe that will be good for our community and
1 Deriving MSI Weight, p. 5. Emphasis is added throughout this disclosure in bold/underlined text.
2 Replacing Downstream MSI for Civic and Health, p. 7.
our business over the long term too. . . . By focusing on bringing people closer
together . . . Facebook is time well spent.”³
7. Facebook, and in particular Mark Zuckerberg, have continued to make these
types of statements through the present time. For example, in the March 2021
hearing “Disinformation Nation: Social Media’s Role in Promoting Extremism and
Misinformation,”⁴ Congressman Kinzinger asked:
"So Mr. Zuckerberg, let me ask you: According to Hany Farid at Berkeley,
numerous external studies and some of your own internal studies have
revealed that your algorithms are actively promoting divisive, hateful,
and conspiratorial content because it engages users to spend more
time. Do you think those studies are wrong? And if not, what are you guys
doing to reverse course on that?”
8. Mark Zuckerberg responded:
“For the rest of the content in News Feed and on Instagram, the main thing
that I would say is I do think that there is quite a bit of misperception about
how our algorithms work and what we optimize for. | have heard a lot of
people say that we are optimizing for keeping people on the service. The way
that we view this is that we are trying to help people have meaningful
social interactions. People come to social networks to be able to connect
with people. If we deliver that value, then it will be natural that people use our
services more. But that is very different from setting up algorithms in order to
just kind of try to tweak and optimize and get people to spend every last
minute on our service, which is not how we designed the company or the
services.”
9. In Facebook's Q4 2020 results conference call, Mark Zuckerberg stated:
“So now that we've helped billions of people stay connected with friends
and family, helping everyone find and participate in communities that are
meaningful to them has been our next goal. We even updated our mission
a few years ago to reflect this, making it: ‘give people the power to build
community and bring the world closer together.’”⁵
3 https://about.fb.com/news/2018/01/news-feed-fyi-bringing-people-closer-together/.
4 https://docs.house.gov/meetings/IF/IF16/20210325/111407/HHRG-117-IF16-Transcript-20210325.pdf.
5 https://s21.q4cdn.com/399680738/files/doc_financials/2020/q4/FB-Q4-2020-Conference-Call-Transcript.pdf.
10. In addition, in its Notice of Annual Meeting and Proxy Statement in 2021,
shareholders made a proposal to address widespread platform misuse. In opposing
this proposal, Facebook represented:
“[W]e have taken a number of steps to reduce problematic News Feed content . . . In 2018, we
made a fundamental change to the way content is surfaced in people's News
Feed to prioritize posts from friends and family . . . to try and minimize the
amount of divisive content that people see. We have reduced clickbait
headlines, reduced links to misleading and spam posts, and improved how
comments are ranked to show people those that are more relevant and of
higher quality. .. . We also regularly partner with external researchers in
efforts to better understand the impact of platforms like ours on social issues
. . . Given our efforts and transparency around our actions to counter
platform misuse, . . . [we are] against this proposal.”⁶
11. Similarly, Facebook has made misstatements in its public pages. For example, in its
public page on “Bringing People Closer Together,” Facebook outlines:
“Today we use signals like how many people react to, comment on or share
posts to determine how high they appear in News Feed.
With this update, we will also prioritize posts that spark conversations and
meaningful interactions between people. To do this, we will predict which
posts you might want to interact with your friends about, and show these
posts higher in feed. These are posts that inspire back-and-forth discussion
in the comments and posts that you might want to share and react
to—whether that’s a post from a friend seeking advice, a friend asking for
recommendations for a trip, or a news article or video prompting lots of
discussion . . . posts in News Feed.”⁷
SUMMARY OF ORIGINAL EVIDENCE
12. Facebook's records confirm that Facebook's statements were false.
13. Internal documents highlight how prioritizing "MSI" such as “reshares” actually
furthers misinformation and other divisive, low-quality content:
6 https://www.sec.gov/Archives/edgar/data/1326801/000132680121000022/facebook2021definitiveprox.htm.
7 https://www.facebook.com/business/news/news-feed-fyi-bringing-people-closer-together.
“Our ranking systems have specific separate predictions for not just what you
would engage with, but what we think you may pass along so that others may
engage with. Unfortunately, research has shown how outrage and
misinformation are more likely to be viral, and recent experiments that
deprecate these models indicate that removing these models does . . . .”⁸
“Feedback and UX research with news publishers and political actors also
suggests that share downstream MSI is leading them to post more divisive
content.”⁹
“The result was a bit concerning: net sentiment was inversely correlated with
FB-generated traffic (outbound clicks). In other words: the more negative
comments a piece of content instigates, the higher the likelihood for the
link to get more traffic . . . [one] might reach the conclusion that darker, more
divisive content is better for business.”¹⁰
“Taking all US outbound clicks and comment sentiment scores on posts
linking to *the same* popular domain in the US (3wk dataset), I find the
following:
There's a (visible) general correlation between negative comment sentiment
and number of outbound clicks (imperfect
proxy for VPVs). From a publisher's point of view, this data would seem to
encourage posting more content that leads to negatively charged comment
threads.
Chart: To each url, assign a net sentiment score = 95th percentile pos_hi
sentiment - 95th percentile neg_hi sentiment. Binning by
net_sentiment_score, plot the average number of clicks (blue) as well as the
95th percentile num_clicks (orange).”¹¹
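The chart computation quoted above can be approximated, for illustration only, with the following sketch; the column names and the binning choice are assumptions, since the underlying dataset is not part of this disclosure:

```python
# Rough reconstruction of the quoted analysis under assumed column names:
# per URL, net_sentiment = 95th percentile of positive comment sentiment minus
# the 95th percentile of negative comment sentiment; then bin URLs by that score
# and summarize outbound clicks per bin.
import pandas as pd

def net_sentiment_vs_clicks(comments: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    """comments columns assumed: url, pos_hi, neg_hi (per-comment sentiment scores),
    num_clicks (outbound clicks for the URL, repeated on each of its rows)."""
    per_url = comments.groupby("url").agg(
        pos_p95=("pos_hi", lambda s: s.quantile(0.95)),
        neg_p95=("neg_hi", lambda s: s.quantile(0.95)),
        clicks=("num_clicks", "max"),
    )
    # Net sentiment per URL, as described in the quoted chart note.
    per_url["net_sentiment_score"] = per_url["pos_p95"] - per_url["neg_p95"]
    per_url["bin"] = pd.cut(per_url["net_sentiment_score"], bins=n_bins)
    # Average clicks (blue in the original chart) and 95th-percentile clicks (orange) per bin.
    return per_url.groupby("bin", observed=True)["clicks"].agg(
        avg_clicks="mean",
        p95_clicks=lambda s: s.quantile(0.95),
    )
```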
“Political parties . . . claim that Facebook's algorithm change in 2018 (MSI)
has changed the nature of politics, for the worse. They argue that the
emphasis on ‘reshareability’ systematically rewards provocative,
low-quality content.”¹²
8 We are Responsible for Viral Content, p. 5.
9 Replacing Downstream MSI for Civic and Health, p. 3.
10 Does Facebook Reward Outrage? Posts that generate negative reactions get more clicks, p. 2.
11 Case Study: (Controlling for Publisher) Posts with Negatively Charged Comment Threads Fare Better in Feed.
12 Political Party response to the '18 Algorithm change, p. 1.
“The problem is that we do not and possibly never will have a model that
captures even a majority of integrity harms, particularly in sensitive areas... .
Hate Speech¹³ is one of the ‘big three’ community integrity problems on
Facebook (along with Nudity & Pornography and Graphic Violence). The hate
speech team has a classifier for both predicted violating and also borderline
hate speech . . . Misinformation is another core integrity problem... Even in
the best of circumstances (e.g., in the US), the fact checkers have fairly slow
response time and don't check that many pieces of content. That means that
content is often not caught until after it has gotten a lot of distribution, and
many things are never caught. In most other countries, we do not have any
fact-checking partners at all... we know that divisive content (particularly
divisive political content) is one of the biggest problems facing the platform. .
. So far, the existing technology does not appear to meet the bar for
monitoring, not to mention demotion . . . disproportionately.”¹⁴
14. Specifically, evidence outlines how harmful content is more viral (e.g., content
eliciting anger produces more “reshares” and other indicators of “MSI”):
“Our aim to foster more meaningful interactions (MSI) with close friends is
deeply laudable. But . . . important slices of public content, such as [civic]
and news. As we will see, there is strong evidence that this is [due] to our
downstream models.”¹⁵
“comment thread negativity correlates well with expected value for
number of outbound clicks.”¹⁶
15. This has resulted in a notable increase in “negative” political posts:
“Research [conducted] in the EU reveals that political parties ‘feel strongly
[that the change to the algorithm has forced them to skew negative in]
their communications on Facebook, with the downstream effect of
leading them into more extreme policy positions.’ For example, in Poland,
‘one party's social media management team estimate that they have shifted
13 See also disclosure re: Hate Speech.
14 Demoting on Integrity Signals is Not Enough, p. 1-3.
15 MSI Metric FAST Review 2019-11-14, p. 19, 21, 26.
16 Does Facebook Reward Outrage? Posts that generate negative reactions get more clicks, p. 3.
— ANONYMOUS WHISTLEBLOWER DISCLOSURE —WHISTLEBLOWER
== REDACTED FOR conGRESS -- OAD
Report government and corporate lawbreaking.
‘Without breaking the law.
the proportion of their posts from 50/50 positive/negative to 80% negative
and 20% positive, explicitly as a function of the change to the algorithm
.. Many parties, including those that have shifted strongly to the negative,
worry about the long-term effects on democracy.’ We have heard similar
feedback from parties in India and Taiwan. News publishers, too, are
concerned about the incentives MSI created.”¹⁷
“Political parties across Europe claim that Facebook's algorithm change in
2018 (MSI) has changed the nature of politics. For the worse . . . they feel
that they have been forced to adapt to the change by producing far more
negative content than before. . . Many parties. . . worry about the long-term
effects on democracy. . . they are trapped in an inescapable cycle of
negative campaigning by the incentive structures of the platform...
evidence around how anger reactions, overall, is weaponized by political
figures and creating negative incentives on the platform.”¹⁸
16. In particular, “downstream MSI” prioritizes “interactions” over quality:
“The principal way MSI works on such public content, however, is via
downstream models, particularly d_share_msi_score. Because MSI is
designed to [boo]st friend interactions, it doesn't value whether you'll like
Wall Street Journal, etc. [I]nstead, the way such content creators
contribute to MSI is by posting content that you might reshare for your
friends to engage on or reshare themselves. This is precisely what we predict
and uprank via d_share_msi_score.”¹⁹
17. Facebook knows that “downstream MSI" or “deep reshared” content that is
reshared multiple times is more likely to contain harmful content:
“Our observational results confirm that for Groups posts deeper reshares are
associated with higher prevalence of FUSS Red or Yellow content to about
depth 10 [define] . . . Overall Red and Yellow content can be quite high--it
can add up to about 20% of total VPVs [“View Port Views,” the company term
of art for viewer impressions] . . . The multi group picker looks great for . . .
17 Replacing Downstream MSI for Civic and Health, p. 8.
18 Political Party response to the '18 Algorithm change, p. 4, 24, 26.
19 Replacing Downstream MSI for Civic and Health, p. 9.
reshare depths up to depth 10.”²⁰
“[R]eshare depth [the number of shares in the chain from a given piece of
shared content] is correlated with misinformation . . . other integrity harms
also correlate with reshare depth.”²¹
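For illustration, "reshare depth" and the max-reshare-depth demotion tested in the cited experiments can be sketched as follows; the depth cutoff and demotion factor are placeholders, not parameters stated in the documents:

```python
# Illustrative sketch: compute how many reshare hops separate a post from the
# original, and demote ranking scores past an assumed cutoff. The depth-10 cutoff
# and 0.5 multiplier are placeholder assumptions, not Facebook's parameters.

def reshare_depth(post_id, parent_of):
    """parent_of: dict mapping a reshare's post_id to the post it reshared."""
    depth = 0
    while post_id in parent_of:
        post_id = parent_of[post_id]
        depth += 1
    return depth

def demote_if_deep(score, depth, max_depth=10, demotion=0.5):
    """Reduce the ranking score of posts whose reshare chain exceeds max_depth."""
    return score * demotion if depth > max_depth else score

# Example: p3 was reshared from p2, which was reshared from the original p1.
parent_of = {"p3": "p2", "p2": "p1"}
print(reshare_depth("p3", parent_of))  # 2
print(demote_if_deep(1.0, 12))         # 0.5 (beyond the assumed cutoff)
```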
18. Further, internal teams have identified these issues with MSI and proposed
recommendations to address the harms (without losing other value):
“We propose to re-weight the existing predictive models that comprise
the scoring function for Civic posts in feed to better optimize for both
integrity outcomes and individual civic value . .. Currently, Newsfeed
ranks all posts by primarily optimizing for Sessions and MSI. For Civic posts
in particular, however, we believe Newsfeed should rank for different
objectives.”²²
“Why do we think we need to change the ranking objective for Civic posts? 1.
We have evidence that people think that political content on Facebook is low
quality, untrustworthy, and divisive. So our current ranking objectives are not
creating a wholly valuable civic experience for users. .. 2. User's perceptions
of valuable civic content does not always line up with civic content that
scores highest for them. 3. Our current ranking objectives do not optimize
for integrity outcomes, which can have dangerous consequences. . . .
[Downstream MSI] was contributing hugely to Civic misinfo; its removal for
Civic posts is going to result in a 30 - 50% decrease in Civic
misinformation.”²³
“These experiments strongly suggested that we could reduce distribution
of link misinfo by 40-50% and photo misinfo by 20-30% in these topics,
compared to a 10-15% reduction in civic distribution and a 15-20% reduction
in health distribution overall.”²⁴
“A ranking change which reduces ranking based on max reshare depth [has
sig]nificant [wins o]n a variety of integrity measures. . . .
Observed reductions in integrity harms including misinformation, N&P [nudity
20 Groups Reshare Depth, p. 2, 17.
21 Max Reshare Depth experiment, p. 2-3.
22 Product brief - ranking for civic health, p. 1.
23 Product brief - ranking for civic health, p. 2.
24 Replacing Downstream MSI for Civic and Health, p. 11.
and pornography], violence, disturbing, and bullying varied from 2-15% with
no impact to DAP, time spent, or sessions. However, achieving such a win
in practice might require a change in the way [we formulate and goal on]
MSI, as the current formulation of MSI is explicitly reduced by a reduction in
sharing behaviors despite other core engagement measures being unmoved.”²⁵
An experiment setting a maximum reshare depth found “Reshares were
reduced significantly . . . greatly reducing MSI. Users instead redirected
their attention to other sources . . . This suggests that the specific shares
reduced may not have been as important to users’ experiences as MSI would
indicate. If so, it would further suggest MSI is not capturing user value
precisely . . . opening the door for further optimizations which could have net
increases to both user value and integrity concerns . . . This sort of ranking
change may be a net win in terms of moving the integrity-engagement
frontier outward. However, achieving such a win in practice might require a
change in the way we formulate and goal on MSI.”²⁶
“An effective, content-agnostic approach to mitigate the harms posed
by high-harm misinfo (e.g. civic or health) would be to dampen virality
within these topics by hard demoting all deep reshares where the viewer is
not a friend or follower of the original poster . . . it's easily scalable and could
catch loads of misinfo that might never be caught by classifiers or human
reviewers . . . there's minimal risk of unfairness . . . In the US [t]his could
reduce civic link misinfo VPVs [n.b. viewer impressions] by 25% and
civic photo misinfo VPVs by 50%.”²⁷
“[W]e . . . realized MSI currently has lacked an important dimension
around social context and content quality . . . we ran a big
interaction-level meaningfulness survey to understand better how meaningful
people feel interactions are that tend to be associated with lower quality and
some integrity problems (eg some types of reactions, reshares).”²⁸
“There's a growing set of research showing that some viral channels are used
for bad... we've also identified opportunities where reducing virality may
25 Max Reshare Depth experiment, p. 1.
26 Max Reshare Depth experiment, p. 4-5.
27 Fighting high harm misinfo with deep reshare damping.
28 MSI Metric Changes for 2020H1, p. 1.
. . . cases), across the Family of Apps . . . .”²⁹
“We have further [MSI] rules under consideration (these could be added to
the metrics)”: “Engagement bait comments,” “Bullying comments,” “Other
integrity rules,” and “Various user-level capping schemes.”³⁰
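The "deep reshare damping" proposal excerpted above (hard demoting deep reshares when the viewer has no direct connection to the original poster) can be sketched, for illustration, as follows; the depth threshold and damping factor are assumptions, since the documents give the projected impact but not the parameters:

```python
# Hedged sketch of content-agnostic deep reshare damping: demote a post for a
# given viewer if it arrives through a long reshare chain and the viewer neither
# is friends with nor follows the original poster. Thresholds are assumptions.

def damp_deep_reshare(score, depth, viewer, original_poster,
                      friends_of, following_of,
                      depth_threshold=2, damping=0.1):
    """friends_of / following_of: dict user -> set of users they are friends with / follow."""
    connected = (original_poster in friends_of.get(viewer, set())
                 or original_poster in following_of.get(viewer, set()))
    if depth >= depth_threshold and not connected:
        return score * damping  # no classifier involved: the rule is content-agnostic
    return score

# Example: a post reshared 4 hops away from a stranger is demoted for this viewer.
print(damp_deep_reshare(1.0, 4, "viewer", "stranger",
                        friends_of={"viewer": {"alice"}},
                        following_of={"viewer": {"bob"}}))  # 0.1
```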
19. For instance, integrity and other teams developed tracking metrics for these types
of issues (related to prioritizing “MSI”):
“spam comments, single character comments, deleted comments,
engagement bait comments, and bullying comments . . Engagement bait
[encompasses] Comments that goad users into interacting with likes, shares,
comments, and other actions . . . to take advantage of our News Feed
algorithm by boosting engagement in order to get greater reach.”³¹
20. In fact, in India, Facebook adopted a “hybrid-MSI” approach to address the
above-referenced issues (but did not use similar measures elsewhere):
“[W]e found that an MSI heavy optimization strategy was hurting Android
DAP in India and we could recover the DAP losses by reducing the
emphasis on MSI and increasing emphasis on video in the form of In Feed
Recommendations (IFR) . .. we had identified 11 countries where we were
following a more balanced strategy of MSI mixed with appropriate amounts
of video [i.e., non-MSI public content] . . . So it is right to think of the . . . .”³²
21. For example, in a proposal to cap the number of comments for each user (to
address issues with high-volume commenters and fake engagement):
“Approximately 3M users per day would hit the proposed cap of 100
comments per day... Over the course of one week 10.5M commenters
would hit the cap at least once. . . over the course of one month 28M
commenters would hit the cap at least once.”³³
22. Teams also proposed:
29 Virality Reduction as Integrity Strategy.pdf, p. 1.
30 Metric Changes in the next couple of weeks to make MSI capture more useful social interactions, p. 2.
31 Comment Quality: Integrity Deltoid Metrics, p. 1, 7.
32 MSI Revisited Part 4, p. 3-4.
33 Commentor Capping, p. 3.
“[I]f the same author, on same post created 10 short (<= 5 characters)
comments or more, filter them all out from the metric . . .” Examples include
short repeated comments such as: “the ones who shouldn't die died because
of you.” “Fucking garbage!!” “fuck your mother! Why don't you just die”
“Monkey” “Die loyal fans” “Crazy woman” “Thief” etc.³⁴
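The proposed filter quoted in paragraph 22 translates directly into a rule of the following shape; this is an illustrative sketch only, and the field names are assumptions:

```python
# Sketch of the quoted proposal: if the same author leaves 10 or more very short
# (<= 5 character) comments on the same post, drop all of them from the MSI metric.
from collections import defaultdict

def filter_short_repeated_comments(comments, min_repeats=10, max_len=5):
    """comments: list of dicts with 'post_id', 'author_id', and 'text' keys."""
    short_counts = defaultdict(int)
    for c in comments:
        if len(c["text"]) <= max_len:
            short_counts[(c["post_id"], c["author_id"])] += 1
    flagged = {key for key, n in short_counts.items() if n >= min_repeats}
    # Keep a comment unless it is short AND its (post, author) pair was flagged.
    return [c for c in comments
            if not (len(c["text"]) <= max_len
                    and (c["post_id"], c["author_id"]) in flagged)]
```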
23. Likewise, while tests to give “anger” reactions lower weight finally occurred in late
2020 (a year after Facebook knew that doing so would decrease violating, low-quality content), these
measures were only temporary in scope (despite being effective):
[2019] “We find that angrys, hahas, wows seem more frequent on civic
low quality news, civic misinfo, civic toxicity, health misinfo, and health
antivax content... while loves, sorrys and likes are typically less frequent
on these integrity harms . . . Comments with good motifs [e.g., “love” icon]
are 15x less likely to be violating . . . 1.5x less likely to be hateful... . 18%
more likely to be high quality.”³⁵
“[W]e find that civic content classified as toxic has 2X more hahas and 33%
more angers than it has heart reactions . . . In particular, we consistently find
that shares, angrys, and hahas are much more frequent on civic low quality
news, civic misinfo, civic toxicity, health misinfo, and health antivax content . .
. Comments are often . . . also more frequent on these Integrity harms.”³⁶
“[W]e want to see if we should change the MSI weight for different reaction
types . . . [for example] we see that the anger and haha reactions are highly
prevalent on misinfo and toxicity. They are also connected to subjective bad
experiences (Haha is an especially high predictor of civic content viewers rate
to be not important, trustworthy, or good for their communities). [After further
review of integrity evidence below, we decided the evidence for Anger is
stronger than for Haha.]”³⁷
[2020] “MSI weights for each reaction type have been re-evaluated . . . Anger
[reactio]ns [are] significantly [more likely to appear on content that violates our]
standards while also showing mixed results in MSI surveys. Note that this
launch is temporary until core models get updated to exclude anger. At that
time, in a few weeks from now, this launch will be reversed.”³⁸
34 Filtering out low value interactions from MSI, p. 7, 47.
35 FAST Review 2019-11-14, p. 19, 21, 26.
36 Quick Look at MSI Components and Integrity, p. 2-5.
37 Reaction weight Revisions 2020H2, p. 1, 3.
38 Using p(anger) to reduce the impact angry reactions have on ranking levers, p. 1.
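Directionally, the re-weighting described in paragraph 23 amounts to assigning the "anger" reaction a smaller (here, zero) weight in the MSI formula; the numbers below are placeholders, since the documents state the direction of the change but not the weights themselves:

```python
# Illustrative re-weighting of MSI by reaction type: "anger" no longer adds to the
# score, reflecting only the direction of the quoted change. All weights are
# placeholder assumptions, not values from the cited documents.

REACTION_WEIGHTS = {"like": 1.0, "love": 1.0, "sorry": 1.0,
                    "wow": 1.0, "haha": 1.0, "anger": 0.0}

def reaction_msi(reaction_counts):
    """reaction_counts: dict mapping reaction type -> count for a post."""
    return sum(REACTION_WEIGHTS.get(r, 0.0) * n for r, n in reaction_counts.items())

print(reaction_msi({"like": 3, "anger": 5}))  # 3.0: angry reactions add nothing
```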
24. However, Mark Zuckerberg refused to adopt recommendations to combat
harmful content (even hate speech and content invoking violence off of the
platform) if it impacted his “metric” of “meaningful social interactions” or “MSI”. For
example, after specialists met in April 2020 to discuss suggestions for “soft actions”
to reduce the prevalence of bad content in “News Feed,” it was summarized:
“Downstream model deprecation: Mark doesn't think M[SI] . . . We wouldn't
launch if there was a material tradeoff with MSI impact.”³⁹
25. Furthermore, as outlined by former employees and other internal records:
“(1) I think FB is probably having a net negative influence on politics in
Western countries; (2) I don't think that leadership is involved in a
good-faith effort to fix this .. . Facebook could substantially decrease the
amount of harmful political content by being more opinionated on quality. I
have seen a dozen proposals to measure the objective quality of
content on News Feed diluted or killed because either (1) they have a
disproportionate impact across the US political spectrum, typically harming
conservative content more; or (2) they cannot be framed in terms of
subjective quality (“what the users want”). . . Facebook's content policy
decisions are routinely influenced by political considerations.”⁴⁰
“time and again I've seen promising interventions from integrity product
teams, with strong research and data support, be prematurely stifled or
severely constrained by key decision makers--often based on fears of
public and policy stakeholder responses . . . Out of fears over potential public
and policy stakeholder responses, we are knowingly exposing users to
risks of integrity harms. [] For example, we've known for over a year now
that our recommendation systems can very quickly lead users down the path
to conspiracy theories and groups... The end result is .... falling victim to
integrity harms that are facilitated or amplified by unforeseen interactions
between features and surfaces . .. To discourage harmful content
distribution .. . we should . . . Identify ways to remove or reduce
engagement boosts for high confidence predicted low-integrity content . . .
e.g., continue to [reduce downst]ream M[SI]
boosts for sensitive content . . . .”⁴¹
39 Soft Action Proposal + Deck presented to Mark, p. 1-2.
40 Last Day at Facebook - Badge Post, p. 1-2.
41 Badge Post - DS Misinfo, p. 2-4, 16.