By Evelyn Douek
Members of the Myanmar military have systematically used
Facebook as a tool in the government’s campaign of ethnic cleansing against
Myanmar’s Rohingya Muslim minority, according to an incredible
piece of reporting by the New
York Times on Oct. 15. The Times writes that the military harnessed Facebook
over a period of years to disseminate hate propaganda, false news and
inflammatory posts. The story adds to what is known about the horrors of the ongoing violence in Myanmar, but it should also complicate the debate over Facebook’s role in, and responsibility for, spreading hate and exacerbating conflict in Myanmar and other developing countries. https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
Context: The Atrocities in Myanmar
The Times report comes in the context of growing calls
for accountability for the campaign of violence inflicted on the Rohingya. On
Sept. 12, the U.N.-commissioned independent Fact-Finding Mission (FFM) released
its final report, which called for
members of the Myanmar military to be investigated and prosecuted for genocide,
crimes against humanity and war crimes. The U.S. State Department also released
a report documenting evidence that the military’s operations were “well-planned
and coordinated.” As these reports show, the situation in Myanmar has become one of the world’s most pressing human rights crises. The FFM concludes that the exact number of casualties from the “widespread, systematic and brutal” killings may never be known but exceeds 10,000. The FFM report, which runs to more than 400 pages, contains devastating accounts of wide-ranging crimes against humanity, including torture, rape, persecution and enslavement. Hundreds of
thousands of people remain displaced. https://www.ohchr.org/EN/HRBodies/HRC/Pages/NewsDetail.aspx?NewsID=23575&LangID=E
Alongside this developing body of evidence and consensus
about the crimes that have been committed in Myanmar, debate has raged over Facebook’s role in these events. https://www.lawfareblog.com/why-were-members-congress-asking-mark-zuckerberg-about-myanmar-primer
In recent months, Facebook has taken steps to accept its
role and responsibility. In a surprising concession before the Senate Intelligence Committee in September 2018, Chief Operating Officer Sheryl Sandberg even accepted that Facebook may have a legal obligation to take down
Sandberg even accepted that Facebook may have a legal obligation to take down
accounts that incentivize violence in countries like Myanmar. Sandberg called
the situation “devastating” and acknowledged that the company needed to do
more, but highlighted that Facebook had put increased resources behind being
able to review content in Burmese. Shortly before the hearing, Facebook
announced that it had taken the unusual step of removing a number of pages and
accounts linked to the Myanmar military for “coordinated inauthentic behavior”
and in order to prevent them from “further inflam[ing] ethnic and religious
tensions.”
The FFM’s report did acknowledge that Facebook’s
responsiveness had “improved in recent months” but found that overall the
company’s response had been “slow and ineffective.” It called for an
independent examination of the extent to which Facebook posts and messages had
increased discrimination and violence.
The New York Times report
Paul Mozur’s recent reporting in the Times recounts how
as many as 700 people worked shifts in a secretive operation started by
Myanmar’s military several years ago. The military personnel developed large
followings for fake pages and accounts with no visible connection to the
military, which they then flooded with hate propaganda and disinformation. The posts often aimed to stoke ethnic tensions and generate feelings of vulnerability, encouraging people to turn to the military for protection. Although
this particular campaign is half a decade old, it continues a long practice of
Myanmar’s military engaging in psychological warfare, employing techniques
learned by officers sent to Russia for training.
Following the publication of the Times story, Facebook
announced it was removing more “seemingly independent entertainment, beauty and
informational Pages” that were being used to push military propaganda.
Altogether, the pages had about 1.35 million followers.
Hate speech and social media in the context
of mass atrocities
The extent to which hate speech and propaganda can be said
to factually and legally cause mass atrocities is a complicated issue. Jonathan
Leader Maynard and Susan Benesch have observed that it is “one of the most
underdeveloped components of genocide and atrocity prevention, in both theory
and practice”—and that’s before social media enters the picture. As Zeynep
Tufekci tweeted years ago, Myanmar may well be the first social media-fueled ethnic cleansing. International law has not yet begun to grapple with how to account for the role of social media when untangling and assigning responsibility for international crimes.
There is a long road ahead if international law is to do so now. Challenges include gathering evidence when the government refuses fact-finders access, questions about the International Criminal Court’s jurisdiction that may result in only partial accountability, and the inherent conceptual difficulty of drawing lines when locating the nexus between speech
and violence. The case law on speech in the context of genocide has developed
in a piecemeal fashion, resulting in inconsistencies and incoherence. The road
to accountability in Myanmar may offer an opportunity to develop and clarify
these rules, as well as wrestle with how social media fits in. The FFM report
provides a starting place, concluding that there is “no doubt that the
prevalence of hate speech in Myanmar significantly contributed to increased
tension and a climate in which individuals and groups may become more receptive
to incitement and calls for violence” and “[t]he role of social media is
significant.”
Complicating the narrative
The FFM has called for more work to be done to understand
the effects of Facebook on the spread of violence in Myanmar. Reporting shows
that Facebook was an “absentee landlord” in Myanmar. It ignored warnings about
the abuse of its platform in the country for years, engaged poorly with civil society actors and generally—as the company itself has admitted—was far too slow to act in the context of horrific crimes. For this reason, observers have compared Facebook’s role to that of a match, or tinder, in the uniquely explosive environment of Myanmar.
But this analysis may need updating in light of the new
evidence that the spread of anti-Rohingya misinformation across Facebook was
not merely organic, but the result of systematic and covert exploitation by the
military. As noted by Daphne Keller, the Director of Intermediary Liability at Stanford’s Center for Internet and Society and former Associate General Counsel at Google, problems stemming from innate structural flaws of social media require a “different analysis and response” than a calculated exploit by bad actors does.
In these early days of trying to untangle the role of Facebook in the horrors inflicted on the Rohingya minority, it is worth carefully examining the issues raised by the FFM report and the Times reporting:
Facebook’s relationship with state officials:
Facebook’s community standards, which set out when it will remove content such
as hate speech, include an exception for content that it considers “newsworthy,
significant or important to the public interest.” This allows for subjective,
political judgments. In seeking to avoid becoming the “arbiter of truth,” Facebook has been reluctant to censor the speech of government officials around the world, even when it breaches the company’s policies on hate speech. There is an added tension in Facebook censoring the speech of political figures to their own populations: How should Facebook evaluate when the harm caused by such speech outweighs its public value? A yearslong campaign of coordinated, covert posts in the context of mass violence seems a clear case, but the use of
Facebook by the Myanmar government also includes many overt posts by members of
state parliament. Indeed, as the FFM report states, “In a context of low digital
and social media literacy, the Government’s use of Facebook for official
announcements and sharing of information further contributes to users’
perception of Facebook as a reliable source of information.”
The benefits of Facebook and its facilitation
of freedom of expression: The FFM report highlights that increased access to information and means of communication has been one of the most tangible benefits of the democratization process in Myanmar, and that
Facebook itself “can and has been used in many ways to enhance democracy and
the enjoyment of human rights.” This might be particularly important given that
Myanmar authorities “do not tolerate scrutiny or criticism”—an emblematic
example being the jailing of two Reuters reporters who documented one massacre
of ten Rohingya. It is not obvious that Myanmar would be better off without
Facebook, which provides an important means of communication for journalists
and local businesses. This does not excuse the harm the platform may have caused, but it is an important part of the bigger picture of Facebook’s role in the country.
Constructing the counterfactual:
Mozur wrote on Twitter that researchers estimate two-thirds of the hate speech
found on Facebook in Myanmar began with the military. Yet it is likely
impossible to know how much of the remaining third would have been created had
military propaganda not established an enabling and encouraging context. For this reason, it will be difficult to fully understand Facebook’s role in the spread of hate speech and violence.
Facebook’s role in documenting mass
atrocities: One of the most difficult aspects of
prosecuting the crime of genocide is the stringent requirement of proving
specific genocidal intent. The FFM report concludes that there is sufficient
information suggesting such intent in the Myanmar case—relying heavily on the
record created by social media. The report points to many posts and statements
made on Facebook, noting one statement by the military’s commander-in-chief
that the “clearance operations” were part of the “unfinished job” of solving
the “Bengali problem” (referring to the Rohingya). Facebook has been criticized
in the past for removing posts that could be evidence of war crimes, but it has
confirmed it is “preserving data” on the Burmese accounts and Pages it has
removed in the latest rounds of takedowns. The FFM has called on Facebook to
make this data available to judicial authorities to enable accountability and
stated its regret that Facebook has been unable to provide information about
the spread of hate speech on its platform. Facebook should take its
responsibility for transparency seriously, given that its data could provide
powerful insights into the connection between hate speech and mass atrocity in
both the Myanmar case and more generally. Civil society groups have long
expressed frustration with Facebook’s lack of openness and collaboration in
addressing this problem.
Facebook’s strategies for removing hate
speech going forward: Despite Facebook’s assurances that it is devoting more resources to content moderation in Myanmar, it only removed the
accounts associated with the military’s campaign after being notified about
them by the Times. This follows a pattern of Facebook only removing troubling
content after that content is highlighted publicly by third parties.
Furthermore, while Facebook has said it will have 100 Burmese-speaking content moderators by the end of the year, the Times article suggests that the military
had as many as 700 people working on these propaganda campaigns—raising a
serious question about whether Facebook is devoting sufficient resources to
fixing the problem. Mark Zuckerberg, Facebook’s CEO, likes to refer to the
exploitation of his platform as an “arms race.” But this new reporting suggests
Facebook is being outgunned.
Facebook’s reliance on artificial
intelligence: Though Facebook has consistently pointed to
its investment in artificial intelligence (A.I.) to proactively flag posts that
break its community standards, the highly context-dependent nature of hate
speech makes it difficult for A.I. to monitor effectively. The FFM report
underscores this reality by describing how many of the slurs and euphemisms
used to vilify the Rohingya are subtle, relying on specific understandings of
history and context and even on local pronunciation. A.I. is not well-suited to
these kinds of judgments. Reuters recently reported that a Burmese post saying
“Kill all the kalars that you see in Myanmar; none of them should be left
alive” was translated into English on the platform as “I shouldn’t have a
rainbow in Myanmar.” At the very least, this suggests Facebook’s tools are still struggling with the local language.
Facebook’s vulnerability to exploitation:
While the Times reporting suggests that the problems in Myanmar may stem not from an innate flaw in the platform but rather from its exploitation, a pattern of similar incidents around the world suggests that Facebook is especially vulnerable to such concerted efforts. Indeed, the Times reports that the military campaign used many of the same techniques as Russian influence operations, such as those deployed during the 2016 U.S. election. Furthermore, although the military has engaged in propaganda efforts for decades, its exploitation of Facebook since the platform’s recent arrival in Myanmar has been especially effective. As a recent Brookings report notes, the situation in
Myanmar “epitomizes the magnifying effect that new technology is having on old
conflicts.”
Facebook’s mitigation obligations:
Facebook’s own research shows the power of its platform. Famously and controversially, the company facilitated research showing that it can manipulate the mood of its users by adjusting which posts appear in their News Feeds.
Other research has shown that Facebook can have a significant real-world effect
on voter turnout by exposing users to certain nudges. Such research obviously raises concerns about the power of platforms to covertly manipulate their users. But in the context of ongoing mass atrocities, it is also worth asking whether, if Facebook has this power, it also has an obligation not only to prevent the spread of hate speech but also to try to mitigate the situation.
In the face of the “crime of crimes,” when Facebook is already embedded in
society, what is its responsibility to protect?
These are enormously difficult issues, and the important effort of answering them will require substantial work and cooperation from Facebook itself as well as from outside researchers.
Even if Facebook cooperates, there is also the question of what liability the company might face for its role in enabling mass atrocities.
As a private company, Facebook is not subject to international criminal
liability. Yet calls have grown louder for the platform to face some sort of
penalty. During Sandberg’s testimony before the Senate in September, Senator
Mark Warner seized on her acknowledgment of a possible legal obligation in Myanmar,
stating that social media companies that had not acted responsibly should be
subject to sanction. But it is unclear what this responsibility or sanction
would look like within the United States—much less the world.