— by Nicholas Surges

Meta (formerly Facebook, Inc.) has increasingly come under scrutiny for the role it plays in shaping the social media landscape. As the largest social media company, it has set many precedents for how hate speech, misinformation, and manipulated or inauthentic content are moderated online.

This raises an interesting question: can social media corporations be trusted to act in the public interest, or will their moderation always serve the interests of their shareholders first?

On October 4th, 2021, former Facebook employee Frances Haugen testified before the United States Senate Committee on Commerce, Science, and Transportation to argue the need for greater regulation of social media platforms. Haugen had been responsible for leaking internal documents to the Wall Street Journal as part of their ongoing Facebook Files investigation into unethical behaviour by the social media giant.

In her testimony, Haugen defended her decision to speak out against her former employer. As she stated: “The company’s leadership keeps vital information from the public, the U.S. government, its shareholders, and governments around the world. The documents I have provided prove that Facebook has repeatedly misled us about what its own research reveals about the safety of children, its role in spreading hateful and polarizing messages, and so much more.” Haugen went on to argue that Meta’s lack of transparency makes it difficult to hold them accountable for unethical behaviour.[1]

According to Meta’s policy rationale, much of Facebook and Instagram’s moderation is automated, using artificial intelligence and machine learning to remove offensive content as it is posted. This is particularly useful for catching duplicate posts of previously flagged material. Human review teams exist to provide further input in cases where Facebook’s algorithms are unable to determine whether or not a post is offensive. These more than 15,000 full-time human reviewers examine cases flagged by machine-based automoderators and make final rulings.

Of course, beyond what the public has been given through Meta’s transparency centre, the details of their moderation system remain vague. While Meta claims that their human reviewers receive over 80 hours of live training, their policy centre offers little to no information about what credentials are needed to become a moderator or how the company addresses policy gaps related to cultural context, language fluency, and artistic expression.

An October 25, 2021 article by Reuters called attention to the fact that Meta’s moderation has not kept pace with its global expansion: as the giant continues to spread into new markets, the languages spoken in those markets pose a stumbling block to the ability of its algorithms and human staff to flag abusive content.

A notable example is the lack of moderation capability in Burmese, the language spoken in Myanmar. As the country is still rocked by an ongoing ethnic conflict, Facebook’s inability to moderate content written in Burmese means that it cannot properly address posts stoking ethnic hatred.[2]

India is another case study in Facebook’s failure to provide the language support necessary to police problematic content. The country is home to over 300 million Facebook users – the largest number in the world – and yet Facebook only provides service in 11 of India’s 22 official languages.[3] Given the country’s religious tensions between Hindus and Muslims, this seems like a glaring oversight.

These shortcomings are at least ostensibly addressed by the existence of an Oversight Board, an independent body to which appeals can be made regarding rulings by moderation teams. The board is made up of experts in human rights, journalism and freedom of expression, and other relevant policy areas. While the members of the board are appointed by the company, they are not accountable to it in the way that full-time moderation staff are.

The Oversight Board allows users to contest rulings by Facebook’s full-time human moderators, who sometimes apply community standards without considering the context of a post. In case 2021-012-FB-UA, a post depicting a wampum belt titled “Kill the Indian/Save the Man” was deemed hate speech. The Oversight Board later overturned this ruling, stating that “in context [the use of the] phrase draws attention to and condemns specific acts of hatred and discrimination.”[4]

Similarly, a post containing a quote misattributed to Joseph Goebbels (“Arguments must be crude, clear, and forcible, and appeal to emotions and instincts, not the intellect”) was removed because Goebbels is on the company’s list of dangerous individuals. The quote actually comes from the foreword to British historian Hugh Redwald Trevor-Roper’s Final Entries, 1945, which was based on Goebbels’ recovered diary entries.[5] While the line is thus not Goebbels’ own, it does serve as an apt encapsulation of the propaganda policies of the Third Reich.

The user posted the quote in order to make a statement about demagoguery, crypto-fascism, and populist appeals to emotion (specifically in reference to Trumpism in the United States), but in its initial decision Facebook ruled that quotes by dangerous individuals cannot be shared unless the user makes it explicitly clear that the intent is to counter hate speech or extremism, or to share the material for educational or news purposes. This requirement is not explicitly stated in Facebook’s public-facing policies.

As the Oversight Board noted when it overturned the decision on the grounds that the quote did not in itself support the Nazi regime or hate speech, there is a gap between what is explicitly permitted or banned in Facebook’s public-facing community standards and the criteria used by the human moderators the company employs.[6] The case is also interesting because the actual provenance of the quote was never in question: the fact that Goebbels never actually said it didn’t figure into the ruling, which has troubling implications for historical revisionism and misinformation.

As is probably already evident from the complexity of some of the rulings mentioned above, this higher system of appeal has proved imperfect. Some of the reasons are purely mechanical: appeals can only be launched by users with an active account, only against posts that have already been reviewed, and must be submitted within 15 days of the initial ruling. This means that users who have already been banned or who have deleted their accounts have no means of submitting an appeal to the board. Launching an appeal also requires knowing that the Oversight Board exists, which many users may not.

After the September 13, 2021 report in the Wall Street Journal called attention to hypocrisy in Facebook’s XCheck program, the board was forced to examine whether or not Facebook was consistently applying its professed standards. This system, which covers high-profile users and organizations deemed “important”, “popular”, or “PR-risky”, includes a whitelist: users and pages added to it are treated more leniently than ordinary users.

In the subsequent announcement, the Oversight Board concluded that “Facebook has not been fully forthcoming with the board on its ‘cross-check’ system, which the company uses to review content decisions relating to high-profile users.”[7] The board also noted that Facebook does not fully comply with its requests for the further information needed to inform its rulings, dismissing some requests for additional context as “irrelevant”, a judgement that should probably be left to the board itself.

Furthermore, the board noted in its report for the third quarter of 2021 that Meta was fully implementing only 9 of the board’s 25 recommendations. Of the remainder, Meta claimed to be implementing 4 “in part”, to be “assessing feasibility” of 5, to already be doing 5, and rejected 2 outright. This illustrates that the board’s recommendations may not always be implemented in full: Meta may take them under advisement but is under no obligation to act upon them.[8]

Taken as a whole, these findings suggest that social media companies cannot, at present, be trusted to act in the public good. Is it time to start pushing for greater transparency and accountability from the social media sector?

*Image used courtesy of Creative Commons Attribution-Share Alike 4.0 International

[1] Statement of Frances Haugen, Before the Sub-Committee on Consumer Protection, Product Safety, and Data Security, October 4, 2021. https://www.commerce.senate.gov/services/files/FC8A558E-824E-4914-BEDB-3A7B1190BD49

[2] Elizabeth Culliford and Brad Heath, “Facebook knew about, failed to police, abusive content globally – documents”, Reuters, October 25, 2021. https://www.reuters.com/technology/facebook-knew-about-failed-police-abusive-content-globally-documents-2021-10-25/

[3] Salimah Shivji, “Facebook has a massive disinformation problem in India. This student learned firsthand how damaging it can be”, CBC News, December 9, 2021. https://www.cbc.ca/news/world/india-facebook-disinformation-1.6276857

[4] Case decision 2021-012-FB-UA, Oversight Board, December 9, 2021. https://oversightboard.com/decision/FB-L1LANIA7/

[5] Joseph Goebbels and H. R. Trevor-Roper. Final Entries, 1945: The Diaries of Joseph Goebbels. Edited and introduced by Hugh Trevor-Roper. New York: Putnam, 1978.

[6] Case decision 2020-005-FB-UA, Oversight Board, January 28, 2021. https://oversightboard.com/decision/FB-L1LANIA7/

[7] “Oversight Board demands more transparency from Facebook”, Oversight Board, October 2021. https://oversightboard.com/news/215139350722703-oversight-board-demands-more-transparency-from-facebook/

[8] “Oversight Board demands more transparency from Facebook”, Oversight Board, October 2021.