The Oversight Board, created in 2020, was meant to hold Meta accountable for its decisions about what speech should be allowed on Facebook and Instagram. It has weighed in on several high-profile cases and addressed major flashpoints of disagreement around the world, some referred by the company and others selected by the Board itself. The Board reviewed the removal of former President Trump from the platform for his posts related to the January 6 attack on the U.S. Capitol and weighed in on Meta’s approach to removing misinformation about COVID-19. Applying an international human rights framework, the Board has consistently overturned company decisions on a range of issues, including the removal of posts in India and Iran that Facebook had interpreted as threats of violence; the removal of a documentary video that revealed the identities of child victims of sexual abuse and murder in Pakistan in the 1990s; and a video of a woman protesting against the Cuban government that depicted men in dehumanizing terms. The Board’s decisions have provided the public with important information about Meta’s policies and practices and have carefully weighed competing human rights values. Its policy recommendations have continually pushed Meta toward more specific and equitable standards and greater transparency.
But the Board’s capacity is limited: it typically takes on 15 to 30 cases each year, and the influence of its decisions on the broader body of Meta’s content moderation rules appears quite limited, because the company controls whether those decisions are applied to similar cases. According to the Board’s charter, “when Facebook identifies that identical content with a parallel context continues to exist on Facebook, it will take action by considering whether it is technically and operationally feasible to apply the board’s decision to that content as well.” Moreover, the company insists on keeping the Board in the dark about a critical aspect of its operations: the algorithms that, in essence, control Facebook’s content, making the “overwhelming majority” of decisions about whether to remove posts, amplify them, or demote them. As a result, the Board is limited to influencing only a small portion of the millions of daily decisions Meta makes about speech on Facebook, Instagram, and now Threads. It is also hampered in properly assessing cases because it cannot, for example, fully analyze the safety threat posed by a post without information about how algorithms have amplified or suppressed that post. For the Board to fulfill its role in holding Meta accountable for regulating speech on its platforms, it must have access to the algorithms at the heart of this system.
While the Oversight Board is restricted in the types of cases it can review, its charter gives it flexibility to exert influence in other ways. The Board has made the most of this, for example, by taking an expansive view of its authority to issue policy advice; it has published 251 recommendations over the past three years. A 2021 article by Edward Pickup in the Yale Journal on Regulation Bulletin argues convincingly that the Board’s mandate also encompasses a “latent power” to review Facebook’s algorithms.
The authority to review algorithms is central to the Board’s decision-making in many types of cases. According to the Board’s charter, “in cases under review, Facebook will provide such information, consistent with applicable legal and privacy restrictions, as is reasonably necessary for the board to make a decision.” (A similar provision in the section describing the Board’s powers gives it authority to “request that Facebook provide information reasonably necessary for the board’s deliberations in a timely and transparent manner.”) In the Trump case, the company declined to answer questions about how its platform design, algorithms, and technical features may have amplified Trump’s posts or contributed to the events of January 6. But as the Board explained, information about the reach of Trump’s posts was clearly relevant to its assessment of key issues: the risk of violence posed by Trump and whether less restrictive measures were available. In another recent case, the Board reversed the company’s removal of a post containing COVID-19 misinformation because Facebook failed to show how the post “contributed to imminent harm,” in part because it had not provided information about “the reach” of the post.
Another potential avenue for obtaining relevant algorithmic information is the Board’s explicit authority to “(i)nterpret Facebook’s Community Standards and other relevant policies (collectively, ‘content policies’) in light of Facebook’s articulated values” in the cases that come before it. In analyzing the Board’s authority to issue advisory opinions, Pickup convincingly argues that algorithms are part of the set of “content policies” because they are a set of rules applied to particular cases. Indeed, the point can be taken further: algorithms are in effect a coded manifestation of the Community Standards they are meant to reflect. Thus, for the Board to fulfill its explicit mandate to interpret Facebook’s content policies in deciding cases, it must logically have access to this code. This is not the same as a broad-ranging authority to review the company’s algorithms. But the Board has creatively used its authority to make recommendations to Meta during case reviews and to follow up on the company’s responses, creating an opening to review the algorithms that drive content decisions.
The Board recognizes that information about algorithms is critical to the success of its mission. One of its strategic priorities for 2024 is to examine “how automated enforcement should be designed and reviewed, the accuracy and limitations of automated systems, and the importance of greater transparency in this area.” And in several cases it has made general recommendations related to algorithms, urging Meta to: commission a “human rights impact assessment of how its news feed, recommendation algorithms and other features amplify harmful health misinformation and its impacts”; improve “automatic image detection”; and perform an internal audit to “continuously analyze a statistically representative sample of automated content removal decisions to reverse and learn from implementation errors.”
Meta should cooperate with the Board’s efforts to dig deeper into its algorithms and give it access to the information it needs to do so, with appropriate protections for confidentiality. The company, along with other social media platforms, has been criticized for using algorithms that surface extreme and inflammatory content to keep users engaged and generate advertising revenue. So far it has managed to keep these systems secret. But the pressure for algorithmic transparency that has been building in recent years, including in Congress, is unlikely to abate. The Oversight Board offers an important avenue for Meta to satisfy the public clamor to understand how these systems work.
Image: Meta logo (via Getty Images).