Editor’s Note: This article is part of a four-piece symposium that examines Kishanthi Parella’s work, “Enforcing International Law Against Corporations: A Stakeholder Management Approach,” featured in Volume 65(2) of the HILJ Print Journal.

Shannon Raj Singh

Any credible discussion on the future of the social media industry must reckon with its history of spectacular failures. Chief among those are the instances where social media has fueled or contributed to the commission of mass atrocities around the world. Examples come readily to mind: Leaked Facebook documents betray internal warnings that the company was not doing enough to prevent spiraling ethnic violence in Ethiopia, inaction that is now the subject of litigation in Kenyan courts. The Taliban deftly navigated Twitter’s content moderation rules to spread propaganda amidst its takeover of Afghanistan in 2021, in an effort to add a veneer of legitimacy to a brutal and oppressive regime. And in the sprawling refugee camps of Cox’s Bazar, Bangladesh, close to one million Rohingya remain displaced after unrestrained hate speech on social media played a significant role in inciting ethnic violence in neighboring Myanmar.

But amidst the rubble, we can find evidence of surprising successes, too—moments when social media companies have acted in broad alignment with international legal frameworks and standards on the prevention and punishment of mass atrocities. In select cases, platforms have agreed to share data that could be used to investigate and prosecute mass atrocities, or have acted rapidly to modify their products or policies to prevent them from contributing to atrocity crimes. In the context of Afghanistan, for example, Facebook released an innovative civilian-protection tool: its “locked profile” feature allowed Afghan civilians to quickly lock down their privacy settings so that information on their profiles could not be used to target them in a rapidly devolving security situation. Amidst the Russian invasion of Ukraine, Twitter released a content moderation policy prohibiting depictions of prisoners of war, specifically referencing alignment with the Geneva Conventions. In several instances, platforms have developed war rooms and operations centers to respond to emerging dangers posed by their products in conflict and crisis settings. And at various points over the past few years, platforms have released human rights policies, hired human rights teams, and invested in human rights impact assessments to address risks related to their products, policies, and operations. Certainly, these initiatives can help buttress platforms’ reputations even as they are otherwise battered for their failures in conflict settings. But calling them mere PR stunts may obscure the investment, time, and effort of those working to steer platforms toward international law in moments of atrocity risk. What accounts for these bright spots, and how can we replicate them?

Kishanthi Parella’s article, Enforcing International Law Against Corporations: A Stakeholder Management Approach, illuminates how international law is at work in the private sector in “non-obvious ways” (Parella, p. 338). Nowhere is this more true than in the realm of social media, where platforms developed and operated by the private sector play a central role in modern political dialogue, breaking news, armed conflicts, demonstrations, revolutions, and social movements the world over.

Parella’s article offers a thorough landscape assessment of how various stakeholders interact with one another to inform and influence corporate conduct. In her conception, corporate stakeholders—ranging from states to consumers, shareholders, employees, benchmarking organizations, civil society organizations, and others—use an array of strategies to serve as modern enforcers of international law in the private sector. The article’s core contribution lies both in recognizing the aggregate effect of stakeholder actions as a form of international law enforcement and in mapping stakeholders’ enforcement strategies onto a typology, so that this work can be done more intentionally going forward.

Although it would seem to apply to a range of issue areas governed by international law, a stakeholder management framework may offer particular promise in pushing social media companies to better align with international legal frameworks relating to atrocity crimes: namely, genocide, crimes against humanity, and war crimes. While these legal frameworks face widespread enforcement challenges in courts of law, they may derive particular power in the corporate context precisely because they relate to the gravest crimes on earth. Indeed, the capacity of mass atrocities to shock our collective conscience may well strengthen stakeholders’ ability to convert corporate violations of international law into reputational, strategic, and operational risks that can incentivize action and change.

There are legal frameworks regarding the role of corporate actors in mass atrocities, but they are notoriously difficult to enforce. While individuals (including corporate executives) can be prosecuted in either domestic or international legal systems for the commission of genocide, war crimes, and crimes against humanity, the Rome Statute does not provide for the prosecution of legal persons before the International Criminal Court. And despite a series of legal efforts that have sought to hold corporations to account for their role in atrocity crimes, enforcement is plainly the exception, not the rule. Indeed, asserting that social media companies should be held responsible for the dissemination of content posted by an array of actors, content that can in the aggregate contribute to mass atrocities, can make for a challenging legal argument. Although the law certainly imposes responsibilities in this space, neither violation nor causation is easy to prove.

We must also distinguish obligations to prevent mass atrocities from obligations restraining actors from contributing to their commission. States, for example, are not only prohibited from committing mass atrocities, but also are obligated to help prevent them. These state obligations to prevent derive from distinct legal sources: the 1948 Convention on the Prevention and Punishment of the Crime of Genocide, for example, holds states to a due diligence standard that requires them to act according to their capacity to influence a situation at risk of genocide, wherever it occurs. Common Article 1 of the Geneva Conventions imposes a similar obligation for war crimes, obligating High Contracting Parties to both “respect and ensure respect” for the Conventions — meaning states must not only refrain from committing war crimes themselves, but are also obligated to take measures within their power to prevent war crimes by other states.

But there is no question that these treaties bind states, and not social media companies. And while the UN Guiding Principles on Business and Human Rights—widely recognized as the authoritative global framework on corporate obligations relating to human rights—require companies to “[a]void causing or contributing to adverse human rights impacts through their own activities, and . . . [s]eek to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services,” their nonbinding nature presents significant roadblocks to consistent enforcement.

Amidst these legal obstacles, and in an age where social media companies wield as much influence—if not more—over the risk of mass atrocities as many states, how can we encourage platforms to act more responsibly in atrocity risk settings? And what promise does a stakeholder engagement model hold for encouraging social media companies to reflect and uphold norms relating to mass atrocities?

Although Parella’s model of stakeholder management anticipates many of the core players in the social media context (such as shareholders, employees, civil society organizations, and states), it perhaps fails to adequately capture the unique nature of the social media user. As powerfully stated by technology ethicist Tristan Harris, social media users are simultaneously the “consumers” and the “products.” User data is packaged and sold to drive profit through a business model premised on targeted advertisement, rendering individuals both consumers of social media platforms and part of what is being sold. In addition, unlike most industries within the private sector, social media stands apart because the range of relevant “consumer” stakeholders encompasses literally billions of people. Meta has acknowledged this challenge explicitly, noting that its “stakeholder base includes every person or organization that may be impacted by [its] policies,” while being clear that it “can’t meaningfully engage with billions of people.”

So while stakeholder engagement in other sectors brings to mind outreach to a set of fairly clear-cut communities, social media tests the boundaries of what the category of “stakeholder” even means. In the mining industry, for example, stakeholder management may be premised on engagement with local communities directly affected by the sourcing of minerals, vendors throughout the supply chain, and a set of downstream consumers. But in the realm of social media, every individual who has a social media account, and every individual who may be affected by what happens on social media platforms, is a veritable stakeholder of this industry. Social media has become so central to democratic processes, to peace, stability, and the risk of armed conflict, that it is difficult to envision who would not want the ability to shape its development and governance. Who, among us, is not a stakeholder in the way that our modern “public squares” organize, amplify, censor, and present purported information?

But as Parella recognizes, not all stakeholders have equal power. The sheer volume of social media stakeholders dilutes individual power, a fact which implicitly suggests the potential for stakeholder alliances to shift corporate conduct. Should those billions of stakeholders organize into meaningful blocs or groups that can articulate risks related to atrocity prevention, imagine the aggregate power they could wield to influence platform resourcing and decision-making. The fact that a stakeholder management model makes this so evident is valuable in itself—but it does not necessarily provide a ready answer to the modalities of “managing” such an extraordinary volume of stakeholders. In the social media industry in particular, this warrants further consideration.

At the same time, a stakeholder management model can be illuminating in demonstrating the array of enforcement opportunities open to actors in the social media space. One such actor—and a unique stakeholder with little direct precedent—is the Oversight Board. Established by Meta in 2020, the Oversight Board is mandated to make principled, independent decisions on selected cases about how digital content is handled by Facebook and Instagram, and now Threads as well. While created by Meta, the Board is funded through an independent trust (which Meta finances), and, pursuant to its Charter, Board members exercise independent judgment on Meta’s decisions and operations.

While skepticism about its ability to drive long-term change has been plentiful, the good news is that, from the outset, the Oversight Board seems to have accepted the relevance of international law as a core part of its mandate. Its decisions on cases selected for review regularly reference international law, including the Geneva Conventions, the International Covenant on Civil and Political Rights, Human Rights Committee jurisprudence, and the UN Guiding Principles on Business and Human Rights. In a world where the international legal community has largely failed to effectively wield the law as a sanction for social media companies’ conduct in conflict zones, the Oversight Board is “augment[ing] the architecture of international institutions that detect and punish violations of international law” (Parella, p. 341).

To date, the Oversight Board has (consciously or unconsciously) engaged in a range of strategies to enforce international law. Some of its work can be considered predicative enforcement: conduct that does not directly engage a corporation but creates the conditions for another stakeholder to do so. In 2021, for example, the Board issued a decision recommending that Facebook “[m]ake clear in its corporate human rights policy how it collects, preserves and, where appropriate, shares information to assist in investigation and potential prosecution of grave violations of international criminal, human rights and humanitarian law.” Serving a function somewhat similar to (if more toothless than) that of mandated disclosure laws, its calls for transparency can push Meta to share information that it might not otherwise disclose, providing a foundation for other stakeholders to directly engage the platform on policies and practices that impact the prevention and punishment of mass atrocities.

The Oversight Board can also engage in action that “magnifies the impact of action taken by other stakeholders” (Parella, p. 329). This “amplified enforcement” (Parella, p. 329) strategy can play an important role in raising the magnitude of a risk for Meta, drawing attention to its actions in atrocity risk settings. Among other avenues, this can occur through the use of the Oversight Board’s “agenda-setting” function (Parella, p. 330) to influence the risks that a platform faces because of its conduct in atrocity risk settings. In December 2023, for example, the Oversight Board announced that it would be reviewing a case related to content depicting the apparent aftermath of a strike on a yard outside Al-Shifa Hospital in Gaza City. The content—which was removed by Meta—depicted “people, including children, injured or dead, lying on the ground and/or crying,” while a caption in Arabic and English suggested the hospital was targeted by Israeli forces. Strikingly, Meta reversed its decision—restoring the post to the platform—not because the Board asked it to, but simply upon learning the Board had taken up the case. In this case, through its agenda alone, the Oversight Board influenced Meta’s actions, causing it to reassess its decision to remove content documenting purported atrocities. This is particularly powerful where the removed content at issue is intended to raise public awareness of the risk of mass violence. To the extent that the media also then picks up on the Oversight Board’s decisions, its enforcement function can be amplified further.

But perhaps most impactfully, the Oversight Board’s decisions on cases—akin to court decisions in some ways—can be regarded as direct enforcement of international law. In a decision on digital content threatening violence in Ethiopia, for example, the Board found that “Meta has a human rights responsibility to establish a principled, transparent system for moderating content in conflict zones to reduce the risk of its platforms being used to incite violence or violations of international law. It must do more to meet that responsibility.” Although other stakeholders may build upon the Oversight Board’s decisions, these decisions are themselves a form of direct engagement with Meta. They often contain recommendations that go well beyond addressing how the platform should respond to an individual piece of content, calling for systemic change in how the platform responds to human rights risks in similar settings. In the Ethiopia decision, for example, the Oversight Board called on Facebook to both “publish information on its Crisis Policy Protocol,” and to “assess the feasibility of establishing a sustained internal mechanism that provides it with the expertise, capacity and coordination required to review and respond to content effectively for the duration of a conflict.”

It is not only significant that stakeholders such as the Oversight Board are calling for changed policies and practices from social media companies in atrocity risk settings—they are also invoking platforms’ international legal responsibilities in the process. Make no mistake: the Oversight Board warrants recognition as an emerging mechanism for the enforcement of international law, drawing on an array of enforcement strategies outlined in Parella’s model. Where the law does not itself represent a persuasive sanction, stakeholders of social media companies may be able to drive more immediate alignment with international law.

At the same time, it is worth bearing in mind Evelyn Douek’s prescient warning that “[t]he indeterminacy of [international human rights law] creates room for its co-optation by platforms, rather than their being constrained by it.” Certainly, the same could be said for legal frameworks relating to mass atrocities. Preventive obligations remain largely undefined even for state actors, and accountability for complicity in the commission of mass atrocities is pursued for only the smallest subset of responsible actors. We do not want social media platforms adopting the “language” of atrocity prevention unless it is accompanied by meaningful conduct to prevent and mitigate atrocity risks. Stakeholder engagement can help here too, but stakeholders will need to ensure that advocacy is tied to the tracking and monitoring of data-driven indicators of progress by platforms operating in atrocity risk settings.

Perhaps the greatest benefit of a stakeholder engagement model is that it nods to our collective agency and responsibility in shaping a sector that is notoriously opaque. There is much to be said for the noble efforts of trust and safety professionals working to change social media companies from within—the wins referenced above could surely not have occurred without their work and expertise. But we must not forget that we find ourselves today in the midst of a “human rights recession,” a trend that extends to the tech industry. Amidst mass layoffs of teams focused on human rights, trust and safety, and election integrity, Parella’s framework offers us a necessary roadmap for the way forward. There will always be power in identifying opportunities to prosecute and punish those who contribute to atrocity crimes—natural persons and legal persons alike. But in the meantime, a stakeholder engagement model helps us conceptualize how those both inside and outside social media companies can steer platforms toward more responsible conduct in atrocity risk settings, in the moments it matters most.

 

