<p><em>Editor’s Note: This article is part of a four-piece symposium that examines Kishanthi Parella’s work, “Enforcing International Law Against Corporations: A Stakeholder Management Approach,” featured in <a href="https://journals.law.harvard.edu/ilj/wp-content/uploads/sites/84/01_HLI_65-2_Parella.pdf">Volume 65(2) of the HILJ Print Journal</a>.</em></p>
<p><strong>Shannon Raj Singh</strong></p>
<p>Any credible discussion of the future of the social media industry must reckon with its history of spectacular failures. Chief among those are the instances where social media has fueled or contributed to the commission of mass atrocities around the world. A wealth of examples is on the tip of our tongues: <a href="https://www.cnn.com/2021/10/25/business/ethiopia-violence-facebook-papers-cmd-intl/index.html">leaked Facebook documents</a> betray internal warnings that the company was not doing enough to prevent spiraling ethnic violence in Ethiopia, inaction that is now the subject of <a href="https://www.amnesty.org/en/latest/news/2022/12/kenya-meta-sued-for-1-6-billion-usd-for-fueling-ethiopia-ethnic-violence/">litigation</a> in Kenyan courts.
The Taliban deftly navigated Twitter’s content moderation rules to spread <a href="https://www.washingtonpost.com/technology/2021/08/18/taliban-social-media-success/">propaganda</a> amidst its takeover of Afghanistan in 2021, in an effort to add a veneer of legitimacy to a brutal and oppressive regime. And in the sprawling refugee camps of Cox’s Bazar, Bangladesh, close to one million Rohingya <a href="https://www.aljazeera.com/opinions/2023/8/25/facebook-should-pay-for-what-it-did-to-my-people-rohingya">remain displaced</a> after unrestrained hate speech on social media played a significant role in inciting ethnic violence in neighboring Myanmar.</p>
<p>But amidst the rubble, we can find evidence of surprising successes, too—moments when social media companies have acted in broad alignment with international legal frameworks and standards on the prevention and punishment of mass atrocities. In select cases, platforms have agreed to <a href="https://www.reuters.com/article/us-myanmar-facebook/facebook-shares-data-on-myanmar-with-united-nations-investigators-idUSKBN25L2G4/?il=0">share data</a> that could be used to investigate and prosecute mass atrocities, or have acted rapidly to modify their products or policies to prevent them from contributing to atrocity crimes. In the context of Afghanistan, for example, Facebook released an innovative feature aimed at civilian protection: its “<a href="https://www.theverge.com/2021/8/20/22634209/facebook-hides-friends-lists-instagram-safety-afghanistan-taliban-security">locked profile</a>” feature allowed Afghan civilians to rapidly lock down their privacy settings to prevent information on their profiles from being used to target them in a rapidly devolving security situation.
Amidst the Russian invasion of Ukraine, Twitter <a href="https://www.hrw.org/news/2022/05/24/twitter-moves-protect-prisoner-war-identities-ukraine">released</a> a content moderation policy prohibiting depictions of prisoners of war, specifically referencing alignment with the <a href="https://www.ejiltalk.org/twitter-as-enforcer-of-the-geneva-conventions/">Geneva Conventions</a>. In several instances, platforms have developed war rooms and <a href="https://about.fb.com/news/2022/02/metas-ongoing-efforts-regarding-russias-invasion-of-ukraine/">operations centers</a> to respond to emerging dangers posed by their products in conflict and crisis settings. And at various points over the past few years, platforms have released <a href="https://about.fb.com/wp-content/uploads/2021/03/Facebooks-Corporate-Human-Rights-Policy.pdf">human rights policies</a>, hired <a href="https://about.google/human-rights/">human rights teams</a>, and invested in <a href="https://www.bsr.org/reports/BSR-Twitch-Human-Rights-Impact-Assessment-Report_2.pdf">human rights impact assessments</a> to address risks related to their products, policies, and operations. Certainly, these initiatives can help buttress platforms’ reputations as they are being otherwise battered for their failures in conflict settings. But calling them mere PR stunts may obscure the investment, time, and effort of those working to steer platforms toward international law in moments of atrocity risk. What accounts for these bright spots, and how can we replicate them?</p>
<p>Kishanthi Parella’s <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4624056">article</a>, <em>Enforcing International Law Against Corporations: A Stakeholder Management Approach</em>, illuminates how international law is at work in the private sector in “non-obvious ways” (Parella, p. 338).
Nowhere is this more true than in the realm of social media, where platforms developed and operated by the private sector play a central role in modern political dialogue, breaking news, armed conflicts, demonstrations, revolutions, and social movements the world over.</p>
<p>Parella’s article offers a thorough landscape assessment of how various stakeholders interact with one another to inform and influence corporate conduct. In her conception, corporate stakeholders—ranging from states to consumers, shareholders, employees, benchmarking organizations, civil society organizations, and others—use an array of strategies to serve as modern enforcers of international law in the private sector. The article’s core contribution lies in both recognizing the aggregate effect of stakeholder actions as a form of international law enforcement, and mapping their enforcement strategies onto a typology so this work can be done more intentionally going forward.</p>
<p>Although it would seem to apply to a range of issue areas governed by international law, a stakeholder management framework may offer particular promise in pushing social media companies to better align with international legal frameworks relating to atrocity crimes: namely, genocide, crimes against humanity, and war crimes. While these legal frameworks face widespread enforcement challenges in courts of law, they may derive particular power in the corporate context specifically because they relate to the gravest crimes on earth. Indeed, the ability of mass atrocities to shock our collective conscience may well serve stakeholders’ ability to convert corporate violations of international law into reputational, strategic, and operational risks that can incentivize action and change.</p>
<p>There are legal frameworks regarding the role of corporate actors in mass atrocities, but they are notoriously difficult to enforce.
While individuals (including corporate executives) can be prosecuted in either domestic or international legal systems for the commission of genocide, war crimes, and crimes against humanity, the <a href="https://www.icc-cpi.int/sites/default/files/RS-Eng.pdf">Rome Statute</a> does not provide for the prosecution of legal persons before the International Criminal Court. And despite a series of <a href="https://www.theguardian.com/technology/2021/dec/06/rohingya-sue-facebook-myanmar-genocide-us-uk-legal-action-social-media-violence">legal efforts</a> that have sought to hold corporations to account for their role in atrocity crimes, enforcement is plainly the exception, not the rule. Indeed, asserting that social media companies should be held responsible for the dissemination of content, posted by an array of actors that can <em>in the aggregate</em> contribute to mass atrocities, can make for a challenging legal argument. Although the law certainly imposes responsibilities in this space, neither violations nor causation is easy to prove.</p>
<p>We must also distinguish obligations to <em>prevent</em> mass atrocities from obligations restraining actors from contributing to their commission. States, for example, are not only prohibited from committing mass atrocities, but also are obligated to help prevent them.
These <a href="https://academic.oup.com/jhrp/article/14/3/769/6646799">state obligations</a> to prevent derive from distinct legal sources: the <a href="https://www.un.org/en/genocideprevention/documents/atrocity-crimes/Doc.1_Convention%20on%20the%20Prevention%20and%20Punishment%20of%20the%20Crime%20of%20Genocide.pdf">1948 Convention on the Prevention and Punishment of the Crime of Genocide</a>, for example, holds states to a <a href="https://legal.un.org/avl/pdf/ha/cppcg/cppcg_e.pdf">due diligence</a> standard that requires them to act according to their capacity to influence a situation at risk of genocide, wherever it occurs. <a href="https://www.ohchr.org/en/instruments-mechanisms/instruments/geneva-convention-relative-protection-civilian-persons-time-war">Common Article 1</a> of the Geneva Conventions imposes a similar obligation for war crimes, obligating High Contracting Parties to both “respect and ensure respect” for the Conventions — meaning states must not only refrain from committing war crimes themselves, but are also <a href="https://www.icj-cij.org/case/131/advisory-opinions">obligated</a> to take measures within their power to <a href="https://ihl-databases.icrc.org/en/ihl-treaties/gci-1949/article-1/commentary/2016">prevent</a> war crimes by other states.</p>
<p>But there is no question that these treaties bind states, and not social media companies. And while the <a href="https://www.ohchr.org/sites/default/files/documents/publications/guidingprinciplesbusinesshr_en.pdf">UN Guiding Principles on Business and Human Rights</a>—widely recognized as the authoritative global framework on corporate obligations relating to human rights—requires companies to “[a]void causing or contributing to adverse human rights impacts through their own activities, and . . .
[s]eek to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services,” their nonbinding nature presents significant roadblocks to consistent enforcement.</p>
<p>Amidst these legal obstacles, and in an age where social media companies wield as much influence—if not more—over the risk of mass atrocities as many states, how can we encourage platforms to act more responsibly in atrocity risk settings? And what promise does a stakeholder engagement model hold for encouraging social media companies to reflect and uphold norms relating to mass atrocities?</p>
<p>Although Parella’s model of stakeholder management anticipates many of the core players in the social media context (such as shareholders, employees, civil society organizations, and states), it perhaps fails to adequately capture the unique nature of the social media user. As powerfully <a href="https://sauconpanther.org/2535/arts-and-entertainment/if-youre-not-paying-for-the-product-then-you-are-the-product/">stated</a> by technology ethicist Tristan Harris, social media users are simultaneously the “consumers” and the “products.” User data is packaged and sold to drive profit through a business model premised on targeted advertisement, rendering individuals both consumers of social media platforms and part of what is being sold. In addition, social media stands apart from most industries within the private sector because the range of relevant “consumer” stakeholders encompasses literally billions of people.
Meta has <a href="https://transparency.fb.com/policies/improving/our-stakeholders/">acknowledged</a> this challenge explicitly, noting that its “stakeholder base includes every person or organization that may be impacted by [its] policies,” while being clear that it “can’t meaningfully engage with billions of people.”</p>
<p>So while stakeholder engagement in other sectors brings to mind outreach to a set of fairly clear-cut communities, social media tests the boundaries of what the category of “stakeholder” even means. In the mining industry, for example, stakeholder management may be premised on engagement with local communities directly affected by the sourcing of minerals, vendors throughout the supply chain, and a set of downstream consumers. But in the realm of social media, both every individual who has a social media account and every individual who <em>may be affected</em> by developments on social media platforms are veritable stakeholders of this industry. Social media has become so central to democratic processes, to peace, stability, and the risk of armed conflict, that it is difficult to envision who would not want to have the ability to shape its development and governance. Who, among us, is not a stakeholder in the way that our modern “public squares” organize, amplify, censor, and present purported information?</p>
<p>But as Parella recognizes, not all stakeholders have equal power. The sheer volume of social media stakeholders dilutes individual power, a fact which implicitly suggests the potential for stakeholder alliances to shift corporate conduct. Should those billions of stakeholders organize into meaningful blocs or groups that can articulate risks related to atrocity prevention, imagine the aggregate power they could wield to influence platform resourcing and decision-making.
The fact that a stakeholder management model makes this so evident is valuable in itself—but it does not necessarily provide a ready answer to the modalities of “managing” such an extraordinary volume of stakeholders. In the social media industry in particular, this warrants further consideration.</p>
<p>At the same time, a stakeholder management model can be illuminating in demonstrating the array of enforcement opportunities open to actors in the social media space. One such actor—and a unique stakeholder with little direct precedent—is the Oversight Board. Established by Meta in 2020, the <a href="https://www.oversightboard.com/">Oversight Board</a> is mandated to make principled, independent decisions on selected cases about how digital content is handled by Facebook and Instagram, and now Threads as well. While created by Meta, the Board is funded by an independent trust (itself <a href="https://www.oversightboard.com/news/1111826643064185-securing-ongoing-funding-for-the-oversight-board/">funded</a> by Meta), and, pursuant to its <a href="https://oversightboard.com/attachment/494475942886876/">Charter</a>, Board members exercise independent judgment on Meta’s decisions and operations.</p>
<p>While skepticism about its ability to drive long-term change has been plentiful, the good news is that, from the outset, the Oversight Board seems to have accepted the relevance of international law as a core part of its mandate.
Its decisions on cases selected for review regularly reference international law, including the <a href="https://www.oversightboard.com/decision/FB-YLRV35WD">Geneva Conventions</a>, the <a href="https://www.oversightboard.com/decision/FB-H6OZKDS3/">International Covenant on Civil and Political Rights</a>, <a href="https://oversightboard.com/decision/FB-MP4ZC4CC/">Human Rights Committee jurisprudence</a>, and the <a href="https://www.oversightboard.com/decision/FB-691QAMHJ">UN Guiding Principles on Business and Human Rights</a>. In a world where the international legal community has largely failed to effectively wield the law as a sanction for social media companies’ conduct in conflict zones, the Oversight Board is “augment[ing] the architecture of international institutions that detect and punish violations of international law” (Parella, p. 341).</p>
<p>To date, the Oversight Board has (consciously or unconsciously) engaged in a range of strategies to enforce international law. Some of its work can be considered predicative enforcement: conduct that does not directly engage a corporation but creates the conditions for another stakeholder to do so.
In 2021, for example, the Board issued a <a href="https://www.oversightboard.com/decision/FB-691QAMHJ">decision</a> recommending that Facebook “[m]ake clear in its corporate human rights policy how it collects, preserves and, where appropriate, shares information to assist in investigation and potential prosecution of grave violations of international criminal, human rights and humanitarian law.” Serving a function somewhat similar to (if more toothless than) that of mandated disclosure laws, its calls for transparency can push Meta to share information that it might not otherwise disclose, providing a foundation for other stakeholders to directly engage the platform on policies and practices that impact the prevention and punishment of mass atrocities.</p>
<p>The Oversight Board can also engage in action that “magnifies the impact of action taken by other stakeholders” (Parella, p. 329). This “amplified enforcement” (Parella, p. 329) strategy can play an important role in raising the magnitude of a risk for Meta, drawing attention to its actions in atrocity risk settings. Among other avenues, this can occur through the use of the Oversight Board’s “agenda-setting” function (Parella, p. 330) to influence the risks that a platform faces because of its conduct in atrocity risk settings. In December 2023, for example, the Oversight Board <a href="https://www.oversightboard.com/news/318968857762747-oversight-board-announces-new-cases-on-israel-hamas-conflict-for-expedited-review/">announced</a> that it would be reviewing a case related to content depicting the apparent aftermath of a strike on a yard outside Al-Shifa Hospital in Gaza City. The content—which was removed by Meta—depicted “people, including children, injured or dead, lying on the ground and/or crying,” while a caption in Arabic and English suggested the hospital was targeted by Israeli forces.
Strikingly, Meta <a href="https://transparency.fb.com/oversight/oversight-board-cases/al-shifa-hospital">reversed</a> its decision—restoring the post to the platform—not because the Board asked it to, but simply upon learning the Board had taken up the case. In this case, through its agenda alone, the Oversight Board influenced Meta’s actions, causing it to reassess its decision to remove content documenting purported atrocities. This is particularly powerful where the removed content at issue is intended to raise public awareness of the risk of mass violence. To the extent that the media also then picks up on the Oversight Board’s decisions, its enforcement function can be amplified further.</p>
<p>But perhaps most impactfully, the Oversight Board’s decisions on cases—akin to court decisions in some ways—can be regarded as direct enforcement of international law. In a decision on digital content threatening violence in Ethiopia, for example, the Board <a href="https://www.oversightboard.com/decision/FB-E1154YLY">found</a> that “Meta has a human rights responsibility to establish a principled, transparent system for moderating content in conflict zones to reduce the risk of its platforms being used to incite violence or violations of international law. It must do more to meet that responsibility.” Although other stakeholders may build upon the Oversight Board’s decisions, these decisions are themselves a form of direct engagement with Meta. They often contain recommendations that go well beyond addressing how the platform should respond to an individual piece of content, calling for systemic change in how the platform responds to human rights risks in similar settings.
In the Ethiopia <a href="https://www.oversightboard.com/decision/FB-E1154YLY">decision</a>, for example, the Oversight Board called on Facebook to both “publish information on its Crisis Policy Protocol,” and to “assess the feasibility of establishing a sustained internal mechanism that provides it with the expertise, capacity and coordination required to review and respond to content effectively for the duration of a conflict.”</p>
<p>It is not only significant that stakeholders such as the Oversight Board are calling for changed policies and practices from social media companies in atrocity risk settings—they are also invoking platforms’ international legal responsibilities in the process. Make no mistake: the Oversight Board warrants recognition as an emerging mechanism for the enforcement of international law, drawing on an array of enforcement strategies outlined in Parella’s model. Where the law does not itself represent a persuasive sanction, stakeholders of social media companies may be able to drive more immediate alignment with international law.</p>
<p>At the same time, it is worth bearing in mind Evelyn Douek’s prescient <a href="https://scholarship.law.uci.edu/cgi/viewcontent.cgi?article=1042&amp;context=ucijil">warning</a> that “[t]he indeterminacy of [international human rights law] creates room for its co-optation by platforms, rather than their being constrained by it.” Certainly, the same could be said for legal frameworks relating to mass atrocities. Preventive obligations remain largely undefined even for state actors, and accountability for complicity in the commission of mass atrocities is pursued for only the smallest subset of responsible actors. We do not want social media platforms adopting the “language” of atrocity prevention unless it is accompanied by meaningful conduct to prevent and mitigate atrocity risks.
Stakeholder engagement can help here too, but will need to ensure that advocacy is tied to the tracking and monitoring of data-driven indicators of progress by platforms operating in atrocity risk settings.</p>
<p>Perhaps the greatest benefit of a stakeholder engagement model is that it nods to our collective agency and responsibility in shaping a sector that is notoriously opaque. There is much to be said for the noble efforts of trust and safety professionals working to change social media companies from within—the wins referenced above could surely not have occurred without their work and expertise. But we must not forget that we find ourselves today in the midst of a “<a href="https://hls.harvard.edu/today/education-of-an-idealist/">human rights recession</a>,” a trend that extends to the tech industry. Amidst <a href="https://techcrunch.com/2022/11/04/elon-musk-twitter-layoffs/">mass layoffs</a> of teams focused on human rights, trust and safety, and election integrity, Parella’s framework offers us a necessary roadmap for the way forward. There will always be power in identifying opportunities to prosecute and punish those who contribute to atrocity crimes—natural persons and legal persons alike.
But in the meantime, a stakeholder engagement model helps us conceptualize how those both inside and outside social media companies can steer platforms toward more responsible conduct in atrocity risk settings, in the moments it matters most.</p>
<p>[hr gap="1"]</p>
<p><span style="color: #800000"><a style="color: #800000" href="https://www.pexels.com/photo/close-up-of-a-smartphone-screen-displaying-social-media-app-folder-15406294/">Cover image credit</a></span></p>