THE SOCIAL MEDIA TRIBUNAL VERDICT
This tribunal relates to alleged violations of human rights and other serious harm caused to people around the world by social media companies such as Facebook, X (formerly Twitter), YouTube, Discord, Snapchat, Instagram, TikTok, Reddit and many others I haven't named.
We begin by noting, as pointed out somewhat by defense counsel today, that social media, as a part of the digital age in which we live, are globally influencing the lives of billions of people all over the world. Social media could be an important tool to support freedom, peace and human rights. However, because most platforms are dominated by large commercial enterprises and their obvious interest in increasing profits, as well as in economic and political power, these platforms often fail to serve the important interest of supporting freedom, peace and human rights.
Accordingly, we will conclude our findings of fact and conclusions of law by respectfully making a number of recommendations, which we believe will improve the ecosystem in which social media operates so these interests are protected and furthered.
Having heard the testimony of many fact witnesses who testified regarding the harms they or their family members suffered due to the alleged conduct of social media companies, as well as testimony from experts in many fields related to the impact of social media on adults and children throughout the world, and having heard arguments from highly qualified and highly respected and experienced counsel for the prosecution and for the defense, the Tribunal will now make what we call preliminary findings of fact, certain conclusions of law, and we will issue specific recommendations that we urge the social media companies to adopt.
This tribunal is governed by the Statute of the Court of the Citizens of the World, and we rely in particular on certain rules found in that statute. Specifically, Rule 18 provides, in pertinent part, for this Court's jurisdiction. And I quote: "The Court shall possess global jurisdiction over individuals in their personal or professional capacities, corporations and any other legal or natural entities, regardless of their domicile, nationality or governing authority. The Court's authority shall be limited to the analysis and evaluation of evidence serving as a means to render an impartial judgment on alleged human rights violations. The jurisdiction of the Court shall encompass the human rights violations under consideration in this case. The jurisdiction of the Court shall encompass human rights provisions contained in the Rome Statute, international human rights conventions, the general principles of international law and customary international law." That closes the quote of Rule 18.
Rule 30 provides, in pertinent part, as follows, quote: "The standard of proof utilized in the court proceedings considering allegations of human rights violations shall meet the threshold of reasonable grounds to believe." In particular, the court has applied the law, conventions and guiding principles of the following: the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, the European Convention on Human Rights, the UN Guiding Principles on Business and Human Rights, and the Digital Services Act of the European Union.
And now I begin our findings of fact, which I number:
Number one, based on the testimony of many witnesses, we find that there are reasonable grounds to believe that the social media companies knowingly and intentionally allowed and/or failed to remove unlawful or inappropriate content, including but not limited to hate speech, incitement to commit violence against people based on their race, ethnicity, gender or national origin, as well as sexually explicit material targeted to children, violent acts involving children and extortion schemes based on invasions of personal privacy.
We reject the defense that social media platforms were justified in allowing such materials to be available because of the concept of freedom of speech. Speech that incites or causes violence, and that tramples on human rights such as the right to privacy and the right to protect one's own reputation, violates the rule of law because it causes harm and is outside the bounds of ethical, legal and humanitarian norms. It undermines the principles of respect, dignity and peaceful coexistence that are essential to human rights.
We find that there are reasonable grounds to believe that the social media platforms are not proactive or reactive in ensuring that content posted on their platforms by users complies with their own policies and the principles governing human rights. This failure is a direct result, we find, of the social media platforms' desire for increased profits rather than protecting the rights of their users. Advertising is the way in which social media platforms obtain their revenue and their profit. The evidence showed that the more the user accesses the platform and the longer the user remains on the platform, the more the platform will benefit by users noticing the ads, looking at the contents of the ads, and potentially purchasing an advertised product. All three of those activities generate revenue for the platform.
Number two, the EU's Digital Services Act, adopted in 2022, requires social media companies to undertake content moderation of material posted on their platforms. Similarly, in India, the IT Act of 2000 requires social media companies to conduct due diligence, including reporting cyber security incidents to a computer emergency response team, and they must take down content upon receiving notice of a court order or direction from a government agency. The Indian Protection of Children from Sexual Offences Act of 2012, called POCSO, mandates that social media companies report criminal or inappropriate content, such as child sexual abuse material, to law enforcement agencies.
While neither the DSA nor Indian law governs all of the world's jurisdictions, we find that there are reasonable grounds to believe that the failure to review content either prior to publication or surely after receiving notice that certain content is dangerous and inappropriate is grossly negligent and reckless.
We also find that social media platforms have the technology to review content by using artificial intelligence and/or by using highly trained human reviewers, and that the cost to do so would not be prohibitive. We also find that social media platforms undoubtedly know that some of the content displayed on their platforms is dangerous and inappropriate, often leading to adverse effects on the safety, mental health and well-being of their users. Nonetheless, the preponderance of the credible evidence shows that despite that knowledge, the social media platforms often permit this material to be published and/or decline to promptly take it down when informed by users of the dangers posed by the continued access to this information.
Number three, we find that there are reasonable grounds to believe that the social media platforms not only violate the provisions of the laws, guiding principles and conventions cited above, but also knowingly and intentionally violate their own Terms of Use and Privacy policies, which explicitly prohibit the publication of hate speech, incitement to violence, threats of violence, extortion and other impermissible content.
This is circumstantial evidence, at the very least, that the social media platforms knowingly and intentionally are permitting their platforms to be used by bad actors who have committed cyber crimes and/or have interfered in elections by disseminating false or misleading information and/or have been complicit in encouraging genocide or the involuntary transfer of population.
Number four, we briefly provide only a few examples at this time in support of this finding:
- Social media was used in Myanmar to encourage sexual violence and assaults on the minority Muslim Rohingya population, causing hundreds of thousands to flee the country. Posts such as, quote, "All Rohingya must be killed," quote, "Rohingya are vermin that must be eliminated," and, quote, "Rohingya must be driven out of the country" were allowed to appear on a social media platform.
- Posts were also published on social media platforms suggesting that children were ugly and were hated by everyone.
- Other posts led children to engage in self-harm or encouraged children to engage in sexually explicit acts; having done so, the children were then threatened by what is called "sextortion." Sextortion perpetrators told them that if they did not pay money, videos or pictures of those acts would be widely disseminated, and the children were told that their life was over anyway and their only choice was to commit suicide.
- Even after a child did commit suicide, family members continued to be threatened with public exposure of the sexually explicit images.
Number five, we reject the defense contention that the social media platforms are not responsible for the behavior of third parties who are the ones committing cyber crimes or engaging in criminal activity such as extortion, cyberbullying or cyberstalking.
It is true that the social media platforms are not committing those crimes directly, but by permitting these third parties to publish material that is criminal or will lead to criminal conduct, and by failing to remove such material despite actual notice, the social media platforms are facilitating those crimes.
Number six, these types of social media content clearly violate human rights and freedoms, such as the right to privacy, and they cause severe consequences to the victims of that conduct, including self-harm, psychological trauma and even death. By failing to act to prevent hate speech and online criminal activity, the social media platforms violate the human rights of their users, as specified in the law, conventions and guidance cited earlier.
Number seven, we find that social media platforms have made no effort to protect children from harmful content. Moreover, we find that the social media platforms are well aware that children are endangered due to the absence of parental control and parental access to their children's social media accounts.
We also find that social media platforms have been complicit in knowingly causing children, through algorithmic recommendations, to become addicted to using social media. The testimony showed that many children spend as much as five hours a day on these platforms, even waking up repeatedly during the night to check their feeds.
As just one example, a witness testified that intimate images she had placed in a supposedly safe and private place were stolen from a social media platform despite being protected by a privacy setting and a "my eyes only" feature. The images were widely disseminated after the data theft, causing her great harm, such that she was forced to flee her country of residence.
And now I turn to our conclusions of law.
We conclude that the social media platforms, in general, have violated the following conventions, laws and guidance.
I begin with international human rights law, and I cite the Universal Declaration of Human Rights, Articles 2, 3, and 12. In particular, I won't read the language of those, but those are the sections I cite.
We also cite the ICCPR, Article 20, paragraph 2, which prohibits incitement to hatred, discrimination or violence. Article 10, paragraph 2, of the ECHR addresses restrictions on freedom of expression to prevent incitement to hatred, and Article 14 of the ECHR prohibits discrimination. Again, I'm not reading the language of those sections, but we rely on those.
In addition, we cite the United Nations Convention on the Rights of the Child, in particular Article 16 of that convention, which states that every child has the right to privacy.
We cite the Convention on the Elimination of All Forms of Racial Discrimination, Article 2 of which condemns racial discrimination, and the Convention on the Elimination of All Forms of Discrimination Against Women, Article 2 of which condemns discrimination against women in all its forms.
We cite the United Nations Guiding Principles on Business and Human Rights, in particular, Article 17.
And now I cite the OECD Guidelines for Multinational Enterprises, Article IIA(2), which says that enterprises should take fully into account established policies in the countries in which they operate and consider the views of other stakeholders, and that enterprises should respect the internationally recognized human rights of those affected by their activities.
Article 17 of the ICCPR—I think I'm going over that again—says no one should be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honor and reputation.
And ECHR, Article 8 says everyone has the right to respect for his private and family life, his home and his correspondence.
In particular, we turn to the failure to address cyberbullying and harmful content, including promotion of harmful challenges, revenge porn and self-harm. And we cite in particular Articles 6, 7, 17, 19 and 20, paragraph 2, of the ICCPR.
We cite Articles 8 and 10 of the ECHR, which, just to pick the last one of those, says everyone has the right to freedom of expression.
We now turn to our recommendations, of which we have 17, but they're short.
1. Social media platforms must be held accountable for their actions by being exposed to civil penalties.
2. Social media platforms must act responsibly to respect the human rights of their users.
3. Social media platforms must accept the obligation to filter and screen the content of online speech to ensure that cyber crimes such as hate speech, criminal conduct and the spread of dangerous misinformation and disinformation are significantly reduced and soon eliminated.
4. Social media platforms must invest in and implement technical and manual measures to ensure that all artificial intelligence-based algorithms comply with global ethical standards and best practices.
5. Social media platforms must deploy technical tools and adequate skilled human resources to detect, prevent and mitigate unlawful content such as fake news, cyberbullying, hate speech and revenge porn on their platforms.
6. Social media platforms must adopt and enforce mechanisms to promptly address complaints, including immediately removing pages, posts and accounts that clearly violate the laws, conventions and guidance cited above. And in today's world, promptly must mean within 12 hours of a determination that such pages contain criminal or inappropriate content.
7. Social media platforms must be transparent as to the use of algorithms used to persuade users to take certain actions and should prevent addiction in children, which leads to serious physical and psychological consequences.
8. Social media platforms must adopt and implement technology for verifying the age of children and provide parents with real-time access to their children's accounts. The platforms must also adopt a method for immediately flagging inappropriate content and notifying parents of that content.
9. Social media platforms must use AI and machine learning technology to monitor and detect illegal activity on their platform, such as deepfakes, grooming or cyberbullying.
10. Social media platforms must restrict the amount of time a child—and a child is someone under 18—can spend on a particular social media platform.
11. Social media platforms must immediately suspend the accounts of predators or groomers that target or threaten children.
12. Social media platforms must ensure that information collected from a user to fulfill notice and consent requirements is prominently displayed to all users, meaning not buried in a five-page Terms of Use, but prominently displayed.
13. Social media platforms must ensure they deploy artificial intelligence and other resources to detect and terminate fake accounts created by bad actors to commit social engineering frauds and other dangerous and inappropriate activity on their platforms.
14. We also, most respectfully, recommend that UN conventions consider revisions and amendments in order to make social media platforms more transparent and to make them accountable for violations of human rights.
15. We also recommend that national governments consider and pass legislation to address the concerns raised during the hearings before this tribunal. Such legislation should provide for both civil and criminal penalties, depending on the nature of the violation of both national laws and international conventions. Such laws should explicitly prohibit the publication of pornographic material, posts aimed at sexual abuse, and online challenges inciting or encouraging suicide or self-harm, and should protect children by ensuring parental access to the accounts of minors and by requiring parental consent to protect their privacy. Such laws should require that the identity of users be verified and that age limitations be observed by implementing and enforcing specific age-gating practices, prohibiting underage users from accessing inappropriate and dangerous material.
We most respectfully recommend that a court or tribunal with jurisdiction consider the imposition of monetary penalties against Facebook for its conduct in furthering the persecution of the Rohingya in Myanmar. We also recommend that compensation be extended to the victims of this persecution.
16. Similarly, we respectfully also recommend that national courts or tribunals consider compensation to the victims of social media's violations of their human rights.
17. Finally, we most respectfully recommend that this tribunal reconvene one year from now to assess whether the social media platforms have adopted many of these important recommendations and have taken seriously their obligation to protect the human rights of their users.
This constitutes the unanimous oral judgment of this tribunal.
Judges:
• Hon. Shira A. Scheindlin - Appointed by President Bill Clinton as a US Federal Judge (Presiding Judge).
• Herta Däubler-Gmelin – Former German Justice Minister.
• (Dr.) Karnnika A Seth – Cyberlaw Expert practising in the Supreme Court of India for the past 25 years.
Judgment of the Social Media Tribunal held at Berlin,
March 17-21, 2025
1 INTRODUCTION
The Court of the Citizens of the World, a people’s tribunal, sponsored this third tribunal relating to alleged violations of human rights and other serious harm caused to people around the world by social media companies (“SM”) such as Facebook, X (formerly Twitter), YouTube, Discord, Snapchat, Instagram, TikTok, Reddit, and many others.
We begin by noting that Social Media platforms, as a part of the digital age in which we live, are globally influencing the lives of billions of people all over the world. In fact, there are currently approximately five billion users of SM worldwide, which is nearly 64% of the global population. Importantly, SM platforms have largely replaced traditional media. As such, information appearing on those platforms shapes our democratic discourse. These SM platforms could be an important tool to support freedom, peace, and human rights. However, because most platforms are dominated by large commercial enterprises and their obvious interest in increasing profits, as well as economic and political power, these platforms often fail to serve the important interest of supporting freedom, peace and human rights. Accordingly, we will conclude our findings of fact and conclusions of law by respectfully making recommendations that we believe will improve the ecosystem in which SM operates so that these interests are protected and furthered.
Having heard the testimony of many fact witnesses regarding the harms they or their family members suffered due to the alleged conduct of SM companies, as well as testimony from experts in many fields related to the impact of SM on adults and children throughout the world, and having heard arguments from highly qualified and experienced counsel for the Prosecution and the Defence, the Tribunal will now make findings of fact, conclusions of law, and issue specific recommendations that we urge the SM companies, UN Conventions and National Governments to adopt.
For procedural purposes, the Tribunal is governed by the Statute of the Court of the Citizens of the World. We rely on certain of the Rules found in that Statute. Specifically, Rule 18 provides, in pertinent part, for this Court’s jurisdiction.
The Court shall possess global jurisdiction over . . . individuals in their personal or professional capacities, corporations, and any other legal or natural entities, regardless of their domicile, nationality, or governing authority. . . . The Court’s authority shall be limited to the analysis and evaluation of evidence serving as a means to render an impartial judgment on alleged . . . human rights violations . . . . The jurisdiction of the Court shall encompass . . . human rights violation under consideration in [this] case. . . . The jurisdiction of the Court shall encompass human rights provisions contained in the Rome Statute; International Human Rights Conventions; the General Principles of International Law; and Customary International Law.
Rule 30 provides, in pertinent part, as follows: “The standard of proof utilized in the Court proceedings considering allegations of human rights violations shall meet the threshold of ‘reasonable grounds to believe.’”
For substantive purposes, the Court has applied the law, conventions and guiding principles as follows: The Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights (“ICCPR”); the European Convention on Human Rights (“ECHR”); the UN Guiding Principles on Business and Human Rights (“UNGP”); and the Digital Services Act of the European Union (“DSA”).
The evidence, consisting primarily of witness testimony, reflected the failure of SM platforms to address: (1) the proliferation of disinformation and misinformation; (2) the amplification of hate speech and extremist content; (3) the invasion of privacy and data exploitation of users including children; (4) the rise in cyberbullying and other harmful content; and (5) the recent policies adopted by some of the SM platforms that have decided to label content as “sensitive” rather than taking down dangerous or hateful content.
2 FINDINGS OF FACT
1. Based on the testimony of several witnesses we find that the SM platforms have had a negative impact on democratic discourse, citizen well-being and safety based on the proliferation of misinformation, disinformation, and counter-factual conspiracy theories. In particular, we cite in support of this conclusion the testimony of Baroness Kidron, a member of the UK House of Lords and a founder of the UK 5 Rights Foundation. She cited, in particular, the impact of misinformation and disinformation as influencing the outcome of the Brexit vote in the UK and the response to the Covid-19 pandemic in many countries around the world. She recommends more aggressive regulation of SM companies because without regulation they pose a risk to democracy. She also urges that the SM companies be held accountable for the societal harms they cause. In particular, she concluded that the state has a duty, at the very least, to protect children.
2. We also cite, in support of this finding of fact, the testimony of journalist Carole Cadwalladr. In her testimony she explained how Cambridge Analytica used Facebook to support the campaign for the UK to leave the EU. She gave the example of ads targeted to one small town stating that “Turkey will join the EU and Muslims will flood your town.” She also cited pictures of ISIS terrorists committing murders. This type of speech and visual images, spreading fear and misinformation, caused a surprisingly strong vote in favor of leaving the EU. Ms. Cadwalladr was targeted with death threats because of her work as a journalist in exposing this conduct.
3. Another witness, Rewan al Haddad, the campaign director of Eko, an organization seeking to hold corporations accountable for their conduct, testified that SM companies repeatedly failed to block ads containing hate speech, often accompanied by digital images. Her organization submitted ten ads containing hate speech to both Meta and X. The ads called for immigrants to be gassed and put in concentration camps. Immigrants were called animals and pathogens. Half of those ads were nonetheless approved by Meta and all ten were approved by X. These ads were to be used in Germany and in India, in order to influence critical elections. These posts should have been blocked according to the standards of permissible speech adopted by these companies in their own terms of service.
4. Based on the testimony of many witnesses, we find that there are reasonable grounds to believe that the SM companies knowingly and intentionally allowed, and/or failed to remove, unlawful or inappropriate content including, but not limited to, hate speech, incitement to commit violence against people based on their race, ethnicity, gender, or national origin, sexually explicit material targeted to children, violent acts involving children, and extortion schemes based on invasions of personal privacy. We cite the testimony of several witnesses in support of this finding of fact. Alexandra Pascalidou, a journalist, testified that she was a victim of racism and sexism, and was repeatedly subjected to hate speech. She testified that hate speech is not free speech but rather is used as a weapon to silence certain voices.
5. With respect to hate speech we cite, in particular, the testimony given by Nay San Lwin, a Rohingya activist and co-founder of Free Rohingya Coalition, and Tun Khin, a Rohingya activist and President of the Burmese Rohingya Organisation of the UK. Mr. Lwin testified that Facebook was responsible for spreading misinformation that led to the persecution of the Rohingya, a Muslim minority in Myanmar. He decried the lack of regulations prohibiting harmful content on SM. Both witnesses testified that many posts identified all Rohingya as criminals. Ultimately 750,000 Rohingya were forced to flee to Bangladesh. As noted, both witnesses testified that hate speech is not free speech. Both witnesses strongly supported the need for moderators to monitor the content of speech to prevent hate speech and the violence such speech encourages. Further, the witnesses testified that despite actual notice, Facebook failed to take down the reported hate speech.
6. The Tribunal also received documentary evidence from the Human Rights Council’s report of an international fact-finding mission on Myanmar stating that “Facebook has been a useful instrument for those seeking to spread hate (against the Rohingya community), in a context where, for most users, Facebook is the Internet. Although improved in recent months, the response of Facebook has been slow and ineffective”. That evidence also demonstrated that Facebook acted contrary to its own policies by failing to remove hate speech. We quote from Facebook’s own policy against hateful content as follows:
We believe that people use their voice and connect more freely when they don't feel attacked on the basis of who they are. That is why we don't allow hateful conduct on Facebook, Instagram or Threads…
We define hateful conduct as direct attacks against people – rather than concepts or institutions – on the basis of what we call protected characteristics (PCs): race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease. Additionally, we consider age a protected characteristic when referenced along with another protected characteristic. We also protect refugees, migrants, immigrants and asylum seekers from the most severe attacks (Tier 1 below) . . .
We remove dehumanising speech, allegations of serious immorality or criminality, and slurs. We also remove harmful stereotypes, which we define as dehumanising comparisons that have historically been used to attack, intimidate or exclude specific groups, and that are often linked with offline violence. Finally, we remove serious insults, expressions of contempt or disgust, swearing and calls for exclusion or segregation when targeting people based on protected characteristics.
7. For this finding of fact, we also cite the testimony of Imran Ahmed, the CEO of the Center for Countering Digital Hate, located in the UK and the US. He pointed out that the algorithms used to spread hate speech and disinformation are hidden from view. He testified that transparency in the use of algorithms is essential. Transparency is important to ensure that content which violates a platform’s policies is not amplified by its own AI-based recommendations. He also testified that independent content moderation was essential to eliminating hate speech and dangerous conspiracy theories that often encourage and incite violent conduct. He testified that platforms should moderate their content through AI first, followed by human moderators, and then by a process for taking down offensive content after receiving and reviewing a report of such content on the platform. He concluded that the absence of effective fact-checking and content moderation causes an increase in hate speech on SM platforms. He recommended transparency of and accountability for the SM companies.
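Mr. Ahmed's layered approach can be illustrated with a brief sketch in Python. This is a hypothetical illustration only, assuming a simple keyword-based automated screen; the function names, variable names and example phrases are our own and do not describe any platform's actual system.

# A minimal, hypothetical sketch of the layered moderation process the witness described:
# automated screening first, then human review, then notice-and-takedown on user reports.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    post_id: str
    text: str

# Illustrative phrases only, drawn from the examples quoted in these findings.
FLAGGED_TERMS = ("must be killed", "vermin")

def ai_screen(post: Post) -> bool:
    """Stand-in for an AI classifier: flag posts containing obviously hateful phrases."""
    text = post.text.lower()
    return any(term in text for term in FLAGGED_TERMS)

def human_review(post: Post) -> bool:
    """Stand-in for trained human moderators who confirm or overturn the automated flag."""
    return True  # in practice, a moderation queue applying documented policy criteria

def moderate(post: Post, take_down: Callable[[Post], None]) -> None:
    """Step 1: automated screening. Step 2: human confirmation. Step 3: removal."""
    if ai_screen(post) and human_review(post):
        take_down(post)

def handle_user_report(post: Post, take_down: Callable[[Post], None]) -> None:
    """Notice-and-takedown: a user report goes to human review and, if confirmed, removal."""
    if human_review(post):
        take_down(post)

if __name__ == "__main__":
    remove = lambda p: print(f"removed post {p.post_id}")
    moderate(Post("p1", "All Rohingya must be killed"), take_down=remove)        # removed post p1
    handle_user_report(Post("p2", "post reported by a user"), take_down=remove)  # removed post p2

The point of the sketch is the ordering the witness emphasized: automated detection narrows the volume, human moderators make the final judgment, and user reports provide a separate route to prompt takedown.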
8. The Tribunal also received another Report prepared by the Center for Countering Digital Hate, titled “X content moderation failure.” The Report found that X continues to host 86% of the posts that have been reported to contain extreme hate speech despite its published policy to combat content “motivated by hatred, prejudice or intolerance”. A news report published in the Los Angeles Times on September 25, 2024, was also submitted to the Tribunal. The article is titled “Elon Musk’s X says it’s policing harmful content as scrutiny of the Platform grows.” The article noted that X’s approach “is to restrict the reach of potentially offensive posts rather than taking down the posts.” X’s 2024 transparency report, which was also received in evidence, showed that X chose to “label” content rather than removing or suspending accounts. The transparency report showed that of the 82 million complaints received by X, X only suspended one million accounts and removed or labelled 2.6 million posts.
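To put those figures in context, the enforcement rates implied by the transparency report work out roughly as follows (our own arithmetic, on the assumption that the 82 million figure counts individual complaints):

\[ \frac{1\ \text{million accounts suspended}}{82\ \text{million complaints}} \approx 1.2\%, \qquad \frac{2.6\ \text{million posts removed or labelled}}{82\ \text{million complaints}} \approx 3.2\% \]

In other words, on X's own reported numbers, the overwhelming majority of complaints resulted in neither suspension nor removal.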
We quote here the following clause from X’s Policy titled “Hateful conduct”
Hateful references:
We prohibit targeting individuals or groups with content that references forms of violence or violent events where a protected category was the primary target or victims, where the intent is to harass. This includes, but is not limited to media or text that refers to or depicts:
● genocides, (e.g., the Holocaust);
● lynchings.
What happens if you violate this policy?
The following is a list of potential enforcement options for content that violates this policy:
Making content less visible on X by:
Removing the Post from search results, in-product recommendations, trends, notifications, and home timelines.
Restricting the Post discoverability to the author’s profile. Downranking the Post in replies.
Restricting likes, replies, reposts, quotes, bookmarks, share, pin to profile, or engagement counts.
Excluding the Post from having ads adjacent to it. Excluding Posts and/or accounts in email or in-product recommendations.
Requiring Post removal.
For example, we may ask someone to remove the violating content and serve a period of time in read-only mode before they can Post again.
Suspending accounts that violate our Hateful Profile policy.
9. Having examined the documentary evidence and heard the witnesses, we conclude that reduced visibility/labelling of hate speech content rather than blocking of such posts is clearly unjustifiable and is violative of human rights. Dr. Julia Ebner, a professor at the University of Oxford who specializes in studying radicalism and extremism, has published articles and books on the spread of hate speech and other harmful content on the Internet. She has concluded that the SM companies provide a safe haven to extremists who reach a very wide audience and inspire violence based on the hate speech they espouse. She gave an example of QAnon, which has used platforms including Discord and Telegram to encourage antisemitism, misogyny, white supremacy, and jihadism. Like Mr. Ahmed, Dr. Ebner recommended transparency of algorithms, and accountability on the part of SM companies for failing to prevent the spread of violent and hateful speech and for failing to take down illegal content when reported.
10. We reject the defense that SM platforms were justified in allowing such materials to be available because of the concept of freedom of speech. Speech that incites or causes violence, and that tramples on human rights such as the right to privacy and the right to protect one’s reputation, violates the Rule of Law because it causes harm and is outside the bounds of ethical, legal, and humanitarian norms. It undermines the principles of respect, dignity, and peaceful coexistence that are essential to human rights.
11. We find that there are reasonable grounds to believe that the SM platforms are not proactive or reactive in ensuring that content posted on their platforms by users complies with their own policies, and the principles governing human rights. This failure is a direct result of SM platforms’ desire for increased profits, rather than protecting the rights of their users. Advertising is the way in which the SM platforms obtain their revenue and their profit. Digital advertising is worth billions of dollars. The evidence showed that the more the user accesses the platform, and the longer the user remains on the platform, the more the platform will benefit by users noticing the ads, looking at the contents of the ads, and potentially purchasing an advertised product. All these activities by users generate revenue for the platform.
12. For this finding of fact we rely, in particular, on the testimony of Professor Matthew Hindman of George Washington University, who testified that digital platforms make almost all of their money through advertising. He noted that Meta and X make 98% of their revenue through advertising, and Reddit makes 91% of its revenue through advertising. His research supports the conclusion that more money is made when users stay on the platform as long as possible. He reported that the platforms encourage addictive behavior by users, including children. Professor Hindman testified that the takedown rates for reported illegal content on these platforms were less than 5% per year. He further testified that the majority of AI-based recommendations on Facebook were to amplify “overwhelmingly toxic” content by superusers (those who most frequently post) and that the majority of such posts were from fake accounts.
13. For this finding of fact, we also rely on the testimony of Imran Ahmed, identified in paragraph 7 above. He testified in detail regarding the use of algorithms by the SM companies. He explained that algorithms are used by the SM companies to learn as much as possible about each user. In doing so, a formula is developed to increase profits by keeping users on the platform as much as possible. He testified that the formula is as follows: the time spent on the platform, times the number of users, times the frequency of ads, times the price of ads, determines the revenue.
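Rendered as an equation, that description of the revenue model reads approximately as follows (our own notation, introduced here only for clarity; the symbols are not drawn from the testimony):

\[ \text{Revenue} \;\approx\; U \times T \times f \times p \]

where \(U\) is the number of users, \(T\) is the average time each user spends on the platform, \(f\) is the frequency with which ads are shown per unit of time, and \(p\) is the average price paid per ad. On this model, every additional minute of engagement feeds directly into revenue, which is consistent with the addictive design features described elsewhere in these findings.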
14. The EU’s Digital Services Act (“DSA”), adopted in 2022, requires SM companies to undertake content moderation of material posted on their platforms. The term “content moderation” is cited twenty-two times in the DSA. Similarly, in India, the IT Act of 2000 requires SM companies to conduct due diligence, including reporting certain cybersecurity instances to a Computer Emergency Response Team within 6 hours and requiring that they take down content upon receiving notice of a court order or direction from a government agency or voluntarily based on a user filing a complaint to its grievance redressal officer. The Indian Protection of Children from Sexual Offenses Act (“POCSO”) of 2012 mandates that SM companies report criminal or inappropriate content such as Child Sexual Abuse Material (“CSAM”) to law enforcement agencies.
15. While neither the DSA nor Indian law governs all the world’s jurisdictions, we find that there are reasonable grounds to believe that the failure to review content, either prior to publication, or surely after receiving notice that certain content is dangerous and/or inappropriate, is grossly negligent and reckless. We also find that SM platforms have the technology to review content by using AI and/or by using highly trained human reviewers and that the cost of doing so would not be prohibitive. We also find that SM platforms undoubtedly know that some of the content displayed on their platforms is dangerous and/or inappropriate, often leading to adverse effects on the safety, mental health and well-being of their users. Nonetheless, the preponderance of the credible evidence shows that despite that knowledge, the SM platforms often permit this material to be published and/or decline to promptly take it down when informed by user(s) of the dangers posed by the continued access to this information.
16. We find that there are reasonable grounds to believe that the SM platforms not only violate the provisions of the laws, guiding principles, and conventions cited above, but also knowingly and intentionally violate their own Terms of Use and Privacy Policies, which explicitly prohibit the publication of hate speech, incitement to violence, threats of violence, extortion and other impermissible content. We cite, in particular, the testimony of Sophie Zhang, formerly employed by Facebook, who testified that the company did a very poor job of policing adherence to its own Terms of Use and Privacy Policies, particularly with respect to the proliferation of fake accounts spreading misinformation. Zhang also testified that the size of the content moderation team at Facebook was disproportionately lean as compared to the number of complaints received. She believed that Facebook’s decisions were arbitrary, often motivated by business considerations, with no action or delayed action on reports of illegal content. This is circumstantial evidence, at the very least, that the SM platforms knowingly and intentionally are permitting their platforms to be used by bad actors, who have committed cybercrimes and/or have interfered in elections by disseminating false or misleading information, and/or have been complicit in encouraging genocide or the involuntary transfer of population.
17. For this finding of fact, we also cite the testimony of Nina Jankowicz, the former Executive Director of the Department of Homeland Security’s Disinformation Governance Board. She testified, for example, that elections in Romania were cancelled due to election interference carried out on SM platforms. She noted the ubiquitous presence of fake ads and fake news on SM. She also testified that women who report fake news often become the target of psychological abuse and threats to their safety by bad actors. She testified that a lot of women just self-select out of public life because of the threats that they face online. She reported to SM companies many instances in which she was targeted through memes, but the platforms failed to remove the illegal content even though it violated their own policies. She recommended that legislation be enacted (and enforced) calling for transparency of and accountability for SM’s actions. She further recommended that SM platforms adopt robust fact-checking practices to detect fake news and prevent its amplification.
18. For this finding of fact, we also cite the testimony of Professor Arno Lodder, a professor of internet governance and regulation at a university in Amsterdam. He is an expert in data protection and privacy and has frequently published on these topics. He testified that SM platforms have user interface designs that manipulate users to share more personal data than is necessary or that they even know they are sharing. He recommended deployment of privacy protection measures and that SM companies adopt adequate complaint procedures requiring that inappropriate content be swiftly removed. He believes that this requires human intervention to respond responsibly and quickly to legitimate complaints.
19. With respect to hate speech, we provide only a few examples in support of this finding. Witnesses such as Nay San Lwin and Tun Khin testified that SM was used in Myanmar to encourage sexual violence and assaults on the minority Muslim Rohingya population, causing hundreds of thousands to flee the country. Posts such as “all Rohingya must be killed,” “Rohingya are vermin that must be eliminated,” and “Rohingya must be driven out of the country” were allowed to appear on a SM platform.
20. Posts were published on SM platforms suggesting that children were ugly and were hated by everyone. Other posts led children to engage in self-harm or encouraged children to engage in sexually explicit acts; having done so, the children were then threatened by sextortion perpetrators that, if they did not pay money, videos or pictures of those acts would be widely disseminated, and they were told that their life was over anyway and their only choice was to commit suicide. Even after a child committed suicide, family members continued to be threatened with public exposure of the sexually explicit images.
21. We reject the defense contention that the SM platforms are not responsible for the behavior of third parties who may be committing cybercrimes or engaging in criminal activities such as extortion, cyberbullying or cyberstalking. It is true that the SM platforms are not committing those crimes directly. But by permitting these third parties to publish material that is criminal or will lead to criminal conduct, by failing to remove such material despite actual notice, and by failing to expeditiously suspend and terminate accounts used to perpetrate cybercrimes, the SM platforms are facilitating those crimes.
22. These types of SM content clearly violate human rights and freedoms, such as the right to privacy, involve the exploitation of users’ data, and cause severe consequences to the victims of such conduct including self-harm, psychological trauma, and death. By failing to act to prevent online criminal activity, the SM platforms violate the human rights of their users as specified in the law, conventions and guidance cited earlier.
23. In support of this finding of fact, we cite the testimony of Noelle Martin, a victim of violations of her right to privacy. She described what she termed “image-based sexual abuse.” This abuse started when she was only seventeen years old, with the use of deepfakes of her image. These are created by bad actors who superimpose her face onto pornographic images that are then widely distributed on SM. She reported this conduct to the websites on which the images were posted, but none did anything to stop their spread. She testified that this conduct disproportionately affects women and girls, who are ruthlessly targeted for sextortion. Such images are supposedly banned by the platforms’ terms of service, but the bans are not enforced. As a result, she suffered both mental and physical harm. As a legal matter, she testified that the companies violated Article 17 of the ICCPR (see conclusions of law below); Article 12 of the European Declaration of Human Rights, and Article 27 of the Universal Declaration of Human Rights.
24. Another witness, Hana Mossman-Moore, was a victim of cyberstalking, which she testified was a result of exposure on the Internet. A friend of hers was stalked by a person who eventually broke into her flat and killed her. Ms. Mossman-Moore testified that she began receiving threatening messages on her SM accounts, including WhatsApp, Instagram and Facebook. Some of the posts referred to her as “Hooker Hana” and encouraged users to rape her. Intimate pictures of her were widely distributed. Eventually she was forced to flee the country. She testified that the SM companies did little or nothing to prevent or stop this misuse of their platforms.
25. We find that SM platforms have made little or no effort to protect children from harmful content. Moreover, we find that the SM platforms are fully aware that children are endangered due to the absence of parental control and parental access to their children’s SM accounts. We also find that the SM platforms have been complicit in knowingly causing children, through their algorithmic recommendations, to become addicted to using SM.
26. In support of this finding of fact, we cite the testimony of Leanda Barrington-Leach, the Executive Director of an NGO titled “5 Rights Foundation.” She testified that children are an important market segment for SM companies. She testified that children are defined as under eighteen, although children under thirteen are often targeted. SM companies, such as TikTok, intentionally design their sites to maximize the time that children spend on their platforms. Examples include push notifications, continuous scrolling, encouraging connecting with friends and in-app purchases. She testified that many children use SM up to five hours per day and often wake up at night to check their feed. She testified that the SM companies do not take appropriate measures to protect children. She noted that the absence of age restrictions and the continued posting of harmful content, coupled with addictive AI-based usage of SM, has led to negative mental health effects, like the loss of analytical skills, memory formation, contextual thinking, conversational depth and empathy, and increased anxiety among children. She provided the example of Instagram failing to shut down accounts flagged multiple times by the UK secret police and NGOs for sharing AI-generated CSAM. She recommended increased efforts on the part of SM companies to undertake content moderation and parental control on children’s use of SM. She further recommended that children’s right to privacy must be respected and that their personal data should not be collected for commercial purposes.
27. One witness, who used the pseudonym Jane Doe, testified that intimate images she had placed in a supposedly safe and private place were stolen from Snapchat, a SM platform, despite using a privacy setting with a “my eyes only” feature. The images were widely disseminated and published on other SM after the data theft causing her great harm such that she was forced to flee her country. Snapchat failed to take down the illegal content that invaded her privacy despite repeated written requests to do so. According to the witness, the SM companies have failed to implement robust systems to detect and take down CSAM, despite the fact that technical means can be deployed to do just that.
28. Three parents of children who committed suicide testified that SM platforms were responsible for these tragic deaths. Nicola Harteveld testified that her daughter committed suicide as a result of online bullying on Snapchat. When her daughter was only twelve years old, she was subjected to bullying, leading her to believe that she was worthless. She was then subjected to harmful content that encouraged her to commit suicide. She took her own life at the age of fourteen. Her testimony demonstrated that SM platforms have failed to protect vulnerable children, despite clear knowledge of the risk of psychological harm and of suicide resulting from cyberbullying. Nicola Harteveld recommended stronger accountability, content moderation and algorithmic transparency for SM platforms, and emphasized the importance of cyber education.
29. Will Claxton testified that his son, Finn, committed suicide at the age of 16. Finn had been a high-achieving model student until he began using drugs that he purchased through the Internet. He also shared intimate sexual images on a SM website. Eventually he was subjected to “sextortion” through that SM website. The father testified that his son was “groomed” through online bad actors who were responsible for selling him drugs and subjecting him to “sextortion.” Although users of the website were required to be at least thirteen years old, his son was able to enter a false age when he began using Discord, a SM website. Mr. Claxton, an engineer, believes that at least one SM website is a “playground for predators.” He recommended that parents should have the ability to manage their children’s accounts. He also recommended that SM platforms verify the identity of their users and adopt robust age verification mechanisms. Finally, he recommended the use of AI/machine learning technologies to monitor what is happening on SM platforms, the need to remove unlawful content, and that alerts should be sent to parents of any inappropriate content viewed or posted by their children.
30. A third parent, Brandon Guffey, is a state legislator in South Carolina (USA). His son Gavin committed suicide at the age of seventeen. Gavin was active on SM, where he gained access to drugs. He was also “groomed” to share intimate sexual images. He became a victim of online sexual exploitation. A woman began to blackmail Gavin using a SM platform. Gavin managed to transfer $15 via Cash App but then decided the only way out of the nightmare was to kill himself. After his death, Mr. Guffey received direct messages from the extortionist, as did his younger son (now sixteen) and his fourteen-year-old cousin. Mr. Guffey was devastated but stayed in politics hoping to make a difference. Due to his efforts, his state passed a law criminalizing digital sextortion. As a result of this tragedy, Meta implemented end-to-end encryption, which means that no evidence of the extortion communications would be available. It also banned non-US actors from messaging domestic children under the age of sixteen. But there is still no federal legislation protecting children or requiring accountability by SM companies. Mr. Guffey recommends parental control over the devices used by their children. He also recommends that SM platforms be required to “red flag” bad actor activity and notify parents of such activity. He also strongly recommends the need for national legislation mandating these actions.
31. Another witness, Christopher Hadnagy, an expert in the field of social engineering, testified that children are being targeted on SM through sextortion, cyberstalking and other harmful content. Harmful challenges such as the Blue Whale challenge entice children to engage in self-harm. These challenges have even encouraged suicide in extreme cases.
3 CONCLUSIONS OF LAW
We conclude that the SM platforms, in general, have violated the following conventions, laws, and guidance.
1 International Human Rights Law
Universal Declaration of Human Rights
Article 2
“Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.
Furthermore, no distinction shall be made on the basis of the political, jurisdictional or international status of the country or territory to which a person belongs, whether it be independent, trust, non-self-governing or under any other limitation of sovereignty.”
Article 3
“Everyone has the right to life, liberty and the security of person.”
Article 12
“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”
2 Prohibition of Incitement to Hatred, Discrimination, or Violence.
ICCPR:
Article 20(2):
“1. Any propaganda for war shall be prohibited by law.
2. Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.”
ECHR:
Article 8:
Right to respect for private and family life: “Everyone has the right to respect for his private and family life, his home and his correspondence.”
Article 10(2): Freedom of expression:
“The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.”
Article 14: Prohibition of discrimination.
“The enjoyment of the rights and freedoms set forth in this Convention shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status.”
UN Convention on the Rights of the Child (UNCRC)
Article 16:
“1. No child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation.
2. The child has the right to the protection of the law against such interference or attacks.”
3 International Convention on the Elimination of All Forms of Racial Discrimination
Article 2:
(a) Each State Party undertakes to engage in no act or practice of racial discrimination against persons, groups of persons or institutions and to ensure that all public authorities and public institutions, national and local, shall act in conformity with this obligation;
(b) Each State Party undertakes not to sponsor, defend or support racial discrimination by any persons or organizations;
(c) Each State Party shall take effective measures to review governmental, national and local policies, and to amend, rescind or nullify any laws and regulations which have the effect of creating or perpetuating racial discrimination wherever it exists;
(d) Each State Party shall prohibit and bring to an end, by all appropriate means, including legislation as required by circumstances, racial discrimination by any persons, group or organization;
(e) Each State Party undertakes to encourage, where appropriate, integrationist multiracial organizations and movements and other means of eliminating barriers between races, and to discourage anything which tends to strengthen racial division.
4 The Convention on the Elimination of All Forms of Discrimination Against Women
Article 2:
“State Parties condemn discrimination against women in all its forms, agree to pursue by all appropriate means and without delay a policy of eliminating discrimination against women and, to this end, undertake: (a) To embody the principle of the equality of men and women in their national constitutions or other appropriate legislation if not yet incorporated therein and to ensure, through law and other appropriate means, the practical realization of this principle…”
5 UN Guiding Principles on Business and Human Rights
Article 11
“Business enterprises should respect human rights. This means that they should avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved.”
Article 17:
“In order to identify, prevent, mitigate and account for how they address their adverse human rights impacts, business enterprises should carry out human rights due diligence. The process should include assessing actual and potential human rights impacts, integrating and acting upon the findings, tracking responses, and communicating how impacts are addressed. Human rights due diligence:
(a) Should cover adverse human rights impacts that the business enterprise may cause or contribute to through its own activities, or which may be directly linked to its operations, products or services by its business relationships;
(b) Will vary in complexity with the size of the business enterprise, the risk of severe human rights impacts, and the nature and context of its operations;
(c) Should be ongoing, recognizing that the human rights risks may change over time as the business enterprise’s operations and operating context evolve.”
OECD Guidelines for Multinational Enterprises on Responsible Business Conduct
Article IIA(2):
“Enterprises should take fully into account established policies in the countries in which they operate, and consider the views of other stakeholders. In this regard:
A. Enterprises should:
2. Respect the internationally recognised human rights of those affected by their activities…”
5 Failure to Address Privacy Invasion, Cyberbullying and Harmful Content Including Promotion of Harmful Challenges, Revenge Porn and Self-Harm
ECHR:
Article 8: Right to respect for private and family life
"Everyone has the right to respect for his private and family life, his home and his correspondence. "
ICCPR
Article 6(1):
"Every human being has the inherent right to life. This right shall be protected by law. No one shall be arbitrarily deprived of his life."
Article 7:
"No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment….”
Article 17(1):
"No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation."
Article 19(2):
"Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers . . . . .
The conditions and requirements laid down in this Article shall be without prejudice to national civil and criminal procedural law.”
6 UN Resolution 53/144: Declaration on the Right and Responsibility of Individuals, Groups and Organs of Society to Promote and Protect Universally Recognized Human Rights and Fundamental Freedoms
Article 1:
“Everyone has the right, individually and in association with others, to promote and to strive for the protection and realization of human rights and fundamental freedoms at the national and international levels.”
Article 2:
“1. Each State has a prime responsibility and duty to protect, promote and implement all human rights and fundamental freedoms, inter alia, by adopting such steps as may be necessary to create all conditions necessary in the social, economic, political and other fields, as well as the legal guarantees required to ensure that all persons under its jurisdiction, individually and in association with others, are able to enjoy all those rights and freedoms in practice.
2. Each State shall adopt such legislative, administrative and other steps as may be necessary to ensure that the rights and freedoms referred to in the present Declaration are effectively guaranteed.”
7 Caselaw
1. The Tribunal does not address potential criminal liability. We focus only on the potential civil liability of SM companies for violations of human rights, for failure to conduct due diligence and content moderation, and for failure to take down illegal content despite receipt of actual notice.
2. In Delfi AS v. Estonia (Application No. 64569/09), decided by the European Court of Human Rights (ECtHR) in 2015, the Court addressed intermediary liability for the failure to adopt measures to detect illegal content and for the failure to take down illegal content upon receipt of actual notice. Delfi, a popular Estonian news website, was held liable for defamatory comments posted by users on its platform. The ECtHR held that Estonia did not violate Article 10 (freedom of expression) of the European Convention on Human Rights by finding Delfi liable for the defamatory comments. The Court ruled that Delfi had insufficient safeguards to prevent harmful comments, despite having an automatic filter and a notice-and-takedown protocol. The Court applied a three-part test to determine whether Delfi’s right to free expression had been violated. First, the ECtHR found that Estonia had interfered with Delfi’s right to free expression, but that such interference was “prescribed by law” by virtue of the second paragraph of Article 10 of the ECHR when it imposed civil penalties for the defamatory comments. Second, the Court held that the award of damages was in accordance with applicable law, as Delfi had violated Estonia’s Civil Code Act and Obligations Act. Third, the Court noted that imposing civil penalties on Delfi pursued the legitimate aim of “protecting the reputation and rights of others.” The Court then adopted a balancing test to determine whether Estonia’s interference with Delfi’s rights was necessary in a democratic society; it concluded that Estonia’s actions were fully justified.
3. The Court in Delfi established criteria to assess the liability of large internet portals for failing to remove hate speech. These criteria were used to determine the balance between freedom of expression and the right to the protection of reputation. The criteria adopted by the Court in Delfi were aptly summarised in Magyar Tartalomszolgaltatok Egyesulete and INDEX.HU ZRT v. Hungary (Application No. 22947/13) as: “the context of the comments, the measures applied by the applicant company in order to prevent or remove defamatory comments, the liability of the actual authors of the comments as an alternative to the intermediary’s liability, and the consequences of the domestic proceedings for the applicant company.”
8 Publisher Liability of SM Platforms
1. In Weaver v. Corcoran, 2015 BCSC 165, the Supreme Court of the Canadian province of British Columbia adopted what it called the passive instrument test for publication. Weaver sued when a reader posted comments about him on a forum hosted by the National Post. After reviewing cases from other courts, the Canadian court concluded that the jurisprudence establishes that “some awareness of the nature of the reader post is necessary to meet the test of publication.”
Until awareness occurs, whether by internal review or specific complaints that are brought to the attention of the National Post or its columnist, the National Post can be considered to be in a passive instrumental role in the dissemination of the reader postings. It has taken no deliberate action amounting to approval or adoption of the contents of the reader posts. Once the offensive comments were brought to the attention of the defendants, however, if immediate action is not taken to deal with these comments, the defendants would be considered publishers as at that date.
2. In Shreya Singhal v. UOI, AIR 2015 SC 1523, the Supreme Court of India held that the exemption from liability under Section 79 of the Information Technology Act is applicable to SM companies. However, the exemption will not be available if the company fails to take down material after receiving actual knowledge through a court order or government communication.
3. In Byrne v. Deane, [1937] 1 KB 818, the English Court of Appeal established an important principle regarding publication liability, holding that a party in control of a space where defamatory content is posted can be held responsible if it knowingly allows that content to remain in that space. Similarly, in Google v. Vishaka Industries, AIR 2020 Supreme Court 350, the Supreme Court of India adopted a test for determining publisher liability for SM platforms:
If defamatory matter is published, as to who published it is a question of fact. As already noted, publication involves bringing defamatory matter to the knowledge of a person or persons other than the one who is defamed. We would approve of the principles laid down [by Greene L.J. in Byrne (supra)] that ‘in some circumstances, a person by refraining from removing or obliterating the defamatory matter, is not committing any publication at all. In other circumstances, he may be doing so. The test, it appears to me is this: having regard to all the facts of the case, is the proper inference that by not removing the defamatory matter, the Defendant really made himself responsible for its continued presence in the place where it has been put? Whether there is publication, indeed involves asking the question also as noted by the learned Judge, as to whether there was power and the right to remove any such matter. If despite such power, and also, ability to remove the matter, if the person does not respond, it would amount to publication. The said principle, in our view, would hold good even to determine whether there is publication Under Section 499 of the Indian Penal Code (defamatory content). The further requirement, no doubt, is indispensable, i.e., it must contain imputations with the intention to harm or with knowledge or having reasons to believe that it will harm the reputation of the person concerned. In this case, the substantial complaint of the complainant appears to be based on the refusal by the Appellant to remove the matter after being notified. Publication would be the result even in the context of a medium like the internet by the intermediary if it defies a court order and refuses to take down the matter….
9 Data Privacy Violations by SM Platforms and Failure to Detect and Take Down Illegal Content
1. Under prevailing law in jurisdictions such as the EU, SM platforms face civil liability for failing to protect data privacy and for failing to address illegal or harmful content after receiving actual notice. The DSA (referenced above) obligates platforms to act “without undue delay” on illegal content orders (Article 9) and establishes notice-and-action mechanisms, redress systems, and transparency requirements (Articles 16–20). These standards align with privacy protections under Article 17 of the ICCPR and Article 8 of the ECHR, which prohibit arbitrary interference with privacy. International regulatory regimes have increasingly imposed civil liability for data privacy breaches. The EU’s GDPR has led to landmark penalties, including a €1.2 billion fine imposed on 12 May 2023 against Meta (Meta Platforms Ireland Limited, previously known as Facebook Ireland Limited) for unlawful transatlantic data transfers.
2. In Anderson v. TikTok, Inc. and ByteDance, Inc. (3d Cir. 2024), a U.S. appellate court held that TikTok could face distributor liability for promoting, through its algorithm, a dangerous “blackout challenge” that resulted in the death of a child. The United States District Court for the Eastern District of Pennsylvania initially dismissed the case, ruling that TikTok was immune under Section 230 of the Communications Decency Act (“CDA”), which protects platforms from publisher liability for third-party content. On appeal, the United States Court of Appeals for the Third Circuit reversed in part, vacated in part, and remanded the case, ruling that TikTok’s algorithmic recommendations constitute first-party speech, meaning Section 230 does not shield TikTok from distributor liability for its own recommendations. This marked a doctrinal shift: algorithmic curation was treated as first-party content rather than mere facilitation of third-party content.
10 Distributor Liability of SM Platforms
1. Under Article 45 of the DSA, SM platforms may adhere to voluntary codes of conduct to address online issues such as illegal content and systemic risks. Although Article 8 of the DSA provides that there is no general obligation to monitor content, the DSA does impose proactive obligations on Very Large Online Platforms (VLOPs) to mitigate systemic risks, including the dissemination of illegal content, gender-based violence, risks to the protection of minors, and violations of users’ fundamental rights such as privacy (Article 34), to promptly remove illegal content (Articles 9 and 16), and to undergo periodic audits (Article 37). Platforms must assess their algorithmic systems, content moderation, advertising selection and data practices to determine whether there has been intentional manipulation, including inauthentic use or automated exploitation.
2. Enforcement under the DSA has already begun: in December 2023, formal proceedings were opened against X (formerly Twitter) for suspected breaches of obligations under Articles 34 and 35 of the DSA, including its alleged failure to effectively mitigate systemic risks and curb the dissemination of illegal disinformation in the context of armed conflict. The case against X highlights serious concerns about content moderation failures and gaps in systemic risk governance.
3. Based on the testimony of the witnesses and the documentary evidence received by the Tribunal, as well as the evolving jurisprudence and caselaw on publisher and distributor liability of SM platforms, we conclude that SM platforms, UN Conventions, and national governments should implement the following recommendations in order to ensure that the human rights of users are respected and preserved.
THE MEMBERS OF THIS TRIBUNAL HEREBY ATTEST THAT THIS IS THEIR FINAL JUDGMENT IN THE MATTER OF IN RE SOCIAL MEDIA