Should social media platforms be made accountable for hate speech?

Video: “Social media platforms to be accountable for amplifying hate speech” – Yair Cohen Solicitor, on Vimeo.

8chan is where the suspect in the El Paso shooting is believed to have posted a white nationalist manifesto. The platform has since been taken offline, but by then the post had already been copied to numerous mainstream social media sites.

Since its shutdown, many 8chan users have moved to Discord, a platform popular with gaming communities. There is a strong connection between some parts of the gaming community and extremists, and social media platforms need to do a better job of policing hate speech, since they amplify it by surfacing that content to their users.

Should sites like 8chan be allowed to exist?

Social media platforms are designed to connect people and amplify voices, but what happens when those voices spread hate and division? This question became tragically relevant in the case of 8chan, a site notorious for hosting violent and extremist content. Before carrying out his attack, the El Paso shooter allegedly posted a white nationalist manifesto on 8chan. Even though the post was quickly removed, its toxic ideas had already gone viral, reaching broader audiences on mainstream platforms like Facebook and Twitter. The incident revealed how easily hate speech can use social media as a megaphone, echoing far beyond its original source.

Does shutting down platforms actually stop hate?

After the El Paso shooting, 8chan was forced offline, but did this eliminate the problem? Unfortunately, the answer is no. Former 8chan users quickly migrated to other platforms like Discord, which is widely used by gaming communities. Instead of addressing the root causes of hate speech, shutting down one site often leads to its spread elsewhere. Yair Cohen, an expert in internet law, explains that these migrations create new challenges for law enforcement and moderators, as extremists adapt to exploit new digital spaces. Discord, for instance, was intended to foster collaboration and communication among gamers, but its private channels have become hotbeds for harmful conversations.

Why is the law protecting hate-filled platforms?

It might seem shocking, but legal loopholes in countries like the United States make it difficult to hold platforms accountable for hate speech. Under Section 230 of the Communications Decency Act, social media companies and forum operators are protected from liability for content posted by users. This law, originally intended to encourage free speech, now enables sites like 8chan and Discord to evade accountability for hosting harmful content. Reforming these laws could be a game-changer in combating online hate, but the debate is contentious. Critics worry that changes could threaten free speech, making legal reform a complex and polarising issue.

Is government intervention needed to stop the spread of hate?

The U.S. government has recognised the dangers posed by extremist platforms and taken steps to curb their influence. One notable approach has been pressuring search engines like Google and Bing to delist sites such as 8chan. This strategy is rooted in the belief that limiting a platform’s visibility will reduce its ability to attract new users and spread harmful ideologies. On the surface, this seems like a logical solution—after all, if people can’t find these sites easily, they’re less likely to engage with the content.

But is this intervention enough to stop the spread of hate? Unfortunately, the reality is far more complicated. Determined individuals often find ways to bypass these restrictions. They move to encrypted platforms, private chat groups, or dark web forums, where their activities are harder to track. In many cases, extremists use mainstream social media platforms to distribute their content indirectly. For example, they may upload manifestos or propaganda as images, which can evade detection by traditional moderation tools. These workarounds make it clear that delisting and bans, while helpful, cannot address the root causes of extremist behaviour.

Government intervention also faces technical and legal challenges. Delisting harmful sites from search results may reduce their accessibility, but it does not eliminate the content or the communities that produce it. Moreover, efforts to regulate or block extremist platforms must navigate the fine line between protecting public safety and upholding free speech rights. Any intervention perceived as overly restrictive could face significant pushback, not just from extremist groups but also from civil liberties advocates.

While government action is undoubtedly an important part of the solution, it is not a silver bullet. Addressing the spread of hate speech requires a multifaceted approach that includes technological advancements, community-driven solutions, and robust partnerships between governments and tech companies. Social media platforms, for instance, must play a more proactive role in moderating content and shutting down harmful discussions before they can gain traction. Education is another vital component—teaching people, especially young users, to critically evaluate online content can help inoculate them against extremist ideologies.

In the end, government intervention is necessary but insufficient on its own. It must be complemented by a broader societal effort to tackle the conditions that allow hate speech to thrive. Only by addressing these deeper issues can we hope to make meaningful progress in combating the spread of extremism online.

Why do banned platforms tend to attract more users?

It’s ironic, but banning extremist platforms often makes them more appealing to certain audiences. When sites like 8chan are removed from mainstream spaces, they gain a rebellious allure, especially among younger users. Yair Cohen warns that these banned platforms often become echo chambers where users glorify violence and encourage each other to act on extremist ideologies. Far from eliminating the problem, delisting or banning these sites can create an underground culture that is even harder to monitor and control.

Are gaming platforms unknowingly breeding extremism?

Gaming platforms are a hub for young, impressionable users, but are they also becoming gateways to hate? Cohen points out that extremist groups have infiltrated certain gaming communities, using them as recruitment tools. On platforms like Discord, private chat rooms and forums allow extremist propaganda to spread under the radar. These spaces often normalise harmful ideologies, framing them as harmless jokes or rebellious behaviour. By the time users realise the seriousness of what they’re engaging with, it can be too late.

Should social media giants take more responsibility for extremist posts?

Facebook, Twitter, and Discord have policies against hate speech, but are they doing enough to enforce them? Yair Cohen argues that these companies must take greater responsibility for the harmful content on their platforms. While they often rely on algorithms to flag offensive material, enforcement is inconsistent, and many harmful posts slip through the cracks. Stricter monitoring, combined with penalties for failing to act, could significantly reduce the spread of hate speech. But achieving this requires a shift in priorities—from engagement metrics to user safety.

If social media companies won’t voluntarily tackle hate speech, can legal reforms force them to? In the United States, there is growing momentum for laws that would hold platforms accountable for certain types of user-generated content. These changes could compel companies to invest in better moderation tools and take a firmer stance against extremism. Critics, however, worry that such laws could stifle free expression. Cohen believes it is possible to strike a balance between accountability and free speech, but it requires careful legislation and collaboration between governments, tech companies, and civil society.

Is free speech a licence for hate?

Free speech is a cornerstone of democratic societies, but does it have limits? When platforms use free speech as a shield for hosting hate speech and extremist ideologies, the consequences can be devastating. Social media companies must recognise that with great power comes great responsibility: these platforms are not just passive conduits of information but active participants in shaping public discourse. It’s time for them to step up and take accountability for their role in amplifying hate.
