Why do social media companies remove content, and should they do more?

A couple of weeks ago, I attended a unique event at Santa Clara University School of Law in California, together with some of the leading U.S. internet law attorneys and academics. For the first time in the history of the internet, tech companies’ legal leaders and policy experts gathered for a day of panels on content moderation and removal.

Among the attendees were representatives from Facebook, Google, Reddit, Yelp, Glassdoor, Automattic, Pinterest and Wikipedia.


Discussions covered a variety of aspects of content moderation, including artificial intelligence versus human moderation, transparency, and content removal policies and appeals.

The event was organised and very well presented by one of the world’s leading experts on internet law, Professor Eric Goldman, from Santa Clara University School of Law.

The one internet giant everyone wanted to hear from, Twitter, decided not to take an active part in the discussion panels. Twitter’s choice was unfortunate considering its content moderation policies have recently made stormy headlines, when it deactivated the accounts of individuals associated with far-right activism. However, in-house attorneys for the company were among the 200-strong audience and were happy to discuss policies during the evening’s networking reception, which concluded this highly energetic event.

The representatives of the internet companies who participated in the discussion panels were surprisingly candid about their content moderation and removal policies, and some even made frank admissions about their reluctance to moderate content and the poor allocation of resources to it. It must be said that, despite this being a wholly U.S. event (with the exception of myself and perhaps one or two other European lawyers), the chilling sounds of potential European regulation of the internet, coupled with the threat of hefty financial penalties for non-compliance, were heard very clearly from across the ocean, even though the word “Europe” was hardly mentioned.

Does the EU influence content removal decisions?

As the day progressed, it became clearer to me that the unpleasant choice between European-driven regulation and the need to self-moderate their own online platforms was slowly being resolved in favour of the latter. But what the U.S.-based internet companies don’t yet seem to grasp is that Europe cares less about content censorship and more about the quality of life of internet users. There is a lot of suspicion amongst internet companies about what Europe is up to and how its tendency to regulate could impact free speech.

By the end of the conference, I had become convinced that it was only because of European interventionist policies that internet companies were starting to invest money and resources in the creation and enforcement of content moderation and removal policies.

The next step, I believe, which is also likely to be driven by Europe, is an acknowledgement by the companies of the need for investment in customer care and user safety, which would represent a shift in resources away from the internet companies towards their users.

This is likely to be something similar to the European idea of “reallocation of resources”, which aims to ensure that internet users who create content and revenue for internet companies get some of that revenue reinvested back in them by way of improved customer care and user safety.

Contrary to common belief among some internet companies in the U.S., Europe doesn’t wish to regulate internet companies because it is less liberal; the drive towards regulation is fuelled by concerns that the “wild west” online culture adversely affects the safety of ordinary internet users. The European idea of “internet censorship” (at least as it is perceived by many in the U.S.) is in fact less about European governments disliking any particular online content and more about their belief that the role of governments is not only to govern but also to protect and enhance the quality of life of their citizens. This is essentially the idea upon which the European Union is centred. So when European governments consider that their citizens are being treated unfairly or even neglected by internet companies, they feel they have an obligation to do something about it.

To understand the extent of the neglect of users, one only needs to look at some of the staggering figures candidly provided by the internet companies represented on the panels.


For example, Dave Watkis of Automattic, which operates and hosts the world’s most popular free blogging platform, WordPress, candidly admitted that Automattic allocates only 8-10 people to review nearly 60 million websites. The company’s safety team has only 4-6 people, whose job is to look at millions of daily posts. Mr Watkis accepted that content moderation is an area to which Automattic traditionally hasn’t allocated resources and that this will need to change in the future.

Another example is Google. Nora Puckett, Google Senior Litigation Counsel, told the conference that Google will soon have nearly 10,000 people worldwide working on content moderation and the removal of internet posts and accounts. But whilst this sounds like a big number, when one considers the size of the company, the unprecedented number of customers it has and the volume of data it processes, this allocation of resources is clearly insufficient. Interestingly, this number represents an increase of nearly 100% from 2016, an increase that only came about in response to pressure from Europe.

Facebook’s position is not much better. Neil Potts, Public Policy Manager at Facebook, told the conference that Facebook has approximately 7,500 employees worldwide engaged in policy creation and content moderation. Again, whilst this might sound like a lot, when one considers the number of users Facebook has (approximately 2.2 billion), the dangerous nature of some of the published content and the revenue that Facebook generates directly from users’ published content, one might be forgiven for not being that impressed with this allocation of resources to content moderation and user safety.

It is worth pointing out that Facebook does not regard this small army of 7,500 people as “customer support” but rather as content moderators. The focus of the moderators’ efforts is on creating and implementing content policy guidance rather than supporting Facebook users or improving their safety or well-being. In this respect, Facebook does not have “customer support” in the traditional sense. Any interaction Facebook employees might have with Facebook users is one-sided and entirely at Facebook’s discretion. Some Facebook users view this approach as unfair and even degrading. You cannot telephone, email or have a live chat with any of Facebook’s employees. Facebook has done all it can to remove human interaction with its users; the impression given is that users are left to fend for themselves. To this extent, Facebook is faceless, presumably because one of the company’s core beliefs is that content is king, whilst those who create the content are treated as no more than content-generating serial numbers.

Perhaps the most revealing figure was given by Adelin Cai, who leads the Policy Team at Pinterest. She told the conference that the number of employees whose job is to moderate content at Pinterest is no greater than 11½. This team of content moderators is responsible for some 200 million users, who post in more than 30 languages. The same small group of employees is also responsible for writing and implementing policies on nudity, graphic violence/gore, hate, self-harm, impersonation, regulated goods and so on.

The majority of the other companies represented at the Content Moderation Conference (perhaps most notably Reddit) admitted to relying on volunteers or “community members” as free labour to moderate content and/or “police” the application of content policies. Using “volunteers” in this way saves internet companies the expense of investing resources in improving users’ safety and well-being, despite the considerable revenue generated through users’ published content. This practice of using unpaid workers to carry out the considerable task of policing online platforms was criticised by some attending the conference and is unlikely to satisfy European interventionist governments.


Despite the above, I believe that the majority of the companies represented at the Content Moderation Conference genuinely feel that they do more than enough to moderate their online platforms, remove dangerous content and protect their users. Some, though, came across as desperately wanting to demonstrate to Europe that they can be trusted to govern their own online platforms.

But is their effort likely to be enough? It remains to be seen.
