Why digital platforms could lose their right to exclude users
Never before in human history have those in charge of carrying public information to billions of people across the planet been left unregulated. The closest examples of large-scale information carriers are telephone companies and TV news stations, both of which operate under licences that prohibit discrimination against, or selection of, their users, certainly not selection based on race or political affiliation.
The case for reforming social media platforms’ right to exclude users
Today, digital platforms play an important public role in delivering information to the world at large. During recent pandemics, wars and socio-political crises, Twitter, Google and Facebook have been instrumental in communicating important messages to the public, as well as disseminating official messages on behalf of governments.
Yet, despite the important public role they play, digital platforms remain largely unregulated. This means they can exclude certain users from receiving information and from participating in public conversations, at their sole discretion.
They can also censor the information that users receive. Their algorithms can be tweaked to force-feed internet users information of the platform's choosing, rather than the information the user was seeking in the first place.
It has recently become common practice for some digital companies to place notices on their platforms that are dismissive of certain people, or of the information a user was seeking, whilst at the same time “kindly” suggesting a “better” or “healthier” choice of information for the user to consume.
Because of their status as private companies, digital platforms in the US enjoy broad protection from liability for third-party content published on their platforms under Section 230 of the Communications Decency Act.
The debate on whether digital platforms should continue to benefit from Section 230 is often centred on their role as “publishers”.
With this in mind, the recent decision of the US Supreme Court in Biden v Knight First Amendment Institute [pdf] provides a refreshing outlook on the issue, following the important observation by Thomas J that some digital platforms might need to be treated as “common carriers”, which could mean that they would lose their Section 230 immunity and their sole discretion to withhold service from individual users.
Biden v Knight First Amendment Institute: a summary of the facts of the case
The case was first brought on behalf of Twitter users who objected to the right of President Donald Trump to block them. The core complaint was that the President’s Twitter account was an “official” account and that the comments section on his Twitter account was a “constitutionally protected public forum”.
The Supreme Court vacated the decision of the lower court, which had held that the President acted unlawfully.
The case raises interesting points. If the lower court was correct in ruling that the President of the United States’ Twitter account was an “official” account, or a “protected public forum”, and if Twitter, being a private enterprise, has the power to suspend or even remove the President and his Twitter account from the network, can it still be said that the President had more than limited control over the account? If the President’s account is “official”, does this mean that Twitter carries an official, or public, role? If so, can a platform that carries an official or public role refuse service to users (as Twitter did to President Trump)?
Is Donald Trump’s Twitter account a “protected public forum”?
Whilst the Supreme Court recognised that, because of the change in presidential administration, the legal point brought by the plaintiffs was now moot, Thomas J. nevertheless took the opportunity to highlight the principal legal difficulties in applying traditional legal doctrines to new digital platforms.
In his judgement, he explains that there is an important need to address the way the legal system deals with highly concentrated, privately owned information infrastructure such as digital platforms, since digital platforms now provide a gateway for “historically unprecedented amounts of speech, including speech by government actors”. Never before, he says, have a few private parties possessed such concentrated control over so much speech and access to information.
On the face of it, Thomas J. observes, there are aspects of Mr. Trump’s Twitter account, such as his frequent use of it in his official capacity to deliver public messages, which resemble a designated public forum. A designated public forum is commonly defined as “property that the State has opened for expressive activity by part or all of the public.”
However, Thomas J. also observes that Twitter’s terms of use, which allow the company to shut down any account at any time and for any reason, provide a strong indication that the Twitter platform is in fact a private space.
If the Twitter platform is a private space, the First Amendment free speech rights might not apply to the individuals blocked by Mr Trump.
Thomas J. accepts that it is, of course, possible for the government to use a private space without thereby excluding First Amendment rights, such as where a government body hires a private conference room for the purpose of holding a public meeting. What chiefly determines whether the space is public or private, however, is the level of control that the government can exercise over it.
In the case of the President’s Twitter account, there is no question that Mr Trump’s, or the government’s, control over the Twitter platform was minimal and that it was Twitter that was calling the shots.
The case for removing Twitter’s right to exclude users
If concentrated control over online content is part of the problem, then, Thomas J. observes, the solution might be to limit the right of private companies to exclude users from their platforms.
And, he continues, the way to limit the right of social media companies to exclude users from their platforms is to look at how the English, and later the US, governments have traditionally handled certain private companies known as “common carriers”.
Common carriers might be companies that possess a substantial market share, companies that hold themselves out as providing a public service, or companies whose service begins as a private initiative but over time becomes bound up with matters of public interest.
Thomas J. notes that common carriers had traditionally been the subject of “special regulations, including a general requirement to serve all comers.”
Thomas J. goes on to explain the historical context for regulating transportation and communication networks, such as railroads and telegraphs, to make them “bound to serve all customers alike, without discrimination.”
And here is the parallel that Thomas J. finds between traditional common carriers and modern tech companies: “a traditional telephone company laid physical wires to create a network connecting people. Digital platforms lay information infrastructure that can be controlled in much the same way.” In addition, digital platforms have a dominant market share similar to that of a substantial common carrier, which makes “the analogy to common carriers…even clearer”.
In return for coming under government regulation, Thomas J. continues, common carriers often receive government privileges, such as immunity from lawsuits in certain circumstances or a degree of acceptance by the government of their dominant market position.
“There is a fair argument”, he says, that “some digital platforms are sufficiently akin to common carriers or places of accommodation to be regulated in this manner.”
Thomas J. also makes a powerful point about the level of control that digital platforms have over dissemination of speech when he says:
“When a user does not already know exactly where to find something on the Internet—and users rarely do—Google is the gatekeeper between that user and the speech of others 90% of the time. It can suppress content by deindexing or downlisting a search result or by steering users away from certain content by manually altering autocomplete results”
“Facebook and Twitter can greatly narrow a person’s information flow through similar means. And…Amazon can impose cataclysmic consequences on authors by, among other things, blocking a listing.”
Thomas J. then goes on to address a popular argument that supports the right of digital platforms to exclude users and to continue practising unaccountable censorship of information. The argument is founded upon a false choice: if a user doesn’t like Twitter, he or she doesn’t have to use it and can instead use an alternative platform.
Thomas J. puts forward a powerful counter argument, which is based on comparable alternatives:
“A person always could choose to avoid the toll bridge or train and instead swim the Charles River or hike the Oregon Trail,” Thomas J. says, “but, in assessing whether a company exercises substantial market power, what matters is whether the alternatives are comparable. For many of today’s digital platforms, nothing is.”