Policing the internet: is this the beginning of a new era?
So, who is really in charge of policing the internet? In my book, The Net Is Closing: Birth of the E-Police, I made the point that, in fact, no one was in charge, and that if you were in favour of ending the current state of online anarchy and replacing it with regulations and safety standards similar to those we are used to in the real world, you should join the calls for the government to step in.
Many people did. During 2017, internet users began to demand more from those who are meant to protect them, and the government has responded. Internet giants have been warned time after time to up their self-regulatory game or face external policing of the internet.
For many years, internet companies claimed that moderating and removing user-generated content was undemocratic, difficult and expensive. In 2017, it seems, they had a change of heart.
Internet companies now say that they have changed their approach and are open to self-regulation: writing new user rules of their own, hiring more staff, blocking more user accounts, removing more fake accounts, and taking far more initiative in reporting suspicious user activity to the authorities.
So is it true that internet companies such as Facebook, Google, Yelp and Automattic are doing more to keep us safe online?
I have always said that when it came to self-regulation of the internet by internet companies, I would believe it only when I saw it. So later this week I will be flying to Santa Clara in California to take part in a rare event which, for the first time in the history of the internet, will see representatives from Reddit, Google, Facebook, Wikimedia, Yelp, Pinterest and Microsoft (to mention just a few) come together publicly to openly discuss their philosophies, strategies and policies of online self-regulation. Judging from the volume and quality of the pre-reading material I was sent before the conference, this time they mean business.
The conference will also explore how internet companies operationalise the moderation and removal of third-party, user-generated content on their own platforms, the main operational challenges they face, and the possible solutions.
I am looking forward to attending and then reporting back from the Content Moderation & Removal at Scale conference to our clients and to the government, and of course I will be updating this blog in due course.