By OpenRightsGroup: Theresa May and Emmanuel Macron’s plans to make Internet companies liable for ‘extremist’ content on their platforms are fraught with challenges. They entail automated censorship, risking the removal of unobjectionable content and harming everyone’s right to free expression.
The Government quietly announced on Tuesday 13 June that Theresa May and the French President Emmanuel Macron will talk today about making tech companies legally liable if they “fail to remove unacceptable content”. The UK and France would work with tech companies “to develop tools to identify and remove harmful material automatically”.
No one would deny that extremists use mainstream Internet platforms to share content that incites people to hate others and, in some cases, to commit violent acts. Tech companies may well have a role in helping the authorities challenge such propaganda, but shutting it down is not as straightforward or consequence-free as politicians would like us to believe.
First things first, how would this work? It almost certainly entails the use of algorithms and machine learning to censor content. With this sort of automated takedown process, the companies decide how the algorithms behave — in particular, how aggressively content is flagged and removed. Given the economic and reputational incentives on companies to avoid fines, it seems highly likely that they will go down the route of hair-trigger, error-prone algorithms that end up removing unobjectionable content along with the extremist material.
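To make the false-positive problem concrete, here is a minimal toy sketch of a threshold-based takedown filter. Everything in it — the keywords, weights, example posts and thresholds — is invented for illustration and does not reflect any real platform's system; the point is only how lowering the removal threshold to avoid fines sweeps up journalism and scholarship alongside genuinely objectionable posts.

```python
# Toy "extremism" filter: score posts by flagged keywords, remove any
# post whose score meets a removal threshold. All keywords, weights and
# example posts below are invented for illustration only.

KEYWORD_WEIGHTS = {
    "attack": 0.4,
    "jihad": 0.4,
    "recruit": 0.3,
}

def extremism_score(post: str) -> float:
    """Sum the weights of flagged keywords appearing in the post (capped at 1.0)."""
    words = post.lower().split()
    score = sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in words)
    return min(score, 1.0)

def takedown(posts: list[str], threshold: float) -> list[str]:
    """Return only the posts that survive: those scoring below the threshold."""
    return [p for p in posts if extremism_score(p) < threshold]

posts = [
    "join us recruit for the attack tomorrow",      # genuinely objectionable (0.7)
    "news report: police foil attack plot",         # journalism (0.4)
    "academic study of jihad in medieval texts",    # scholarship (0.4)
    "cat photos from my weekend",                   # harmless (0.0)
]

# A cautious threshold removes only the genuinely objectionable post:
print(len(takedown(posts, threshold=0.7)))  # 3 posts survive
# A hair-trigger threshold, chosen to avoid fines, also removes the
# news report and the academic study:
print(len(takedown(posts, threshold=0.4)))  # only 1 post survives
```

Real systems use machine-learning classifiers rather than keyword lists, but the trade-off is the same: the removal threshold is a dial the company controls, and the legal incentives described above push that dial toward over-removal.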