
The writer, a lawyer at Orrick in Paris, was previously counsel for Twitter UK.
Within the past few days, Elon Musk has been reminded of Twitter’s legal obligations in relation to online content by both the president of France, Emmanuel Macron, and Thierry Breton, the EU commissioner for the internal market. These public interventions come in response to concerns that the platform will become some kind of free-speech hellscape.
It is widely acknowledged that dealing with harmful content is one of the toughest problems thrown up by the internet. Mistakes are common and a satisfactory, scalable solution has proved elusive. In Europe, the pendulum has moved towards a law-based model rather than self-regulation. Multiple new laws require online intermediaries to take action in relation to illegal content on their sites or face huge fines. Last month, the EU’s Digital Services Act came into force. This is part of a framework of overlapping laws that include the EU copyright directive, the terrorism content regulation and many national laws such as the UK’s proposed online safety bill. And there are more to come.
Both compliance with and enforcement of this regime will be difficult. Whether it will actually make the internet safer remains an open question. A dispute in France illustrates how hard it is to regulate, even when everyone agrees on the objective.
In October, a French court decided that a constitutional challenge raised by the controversial website Pornhub could proceed against a law that allows the blocking of porn sites that fail to prevent access by children. In 2020 the government amended the criminal code to specify that the use of age declaration tools (“click here if you are over 18”) would not be sufficient. New legislation empowered France’s online content regulator, Arcom, to seek blocking orders against websites that don’t implement robust age gating.
But at present there is no age verification technology that French authorities consider both effective and privacy preserving. Arcom is supposed to publish guidelines on compliant tools, but hasn’t. It nevertheless served enforcement notices last year against five free-to-view porn sites, including Pornhub, giving them 15 days to replace their age verification model or risk being blocked. The websites’ lawyers claim that attempts to engage in discussion were ignored. The sites took no action and so blocking proceedings were initiated.
Like it or not, it is understandable that, faced with a choice between adopting ineffective age verification tools that might compromise user privacy and being blocked, Pornhub’s owner MG Freesites opted to challenge the law. The court agreed that there was a question to be tried and the challenge is proceeding. And so, more than two years after the French law was toughened, a pressing social problem persists.
The European Commission recently published the draft of a new law aimed at preventing the online dissemination of child sexual abuse material. Its objective is also child protection, and the need is urgent: reports of abuse and images depicting it have increased dramatically. Key to the draft is a detection obligation that would require providers to adopt technology capable of identifying such images, as well as grooming behaviour.
The project has elicited much condemnation. Critics warn of excessive surveillance of internet users, and some question whether technical solutions capable of implementing the law even exist. In contrast, children’s rights organisations have welcomed the initiative.
Given the polarised views and the fundamental rights at stake, the road to adopting this regulation will at best be long. If it is ever enacted, the French dispute provides a clue to the implementation challenges ahead. Politicians are understandably using the Twitter situation to promote Europe’s paradigm shift. How we make the internet safer needs more public debate. But we’re still a long way from a solution.