Over the last two decades, Elon Musk has become a household name for founding or leading some of the most successful companies in the world, including Tesla, SpaceX and PayPal. He now has his sights set on Twitter, and plans to turn the already successful social media platform into a tech juggernaut, growing its user count by 200% over the next three years and bringing in five times more revenue by 2028.

Large companies are bought and sold all the time, and consumers aren’t generally concerned about who the companies' owners are. But this controversial deal – if it goes ahead – gives Elon Musk access to a very powerful way of influencing the world.

Musk is known for sharing his thoughts online, uncensored, so we already have a good sense of what he plans for Twitter. He has promised to scale back content moderation to protect free expression and to restore former US President Donald Trump’s account. But should we be worried about this more relaxed approach to moderation?

Firstly, what is content moderation?

Content moderation is how social media platforms ensure that content follows their rules and guidelines, and is safe for users to view. Companies like Twitter usually use a mix of algorithms and people to review content reported by users, and algorithms alone to proactively search for violating content. Moderation is vital to reduce abusive and toxic messages on social media – but it’s not perfect. Moderation processes won’t catch everything and, in some cases, they remove content that should have been allowed to stay. They also miss a lot of genuinely toxic content and activity, which can be incredibly difficult to identify. For instance, many fake accounts and bots on Twitter are not just the obvious kind – accounts with long alphanumeric usernames (e.g. ‘werkulderkle29192’), egg icons for profile pictures and scam ads retweeted every 15 minutes. Some are sophisticated operations that mimic real human behaviour, even acting innocuously for months before doing something nefarious.
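To make that hybrid process a little more concrete, here is a minimal, hypothetical sketch (in Python) of how a platform might route posts between automatic removal, human review and publication. The keyword ‘classifier’, the thresholds and the routing labels are illustrative assumptions for this article, not a description of Twitter’s actual systems.

```python
# A minimal, hypothetical sketch of a hybrid moderation pipeline.
# The rule list, thresholds and routing decisions are illustrative
# assumptions, not a description of Twitter's real systems.

import re
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


# Toy "classifier": in practice this would be a trained model,
# not a handful of keyword patterns.
ABUSE_PATTERNS = [r"\bidiot\b", r"\bscum\b", r"\bkill yourself\b"]


def abuse_score(post: Post) -> float:
    """Return a crude 0-1 score based on how many patterns match."""
    hits = sum(bool(re.search(p, post.text, re.IGNORECASE)) for p in ABUSE_PATTERNS)
    return min(1.0, hits / len(ABUSE_PATTERNS))


def route(post: Post, reported_by_user: bool) -> str:
    """Decide what happens to a post: remove, send to human review, or allow."""
    score = abuse_score(post)
    if score >= 0.66:                       # high confidence: act automatically
        return "remove"
    if score >= 0.33 or reported_by_user:   # uncertain or user-flagged: a person decides
        return "human_review"
    return "allow"


if __name__ == "__main__":
    examples = [
        (Post("a", "Lovely weather today"), False),
        (Post("b", "You absolute idiot, kill yourself"), False),
        (Post("c", "This looks like a scam to me"), True),
    ]
    for post, reported in examples:
        print(post.text, "->", route(post, reported))
```

Even in this toy version, the weakness described above is visible: a sophisticated bad actor who avoids the obvious patterns sails straight through to “allow”, while borderline posts pile up in a human review queue.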

The law might seem like an obvious starting point for online safety (this is, after all, what Elon Musk has pushed for!) – but in reality, turning to the law raises more problems than it solves. Few laws have been passed specifically to address online safety, and those that have were brought in only very recently, such as Australia’s Online Safety Act 2021 or Germany’s NetzDG in 2017. In the UK, the forthcoming Online Safety Bill has the admirable aim of ensuring that offences offline are also illegal online, but it has not yet been passed into law. A more fundamental problem is that it builds on pre-existing laws and so could replicate the same gaps, limitations and practical challenges. For instance, the Public Order Act 1986 is the main way in which hate is tackled in offline settings in the UK – but it still does not treat misogyny as a hate crime, despite widespread violence and abuse against women. These issues are compounded by the fact that platforms are inherently global, yet laws are set country by country – and, as such, the provisions in the UK’s Online Safety Bill differ from those outlined in Australia, New Zealand and America (and every other territory introducing online safety laws). Put simply, most platforms would find it almost impossible to just “apply the law”.

But, in turn, the big problem with platforms going beyond the law is that deciding what can be seen, said and shared online has been left to a tiny number of Silicon Valley executives. And at a time when most platforms are beefing up their online trust, safety and integrity operations, Elon Musk is using that power to propose a radical shake-up – dramatically downsizing the amount of moderation that Twitter applies, raising concerns that the platform could become more like ‘free speech’ niche sites such as Gab, 4chan and Truth Social.

A case for reducing moderation

Scaling back content moderation may at first, understandably, sound concerning. But the flipside of less top-down content moderation is that users could be empowered to control what content they view. If Twitter develops a market for what political scientist Francis Fukuyama calls ‘middleware’ (technology that allows users to choose their own software to identify and handle harmful content), we could witness a huge increase in choice through personalised trust and safety. This would take editorial power away from a very small number of large, wealthy technology platforms and give it to users, enabled by a diverse range of competitive firms that would allow people to tailor their experience online.
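To picture what middleware could look like in practice, here is a purely illustrative sketch of a plug-in interface: the platform passes each post to a filter the user has chosen, and that filter decides how the post is treated. The `ContentFilter` protocol, the two example providers and their word lists are assumptions made for illustration – not a real Twitter API or any existing product.

```python
# A purely illustrative sketch of a 'middleware' hook: the platform calls a
# filter the user has chosen, and that filter decides how each post is shown.
# The interface and providers below are assumptions, not a real Twitter API.

from typing import Protocol


class ContentFilter(Protocol):
    name: str

    def assess(self, text: str) -> str:
        """Return 'show', 'label' or 'hide' for a given post."""
        ...


class PermissiveFilter:
    """A provider that shows almost everything, only labelling blocklisted terms."""
    name = "permissive"
    BLOCKLIST = {"slur1", "slur2"}  # placeholder terms

    def assess(self, text: str) -> str:
        return "label" if any(w in text.lower() for w in self.BLOCKLIST) else "show"


class StrictFilter:
    """A provider that hides anything its (toy) toxicity check flags."""
    name = "strict"
    TOXIC_WORDS = {"idiot", "scum", "slur1", "slur2"}

    def assess(self, text: str) -> str:
        return "hide" if any(w in text.lower() for w in self.TOXIC_WORDS) else "show"


def render_timeline(posts: list[str], chosen_filter: ContentFilter) -> list[tuple[str, str]]:
    """Apply the user's chosen middleware filter to their timeline."""
    return [(chosen_filter.assess(p), p) for p in posts]


if __name__ == "__main__":
    posts = ["Great thread on rockets", "What an idiot take"]
    for decision, post in render_timeline(posts, StrictFilter()):
        print(decision, "-", post)
```

The interesting part is not the toy logic but the architecture: the strict and permissive providers could be built by different companies competing for users, rather than one set of rules being imposed on everyone by the platform itself.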

But this, again, presents challenges. First, the average social media user will find it difficult and time-consuming to evaluate the different middleware tools and choose the right one for them. Second, and perhaps most importantly, there just aren’t that many providers to choose from yet. Although there is mounting investor interest in the growing market of ‘safety tech’ providers, and trust and safety trade organisations are appearing, the market is still emerging. If the Musk deal goes through and if he does reduce content moderation (and those are two big ifs!), then there is a lot of work to do to ensure that Twitter doesn’t just become even more toxic.

And, of course, even if the middleware market is created, we may see a range of new problems appear. It could increase polarisation of opinions; leave people unprotected; create more digital inequalities based on ability to pay; and have insidious effects if people opt not to remove potentially harmful – or what might be considered ‘grey area’ – content. And there’s no guarantee that other platforms would follow Twitter, potentially limiting the full impact and growth of the market.

But… that aside, this may well be the best option in online safety that hasn’t yet been tried. Without more diversity and options, we will continue to have top-down moderation from platforms that alienates users, is rarely explained, and routinely makes serious mistakes. 

Dr Bertie Vidgen is Head of Online Safety at The Alan Turing Institute and co-founder of Rewire, a tech startup building socially responsible AI for online safety. 

 

Top image: Rokas Tenys / Shutterstock