Far-Right Extremism on the Metaverse: What will it look like?

Horizon Worlds, Facebook’s new virtual reality platform, is about to be launched globally. Georgios Samaras considers the opportunities Horizon Worlds will present for extremist groups and disinformation, drawing on his research into content shared by Greek far-right extremists on social media.


The world is getting ready to witness the global launch of Facebook’s new Metaverse application, Horizon Worlds, which debuted in the US and Canada in December 2021. But what exactly is Horizon Worlds? It is, essentially, Zuckerberg’s new vision for virtual reality (VR), which aims to bring Facebook users together in a world of animated avatars and exploration. Sound familiar? Some have already pointed out the similarities with other online worlds, such as Second Life. Although the animation and design of the two environments look almost identical, the purpose of Horizon Worlds is to expand the largest social networking platform and enable communication and collaboration in a brand-new virtual environment.

Zuckerberg announced his new virtual world back in October 2021, in a presentation that received mostly negative feedback. He did not address any of the allegations about extremism or radicalisation on Facebook. Instead, the shiny presentation emphasised how convenient it will be to join your friends, family, or co-workers in a “limitless” VR space: build your avatar and start interacting with others. What could possibly go wrong?

What Facebook promises to offer in 2022 has already been delivered by other applications and video games, such as Second Life, often with better guarantees of online safety. The real issue is that Facebook’s ineffective approach to moderating harmful content could result in a chaotic launch of Horizon Worlds. After all, some users have already reported serious incidents, including sexual harassment, in the Metaverse.

Facebook itself does not have a great track record when it comes to content moderation. It has been unable to develop effective processes for dealing with extremist groups, many of which make sophisticated use of private pages and groups. Making content private means that most users cannot access it; it remains exclusive to members. And because Facebook relies primarily on user reports, activity that users cannot detect probably goes unreported.

The numbers do not make Facebook’s moderation efforts look impressive. In 2008, when Facebook reached 100 million active users, there were only 12 content moderators. Of course, content used to be less offensive, and moderators primarily removed posts containing certain keywords, such as ‘Hitler’ or ‘Holocaust’. Following Facebook’s rapid growth to 1 billion users in 2013, there were still only 1,000 moderators. Similarly, in 2020, when Facebook passed the 3 billion milestone (almost half of the world’s population), it employed only 15,000 active content moderators. That is one moderator per 200,000 users.
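
To put that figure in perspective, the 2020 ratio follows directly from the numbers above:

\[
\frac{3{,}000{,}000{,}000 \ \text{users}}{15{,}000 \ \text{moderators}} = 200{,}000 \ \text{users per moderator}
\]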

In short, Facebook has not fixed its problem of being ‘overrun by hate speech and disinformation’, as the New Yorker put it. So how can it guarantee the safety and security of users in virtual worlds such as Horizon Worlds?

A virtual environment could indeed provide fertile ground for far-right extremists, who will be able to visualise themes and content with significantly more freedom, most importantly by taking advantage of animation. Violent extremists could find the Metaverse a useful recruiting and organising tool, and a target-rich environment that might end up becoming their private playground. These are uncharted waters for most researchers, but some early analyses already suggest that things are not looking good. Mixing artificial intelligence and VR in the Metaverse means that anyone could gain access to virtual spaces where followers listen to speeches, engage in recruiting activities, and discuss future actions organised by those groups.

The good news is that the Metaverse might not be the most straightforward way to do this, since the same ends can be achieved faster through traditional forms of communication, such as instant messaging and social media groups. The bad news is that there might be an even darker side to the Metaverse: it could offer new technological means of planning extremist or terrorist acts across a diffuse membership.

Let’s take the Capitol riot as a notable example. Extremist leaders could build brand-new virtual environments containing representations of any physical building, which would allow them to walk members through routes leading to key objectives. The same applies to other government buildings and public spaces, which can be re-created in digital worlds that, most importantly, allow collaboration between users. Imagine if, on January 6, 2021, the Capitol rioters had known where to go to reach the Members of the House, instead of occupying the Senate Chamber.

A plethora of studies have investigated visual content shared by extremists on Facebook, ranging from disinformation and anti-vax rhetoric to anti-democratic notions and racist memes. Visualisation is widespread, and language is not a barrier to creating and sharing such content, as the online rise of far-right extremism during the years of the fiscal crisis shows. My research broadly explored the content shared by Greek far-right extremists on Facebook, Twitter, and YouTube, revealing an ultranationalist discourse that constantly violated hate speech regulations. Most importantly, far-right extremist groups tend to idolise the same personas and use similar techniques to share content, such as memes, GIFs, quotes, and jokes.

In sum, while it is hard to tell precisely what far-right extremism will look like on the Metaverse, one thing is certain: Facebook’s self-moderation of the platform is ineffective, and this could have dire consequences once the Metaverse launches. Facebook is aware of how its algorithms and recommendation systems push some users to extremes. Other platforms, such as Twitter and YouTube, have done much more to tackle extremism and hate speech. Users deserve answers on how the platforms themselves are designed to funnel specific content to certain users, and how that might distort users’ views and shape their behaviour, online and offline. If the Metaverse does not implement effective moderation strategies, most of the hypothetical scenarios described above could come true.


Georgios Samaras has completed his PhD in European Studies at King’s College London, where he currently works as a Research Associate. He also teaches Politics at University College London and the London School of Economics.

Note: The views expressed in this post are those of the author, and not of the UCL European Institute, nor of UCL.

Photo by Joshua Hoehne on Unsplash.
