From clicks to chaos: How social media algorithms amplify extremism


Algorithms have become a keystone of content distribution and user engagement on social media. While these systems are designed to enhance the user’s experience and engagement, they often unintentionally amplify extremist propaganda and polarising narratives. This amplification can exacerbate societal divisions, promote disinformation, and bolster the influence of extremist groups. The phenomenon is known as “algorithmic radicalisation”: social media platforms draw users into ideological rabbit holes and shape their opinions through selective content curation.

This article explores the mechanisms behind algorithmic amplification, its role in spreading extremist narratives, and the challenges in countering extremism.

Social media algorithms are computerised rules that examine user behaviour and rank content based on interaction metrics such as likes, comments, shares, and recency. They also use machine learning models to make customised recommendations. This process works as an amplifier: posts that quickly attract engagement or shares gain further visibility, and viral trends sometimes emerge as a result. Algorithms may also create echo chambers by repeatedly showing users similar viewpoints to keep them engaged.
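To make this ranking logic concrete, the sketch below shows a minimal, hypothetical engagement-weighted ranking function in Python. The weights, field names, and recency decay are assumptions for illustration only, not any platform’s actual formula; the point is that posts with more interactions score higher and therefore get shown more, which in turn earns them still more interactions.

```python
# Minimal sketch of engagement-weighted ranking (illustrative only; the weights,
# field names, and scoring formula are assumptions, not any platform's real model).
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    likes: int
    comments: int
    shares: int
    age_hours: float


def engagement_score(post: Post) -> float:
    """Score a post by weighted engagement, decayed by age (recency)."""
    interactions = post.likes + 2.0 * post.comments + 3.0 * post.shares
    recency_decay = 1.0 / (1.0 + post.age_hours)
    return interactions * recency_decay


def rank_feed(posts: list[Post], top_k: int = 10) -> list[Post]:
    """Return the highest-scoring posts. High-engagement posts surface first,
    gaining further exposure -- the feedback loop described above."""
    return sorted(posts, key=engagement_score, reverse=True)[:top_k]
```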

Hashtags are another vital factor. They serve as keywords that classify content, making it discoverable to a broader audience. When a hashtag is used, it helps the algorithm recognise the topic of a post and link it to users searching for or following that hashtag. Posts with trending or niche-specific hashtags are prioritised, and high engagement boosts their visibility further. Together, algorithms and hashtags amplify content by targeting specific audiences and promoting posts aligned with a user’s interests and behaviours.
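A similarly simplified sketch of hashtag-driven discoverability follows. The boost factors and matching logic are hypothetical assumptions, but they illustrate how a post tagged with a hashtag that a user follows, or that is currently trending, can be prioritised over an otherwise identical post.

```python
# Illustrative sketch of how hashtags can boost visibility for users who follow
# or search a topic. Boost factors and matching logic are assumptions for
# demonstration; real platforms use far richer signals.
def hashtag_boost(post_tags: set[str], user_followed_tags: set[str],
                  trending_tags: set[str]) -> float:
    """Multiply a base ranking score by boosts for followed and trending hashtags."""
    boost = 1.0
    if post_tags & user_followed_tags:   # topic matches the user's interests
        boost *= 1.5
    if post_tags & trending_tags:        # topic is currently trending
        boost *= 1.3
    return boost


# Example: a post tagged with a hashtag the user follows and that is also
# trending receives a combined boost of roughly 1.95x.
print(hashtag_boost({"news", "politics"}, {"politics"}, {"politics"}))
```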

Algorithms are at the core of social media platforms such as YouTube, TikTok, Facebook, X (formerly known as Twitter) and Instagram, and they tailor what users see based on their digital interactions, behaviours, preferences and engagement. By focusing on metrics such as likes and shares, algorithms tend to promote emotionally provocative or controversial material, creating feedback loops that amplify polarising narratives. In one of his studies, academic Joe Burton indicated that such algorithmic biases can heighten engagement through fear, anger or outrage, inadvertently giving rise to extremist ideologies and making users vulnerable to radical content.

Two extremist groups that have effectively utilised these platforms to spread propaganda and recruit members are the Islamic State (IS) and al-Qaeda. For example, IS uses X and Telegram to foster a sense of belonging among its followers, often publishing emotionally provocative content aimed at radicalising people, while al-Qaeda uses YouTube to deliver speeches and training, embedding encrypted links in its videos. At the other end of the spectrum, TikTok has been utilised by far-right elements: its “For You” page frequently recommends far-right material to users, drawing them into algorithmic rabbit holes that amplify extremist ideologies.

Algorithmic exploitation is not confined to terrorism. It is also used to spread disinformation during elections, sometimes resulting in violence and polarisation. Such examples underline how algorithms, by prioritising engagement over accuracy, facilitate the spread of disinformation, polarisation, and extremist narratives, making them pivotal tools in modern cyber and ideological warfare. Extremist strategies often align with how algorithms optimise engagement by pushing emotionally charged content. By creating “filter bubbles”, algorithms expose users to ideologies that match their existing biases, reinforcing extremist beliefs.

Given their opacity, social media algorithms present particular challenges when it comes to extremist content. They work as “black boxes” in which even developers do not fully understand why certain content is recommended. For instance, TikTok’s “For You” page has been flagged for sensational and extremist material, but the opacity of its operational mechanics limits efforts to mitigate algorithmic bias. Extremist groups exploit this by recasting their content with euphemisms or symbols to evade detection systems. Moreover, algorithms are deployed globally without adaptation to local sociocultural contexts, which worsens the problem.

Balancing free speech and effective content moderation is a complex issue. Policies such as Germany’s NetzDG (Network Enforcement Act), which aims to curtail online hate speech, force platforms to remove harmful content within tight deadlines. Extremist groups exploit the weaknesses in these balancing acts by crafting content that stays just within legal boundaries, allowing them to continue disseminating divisive ideologies.

The algorithmic amplification of extremism has been mitigated through Artificial Intelligence (AI)-driven moderation, such as YouTube’s 2023 machine-learning model, which reduced flagged extremist videos by 30 percent. Nonetheless, coded language and satire have been used to avoid detection, mainly by IS and al-Qaeda. Counter-narrative strategies, such as Instagram redirecting searches to tolerance-promoting content, offer constructive alternatives.

Content retrieved from: https://www.orfonline.org/expert-speak/from-clicks-to-chaos-how-social-media-algorithms-amplify-extremism.
