Changing social media algorithms is enough to reduce political hostility


Reducing the visibility of polarizing content in social media news feeds can tangibly decrease partisan hostility. To reach this conclusion, my colleagues and I developed a method for modifying the ranking of posts in news feeds, an ability previously reserved for the social media platforms themselves.

Reordering feeds to limit exposure to posts expressing anti-democratic attitudes or partisan animosity changed both users' emotions and their perceptions of people with opposing political views.

I am a computer scientist specializing in social computing, artificial intelligence, and the web. Since only social media platforms can modify their own algorithms, we developed and released an open-source web tool that reorders the news feeds of consenting participants in real time on X, formerly Twitter.

Drawing on social science theories, we used a language model to identify posts likely to polarize users, such as those advocating political violence or the imprisonment of members of the opposing party. These posts were not removed; they were simply moved further down the feed, requiring users to scroll more to see them, reducing users' exposure to them.
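
To make the mechanics concrete, here is a minimal sketch of that kind of reranking in Python. Everything in it is a hypothetical illustration of the general technique: the `Post` fields, the keyword stand-in for the classifier, and the `rerank` helper are my assumptions, not the study's actual code.

```python
# Minimal sketch of LLM-assisted feed reranking. All names here are
# hypothetical illustrations of the general technique, not the study's code.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str  # the post content shown in the feed

# Crude keyword stand-in so the sketch runs without an API key; a real
# system would instead query a language model with each post's text.
POLARIZING_CUES = ("civil war", "lock them up", "traitors")

def is_polarizing(text: str) -> bool:
    """Stand-in for an LLM classifier that flags posts expressing
    anti-democratic attitudes or partisan animosity, e.g. advocating
    political violence or jailing members of the opposing party."""
    lowered = text.lower()
    return any(cue in lowered for cue in POLARIZING_CUES)

def rerank(feed: list[Post]) -> list[Post]:
    """Demote flagged posts without deleting anything: unflagged posts
    keep their relative order at the top, flagged posts keep theirs but
    sink to the bottom, still reachable by scrolling."""
    flags = {p.post_id: is_polarizing(p.text) for p in feed}
    kept = [p for p in feed if not flags[p.post_id]]
    demoted = [p for p in feed if flags[p.post_id]]
    return kept + demoted

if __name__ == "__main__":
    feed = [
        Post("1", "Lovely sunset over the bay tonight."),
        Post("2", "Lock them up, every last one of those traitors!"),
        Post("3", "Our new paper on feed ranking is out."),
    ]
    print([p.post_id for p in rerank(feed)])  # -> ['1', '3', '2']
```

Because demotion is a stable reordering rather than a deletion, the intervention changes only how much scrolling it takes to reach a post, which is exactly the exposure lever the experiment manipulated.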

We conducted this experiment over ten days in the weeks leading up to the 2024 US presidential election. We found that limiting exposure to polarizing content measurably improved participants' perceptions of members of the opposing party and reduced their negative emotions while scrolling through their feeds. Notably, these effects were similar across party lines, suggesting that the intervention benefits users regardless of their political affiliation.

Why this is important

A common misconception is that we must choose between two extremes: engagement-based algorithms or purely chronological feeds. In reality, there is a wide range of intermediate approaches, each defined by the objectives it is optimized for, as the sketch below illustrates.
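
To see that continuum concretely, consider a toy scoring rule, my illustration rather than any platform's actual formula, in which a single weight `alpha` interpolates between pure engagement ranking and a purely chronological feed.

```python
import math
import time

def feed_score(predicted_engagement: float, posted_at: float,
               alpha: float, half_life_hours: float = 6.0) -> float:
    """Toy interpolation between two ranking extremes. alpha = 1.0 ranks
    purely by predicted engagement; alpha = 0.0 ranks purely by recency,
    i.e. a chronological feed; values in between blend the objectives."""
    age_hours = (time.time() - posted_at) / 3600.0
    # Recency starts at 1.0 and halves every half_life_hours.
    recency = math.exp(-math.log(2.0) * age_hours / half_life_hours)
    return alpha * predicted_engagement + (1.0 - alpha) * recency

# Example: an hour-old, high-engagement post vs. a fresh, low-engagement
# post, scored under three different weightings.
now = time.time()
for alpha in (0.0, 0.5, 1.0):
    hot_old = feed_score(0.9, now - 3600, alpha)
    mild_new = feed_score(0.2, now, alpha)
    print(f"alpha={alpha}: hot_old={hot_old:.2f}, mild_new={mild_new:.2f}")
```

At `alpha = 0.0` the fresh post wins, as in a chronological feed; at `alpha = 1.0` the high-engagement post wins. A downranking intervention like ours can be seen as adding one more objective to such a blend, for example a penalty on posts classified as polarizing.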

News feed algorithms are generally designed to capture your attention and therefore have a significant impact on your attitudes, mood, and perception of others. It is thus urgent to have frameworks that allow independent researchers to test new approaches under realistic conditions.

Our work opens this path: it shows how to study and prototype alternative algorithms on a large scale, and demonstrates that with large language models (LLMs), platforms finally have the technical means to detect polarizing content likely to influence the democratic attitudes of their users.

What other research is being conducted in this area?

Testing the impact of alternative algorithms on live platforms is complex, and such studies have only recently begun to multiply.

For example, a recent collaboration between academics and Meta showed that switching to a chronological feed was not enough to reduce polarization. A related effort, the Prosocial Ranking Challenge led by researchers at the University of California, Berkeley, explores ranking alternatives across multiple platforms to promote positive social outcomes.

Meanwhile, advances in LLM development are enabling better modeling of how people think, feel, and interact. There is growing interest in giving users more control, allowing them to choose the principles that guide what they see in their feed: for example, Alexandria, a library of pluralistic values, or the Bonsai feed-reorganization system. Social platforms such as Bluesky and X are also moving in this direction.

What's next

This study is just a first step toward designing algorithms that are aware of their potential social impact. Many questions remain open. We plan to investigate the long-term effects of these interventions and to test new ranking objectives that address other aspects of online well-being, such as mental health and life satisfaction. Future work will explore how to balance multiple objectives, including cultural context, personal values, and user control, to create online spaces that foster healthier social and civic interactions.

Author Bio: Tiziano Piccardi is Assistant Professor of Computer Science at Johns Hopkins University.
