How unfollowing changed everything
When users unfollowed extreme accounts, their digital lives improved—fueling momentum for algorithmic plurality.
Polarization isn't just a political problem—it's a design flaw in our social media ecosystems. But what if there were a way to reduce the toxicity of our digital lives without logging off entirely?
In “Unfollowing Hyperpartisan Influencers Reduces Partisan Animosity and Misinformation Exposure,” researchers from NYU, Cambridge, and UNC conducted real-world field experiments that revealed a potentially precise, actionable fix. The experiments involved more than 1,600 Twitter/X users who were paid a small fee to unfollow hyperpartisan influencers, accounts known for spreading inflammatory or misleading content. Some participants were also nudged to follow accounts that posted about science and nature.
The results?
A 23.5% drop in political animosity toward the opposing party, with effects lasting at least six months.
Increased satisfaction with participants’ social feeds, without any drop in overall engagement.
Reduced exposure to political content and misinformation up to a year later.
Boosts in well-being, including awe and curiosity, among those who followed science-focused accounts.
Perhaps most striking: nearly half of the participants refused to re-follow the partisan accounts once the study ended. As the authors note, this intervention didn’t just shift behavior—it reshaped preferences.
A scalpel, not a sledgehammer
Author Steve Rathje and colleagues write, “unfollowing is a targeted approach: like a scalpel, it surgically removes a few harmful parts of one’s feed, allowing the beneficial aspects to remain.”

The study also highlights a broader issue. According to Pew Research, just 10% of Twitter/X users generate 97% of political content. This small cohort of superusers dominates the discourse, warping our perception of what’s “normal” or representative. The resulting digital distortion, in which extreme views appear mainstream, shapes our views not only on politics but also on vaccines, gender, climate, and more, fostering the perception that common ground is impossible to find.
Instead of asking users to quit social media altogether or rely on lab-based simulations, the study let people remain fully active online, just with less of the divisive content. It gave users what research has shown they want, a less toxic and less polarizing feed, while fostering more curiosity and balance.
Algorithmic plurality and choice
This research highlights what could be possible if users had more choice over how content is recommended to them, an idea central to the European Union’s Digital Services Act (DSA). Articles 27 and 38 of the DSA require very large online platforms (VLOPs) to give users more autonomy over how content is ranked, including at least one recommendation option that does not rely on profiling. It is a powerful regulatory recognition that algorithmic design is not neutral, and that users deserve meaningful control over their online experience.
The Panoptykon Foundation has long advocated for algorithmic plurality: letting users switch between different recommender systems depending on their intent. A simple “Why are you here?” prompt—followed by content feeds tailored for news, entertainment, learning, or connection—could help shift how platforms engage users.
Their June 2025 report, Towards Algorithmic Pluralism in the EU, outlines core policy shifts to make user choice a reality:
Make switching easier: Platforms should provide intuitive, visible controls that let users understand and modify how their feed is ranked.
Allow third-party recommender systems: Let users access alternative algorithms designed outside the platform—so recommendation logic can prioritize goals other than engagement, like balance or quality.
Ensure interoperability: Platforms should adopt interoperability standards across services, so users can carry their preferences from one platform to another.
Together, these reforms would let users shape their digital environments, disrupt the current one-size-fits-all feed logic, increase transparency, and advance toward healthier online public spheres.
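To make the idea slightly more concrete, here is a minimal, purely hypothetical sketch of what pluggable, user-selectable ranking could look like. Everything in it (the post fields, the strategy names, the mapping from a “Why are you here?” choice to a ranking rule) is an illustrative assumption, not any platform’s actual API and not the report’s own proposal; it only shows the core mechanic of algorithmic plurality: several ranking functions, with the user choosing among them.

```python
# Hypothetical sketch of algorithmic plurality: a feed is ranked by whichever
# strategy the user selects. Field names and strategies are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Post:
    author: str
    text: str
    engagement: float   # clicks/likes signal (assumed)
    is_political: bool
    quality: float       # e.g., an assumed third-party credibility score


def rank_by_engagement(feed: List[Post]) -> List[Post]:
    # Today's default logic: most engaging first.
    return sorted(feed, key=lambda p: p.engagement, reverse=True)


def rank_by_quality(feed: List[Post]) -> List[Post]:
    # A third-party recommender might prioritize quality over engagement.
    return sorted(feed, key=lambda p: p.quality, reverse=True)


def rank_calmer(feed: List[Post]) -> List[Post]:
    # The "scalpel" idea: demote (not remove) political posts, then rank by quality.
    return sorted(feed, key=lambda p: (p.is_political, -p.quality))


# The user's answer to a "Why are you here?" style prompt selects a strategy.
RECOMMENDERS: Dict[str, Callable[[List[Post]], List[Post]]] = {
    "engagement": rank_by_engagement,
    "quality": rank_by_quality,
    "calmer": rank_calmer,
}


def build_feed(feed: List[Post], user_choice: str) -> List[Post]:
    return RECOMMENDERS[user_choice](feed)


if __name__ == "__main__":
    sample = [
        Post("a", "outrage bait", engagement=0.9, is_political=True, quality=0.2),
        Post("b", "science thread", engagement=0.4, is_political=False, quality=0.9),
        Post("c", "local news", engagement=0.5, is_political=True, quality=0.7),
    ]
    for choice in RECOMMENDERS:
        print(choice, [p.author for p in build_feed(sample, choice)])
```

In this framing, “allowing third-party recommender systems” simply means letting outside parties register additional ranking functions, and interoperability means a user’s chosen strategy and preferences travel with them across services.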
Why people post partisan content
But if feed curation helps reduce harm, why is hyperpartisan content so prevalent in the first place?
That’s what researchers Antoine Marie and Michael Bang Petersen set out to explain in their July 2025 paper, “Motivations to Connect with Like-Minded Audiences Increase Partisan Sharing on Social Media.” In a series of experiments, Marie and Petersen found that the desire to affirm group identity was a stronger motivator than the desire to educate or persuade. People posted more partisan content when they believed their audience agreed with them, and moderated their posts when they believed it did not.
Interrupting the toxicity cycle
Together, these studies reveal a feedback loop, one that underscores the need for progress toward algorithmic plurality:
Hyperpartisan influencers post extreme content to validate their audiences.
Algorithms optimized for clicks give them disproportionate reach.
Users mistake extreme views for mainstream ones.
Polarization deepens, and posting becomes more extreme.
The cycle repeats.
These findings point to the imperative to shift how we design, govern, and engage with digital spaces.
For tech companies, this research underscores the value of user-centered design interventions—like feed audit tools, transparent ranking systems, and content quality nudges. But it also points toward a more ambitious opportunity: designing for algorithmic plurality. Offering users meaningful options in how content is filtered—whether for engagement, balance, or well-being—could shift platform dynamics toward healthier digital ecosystems, without curbing free expression.
For policymakers, the findings reinforce the urgency of regulating algorithmic amplification. If a tiny group of superusers generates most of the toxic or misleading content, and platform design amplifies their voices, then the problem is architectural, not just behavioral. It’s essential to ensure users can opt out of engagement-optimized algorithms, access third-party recommender systems, and carry their preferences across platforms.
And for the public, until systemic changes arrive, we can keep trying to shape our own digital experiences. Try unfollowing the loudest, most toxic voices—and you may discover a feed that feels more grounded, more curious, and more connected.
Lena Slachmuijlder is Senior Advisor at Search for Common Ground and co-chairs the Council on Tech and Social Cohesion.

