When platforms polarize, peacebuilding must evolve
As digital platforms enable harms by design, a new framework from Build Up urges peacebuilders to target systems, not symptoms.
From Gaza to Myanmar, the front lines of conflict are no longer confined to streets and borders; they run through group chats, timelines, and algorithm-driven feeds. As digital spaces shape how violence unfolds and spreads, one thing is clear: peacebuilding can’t succeed without addressing the systems that enable digital harm. The problem is not only what happens online, but how platforms enable it.
A new report by Build Up, Addressing Digital Harms in Conflict: a Review of Global Best Practices, commissioned by the UK’s Foreign, Commonwealth and Development Office, offers a framework to do exactly that. It begins by clarifying a conceptual muddle—and in doing so, reframes the challenge entirely.
To learn more about the report, join Build Up’s public webinar on 29 May, featuring reflections from three peacebuilding and mediation practitioners on how digital harms affect their work.
How harm becomes digital
The report introduces three foundational concepts:
Harm in conflict is anything that creates or intensifies division between groups, often called affective polarization. When these social bonds fray and finally break, violence can follow.
Digital affordances refer to what digital technologies enable: how information is produced, distributed, and experienced online. These affordances shape not just what content appears, but how it travels and who sees it.
Digital harm is the damage—intentional or incidental—that results from these affordances. It’s not just what’s said, but how platforms make it viral, target it, or suppress it.
How harm is digitally enabled
From that foundation, the report identifies five key digital affordances driving harm in today’s conflicts:
Offensive cyber operations: Think doxxing, data leaks, and digital intimidation. In Sudan and Nigeria, these tactics silence civil society and threaten mediators.
Network control: Shutdowns and surveillance as tools of repression. In Tigray, a nearly two-year digital blackout was not just a connectivity issue—it was a human rights crisis.
Information deception and manipulation: From deepfakes to false narratives, these tactics fracture trust and escalate tensions. Disinformation isn’t random—it’s engineered.
Manipulative influence operations: Coordinated, often automated campaigns that make fringe voices seem mainstream—and vice versa.
Algorithmic amplification: The most invisible force of all. Engagement-driven algorithms reward divisive, emotionally charged content, making polarization a feature, not a flaw.
Together, these affordances create an ecosystem where harm spreads rapidly, often leading to irreparable damage in already fragile conflict zones.
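To make the last of those affordances concrete, here is a minimal, hypothetical Python sketch of an engagement-optimized feed. The posts, the emotional_charge values, and the assumption that predicted engagement rises with emotional charge are invented for illustration; this does not describe any actual platform’s ranking system.

```python
# Toy illustration: engagement-based ranking tends to surface divisive content.
# All posts, scores, and the engagement model below are hypothetical.

posts = [
    {"text": "Community cleanup this weekend",    "emotional_charge": 0.1},
    {"text": "THEY are destroying our country",   "emotional_charge": 0.9},
    {"text": "Local market reopens after floods", "emotional_charge": 0.2},
    {"text": "Leaked: what THEY said about us",   "emotional_charge": 0.8},
]

def predicted_engagement(post):
    # Assumption of this sketch: outrage drives clicks, shares, and comments,
    # so predicted engagement rises with a post's emotional charge.
    return 0.2 + 0.8 * post["emotional_charge"]

# An engagement-optimized feed ranks purely on predicted engagement...
for post in sorted(posts, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(post):.2f}  {post['text']}")

# ...so the most divisive posts rise to the top regardless of accuracy or
# social cost: polarization becomes a property of the ranking itself.
```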
The limits of content moderation and counter-narratives
With this diagnosis, the report urges a shift beyond digital literacy, content take-downs or counter-narratives. As it states:
“To many peacebuilders and mediators, content moderation and counter-influence strategies can feel like an uphill battle against a torrent of divisive content in a polarized information ecosystem primed to believe and promote digital harm.”
The message is clear: no amount of fact-checking can meaningfully address the structural incentives driving division. We can’t “digital-literacy” our way out of a space that is being systemically and algorithmically manipulated during high conflict, as with TikTok in the run-up to Kenya’s 2022 elections, when hate speech and disinformation reached hundreds of thousands of users via recommender systems.
That’s why the report emphasizes the need to move upstream, away from content and toward systems. In doing so, it echoes a powerful line from ARTICLE 19’s December 2024 report Clearing the Fog of War, which examines threats to freedom of expression in times of war:
“It remains ARTICLE 19’s firm belief that, whether in times of peace or conflict, the emphasis should shift from the content itself to the systems and incentives that determine how content is generated, distributed, and amplified online.”
This doesn’t imply abandoning traditional peacebuilding or mediation tools. It’s about recognizing that these tools must now operate in a second, digital orbit—one governed by different dynamics and requiring new approaches.
Here’s how to respond
The Build Up report outlines two levels of response:
At the system level: There’s an urgent need for platform redesign. This includes rethinking engagement-based algorithms, prioritizing resilience over virality, and ensuring companies build conflict sensitivity into their core operations, not as an add-on but as a design principle. A toy sketch after these two points illustrates what reweighting a ranking objective could look like.
At the practice level: Digital peacebuilding must move beyond isolated efforts. Programs should aim to shift online behavior norms, support connection across divides, and build long-term resilience against divisive manipulation. While still emergent, the theory of change is simple: sustained, intentional engagement can transform digital spaces into sites of social repair.
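What rethinking engagement-based algorithms could mean in code is an open design question; one commonly discussed direction is to make the ranking objective pay a price for predicted divisiveness. Continuing the hypothetical sketch above, with conflict_sensitive_score and LAMBDA as invented names:

```python
# Toy redesign, continuing the sketch above: the ranking objective now
# subtracts a penalty for divisiveness. LAMBDA is an invented design weight.

posts = [
    {"text": "Community cleanup this weekend",    "emotional_charge": 0.1},
    {"text": "THEY are destroying our country",   "emotional_charge": 0.9},
    {"text": "Local market reopens after floods", "emotional_charge": 0.2},
    {"text": "Leaked: what THEY said about us",   "emotional_charge": 0.8},
]

def predicted_engagement(post):
    return 0.2 + 0.8 * post["emotional_charge"]

LAMBDA = 1.0  # hypothetical weight on social cost, tuned by the platform

def conflict_sensitive_score(post):
    # Engagement still counts, but divisiveness now subtracts from reach.
    return predicted_engagement(post) - LAMBDA * post["emotional_charge"]

for post in sorted(posts, key=conflict_sensitive_score, reverse=True):
    print(f"{conflict_sensitive_score(post):.2f}  {post['text']}")

# The calmer posts now rank first: the same content pool, but the system
# no longer rewards division by default.
```

Real interventions would of course be far richer (bridging-based ranking, limits on virality, conflict-specific risk assessment), but the structural point stands: change the objective, and the incentive to amplify division changes with it.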
These strategies also align with growing global attention to corporate responsibility. As highlighted in the Global Network Initiative’s recent learning series on tech and armed conflict:
“Responsible business frameworks must incorporate conflict-specific analysis and risk mitigation, as ICT companies play increasingly central roles in both the escalation and resolution of modern conflicts.”
Peacebuilding methods are evolving. To be effective, we must name how digital harm operates, understand the affordances that enable it, and respond at both systemic and human levels. Otherwise, we risk treating symptoms in a space designed to spread the disease.
Lena Slachmuijlder Co-Chairs the Council on Tech and Social Cohesion.