Intercepting Harm
A new report offers a roadmap for stopping abuse at the source, before it ever reaches its target.
Too often, the burden of confronting technology-facilitated gender-based violence (TFGBV) falls squarely on those most affected. Our typical responses—removing abusive content after it’s gone viral or pursuing legal action long after the damage is done—arrive too late. Survivors are expected to report, mute, block, and navigate complex settings, all while enduring the fallout.
But what if the harm never had to reach them at all? What if we could redesign the digital sphere so that abuse is harder to commit in the first place?
That’s the driving perspective in “Prevention by Design: A Roadmap for Tackling TFGBV at the Source,” a new white paper co-published by the Council on Tech and Social Cohesion, the Integrity Institute, and Search for Common Ground.
Authored by 14 experts, the report is the result of a collaboration between technologists who have worked on trust and safety and product teams at major platforms, and civil society leaders from South Asia who support survivors, pursue accountability, and push for safer digital spaces. It bridges platform expertise with frontline insight to offer concrete, feasible interventions, none of which require new laws or lengthy prosecutions. Just smarter design.
As the report puts it:
“We can’t outscale abuse by adding more moderators alone. We need to reduce its volume at the source—by designing systems that discourage and defuse it in the first place.”
Why ‘connecting the world’ favors abuse
When social media platforms were built, they were optimized to connect us—to make it easy to find people, invite them into groups, see who’s connected to whom. But those same features that make it easy to build community also make it easy to target, dox, or swarm someone.
TFGBV is amplified by these design choices. Algorithms prioritize engagement, which often means elevating the most inflammatory or emotional content. Abusers can easily find victims’ networks, create multiple new accounts to bypass blocks, or flood reporting systems to silence dissenting voices.
Instead of waiting for these harms to happen and then trying to clean up afterward, this report urges us to redesign the space itself by focusing on two tiers of action:
Behavior-Focused Interventions, which aim to influence user interactions in real time—discouraging harmful behavior and empowering users to shape safer online experiences.
Upstream Design Solutions, which tackle the structural features of platforms—like algorithms, defaults, and user onboarding—that shape what’s possible long before abuse occurs.
Each recommendation is broken down into what it is, how it works, and the evidence behind it—plus examples of platforms that have implemented pieces of these ideas.
The white paper offers eight key recommendations:
Nudges that prompt users to reconsider before posting harmful content (a simple sketch of how such a check might work follows this list).
Filters that allow users to set boundaries around the content they see.
Safer onboarding to guide new users toward protective settings from the start.
Default privacy settings that limit exposure and vulnerability by design.
Rate limits on new or unverified accounts, making it harder for abusers to mass-target users.
Algorithmic changes to move away from engagement-based ranking, which tends to amplify inflammatory and harmful content.
Quarantine systems for gray-area content that may not violate policies but can still cause harm.
Improved feedback loops for reporting tools, ensuring users feel heard and protected when they flag abuse.
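The paper describes these interventions in terms of user behavior and evidence rather than implementation details. Purely as an illustration of the first recommendation, the sketch below shows roughly what a pre-posting nudge could look like in Python; the classifier, threshold, and prompt wording are assumptions made for this example, not specifications from the report.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class NudgeResult:
    publish: bool                 # True: post the comment unchanged
    prompt: Optional[str] = None  # message shown to the author instead of posting

def nudge_before_posting(
    draft: str,
    score_toxicity: Callable[[str], float],  # any classifier returning a 0-1 probability
    threshold: float = 0.8,                  # illustrative cutoff, tuned per platform
) -> NudgeResult:
    """Pause and ask the author to reconsider a likely-harmful draft.

    The author keeps full control: they can edit, delete, or post anyway.
    """
    if score_toxicity(draft) >= threshold:
        return NudgeResult(
            publish=False,
            prompt="This comment may be hurtful. Do you want to edit it before posting?",
        )
    return NudgeResult(publish=True)
```

Production systems are of course far more sophisticated, but the core logic of a nudge is small, cheap to add, and leaves the final decision with the user.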
The report shows that many of these ideas are already being deployed by companies such as Instagram, X, TikTok, and FACEIT, and they are delivering results:
Instagram’s Hidden Words filter led to a 40% drop in harmful comments for users with large followings.
FACEIT, a major gaming platform, saw a 20% decline in toxic messages after using the Perspective API for filtering (a rough sketch of this kind of filtering appears below).
Instagram’s nudges prompted users to delete or amend their comments 50% of the time.
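The report itself contains no code, but the FACEIT example rests on a publicly available tool: Google Jigsaw’s Perspective API, which returns the probability that a comment is toxic. A minimal, illustrative Python sketch of that kind of filtering might look like the following; the endpoint and response fields follow Perspective’s documented v1alpha1 API, while the threshold and error handling are assumptions for the example.

```python
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text: str, api_key: str) -> float:
    """Ask the Perspective API for a TOXICITY probability between 0 and 1."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def should_hold_message(text: str, api_key: str, threshold: float = 0.8) -> bool:
    """Hold (or hide) a message whose toxicity score exceeds the chosen threshold."""
    return toxicity_score(text, api_key) >= threshold
```

In practice, platforms layer scores like this with user-facing settings, so that each person can decide where their own boundary sits.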
As one quote from the report highlights:
“The nudge changes the behavior in the moment, but more importantly, it has a lasting impact. People are more likely to rethink their approach in future interactions.”
Growing demand
The call for safety-by-design isn’t happening in isolation. Last year, the Australian eSafety Commissioner produced a guide for the tech industry entitled Technology, Gendered Violence and Safety by Design, highlighting many promising practices from Apple, Google, TikTok, Meta, YouTube, Bumble and Twitch. The guide points to these practices to underscore its overall message: “The burden of safety should never fall solely on the user, especially those who are victim survivors of gender-based violence. This means anticipating potential risks during online interactions and designing features to eliminate potential misuse, reducing people’s exposure to harms.”
Last month, the UK’s Office of Communications (Ofcom) kicked off a consultation on its draft guidance, which sets out nine areas where technology firms should do more to improve women’s and girls’ online safety, including designing their services to prevent harm and support their users.
This guidance comes as increasing evidence points to how misogynistic online ecosystems don’t just harm individuals—they can become gateways to extremism and offline violence. As we design safer platforms, we’re not just protecting people from harassment; we’re disrupting radicalization pathways.
No, it doesn’t require regulation
There’s a temptation to believe that safer platforms depend on sweeping legislation or costly backend overhauls. But Prevention by Design shows that many of the most effective changes are low-cost, scalable, and already being used—just not comprehensively.
And there’s another incentive: this isn’t just good policy—it’s good business. Safer platforms build trust, boost user satisfaction, and reduce churn. As the report puts it:
“By being transparent about the tools available and how they can enhance the user's experience, platforms can build long-term trust, increasing the likelihood that users will remain engaged and continue using the service with confidence.”
This is a call to the industry: safety doesn’t have to wait for legislation. It can start now—with smarter defaults, better design, and the will to act.
Lena Slachmuijlder co-chairs the Council on Tech and Social Cohesion.