AI, war and peace, post Paris
The AI Action Summit revealed how AI is being pulled into war-making even as it opens the door to wider participation in its design and governance, reflects Lena Slachmuijlder
I'm fresh from five days in Paris, where the AI Action Summit and its many side events exposed a whirlwind of ideas, power struggles, and competing visions for the future of AI, from digital democracy to warfare and beyond.
Depending on who you asked, it was either a step forward or a massive disappointment. Some felt they had made gains, while others left feeling that AI safety concerns were sidelined, and that the divide between the Global North and South was sharper than ever.
Without claiming to offer a comprehensive read-out, I'm sharing five key takeaways on the aspects most directly affecting AI's influence on peace, war, and social cohesion.
1️⃣ AI for war-making
AI is increasingly shaping modern warfare, and efforts to set international guardrails remain fractured. While 26 nations signed the Paris Declaration on Maintaining Human Control in AI-enabled Weapon Systems, key military powers, including the US, UK, China, and France, did not.
Mistral AI and Helsing announced a partnership to develop AI-driven vision-language-action models for military applications, another sign that the lines between commercial AI innovation and defense technology are blurring. Branka Panic, CEO of AI for Peace, observed how these developments align with a broader shift in AI's role in warfare. As she put it, writing on LinkedIn:
"This mirrors trends among US-based companies, such as OpenAI quietly lifting its ban on using ChatGPT for military purposes and Google dropping its pledge not to develop AI for weapons."
Although 61 countries signed the Paris Summit Declaration on ‘Inclusive’ AI, the UK and US declined to sign, with the UK citing a lack of clarity on global governance and insufficient focus on national security risks.
2️⃣ Scaling participatory AI
Amidst the top-down governance debates, participatory AI gained ground as a model for designing AI with and for the communities it impacts, and for building higher levels of trust and collaboration through AI-enabled deliberative technologies.
Tim Davies and the Tech and Global Innovation Hub hosted a day-long Participatory AI Research & Practice Symposium, bringing together academics and practitioners looking to scale AI-enabled deliberation, including as a pathway to more participatory AI governance. I highly recommend clicking here for the full list of papers and presentations.
Make.org helped launch the Worldwide Alliance for AI and Democracy, where over 100 global leaders and organizations pledged to uphold transparency, accountability, and inclusivity in AI governance. As they put it:
"Democracy cannot be left to chance, nor can the future of AI. By aligning our efforts, we can build a world where technological innovation serves as a pillar for democratic resilience."
Digital Action’s AI Commons Initiative emerged as a framework for democratizing AI through global citizen power. Their gathering emphasized the importance of ensuring that ‘diverse values’ guide AI development.
"We are delighted that our gathering could mark a step toward ensuring that the Global Majority's voices shape the future of AI governance."
This shift toward participatory AI represents a growing pushback against the dominance of tech giants and national security-driven AI policies, advocating instead for community-led AI design.
3️⃣ Safety for children, women and girls
The summit placed a renewed focus on AI’s disproportionate risks for women, girls, and children—highlighting both new commitments and ongoing gaps in AI safety.
A joint statement from 12 governments, including France, the UK, Canada, and Mexico, called for gender to be a core part of AI governance—particularly in tackling technology-facilitated gender-based violence (TFGBV).
UNESCO advanced consultations on its emerging governance guidelines for generative AI, designed to complement its existing Internet for Trust guidelines. UNESCO's recent report, 'Your Opinion Doesn't Matter Anyway,' highlighted the risks of generative AI amplifying gender-based harassment.
AI and child safety were also central concerns. The Paris Peace Forum launched a new global coalition for AI and children, co-led with everyone.AI, to develop clear safeguards ensuring AI supports children's development rather than harming them.
The growing focus on safety-by-design AI is encouraging, but without enforcement mechanisms, questions remain about whether tech companies will self-regulate or require stronger policies to protect vulnerable users.
4️⃣ Red-Teaming at scale
I learned a lot from the expanding initiatives around AI red-teaming, the practice of stress-testing AI models for risks before they cause harm. A new Data & Society report called for red-teaming to extend beyond corporate AI labs and into the public sphere. As the report states:
"Red-teaming for AI is often limited to private-sector testing, yet its impact would be significantly greater if integrated into public interest assessments, bringing in diverse communities to identify blind spots and systemic risks."
This aligns with Humane Intelligence's ongoing efforts to diversify who contributes to AI evaluations and audits, by developing hands-on, measurable methods for assessing the real-time societal impact of AI models.
Whether companies and governments will fully embrace red-teaming in the public interest remains an open question.
5️⃣ Open-Source safety tools
One of the most tangible and actionable outcomes of the summit was the launch of ROOST (Robust Open-Source Online Safety Tools)—a $27 million initiative backed by Google, OpenAI, Discord, and Roblox to build and scale trust & safety tools for platforms of all sizes.
As Casey Newton wrote in Platformer:
"Trust and safety engineers are tired of reinventing the wheel. ROOST represents the most significant effort to date to build common tools that platforms can share. This could be game-changing for platforms without massive moderation teams."
ROOST aims to equip platforms—especially smaller ones—with open-source tools for combating online harms, ensuring that AI safety doesn’t just belong to the tech giants.
This launch comes at a critical moment, as the AI & disinformation conversation at the French-African Foundation gathering emphasized how weakened trust & safety measures are fueling real-world violence across the continent.
Reality check
The AI Action Summit exposed both the urgency and the fractures in AI governance.
✔️ There were real wins, like the momentum behind participatory AI, public-interest red-teaming, and ROOST's launch.
❌ But there were glaring failures, from the absence of across-the-board alignment on preventing AI's use in war-making to the lack of broad commitment to wider, more participatory AI governance.
Governments remain divided on how fast and how safely AI should evolve, and whether safety mechanisms should be industry-led or state-driven. As Humane Intelligence’s Theodora Skeadas reflected in her LinkedIn post,
“Pro-innovation accelerationists fight those advocating for safety. Within the safety community, long-termists fight those focused on immediate harms. And even within the ethics community, there’s infighting. If we don’t start working across these divides, AI governance will fail.”
My takeaway: AI’s future hinges on political choices. It can deepen existing power imbalances—or be intentionally shaped to build trust, inclusion, and social cohesion.
Lena Slachmuijlder Co-Chairs the Council on Tech and Social Cohesion.