AI can predict war. But will we act in time?
A Global Peace and Security Task Force recommends the G7 lean into AI for prevention, in both foresight and dialogue
If AI could reliably predict war months in advance, would we trust it enough to act decisively? This question looms large as headlines spotlight the rapid militarization of AI—Mistral AI’s partnership with Helsing, OpenAI’s easing of restrictions on military use, and a growing unease that machines are becoming instruments of war, not peace.
Yet this anxiety masks a deeper potential: AI’s ability to anticipate, mitigate, and ultimately transform how we prevent violent conflict. According to the Global Peace and Security Task Force’s new policy brief, AI and Global Security: From Early Warning to AI-Assisted Diplomacy, AI has already demonstrated staggering predictive precision.
In 2021, Rhombus, a Silicon Valley firm, forecast Russia’s full-scale invasion of Ukraine four months in advance with 80% certainty by analyzing vast and diverse datasets—from satellite images to dark web chatter. The U.S. military’s Raven Sentry tool used similar techniques—tracking everything from social media sentiment to nighttime satellite imagery—to anticipate Taliban attacks during the Afghanistan withdrawal. These systems didn’t just mimic human analysts; they outpaced them.
Humans are too slow to respond
As the Task Force report authors Leonardo De Agostini and Michele Giovanardi note, “the biggest challenge is not detecting conflict—but acting on the warning.” Institutional inertia, fragmented data-sharing, and weak integration of AI insights into diplomacy all create a dangerous delay. We’re not short on foresight; we’re short on political will and practical mechanisms to act fast.
To close this “warning-response gap,” the Task Force calls on G7 nations to do four things:
Invest in predictive AI systems and embed them within diplomatic and humanitarian processes.
Form hybrid intelligence teams—technologists, diplomats, and civil society—capable of translating forecasts into real-time action.
Develop participatory AI tools that involve communities in identifying risks and shaping peace efforts.
Standardize data-sharing protocols across governments, NGOs, and tech companies to build cohesive, accurate early warning networks.
Importantly, the report doesn’t stop at detection. It recommends deeper investment in dialogue and deliberation. It underscores that AI isn’t just good at sensing conflict triggers—it can also identify conditions for peace.
Predicting peace
We're not short on ways to understand ripeness for peace. What we need are systems that help us act on it. The global peacebuilding organization Search for Common Ground evaluates its hundreds of peacebuilding programs against a Peace Impact Framework that tracks five vital signs of a healthy society: agency, horizontal trust, institutional trust, security, and support for peace. Platforms like CulturePulse.ai use real-time sentiment analysis to map and forecast collective emotions—tracking cultural shifts and signaling when a community may be ready for cooperation, not just at risk of conflict.
The same is true in the civic sphere. The recent report AI and the Future of Digital Public Squares, co-authored by more than 20 experts in tech, democracy, and peacebuilding, lays out how AI-enabled tools can strengthen civic discourse and scale inclusive deliberation. As the authors put it:
“LLMs offer opportunities for a paradigm shift towards more decentralized, participatory online spaces that can be used to facilitate deliberative dialogues at scale.”
Platforms like Polis and Remesh already show how this works in practice. Last year, Remesh helped uncover shared values between Israeli and Palestinian peacebuilders through a broad-scale dialogue. Common Good AI used deliberative technology in the U.S. city of Cincinnati to help communities find empathy and common ground on the divisive issues of policing and race.
Think of these approaches as an investment. In fractured societies, deliberative dialogue is infrastructure. It builds civic trust, reduces polarization, and creates the conditions where peace initiatives can succeed and violence becomes less acceptable.
Yes, there are ethical concerns
Harnessing AI for peace demands governance and raises ethical questions. Both the Task Force and Digital Public Squares reports highlight the importance of transparency, inclusivity, and ethical oversight. These systems must be built with—not just for—the communities they aim to serve.
AI’s military potential will continue to command headlines. But as we edge deeper into an unstable, multipolar world, the quiet revolution of peace-focused AI deserves louder support and greater investment.
We now have tools that can detect war before it starts and platforms that can deepen civic trust before it frays.
Here are some ways we can advance this positive use of AI:
Activate and invest in more use cases of participatory AI in peacebuilding, in order to build an evidence base of promising practices.
Fund open-source, ethically governed early warning tools.
Integrate AI-enhanced dialogue systems into local governance, mediation, and diplomacy.
Invest in training both peacebuilders and technologists to adopt and adapt the available and emerging tools for diverse contexts.
If you’re involved in similar initiatives, we’d love to hear from you! If we harness AI wisely, rooted in justice, inclusion, and shared responsibility, it can transform the way we prevent war and build peace.
Lena Slachmuijlder co-chairs the Council on Tech and Social Cohesion.