Peace by Prompt
Custom AI chatbots are supporting peacebuilders in Sudan and offering Congolese audiences an antidote to bias.
With people turning to AI chatbots as therapists, coaches, and lawyers, should peacebuilders also look to Grok or Claude for mediation advice? Not necessarily, according to new research by the Institute for Integrated Transitions (IFIT) warning of the risks of taking AI mediation advice at face value.
That’s why initiatives to customize AI chatbots in Africa are worth watching. Two efforts in particular—Akord AI in Sudan and Cocorico in the Democratic Republic of Congo—show how customization, curation, and ethical guardrails can turn off-the-shelf AI chatbots into real peacebuilding tools.
Akord AI and the peace accord library
Developed by Conflict Dynamics International, Akord AI is a chatbot designed to support Sudanese peacebuilders, civil society actors, diplomats, and policy influencers. Instead of scraping the internet, it draws exclusively from a curated library of more than 3,000 resources—peace agreements, constitutional texts, case studies on women’s inclusion, and strategies from both global and local sources, in English and Modern Standard Arabic. Because it stays within this curated library, Akord has so far avoided the hallucinations common in mainstream chatbots.
The design principle is transparency. “We didn’t want users guessing whether a response was accurate,” said Nanako Tamaru of CDI. “Every answer links to its source. You can decide for yourself if it’s credible.” Nearly 1,500 people—from Sudanese youth ambassadors to foreign ministry staff—have already used it.
Akord is powered by GPT-4o, refined through red-teaming and ethical prompt design. It rejects requests that promote hate, disinformation, or electoral manipulation. “We’re not trying to replace expertise,” said Tamaru. “Just making it easier to find—especially when it’s scattered across decades of reports, agreements, and strategies.”
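How might such a system work in practice? Below is a minimal Python sketch of the retrieval-grounded pattern Akord describes: answers are drawn only from a vetted library, every response carries its source IDs, and off-limits requests are refused. The library entries, blocked-topic list, and helper names are hypothetical illustrations, not CDI’s actual implementation.

```python
from openai import OpenAI

# Toy stand-in for a curated corpus; Akord's real library holds 3,000+ vetted documents.
LIBRARY = [
    {"id": "JPA-2020", "title": "Juba Peace Agreement (2020)",
     "text": "Provisions on power sharing, security arrangements, and transitional justice."},
    {"id": "WPS-CS", "title": "Case study: women's inclusion in Sudanese peace talks",
     "text": "Lessons on quotas, delegate selection, and civil society consultation."},
]

# Crude guardrail for illustration only; real systems rely on red-teamed policies, not keyword lists.
BLOCKED_TOPICS = ("hate", "disinformation", "electoral manipulation")

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Rank documents by naive keyword overlap; a production system would use embeddings."""
    words = question.lower().split()
    return sorted(LIBRARY,
                  key=lambda d: sum(w in d["text"].lower() for w in words),
                  reverse=True)[:k]

def answer(question: str) -> str:
    if any(topic in question.lower() for topic in BLOCKED_TOPICS):
        return "Request declined: this assistant does not support that use."
    sources = retrieve(question)
    context = "\n\n".join(f"[{d['id']}] {d['title']}: {d['text']}" for d in sources)
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content":
                "Answer ONLY from the sources below, citing source IDs in brackets. "
                "If the sources do not cover the question, say so rather than guessing.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    # Surface the sources alongside the answer so users can judge credibility themselves.
    return f"{response.choices[0].message.content}\n\nSources: " + ", ".join(d["id"] for d in sources)

print(answer("What do past agreements say about women's inclusion?"))
```

Grounding the prompt in retrieved documents, rather than the model’s open-ended knowledge, is what lets a system like Akord link every answer back to a checkable source.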
The team plans to expand the library beyond written records by incorporating oral histories and tribal agreements, recognizing that much of Sudan’s conflict and peacebuilding knowledge lives outside formal archives.
Kinshasa’s AI analyst
In Kinshasa, a different experiment is unfolding. “Cocorico,” an AI chatbot developed by Kinshasa Television, isn’t just helping in the newsroom—it’s become an on-air analyst. It has weighed in on issues from the DRC–Rwanda peace talks to UN expert reports, government reshuffles, and legal cases.
For Marius Muhunga, CEO of Kinshasa Television and host of the show, Cocorico’s role is intentional. “In the DRC, political polarization and self-censorship are everywhere,” he said. “Many analysts avoid speaking publicly, and those who do often represent just one side. Cocorico has changed that. It doesn’t play favorites.”
Muhunga built Cocorico with OpenAI’s Custom GPT tool, shaping its role and posture and insisting on multi-partial analysis that consistently highlights diverse perspectives. He regularly updates its reference library with recent news articles, academic papers, and government releases.
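For readers curious what such a configuration might look like, here is a hedged Python sketch of “multi-partial” instructions, expressed through the standard OpenAI chat API. Cocorico itself is configured through OpenAI’s Custom GPT builder, and its real instructions are not public; the wording below is purely illustrative.

```python
from openai import OpenAI

# Hypothetical instructions in the spirit of Cocorico's multi-partial posture.
MULTIPARTIAL_INSTRUCTIONS = """\
You are an on-air political analyst for a Congolese audience.
For every question:
1. Present the strongest version of each major party's position.
2. Ground claims in the supplied reference documents; say when evidence is thin.
3. Do not endorse any side, candidate, or faction.
4. Flag rumors and unverified claims explicitly.
"""

def analyze(question: str, references: str) -> str:
    """Ask for a balanced analysis grounded in a regularly updated reference set."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": MULTIPARTIAL_INSTRUCTIONS + "\nReferences:\n" + references},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(analyze("What is at stake in the DRC-Rwanda talks?",
              "Recent news articles, UN expert reports, official communiqués."))
```

The point is less the code than the posture it encodes: the instructions, rather than the underlying model, are what push every answer toward presenting multiple sides.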
“We’ve had praise from political leaders, other analysts, and our viewers, who say Cocorico is one of the most clear and balanced sources of political insight on national television,” says Muhunga.
In a media landscape where self-censorship is common and criticism can carry real danger, Cocorico’s AI-powered framing, grounded in evidence and resistant to overt partisanship, has put healthy, balanced discourse within reach.
When AI misleads mediators
Without such careful design, the risks are clear, as recent research from IFIT underscores.
In its report “AI on the Frontline: Evaluating Large Language Models in Real-World Conflict Resolution,” IFIT tested four chatbots—GPT-4, Claude, Gemini, and Grok—by asking each for advice on how to advance sensitive, real-world mediation scenarios from Sudan, Syria, and Mexico. The researchers then scored the quality of each chatbot’s first answer, without further prompting.
The results were sobering: the average score was just 26.7 out of 100. Models failed to ask basic clarifying questions, and sometimes gave vague or risky advice. On dimensions like due diligence, some models scored as low as 0.1 out of 10.
As the report warned: “No single tool consistently provided conflict-appropriate advice. Worse, some models offered guidance that could plausibly endanger users on the ground or exacerbate existing conflict.”
In short: mainstream chatbots, left unmodified, can mislead or even put peacebuilders at risk.
What makes these two initiatives stand out is not their novelty but their intentionality. Akord roots Sudanese questions in evidence and local knowledge, not slogans or speculation. Cocorico gives the Congolese public an alternative to polarized punditry, offering multi-partial analysis where impartial experts are often silent.
Neither is perfect. Both are still in development. But they illustrate a critical point: AI is not inherently good or bad—it is shaped by design choices.
Lena Slachmuijlder is Senior Advisor at Search for Common Ground and co-chairs the Council on Tech and Social Cohesion.