Can AI speak for the absent?
Researchers are exploring how AI might serve as a proxy for marginalized or future voices—raising profound questions for peacebuilding and inclusive decision-making.
Peace agreements are more likely to hold when everyone with a stake in the outcome has a seat at the negotiating table. Yet this is rarely the case. Certain groups—those displaced by conflict, silenced by repression, or deemed peripheral, like women, youth, or civil society—are consistently left out. And some, like future generations, can’t yet speak for themselves at all.
What if AI could help us hear the perspectives of those who are not—but should be—at the table?
In ‘Could AI speak on behalf of future humans?’, published in the Stanford Social Innovation Review, Konstantin Scheuermann and Angela Aristidou of University College London’s School of Management introduce the concept of AI Voice: the use of AI systems, particularly generative models, to articulate perspectives that are typically absent in collective decision-making.
“An enduring societal challenge the world over is a ‘perspective deficit’ in collective decision-making,” the authors write. “Some perspectives are not (adequately) heard and may not receive fair and inclusive representation.”
Designing AI to act as a proxy, a representative voice, for groups like future generations, nature, or marginalized communities could be one way of addressing this ‘perspective deficit’.
Voice vs decisions
The authors distinguish between AI systems that are given a right to voice (they can provide input, analysis, or argumentation) and those that have a right to decide (they can make or influence final decisions).
Giving AI voice rights means acknowledging that there are perspectives which we cannot access directly, but still want to include in the conversation. And it requires designing AI not just to generate content, but to do so responsibly, with mechanisms for oversight, revalidation, and withdrawal if needed.
Several examples show how AI Voice is already being explored:
At Salesforce, the Einstein AI tool sits in on weekly executive meetings, offering insights derived from customer data. It gives voice to customer trends that would otherwise be flattened in strategy conversations.
Dictador (a rum producer) and Tieto (a Nordic IT firm) have gone further, assigning AI systems formal roles in leadership structures. This suggests that AI might hold a kind of structured presence in decision-making spaces, particularly when representing large, dispersed, or silent stakeholder groups.
While these use cases are not about peace processes, they point toward possible futures where AI systems, carefully designed, could help elevate the interests of those who aren’t directly represented in negotiations or consultations.
Rivers’ legal personhood
Legally, efforts to ‘hear’ the natural world have led to granting rivers and mountains legal status. In 2023, the Komi Memem River in the Brazilian Amazon was granted legal personhood, a recognition that gives it rights to protection, integrity, and voice—through human stewards.
This is part of a growing global movement, including the Whanganui River in New Zealand and all rivers in Bangladesh, to embed nature in legal systems as a way of hearing and defending what would otherwise remain silent. AI Voice could offer a complementary pathway—expressing these interests not in court, but in deliberative spaces.
AI distortion
Among the risks inherent in this idea is that AI systems, if not regularly updated, may fossilize outdated perspectives, misrepresenting the very communities they were meant to include. If the context shifts and the model doesn’t evolve, the ‘voice’ becomes distorted.
In human systems, representation can be refreshed through elections or community consultations. With AI, we’ll need equivalent mechanisms: retraining, participatory audits, and the humility to decommission models when they no longer serve.
Trust in AI Voice will depend on governance, transparency, and inclusion in how the voice is constructed—and by whom.
Seeing the blind spots
AI could also be harnessed to analyze blind spots or forecast future scenarios in peace processes, as discussed at the recent “AI and the Future of Conflict Resolution” event hosted by Harvard Kennedy School’s Belfer Center.
Harvard Law lecturer Dr. Jeffrey Seul emphasized that AI can support mediators in identifying hidden assumptions, generating alternative outcomes, and presenting less-polarizing proposals. But he also cautioned: “AI must be positioned as a support to human judgment, not a replacement for the nuanced cultural, historical context, and live human empathy that a peace process demands.”
University of Birmingham professor Dr. Martin Wählisch added that as digital tools like extended reality and AI-powered behavioral analysis evolve, peacebuilders will face growing pressure to adopt them. But adoption must come with safeguards: “It is important to recognize the increasing dominance of private companies in the AI ecosystem,” he warned, pointing to governance and neutrality risks.
Designing for inclusion
The idea of AI Voice asks us to rethink inclusion. As Scheuermann and Aristidou write: “AI Voice cannot realize its promise without first challenging how voice is given and withheld... and how the new technology may and does unsettle the status quo.”
AI won’t replace the absent. But under the right conditions, it may help approximate their perspectives—drawing from relevant data, lived experiences, and imagined futures—to offer structured input where it’s otherwise missing.
Used carelessly, AI risks reinforcing old exclusions. But designed wisely, it can become more than a tool. It becomes a measure of how far we’re willing to go to widen the circle—and include those who’ve too often been left outside it.
Lena Slachmuijlder is Senior Advisor at Search for Common Ground and co-chairs the Council on Tech and Social Cohesion.