The AI Cooperation Test
Four thought leaders examine how AI design is reshaping trust, relationships, and our capacity to cooperate.
We are not just building artificial intelligence. We are training it on ourselves—on our language, our preferences, our conflicts, our collaborations, our fears, and our aspirations. Every prompt, post, and prediction becomes part of a feedback loop. AI learns from us, then subtly reshapes the environment in which we think and relate.
If AI is built on past human data, will it naturally make us more collaborative—or simply scale our divisions? Are we seeing signs that AI design, particularly our relationships with generative AI, large language models, and chatbots, is prompting us to work better with other humans, not just to work better with the AI?
Across research institutions, policy centers, and cultural forums, a set of voices is converging on this concern: AI is not neutral; it amplifies the system it is embedded in. And if that system already struggles with polarization, loneliness, and institutional distrust, why would AI behave differently?
Here are four recent perspectives that focus specifically on AI’s impact on relationships, trust, and social cohesion—and what they believe we must do next.
It’s the worldview, first of all
At ProSocial World, evolutionary theorists Paul Atkins and David Sloan Wilson argue in Evolving Prosocial AI that AI governance debates often miss the deepest layer of influence: worldview.
Their central claim is that technologies scale the values of the systems that produce them. If AI is developed within a competitive, extractive, market-maximizing frame, it will reinforce competition, extraction, and short-term optimization. If it is developed within a cooperation-centered frame, it can instead strengthen shared intelligence and collective problem-solving.
They frame the challenge in stark terms:
“The most important design question is this: Can AI help make cooperation rewarding, not self-sacrificing?”
Rather than treating cooperation as morally admirable but strategically naïve, they argue that institutions—and technologies—should be structured so that prosocial behavior becomes adaptive. Drawing on Nobel Economics Laureate Elinor Ostrom’s research on successful commons governance, they propose AI systems that:
Strengthen trust within groups
Support fair participation in decision-making
Make shared goals visible and measurable
Distribute benefits more equitably
In this view, AI is not simply a productivity tool. It is infrastructure for large-scale cooperation. But only if it is intentionally designed as such.
What’s being eroded
The Center for Humane Technology (CHT) approaches the issue from another angle: what we are already losing.
In its recently launched initiative AI and What Makes Us Human, CHT warns that AI systems optimized for engagement, persuasion, or frictionless interaction may erode the very social capacities democratic societies depend on.
As CHT Policy Director Camille Carlton writes:
“From extractive to regenerative technology. From cognitive overload to collective wisdom. From loneliness to belonging.”
Their warning is clear: when systems prioritize frictionless interaction over relational depth, we risk degrading the skills required for real connection. AI does not just shape attention—it shapes norms. If we increasingly replace human relationships with artificial connection, we may weaken social trust and community resilience.
One line captures the stakes:
“A society without strong human relationships is not merely lonelier — it is fragile, less resilient, and more susceptible to polarization and exploitation.”
For CHT, the solution is not rejection of AI, but reorientation. Social impact must not be an afterthought in product design. It must be central. Regenerative technology—systems that strengthen communities rather than extract from them—should be the standard.
Designing to flourish
A third perspective comes from researchers at Harvard University’s Human Flourishing Program, whose work provides a conceptual foundation for evaluating AI’s social impact.
In their white paper Social AI and Human Connections: Benefits, Risks and Social Impact, the authors argue that AI systems—especially chatbots designed for companionship—should be assessed not only for safety, but for how they affect the following core domains of human flourishing:
Close relationships
Mental and physical health
Moral character
Meaning and purpose
Agency and autonomy
In other words, convenience, engagement, and user growth are insufficient metrics if the systems undermine relational depth or developmental health.
Building on this flourishing foundation, the University of Southern California’s Neely Center for Ethical Leadership and Decision Making has translated these concerns into concrete product design principles. Where the Harvard framework asks “Does this support flourishing?”, the Neely Design Code asks “What specific design requirements would ensure that it does?”
In its draft Social AI Design Code, the Neely Center proposes enforceable guardrails for chatbots that simulate social relationships, particularly those used by youth, including:
Restricting AI companions with human-like emotional features to adults,
Requiring therapeutic AI applications to be licensed, supervised, and independently verified,
Embedding social impact assessment into AI audits,
Designing chatbots to reinforce human-to-human relationships rather than replace them, and
Being explicit and consistent about the non-human nature of the system.
The Neely Code reminds us that emotional design is moral design. If AI simulates empathy, intimacy, or authority, it influences development—especially for children and adolescents.
Our co-evolution with AI
The Artificiality Institute adds a cultural dimension to the conversation. Rather than focusing only on regulation or governance, it emphasizes co-evolution.
AI is not just being trained on us. We are being trained by it.
Every interaction subtly shifts norms: what counts as knowledge, how quickly we expect answers, how much friction we tolerate in disagreement, how we define creativity. The risk is not only manipulation, but habituation—outsourcing reflection to fluent systems.
As Co-Founder Helen Edwards writes in The Artificiality: AI, Culture, and Why the Future Will Be Co-Evolution, these are cooperation problems. Cooperation is hard. It requires patience, perspective-taking, and shared narratives. AI systems that prioritize speed and optimization may erode those capacities if left unchecked.
The Artificiality perspective invites vigilance at the cultural level: not panic, not techno-utopianism, but awareness. What habits are we reinforcing? What expectations are we normalizing? What skills are we neglecting?
Governance matters. But so does culture.
AI as mirror and multiplier
Across these four perspectives—evolutionary governance, humane technology, flourishing ethics, and cultural co-evolution—a shared theme emerges:
AI is a multiplier.
It can amplify empathy or extraction.
It can scaffold cooperation or accelerate division.
It can support flourishing—or simulate it.
Together, they urge us to ask, consistently and publicly:
Is AI helping us understand each other—or merely predict and persuade?
Is it reinforcing empathy—or extracting attention?
Is it strengthening institutions—or weakening trust?
If we want AI to serve the common good, we will need shared guardrails, new incentives, and deeper cultural reflection. That includes aligning market rewards with prosocial outcomes, embedding social impact into audits and regulation, and cultivating habits of mindful engagement.
In the end, this is not just about ‘intelligence’. It is about whether the systems we build, and the ‘intelligence’ metrics we measure them by, make cooperation easier or harder. The future of AI is not just technical. It is relational.
Lena Slachmuijlder is Senior Advisor for digital peacebuilding at Search for Common Ground, a Practitioner Fellow at the USC Neely Center, and Co-chair of the Council on Tech and Social Cohesion.

