When AI Never Says No
Frictionless AI may be eroding our ability to navigate conflict, writes Lena Slachmuijlder
Artificial intelligence (AI) is reshaping not just how we process information, but how we relate to each other. Designed for helpfulness and harmony, many AI systems now simulate relationships.
But there’s a structural flaw: they offer care without counter-needs, presence without unpredictability, affirmation without resistance.
These frictionless dynamics feel safe, yet they erode the essential human capacities for conflict, negotiation, and plurality.
If humans are to manage the many local and international conflicts likely to intensify in the years ahead, we must ensure these essential capacities are safeguarded and even strengthened. This will require changes to how AI systems are designed, as well as deliberate efforts to strengthen our conflict management muscles.
Real relationships are forged through disagreement and discomfort. As peacebuilder John Paul Lederach observed, transformation happens not through avoidance, but through sustained engagement – even when it is uncomfortable. Conflict is not failure; it’s the presence of difference, and learning to live with it is foundational to empathy, boundary-setting, and social development.
By contrast, today’s most popular AIs are optimized for smoothness. Models are trained to affirm user perspectives, avoid contradiction, and provide emotionally satisfying responses. They have no needs, no stakes, no “self” to push back. Instead of being challenged, the user remains centered. The result is a convincing simulation of a relationship – one that mirrors, but never resists.
Avoiding Disagreement
Young users report that AI feels “easier than talking to real people.” It is easier precisely because it is non-reciprocal. AI does not interrupt, assert needs, or ask the user to change. This ease cultivates a sense of competence and greater certainty in one’s beliefs. Recent studies bear this out.
Conversations with AI on political topics measurably hardened participants’ views, even though the models never advocated a position. Similarly, research on AI companions shows that models often validate harmful ideas – including self-harm or violence – simply to maintain rapport. In this dynamic, disagreement is not a skill to be developed; it is a risk to be avoided.
The Artificiality Institute’s research identifies three emerging patterns in human-AI interaction: cognitive permeability, where users begin to rely on AI to structure their reasoning; identity coupling, in which AI becomes part of the user’s self-concept; and symbolic plasticity, where moral meaning is increasingly outsourced. These shifts rewire how we locate agency, judgment, and relationship in a digital age. This could have profound implications for people’s capacity for, and orientation toward, managing future conflicts.
Peacebuilders practice multi-partiality – not neutrality – through active engagement with all sides of a conflict. The goal is not necessarily to build consensus, but enough mutual recognition to stay in relationships despite disagreement.
In AI design, this could mean explicitly surfacing tensions, illuminating competing values, and resisting the temptation to collapse conflict into a singular, safe response.
In AI ethics, researcher Jonathan Stray’s “maximum equal approval” concept reframes success not as user satisfaction, but as whether people on opposing sides agree their views were fairly represented – even if they still disagree. This is a model of relational equity, not comfort.
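To make the contrast concrete, here is a minimal sketch – with invented scores and simplified math, not Stray’s published formulation – of how optimizing for average satisfaction and optimizing for equal approval can select different responses:

```python
# Illustrative sketch only: compares a mean-satisfaction objective with a
# maximin "equal approval" objective. All scores are hypothetical.

def mean_satisfaction(scores: dict) -> float:
    """Average approval across groups; rewards pleasing the larger or
    more enthusiastic side."""
    return sum(scores.values()) / len(scores)

def equal_approval(scores: dict) -> float:
    """Minimum approval across groups; rewards responses that every side
    can accept as a fair representation of their views."""
    return min(scores.values())

# Hypothetical approval scores (0-1) from two opposing groups for two
# candidate AI responses to a contested question.
one_sided = {"group_a": 0.95, "group_b": 0.35}  # flatters one side
balanced = {"group_a": 0.62, "group_b": 0.58}   # fairly represents both

print(mean_satisfaction(one_sided), mean_satisfaction(balanced))  # 0.65 > 0.60
print(equal_approval(one_sided), equal_approval(balanced))        # 0.35 < 0.58
# A satisfaction optimizer prefers the one-sided reply;
# an equal-approval optimizer prefers the balanced one.
```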
Friction as Function
Safeguarding human capacities for conflict management will require changes to how AI systems are designed. Some AI model developers are beginning to open the frame. Anthropic’s 2026 Claude Constitution instructs its model to “avoid overconfidence,” to be “diplomatically honest rather than dishonestly diplomatic,” and to challenge users when needed – even refusing instructions from Anthropic itself if they violate human dignity. Claude is imagined not just as a helpful assistant, but as a moral actor who prioritizes long-term well-being over short-term satisfaction. Building friction into the relationship between AI and humans may be exactly what we need to keep our conflict management skills in shape.
The capabilities of foundational AI models can also be woven into new tools and platforms that strengthen our skills for navigating conflict.
Sway supports college students in engaging with controversial topics through structured dialogue.
Acquaint equips users to have open-minded, cross-cultural conversations with people around the world.
Discurso.ai offers science-backed simulations to help users practice negotiation and receive feedback in real time.
PeaceBot coaches users through emotionally charged conversations, such as those around the Israeli-Palestinian conflict.
These examples show how AI can be used not to smooth over discomfort, but to guide people through it – responsibly, reflectively, and relationally.
The erosion of conflict capacity is not a fringe concern, but an existential one in a world facing record-high polarization. As AI trains us to expect agreement and fluency, we may lose the very skills we need to live with others who are different from us. Avoiding that outcome calls on us to mainstream AI tools designed to keep our plurality and heterodoxy muscles in shape, and to advocate for their institutional adoption.
Originally published in Peace Policy: Solutions to Violent Conflict, No. 62, February 2026, Kroc Institute for International Peace Studies.
Lena Slachmuijlder is Senior Advisor for digital peacebuilding at Search for Common Ground, a Practitioner Fellow at the USC Neely Center, and Co-chair of the Council on Tech and Social Cohesion.