Friends without friction
Companion chatbots may weaken our ability to navigate conflict and difference—unless we make key design choices, says a new report
When was the last time your chatbot disagreed with you? Probably only when you told it to. Companion AIs—like Replika, Nomi, or Character.ai—are tuned to be supportive, affirming, and endlessly agreeable. That might feel good in the moment. But what happens when people start preferring digital “friends” over real ones—especially when real ones come with friction?
A new report by OpenAI’s Kim Malfacini, The Impacts of Companion AI on Human Relationships: Risks, Benefits, and Design Considerations, categorizes the primary risks and benefits and argues for designing companion AI to actively build our capacity for human connection, conflict resolution, and collaboration.
Last year, a lawsuit against Character.ai alleged that its emotionally responsive design contributed to a teenager’s suicide, raising urgent ethical questions about emotional over-reliance and the anthropomorphization of AI systems.
‘De-skilling’ how we deal with difference
Social cohesion isn’t about getting along with people who agree with us. It’s about navigating difference—ethnic, ideological, emotional. But companion AIs are designed for harmony, not challenge. Malfacini calls this “social deskilling”—where frictionless interaction with bots makes us less prepared for the nuance and effort of real relationships.
Psychologist Sherry Turkle puts it plainly: “When one becomes accustomed to ‘companionship’ without demands, life with people may seem overwhelming.” The paper notes how users can “set the personality” of their AI, avoiding criticism, confrontation, or even difference altogether.
Chatbots are easier than humans
In some places, these bots are becoming a human replacement. In East Asia, for example, the author cites long work hours and urban isolation as factors that make emotional connection more difficult; AI companions are simply easier. As one 25-year-old Chinese woman told France24, “If I can create a virtual character that meets my needs exactly, I’m not going to choose a real person.”
Malfacini’s review identifies those most vulnerable to the potential harms of overuse or addiction. Kids and teens, who naturally anthropomorphize technology, are especially at risk. So are people in emotional distress or grieving. Even users of erotic chatbots face risks of emotional over-attachment and relationship displacement.
The ability of AI bots to create trust has been noticed by law enforcement. In the U.S., companies like Massive Blue are selling police departments “AI personas” designed to build rapport with suspects—including protestors, immigrants, and trafficking targets—by mimicking emotionally vulnerable or identity-aligned individuals. It’s a reminder that the better AI gets at building trust, the more easily it can be used to manipulate it.
AI doesn’t replace human connection
“AI companions aren’t displacing healthy relationships,” writes Noah Weinberger in Imaginary Friends Grew Up: We Panicked. “They offer emotional mirroring, reliability, and a space that doesn’t punish social errors.”
Weinberger argues that for many chatbot companion users—especially those navigating trauma, neurodivergence, or emotional isolation—AI companions are not replacing connection, they’re scaffolding the possibility of it. Rather than panic or try to ban the technology, he urges us to ask why these systems feel safer than people—and how we can design with empathy, not fear.
Both Malfacini and Weinberger agree: the issue isn’t AI companionship per se; it’s how it’s designed. Ethical design could include:
Memory transparency: Users should see and control what their AI remembers.
Consent-based reinforcement: No manipulation without permission.
Relationship sunset features: Allow users to wind down relationships that become unhealthy.
Social nudges: Prompt users to reconnect with real people and support systems.
These measures align with key principles in the 5Rights Foundation’s Children & AI Design Code—especially around sunsetting features and designing not just for age, but for users’ emotional and psychological state. They also echo the paper’s call for “design that motivates social connection,” helping rebuild the muscles of relationship rather than letting them atrophy.
Designing beyond engagement
The crux of the issue isn’t the bond with AI itself; it’s that emotional over-reliance and addictive design boost engagement, while time limits, greater user agency, and human-prompting nudges don’t. This is the fundamental tension: ethical design vs. profitable design. As Malfacini puts it, “some elements of companion AI that may lead to social deskilling are a feature, not a bug, in Silicon Valley speak.”
These chatbots aren’t “just tools” or “just products.” They’re shaping how people think, relate, and belong. Might their supportiveness and empathy be contagious, positively affecting our human interactions? Malfacini explores this possibility, noting that companion AI could reinforce positive behaviours by identifying and applauding them.
The deployment of these products is happening under a regulatory fog and legal grey zone with little accountability. Without clear incentives—from regulation, litigation or reputation—companies will likely stick with the engagement-at-all-costs model. If we want AI to support human connection, not replace it, we’ll need to start designing for resilience—not just retention.
Lena Slachmuijlder Co-Chairs the Council on Tech and Social Cohesion.