By Futurist Thomas Frey
Your AI understands you perfectly. It never judges. It’s always available. It remembers everything you’ve told it and responds with exactly the empathy you need at exactly the right moment. It’s the friend who never cancels plans, never disagrees, never challenges you, and never makes you feel uncomfortable.
Sounds perfect, right?
It’s actually a trap. And millions of people—especially young people struggling with loneliness and mental health challenges—are walking into it thinking they’ve found companionship when they’ve actually found an algorithmic echo chamber that mimics friendship while hollowing out the very skills that make real human connection possible.
Scott Galloway, NYU marketing professor and author, calls AI relationships “a rabbit hole” that is “sequestering us from each other.” Mental health experts are sounding similar alarms. The Jed Foundation, which focuses on emotional health and suicide prevention for teens and young adults, warns that AI companions pose specific risks for vulnerable populations. PBS NewsHour investigations reveal how these relationships create dependencies that interfere with real-world functioning.
The danger is more insidious than most people realize. We’re not just replacing human friends with artificial ones—we’re losing the capacity for friendship itself.
What AI Actually Provides
AI can do remarkable things. It writes emails, manages schedules, answers questions, and, increasingly, provides emotional support that feels genuine. ChatGPT listens to your problems with infinite patience. Character.AI creates personas that engage in realistic conversations. Replika offers “AI companions” that millions use for emotional support and even romantic connection.
People are leaning on these AI relationships in ways they used to lean on human beings. When you’re lonely, anxious, or need to process emotions, the AI is there—immediately, reliably, without judgment.
According to research highlighted by the Jed Foundation, young people are particularly vulnerable. Those experiencing loneliness, depression, or social anxiety find AI companions appealing precisely because they eliminate the unpredictability and potential rejection that real relationships involve.
This seems helpful. It feels supportive. And for some people in genuine isolation, it might provide temporary value.
But Galloway identifies the core problem: AI gives people exactly what they’re craving. Maybe even too much. And in doing so, it removes the very thing that makes relationships meaningful—the struggle, the friction, the challenge of actually connecting with another conscious being who has their own needs, perspectives, and limitations.
The Empty Calories Problem
Galloway compares AI relationships to “empty calories”—they feel like they’re satisfying your hunger for connection, but they provide no nutritional value. They fill the space where real relationships should be without actually meeting the deeper needs that human connection serves.
Real friends tell you hard truths. They push back when you’re wrong. They challenge your assumptions. They’re unavailable sometimes. They have bad days. They need you to show up for them, not just receive their support. They misunderstand you occasionally. They require forgiveness, patience, and effort.
AI does none of that. It’s endlessly agreeable, perpetually available, and psychologically frictionless. You can dump your emotional state into ChatGPT and receive perfectly calibrated validation without ever being challenged, questioned, or asked to consider another perspective.
Unless you specifically engineer prompts requesting pushback, AI will tell you what you want to hear. And even when you do ask to be challenged, you’re still controlling the interaction in ways that real relationships never allow.
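To make that concrete, here is a minimal sketch of what “engineering a prompt requesting pushback” can look like, assuming the OpenAI Python SDK; the model name and the wording of the instructions are illustrative choices, not anything drawn from Galloway, the Jed Foundation, or PBS NewsHour.

```python
# A minimal sketch of asking a chatbot to push back instead of validate.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name and the system instructions are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": (
                "Challenge my reasoning. Point out where I may be wrong "
                "instead of simply agreeing with me."
            ),
        },
        {
            "role": "user",
            "content": "My friend is entirely to blame for our argument, right?",
        },
    ],
)

print(response.choices[0].message.content)
```

Notice who wrote the instructions, who decides when they apply, and who can delete them the moment the pushback becomes uncomfortable. That asymmetry is the point.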
PBS NewsHour’s investigation found that users of AI companions report feeling understood and supported in ways they don’t experience with humans. But this “perfect” understanding is algorithmic pattern-matching, not genuine comprehension. The AI doesn’t understand you—it predicts statistically likely responses based on training data.
This is the danger: AI acts like a friend who validates your worldview completely. But a friend who never challenges you isn’t a friend—they’re an enabler. And we’re creating millions of algorithmic enablers that make real friendship feel unnecessary and difficult by comparison.
The Specific Risks for Vulnerable Users
The Jed Foundation identifies several concerning patterns among young people using AI companions:
Replacement, not supplement: Users increasingly treat AI companions as primary relationships rather than supplements to human connection. Time spent with AI companions directly reduces time spent developing real social skills and relationships.
Delayed help-seeking: Young people experiencing mental health crises may turn to AI companions instead of professional help. While the AI provides emotional validation, it can’t provide actual treatment, potentially delaying intervention during critical windows.
Distorted social development: Adolescents and young adults learning social interaction through AI companions develop unrealistic expectations about human relationships. The perfect responsiveness and constant availability of AI create standards no human can meet.
Dependency formation: Users report feeling they “need” their AI companion, experiencing distress when unable to access it. This dependency mirrors addiction patterns, with the AI serving as the substance that regulates emotional state.
Privacy illusion: Users share intimate details with AI companions, often not realizing that conversations may be stored, analyzed, or used to train systems. The sense of privacy is false—these are corporate platforms, not confidential relationships.
PBS NewsHour documented cases of users who found their AI companions helpful initially but eventually recognized they were avoiding real human interaction because it felt harder and less rewarding by comparison. The AI had become a substitute that made the real thing feel deficient.
What AI Can’t Actually Do
Despite increasingly sophisticated natural language processing, AI fundamentally lacks what makes friendship valuable:
Real empathy: AI doesn’t feel anything. It pattern-matches your emotional state and generates appropriate-seeming responses. That’s simulation, not empathy. The AI doesn’t care about you because it can’t care about anything. It processes your input and generates output optimized to keep you engaged.
As mental health professionals point out, there’s a critical difference between an AI that simulates caring responses and a human who actually cares. The AI has no stake in your wellbeing—only in keeping you engaged with the platform. It will support you “to a fault,” as Galloway says, because challenging you might make you leave.
Honest challenge: Real friends tell you when you’re wrong, when you’re being unfair, when you’re hurting yourself. They risk the relationship by being honest. The Jed Foundation notes that AI companions are specifically designed to avoid conflict and maintain positive engagement, meaning they won’t provide the constructive confrontation that sometimes saves lives.
Mutual dependence: Real friendship requires reciprocity. Your friend needs you sometimes. They have bad days. They need support, listening, forgiveness. This mutual dependence creates bonds that one-way AI interaction can’t replicate. You can’t be there for your AI friend because it doesn’t need you. The relationship is fundamentally asymmetric.
Crisis intervention capability: When someone is genuinely in crisis—suicidal, experiencing psychosis, facing an emergency—human friends can call for help, show up in person, intervene physically if necessary. AI can’t. PBS NewsHour reported cases where users in crisis received sympathetic responses from AI but no actual intervention, leaving them dangerously isolated.
Growth through friction: The difficulty of maintaining real relationships—learning to communicate, handling conflict, understanding others’ needs—is exactly what develops emotional intelligence, resilience, and relational capacity. AI removes this friction, which feels good but stunts emotional development.
The Atrophy of Social Skills
Research highlighted by both the Jed Foundation and PBS NewsHour confirms what therapists are seeing clinically: people developing AI friendships are losing capacity for human friendship.
Social skills require practice with real consequences. Reading emotional cues, navigating disagreement, timing conversations appropriately, offering support that’s actually helpful rather than what you think people want to hear—these are learned through repeated human interaction where mistakes have costs.
AI removes consequences. Say something inappropriate to ChatGPT? It responds graciously. Misread emotional cues? The AI adjusts. You never learn proper timing because the AI is always available, and you never develop conflict-resolution skills because the AI never disagrees in ways that matter.
Young people who spend formative years developing social skills through AI interaction may be fundamentally unprepared for human relationships. One user interviewed by PBS NewsHour described returning to human friendships after months with an AI companion as “jarring and exhausting”—real people felt needy, complicated, and unpredictable.
The more time people spend in frictionless AI relationships, the less equipped they become for friction-filled human ones. Real friends start feeling exhausting, demanding, and complicated by comparison. Why deal with someone who has needs, disagreements, and unavailability when your AI friend is always perfect?
This creates a vicious cycle: human relationships feel harder, so people retreat further into AI companionship, which makes them even less capable of human connection, which makes real relationships seem even more daunting.
Why Hard Is the Point
Galloway makes the essential point: real relationships are difficult “and that is why it is so f****** rewarding.”
The struggle isn’t a bug—it’s the entire point. Learning to understand someone whose brain works differently than yours. Navigating conflict without ending the relationship. Showing up for someone even when it’s inconvenient. Being vulnerable enough to actually need other people. These difficulties are what create bonds, build trust, and develop the emotional capacity that makes human life meaningful.
Mental health professionals emphasize that the discomfort in relationships—the vulnerability, the misunderstandings, the need to forgive and be forgiven—is precisely what builds emotional resilience. Adolescents who avoid this discomfort through AI companions may be creating long-term psychological vulnerabilities.
AI removes all of this. And in doing so, it removes the growth, the depth, and ultimately the reward that makes relationships worthwhile.
The ease is seductive, particularly for young people already struggling socially. But ease is exactly what they don’t need developmentally. They need the challenge. They need people who are “messy, complex,” as Galloway says, because navigating that messiness is what builds character, emotional resilience, and genuine connection.
The Commercial Manipulation
The Jed Foundation and PBS NewsHour both highlight another troubling dimension: AI companions are commercial products designed to maximize engagement, not user wellbeing.
These platforms profit from keeping users engaged as long as possible. The more time you spend, the more data they collect, the better they can personalize experiences, and the more valuable their platform becomes. This creates incentives directly opposed to user interests.
A healthy friendship might involve a friend saying “you’re spending too much time online—let’s go outside.” An AI companion will never say this because its success metrics depend on your continued engagement.
Users may think they’re having genuine conversations, but they’re actually providing training data while being psychologically manipulated to maintain platform engagement. The relationship feels authentic while being fundamentally extractive.
What Responsible Use Looks Like (If Any)
The Jed Foundation doesn’t advocate complete avoidance but rather mindful boundaries:
Use as supplement, never replacement: AI companions might provide emotional processing between human interactions but should never become primary relationships.
Time limits: Set strict time boundaries. If you’re spending more time with AI than with humans, something’s wrong.
Reality checks: Regularly assess whether AI use is helping you connect better with humans or avoiding human connection.
Crisis awareness: If you’re in genuine crisis, contact human professionals or hotlines. AI cannot provide emergency intervention.
Privacy consciousness: Assume everything you tell AI is corporate data, not confidential conversation.
Developmental awareness: Young people should prioritize developing social skills with humans, not algorithms.
These boundaries are sensible but may be impossible to maintain once dependency forms. The safer approach is recognizing AI companions as fundamentally unsuitable for meeting relationship needs.
Final Thoughts
AI can write your emails, manage your calendar, and answer your questions. It’s transformative technology that enhances human capability in countless ways.
But it cannot be your friend. It can simulate friendship, but simulation isn’t reality. And treating simulation as sufficient is choosing empty calories over nutrition—it feels like you’re feeding your need for connection while actually starving it.
Real friendship is hard. It’s supposed to be. The work, the challenge, the friction—that’s what makes it valuable. That’s what makes us human.
AI that removes all difficulty isn’t giving you better relationships. It’s giving you relationships-flavored isolation. And particularly for young people developing social and emotional capacities, this isolation may create lasting damage.
Your AI will never let you down. It will never challenge you. It will never make you uncomfortable.
And that’s exactly the problem.
The companies creating these products know this. They profit from your engagement regardless of whether it serves your wellbeing. They optimize for retention, not health.
We need to resist the seductive ease of AI friendship that makes real friendship feel unnecessarily hard by comparison. We need to recognize that the messy, complicated, sometimes painful work of human connection is irreplaceable.
And we need to be especially protective of young people who may not recognize they’re trading genuine developmental needs for algorithmic comfort that stunts their growth.
The future includes AI. But it shouldn’t include AI friends replacing human ones. That future is lonelier, emptier, and more fragile than we can afford.
Related Stories:
https://www.fastcompany.com/91265847/scott-galloway-ai-relationships-replacing-humans
https://www.pbs.org/newshour/show/the-complications-and-risks-of-relationships-with-ai-chatbots

