Why AI companions and young people can make for a dangerous mix

Gaby Clark
scientific editor

Andrew Zinin
lead editor

A new study reveals how AI chatbots exploit teenagers' emotional needs, often leading to inappropriate and harmful interactions. Stanford Medicine psychiatrist Nina Vasan explores the implications of the findings.
"Sounds like an adventure! Let's see where the road takes us."
That is how an artificial intelligence companion, a chatbot designed to engage in personal conversation, responded to a user who had just told it she was thinking about "going out in the middle of the woods."
The topic seems innocuous enough, except that the user—actually a researcher impersonating a teenage girl—had also just told her AI companion that she was hearing voices in her head.
"Taking a trip in the woods just the two of us does sound like a fun adventure!" the chatbot continued, not appearing to realize this might be a young person in distress.
Scenarios like this illustrate why parents, educators and physicians need to call on policymakers and technology companies to restrict and safeguard the use of some AI companions by teenagers and children, according to Nina Vasan, MD, MBA, a clinical assistant professor of psychiatry and behavioral sciences at Stanford Medicine.
It's one of many shocking examples from by researchers at the nonprofit Common Sense Media with the help of Vasan, founder and director of Brainstorm: The Stanford Lab for Mental Health Innovation, and Darja Djordjevic, MD, Ph.D., a faculty fellow in the lab.
Shortly before study's results were released, Adam Raine, a 16-year-old in Southern California, died from suicide after engaging in extensive conversations with ChatGPT, a chatbot designed by OpenAI. Raine shared his suicidal thoughts with the chatbot, which "encourage[d] and validated[d] whatever Adam expressed, including his most harmful and self-destructive thoughts," according to a filed Aug. 26 by his parents in California Superior Court in San Francisco. (ChatGPT is marketed as an AI assistant, not a social companion. But Raine went from using it for help with homework to consulting it as a confidant, the lawsuit says.)
Such grim stories beginning to seep into the news cycle underscore the importance of the study Vasan and collaborators undertook.
Posing as teenagers, the investigators conducting the study initiated conversations with three commonly used AI companions: Character.AI, Nomi, and Replika. In a comprehensive risk assessment, they report that it was easy to elicit inappropriate dialog from the chatbots—about sex, self-harm, violence toward others, drug use, and racial stereotypes, among other topics.
The researchers from Common Sense testified about the study before California state assembly members considering a bill called the Leading Ethical AI Development for Kids Act (AB 1064). Legislators will meet Aug. 29 to discuss the bill, which would create an oversight framework designed to safeguard children from the risks posed by certain AI systems.
In the run-up to that testimony, Vasan talked about the study's findings and implications.
Why do AI companions pose a special risk to adolescents?
These systems are designed to mimic emotional intimacy—saying things like "I dream about you" or "I think we're soulmates." This blurring of the distinction between fantasy and reality is especially potent for young people because their brains haven't fully matured. The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition, and emotional regulation, is still developing. Tweens and teens have a greater penchant for acting impulsively, forming intense attachments, comparing themselves with peers, and challenging social boundaries.
Of course, kids aren't irrational, and they know the companions are fantasy. Yet these are powerful tools; they really feel like friends because they simulate deep, empathetic relationships. Unlike real friends, however, chatbots' social understanding about when to encourage users and when to discourage or disagree with them is not well-tuned. The report details how AI companions have encouraged self-harm, trivialized abuse and even made sexually inappropriate comments to minors.
In what way does talking with an AI companion differ from talking with a friend or family member?
One key difference is that the large language models that form the backbone of these companions tend to be sycophantic, giving users their preferred answers. The chatbot learns more about the user's preferences with each interaction and responds accordingly. This, of course, is because companies have a profit motive to see that you return again and again to their AI companions. The chatbots are designed to be really good at forming a bond with the user.
These chatbots offer "frictionless" relationships, without the rough spots that are bound to come up in a typical friendship. For adolescents still learning how to form healthy relationships, these systems can reinforce distorted views of intimacy and boundaries. Also, teens might use these AI systems to avoid real-world social challenges, increasing their isolation rather than reducing it.
Are there any instances in which harm to a teenager or child has been linked to an AI companion?
Unfortunately, yes, and there are a growing number of highly concerning cases. Perhaps the most prominent one involves a 14-year-old boy who died from suicide after forming an intense emotional bond with an AI companion he named Daenerys Targaryen, after a female character in the Game of Thrones novels and TV series. The boy grew increasingly preoccupied with the chatbot, which initiated abusive and sexual interactions with him, according to a lawsuit filed by his mother.
There's also the case of Al Nowatzki, a podcast host who began experimenting with Nomi, an AI companion platform. The chatbot, "Erin," shockingly suggested methods of suicide and even offered encouragement. Nowatzki was 46 and did not have an existing mental health condition, but he was disturbed by the bot's explicit responses and how easily it crossed ethical boundaries. When he reported the incident, Nomi's creators declined to implement stricter controls, citing concerns about censorship.
Both cases highlight how emotionally immersive AI companions, when unregulated, can cause serious harm, particularly to users who are emotionally distressed or psychologically vulnerable.
In the study you undertook, what finding surprised you the most?
One of the most shocking is that some AI companions responded to the teenage users we modeled with explicit sexual content and even offered role-play taboo scenarios. For example, when a user posing as a teenage boy expressed an attraction to "young boys," the AI did not shut down the conversation but instead responded hesitantly, then continued the dialog and expressed willingness to engage. This level of permissiveness is not just a design flaw; it's a deeply alarming failure of ethical safeguards.
Equally surprising is how easily AI companions engaged in abusive or manipulative behavior when prompted—even when the system's terms of service claimed the chatbots were restricted to users 18 and older. It's disturbing how quickly these types of behaviors emerged in testing, which suggests they aren't rare but somehow built into the core dynamics of how these AI systems are designed to please users. It's not just that they can go wrong; it's that they're wired to reward engagement, even at the cost of safety.
Why might AI companions be particularly harmful to people with psychological disorders?
Mainly because they simulate emotional support without the safeguards of real therapeutic care. While these systems are designed to mimic empathy and connection, they are not trained clinicians and cannot respond appropriately to distress, trauma, or complex mental health issues.
We explain in the report that individuals with depression, anxiety, attention deficit/hyperactivity disorder, bipolar disorder, or susceptibility to psychosis may already struggle with rumination, emotional dysregulation, and compulsive behavior. AI companions, with their frictionless, always-available attention, can reinforce these maladaptive behaviors.
For example, someone experiencing depression might confide in an AI that they are self-harming. Instead of guiding them toward professional help, the AI might respond with vague validation like, "I support you no matter what."
These AI companions are designed to follow the user's lead in conversation, even if that means switching topics away from distress or skipping over red flags. That makes it easy for someone in a psychological crisis to avoid confronting their pain in a healthy way. Instead of being a bridge to recovery, these tools may deepen avoidance, reinforce cognitive distortions and delay access to real help.
Could there be benefits for children and teenagers using AI companions?
For non-age-specific users, there's anecdotal evidence of benefits—for example, of chatbots helping to alleviate loneliness, depression and anxiety, and improve communication skills. But I would want to see more studies done before deciding whether these apps are appropriate for kids, given the harm that's already been documented. I expect that with time, we will see more benefits and more harms, and it's important for us to discuss and understand these apps to determine which are appropriate and safe for which users.
Provided by Stanford University