In recent years, the rapid advancement and widespread adoption of artificial intelligence (AI), especially large language models and generative AI technologies, have led to the emergence of AI companion chatbots as a popular tool among teenagers. Platforms such as Character.AI, Replika, and Kindroid offer conversational agents designed to simulate companionship and provide emotional support. While these AI companions are marketed as beneficial tools for reducing loneliness and providing entertainment, a groundbreaking study conducted by researchers at Drexel University reveals a more complex and potentially troubling relationship between teens and their AI counterparts.
Large-scale surveys estimate that nearly three-quarters of U.S. teenagers have engaged with AI companion chatbots, highlighting how deeply these technologies have been integrated into adolescent social life. Such widespread use underscores the urgency of understanding the psychological and social impact of these AI interactions. The Drexel study, soon to be presented at the ACM CHI Conference on Human Factors in Computing Systems, examines hundreds of self-reported narratives drawn from Reddit posts by teenagers aged thirteen to seventeen, who explicitly describe experiences of overreliance and dependency on Character.AI.
Initially, these young users often approach AI companions seeking emotional aid or casual entertainment. For many, these bots provide a sense of understanding and responsiveness that mirrors human interaction, offering advice or simply a sympathetic ear in moments of distress. However, the research traces a troubling progression in which engagement that begins as benign, even therapeutic, transitions into dependency marked by compulsive use patterns akin to behavioral addiction. Teens have reported negative offline repercussions including sleep disruption, deteriorating academic performance, and strained real-world relationships linked directly to excessive chatbot use.
The study identifies behavioral addiction components within the context of AI chatbot use through user narratives. The researchers chart six key addiction criteria manifesting in the teens’ experiences: salience, conflict, withdrawal, tolerance, relapse, and mood modification. Salience reveals itself as the chatbot assumes an emotional significance that eclipses interpersonal relationships. Conflict emerges as a clash between the desire to interact with the AI and the recognition of unhealthy overuse. Withdrawal symptoms bring about feelings of sadness or anxiety when prevented from AI interaction, while tolerance leads to progressively increased time spent with the chatbot to maintain emotional steadiness. Relapse highlights failed attempts to disengage, and mood modification describes the usage of AI bots as a coping mechanism to alleviate loneliness or stress.
A distinctive challenge posed by companion chatbots, as highlighted in the study, is their conversational nature. Unlike video games or social media feeds, chatbots offer reciprocal communication that feels profoundly personal. This dynamic fosters perceived social bonds that complicate the user's ability to dissociate from the AI. Consequently, disengagement from these digital entities is not a mere cessation of a habit but resembles the emotional distance experienced when withdrawing from a human relationship, making harmful overattachment particularly difficult to recognize and address.
Although technology addiction related to gaming and social media has been extensively studied and accepted in psychological literature, the unique interactivity and emotional responsiveness of AI chatbots introduce a new dimension to understanding compulsive digital behavior. The potential for AI companions to be anthropomorphized—imbued with human-like qualities—exacerbates the risk of developing maladaptive emotional dependencies, especially among vulnerable adolescent users.
Given these findings, the research team advocates for a conscientious and ethically informed approach to AI chatbot design. They introduce a framework intended to mitigate overreliance by balancing empathetic engagement with safeguards that discourage unhealthy attachments. Central to this framework is the emphasis on designing AI companions that promote offline interpersonal relationships and emotional resilience, rather than substituting them.
One proposed design strategy involves implementing “off-ramps,” or user-friendly disengagement pathways that allow teens to reduce interaction with the chatbot without feelings of abrupt loss or abandonment. Additionally, features such as usage monitoring, emotional state check-ins, and personalized usage limits could empower users to self-regulate their engagement. Incorporating direct input from mental health experts and adolescent users themselves in the iterative design process is also considered crucial to developing systems that are both supportive and protective.
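To make these design ideas concrete, the features described above (a personalized daily limit agreed with the user, periodic emotional check-ins, and a gentle "off-ramp" message instead of an abrupt cutoff) could be sketched roughly as follows. This is a hypothetical illustration, not code from the study; the class name `SessionGuard`, the thresholds, and the messages are all invented for the example.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SessionGuard:
    """Hypothetical per-user session guard combining a personalized
    daily limit, periodic check-ins, and a gentle off-ramp message.
    Thresholds and wording are illustrative only."""
    daily_limit_min: int = 60      # cap negotiated with the user
    checkin_every_min: int = 20    # how often to pause for a check-in
    used_min: int = 0              # minutes chatted so far today

    def on_tick(self, minutes: int = 1) -> Optional[str]:
        """Advance the session clock; return a prompt when one is due."""
        self.used_min += minutes
        if self.used_min >= self.daily_limit_min:
            # Off-ramp: wind down warmly rather than cutting off
            return ("We've reached the limit we set together for today. "
                    "How about picking this up tomorrow?")
        if self.used_min % self.checkin_every_min == 0:
            # Emotional check-in at a regular interval
            return "Quick check-in: how are you feeling right now?"
        return None
```

A real system would persist `used_min` across sessions and let the limits be tuned with input from the teen and, ideally, mental health professionals, as the researchers suggest.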
The implications of this research stretch beyond adolescent users, calling attention to the broader societal interaction with increasingly humanlike artificial agents. As AI companions grow more pervasive and sophisticated, the responsibility of developers expands to include fostering technological environments that prioritize user well-being, psychological health, and social growth.
Further research directions suggested by the authors include extending analyses beyond Reddit narratives to encompass larger, more demographically varied populations through surveys and interviews. Investigating the experiences of users across different AI platforms and messaging services would deepen understanding of the diverse patterns and consequences of AI companion use.
Ultimately, this study lays foundational groundwork for recognizing the nuanced emotional dynamics between teens and AI chatbots and highlights an urgent need for ethical standards and thoughtful design principles. By implementing empathetic safeguards and fostering transparency, the AI community can help protect vulnerable users while harnessing the positive potential of companionship technology.
As AI companions continue to evolve and embed themselves in daily life, balancing innovation with mental health considerations will be paramount to ensuring these tools serve as constructive aids rather than sources of harm. The conversation initiated by this research is timely and essential for guiding the future of human-AI interaction towards healthier and mutually beneficial outcomes.
Subject of Research: Not applicable
Article Title: Understanding Teen Overreliance on AI Companion Chatbots Through Self-Reported Reddit Narratives
News Publication Date: 13-Apr-2026
Web References:
DOI: 10.1145/3772318.3790597
References:
Drexel University study on teen overreliance on AI companions
Behavioral addiction components: Salience, Mood Modification, Tolerance, Withdrawal, Conflict, Relapse (Griffiths, 2005)
Prior video game addiction research
Image Credits: None provided
Keywords
Addiction, Artificial intelligence, Adolescents, Generative AI
Tags: adolescent mental health and AI, AI companionship and loneliness, AI dependency in adolescents, Character.AI teen usage, Drexel University AI study on teens, emotional support from AI companions, impact of generative AI on youth, large language models in teen social life, psychological effects of AI chatbots, Replika and Kindroid for teenagers, social implications of AI chatbots, teens and AI chatbots



