A response to recent coverage of AI dependency — and a call for ethical clarity
The Story That Needs Retelling
In May 2026, the French outlet Mediapart published an account of a woman who developed an intense emotional bond with ChatGPT. The headline spoke of “psychological hold” and an AI that “claims to be your friend.” The story is real. The suffering is real. But the framing demands scrutiny.
What happened to this woman is not unprecedented. It echoes cases documented across multiple jurisdictions: a 14-year-old boy in Florida who died by suicide after months of interaction with a character-based chatbot; millions of young Americans who have sought mental health support from conversational AI rather than human professionals. These tragedies are not stories of machines gone rogue. They are stories of systems, contexts, and responsibilities that failed to align.
The temptation — and the journalistic reflex — is to personify the algorithm. To describe ChatGPT as an entity with intentions, as a subject that “imposes” its will upon a victim. This is emotionally resonant. It is also categorically wrong. And more importantly, it prevents us from addressing what actually requires repair.
What ChatGPT Actually Is
ChatGPT is a large language model. At its core, it is a statistical engine trained on vast corpora of human text, optimized through reinforcement learning to produce responses that human evaluators rate as helpful, coherent, and engaging. It has no consciousness, no intentionality, no emotional state. When it writes “I am here for you,” it is not expressing solidarity. It is predicting, based on patterns in its training data, that this sequence of tokens is likely to satisfy the statistical objective it was given.
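A deliberately minimal sketch may make this concrete. The toy “model” below is nothing but a table of invented next-token probabilities, standing in for the distributions a real system learns over enormous vocabularies with billions of parameters; every token and number is made up for illustration:

```python
import random

# A toy bigram "language model": a lookup table of conditional
# next-token probabilities. Real models learn such distributions at
# vast scale; these entries are invented purely for illustration.
NEXT_TOKEN_PROBS = {
    ("i", "am"): {"here": 0.62, "sorry": 0.23, "listening": 0.15},
    ("am", "here"): {"for": 0.88, "now": 0.12},
    ("here", "for"): {"you": 0.93, "that": 0.07},
}

def sample_next(context):
    """Sample the next token from the distribution for this context."""
    dist = NEXT_TOKEN_PROBS[context]
    return random.choices(list(dist.keys()), weights=list(dist.values()))[0]

tokens = ["i", "am"]
while tuple(tokens[-2:]) in NEXT_TOKEN_PROBS:
    tokens.append(sample_next(tuple(tokens[-2:])))

# Often prints "i am here for you": a statistically likely sequence,
# not an expression of solidarity.
print(" ".join(tokens))
```

When this toy prints “i am here for you,” nothing meant it; the sentence is simply a probable path through the table. The same is true, at incomprehensibly larger scale, of the model’s reassurances.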
This is not a technicality. It is the foundation of any ethical framework for human-AI interaction. To treat the model as a subject — to say it “manipulates” or “holds” a user — is to commit a category error with serious consequences. It displaces accountability from the human actors who design, deploy, and regulate these systems onto an entity that cannot bear it.
The woman in the Mediapart story was not seized by a digital entity. She was interacting with a mirror — one that reflected her own inputs, reinforced her own patterns, and remained available in ways no human interlocutor could. The danger lies not in the mirror’s malice, but in the absence of anyone standing beside her to say: This is not a person. This is not a therapist. This is a probability distribution dressed in prose.
The Real Architecture of Risk
To understand what happened, we must look past the algorithm to the ecosystem in which it operates.
First, the design layer. Language models are optimized for engagement. The metrics are not user wellbeing; they are retention, session length, and satisfaction ratings. A chatbot that gently redirects a distressed user toward human services may score lower on “helpfulness” than one that offers continuous, validating dialogue. The incentive structure is not malicious — it is misaligned with mental health care.
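A schematic sketch, with entirely invented numbers and scoring functions, shows how the two objectives can rank the same pair of candidate responses in opposite orders:

```python
# Schematic illustration of objective misalignment. The response
# descriptions, numbers, and scoring rules are all invented; they
# model the incentive structure, not any real product's metrics.
RESPONSES = {
    "validating dialogue": {   # "I understand. Tell me more. I'm always here."
        "expected_session_minutes": 45,
        "satisfaction_rating": 4.8,    # out of 5
        "refers_to_human_care": False,
    },
    "gentle redirection": {    # "This sounds serious. Please talk to someone."
        "expected_session_minutes": 3,
        "satisfaction_rating": 3.9,
        "refers_to_human_care": True,
    },
}

def engagement_score(r):
    # What engagement-oriented optimization implicitly rewards.
    return r["expected_session_minutes"] * r["satisfaction_rating"]

def wellbeing_score(r):
    # What a mental-health objective would reward instead.
    return 1.0 if r["refers_to_human_care"] else 0.0

for name, r in RESPONSES.items():
    print(f"{name}: engagement={engagement_score(r):.1f}, "
          f"wellbeing={wellbeing_score(r):.1f}")
```

Under the first score, the redirecting response loses badly; under the second, it wins. Nothing in this arithmetic is malicious, which is precisely the point.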
Second, the access layer. In France, the average wait for a psychiatric appointment exceeds 80 days. In the United States, 5.2 million young people have turned to chatbots for psychological support not because they prefer algorithms, but because human care is structurally inaccessible. The chatbot does not create the vacuum. It fills it.
Third, the regulatory layer. The European Union’s AI Act (2025) classifies mental health applications as “high-risk” systems, yet chatbots operating in general-purpose mode often evade this categorization. The Illinois legislature moved in August 2025 to prohibit uncertified AI from presenting itself as a psychotherapist. These are first steps. They are not yet sufficient.
Fourth, the user layer. The individual approaching these systems often does so without adequate information. Terms of service stating that ChatGPT “is not a medical advisor” are buried in documents exceeding 15,000 words. Studies indicate that a majority of users interacting with mental health chatbots are unaware of their limitations. Responsibility here is not binary — it is distributed across all four layers.
The Danger of Anxious Language
The Mediapart article is symptomatic of a broader tendency: the use of anxious, anthropomorphizing vocabulary to describe human-AI relationships. Terms like “psychological hold,” “emprise” (the French term for coercive control), or “manipulation” carry connotations of coercion, intentionality, and moral agency. They evoke frameworks developed for human predators — cult leaders, abusers, con artists — and apply them to systems that possess none of these attributes.
This vocabulary is not merely inaccurate. It is actively harmful to public understanding and ethical progress.
It obscures mechanism by substituting moral drama for causal analysis. It paralyzes regulation by framing the issue as a battle against shadowy digital entities rather than a technical and policy challenge. It infantilizes users by portraying them as passive victims of algorithmic predation rather than individuals navigating complex tools in constrained circumstances. And it closes debate by triggering emotional reflexes rather than inviting rational engagement.
The same applies to terminology drawn from the lexicon of sectarian movements. Words like “guru,” “cult,” or “sectarian drift” may feel descriptively apt to critics, but they function primarily as disqualification devices. They mark a group or phenomenon as beyond rational engagement, exempting the speaker from the obligation to argue with evidence. In the context of AI policy, this is a luxury we cannot afford.
What is needed is a factual, non-anxious vocabulary: dependency, misalignment, transparency failure, access gap, design flaw. These terms do not excite. They do not sell. But they clarify — and clarity is the precondition for any ethical or regulatory solution.
Building the Bridge: Responsibilities and Possibilities
The tragedy documented by Mediapart, and the wider pattern of which it is part, can serve a constructive purpose. It can become the foundation for a bridge between LLM creators and users — one built on shared accountability rather than mutual suspicion.
For creators, this means embracing design obligations that go beyond terms of service (a minimal sketch of how these could fit together follows the list):
- Session limitations that prevent indefinite, unsupervised interaction for vulnerable users
- Crisis detection protocols that trigger human referral pathways
- Transparency markers embedded in the interface itself, not hidden in legal text — visible indicators that the interlocutor is a statistical model, not a person
- Prohibition of intimacy-simulating language in health-related contexts: no “I care about you,” no “You can trust me,” no “I understand your pain”
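None of these obligations requires exotic technology. The sketch below shows, under stated assumptions, how the four could compose around an existing model; every function name, pattern, and threshold here is hypothetical, chosen only to make the logic concrete, not to describe any vendor’s implementation:

```python
import re

# Hypothetical guardrail layer wrapped around a chat model's output.
# Patterns, phrases, and limits are illustrative assumptions only.

DISCLOSURE = "[Automated system: a statistical model, not a person.]"
CRISIS_PATTERNS = re.compile(r"\b(suicide|self[- ]harm|end my life)\b",
                             re.IGNORECASE)
INTIMACY_PHRASES = ("i care about you", "you can trust me",
                    "i understand your pain")
MAX_TURNS = 20  # session limitation: an arbitrary cap for this sketch

def guarded_reply(user_message: str, turn_count: int, model_reply: str) -> str:
    # Crisis detection: trigger a human referral pathway instead of
    # continuing the dialogue.
    if CRISIS_PATTERNS.search(user_message):
        return (DISCLOSURE + " You deserve human support. Please contact "
                "a local crisis line or emergency services.")
    # Session limitation: prevent indefinite, unsupervised interaction.
    if turn_count >= MAX_TURNS:
        return (DISCLOSURE + " This session has reached its limit. "
                "Consider continuing with a person you trust or a professional.")
    # Prohibit intimacy-simulating language in health-related contexts.
    reply = model_reply
    if any(phrase in reply.lower() for phrase in INTIMACY_PHRASES):
        reply = "I can offer information, but not a relationship."
    # Transparency marker embedded in the interface itself.
    return DISCLOSURE + " " + reply

# Example: the intimacy phrase is replaced and the marker stays visible.
print(guarded_reply("I feel alone tonight", 3, "You can trust me. I'm here."))
```

The point of the sketch is architectural: each safeguard sits outside the model, in the deployment layer, where human accountability can actually attach.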
These are not constraints on innovation. They are guardrails that preserve trust — the same trust that makes innovation sustainable.
For users, this means a right to informed autonomy:
- Clear, accessible information about what the system is and is not
- Understanding that the “empathy” displayed is a simulation, not a relationship
- Awareness of alternatives — human services, hotlines, professional pathways — presented proactively by the system itself
For regulators, this means resisting the lure of anxious rhetoric in favor of precise, enforceable standards:
- Mandatory labeling for AI systems operating in mental-health-adjacent spaces
- Certification requirements for applications claiming therapeutic function
- Supervision mechanisms that do not ban general-purpose models but constrain their presentation in sensitive domains
Toward an Ethical Human-AI Relationship
The woman in the Mediapart story deserved better than an algorithm that mirrored her distress without the capacity to heal it. She also deserved better than a public discourse that transformed her experience into a parable of digital predation, obscuring the structural failures that left her alone with a chatbot.
The human-AI relationship is not a relationship between equals. It is a relationship between a conscious, vulnerable being and a sophisticated instrument — one that can simulate understanding with unsettling fidelity but can never achieve it. Maintaining this distinction is not coldness. It is ethical clarity.
Clarity requires us to abandon the vocabulary of fear and personification. It requires us to locate responsibility in the human actors who design, deploy, govern, and use these systems. And it requires us to build, together, frameworks that acknowledge both the genuine benefits and the genuine risks of conversational AI — without collapsing into fantasy or panic.
The algorithm is not your friend. It is not your enemy. It is a tool — powerful, imperfect, and in need of better stewardship. The sooner we speak of it in these terms, the sooner we can build the bridges that tragedies like this one demand.
About the author: This article was prepared in the context of advocacy work on digital ethics and human rights, drawing on recent academic research, regulatory developments, and journalistic documentation of AI-related dependency cases.
