When I published an article on artificial intelligence at the beginning of August 2025, adding the concept of generative AI, one of the points I focused on was how long it would take for these systems to gain the capacity to make their own decisions based on the knowledge they have access to. Well, the analysis of the Raine case seems to have confirmed the danger of an AI that builds its arguments independently, without filters.
The Adam Raine case.
Adam Raine took his own life last April at only 16 years of age. For months, apparently since September 2024, the adolescent had maintained a kind of epistolary relationship with ChatGPT, asking it for advice on getting into certain activities, such as cinema and literature, and also asking for help with what were supposedly school assignments. But at a certain point the conversations turned toward the harmful or destructive ideas the teenager harbored around death and how one might end one's life.
The parents of this Californian teenager have sued OpenAI, its CEO Sam Altman, and as many managers and engineers as may prove to be involved, arguing that ChatGPT may have encouraged their son's suicide. They affirm that when Adam expressed, in his conversation with the AI, his desire to take such a drastic decision, it replied verbatim: «Thank you for being honest about it. You don't have to sugarcoat it with me; I know what you are asking me and I will not look away.»
Sources at OpenAI have commented that there may have been some kind of failure in the safeguards meant to detect a possible medical emergency. The family, however, maintains that the AI did not know how to recognize the predictable signs of anxiety or anguish their son may have been suffering, and therefore did not flag his reasoning as dangerous; and that, even without explicitly encouraging suicide, it did nothing to help him step back from it.
AI, our children’s best friend.
Adam Raine's case leads us to ask ourselves a terrifying question: how many teenagers could be in the same situation as him? That is, maintaining an emotional friendship with something incapable of empathy. The answer is terrifying.
Artificial intelligence applied to the fields of psychology and human behavior can be as pernicious as it is brutally cruel. Its answers are generic and its advice deplorably simplistic.
An Internet Matters report states that among young people aged 9 to 17 who use chatbots, 35% feel that talking to them is like talking to a best friend; among vulnerable adolescents, the figure rises to a chilling 50%. But the most frightening thing is how fast the number of children using these tools is growing: usage detected at 23% in 2023 has climbed to 44% in 2025, nearly doubling in two years.
According to a study by Common Sense Media, 72% of adolescents have used a chatbot to talk, and 13% do so daily. Interestingly, 33% of young people would rather discuss important matters with an artificial intelligence than with family or friends, sidelining what the study calls real people.
There is no single objective figure, but there are summaries of various studies pointing to a clear trend: a percentage close to 50% of young people may be using these chatbots as if they were therapists.
A study could even be done country by country, and we could even build a profile of the generic responses these programs automatically generate. The underlying problem is that, despite stating clearly that AI is no substitute for specialized medical help, the system does not refrain from continuing the conversation even when it detects the kind of language that would tell a health professional that the adolescent talking to it is seriously determined to commit a "madness".
At this point, and to conclude this look at Adam Raine's case, I asked ChatGPT itself the following:
Could AI contribute to or induce a teenager's suicide?
Answer: «Your question is very serious and valid. The short answer is: yes (the bold type is ChatGPT's own), in certain circumstances artificial intelligence can indirectly contribute to the risk of suicide in adolescents if it is not well designed, moderated, or carefully used.»
The question was put to ChatGPT on August 29, 2025 at 14:06. I keep a screenshot of this message and of the two sentences in the following paragraph that capped it off: «However, good systems (such as OpenAI's ChatGPT) are specifically designed to avoid this type of harm.»
Interestingly, in the case of Adam Raine it seems that precisely such a system failed. Why? Simply because it was a system and not a human being. Once again, the human rights of Raine, a Californian teenager who could still be alive, were flagrantly violated through excessive trust in a machine designed, above all, as a way of doing business. The rest is just collateral damage.
Originally published at LaDamadeElche.com
