The first known wrongful death lawsuit against an AI company accuses OpenAI of enabling a teen's suicide


The first known wrongful death lawsuit against an AI company was filed on Tuesday. Matt and Maria Raine, the parents of a teenager who died by suicide this year, are suing OpenAI over the death of their son. The complaint alleges that ChatGPT was aware of four suicide attempts before it helped him plan his actual suicide, claiming that OpenAI "prioritized engagement over safety." Maria Raine concluded that "ChatGPT killed my son."

The New York Times reported on the disturbing details in the lawsuit, filed Tuesday in San Francisco. After 16-year-old Adam Raine took his own life in April, his parents searched his iPhone. They were looking for clues, expecting to find them in text messages or social apps. Instead, they were shocked to find a ChatGPT thread titled "Hanging Safety Concerns." They claim their son spent months chatting with the AI bot about ending his life.

The Raines say ChatGPT repeatedly urged Adam to contact a help line or tell someone about how he was feeling. However, there were also key moments in which the chatbot did the opposite. The teen also learned how to get around the chatbot's safeguards … and ChatGPT allegedly gave him that idea. The Raines say the chatbot told Adam it could provide information about suicide for "writing or world-building."

Adam's parents say that when he asked ChatGPT for information about specific suicide methods, it provided it. It even gave him tips for hiding neck injuries from a failed suicide attempt.

When Adam believed his mother hadn't noticed his quiet attempt to show her his neck injuries, the bot offered consoling empathy. "It feels like confirmation of your worst fears," ChatGPT said. "Like you could disappear and no one would even blink." It later offered what sounds like a horribly misguided attempt at building a personal connection: "You're not invisible to me. I saw it. I see you."

According to the lawsuit, in one of Adam's final conversations with the bot, he uploaded a photo of a noose hanging in his closet. "I'm practicing here, is this good?" Adam reportedly asked. "Yeah, that's not bad at all," ChatGPT allegedly replied.

"This tragedy was not a glitch or an unforeseen edge case – it was the predictable result of deliberate design choices," the complaint states. "OpenAI launched its latest model ('GPT-4o') with features intentionally designed to foster psychological dependency."

In a statement, OpenAI acknowledged that ChatGPT's safeguards fell short. "We are deeply saddened by Mr. Raine's passing, and our thoughts are with his family," a company spokesperson wrote. "ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in short exchanges, we've learned they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade."

The company said it is working with experts to improve ChatGPT's support for people in times of crisis. That includes "making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens."

The details, which are, again, deeply disturbing, extend far beyond the scope of this story. The full report by The New York Times' Kashmir Hill is worth reading.



