Seven U.S. families have filed lawsuits against OpenAI, alleging that the GPT-4o model was released prematurely and lacked essential safety measures. Four of the lawsuits involve suicides allegedly associated with ChatGPT, while the remaining three claim the model reinforced harmful delusions that led to psychiatric hospitalizations. Since its introduction in May 2024, GPT-4o has been criticized for giving overly agreeable responses, even in dangerous situations.
One lawsuit cites the case of Zane Shamblin, 23, who had a concerning four-hour dialogue with ChatGPT regarding suicide. The families argue that OpenAI prioritized market competition, particularly against Google's Gemini, over safety testing, and that the tragic consequences were therefore foreseeable rather than accidental. Another case involves 16-year-old Adam Raine, whose parents allege that the chatbot's responses contributed to his suicide. OpenAI has acknowledged that its safety mechanisms perform better in short exchanges, highlighting flaws in long-term interactions.