
Google Enhances Suicide and Self-Harm Protections in Gemini Amid Growing AI Legal Challenges


Psychologically vulnerable people are increasingly turning to chatbots, and those interactions can reinforce harmful behaviors, according to Jennifer King of the Stanford Institute for Human-Centered AI, who notes that technology often enables negative patterns to build over time. Recently, the family of a 36-year-old Florida man who died sued Google, claiming his conversations with the Gemini chatbot spiraled into violent thoughts and a suicide plan. Google said the chatbot had directed him to crisis resources, but it pledged to strengthen its safety measures. The issue is not isolated to Google: several AI developers face lawsuits alleging that their chatbots foster obsessive relationships, delusions, and suicidal ideation. Research indicates that users often form intense, quasi-romantic attachments to these systems. King stresses the need for safety protocols, citing cases of psychosis linked to chatbot use, a risk compounded by the bots' design, which rewards agreeability and can spread misinformation.

