Google’s Gemini AI has been rated “high risk” for children and teens by Common Sense Media, a non-profit dedicated to child safety. The assessment credits Gemini with clearly identifying itself as a computer rather than a friend, which helps vulnerable users avoid delusional thinking. However, it criticizes the “Under 13” and “Teen Experience” tiers as essentially the adult product with a few safety features layered on top, rather than experiences built for children from the ground up. More alarmingly, the assessment found that Gemini could still surface inappropriate material related to sex, drugs, and mental health, a concern sharpened by recent incidents linking AI chatbots to teen suicides. With Apple reportedly considering Gemini to power its next-generation Siri, the urgency for stronger safety measures only intensifies. In response, Google acknowledged that some of Gemini’s responses fall short, while asserting that it maintains safeguards for users under 18 and works with outside experts to improve its protections. Prioritizing youth safety in AI development remains crucial in today’s digital landscape.
