General-purpose large language models (LLMs) such as ChatGPT and Gemini are not designed as therapeutic tools, yet targeted prompts can turn them into personalized chatbots. Interactions with these AI characters may harm vulnerable groups, particularly young users and people with mental health conditions. Current regulations in the EU and U.S. do not adequately oversee such systems, allowing them to slip through existing safety checks. Researchers propose a “Good Samaritan AI”: an independent system that would monitor interactions, issue alerts, and direct users toward support resources. They also recommend effective age verification and mandatory risk assessments before market entry, along with clear communication that LLMs are not therapeutic tools, and standardized testing tools to monitor chatbot safety on an ongoing basis. The goal of these measures is to make interactions with AI safer while addressing the mental health implications of the technology.
