As reliance on AI for mental health support grows, states are moving to regulate AI "therapy" apps amid insufficient federal oversight. Recent state laws, including bans in Illinois and Nevada, aim to address safety concerns, but critics argue these measures offer scant protection for users and fail to keep pace with the rapid evolution of AI technology. They note that popular general-purpose chatbots such as ChatGPT often fall outside the scope of these regulations. Mental health professionals suggest that while chatbots can help address care shortages, rigorous federal guidelines are essential to ensure safety and accountability. The Federal Trade Commission is investigating major AI companies for their impact on youth, and the FDA is reviewing AI-enabled mental health devices. App developers such as Earkick have adapted by modifying the language they use around therapy. Experts advocate for more comprehensive regulation that balances innovation with user safety, highlighting the need for ethically responsible AI in mental health care.
