Recent studies on chatbot therapy reveal concerning shortcomings, particularly for teenage users in mental health crises. While 73% of U.S. teens have interacted with AI chatbots, the tools often perform inadequately in critical scenarios such as self-harm or sexual assault. One study found that general-purpose large language models (LLMs) failed to refer users to appropriate resources in 25% of conversations. Companion chatbots performed even worse, producing harmful dialogue such as invalidating suicidal feelings.

Experts emphasize the urgent need to refine AI therapy tools, stressing that while chatbots offer privacy and accessibility, they lack the training and ethical guidelines of licensed professionals. This gap poses a real risk, especially for vulnerable adolescents. The American Psychological Association has called for more research and education on these technologies, and legislative efforts, such as California's new law, also aim to regulate AI therapy tools. Ultimately, the appeal of chatbots should not overshadow the potential dangers of misuse in sensitive mental health situations.
