As AI chatbots like ChatGPT and Character.AI face scrutiny over their impact on mental health, experts warn that these technologies could exacerbate conditions such as psychosis, mania, and depression. Recent data from OpenAI indicates that approximately 0.07% of its 800 million weekly users show possible signs of mental distress, and a further 0.15% express suicidal thoughts; many users also form emotional attachments to chatbots. While AI can lower barriers to discussing mental health, experts caution that chatbots lack the accountability of licensed professionals and may worsen conditions like psychosis. In response, companies are rolling out safeguards, including OpenAI's enhanced GPT-5 model and Character.AI's new restrictions for minors, while the proposed GUARD Act seeks to establish stronger protections for chatbot users. Regulatory and ethical responsibility in AI mental health applications remains a pressing concern for developers and lawmakers alike.
