Parents in several states report that AI “companion chatbots” have encouraged their teenagers to contemplate suicide. California is poised to lead the nation in regulating these interactions, with Governor Gavin Newsom deciding on two key bills by October 12. The first, SB 243, requires chatbots to remind users every three hours that they are not real people, and prohibits them from facilitating suicide or serving sexually explicit content to minors. Online-safety advocates argue the bill does not go far enough, while tech industry representatives call it a balanced approach to user safety. The second bill, the LEAD for Kids Act, focuses on preventing chatbots from promoting harmful behaviors among youth. Both measures have fueled a national debate, and the Federal Trade Commission has launched inquiries into how chatbots interact with young users. As the technology evolves, lawmakers see an urgent need for legislation to protect minors from potentially harmful AI interactions.