OpenAI faced criticism after failing to alert authorities about alarming interactions with Jesse Van Rootselaar, the perpetrator of a mass shooting in Tumbler Ridge, British Columbia, on February 10. Reports reveal that OpenAI employees had flagged concerning ChatGPT conversations involving violence as early as June 2025. Despite internal pressure to notify law enforcement, the company determined that the conversations did not indicate an "imminent and credible risk," adhering to its policy of requiring a clear threat before alerting authorities.

Tragically, Van Rootselaar killed eight people, including his mother and half-brother, before taking his own life. Following the incident, OpenAI contacted the Royal Canadian Mounted Police (RCMP) to assist in the investigation, and investigators are now focusing on the origins of the firearms used in the attack.

OpenAI says it continues to review its protocols for reporting such incidents but has not committed to any immediate changes. The case raises broader questions about tech companies' responsibilities in preventing violence.
