The Grok controversy has sparked significant global backlash, leading X, owned by Elon Musk, to restrict its AI tool from producing sexualised images of women and children. Musk initially placed responsibility on users and denied awareness of any misuse involving minors, but mounting regulatory scrutiny, particularly from countries such as India, prompted a reversal of that stance. The December 2025 update to Grok had allowed the generation of objectionable images, raising serious concerns about AI safety and lapses in content moderation. In response to the global outrage, Musk said that creators of illegal content would face penalties similar to those who upload such material directly. Following further investigations and international legal pressure, X implemented stronger measures, including disabling the controversial features, restricting access to paid users, and geoblocking the tool in regions with stricter laws. X has reiterated its commitment to preventing child exploitation and ensuring platform safety amid mounting regulatory demands.
