This week, Google faced backlash over claims, originating from a Malwarebytes article, that it used Gmail users’ emails to train its Gemini AI models. The article alleged that a hidden setting allowed Gmail to scan emails and attachments unless users disabled ‘smart features’ such as spell check. Google quickly refuted the claims, emphasizing that no user emails are used to train Gemini. The uproar, amplified on social media, highlighted concerns over data privacy and transparency. While Gmail’s ‘Personalized Services and Features’ setting does use email content for functionality such as smart replies, Google clarified that these practices have not changed and do not compromise user data. The controversy also reflects ongoing scrutiny of AI data practices, particularly amid regulatory pressure from the EU. As Google prepares for the Gemini 3 launch, clearer boundaries around email data usage will be crucial for restoring trust in its privacy practices.
