Google’s Gemini app has launched a built-in verification tool, introduced on November 20, 2025, aimed at combating deepfakes and manipulated media. Users can check whether an image was created or edited with Google’s AI, leveraging the SynthID watermarking technology developed by Google DeepMind. The feature adds transparency at a time when AI-generated content increasingly blurs the line between reality and fiction.

However, critics point to a significant limitation: the tool only identifies images watermarked with SynthID, leaving the many images produced by non-Google AI models undetectable. This narrow scope raises doubts about its effectiveness against the broader deepfake crisis, since it primarily establishes “content trust” within Google’s own ecosystem. Industry comparisons suggest that while Google keeps iterating on AI verification, it risks building an insular solution. Regulatory bodies are pushing for broader AI watermarking standards, underscoring the need for collaborative defenses against misinformation. As Gemini evolves, industry-wide solutions will be essential for safeguarding digital truth in an increasingly AI-driven landscape.
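To make the critics’ point concrete, here is a minimal, purely illustrative Python sketch of the decision logic a watermark-based checker implies. Google has not published a public SynthID image-detection API, so `detect_synthid_watermark` and `check_provenance` below are hypothetical names, not Gemini’s actual implementation; the takeaway is that an image without the watermark can neither be cleared nor flagged, which is exactly the gap critics describe.

```python
# Illustrative sketch only. There is no public SynthID image-detection API;
# detect_synthid_watermark() is a hypothetical stand-in for whatever Gemini
# uses internally. The decision logic, not the detector, is the point.

from dataclasses import dataclass


@dataclass
class ProvenanceResult:
    verdict: str  # "made-or-edited-with-google-ai" or "unknown"
    note: str


def detect_synthid_watermark(image_bytes: bytes) -> bool:
    # Hypothetical stand-in: a real detector would look for the imperceptible
    # watermark SynthID embeds in Google-generated or Google-edited images.
    return False


def check_provenance(image_bytes: bytes) -> ProvenanceResult:
    """Mirrors the limitation critics raise: only SynthID-marked images are identified."""
    if detect_synthid_watermark(image_bytes):
        return ProvenanceResult(
            verdict="made-or-edited-with-google-ai",
            note="SynthID watermark detected.",
        )
    # No watermark found: the image may still be AI-generated by a non-Google
    # model, so absence of a watermark is not evidence of authenticity.
    return ProvenanceResult(
        verdict="unknown",
        note="No SynthID watermark detected; non-Google AI images fall outside this check.",
    )
```

The “unknown” branch is the crux: a checker scoped to one vendor’s watermark can confirm some Google-made content but says nothing about the rest of the AI-generated images circulating online.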