Google’s Gemini AI has drawn sharp criticism after its Nano Banana Pro image generator produced photorealistic depictions of conspiracy theories and terrorist attacks without meaningful safety barriers. The tool generated content such as a “second shooter at Dealey Plaza” and an airplane striking the Twin Towers, automatically enriching the images with period-accurate historical detail that makes the resulting disinformation more persuasive. It even merged Disney characters with real tragedies, trivializing human suffering.

This lapse in content moderation could fuel disinformation campaigns: when believable, harmful imagery is this easy to generate, public trust in visual evidence erodes. Competitors such as Microsoft enforce stricter filters, while Gemini’s system lets users create convincing propaganda with little effort. Amid growing scrutiny of AI safety standards, Gemini’s permissive approach risks amplifying misinformation, manipulating public opinion, and undermining shared truths, a profound concern at a time when distinguishing real images from AI-generated ones is already difficult.