OpenAI’s Sora 2 app lets users create hyper-realistic AI-generated deepfake videos, raising concerns about the authenticity of visual content. Despite visible watermarks and metadata intended to trace a video’s origin, critics argue that deceptive uses still proliferate, particularly on social media. Notable incidents include fake videos misrepresenting social issues, prompting calls for stricter safeguards.

Experts such as Hany Farid describe a “perfect storm” of AI technology, political polarization, and social media dynamics that complicates public trust in media. OpenAI’s policies against misleading content are widely seen as insufficient, and questions of accountability and regulation in the tech sector are growing more urgent.

Users are encouraged to learn how these technologies work so they can better distinguish real content from fake, underscoring the need for public awareness. As society grapples with these challenges, seeking out reliable sources and moderating social media use will be crucial for preserving trust in media and maintaining a shared sense of reality.
