OpenAI’s Sora 2, launched in September, drew criticism for defaulting to allow the use of individuals’ names and likenesses without consent. After backlash from public figures and advocacy groups, OpenAI revised its policy and voiced support for federal legislation such as the NO FAKES Act, which aims to protect creators’ rights over their identities. The bipartisan bill would establish a federal right of publicity, addressing the patchwork of state laws that currently governs identity control in AI-generated content.

Prominent voices, including actors, raised concerns about unauthorized use of their likenesses, prompting OpenAI to move to an opt-in model. The shift underscores the need for responsible AI development practices, including tools like prompt filtering and explicit consent mechanisms.

Creatives and public figures should consult legal experts to safeguard their intellectual property in this evolving landscape. As AI technology advances, the balance between innovation and individual rights remains a central question for developers and creators alike.