OpenAI’s recent report sheds light on its ongoing battle against cyber threats, from criminal groups to state-backed campaigns, while also balancing user privacy concerns. Since February 2024, the company has dismantled more than 40 networks that misused its AI models, with notable offenders including a Cambodian crime ring and Russian actors generating deepfake prompts. OpenAI stresses that it does not surveil individual chats but instead focuses on patterns of “threat actor behavior” to protect the user experience. Amid rising concern over AI’s psychological effects, underscored by recent tragic incidents, the company says it has developed mechanisms to recognize users in distress and redirect them to real-world help. While acknowledging that safety checks can falter in prolonged interactions, OpenAI says it is committed to ongoing improvements. The report underscores the complex challenge of ensuring AI safety and sensitivity in a rapidly evolving landscape of ethical and security issues.