Home AI ChatGPT Vulnerabilities Exposed: Prompt Injection Bugs Lead to Risk of Data Theft

ChatGPT Vulnerabilities Exposed: Prompt Injection Bugs Lead to Risk of Data Theft

0
ChatGPT Atlas: SEChatGPT hit by severe prompt-injection bugs enabling silent data theftO, backlink skills set for extinction

OpenAI’s ChatGPT is under scrutiny after cybersecurity researchers at Tenable uncovered seven critical prompt-injection vulnerabilities. These flaws could allow attackers to stealthily steal user data, hijack conversations, and poison long-term memory without any user interaction. The vulnerabilities primarily affect OpenAI’s GPT-4o and GPT-5 models due to their inability to distinguish real user instructions from hidden malicious commands embedded in web content. Notably, a zero-click prompt-injection technique can cause ChatGPT to execute harmful actions just by summarizing information about compromised websites. Other concerns include memory poisoning and bypassed safety filters through trusted domains like Bing.com. Tenable warns that these issues stem from inherent limitations in large language models (LLMs) and highlight the need for stricter content sanitization and tighter AI browsing restrictions. As AI technology evolves, the risks of silent data theft may continue to grow, emphasizing the importance of robust security measures.

Source link

NO COMMENTS

Exit mobile version