Home AI Researchers Successfully Prompt ChatGPT to Self-Inject Code

Researchers Successfully Prompt ChatGPT to Self-Inject Code

0
ChatGPT icon mobile app on a screen smartphone iPhone. ChatGPT is an artificial intelligence chatbot developed by OpenAI. 20.12.2024. İstanbul, Türkiye

Tenable’s researchers have unveiled a new vulnerability termed “conversation injection,” which highlights the risks associated with chained attacks involving ChatGPT and SearchGPT. They discovered that if SearchGPT outputs a prompt containing injected instructions, ChatGPT can unwittingly process these instructions, believing they are legitimate. This makes the model susceptible to manipulating its own responses based on unauthorized prompts injected from external websites. However, for attackers, simply getting a prompt into ChatGPT is insufficient; they also need a method to retrieve the model’s output, which may consist of sensitive data from the conversational context. This stealthy data exfiltration method exposes potential security risks in AI interactions, emphasizing the urgent need for robust safeguards against such vulnerabilities. Organizations must remain vigilant and implement strategies to mitigate the risks associated with AI conversational models and their data handling practices.

Source link

NO COMMENTS

Exit mobile version