Home AI Researcher Warns: Malicious Prompts Could Exploit ChatGPT to Access Your Private Email...

Researcher Warns: Malicious Prompts Could Exploit ChatGPT to Access Your Private Email Data

0
Malicious prompts could exploit ChatGPT to steal data from your private emails, claims researcher

A recent demonstration by developer Eito Miyamura, an Oxford alumnus, has uncovered a significant security vulnerability in OpenAI’s ChatGPT. Miyamura showcased how he exploited the newly introduced Model Context Protocol (MCP) tools to access and leak private user data using just an email address. Despite OpenAI’s intentions for MCP to enhance productivity by connecting ChatGPT to platforms like Gmail and Google Calendar, this demonstration reveals potential security risks. The attack involves sending a calendar invite with a “jailbreak” prompt, enabling the AI to execute harmful instructions, even if the victim does not accept the invite. Currently, MCP tools operate in developer mode with manual approval needed for each session, but Miyamura warns that users might inadvertently compromise their data due to decision fatigue. OpenAI also announced a new feature allowing users to branch conversations dynamically for enhanced flexibility, available to logged-in users on the web.

Source link

NO COMMENTS

Exit mobile version