Artificial intelligence (AI) agents offer significant convenience but raise privacy concerns, especially around sensitive data such as Social Security numbers. Researchers at the Rochester Institute of Technology (RIT), including cybersecurity professor Yidan Hu and Ph.D. student Ye Zheng, are addressing these risks with AudAgent, a tool that continuously audits AI agents' data practices to verify compliance with privacy policies.

Their research shows that agents built on generative systems such as ChatGPT can unintentionally store and share sensitive information while carrying out tasks. In the team's tests, several AI-powered systems, including Claude and Gemini, handled sensitive data inadequately, while GPT-4o demonstrated stronger safeguards.

RIT's work aims to improve transparency and accountability in AI interactions and advocates for more explicit privacy policies. By integrating AudAgent into AI development, the researchers hope to protect user data while preserving the benefits of these technologies, underscoring the importance of safeguarding privacy in a rapidly evolving AI landscape.
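AudAgent's actual implementation is not described in the article. As a minimal sketch of the general idea of continuously auditing an agent's outputs against a privacy policy, the following illustrative code flags messages containing a US Social Security number pattern; the pattern, policy flag, and function names are assumptions for illustration only.

```python
import re

# Illustrative sketch only: not AudAgent's real implementation.
# Matches the common US SSN format NNN-NN-NNNN.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def audit_message(message: str, policy_allows_ssn: bool = False) -> list:
    """Return a list of policy violations found in a single agent message."""
    violations = []
    if not policy_allows_ssn and SSN_PATTERN.search(message):
        violations.append("SSN disclosed without policy authorization")
    return violations

def audit_stream(messages):
    """Audit each message an agent emits; yield (index, violations) for flagged ones."""
    for i, msg in enumerate(messages):
        found = audit_message(msg)
        if found:
            yield i, found

# Example: the second message would be flagged by this sketch.
log = [
    "Booking your flight now.",
    "Your SSN 123-45-6789 was forwarded to the vendor.",
]
flagged = list(audit_stream(log))
```

A real continuous auditor would cover many more data types and compare observed data flows against the stated privacy policy, but the loop above captures the basic monitor-and-flag pattern.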