A recent report from OpenAI revealed that suspected Chinese government operatives sought ChatGPT's assistance in developing tools for large-scale surveillance and in promoting software designed to scan social media for "extremist speech." The findings highlight growing concern over the misuse of artificial intelligence (AI) for authoritarian ends, especially amid the ongoing US-China competition in AI, where billions of dollars are being invested.

OpenAI noted that state actors are increasingly employing AI for mundane tasks such as data analytics rather than for groundbreaking innovation. Notable examples include requests related to monitoring the Uyghur population and generating promotional materials for social media surveillance tools. The report also found that state actors from countries including Russia and North Korea use AI to enhance their operations, often to improve the language quality of influence campaigns. Notably, potential scam victims are increasingly using ChatGPT to identify fraudulent messages, underscoring the technology's dual role in cybersecurity.