In mid-2023, researchers uncovered a network of more than 1,000 social bot accounts on Twitter (now X), dubbed the “fox8” botnet, that was amplifying cryptocurrency scams. The accounts posted content generated by AI models such as ChatGPT and gamed X’s recommendation algorithm by simulating realistic interactions and engagement.

The evolution of social bots into sophisticated AI swarms capable of producing varied, credible content poses a significant threat to democratic processes. Malicious actors can use these agents to manipulate public opinion, manufacture a false sense of consensus, and erode trust in online discourse. Meanwhile, current U.S. policies are scaling back oversight and research funding in this area, creating a permissive environment for influence operations.

Mitigating these risks requires researcher access to social media platform data, better methods for detecting coordinated inauthentic behavior, and regulatory measures that limit the monetization of inauthentic engagement. As AI capabilities continue to advance, prompt action is needed to counter the manipulation these technologies enable.
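To make the “detecting coordinated behavior” recommendation concrete, here is a minimal sketch of one commonly used signal: clusters of accounts posting near-duplicate text within a short time window. The data fields, thresholds, and account names are illustrative assumptions, not taken from the article, and real detection systems combine many richer signals (posting cadence, follower graphs, content embeddings).

```python
# Sketch: flag pairs of accounts that post near-duplicate text close in time.
# Field names ("account", "text", "timestamp") and thresholds are assumptions
# for illustration only, not the method used by any specific platform.
from dataclasses import dataclass
from difflib import SequenceMatcher
from itertools import combinations


@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # seconds since epoch


def coordinated_pairs(posts, min_similarity=0.9, max_gap_seconds=3600):
    """Return account pairs that posted highly similar text within the window."""
    flagged = set()
    for a, b in combinations(posts, 2):
        if a.account == b.account:
            continue
        if abs(a.timestamp - b.timestamp) > max_gap_seconds:
            continue
        # Character-level similarity; embeddings would catch paraphrases too.
        similarity = SequenceMatcher(None, a.text, b.text).ratio()
        if similarity >= min_similarity:
            flagged.add(tuple(sorted((a.account, b.account))))
    return flagged


# Example: two accounts pushing the same scam text minutes apart get flagged.
posts = [
    Post("acct_a", "Huge crypto giveaway, click now!", 1_700_000_000),
    Post("acct_b", "Huge crypto giveaway, click now!!", 1_700_000_300),
    Post("acct_c", "Photos from my hike this weekend", 1_700_000_400),
]
print(coordinated_pairs(posts))  # {('acct_a', 'acct_b')}
```

The pairwise comparison here is quadratic in the number of posts, so production systems typically pre-cluster by hashing or embedding before scoring candidate pairs; the point of the sketch is only the underlying idea of combining content similarity with temporal proximity.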
