A recent Dartmouth College study published in the Proceedings of the National Academy of Sciences shows that AI, particularly large language models (LLMs), can be used to manipulate public opinion polls, making it nearly impossible to distinguish human from bot responses. The research exposes a significant vulnerability in online survey infrastructure, which the author describes as an existential threat to unsupervised online research. The study's author, Sean Westwood, demonstrates how LLMs can convincingly imitate human respondents and bias survey outcomes, undermining the integrity of polling around crucial elections such as the 2024 US presidential election. Notably, as few as 10 to 52 AI-generated responses could skew poll results dramatically. Moreover, such a bot can produce flawless English answers even when it is programmed in another language, raising concerns about foreign interference. To counter the threat, Westwood stresses the urgent need for reliable methods of verifying genuine human participation in surveys, both to safeguard democratic accountability and to protect the scientific knowledge ecosystem.
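To see why such small numbers matter, consider a back-of-the-envelope sketch (not taken from the study itself): in a hypothetical poll of 1,500 respondents where one candidate leads by about one percentage point, injecting 52 fabricated responses, the upper bound cited above, is enough to flip the reported leader. The Python sketch below makes the arithmetic explicit; the sample size and vote split are illustrative assumptions.

```python
# Illustrative sketch only: shows how a handful of fabricated responses
# can flip the reported leader in a close poll. All figures below are
# hypothetical assumptions, not data from the Dartmouth study.

def poll_margin(votes_a: int, votes_b: int) -> float:
    """Return candidate A's lead over candidate B in percentage points."""
    total = votes_a + votes_b
    return 100 * (votes_a - votes_b) / total

# Hypothetical close poll: 1,500 genuine respondents, A ahead by ~1 point.
a, b = 758, 742
print(f"Genuine sample: A leads by {poll_margin(a, b):+.2f} pts")   # +1.07

# Inject 52 bot responses for candidate B (the upper bound cited above).
bots = 52
print(f"With {bots} bots for B: A leads by {poll_margin(a, b + bots):+.2f} pts")  # -2.32
```

Under these assumptions the injected responses amount to barely 3 percent of the sample, yet they move the margin by more than three points, comfortably inside the typical margin of error that consumers of polls are told to expect.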