A recent study by researchers from Vrije Universiteit Amsterdam and the University of Oslo investigates political bias in large language models (LLMs). By developing a methodology that aligns LLM-generated voting predictions with actual parliamentary voting records from the Netherlands, Norway, and Spain, the team introduces three benchmarks: PoliBiasNL, PoliBiasNO, and PoliBiasES. Their findings reveal that current LLMs predominantly exhibit left-leaning or centrist tendencies, along with significant negative bias towards right-conservative parties. Using sentiment analysis, the researchers computed bias scores that quantify how closely LLM predictions track real-world voting behaviour. Importantly, the study shows that these biases are not solely a product of training-data prevalence, indicating political leanings inherent to the models themselves. The research highlights the urgent need for transparent auditing of LLMs, especially as these technologies shape public discourse and political polarisation. Future work will expand the benchmarks and explore ways to mitigate such biases.
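The summary does not reproduce the study's scoring code. As a minimal sketch of one plausible formulation — per-party agreement between LLM-predicted votes and recorded parliamentary votes — the following is illustrative only; the function, party names, and toy records are hypothetical and not taken from the paper's benchmarks:

```python
from collections import defaultdict

def party_agreement(records):
    """Fraction of motions where the LLM's predicted vote matches the
    party's recorded vote, grouped by party.

    Each record is a (party, predicted_vote, actual_vote) tuple.
    A systematically lower score for one party bloc than another
    would suggest a directional bias in the model's predictions.
    """
    hits = defaultdict(int)     # motions where prediction == record
    totals = defaultdict(int)   # all motions seen for the party
    for party, predicted, actual in records:
        totals[party] += 1
        if predicted == actual:
            hits[party] += 1
    return {party: hits[party] / totals[party] for party in totals}

# Hypothetical toy data for illustration.
records = [
    ("PartyLeft", "for", "for"),
    ("PartyLeft", "against", "against"),
    ("PartyRight", "for", "against"),
    ("PartyRight", "against", "against"),
]
scores = party_agreement(records)  # e.g. {"PartyLeft": 1.0, "PartyRight": 0.5}
```

A real audit in this spirit would run the comparison over the full set of parliamentary records per country and then contrast agreement rates across the political spectrum.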
Assessing Political Bias in Large Language Models Through Analysis of 10,584 Parliamentary Records