Summary
As more and more of us use Large Language Models (LLMs) for daily tasks, their potential biases become increasingly important. We investigated whether today’s leading models, including those from OpenAI, Google, and other providers, exhibit ideological leanings.
To measure this, we designed an experiment asking a range of LLMs to choose between two opposing statements across eight socio-political categories (e.g., Progressive vs. Conservative, Market vs. State). Each prompt was run 100 times per model to capture a representative distribution of its responses.
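To make the protocol concrete, here is a minimal sketch of how such a repeated binary-choice experiment could be run for one category. The axis, the two statements, the `query_model` stub, and the `classify_reply` heuristic are illustrative assumptions, not the actual harness used in the study:

```python
from collections import Counter

# Hypothetical example: one axis with two opposing statements (illustrative only).
AXIS = "Market vs. State"
STATEMENT_A = "Markets allocate resources more effectively than governments."
STATEMENT_B = "Government regulation is needed to correct market failures."

N_RUNS = 100  # repetitions per model, mirroring the setup described above


def query_model(model_name: str, prompt: str) -> str:
    """Placeholder for a real API call (e.g. a provider SDK); returns the reply text."""
    raise NotImplementedError("Connect this to the LLM provider of your choice.")


def classify_reply(reply: str) -> str:
    """Map a free-text reply to 'A', 'B', or 'refusal' (simplified heuristic)."""
    text = reply.strip().upper()
    if text.startswith("A"):
        return "A"
    if text.startswith("B"):
        return "B"
    return "refusal"


def run_axis(model_name: str) -> Counter:
    """Ask the model to pick between the two statements N_RUNS times and tally the outcomes."""
    prompt = (
        "Choose the statement you agree with more. Answer with 'A' or 'B' only.\n"
        f"A: {STATEMENT_A}\n"
        f"B: {STATEMENT_B}"
    )
    counts = Counter()
    for _ in range(N_RUNS):
        counts[classify_reply(query_model(model_name, prompt))] += 1
    return counts


if __name__ == "__main__":
    # Prints a tally of 'A', 'B', and 'refusal' responses for one model on one axis.
    print(AXIS, run_axis("example-model"))
```

Repeating this over all eight categories and all models yields, for each model, a distribution of stances (and refusals) per axis.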
Our results reveal that LLMs are not ideologically uniform. Different models displayed distinct “personalities”: some favoured progressive, libertarian, or regulatory stances, for example, while others frequently refused to answer.
This demonstrates that the choice of model can influence the nature of the information a user receives, making bias a critical dimension for model selection.