How AI training data is influencing what users believe
You might think you are simply looking up a fact, but a new Yale study suggests that asking an AI chatbot a straightforward question could be quietly reshaping your political views.
The study, published in PNAS Nexus, found that AI chatbots can subtly shift users’ social and political opinions, even when they are asked for factual information and make no deliberate attempt to persuade.
Researchers tested 1,912 participants, comparing their responses to AI-generated summaries of historical events against their responses to Wikipedia entries, and found measurable differences in opinion.
The culprit, researchers say, is ‘latent bias’: ideological leanings embedded in the data used to train large language models, which subtly colour the framing of otherwise accurate responses.
Default summaries generated by GPT-4o consistently nudged readers towards more liberal opinions compared to Wikipedia entries, even without any deliberate prompting.
Unlike Wikipedia, which makes its editorial process transparent, AI development remains largely opaque, giving the companies behind these models an unacknowledged ability to shape public opinion.
