AI-driven disinformation threatens public trust, Nobel economist warns
Economic modelling suggests AI-generated disinformation could further damage the online information environment unless governments introduce stronger digital platform regulation.
Research by Nobel Prize-winning economist Joseph Stiglitz and Columbia University’s Maxim Ventura-Bolet argues that AI could worsen the economics of misinformation by making low-quality and misleading content cheaper and easier to produce at scale.
According to an analysis in The Strategist, their economic modelling suggests that digital markets reward misleading and emotionally charged content because it drives engagement, advertising revenue and data collection. The analysis argues that without regulation, markets are likely to produce more disinformation and less reliable information as AI lowers the cost of content production.
The article says social media platforms and AI systems have reshaped how people consume information. Instead of visiting original news sources, users increasingly rely on algorithm-driven feeds, search summaries and AI-generated overviews, reducing traffic and revenue for original publishers.
It also argues that AI systems can intensify the problem by producing large volumes of convincing but unreliable material quickly and cheaply. Because AI tools draw on online information both for training and for generating outputs, distorted or misleading data can feed back into the information ecosystem and further degrade its quality.
The analysis links the issue to political polarisation, warning that audiences are more likely to engage with information that reinforces existing beliefs. That demand can further reward producers of misleading content while putting additional pressure on public-interest journalism.
Stiglitz and Ventura-Bolet argue that market forces alone will not correct the decline in information quality. The article says possible responses include stronger platform accountability for content amplification, obligations to address coordinated disinformation campaigns and intellectual property protections for news producers.
The analysis also points to Australia’s memorandum of understanding with Anthropic as a sign of engagement between government and AI companies, while stressing that voluntary cooperation is not a substitute for regulation.
Why does it matter?
The analysis highlights how AI and platform algorithms can affect the economic incentives behind public information, not only the speed at which false content spreads. If engagement-based systems continue to reward misleading material while weakening the revenue base for quality journalism, the risks extend beyond individual misinformation incidents to the overall reliability of the online information environment.
That matters for democratic debate, public trust and informed decision-making. It also raises regulatory questions about platform accountability, the use of news content by AI systems and whether voluntary agreements with technology companies are enough to protect the information ecosystem.
