Language models impress but miss real-world understanding

Yann LeCun argues that language models impress with words but fall short on real-world reasoning, raising doubts about their role in achieving true artificial intelligence.

Growing confidence in chatbots may mask fundamental weaknesses in how artificial intelligence understands the real world, warns AI pioneer Yann LeCun.

Leading AI researcher Yann LeCun has argued that large language models only simulate understanding rather than genuinely comprehend the world. Their intelligence, he said, lacks grounding in physical reality and everyday common sense.

Despite being trained on vast amounts of online text, LLMs struggle with unfamiliar situations, according to LeCun. Real-world experience, he noted, provides richer learning than language alone ever could.

Drawing on decades of AI research, LeCun warned that enthusiasm around LLMs mirrors earlier hype cycles that promised human-level intelligence. Similar claims have repeatedly failed to deliver since the 1950s.

Instead of further scaling language models, LeCun urged greater investment in ‘world models’ that can reason about actions and consequences. He also cautioned that current funding patterns risk sidelining alternative approaches to AI.
