
Leading AI researcher Yann LeCun has argued that large language models only simulate understanding rather than genuinely comprehending the world. Their intelligence, he said, lacks grounding in physical reality and everyday common sense.
Despite being trained on vast amounts of online text, LLMs struggle with unfamiliar situations, according to LeCun. Real-world experience, he noted, provides richer learning than language alone ever could.
Drawing on decades of AI research, LeCun warned that enthusiasm around LLMs mirrors earlier hype cycles that promised human-level intelligence. Similar claims, he noted, have repeatedly failed to deliver since the 1950s.
Instead of further scaling language models, LeCun urged greater investment in "world models" that can reason about actions and consequences. He also cautioned that current funding patterns risk sidelining alternative approaches to AI.
