Apple researchers have identified significant limitations in large reasoning models (LRMs), calling into question the industry's push toward ever more powerful AI systems. In a recently published paper, they report that LRMs suffer a "complete accuracy collapse" when faced with highly complex problems, while standard models actually outperform them on simpler tasks. As complexity rises, both model types struggle, and LRMs begin to reduce their reasoning effort even as the difficulty increases. This inefficiency surfaced in tests built around puzzles, raising concerns about the sustainability of current AI approaches. Gary Marcus, a long-standing academic voice of caution, described the findings as "devastating," arguing they challenge the notion that large language models (LLMs) will lead directly to artificial general intelligence (AGI). The paper suggests the field may be nearing a dead end, as LRMs fail to develop generalizable reasoning, hinting at fundamental barriers to further advances in AI capabilities.
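The puzzles referenced in the study include classics such as the Tower of Hanoi, whose difficulty can be dialed up in a controlled way. As a minimal illustrative sketch (not the authors' actual evaluation harness, and assuming disk count as the difficulty knob), the snippet below shows how the minimum solution length grows exponentially with the number of disks, which is the kind of complexity scaling that exposes the reported collapse.

```python
def hanoi_moves(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list[tuple[str, str]]:
    """Return the optimal move sequence for n disks (length 2**n - 1)."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then restack.
    return (
        hanoi_moves(n - 1, src, dst, aux)
        + [(src, dst)]
        + hanoi_moves(n - 1, aux, src, dst)
    )

if __name__ == "__main__":
    # Difficulty scales with disk count: the required solution length grows
    # exponentially, so each added disk roughly doubles the work a model
    # must carry out correctly end to end.
    for disks in range(3, 11):
        print(f"{disks} disks -> {len(hanoi_moves(disks))} moves (2**n - 1 = {2**disks - 1})")
```

A ten-disk instance already requires 1,023 error-free moves, which illustrates why accuracy can fall off sharply once problem complexity passes a threshold.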
