On December 21, 2025, Andrej Karpathy, a co-founder of OpenAI, released his "2025 LLM Year in Review," detailing significant paradigm shifts in large language models (LLMs). He emphasized a transition in AI training from "probabilistic imitation" to "logical reasoning," driven by Reinforcement Learning with Verifiable Rewards (RLVR). This approach trains models to generate human-like reasoning traces, improving their performance on complex tasks. Karpathy metaphorically described current AI as "summoning ghosts," highlighting its inconsistent abilities: excelling in specialized areas while struggling with basic common sense. He also discussed the emergence of "Vibe Coding," localized agents, and graphical user interfaces for LLMs. Despite rapid advances, he noted that we have tapped less than 10% of this new technology's potential. As the field moves toward "pure machine intelligence," the focus will shift to efficiently enhancing AI's logical reasoning capabilities, marking a critical evolution in AI competitiveness by 2026.
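To make the RLVR idea concrete, here is a minimal, self-contained sketch of training against a verifiable reward rather than imitating reference text. Everything in it is an illustrative assumption: the "policy" is a toy categorical distribution over candidate answers to one arithmetic problem, the verifier is exact answer checking, and the update is a simple REINFORCE-style step. It is not Karpathy's code or any lab's actual training recipe.

```python
# Toy RLVR sketch (assumed setup, for illustration only):
# sample a completion, score it with a verifier, reinforce what verifies.
import math
import random

def verifier(problem: tuple[int, int], answer: int) -> float:
    """Verifiable reward: 1.0 only if the proposed answer is exactly correct."""
    a, b = problem
    return 1.0 if answer == a + b else 0.0

def softmax(logits: list[float]) -> list[float]:
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def train_rlvr(steps: int = 2000, lr: float = 0.5, seed: int = 0) -> list[float]:
    rng = random.Random(seed)
    problem = (2, 3)
    candidates = [4, 5, 6, 7]            # candidate "completions"; only one is correct
    logits = [0.0] * len(candidates)     # start from a uniform policy
    for _ in range(steps):
        probs = softmax(logits)
        idx = rng.choices(range(len(candidates)), weights=probs)[0]  # sample, don't imitate
        reward = verifier(problem, candidates[idx])                   # check the outcome
        baseline = sum(p * verifier(problem, c) for p, c in zip(probs, candidates))
        advantage = reward - baseline
        # REINFORCE update: raise the log-probability of the sampled answer
        # in proportion to how much better than expected it verified.
        for j in range(len(logits)):
            grad = (1.0 if j == idx else 0.0) - probs[j]
            logits[j] += lr * advantage * grad
    return softmax(logits)

if __name__ == "__main__":
    final_probs = train_rlvr()
    print(dict(zip([4, 5, 6, 7], [round(p, 3) for p in final_probs])))
```

The design choice the sketch highlights is the one the review attributes to RLVR: the training signal comes from checking the outcome (here, whether the answer is correct), not from matching a human-written target, which is what distinguishes it from probabilistic imitation.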
