Friday, August 15, 2025

Could This Be the Limit for AI Advancement?

In this week’s Open Questions column, Cal Newport traces the evolution of artificial intelligence since OpenAI’s pivotal 2020 report on scaling laws for neural language models. Researchers, including Jared Kaplan, argued that larger language models would keep improving predictably as they grew, challenging the prior belief that increased size would yield diminishing returns. The theory gained traction with the release of GPT-3, igniting hopes for artificial general intelligence (AGI). Skepticism soon emerged, however, notably from Gary Marcus, who questioned whether the scaling laws were truly universal. The recent launch of GPT-5 drew mixed reviews and renewed criticism that the anticipated advances had been overhyped. As AI companies grapple with stagnating gains, they have begun shifting toward post-training techniques, which refine pre-trained models rather than simply scaling them up. The shift signals a broader reevaluation of AI development strategy: future progress may come from optimizing existing models rather than from ever-greater computational power.
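For context, the 2020 report’s central claim can be stated compactly: a model’s test loss falls as a smooth power law in its parameter count. A simplified sketch of the parameter-count law follows, using the approximate constants reported in Kaplan et al. (2020); this is the idealized form, assuming data and compute are not the bottleneck:

\[
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N \approx 0.076, \qquad N_c \approx 8.8 \times 10^{13}
\]

Here N is the number of non-embedding parameters and L(N) is the test loss. Because the exponent is small, each tenfold jump in model size buys only a modest drop in loss, but the curve never flattens outright, which is why simply building bigger models looked, for a time, like a viable road to AGI.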
