A 2025 study by Model Evaluation and Threat Research (METR) has sparked debate about AI’s effectiveness in software development. Contrary to expectations, experienced developers took 19% longer to complete tasks when using AI tools than when working without them. The trial involved 16 experienced developers tackling real tasks, with AI assistance provided through tools such as Cursor Pro paired with Claude models. Although developers anticipated a 24% speedup, and afterward estimated that AI had made them roughly 20% faster, the measured result was a slowdown, driven largely by time spent reviewing and correcting AI-generated code. METR’s findings challenge more optimistic studies that reported significant productivity gains from AI tools, and they illustrate the complexity of real-world coding tasks. The researchers noted that AI struggles with project-specific nuances that experienced developers navigate easily. As firms pursue productivity gains, leaders are advised to track objective metrics rather than self-reported speedups, pair AI tools selectively with less experienced coders, and prioritize thorough code review to mitigate cognitive load and context-switching costs.