For years, OpenAI has dominated headlines with hype around its AI advances, particularly GPT-3 and ChatGPT. Despite predictions of transformative impacts on science, medicine, and productivity, many of those claims have not materialized. Critics such as Gary Marcus labeled the output of LLMs like ChatGPT “authoritative bullshit” and warned against unrealistic expectations for GPT-4 and its successors. Since GPT-5’s release revealed significant limitations, the narrative has shifted: major publications and academics have echoed Marcus’s views, and the media now acknowledges that scaling alone won’t yield AGI, as skepticism grows around Sam Altman’s promises. This shift raises vital questions about the future of AI, the credibility of OpenAI, and the possibility of a new AI winter. The moral underscores the importance of truth in science, with hope that new, trustworthy advances will emerge as the AI landscape evolves.
![OpenAI’s Waterloo? [with corrections] - Marcus on AI](https://site.server489.com/wp-content/uploads/2025/08/https3A2F2Fsubstack-post-media.s3.amazonaws.com2Fpublic2Fimages2F785d2d1c-dcf1-4296-8148-12286-696x348.jpeg)