OpenAI recently faced backlash after claiming that GPT-5 had solved several unsolved Erdős problems. The statement, amplified by OpenAI executive Kevin Weil, suggested the model had independently produced new mathematical proofs, igniting excitement about AI's potential in research.

The claim was quickly challenged by mathematician Thomas Bloom, who maintains the site cataloguing the problems. He clarified that marking a problem "open" there indicated only his own uncertainty about whether a solution existed, not that the problem was unresolved in the literature; in other words, GPT-5 had located existing solutions rather than producing new proofs. DeepMind CEO Demis Hassabis called OpenAI's communication "embarrassing" as the overreach became evident, and the researchers involved admitted their mistake, raising concerns about accountability in AI announcements given the industry's high stakes.

Even so, GPT-5 remains valuable as a literature-review assistant, helping researchers navigate scattered academic information. Experts such as Terence Tao emphasize that human oversight is essential when integrating AI-generated insights into valid research.