OpenAI highlights the dual-edged nature of superintelligent AI systems, which promise significant benefits but also pose potentially catastrophic risks. To mitigate these dangers, the company advocates rigorous AI safety research and suggests the industry may need to slow development until such systems can be reliably aligned and controlled. A central concern is progress toward recursive self-improvement, widely regarded as a pivotal step on the path to artificial general intelligence (AGI). Public figures, including Prince Harry and Meghan Markle, have joined calls for a ban on superintelligent AI, citing its threats to humanity, while experts such as Andrej Karpathy argue AGI remains a decade away because of unresolved problems like the lack of continual learning. OpenAI contends that traditional regulatory measures may not suffice, recommending unified AI regulations, strategic partnerships with government, and an AI resilience framework to guard systems against misuse. Despite these challenges, OpenAI anticipates notable AI advancements by 2026 and beyond.
OpenAI Sounds Alarm on ‘Potentially Catastrophic’ Risks of Superintelligent AI, Proposes Global Safety Measures | Technology News