OpenAI confident it can build AGI, but safety concerns persist
Despite OpenAI’s ambitions, concerns remain over AI safety, with the company acknowledging it lacks solutions for controlling superintelligent systems.
OpenAI CEO Sam Altman has stated that the company believes it knows how to build AGI and is now turning its focus towards developing superintelligence. He argues that advanced AI could significantly accelerate scientific discovery and economic growth. While OpenAI's charter defines AGI as highly autonomous systems that outperform humans at most economically valuable work, OpenAI and Microsoft have also reportedly used a financial benchmark, $100 billion in profits, as a key measure.
Despite Altman’s optimism, today’s AI systems still struggle with accuracy and reliability. OpenAI has previously acknowledged that a successful transition to a world with superintelligence is not guaranteed, and that controlling such systems remains an unsolved problem. The company has, however, recently disbanded key safety teams, including the Superalignment group tasked with that very problem, prompting concerns about its priorities as it seeks further investment.
Altman remains confident that AI will soon deliver tangible value to businesses, suggesting that AI agents could join the workforce and reshape how companies operate in the near future. He insists that OpenAI continues to balance innovation with safety, despite growing scepticism from former staff and industry critics.