Former OpenAI scientist aims to develop superintelligent AI safely

Sutskever, 37, played a pivotal role in developing generative AI models like ChatGPT through his advocacy of the scaling hypothesis, which posits that AI performance improves as models are trained with ever greater amounts of computing power.


Ilya Sutskever, OpenAI’s former chief scientist, has launched a new company called Safe Superintelligence (SSI) to develop safe AI systems that significantly surpass human intelligence. In an interview, Sutskever explained that SSI intends to take a different approach to scaling than OpenAI, emphasising the need for safety in superintelligent systems. He believes that once superintelligence is achieved, it will transform our understanding of AI and introduce new challenges for ensuring its safe use.

Sutskever acknowledged that defining what constitutes ‘safe’ AI is still a work in progress, requiring significant research to address the complexities involved. He also noted that as AI becomes more powerful, safety concerns will intensify, making rigorous testing and evaluation of AI systems essential. While the company does not plan to open-source all of its work, there may be opportunities to share parts of its research related to superintelligence safety.

SSI aims to contribute to the broader AI community’s safety efforts, which Sutskever views positively. He believes that as AI companies progress, they will realise the gravity of the safety challenges they face and that SSI can make a valuable contribution to this ongoing conversation.