OpenAI unveils AI safety framework

OpenAI charts a safety course for its advanced AI models, empowering its board to reverse safety decisions. The plan commits the company to deploying its latest technology only if it is deemed safe in key risk areas such as cybersecurity and nuclear threats.


OpenAI has presented a safety framework for its advanced AI models in a plan released on its website, which allows the company's board to overturn safety decisions. The Microsoft-backed company will deploy its latest technology only if it is deemed safe in key areas such as cybersecurity and nuclear threats. OpenAI is also establishing an advisory group to review safety reports and forward them to company executives and the board. While executives hold decision-making power, the board retains the authority to reverse their decisions.

The framework outlines strategies to monitor, evaluate, and safeguard against potential risks associated with increasingly powerful AI. It integrates various safety teams to mitigate current risks, anticipate emerging threats, and establish safety protocols. Employing rigorous assessments and continuous updates, OpenAI aims to ensure the safe development and deployment of frontier AI models. This evolving framework emphasizes ongoing improvement, collaboration, and external accountability.

Since ChatGPT’s launch, concerns about AI risks have been prominent among both researchers and the public. While generative AI has impressed users with its creative abilities, it has also raised concerns about its potential to spread disinformation and manipulate people.

Why does this matter?

In April, a coalition of AI experts called for a halt in developing systems more powerful than OpenAI’s GPT-4, citing societal risks. A May Reuters/Ipsos poll found that over two-thirds of Americans harbor concerns about AI’s potential negative impacts, with 61% fearing it could pose a threat to civilization. Furthermore, these steps come after the conflict between CEO Sam Altman and OpenAI’s board over safety questions, in which Altman was fired and then re-hired and the board’s composition was reshuffled.