OpenAI’s new Safety Committee takes independent role

The move follows the committee’s initial recommendations, which were recently made public for the first time.

OpenAI, the company behind the popular AI chatbot ChatGPT, has announced that its newly established Safety and Security Committee will now operate independently to oversee the development and deployment of its AI models. The decision follows the committee’s recent recommendations, which were released publicly for the first time. Formed in May, the committee aims to enhance and refine OpenAI’s safety practices amid growing concerns about AI’s ethical use and potential biases.

The committee will be led by Zico Kolter, a professor at Carnegie Mellon University and a member of OpenAI’s board. Under its guidance, OpenAI plans to establish an ‘Information Sharing and Analysis Center’ to facilitate the exchange of cybersecurity information within the AI industry. The company is also focusing on strengthening internal security measures and increasing transparency about the capabilities and risks of its AI technologies.

In a related development, OpenAI has partnered with the US government to further research and evaluate its AI models. The move underscores the company’s commitment to addressing both the opportunities and the challenges posed by AI as the technology continues to evolve.