OpenAI appoints AI safety expert as director

OpenAI has appointed AI safety expert Zico Kolter to its board, reflecting its commitment to responsible AI development.

OpenAI faces major shifts as key figures, including John Schulman and Ilya Sutskever, exit.

OpenAI, one of the largest AI research organizations, has appointed Zico Kolter, a distinguished professor and director of the machine learning department at Carnegie Mellon University, to its board of directors. Renowned for his focus on AI safety, Kolter will also join the company’s safety and security committee, which is tasked with overseeing the safe deployment of OpenAI’s projects. The appointment comes as OpenAI’s board undergoes changes in response to growing concerns about the safety of generative AI, which has seen rapid adoption across various sectors.

Following the departure of co-founder John Schulman, Kolter’s addition to the OpenAI board underscores a commitment to addressing these safety concerns. He brings a wealth of experience from his roles as the chief expert at Bosch and chief technical adviser at Gray Swan, a startup dedicated to AI safety. Notably, Kolter has contributed to developing methods that automatically assess the safety of large language models, a crucial area as AI systems become increasingly sophisticated. His expertise will be invaluable in guiding OpenAI as it navigates the challenges posed by the widespread use of generative AI technologies such as ChatGPT.

The safety and security committee, formed in May shortly after Ilya Sutskever’s departure, includes Kolter alongside CEO Sam Altman and other directors, underlining OpenAI’s proactive approach to ensuring AI is developed and deployed responsibly. The committee is responsible for making recommendations on safety decisions across all of OpenAI’s projects, reflecting the company’s recognition of the potential risks associated with AI advancements.

In a related move, Microsoft relinquished its board observer seat at OpenAI in July, aiming to address antitrust concerns from regulators in the United States and the United Kingdom. This decision was seen as a step towards maintaining a balance of power within OpenAI, as the company continues to play a leading role in the rapidly evolving AI landscape.