OpenAI CEO warns of possible exit from EU over new AI regulations
OpenAI’s CEO, Sam Altman, warns that the company could cease operating in the EU because of new AI regulations. His concerns centre on the classification of ‘high-risk’ systems, which could affect models like ChatGPT and GPT-4. Altman stresses the need to fine-tune the legislation and proposes a balanced approach.
OpenAI’s CEO, Sam Altman, warned that the company might cease its operations in the EU if it cannot meet the requirements of upcoming AI regulations. Altman’s main worry stems from the EU’s classification of ‘high-risk’ systems, which might encompass OpenAI’s large AI models such as ChatGPT and GPT-4. OpenAI is concerned that, due to technical limitations, it may be unable to comply with the additional safety measures that this designation would entail. Altman stressed the importance of refining the details of the legislation and proposed a regulatory approach that combines elements of both the European and US strategies.
In an appearance at University College London, Altman also addressed the risks associated with AI, particularly AI-generated disinformation tailored to individuals’ biases and its potential impact on future elections. He emphasised, however, that social media platforms play a more prominent role in spreading disinformation than large language models do. Nonetheless, Altman remains optimistic, arguing that the benefits of AI technology outweigh its risks.
When questioned about wealth distribution in an AI-dominated future, he acknowledged the need to reassess how wealth is shared. He also disclosed that the company intends to address the topic openly in 2024, after completing a comprehensive five-year study on universal basic income.
Under the EU AI Act, high-risk AI applications face a more rigorous classification process. To be considered high-risk, an application must not only pertain to critical areas and use cases but also pose a significant risk to people’s health, safety, or fundamental rights. The legislation outlines obligations and penalties for misclassification, specifically identifying certain sectors and platforms as falling within the high-risk categorisation.

To ensure responsible AI usage, the AI Act imposes specific obligations on providers of high-risk AI. These encompass risk management, data governance, technical documentation, and record keeping. Furthermore, users of high-risk AI solutions are required to conduct a fundamental rights impact assessment, taking into account potential negative effects on marginalised groups and the environment.
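The two-part test described above, a critical area or use case plus a significant risk, can be summarised schematically. The Python sketch below is a hypothetical illustration of that conjunction, not a rendering of the legal text; the area list, field names, and the is_high_risk function are all invented for the example.

```python
from dataclasses import dataclass

# Hypothetical, simplified model of the AI Act's two-part high-risk test:
# an application is high-risk only if it BOTH falls within a listed
# critical area or use case AND poses a significant risk to health,
# safety, or fundamental rights. The categories below are illustrative,
# not taken from the legislation.
CRITICAL_AREAS = {"biometrics", "critical_infrastructure",
                  "employment", "law_enforcement", "education"}

@dataclass
class AIApplication:
    name: str
    area: str               # domain the system operates in
    significant_risk: bool  # poses a significant risk to health,
                            # safety, or fundamental rights

def is_high_risk(app: AIApplication) -> bool:
    # Both conditions must hold: operating in a critical area alone
    # is not enough under the Act's two-part test.
    return app.area in CRITICAL_AREAS and app.significant_risk

# A CV-screening tool in employment that carries significant risk is
# classified high-risk; a chatbot outside the listed areas is not.
print(is_high_risk(AIApplication("cv_screener", "employment", True)))    # True
print(is_high_risk(AIApplication("recipe_bot", "entertainment", True)))  # False
```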