Leading tech companies commit to responsible development of AI at Seoul AI Summit

Tech companies have agreed on a set of safety commitments on artificial intelligence (AI) at a second global safety summit led by South Korea and the UK.


At the AI Seoul Summit 2024 on Tuesday, sixteen companies leading the charge in artificial intelligence (AI) development pledged to advance this transformative technology responsibly. The initiative, dubbed the 'Frontier AI Safety Commitments', seeks to implement strict safety standards as AI technologies become increasingly integrated into everyday society.

Among the signatories are industry giants such as Amazon, Google, IBM, Meta, Microsoft, and OpenAI, alongside notable firms like Anthropic, Cohere, G42, Inflection AI, Mistral AI, Naver, Samsung Electronics, Technology Innovation Institute, xAI, and Zhipu.ai. The commitments were also backed by a broader Seoul declaration from the Group of Seven (G7) major economies, the EU, Singapore, Australia and South Korea.

These companies have pledged to advance the development of AI within a framework of safety and trust, emphasising the necessity of responsible innovation. As part of their commitment, they have agreed to adhere to a set of voluntary principles designed to mitigate severe risks associated with AI technologies. This includes the publication of a detailed safety framework by the time of the upcoming AI Summit in France.

The commitments cover various aspects of AI safety, from development to deployment. They include rigorous risk assessments across the AI lifecycle, setting thresholds beyond which risks would be deemed intolerable, and implementing effective risk mitigations. The tech giants will also maintain transparency in their operations, updating the public on their methodologies and any significant changes to their practices.

Notably, the commitments also focus on enhancing collaboration within the industry. This involves internal and external red-teaming to identify and mitigate new threats, promoting information sharing, and strengthening cybersecurity measures. Moreover, the signatories have committed to facilitating third-party evaluations of their systems and to developing technologies that allow users to recognise AI-generated content.

Accountability is another critical aspect of these commitments. Each organisation has pledged to develop internal governance frameworks to ensure adherence to these safety protocols and to allocate adequate resources for continuous improvement.