G7 and EU to establish code of conduct for advanced AI systems

The EU and G7 countries will agree on a code of conduct for companies developing advanced AI systems. Companies will be urged to identify, evaluate, and mitigate risks throughout the AI lifecycle, as well as publish reports on AI capabilities and invest in robust security controls.


On Monday, 30 October 2023, the G7 countries, together with the EU, will establish a code of conduct for companies developing advanced AI systems, according to a document reported by Reuters last week. The voluntary code is intended to shape how major economies govern AI technology and to address concerns about privacy, security risks, and potential misuse.

The voluntary code of conduct, outlined in an 11-point document, strives to promote the adoption of safe, secure, and trustworthy AI systems on a global scale. It specifically provides guidance for companies involved in developing the most advanced AI systems, including foundation models and generative AI systems.

A key element of the code is companies' commitment to take appropriate measures to identify, evaluate, and mitigate risks throughout the entire AI lifecycle. It also urges companies to address incidents or patterns of misuse that may arise after AI products have been introduced to the market. In addition, the code calls on companies to publish public reports on the capabilities, limitations, and use of AI systems and to invest in robust security controls.

The European Union has been at the forefront of AI regulation with its proposed AI Act, which aims to ensure the ethical and responsible use of AI technologies. Earlier this month, at IGF 2023 in Kyoto, Japan, Vera Jourova, the European Commission's digital chief, expressed support for the voluntary code of conduct, stating that it can serve as a solid basis for ensuring safety until formal regulations are in place. This signals policymakers' recognition that immediate action is needed to address the risks and challenges associated with AI technology.

Why does it matter?

The establishment of a code of conduct for AI development by the G7 and the EU reflects a growing trend of introducing voluntary codes. The code will set standards and guidelines for companies, encouraging the responsible and secure use of AI technology. At the same time, the contrast between the EU's push for binding rules, such as the AI Act, and Japan's preference for lighter-touch guidelines highlights the ongoing debate about balancing regulation with innovation in the AI sector.