G7 officials agree to develop international code of conduct for AI
The code of conduct will require companies to make commitments to address potential societal harm resulting from their AI systems.
G7 officials have reached an agreement to develop an international code of conduct for AI. This voluntary code seeks to provide guidelines for, and oversight of, the use of AI technology. It will set out specific principles governing advanced forms of AI, such as generative AI, and is expected to be presented to G7 leaders in November.
Under the code, companies will commit to addressing potential societal harms arising from their AI systems and will be expected to take proactive measures to prevent negative impacts on individuals and society. The code will also emphasise the implementation of robust cybersecurity controls to secure AI technology throughout its development and use. To mitigate potential misuse of AI, it will establish risk management systems designed to help regulate the technology and prevent malicious or unethical practices.
G7 officials will reconvene in Kyoto, Japan, in early October to discuss and finalise the code of conduct. Following this, the digital ministers of the G7 countries will hold a virtual meeting in November or December to complete the process.
Why does it matter?
The G7 acknowledges the need to navigate the challenges AI poses to democratic values, individuals, and society. The move also echoes the European Commission’s approach of developing voluntary AI guardrails ahead of binding AI legislation; similar measures have been adopted in the US and drafted in Canada.