Canadian government proposes a voluntary code of conduct for AI developers

The Canadian government is developing a voluntary code of conduct for AI developers to prevent the creation of harmful or malicious content. The code calls for safeguards against cyberattacks, impersonation, and the misuse of personal data. It also aims to ensure that users can distinguish AI-generated content from human-made content, and it includes provisions for user safety and the avoidance of bias. However, critics argue that the upcoming AI legislation is outdated and lacks input from stakeholders.


The Canadian government is taking steps to regulate the use of artificial intelligence (AI) by outlining a voluntary code of conduct for AI developers. The code aims to prevent AI systems from being used to create harmful or malicious content. Consultations between Innovation Canada and stakeholders are ongoing to finalise the code before the passage of Bill C-27, which includes the Artificial Intelligence and Data Act (AIDA).

The code will require AI developers to implement safeguards to ensure their technology is not used for malicious purposes such as cyberattacks, impersonating real people, or tricking individuals into revealing personal data. Criminals have already used generative AI to clone people's voices and deceive victims' friends and family into handing over cash under false pretences.

The Canadian government is concerned about the potential abuse and misuse of generative AI platforms like ChatGPT and is seeking to regulate the AI industry. While these platforms could revolutionise industries, they pose significant risks if used for nefarious purposes.

The voluntary code of conduct is intended to build trust in AI systems and facilitate a smooth transition to compliance with forthcoming regulatory requirements. Among the key provisions of the code, AI developers will be required to ensure that users can distinguish between AI-generated content and human-made creations. Human oversight will also be necessary to monitor AI systems and ensure responsible use. This requirement aligns with the European Union’s call for online platforms to label AI-generated content to combat misinformation.
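To make the labelling requirement concrete, here is a minimal sketch of how a developer might attach a machine-readable provenance tag to generated text. The function name, manifest fields, and comment-style header are all hypothetical; neither the Canadian code nor the EU rules prescribe a specific format.

```python
import json
from datetime import datetime, timezone

def label_ai_content(text: str, model_name: str) -> str:
    """Prepend a machine-readable provenance label to AI-generated text.

    Illustrative only: the field names below are invented for this sketch,
    not taken from any standard mandated by the Canadian code or EU rules.
    """
    manifest = {
        "generator": model_name,
        "ai_generated": True,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    # A comment-style header lets downstream tools (and readers) tell the
    # generated text apart from human-made content without altering it.
    return f"<!-- ai-provenance: {json.dumps(manifest)} -->\n{text}"

print(label_ai_content("Sample output.", "example-model"))
```

In practice, schemes range from visible labels like this to imperceptible statistical watermarks embedded in the generated text itself; the code of conduct leaves the mechanism to developers.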

In the United States, tech firms including Google, Meta, OpenAI, Anthropic, Amazon, Microsoft, and Inflection have voluntarily agreed to responsible and safe AI development practices, committing to watermark AI content and to safeguard against cyberattacks and discrimination.

The Canadian code also emphasises the importance of user safety, calling on AI companies to ensure the safety and security of their systems. It further highlights the need to critically examine AI systems to avoid biases and the use of low-quality or non-representative data sets; generative AI has faced criticism for perpetuating biases against marginalised communities.
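As an illustration of the kind of data-set scrutiny the code gestures at, the following toy sketch reports how often each demographic group appears in a training set. The record structure, field names, and function name are invented for this example; real bias audits are considerably more involved.

```python
from collections import Counter

def representation_report(records: list[dict], group_key: str) -> dict:
    """Return the share of each demographic group in a data set.

    A toy check for non-representative data; `records` and `group_key`
    are hypothetical names used only for this sketch.
    """
    counts = Counter(rec[group_key] for rec in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

sample = [{"group": "A"}, {"group": "A"}, {"group": "B"}]
print(representation_report(sample, "group"))
# {'A': 0.666..., 'B': 0.333...} -- a skew a developer might then investigate
```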

However, concerns have been raised about the upcoming AI legislation, which critics call outdated and lacking input from various stakeholders. Some argue that the legislation needs extensive revision to clarify which AI technologies it will govern and how they will be regulated. The fact that it was drafted before generative AI systems such as ChatGPT emerged lends weight to this criticism.

The regulation of AI has become a global subject of debate, with countries such as China, the United States, and European nations grappling with how to balance promoting innovation against ensuring user security.

Overall, the Canadian government’s voluntary code of conduct for AI developers is a significant step towards regulating the industry and preventing the misuse of AI systems. The code addresses key concerns such as malicious use, user safety, distinguishing between AI-generated and human-made content, and avoiding biases. However, the effectiveness of the forthcoming AI legislation has been questioned, as it is considered outdated and lacks relevant stakeholder input.
