Australia to establish an advisory body for AI oversight and regulation

Australia’s government has announced plans to establish an advisory body to address the risks posed by AI. The government also intends to introduce guidelines encouraging technology companies to label and watermark content created by AI.


In a significant move towards improving oversight and regulation of AI technologies, Australia has announced plans to establish an advisory body to address the associated risks. The announcement reflects a global trend in which countries are increasingly recognising the need for comprehensive measures to govern the evolving landscape of AI.

The groundwork for this plan was laid with a comprehensive consultation on AI that was initiated last year. On 17 January, the government released its interim response to the ‘Safe and Responsible AI in Australia’ consultation, emphasising the need for stronger protections to manage the risks associated with AI.

As part of its strategy, the Australian government is set to collaborate with industry bodies to introduce voluntary guidelines that encourage technology companies to label and watermark AI-generated content. The primary objective behind these measures is to promote transparency and accountability in the use of AI, addressing concerns about inconsistent adoption and a prevailing lack of public trust.

The government stated that despite AI’s potential to significantly boost the economy, low public trust has hindered the technology’s widespread adoption. In response, the government aims to confront these challenges head-on and foster wider acceptance of AI.

The government’s response is particularly targeted at high-risk settings where the potential harms of AI could be challenging to reverse. Simultaneously, efforts are being made to ensure that low-risk AI applications can continue to flourish with minimal impediment.

Among its immediate actions, the government is considering the introduction of mandatory guardrails for AI development and deployment in high-risk settings. These guardrails may take the form of amendments to existing laws or the creation of new, AI-specific legislation.

Key considerations for mandatory guardrails include testing procedures to ensure the safety of products before and after release, transparency in model design and the data underpinning AI applications, and mechanisms for accountability. The government is also exploring mandatory training for developers and deployers of AI systems, potential forms of certification, and clearer expectations of accountability for organisations involved in AI systems.