eSafety Commissioner raises doubts on big tech’s AI self-regulation in Australia
Australian officials, including eSafety Commissioner Julie Inman Grant, have expressed doubts about relying on big tech companies to self-regulate AI development, prompting the Albanese government to pursue new regulations for high-risk AI uses.
Australia’s eSafety Commissioner, Julie Inman Grant, has expressed scepticism about relying on big tech companies to self-regulate their AI development. This comes after the US administration secured voluntary commitments from companies including OpenAI, Google, and Microsoft regarding AI technology development. Inman Grant questioned the effectiveness of these commitments, citing tech giants’ poor track record in enforcing their own pledges, and emphasised the need for measurable metrics to ensure the commitments are implemented and effective.
Australia is currently engaged in a debate over AI regulation, prompting the Albanese government to take action. Notably, Science Minister Ed Husic has unveiled new regulations specifically targeting high-risk AI applications, emphasising trust and responsible deployment. Additionally, Government Services Minister Bill Shorten has proposed establishing an AI expert group to address ethical considerations in the design and implementation of AI systems.
Victorian Government Services Minister Danny Pearson recognises the challenges of regulating fast-moving technology and advocates a principles-based approach, supported by expert consultation, to establish guidelines for AI use in government applications. Pearson stresses the importance of remaining open-minded about the right solutions, given the uncertainty surrounding AI’s impact. NSW Customer Service Minister Jihad Dib likewise acknowledges the rapid pace of technological change and the need for ongoing updates to AI policy frameworks, emphasising the dynamic nature of AI and the value of a flexible approach. Across these positions, Australian officials broadly agree that big tech companies’ self-regulation is insufficient to address the potential risks and harms associated with AI.