EU’s AI Act influences New Zealand’s digital strategy

Dr Lynch highlights the need for legal safeguards, while Professor Markus Luczak-Roesch recommends Estonia and Norway as models.


As governments worldwide grapple with AI regulation and digital identity strategies, many are looking to the EU for guidance. In New Zealand, the EU's AI Act and the EU Digital Identity (EUDI) wallet programme serve as valuable models. Dr Nessa Lynch, an expert on emerging technology regulation, highlights the need for legal and policy safeguards to ensure AI development prioritises public interests over commercial ones. She argues that the EU's AI Act, framed as product safety legislation, protects people from high-risk uses of AI and promotes trustworthy AI, though she notes the Act's controversial exceptions for law enforcement and national security.

Lynch emphasises that regulation must balance innovation and trust. For New Zealand, adopting a robust regulatory framework is crucial for fostering public trust in AI. Current gaps in New Zealand's privacy and data protection laws, along with unclear guidelines on AI use, could hinder both innovation and public confidence. Lynch stresses the importance of a people-centred approach to regulation, ensuring AI is used responsibly and ethically.

Similarly, New Zealand's digital identity strategy is evolving alongside its AI regulation. The recently launched New Zealand Trust Framework Authority will verify digital identity service providers. Professor Markus Luczak-Roesch from Victoria University of Wellington highlights the transformative potential of digital ID, which he says must be managed in line with national values. He points to Estonia and Norway as models for integrating digital ID with robust data infrastructure and ethical AI development, stressing the importance of avoiding technologies that may carry unethical components or values incompatible with New Zealand's own.