Calls grow to strengthen New Zealand privacy law

Pressure is growing in New Zealand to strengthen the Privacy Act following several high-profile data breaches. Debate intensified after a cyberattack exposed medical records from the Manage My Health patient portal.

The breach affected about 120,000 patients, with attackers threatening to release documents on the dark web. A separate incident forced the MediMap medication platform offline after unauthorised changes were detected in patient records.

Privacy specialists argue that current enforcement powers are too weak to deter serious failures. The Privacy Act allows only limited financial penalties, with fines generally capped at NZD10,000.

Officials are now considering reforms, including stronger penalties for privacy violations. Policymakers also warn that failure to strengthen the law could threaten the country’s EU data adequacy status.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI explains 5 AI value models transforming enterprise strategy

AI is beginning to reshape corporate strategy as organisations shift from isolated technology experiments to broader operational transformation.

According to OpenAI, businesses that treat AI as a collection of disconnected pilots risk missing the bigger structural change that the technology enables.

A new framework describes five value models through which AI can gradually reshape companies. The first stage focuses on workforce empowerment, where tools such as ChatGPT spread AI capabilities across teams and improve everyday productivity.

Once employees develop fluency, organisations can introduce AI-native distribution models that transform how customers discover products and interact with digital services.

More advanced stages involve specialised systems. Expert capability integrates AI into research, creative production, and domain-specific analysis, allowing professionals to explore a wider range of ideas and experiments.

Meanwhile, systems and dependency management introduces AI tools capable of safely updating interconnected digital environments, including codebases, documentation, and operational processes.

The final stage involves full process re-engineering through autonomous agents. In such environments, AI systems coordinate complex workflows across departments while maintaining governance, accountability, and auditability.

Organisations that successfully progress through these stages may eventually redesign their business models rather than merely improving efficiency within existing structures.
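The five stages described above form an ordered progression, which can be sketched as a simple data structure. The stage names below paraphrase the article and are purely illustrative, not an official OpenAI taxonomy or API:

```python
from enum import IntEnum


class AIValueModel(IntEnum):
    """Five value models of AI-driven transformation, in order of maturity."""
    WORKFORCE_EMPOWERMENT = 1    # everyday productivity via tools like ChatGPT
    AI_NATIVE_DISTRIBUTION = 2   # new ways customers discover and use services
    EXPERT_CAPABILITY = 3        # AI in research, creative and domain analysis
    SYSTEMS_MANAGEMENT = 4       # safe updates to codebases, docs, processes
    PROCESS_REENGINEERING = 5    # autonomous agents coordinating workflows


def next_stage(current: AIValueModel) -> "AIValueModel | None":
    """Return the following stage, or None once re-engineering is reached."""
    if current < AIValueModel.PROCESS_REENGINEERING:
        return AIValueModel(current + 1)
    return None
```

The ordering matters in the framework: each stage presumes the fluency built in the previous one, which is why workforce empowerment comes first and autonomous agents last.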

EU watchdog urges limits on US data access

The European Union’s data protection watchdog has urged stronger safeguards as negotiations continue with the US over access to biometric databases. European Data Protection Supervisor Wojciech Wiewiórowski said limits must ensure Europeans’ data is used only for agreed purposes.

Talks between the EU and the US involve potential arrangements that would allow US authorities to query national biometric systems. Databases across the EU contain sensitive information, including fingerprints and facial recognition data.

Past transatlantic data-sharing agreements have faced legal challenges due to insufficient safeguards. European regulators are closely monitoring the Data Privacy Framework amid ongoing concerns about oversight.

Officials also warned that emerging AI technologies could create new surveillance risks linked to US data access. European authorities said they must negotiate as a unified bloc when dealing with the US.

Major crypto exchanges in South Korea face new ownership limits

South Korea’s ruling Democratic Party and the Financial Services Commission have agreed to cap major shareholder stakes in domestic crypto exchanges at 20%. Exceptions of up to 34% would apply to new businesses to support early-stage operators.

Large exchanges like Upbit and Bithumb will have three years to comply, while smaller platforms will receive an additional three-year grace period.

Current ownership exceeds the proposed cap, with Upbit at 25.5%, Bithumb at 73.6%, and Coinone at 53.4%. Korbit’s pending acquisition would give Mirae Asset Consulting 92% ownership, highlighting the extent of concentrated holdings in the market.
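For illustration, the gap between the stakes cited above and the proposed 20% cap can be computed directly. The figures come from the article; the helper function itself is hypothetical:

```python
CAP = 20.0  # proposed cap on major shareholder stakes, in percent

# Current major shareholder stakes reported for each exchange, in percent
holdings = {"Upbit": 25.5, "Bithumb": 73.6, "Coinone": 53.4}


def excess_over_cap(stake: float, cap: float = CAP) -> float:
    """Percentage points a stake would need to shed to comply with the cap."""
    return max(0.0, round(stake - cap, 1))


divestment = {name: excess_over_cap(stake) for name, stake in holdings.items()}
```

On these numbers, Bithumb's majority holder is furthest from compliance, needing to shed over 50 percentage points within the transition period.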

The cap, first proposed by the FSC in January 2026, seeks to curb governance risks from concentrated shareholding. The move gained urgency after Bithumb’s accidental $43 billion Bitcoin transfer raised concerns about internal controls.

The ownership limit will likely be included in South Korea’s upcoming Digital Asset Basic Act, alongside rules on stablecoins and crypto ETFs.

UK to launch new lab for breakthrough AI research

Researchers in the UK will gain a new AI lab designed to drive transformational breakthroughs in healthcare, transport, science, and everyday technology, supported by government funding.

The lab will provide up to £40 million in funding over six years, alongside substantial access to large-scale computing resources, inviting UK researchers to pitch their most ambitious ideas.

The Fundamental AI Research Lab will focus on tackling core AI challenges, including hallucinations, unreliable memory, and unpredictable reasoning.

The lab will support high-risk, blue-sky research rather than simply scaling existing systems. Its goal is to unlock entirely new capabilities that could improve medical diagnoses, infrastructure resilience, scientific discovery, and public services.

UK officials highlighted the country’s strength in world-class universities, AI talent, and a thriving sector attracting over £100 billion in private investment. Experts, including Raia Hadsell of Google DeepMind, will peer-review funding applications, prioritising bold, high-reward proposals.

The initiative is part of the UKRI AI Strategy, which is backed by £1.6 billion and aims to strengthen research and ensure AI benefits society and the economy. UK AI projects like RADAR for rail faults and the IXI Brain Atlas for Alzheimer’s research demonstrate the approach’s potential impact.

AI adoption and jobs debated at India summit

Governments, companies and international organisations gathered in India in February for the AI Impact Summit to discuss the future of AI governance and adoption. Participants focused on economic impacts, labour market changes and sector-specific uses of AI.

Delegates also highlighted growing interest in international cooperation on AI governance. Ninety-one countries endorsed a declaration supporting shared tools, global collaboration and people-centred development of AI.

Language diversity became a central topic of discussion. India’s government announced eight foundation AI models designed to support generative AI across the country’s 22 recognised languages.

The debate also reflected the growing influence of the Global South in AI policy discussions. Policymakers and experts emphasised the infrastructure gaps, language diversity and local economic realities shaping AI adoption.

ECB reports minor impact of AI on employment

AI has so far had only a small effect on employment across Europe, according to economists at the European Central Bank. A comparison of 5,000 firms, both AI users and non-users, showed no significant difference in job creation or reduction.

Firms that use AI intensively were even four percent more likely than average to hire new staff.

Economists noted that AI investment has not replaced existing jobs. In some cases, firms are hiring additional employees to develop and implement AI systems or to scale up operations more efficiently.

Only a minority of firms, around 15 percent, reported reducing labour costs as a motivation for AI adoption.

Despite limited impacts so far, the ECB cautioned that AI could have more significant effects as technology matures. Firms that specifically invest in AI to cut jobs may indeed reduce employment, and the long-term consequences for production processes and labour markets remain uncertain.

The findings come amid rising concern over AI-driven job losses, with companies such as Amazon and Allianz citing AI as a reason for recent cuts. Markets reacted negatively last week after a viral post predicted widespread layoffs, though current evidence shows only minor effects.

Growing risks from AI meeting transcription tools

Businesses across the US and Europe are confronting new privacy risks as AI transcription tools spread through workplaces. Tools that automatically record and transcribe meetings increasingly capture sensitive conversations without clear consent.

Privacy specialists note that organisations previously focused on rules controlling what employees upload into AI systems. Governance efforts are now shifting towards monitoring what AI tools record during daily work.

AI services such as Otter, Zoom transcription and Microsoft Copilot can record discussions involving performance reviews, health information and legal matters. Companies face legal exposure when third-party platforms store such recordings without strict controls.

Governance teams are being urged to introduce clear rules on meeting recordings and retention of transcripts. Stronger policies may include consent requirements, limits on recording sensitive meetings and stricter data storage oversight.
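A governance rule of the kind described, requiring consent from all participants and blocking recording of sensitive meetings, might be sketched as a simple pre-meeting check. All names and topic categories below are hypothetical, for illustration only:

```python
from dataclasses import dataclass, field

# Example sensitive categories drawn from the article's list of risky discussions
SENSITIVE_TOPICS = {"performance_review", "health", "legal"}


@dataclass
class Meeting:
    topic: str
    participants: set = field(default_factory=set)
    consents: set = field(default_factory=set)  # participants who opted in


def may_record(meeting: Meeting) -> bool:
    """Allow recording only if the topic is not sensitive and every
    participant has explicitly consented."""
    if meeting.topic in SENSITIVE_TOPICS:
        return False
    return meeting.consents >= meeting.participants
```

Real deployments would also need retention limits and audit logging, but even a minimal gate like this captures the two rules the article highlights: consent and sensitivity.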

OpenAI tracks how AI shapes student performance over time

AI is increasingly shaping education, offering tools like ChatGPT that provide personalised learning support for students anywhere. Early studies suggest features such as study mode can enhance exam performance, yet understanding AI’s long-term effect on learning remains a challenge.

Traditional research often focuses on test scores, overlooking how students interact with AI over time in real-world settings.

OpenAI, in partnership with Estonia’s University of Tartu and Stanford’s SCALE Initiative, created the Learning Outcomes Measurement Suite to track longitudinal learning outcomes. The framework assesses interactions, engagement, cognitive growth, and alignment with pedagogical principles.

Large-scale trials involve tens of thousands of students, combining AI-driven insights with traditional classroom measures such as exams and observations.

Research shows that guided AI interactions can strengthen understanding, persistence, and problem-solving. Microeconomics students using study mode achieved around 15% higher exam scores than those relying on traditional online resources.

Beyond short-term results, the measurement suite evaluates deeper learning effects, including motivation, metacognition, and productive engagement, helping educators and developers optimise AI tools for meaningful outcomes.

The suite will be validated through ongoing studies and eventually made available to schools, universities, and education systems worldwide. OpenAI aims to share findings broadly to ensure AI contributes effectively to student learning and cognitive development.

Council of Europe issues new guidance on AI and gender equality

Ahead of International Women’s Day on 8 March, the Council of Europe adopted two new recommendations addressing gender equality and the prevention of violence against women in the context of emerging technologies.

One recommendation targets the design and use of AI to prevent discrimination, while the other focuses on accountability for technology-facilitated violence against women and girls.

The AI recommendation advises member states on preventing discrimination throughout the lifecycle of AI systems, from development to deployment and retirement. It highlights risks like gender bias while promoting transparency, explainability, and safeguards.

Special attention is given to discrimination based on gender and race, as well as on sexual orientation, gender identity, gender expression and sex characteristics (SOGIESC).

The second recommendation sets the first international standard for addressing technology-facilitated violence against women. It outlines strategies to overcome impunity, including clearer legal frameworks, accessible reporting systems, and victim-centred approaches.

Emphasis is placed on multistakeholder engagement, trauma-informed policies, and safety-by-design in technology products to prevent digital harm.

Both recommendations reinforce the importance of combining regulation, institutional support, and public awareness to ensure technology advances equality rather than perpetuating harm.

The formal launch is scheduled for 10 June 2026 at the Palais de l’Europe in Strasbourg during an event titled ‘From standards to action: making accountability for technology-facilitated violence against women and girls a reality.’
