Microsoft commits A$25 billion to expand AI and cloud in Australia

Microsoft has announced its largest-ever investment in Australia, committing A$25 billion by the end of 2029 to expand AI and cloud infrastructure, strengthen cyber defence collaboration, and train three million Australians in AI skills by 2028.

The announcement was made alongside Australian Prime Minister Anthony Albanese during Microsoft chief executive Satya Nadella’s visit to Sydney. The company said the investment will expand Azure AI supercomputing and cloud capacity in Australia and increase its local cloud and AI infrastructure footprint by more than 140% by the end of 2029.

The announcement also includes collaboration with the Australian AI Safety Institute, an extension of the Microsoft-Australian Signals Directorate Cyber Shield to additional government agencies, and deeper work on national resilience with the Department of Home Affairs.

Albanese said:

‘We want to make sure all Australians benefit from AI. Our National AI Plan is all about capturing the economic opportunities of this transformative technology while protecting Australians from the risks.’ He added: ‘Microsoft’s long-term investment in our national capability will help deliver on that plan – strengthening our cyber defences and creating opportunity for Australian workers and businesses.’

Nadella added:

‘Australia has an enormous opportunity to translate AI into real economic growth and societal benefit.’ He added: ‘That is why we are making our largest investment in Australia to date, committing A$25 billion to expand AI and cloud capacity, strengthen cybersecurity, and expand access to digital skills across the country.’

Microsoft said the investment is underpinned by a memorandum of understanding with the Australian Government, tied to national expectations for data centre and AI infrastructure developers. It also said it will work with the Australian AI Safety Institute to monitor, test, and evaluate advanced AI systems, including human-AI interaction risks in companion chatbots and conversational AI systems.

Why does it matter?

The scale of the investment links infrastructure, skills, safety, and cyber resilience in a single package aligned with Australia’s AI Action Plan. It also signals that competition over AI capacity is increasingly tied not only to datacentres and compute, but to workforce readiness, regulatory cooperation, and national capability in areas such as cybersecurity and resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI privacy model sets new standard for AI-data protection

The US R&D company OpenAI has introduced the OpenAI Privacy Filter, a specialised AI system designed to detect and redact personally identifiable information in text with high accuracy.

The model is part of broader efforts to strengthen privacy-by-design practices in AI development, offering developers a practical tool to embed data protection directly into workflows rather than relying on external processing systems.

Unlike traditional rule-based systems, the model applies contextual language understanding to identify sensitive information in unstructured text. It processes inputs in a single pass and supports long-context analysis, enabling efficient handling of large documents.

Local deployment further reduces exposure risks, allowing sensitive data to remain on-device rather than being transmitted to external servers.

Performance benchmarks indicate near frontier-level capability, with strong precision and recall scores across standard evaluation datasets.

The system detects multiple categories of private data, including personal identifiers, financial information, and confidential credentials, while allowing developers to fine-tune detection thresholds according to operational requirements.
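The threshold-tuning idea can be illustrated with a minimal sketch. This is a hypothetical rule-based stand-in, not OpenAI's model: in the real system a contextual language model would assign confidence scores, whereas here fixed regex rules and hand-picked scores merely demonstrate how a developer-set threshold governs what gets redacted.

```python
import re

# Hypothetical illustration of threshold-tuned PII redaction.
# Each category carries a fixed confidence score standing in for
# what a contextual model would produce per detection.
PATTERNS = {
    "EMAIL": (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), 0.95),
    "PHONE": (re.compile(r"\+?\d[\d\s-]{7,}\d\b"), 0.70),
    "IBAN":  (re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"), 0.80),
}

def redact(text: str, threshold: float = 0.75) -> str:
    """Replace matches whose confidence meets the operator-set threshold."""
    for label, (pattern, confidence) in PATTERNS.items():
        if confidence >= threshold:
            text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.org or +44 20 7946 0958."))
# → Contact [EMAIL] or +44 20 7946 0958.
```

Lowering the threshold (e.g. `redact(text, threshold=0.6)`) pulls the lower-confidence phone pattern into scope, which is the precision/recall trade-off the tuning knob exposes.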

Despite its capabilities, the model is positioned as one component within a wider privacy framework instead of a standalone compliance solution.

Human oversight remains necessary in high-risk domains such as legal or financial processing.

The release reflects a broader shift towards smaller, specialised AI systems designed to address targeted challenges in real-world deployments while maintaining adaptability and transparency.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK government seeks industry cooperation to strengthen AI-driven cyber resilience

The UK government has called on leading AI companies to collaborate on building advanced cyber defence capabilities, as threats grow in scale and sophistication.

Speaking ahead of CYBERUK, Security Minister Dan Jarvis emphasised that AI-driven security will become a defining challenge, requiring innovation at unprecedented speed and scale.

Government officials warn that AI is already reshaping the threat landscape, with hostile states and criminal groups increasingly deploying automated systems to identify vulnerabilities.

The number of nationally significant cyber incidents handled by authorities more than doubled in 2025, highlighting the urgency of strengthening national resilience.

To address these risks, businesses are being encouraged to sign a voluntary Cyber Resilience Pledge, committing to stronger governance, early warning systems, and supply chain security standards.

Alongside this initiative, the UK government will invest £90 million over the next three years to support cyber defences, particularly for small and medium-sized enterprises.

The strategy forms part of a broader National Cyber Action Plan, reflecting a shift towards integrating AI into national security infrastructure.

Officials argue that effective cooperation between government and industry will be essential to protect critical systems and maintain economic stability in an increasingly automated threat environment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Online safety agreement signed by eSafety and OAIC in Australia

Australia’s eSafety Commissioner and the Office of the Australian Information Commissioner have signed a memorandum of understanding to strengthen cooperation on issues where online safety and privacy intersect.

The agreement formalises communication pathways between the two regulators and builds on existing collaboration. It covers matters including age-assurance requirements under Australia’s online industry codes and standards, as well as compliance by age-restricted platforms with Social Media Minimum Age obligations.

eSafety Commissioner Julie Inman Grant stated: ‘Both regulators have always recognised that combatting certain harms requires privacy and safety to go hand in hand. For example, at eSafety we knew from the outset our implementation of the Social Media Minimum Age would need to recognise important rights, including the right to privacy.’

She added: ‘Our commitment to continue working collaboratively with the OAIC gives formal recognition to that principle and sets out how we will balance and promote privacy and safety for everyone.’

Inman Grant also linked the agreement to emerging risks associated with new technologies and wider regulatory requirements around age assurance. She continued: ‘It comes at an important time, when the proliferation of new technologies like artificial intelligence is amplifying risks and we are increasingly requiring industry to deploy age-assurance technologies that meet their regulatory obligations and respect privacy in the Australian context.’

Australian Information Commissioner Elizabeth Tydd said the memorandum would support the OAIC’s work in monitoring and responding to emerging online privacy risks and help both agencies deliver their statutory functions under the Online Safety Act.

Tydd added: ‘With this memorandum, we’re not only formalising cooperation, but building a foundation where privacy protections and online safety initiatives can better address specific harms side by side, ensuring Australians can be protected when interacting online.’

Why does it matter?

A growing number of online safety measures now depend on systems that also raise privacy questions, especially age-assurance tools and other platform controls involving personal data. The agreement gives both regulators a clearer basis for coordinating oversight as Australia expands enforcement around child safety, platform obligations, and emerging technologies such as AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI governance debate intensifies amid rapid global expansion

Growing concerns over the pace of AI development have prompted renewed calls for stronger regulatory oversight. Geoffrey Hinton, an AI pioneer and Nobel laureate often referred to as the ‘godfather of AI’, has warned that current systems are advancing without adequate control mechanisms.

Speaking at a United Nations-supported conference, he cautioned that the absence of effective safeguards could expose societies to significant systemic risks.

International policy discussions have intensified alongside the rapid expansion of the sector. Estimates from UNCTAD indicate that the global AI market could increase from $189 billion in 2023 to $4.8 trillion by 2033.
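The growth rate implied by those UNCTAD estimates is worth making explicit: moving from $189 billion in 2023 to $4.8 trillion in 2033 corresponds to roughly a 38% compound annual growth rate, as a quick check confirms.

```python
# Implied compound annual growth rate (CAGR) from the UNCTAD estimates:
# $189 billion (2023) growing to $4.8 trillion (2033) over 10 years.
start, end, years = 189e9, 4.8e12, 10
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 38% per year
```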

Despite this growth trajectory, the capacity to develop and govern such technologies remains concentrated within a limited number of jurisdictions and corporate actors. Distributional disparities continue to shape the global AI landscape. 

Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union, highlighted that adoption rates in developed economies significantly outpace those in developing regions. She warned, ‘Left unaddressed, this is a second great divergence – widening the gap between countries shaping artificial intelligence and those merely consuming it’.

Structural gaps in infrastructure, investment, and technical expertise remain central to this imbalance.

Ongoing UN processes are seeking to establish a more coherent governance framework grounded in scientific evidence and multilateral cooperation. 

Maria Ressa, a journalist and Nobel Peace Prize laureate, cautioned that increasingly sophisticated AI systems may facilitate ‘narrative warfare’, contributing to institutional erosion and the spread of disinformation.

Findings from the UN’s scientific panel are expected to inform upcoming global discussions aimed at advancing transparent, accountable, and rights-based AI governance.

Why does it matter? 

The pace and concentration of AI development are beginning to shape economic power, information ecosystems, and institutional stability at a global scale. 

Without coordinated governance, the widening gap between advanced and developing economies risks reinforcing inequality, while misuse of AI systems may weaken trust in democratic processes through disinformation and opaque decision-making.

At the same time, the absence of shared regulatory standards increases systemic uncertainty for governments, businesses, and citizens as AI becomes embedded in essential sectors such as labour markets, education, and public services. 

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNIDIR highlights role of women in AI governance and international security

The United Nations Institute for Disarmament Research highlights the role of women in shaping the digital future, particularly in AI and international security. The organisation stresses the importance of increasing female participation in decision-making.

According to the research institute, women remain underrepresented in AI and related policy spaces, including diplomacy and security forums. This imbalance risks limiting perspectives in global technology governance.

The organisation’s Women in AI Fellowship programme aims to address this gap by providing training and expertise to women diplomats. Participants gain knowledge across technical, legal and policy aspects of AI.

The institute positions inclusion as essential to effective AI governance and security policy, emphasising the need for diverse voices in shaping digital futures globally.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ukraine highlights AI strategic shifts

The National Security and Defense Council of Ukraine has published an overview of global AI developments for March 2026, highlighting a shift towards infrastructure and strategic realignment. The report is part of its ‘AI Frontiers’ analytical series.

According to the Council, growing investment and expansion of data centres to fuel AI demands are increasing pressure on energy resources. This is creating new competition not only for computing power but also for energy stability.

The analysis also points to intensifying competition between the US, China and the European Union, extending beyond AI models to supply chains, semiconductors and infrastructure. At the same time, AI is becoming more integrated into defence, cyberspace and information operations.

The Council highlights rising risks linked to disinformation, synthetic content and legal challenges, alongside growing demand for clearer regulation and content labelling as AI adoption expands in Ukraine.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK’s National Cyber Security Centre chief warns of ‘perfect storm’ for UK cybersecurity

Dr Richard Horne, chief executive of the UK’s National Cyber Security Centre, has described the country as facing a ‘perfect storm’ for cybersecurity.

Speaking at the CYBERUK conference in Glasgow, Horne described developments in AI and wider international tensions as creating a period of ‘tumultuous uncertainty’. He added that the definition of cybersecurity is expanding as technology becomes more deeply embedded in robotics, autonomous systems, and human-integrated technologies.

Horne called for what he described as a ‘cultural shift’ across organisations, adding: ‘cybersecurity is the responsibility of everyone, whether they sit on the Board or the IT help desk… cybersecurity is part of their mission.’

He also argued: ‘organisations that do not focus on their technology base…as core to their prosperity … are no longer just naïve but are failing to grasp the reality of today’s world.’

On the threat landscape, Horne noted that incident numbers remain ‘fairly steady’, but that the source of attacks has shifted: ‘the majority of the nationally significant incidents that the NCSC is handling now originate directly or indirectly from nation states.’

He also described cyberspace as part of the contested space ‘between peace and war’ and warned that the UK is seeing Russia apply lessons learned during its invasion of Ukraine beyond the battlefield. In that context, he argued that recent conflicts show ‘cyber operations are now integral to conflict’ and that ‘cybersecurity is the home front’.

Addressing frontier AI, Horne said: ‘Frontier AI is rapidly enabling discovery and exploitation of existing vulnerabilities at scale, illustrating how quickly it will expose where fundamentals of cybersecurity are still to be addressed.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hong Kong advances digital corporate identity to transform business operations

Hong Kong has accelerated development of its Digital Corporate Identity (CorpID) platform, positioning it as a central pillar of the territory’s digital economy strategy.

Backed by a $300 million public investment approved in 2024, the system is designed to provide corporations with a secure, standardised and scalable digital identity, enabling seamless interaction with both government and private sector services instead of fragmented administrative processes.

The platform builds on the success of ‘iAM Smart’, extending digital identity capabilities from individuals to corporations. With more than 4.3 million users already accessing over 1,400 services through the personal system, authorities aim to replicate and expand the model for businesses.

CorpID will enable companies to authenticate their identity digitally, authorise representatives, and access services through a unified interface, reducing duplication and significantly improving operational efficiency.

At its core, the platform introduces a set of integrated functions intended to modernise corporate workflows.

Digital authentication replaces traditional document submission, allowing real-time verification through direct integration with official databases. Digital signing, supported by legally recognised certificates, serves as a secure alternative to company chops and handwritten signatures, enabling faster and more reliable transactions.

A document wallet will store verifiable licences and certificates, while automated form pre-filling reduces administrative burden by reusing existing data across applications.

The inclusion of an AI assistant reflects a broader shift towards intelligent public services. The system will provide instant responses to corporate queries and deliver personalised recommendations, including access to funding schemes, regulatory guidance and industry support programmes.

The approach aims to improve user experience while encouraging small and medium-sized enterprises to adopt digital tools and expand their capabilities.

Security and trust are central to the platform’s design. The system incorporates multi-layered protection measures, including public key infrastructure, advanced encryption standards and blockchain-based verification to prevent data tampering.
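The tamper-evidence property behind blockchain-style verification can be sketched in a few lines. This is a generic hash-chain illustration under assumed record names, not CorpID's actual design: each block commits to the hash of the previous one, so editing any stored record invalidates every subsequent hash.

```python
import hashlib
import json

def make_chain(records):
    """Build a hash chain: each block commits to the previous block's hash."""
    blocks, prev_hash = [], "0" * 64
    for record in records:
        block = {"record": record, "prev": prev_hash}
        prev_hash = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        block["hash"] = prev_hash  # stored hash covers record + prev link
        blocks.append(block)
    return blocks

def verify(blocks):
    """Recompute every hash; any edited record breaks the chain."""
    prev_hash = "0" * 64
    for block in blocks:
        expected = hashlib.sha256(
            json.dumps({"record": block["record"], "prev": prev_hash},
                       sort_keys=True).encode()
        ).hexdigest()
        if expected != block["hash"]:
            return False
        prev_hash = expected
    return True

# Hypothetical corporate documents, purely for illustration.
licences = make_chain(["business licence", "export permit", "tax clearance"])
assert verify(licences)
licences[1]["record"] = "forged permit"  # tamper with one entry
assert not verify(licences)              # chain verification now fails
```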

Strict compliance with privacy regulations and cybersecurity requirements ensures that corporate data remains protected, while continuous monitoring, audits and red team testing will reinforce resilience against emerging threats.

Integration with existing government systems also enables reliable identity verification and reduces the risk of fraud.

Beyond domestic efficiency, the platform is designed to strengthen Hong Kong’s position in global and regional markets.

Authorities are actively exploring interoperability with mainland China and international systems, incorporating widely recognised identifiers such as Legal Entity Identifiers and D-U-N-S numbers.

The initial rollout will connect approximately 200 services across sectors such as taxation, trade, logistics, finance and licensing. Government departments will be required to integrate their corporate services within 18 months of the platform’s launch, ensuring rapid adoption.

Collaboration with financial institutions, technology hubs and industry organisations is also expected to drive business-to-business applications, supported by sandbox testing environments that allow companies to develop and refine use cases before full deployment.

Development has now entered its final phase, with system integration and testing scheduled for mid-2026. The official launch is planned for the end of the year, followed by a gradual expansion of services and capabilities.

By 2028, all corporate-related government services are expected to support the platform, marking a significant step towards a fully digital business environment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ILO sets first global framework for AI use in manufacturing sector

The International Labour Organization (ILO) has adopted its first-ever tripartite conclusions on AI in manufacturing, marking a significant policy step in addressing the sector’s digital transformation.

Agreed following a five-day technical meeting in Geneva, the framework brings together governments, employers and workers to shape how AI is integrated into one of the world’s largest employment sectors.

These ILO conclusions respond to the growing impact of AI on manufacturing, which employs nearly 500 million people globally.

Rather than focusing solely on productivity gains, the framework emphasises the need to align technological adoption with labour standards, ensuring that innovation supports decent work, strengthens enterprises and contributes to inclusive economic growth.

Key provisions address skills development, lifelong learning and occupational safety, alongside the protection of fundamental rights at work.

The framework also highlights the importance of social dialogue, recognising that collaboration between stakeholders is essential to managing AI-driven change and mitigating potential disruptions to employment and working conditions.

The agreement reflects a broader effort to balance efficiency with worker protection, rejecting the notion that productivity and labour rights are competing priorities.

Instead, it positions AI as a tool that, if properly governed, can enhance both economic performance and job quality within the manufacturing sector.

The conclusions will be submitted to the ILO Governing Body in November 2026 for formal approval, with the intention of guiding national policies and international approaches to AI deployment in industry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!