Frontier AI changes cyber risk calculations, New Zealand warns

New Zealand’s National Cyber Security Centre has warned that frontier AI models are likely to change the cyber threat landscape by increasing malicious actors’ ability to discover and exploit software vulnerabilities at greater speed and scale.

The guidance states that frontier AI models have already demonstrated the ability to identify vulnerabilities in software products. At the same time, it notes that defenders should consider where AI can support their own work, including checking in-house code for vulnerabilities and strengthening software before it is deployed into production.

The guidance also refers to a recent Anthropic report on Mythos Preview, described as an agentic model capable of autonomously completing a series of tasks. According to the NCSC, Anthropic says the model can identify zero-day vulnerabilities in code and turn them into working exploits.

At the same time, the NCSC stresses that effective security controls remain the best line of defence as new vulnerabilities continue to be discovered. It recommends that organisations review their security posture to ensure it remains fit for purpose, and that appropriate methods to detect and contain malicious activity are in place across networks.

Senior leaders are urged to review how vulnerabilities are identified and managed, including patching, disclosure, supplier assurance, incident response, and protections for critical systems. For developers, the guidance recommends using frontier AI models cautiously in code reviews, patching frequently, reducing attack surfaces, applying defence-in-depth, and monitoring closely for signs of compromise.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

World Health Organization collaboration with Kazakhstan marks new phase in global health and AI

Kazakhstan and the World Health Organization (WHO) have held high-level talks to expand cooperation in healthcare, climate-related health risks, and digital transformation. Discussions also covered the growing role of AI in strengthening healthcare systems and improving public health outcomes.

President Kassym-Jomart Tokayev said cooperation with WHO had entered a new stage, reflecting wider efforts to modernise the country’s health system. WHO Director-General Tedros Adhanom Ghebreyesus welcomed Kazakhstan’s engagement and also recognised its broader reforms in governance, environmental protection, and regional water security.

A key outcome of the wider cooperation agenda was WHO's confirmation that Kazakhstan has reached Level 3 maturity in pharmaceutical regulation. The designation makes Kazakhstan the first country in Central Asia to achieve that level for the regulation of medicines and imported vaccines, marking an important step in the development of its health governance capacity.

Why does it matter?

The partnership signals Kazakhstan’s stronger integration into global health governance, particularly through recognised pharmaceutical regulatory standards. Achieving WHO Level 3 maturity strengthens trust in its drug safety system, which can improve access to medicines and international cooperation.

The added focus on digital health and AI also reflects a broader shift toward more modern, data-driven healthcare systems that could influence regional health policy development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

UK embraces 6 frontier technologies to drive digital growth

The UK government has identified six frontier technologies as central to strengthening digital capability, economic growth, and long-term competitiveness.

Outlined in the 2025 Modern Industrial Strategy and Digital and Technologies Sector Plan, the approach prioritises AI, cybersecurity, advanced connectivity, engineering biology, quantum technologies, and semiconductors as pillars of national resilience and technological sovereignty.

Advanced connectivity and AI remain core drivers of digital transformation. Investment in next-generation telecoms, including 5G and future 6G development, is supported through public funding and infrastructure initiatives, while AI continues to expand rapidly through commitments to compute capacity, national supercomputing infrastructure, and workforce development. The strategy positions the UK as aiming to strengthen its role as a leading European AI hub.

Cybersecurity, engineering biology, and quantum technologies reflect a broader strategy linking innovation with security, resilience, and sustainability. Government-backed programmes are intended to support commercialisation, strengthen secure-by-design systems, and accelerate growth in emerging areas such as bio-based manufacturing. Quantum technologies are also being positioned for longer-term use across sectors, including healthcare, defence, and finance.

Semiconductors complete the strategy as a foundational technology underpinning modern digital systems. Rather than focusing on large-scale manufacturing, the UK is prioritising areas such as design, photonics, compound semiconductors, and specialised materials, backed by targeted funding and institutional support.

Across all six areas, the strategy reflects a wider effort to align innovation policy with economic security, global competitiveness, and more resilient supply chains.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Austria hosts the first Google data centre in the Alpine region

Google has announced its first data centre investment in Austria, marking an expansion of digital infrastructure in the Alpine region.

The facility, to be built in Kronstorf, is expected to create around 100 direct jobs while supporting growing demand for cloud services and AI capabilities across Europe.

The investment reflects a broader push to strengthen Europe’s digital competitiveness through infrastructure linked to AI-driven growth. By expanding its network capacity, Google says it aims to enhance the performance, reliability, and scalability of its services, helping regional economies remain connected to global digital ecosystems.

Sustainability is a central part of the project. The data centre will incorporate measures such as renewable energy integration, heat recovery systems, and water quality initiatives linked to the nearby Enns River.

These efforts align with wider industry trends towards greener data infrastructure and lower environmental impact.

Alongside infrastructure development, Google is also investing in workforce skills through partnerships with local institutions, including the University of Applied Sciences Upper Austria.

Building on previous training initiatives that have reached more than 140,000 people, the programme aims to equip workers with skills relevant to an AI-driven economy, reinforcing the link between digital infrastructure and human capital development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MIT method tackles AI overconfidence problem

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new training approach designed to address a persistent issue in AI systems: excessive confidence in uncertain answers.

The study identifies overconfidence as a by-product of standard reinforcement learning methods, which reward correct outputs without accounting for how those answers are reached.

The proposed method, known as RLCR (Reinforcement Learning with Calibration Rewards), enables models to generate both answers and associated confidence estimates.

By introducing a calibration-based reward mechanism, the system penalises incorrect high-confidence responses and unnecessary uncertainty in correct ones. Across multiple benchmarks, the approach reduced calibration error by up to 90 percent while maintaining or improving accuracy.
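A reward of this kind can be sketched with a Brier-score penalty: the model is rewarded for correctness, then penalised by the squared gap between its stated confidence and the actual outcome. The exact reward used in RLCR is not specified here, so this is an illustrative assumption rather than the paper's formula:

```python
def calibration_reward(correct: bool, confidence: float) -> float:
    """Illustrative RLCR-style reward: reward correctness, penalise
    miscalibration via a Brier-score term (squared gap between the
    stated confidence and the actual outcome)."""
    outcome = 1.0 if correct else 0.0
    accuracy_term = outcome                      # standard RL reward for a correct answer
    brier_penalty = (confidence - outcome) ** 2  # 0 when confidence matches the outcome
    return accuracy_term - brier_penalty

# A confidently wrong answer scores worse than a hedged wrong one:
# calibration_reward(False, 0.95) -> -0.9025
# calibration_reward(False, 0.30) -> -0.09
```

Under this shape, the highest reward goes to answers that are both correct and confidently stated, while confident errors are punished most, which is exactly the incentive structure the article describes.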

Findings suggest that conventional reinforcement learning frameworks unintentionally encourage models to guess confidently, even in the absence of sufficient evidence.

Researchers argue that this behaviour poses risks in applied settings, particularly in sectors such as healthcare, law, and finance, where users may rely heavily on perceived certainty in AI outputs.

Results also indicate that improved confidence calibration enhances practical performance during inference. Selecting answers based on model-reported confidence improves accuracy, suggesting uncertainty-aware reasoning can deliver measurable benefits in deployment.
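Confidence-based selection at inference time can be sketched as follows; the (answer, confidence) pair format is a hypothetical illustration of how model-reported confidence might be surfaced:

```python
def select_by_confidence(samples):
    """Pick the answer whose model-reported confidence is highest.

    `samples` is a list of (answer, confidence) pairs, e.g. several
    sampled responses to the same query."""
    return max(samples, key=lambda s: s[1])[0]

samples = [("Paris", 0.92), ("Lyon", 0.40), ("Paris", 0.88)]
# select_by_confidence(samples) -> "Paris"
```

The same confidence signal could equally be used to filter out low-confidence answers or abstain entirely, which is the deployment benefit the findings point to.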

Why does it matter? 

Improving how AI systems express uncertainty directly affects their reliability in real-world use. Models that distinguish between strong and weak answers reduce the risk of users over-relying on incorrect outputs presented with undue confidence.

Better-calibrated systems also enable more informed decision-making, as confidence signals can be used to filter, rank or combine responses. Overall, uncertainty-aware reasoning strengthens trust, safety and practical performance as AI becomes more widely integrated into critical decision processes.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Australia targets three million learners under AI workforce strategy

Three million people in Australia will be trained in workforce-ready AI skills under Microsoft’s largest AI skilling commitment, set to run through the end of 2028.

The initiative is delivered in partnership with government, industry, education providers and community organisations. It aligns with Australia’s National AI Plan to strengthen national capability and ensure the responsible adoption of emerging technologies.

The programme builds on earlier skilling targets that exceeded expectations, including milestones of 300,000 and one million learners, both achieved ahead of schedule.

It is supported by Microsoft’s broader A$25 billion (USD 18 billion) investment in digital infrastructure, cybersecurity and workforce development, strengthening long-term national AI capability.

Training will focus on three core areas:

  • Future workforce development through education systems;
  • Upskilling of the current workforce;
  • Expanded access for community groups.

Partnerships with institutions such as TAFE NSW, universities, employers and trade organisations are designed to scale practical AI learning, while also addressing productivity pressures and evolving labour market demands.

Community-focused initiatives aim to reduce digital inequality and broaden access to AI skills, particularly among underrepresented groups. Programmes supporting Indigenous-led organisations and social impact groups aim to widen participation in the digital economy and promote inclusive, responsible AI adoption. 

Why does it matter?

The initiative reflects a broader shift towards system-wide AI capability building across education, industry and communities.

Expanding AI skills is intended to support productivity, reduce workforce fragmentation and ensure more balanced access to emerging technologies. It also addresses risks of uneven adoption and widening digital inequality as AI becomes central to economic development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI introduces ChatGPT for Clinicians and HealthBench Professional

OpenAI has launched ChatGPT for Clinicians, a version of ChatGPT designed to support clinical tasks such as documentation, medical research, evidence review, and care consults. The company says the product is now available free to verified physicians, nurse practitioners, physician associates, and pharmacists in the United States.

According to OpenAI, ChatGPT for Clinicians includes trusted clinical search with cited answers, reusable skills for repeatable workflows, deep research across medical literature, optional HIPAA support through a Business Associate Agreement for eligible accounts, and the ability for eligible evidence review to count towards continuing medical education credits. OpenAI also says conversations in the product are not used to train models.

The launch builds on OpenAI’s earlier ChatGPT for Healthcare offering for organisations. OpenAI says clinicians across US health systems are already using that product for administrative work such as medical research and documentation, and describes the free clinician version as the next step in expanding access.

Alongside the launch, OpenAI has introduced HealthBench Professional, which it describes as an open benchmark for real-world clinician chat tasks across care consultation, writing, documentation, and medical research. The company says the benchmark is based on physician-authored conversations, multi-stage physician adjudication, and filtered examples selected for quality, representativeness, and difficulty.

OpenAI also says physician advisers reviewed more than 700,000 model responses in health scenarios, and that before release, clinicians tested 6,924 conversations across clinical care, documentation, and research.

According to the company, physicians rated 99.6% of those responses as safe and accurate, while GPT-5.4 in the ChatGPT for Clinicians workspace outperformed base GPT-5.4, other OpenAI and external models, and human physicians on HealthBench Professional. OpenAI adds that the tool is designed to support clinicians with information rather than replace their judgement or expertise.

The company says the free version is currently limited to verified US clinicians, with plans to expand access to additional countries and groups over time. OpenAI also says it will begin by working with the Better Evidence Network to pilot access for verified clinicians outside the United States, subject to local regulations, and has released a Health Blueprint with recommendations for responsible AI integration in US healthcare.

Why does it matter?

The launch of ChatGPT for Clinicians reflects a shift from general-purpose AI use in healthcare towards clinician-specific products tied to workflow, benchmarking, and compliance. It also shows that competition in medical AI is increasingly centred not only on model capability, but on safety evaluation, evidence retrieval, privacy controls, and integration into real clinical practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft commits A$25 billion to expand AI and cloud in Australia

Microsoft has announced its largest-ever investment in Australia, committing A$25 billion by the end of 2029 to expand AI and cloud infrastructure, strengthen cyber defence collaboration, and train three million Australians in AI skills by 2028.

The announcement was made alongside Australian Prime Minister Anthony Albanese during Microsoft chief executive Satya Nadella’s visit to Sydney. The company said the investment will expand Azure AI supercomputing and cloud capacity in Australia and increase its local cloud and AI infrastructure footprint by more than 140% by the end of 2029.

The announcement also includes collaboration with the Australian AI Safety Institute, an extension of the Microsoft-Australian Signals Directorate Cyber Shield to additional government agencies, and deeper work on national resilience with the Department of Home Affairs.

Albanese said:

‘We want to make sure all Australians benefit from AI. Our National AI Plan is all about capturing the economic opportunities of this transformative technology while protecting Australians from the risks.’ He added: ‘Microsoft’s long-term investment in our national capability will help deliver on that plan – strengthening our cyber defences and creating opportunity for Australian workers and businesses.’

Nadella added:

‘Australia has an enormous opportunity to translate AI into real economic growth and societal benefit.’ He added: ‘That is why we are making our largest investment in Australia to date, committing A$25 billion to expand AI and cloud capacity, strengthen cybersecurity, and expand access to digital skills across the country.’

Microsoft said the investment is underpinned by a memorandum of understanding with the Australian Government, tied to national expectations for data centre and AI infrastructure developers. It also said it will work with the Australian AI Safety Institute to monitor, test, and evaluate advanced AI systems, including human-AI interaction risks in companion chatbots and conversational AI systems.

Why does it matter?

The scale of the investment links infrastructure, skills, safety, and cyber resilience in a single package aligned with Australia’s AI Action Plan. It also signals that competition over AI capacity is increasingly tied not only to datacentres and compute, but to workforce readiness, regulatory cooperation, and national capability in areas such as cybersecurity and resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI privacy model sets new standard for AI-data protection

OpenAI, the US AI research company, has introduced the OpenAI Privacy Filter, a specialised AI system designed to detect and redact personally identifiable information (PII) in text with high accuracy.

The model is part of broader efforts to strengthen privacy-by-design practices in AI development, offering developers a practical tool to embed data protection directly into workflows rather than relying on external processing systems.

Unlike traditional rule-based systems, the model applies contextual language understanding to identify sensitive information in unstructured text. It processes inputs in a single pass and supports long-context analysis, enabling efficient handling of large documents.
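For contrast, a traditional rule-based redactor of the kind the model is distinguished from might look like this; the patterns are illustrative, not exhaustive, and such systems miss context-dependent identifiers that contextual models can catch:

```python
import re

# Illustrative rule-based PII patterns; real systems need far broader
# coverage. Order matters: the more specific SSN pattern runs before
# the looser phone pattern so it is not consumed first.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace each matched span with a category placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# redact("Email jane.doe@example.com or call 555-010-1234")
# -> "Email [EMAIL] or call [PHONE]"
```

Patterns like these fail on anything they were not written for (names, addresses, novel identifier formats), which is the gap that contextual language understanding is intended to close.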

Local deployment further reduces exposure risks, allowing sensitive data to remain on-device rather than being transmitted to external servers.

Performance benchmarks indicate near frontier-level capability, with strong precision and recall scores across standard evaluation datasets.

The system detects multiple categories of private data, including personal identifiers, financial information, and confidential credentials, while allowing developers to fine-tune detection thresholds according to operational requirements.

Despite its capabilities, the model is positioned as one component within a wider privacy framework instead of a standalone compliance solution.

Human oversight remains necessary in high-risk domains such as legal or financial processing.

The release reflects a shift towards smaller, specialised AI systems designed to address targeted challenges in real-world deployments while maintaining adaptability and transparency.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK government seeks industry cooperation to strengthen AI-driven cyber resilience

The UK government has called on leading AI companies to collaborate on building advanced cyber defence capabilities, as threats grow in scale and sophistication.

Speaking ahead of CYBERUK, Security Minister Dan Jarvis emphasised that AI-driven security will become a defining challenge, requiring innovation at unprecedented speed and scale.

Government officials warn that AI is already reshaping the threat landscape, with hostile states and criminal groups increasingly deploying automated systems to identify vulnerabilities.

The number of nationally significant cyber incidents handled by authorities more than doubled in 2025, highlighting the urgency of strengthening national resilience.

To address these risks, businesses are being encouraged to sign a voluntary Cyber Resilience Pledge, committing to stronger governance, early warning systems, and supply chain security standards.

Alongside this initiative, the UK government will invest £90 million over the next three years to support cyber defences, particularly for small and medium-sized enterprises.

The strategy forms part of a broader National Cyber Action Plan, reflecting a shift towards integrating AI into national security infrastructure.

Officials argue that effective cooperation between government and industry will be essential to protect critical systems and maintain economic stability in an increasingly automated threat environment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!