World Health Organization launches AI tool for reproductive health information

The World Health Organization and partners have launched ChatHRP, an AI-assisted tool designed to provide fast access to verified information on sexual and reproductive health and rights.

Developed under the HRP research programme, the system is aimed at supporting evidence-based decision-making in a field where misinformation remains a persistent challenge.

ChatHRP uses advanced natural language processing and retrieval-based AI to deliver referenced answers drawn exclusively from WHO and HRP materials.
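ChatHRP's actual architecture is not public, but the retrieval step it describes can be illustrated with a minimal sketch: score a small corpus of guidance snippets against a query and return the best match together with its source reference. The snippets and reference names below are hypothetical placeholders, not real WHO text.

```python
# Minimal retrieval sketch: rank snippets by token overlap with the query
# and return the top match with its reference. Illustrative only; the
# corpus entries and reference labels are made up for this example.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus):
    """Return the corpus entry sharing the most tokens with the query."""
    q = tokenize(query)
    return max(corpus, key=lambda doc: len(q & tokenize(doc["text"])))

corpus = [
    {"text": "iron and folic acid supplementation during pregnancy",
     "ref": "Hypothetical guideline A"},
    {"text": "emergency contraception methods and effectiveness",
     "ref": "Hypothetical guideline B"},
]

best = retrieve("which contraception methods are recommended", corpus)
print(best["ref"])  # the contraception snippet wins on overlap
```

Production systems typically replace the token-overlap score with embedding similarity, but the pattern of answering only from an indexed, citable corpus is the same.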

The tool is designed for policy-makers, researchers, health workers and civil society organisations, helping them quickly navigate complex scientific and policy information without having to piece together fragmented sources.

Built for global accessibility, the platform includes multilingual functionality and low-bandwidth optimisation to ensure usability in resource-limited settings. Its structure prioritises accuracy and transparency, with responses linked directly to validated research and guidance that is regularly updated.

The beta phase focuses on professional use cases, where users can query topics such as maternal health, contraception and disease management.

Why does it matter?

The initiative directly improves access to reliable, evidence-based health information in a field where misinformation can influence policy and health outcomes. By centralising verified sources and reducing reliance on fragmented or unverified material, it supports faster, more consistent decision-making across healthcare, research and policy environments globally.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Wikipedia-based AI model identifies 100 emerging technologies to watch in 2026

A new analysis by Australian researchers reveals how AI is reshaping the way emerging technologies are identified and tracked.

Using a dataset derived from thousands of Wikipedia entries, the researchers mapped more than 23,000 technologies to produce the ‘Momentum 100’ list, highlighting the fastest-growing technologies across science and industry.

The findings place reinforcement learning at the top, followed closely by blockchain and other rapidly advancing fields such as 3D printing, soft robotics and augmented reality.

These technologies reflect a broader shift towards data-driven innovation, where systems capable of learning, adapting and scaling are becoming central to both research and commercial applications.

Unlike traditional forecasts, which often rely on expert judgement, the model uses large-scale data analysis to detect patterns of growth and interconnection between technologies.

The approach offers a more dynamic and repeatable method, capturing early signals that might otherwise be overlooked in manual assessments.
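The article does not specify the exact momentum metric, but the idea of ranking technologies by growth signals rather than absolute size can be sketched simply. Here momentum is taken as relative growth over the observation window; the activity series are invented for illustration.

```python
# Sketch of a "momentum" ranking over per-technology activity counts
# (e.g., yearly Wikipedia edits or inbound links). The real Momentum 100
# metric is not described in the article; this uses relative growth,
# which favours fast-growing topics over merely large ones.

def momentum(counts):
    """Relative growth from the first to the last observation."""
    return (counts[-1] - counts[0]) / counts[0]

# Illustrative, made-up activity series.
activity = {
    "reinforcement learning": [120, 260, 540],   # 3.5x growth
    "blockchain": [800, 1300, 2100],             # ~1.6x growth
    "soft robotics": [40, 70, 95],               # ~1.4x growth
}

ranked = sorted(activity, key=lambda t: momentum(activity[t]), reverse=True)
print(ranked)  # reinforcement learning ranks first despite smaller totals
```

A ranking like this surfaces early-stage fields that absolute counts would bury, which is the advantage the researchers claim over expert-judgement forecasts.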

Despite its advantages, researchers caution that predicting real-world impact remains difficult at early stages.

While AI-driven mapping provides valuable insights, policymakers and industry leaders still rely on hybrid approaches that combine data analysis with expert evaluation, as seen in frameworks developed by organisations such as the World Economic Forum.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5.5 pushes AI deeper into agentic work

OpenAI has released GPT-5.5 as its latest push towards more capable agentic AI, presenting the model as better suited to complex, multi-step digital work across coding, research, analysis, and enterprise tasks.

The company frames it as a system designed to carry more of the work itself, moving beyond isolated prompt-response interactions towards fuller execution across digital workflows.

According to OpenAI, the model’s biggest gains are in software engineering, tool use, and knowledge work. GPT-5.5 improves performance on coding and workflow benchmarks, strengthens long-horizon reasoning, and handles complex digital tasks with greater efficiency while maintaining earlier latency standards.

OpenAI also says the model performs better across documents, spreadsheets, presentations, and data analysis, reflecting a broader effort to make AI more useful across full professional workflows rather than only as an assistant for isolated tasks.

The release also highlights stronger performance in scientific and technical research, alongside expanded safety testing and tighter safeguards for higher-risk capabilities.

The wider significance of GPT-5.5 lies in its reflection of the next phase of AI competition. The focus is shifting from better answers to more reliable execution across real-world digital work, with growing implications for productivity, oversight, and governance.

Why does it matter? 

GPT-5.5 signals a shift from AI as a passive tool to AI as an active digital operator that can complete full workflows across coding, research, and business systems with minimal human supervision.

Over time, such capability could reshape productivity, speed up development cycles, and shift competitive advantage toward those best integrating autonomous AI while managing safety and governance risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Meta expands parental oversight with new AI conversation insights for teens

Meta has introduced new supervision features that allow parents to see the topics their teenagers discuss with its AI assistant across Facebook, Messenger, and Instagram.

The update provides visibility into activity over the previous seven days, grouping interactions into areas such as education, health and well-being, lifestyle, travel, and entertainment. Parents can review these themes through a new Insights tab, although they will not see the exact prompts their teen sent or Meta AI’s responses.

The feature forms part of Meta’s broader effort to strengthen safeguards for younger users as AI becomes more embedded in everyday digital experiences. For more sensitive issues, including suicide and self-harm, Meta says it is developing additional alerts to notify parents when teens try to engage in those types of conversations with its AI assistant.

Meta has also partnered with external experts, including the Cyberbullying Research Center, to develop structured conversation prompts to help families talk about AI use. The company says these tools are intended to support informed, non-judgemental dialogue rather than passive monitoring.

Alongside these updates, Meta has created an AI Wellbeing Expert Council to provide input on the development of age-appropriate AI systems for teens. The move reflects a wider shift towards embedding safety, transparency, and parental involvement into AI-driven platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Frontier AI changes cyber risk calculations, New Zealand warns

New Zealand’s National Cyber Security Centre has warned that frontier AI models are likely to change the cyber threat landscape by increasing malicious actors’ ability to discover and exploit software vulnerabilities at greater speed and scale.

The guidance states that frontier AI models have already demonstrated the ability to identify vulnerabilities in software products. At the same time, it notes that defenders should consider where AI can support their own work, including checking in-house code for vulnerabilities and strengthening software before it is deployed into production.

The guidance also refers to a recent Anthropic report on Mythos Preview, describing it as an agentic model capable of autonomously completing a series of tasks. According to the NCSC, Anthropic says the model can identify zero-day vulnerabilities in code and turn them into working exploits.

At the same time, the NCSC stresses that effective security controls remain the best line of defence as new vulnerabilities continue to be discovered. It recommends that organisations review their security posture to ensure it remains fit for purpose, and that appropriate methods to detect and contain malicious activity are in place across networks.

Senior leaders are urged to review how vulnerabilities are identified and managed, including patching, disclosure, supplier assurance, incident response, and protections for critical systems. For developers, the guidance recommends using frontier AI models cautiously in code reviews, patching frequently, reducing attack surfaces, applying defence-in-depth, and monitoring closely for signs of compromise.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

World Health Organization collaboration with Kazakhstan marks new phase in global health and AI

Kazakhstan and the World Health Organization have held high-level talks to expand cooperation in healthcare, climate-related health risks, and digital transformation. Discussions also covered the growing role of AI in strengthening healthcare systems and improving public health outcomes.

President Kassym-Jomart Tokayev said cooperation with WHO had entered a new stage, reflecting wider efforts to modernise the country’s health system. WHO Director-General Tedros Adhanom Ghebreyesus welcomed Kazakhstan’s engagement and also recognised its broader reforms in governance, environmental protection, and regional water security.

A key outcome of the wider cooperation agenda was the WHO confirmation that Kazakhstan has reached Level 3 maturity in pharmaceutical regulation. The designation makes Kazakhstan the first country in Central Asia to achieve that level for the regulation of medicines and imported vaccines, marking an important step in the development of its health governance capacity.

The development matters because stronger regulatory recognition can improve confidence in the country’s medicines oversight system and support deeper international cooperation.

Why does it matter?

The partnership signals Kazakhstan’s stronger integration into global health governance, particularly through recognised pharmaceutical regulatory standards. Achieving WHO Level 3 maturity strengthens trust in its drug safety system, which can improve access to medicines and international cooperation.

The added focus on digital health and AI also reflects a broader shift toward more modern, data-driven healthcare systems that could influence regional health policy development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

UK embraces 6 frontier technologies to drive digital growth

The UK government has identified six frontier technologies as central to strengthening digital capability, economic growth, and long-term competitiveness.

Outlined in the 2025 Modern Industrial Strategy and Digital and Technologies Sector Plan, the approach prioritises AI, cybersecurity, advanced connectivity, engineering biology, quantum technologies, and semiconductors as pillars of national resilience and technological sovereignty.

Advanced connectivity and AI remain core drivers of digital transformation. Investment in next-generation telecoms, including 5G and future 6G development, is supported through public funding and infrastructure initiatives, while AI continues to expand rapidly through commitments to compute capacity, national supercomputing infrastructure, and workforce development. The strategy positions the UK to strengthen its role as a leading European AI hub.

Cybersecurity, engineering biology, and quantum technologies reflect a broader strategy linking innovation with security, resilience, and sustainability. Government-backed programmes are intended to support commercialisation, strengthen secure-by-design systems, and accelerate growth in emerging areas such as bio-based manufacturing. Quantum technologies are also being positioned for longer-term use across sectors, including healthcare, defence, and finance.

Semiconductors complete the strategy as a foundational technology underpinning modern digital systems. Rather than focusing on large-scale manufacturing, the UK is prioritising areas such as design, photonics, compound semiconductors, and specialised materials, backed by targeted funding and institutional support.

Across all six areas, the strategy reflects a wider effort to align innovation policy with economic security, global competitiveness, and more resilient supply chains.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Austria hosts the first Google data centre in the Alpine region

Google has announced its first data centre investment in Austria, marking an expansion of digital infrastructure in the Alpine region.

The facility, to be built in Kronstorf, is expected to create around 100 direct jobs while supporting growing demand for cloud services and AI capabilities across Europe.

The investment reflects a broader push to strengthen Europe’s digital competitiveness through infrastructure linked to AI-driven growth. By expanding its network capacity, Google says it aims to enhance the performance, reliability, and scalability of its services, helping regional economies remain connected to global digital ecosystems.

Sustainability is a central part of the project. The data centre will incorporate measures such as renewable energy integration, heat recovery systems, and water quality initiatives linked to the nearby Enns River.

These efforts align with wider industry trends towards greener data infrastructure and lower environmental impact.

Alongside infrastructure development, Google is also investing in workforce skills through partnerships with local institutions, including the University of Applied Sciences Upper Austria.

Building on previous training initiatives that have reached more than 140,000 people, the programme aims to equip workers with skills relevant to an AI-driven economy, reinforcing the link between digital infrastructure and human capital development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MIT method tackles AI overconfidence problem

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new training approach designed to address a persistent issue in AI systems: excessive confidence in uncertain answers.

The study identifies overconfidence as a by-product of standard reinforcement learning methods, which reward correct outputs without accounting for how those answers are reached.

The proposed method, known as RLCR (Reinforcement Learning with Calibration Rewards), enables models to generate both answers and associated confidence estimates.

By introducing a calibration-based reward mechanism, the system penalises incorrect high-confidence responses and unnecessary uncertainty in correct ones. Across multiple benchmarks, the approach reduced calibration error by up to 90 percent while maintaining or improving accuracy.
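A reward of this shape can be sketched in a few lines. The version below combines a correctness bonus with a Brier-style penalty on the gap between stated confidence and the true outcome; the exact RLCR formulation may differ, so treat this as illustrative.

```python
# Sketch of a calibration-aware reward in the spirit of RLCR: the model
# emits an answer plus a confidence p in [0, 1], and the reward is a
# correctness term minus a Brier-style penalty (p - correct)^2.
# Illustrative only; not the paper's exact reward function.

def calibration_reward(correct: bool, confidence: float) -> float:
    c = 1.0 if correct else 0.0
    return c - (confidence - c) ** 2  # penalise miscalibrated confidence

# Confident-and-right earns the most; confident-and-wrong is punished
# hardest; a hedged wrong answer loses far less than a confident one.
print(calibration_reward(True, 0.95))   # ~0.9975
print(calibration_reward(False, 0.95))  # ~-0.9025
print(calibration_reward(False, 0.2))   # ~-0.04
```

Under a reward like this, guessing confidently stops being the dominant strategy: the model only benefits from high confidence when it is actually likely to be right.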

Findings suggest that conventional reinforcement learning frameworks unintentionally encourage models to guess confidently, even in the absence of sufficient evidence.

Researchers argue that this behaviour poses risks in applied settings, particularly in sectors such as healthcare, law, and finance, where users may rely heavily on perceived certainty in AI outputs.

Results also indicate that improved confidence calibration enhances practical performance during inference. Selecting answers based on model-reported confidence improves accuracy, suggesting uncertainty-aware reasoning can deliver measurable benefits in deployment.

Why does it matter? 

Improving how AI systems express uncertainty directly affects their reliability in real-world use. Models that distinguish between strong and weak answers reduce the risk of users over-relying on incorrect outputs presented with undue confidence.

Better-calibrated systems also enable more informed decision-making, as confidence signals can be used to filter, rank or combine responses. Overall, uncertainty-aware reasoning strengthens trust, safety and practical performance as AI becomes more widely integrated into critical decision processes.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Australia targets three million learners under AI workforce strategy

Three million people in Australia will be trained in workforce-ready AI skills under Microsoft’s largest AI skilling commitment, set to run through the end of 2028.

The initiative is delivered in partnership with government, industry, education providers and community organisations. It aligns with Australia’s National AI Plan to strengthen national capability and ensure the responsible adoption of emerging technologies.

The programme builds on earlier skilling targets that exceeded expectations, including milestones of one million and 300,000 learners achieved ahead of schedule.

It is supported by Microsoft’s broader A$25 billion (USD 18 billion) investment in digital infrastructure, cybersecurity and workforce development, strengthening long-term national AI capability.

Training will focus on three core areas:

  • Future workforce development through education systems;
  • Upskilling of the current workforce;
  • Expanded access for community groups.

Partnerships with institutions such as TAFE NSW, universities, employers and trade organisations are designed to scale practical AI learning, while also addressing productivity pressures and evolving labour market demands.

Community-focused initiatives aim to reduce digital inequality and broaden access to AI skills, particularly among underrepresented groups. Programmes supporting Indigenous-led organisations and social impact groups aim to widen participation in the digital economy and promote inclusive, responsible AI adoption. 

Why does it matter?

The initiative reflects a broader shift towards system-wide AI capability building across education, industry and communities.

Expanding AI skills is intended to support productivity, reduce workforce fragmentation and ensure more balanced access to emerging technologies. It also addresses risks of uneven adoption and widening digital inequality as AI becomes central to economic development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!