South Korea reviews AI cyber threat response

The Office of National Security of South Korea held a cybersecurity meeting to review how government agencies are responding to AI-driven cyber threats. The session focused on the growing risks posed by the misuse of advanced AI technologies.

Officials from multiple ministries and agencies, including science, defence and intelligence bodies, attended to coordinate responses. The government warned that the threat of AI-enabled hacking is becoming increasingly realistic as global technology companies release more advanced models.

Authorities have instructed relevant agencies to strengthen cooperation with businesses and institutions and distributed guidance on responding to AI-based security risks. Discussions also covered practical measures to support rapid responses to cybersecurity vulnerabilities across public and private sectors.

The government plans to establish a joint technical response team to improve information sharing and enable immediate action. Officials emphasised that while AI increases cyber risks, it also offers opportunities to strengthen security capabilities in South Korea.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Council of the EU pushes for human-centred AI in education systems

The Council of the European Union has approved conclusions calling for an ethical, safe and human-centred approach to AI in education, stressing that teachers should remain at the heart of the learning process as AI tools become more widely used across schools and universities.

The Council said the conclusions focus on strengthening digital skills and AI literacy, guaranteeing inclusion and fairness, empowering teachers, and supporting the well-being of both teachers and learners. It also noted that the relationship between AI and teaching is being addressed for the first time in EU education policy.

The EU ministers highlighted both the opportunities and risks associated with AI-driven education systems. The Council said AI could improve accessibility, support disadvantaged learners, enable more individualised teaching and assessment methods, and reduce administrative workloads for educators.

At the same time, the conclusions raise concerns about misinformation, algorithmic bias, over-reliance on technology, reduced teacher autonomy, data protection risks and the widening of digital inequalities across Europe. The Council also warned that AI could affect learners’ concentration and skill acquisition, while raising broader societal and environmental concerns.

The conclusions call on national governments to strengthen teachers’ AI and digital skills through training, while encouraging the development and use of education-specific AI tools that provide clear pedagogical value and align with data protection, accountability and risk-awareness requirements.

The Council also said teachers should have opportunities to contribute to the design and evaluation of AI tools used in education, reflecting a digital humanism approach focused on human agency and democratic values.

Member states are urged to ensure AI deployment does not undermine teachers’ autonomy or sustainable working conditions, and that digital tools remain accessible and suitable for all learners. The European Commission was encouraged to support international cooperation, research, ethical guidance, peer-to-peer exchanges and capacity-building as AI adoption accelerates across European education systems.

Why does it matter?

AI is moving into classrooms not only as a learning tool, but as part of how teaching, assessment, administration and student support are organised. The Council’s conclusions underline that education policy will need to address more than technical adoption, including teacher autonomy, digital inequality, learner well-being, data protection and the risk of over-reliance on automated systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Health New Zealand issues guidance on use of generative AI and large language models

Health New Zealand has published new guidance on generative AI and large language models for healthcare settings.

The guidance states that the National Artificial Intelligence and Algorithm Expert Advisory Group evaluates the use of generative AI tools and LLMs and recommends caution in their application across Health New Zealand environments. It notes that further data is needed to assess risks and benefits in the New Zealand health context.

Employees and contractors are prohibited from entering personal, confidential or sensitive patient or organisational information into unapproved LLMs or generative AI tools. The guidance also says such tools must not be used for clinical decisions or personalised patient advice.

Staff using generative AI tools in other contexts must take full responsibility for checking the information generated and acknowledge when generative AI has been used to create content. Anyone planning to use generative AI or LLMs is also asked to seek advice from the advisory group.

The guidance highlights potential risks including privacy breaches, inaccurate or misleading outputs, bias in training data, lack of transparency in model outputs, data sovereignty concerns and intellectual property risks. It also notes that generative AI systems may not adequately support te reo Māori and other minority languages spoken in Aotearoa New Zealand.

Why does it matter?

The guidance shows how health systems are beginning to set practical boundaries for generative AI before its use becomes routine in clinical and administrative settings. By prohibiting unapproved tools for patient data, clinical decisions and personalised advice, Health New Zealand is drawing a clear line between limited productivity uses and high-risk healthcare applications. At the same time, its references to Māori data sovereignty and language support widen the governance frame to include equity, cultural rights and data protection concerns that standard technology policies may not fully address.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

World Economic Forum analysis explores AI-driven future planning for organisations

A World Economic Forum article argues that organisations need to move beyond static reports and analytical forecasts to become more future-ready in an era marked by rapid technological and geopolitical change.

The article highlights FutureSlam, a foresight method that combines participatory scenario-building, AI-supported reflection and improvisational performance to help organisations experience possible futures rather than analyse them. The authors say many organisations already invest in foresight, but struggle to translate insights into operational decisions because those insights often remain confined to strategy teams and slide decks.

The approach integrates human imagination with AI-generated scenarios. Participants first develop scenarios themselves, before comparing them with future images generated by an AI system using the same trend material. The authors argue that this comparison can challenge assumptions, confirm parts of participants’ reasoning and introduce perspectives that human groups may avoid.

FutureSlam then uses improvised performance, including simulated news broadcasts and staged scenarios, to make possible futures more tangible. According to the article, the method is designed to make foresight more inclusive, structured and memorable by turning participants into co-creators rather than passive recipients of expert analysis.

The authors suggest that such approaches could help organisations adapt more effectively to technological, geopolitical and societal change by turning foresight into a shared organisational capability rather than a niche strategic exercise.

Why does it matter?

AI is increasingly being used not only to automate tasks, but also to support strategic thinking, scenario-building and organisational learning. The FutureSlam example points to a broader shift in how organisations may prepare for uncertainty: less focus on predicting precise outcomes, and more focus on building the capacity to test assumptions, imagine alternatives and adapt collectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

World Health Organization launches AI tool for reproductive health information

The World Health Organization and partners have launched ChatHRP, an AI-assisted tool designed to provide fast access to verified information on sexual and reproductive health and rights.

Developed under the HRP research programme, the system is aimed at supporting evidence-based decision-making in a field where misinformation remains a persistent challenge.

ChatHRP uses advanced natural language processing and retrieval-based AI to deliver referenced answers drawn exclusively from WHO and HRP materials.
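The retrieval-based design described above follows a familiar pattern: answers are assembled only from a closed corpus of vetted documents, and every passage returned carries a reference back to its source. The Python sketch below is a minimal illustration of that pattern, assuming a small in-memory corpus with invented source identifiers and a deliberately simple lexical-overlap score; it is not ChatHRP's actual implementation.

```python
# Minimal sketch of closed-corpus, referenced retrieval. Corpus entries,
# source identifiers and the scoring function are illustrative placeholders.
from collections import Counter
import math

# Hypothetical corpus: (source_id, text) pairs standing in for vetted WHO/HRP materials.
CORPUS = [
    ("WHO-guidance-001", "Postpartum haemorrhage is a leading cause of maternal mortality ..."),
    ("HRP-brief-014", "Emergency contraception is safe and does not affect future fertility ..."),
    ("WHO-guidance-027", "Antenatal care contacts should include blood pressure monitoring ..."),
]

def tokenize(text: str) -> list[str]:
    return [t for t in text.lower().split() if t.isalpha()]

def score(query: str, doc: str) -> float:
    """Crude lexical overlap between the query and a document, length-normalised."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    overlap = sum(min(q[t], d[t]) for t in q)
    return overlap / math.sqrt(len(tokenize(doc)) + 1)

def answer(query: str, top_k: int = 2) -> list[dict]:
    """Return the best-matching vetted passages, each tagged with its source reference."""
    ranked = sorted(CORPUS, key=lambda item: score(query, item[1]), reverse=True)
    return [{"source": sid, "passage": text} for sid, text in ranked[:top_k]]

if __name__ == "__main__":
    for hit in answer("how should postpartum haemorrhage be managed"):
        print(f"[{hit['source']}] {hit['passage']}")
```

The property the sketch illustrates is that every returned passage is drawn from, and attributed to, a fixed set of verified documents, which is what keeps responses referenced and checkable.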

The tool is designed for policy-makers, researchers, health workers and civil society organisations, helping them quickly navigate complex scientific and policy information without relying on fragmented sources.

Built for global accessibility, the platform includes multilingual functionality and low-bandwidth optimisation to ensure usability in resource-limited settings. Its structure prioritises accuracy and transparency, with responses linked directly to validated research and guidance that is regularly updated.

The beta phase focuses on professional use cases, where users can query topics such as maternal health, contraception and disease management.

Why does it matter?

The initiative directly improves access to reliable, evidence-based health information in a field where misinformation can influence policy and health outcomes. By centralising verified sources and reducing reliance on fragmented or unverified material, it supports faster, more consistent decision-making across healthcare, research and policy environments globally.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Wikipedia-based AI model identifies 100 emerging technologies to watch in 2026

A new analysis by Australian researchers reveals how AI is reshaping the way emerging technologies are identified and tracked.

Using a dataset derived from thousands of Wikipedia entries, the researchers mapped more than 23,000 technologies to produce the ‘Momentum 100’ list, highlighting the fastest-growing technologies across science and industry.

The findings place reinforcement learning at the top, followed closely by blockchain and other rapidly advancing fields such as 3D printing, soft robotics and augmented reality.

These technologies reflect a broader shift towards data-driven innovation, where systems capable of learning, adapting and scaling are becoming central to both research and commercial applications.

Unlike traditional forecasts, which often rely on expert judgement, the model uses large-scale data analysis to detect patterns of growth and interconnection between technologies.

The approach offers a more dynamic and repeatable method, capturing early signals that might otherwise be overlooked in manual assessments.
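To make the general idea concrete, the toy sketch below ranks a handful of technologies by how quickly activity around their Wikipedia entries is growing. The activity figures and the growth metric are invented for illustration and do not reproduce the researchers' actual dataset or model.

```python
# Illustrative momentum ranking over hypothetical yearly activity counts
# (e.g. edits or pageviews) for a few technology pages.
activity = {
    "Reinforcement learning": [120, 180, 290, 470],
    "Blockchain": [300, 380, 520, 700],
    "Soft robotics": [40, 55, 80, 115],
}

def momentum(series: list[int]) -> float:
    """Average year-on-year growth rate across the observed period."""
    rates = [(b - a) / a for a, b in zip(series, series[1:]) if a > 0]
    return sum(rates) / len(rates) if rates else 0.0

ranking = sorted(activity, key=lambda tech: momentum(activity[tech]), reverse=True)
for rank, tech in enumerate(ranking, start=1):
    print(f"{rank}. {tech}: momentum {momentum(activity[tech]):.2f}")
```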

Despite its advantages, researchers caution that predicting real-world impact remains difficult at early stages.

While AI-driven mapping provides valuable insights, policymakers and industry leaders still rely on hybrid approaches that combine data analysis with expert evaluation, as seen in frameworks developed by organisations such as the World Economic Forum.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ILO warns against treating AI exposure indicators as job-loss forecasts

A new brief from the International Labour Organization (ILO) argues that AI exposure indicators should not be treated as forecasts of job losses, even as they become a more common tool for assessing how artificial intelligence could reshape work.

According to the ILO, these indicators can help identify where jobs may be affected by AI, but they do not show whether workers will actually be displaced or how labour markets will adjust in practice.

The brief examines how different exposure measures are constructed and why they often produce different results. Earlier approaches to automation focused mainly on routine and lower-skilled work, while newer AI-related models point to greater exposure in higher-skilled cognitive occupations, including roles in finance, computing, business, and education. That shift reflects the growing capacity of AI systems to perform tasks once seen as less vulnerable to automation.

The ILO stresses that exposure does not necessarily lead to job loss. Most indicators rely on static task descriptions and estimate what may be technically feasible, rather than what employers will actually adopt or what makes economic sense. They do not capture whether automation is profitable, whether it improves productivity, or how firms, workers, and institutions may respond over time.
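As a rough illustration of why a task-based indicator says little about actual displacement, the sketch below builds an occupation-level score as a simple average of task-level feasibility ratings. The occupations, tasks and ratings are hypothetical and do not reproduce the ILO's or any published methodology.

```python
# Hypothetical task-level ratings: 1.0 = fully automatable in principle, 0.0 = not at all.
occupations = {
    "Accounting clerk": {"reconcile ledgers": 0.8, "answer client queries": 0.5, "audit support": 0.4},
    "Primary teacher": {"prepare lesson materials": 0.6, "classroom supervision": 0.1, "grading": 0.5},
}

def exposure_score(task_ratings: dict[str, float]) -> float:
    """Unweighted mean of task-level feasibility ratings for one occupation."""
    return sum(task_ratings.values()) / len(task_ratings)

for occupation, tasks in occupations.items():
    # A high score signals technical feasibility only; it says nothing about
    # whether adoption is profitable or whether workers are actually displaced.
    print(f"{occupation}: exposure {exposure_score(tasks):.2f}")
```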

The brief also argues that AI-related disruption is unlikely to stay confined to a narrow set of occupations. Jobs are linked through shared skills, career mobility, and workplace structures, meaning that changes in one part of the labour market can influence broader employment patterns elsewhere. That makes simple occupation-by-occupation risk scores less useful on their own than they may appear.

For that reason, the ILO says exposure indicators should be used as early warning signals rather than stand-alone labour market forecasts. It recommends combining them with evidence on employment, wages, job transitions, and broader economic and institutional conditions to build a more realistic picture of how AI is affecting work.

The broader significance of the brief is that it pushes back against the simplest narratives about AI and employment. Rather than asking how many jobs AI will eliminate, the ILO is urging policymakers to focus on where work may change, how quickly adoption may happen, and what kinds of institutions, skills, and labour protections will shape the outcome.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Study suggests AI reliance may weaken short-term problem-solving

A recent study by researchers from Carnegie Mellon University, the University of Oxford, MIT, and UCLA suggests that reliance on AI for basic tasks may temporarily weaken cognitive performance.

Participants who used AI tools to complete simple maths and reading exercises initially performed better than those working without assistance. However, once the technology was removed, their accuracy declined, and they were less likely to persist with the tasks.

The findings suggest that even brief exposure to AI support can reduce a person’s willingness to engage in sustained problem-solving, which remains essential to learning and skill development.

Researchers found that participants became more likely to abandon tasks and less able to complete them independently after relying on AI assistance.

The results add to wider concerns about how AI may be reshaping learning habits and intellectual development. Related research from MIT has described a phenomenon called ‘cognitive debt’, in which heavy reliance on AI tools may weaken retention, understanding, and independent reasoning over time.

Taken together, the studies point to a growing tension in AI design. While such tools can improve speed and convenience, they may also reduce the mental effort needed to build lasting cognitive skills. That suggests AI systems may need to be designed to support learning without replacing independent thought altogether.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada launches hybrid AI weather model

Environment and Climate Change Canada has announced the launch of a hybrid AI weather forecasting model aimed at improving predictions of severe weather. The system combines AI with traditional physics-based forecasting methods.

According to Environment and Climate Change Canada, the model uses AI to analyse large datasets while relying on established models to account for local weather factors such as temperature, wind and precipitation. This combination is expected to improve forecast accuracy.
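One common way such a hybrid can be structured is to learn a statistical correction from the physics-based model's past errors and apply it to new forecasts. The sketch below illustrates only that blending idea, using synthetic data and a simple linear fit; it is not Environment and Climate Change Canada's actual system.

```python
# Hybrid pattern sketch: a learned correction applied on top of a physics-based forecast.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: past physics-based forecasts and the temperatures observed later.
physics_forecast = rng.normal(loc=5.0, scale=8.0, size=500)           # raw model output (deg C)
observed = physics_forecast * 0.9 - 1.5 + rng.normal(0, 1.0, 500)     # what actually happened

# The learned component here is a simple linear correction (least-squares fit).
A = np.vstack([physics_forecast, np.ones_like(physics_forecast)]).T
slope, intercept = np.linalg.lstsq(A, observed, rcond=None)[0]

def hybrid_forecast(raw_physics_value: float) -> float:
    """Apply the learned correction to a new physics-based forecast value."""
    return slope * raw_physics_value + intercept

print(hybrid_forecast(10.0))  # corrected forecast for a raw model value of 10 deg C
```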

The department states that the system will enhance performance across all forecast timeframes and provide earlier warnings of major weather events. In some cases, forecasts could identify large weather systems more than 24 hours earlier than current capabilities allow.

Environment and Climate Change Canada said the model has been extensively tested alongside existing systems and will support better preparedness and public safety as extreme weather events increase in Canada.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU approves Italian State aid to support graphene-based photonic chip development

The European Commission has approved a €211 million Italian State aid measure to support the development of photonic chips based on graphene technology.

The funding will be provided to the Italian SME CamGraPhIC, with project activities taking place in Pisa and Bergamo.

The initiative focuses on optical transceivers that transmit data using light rather than electrons. The use of graphene instead of silicon is expected to enhance performance and energy efficiency across sectors such as telecommunications, automotive, aerospace and defence.

The Commission assessed the measure under the EU State aid rules and concluded that the funding is necessary, proportionate and aligned with research and innovation objectives. It also found that the project would not proceed without public support, demonstrating an incentive effect.

The decision reflects broader EU efforts to strengthen semiconductor capabilities and support advanced digital technologies through targeted public investment and regulatory oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!