New EU cybersecurity package strengthens resilience and ENISA powers

The European Commission has unveiled a broad cybersecurity package that moves the EU beyond certification reform towards systemic resilience across critical digital infrastructure.

Building on plans to expand EU cybersecurity certification beyond products and services, the revised Cybersecurity Act introduces a risk-based framework for securing ICT supply chains, with particular focus on dependencies, foreign interference, and high-risk third-country suppliers.

A central shift concerns supply-chain security as a geopolitical issue. The proposal enables mandatory de-risking of mobile telecommunications networks, reinforcing earlier efforts under the 5G security toolbox.

Certification reform continues through a redesigned European Cybersecurity Certification Framework, promising clearer governance, faster scheme development, and voluntary certification that can cover organisational cyber posture alongside technical compliance.

The package also tackles regulatory complexity. Targeted amendments to the NIS2 Directive aim to ease compliance for tens of thousands of companies by clarifying jurisdictional rules, introducing a new ‘small mid-cap’ category, and streamlining incident reporting through a single EU entry point.

Enhanced ransomware data collection and cross-border supervision are intended to reduce fragmentation while strengthening enforcement consistency.

ENISA’s role is further expanded from coordination towards operational support. The agency would issue early threat alerts, assist in ransomware recovery with national authorities and Europol, and develop EU-wide vulnerability management and skills attestation schemes.

Together, the measures signal a shift from fragmented safeguards towards a more integrated model of European cyber sovereignty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cisco and OpenAI push AI-native software development

Cisco has deepened its collaboration with OpenAI to embed agentic AI into enterprise software engineering. The approach reflects a broader shift towards treating AI as operational infrastructure rather than an experimental tool.

Integrating Codex into production workflows exposed it to complex, multi-repository, and security-critical environments. Codex operated across interconnected codebases, running autonomous build and testing loops within existing compliance and governance frameworks.
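An autonomous build-and-test loop of the kind described above can be sketched in miniature. The snippet below is a hypothetical illustration only, not Cisco's or OpenAI's actual pipeline: `run_tests` and `propose_patch` are stand-ins for a real test harness and a code-generating model, and the toy harness treats the substring `BUG` as the defect.

```python
from typing import Callable, List, Tuple

def agentic_fix_loop(
    run_tests: Callable[[str], List[str]],          # returns names of failing tests
    propose_patch: Callable[[str, List[str]], str],  # "model" proposes revised code
    code: str,
    max_iterations: int = 5,
) -> Tuple[str, bool]:
    """Repeatedly run tests and apply model-proposed patches until green
    or the iteration budget is exhausted."""
    for _ in range(max_iterations):
        failures = run_tests(code)
        if not failures:
            return code, True
        code = propose_patch(code, failures)
    return code, len(run_tests(code)) == 0

# Toy harness: the "defect" is the substring 'BUG'; the "model"
# removes one occurrence per iteration.
result, ok = agentic_fix_loop(
    run_tests=lambda c: ["test_no_bug"] if "BUG" in c else [],
    propose_patch=lambda c, fails: c.replace("BUG", "", 1),
    code="def f():BUGBUG return 1",
)
```

In a real deployment, `run_tests` would invoke the CI system and `propose_patch` would call the model with the failing-test context; the iteration cap bounds cost when the model cannot converge on a fix.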

Operational use delivered measurable results. Engineering teams reported faster builds, higher defect-resolution throughput, and quicker framework migrations, cutting work from weeks to days.

Real-world deployment shaped Codex’s enterprise roadmap, especially around compliance, long-running tasks, and pipeline integration. The collaboration will continue as both organisations pursue AI-native engineering at scale, including within Cisco’s Splunk teams.


UK names industry leaders to steer safe AI adoption in finance

The UK government has appointed two senior industry figures as AI Champions to support safe and effective adoption of AI across financial services, as part of a broader push to boost growth and productivity.

Harriet Rees of Starling Bank and Dr Rohit Dhawan of Lloyds Banking Group will work with firms and regulators to help turn rapid AI uptake into practical delivery. Both will report directly to Lucy Rigby, the Economic Secretary to the Treasury.

AI is already widely deployed across the sector, with around three-quarters of UK financial firms using the technology. Analysis indicates AI could add tens of billions of pounds to financial services by 2030, while improving customer services and reducing costs.

The Champions will focus on accelerating trusted adoption, speeding up innovation, and removing barriers to scale. Their remit includes protecting consumers, supporting financial stability, and strengthening the UK’s role as a global economic and technology hub.


AI travel influencers begin reshaping digital storytelling

India’s first AI-generated travel influencer, Radhika Subramaniam, has attracted sustained audience engagement since her launch in mid-2025, signalling growing acceptance of virtual creators in travel content.

Developed by Collective Artists Network, a talent management company based in India, Radhika initially drew attention through curiosity, but followers increasingly interacted with her posts much as they would with human influencers, according to the company’s leadership.

Industry observers say AI travel influencers offer brands greater efficiency, lower production costs, and more control over storytelling, as virtual creators can be deployed without logistical constraints.

Some creators remain sceptical about whether artificial personas can replicate the emotional authenticity and sensory experiences that shape real-world travel storytelling.

Marketing specialists expect AI and human influencers to coexist, with virtual avatars serving as consistent brand voices while human creators retain value through spontaneity, trust, and personal perspective.


UK watchdogs warned over AI risks in financial services

UK regulators and the Treasury face MP criticism over their approach to AI, amid warnings of risks to consumers and financial stability. A new Treasury Select Committee report says authorities have been overly cautious as AI use rapidly expands across financial services.

More than 75% of UK financial firms are already using AI, according to evidence reviewed by the committee, with insurers and international banks leading uptake.

Applications range from automating back-office tasks to core functions such as credit assessments and insurance claims, increasing AI’s systemic importance within the sector.

MPs acknowledge AI’s benefits but warn that readiness for large-scale failures remains insufficient. The committee urges the Bank of England and the FCA to introduce AI-specific stress tests to gauge resilience to AI-driven market shocks.

Further recommendations include more explicit regulatory guidance on AI accountability and faster use of the Critical Third Parties Regime. No AI or cloud providers have been designated as critical, prompting calls for stronger oversight to limit operational and systemic risk.


Indian creators embrace Adobe AI tools

Adobe says generative AI is rapidly reshaping India’s creator economy, with 97% of surveyed creators reporting a positive impact. The findings come from the company’s inaugural Creators’ Toolkit Report, which covers more than 16,000 creators worldwide.

Adoption levels in India are among the highest globally, with almost all creators reporting that AI tools are embedded in their daily workflows. Adobe’s tools are commonly used for editing, content enhancement, asset generation, and idea development across video, image, and social media formats.

Despite enthusiasm, concerns remain around trust and transparency. Many creators fear their work may be used to train AI models without consent, while cost, unclear training methods and inconsistent outputs also limit wider confidence.

Interest in agentic AI is also growing, with most Indian creators expressing optimism about systems that automate tasks and adapt to personal creative styles. Mobile devices continue to gain importance, with creators expecting phone output to increase further.


Forced labour data opened to the public

Exiger has launched a free online tool designed to help organisations identify links to forced labour in global supply chains. The platform, called forcedlabor.ai, was unveiled during the annual meeting of the World Economic Forum in Davos.

The tool allows users to search suppliers and companies to assess potential exposure to state-sponsored forced labour, with an initial focus on risks linked to China. Exiger says the database draws on billions of records and is powered by proprietary AI to support compliance and ethical sourcing.

US lawmakers and human rights groups have welcomed the initiative, arguing that companies face growing legal and reputational risks if their supply chains rely on forced labour. The platform highlights risks linked to US import restrictions and enforcement actions.

Exiger says making the data freely available aims to level the playing field for smaller firms with limited compliance budgets. The company argues that greater transparency can help reduce modern slavery across industries, from retail to agriculture.


Yale researchers unveil AI platform for faster chemistry discovery

Researchers at Yale University have developed an AI platform that accelerates chemical discovery by turning scientific knowledge into practical laboratory guidance. The system, known as MOSAIC, generates detailed experimental procedures across chemistry, including drug design and materials science.

MOSAIC differs from existing AI chemistry tools by combining thousands of specialised AI ‘experts’, each representing a distinct area of chemical knowledge.

Instead of a single model, the platform draws on diverse reaction expertise to guide complex syntheses, including the synthesis of previously unreported compounds.

Early results suggest the approach significantly improves experimental outcomes. Using MOSAIC, researchers successfully synthesised more than 35 new compounds, spanning pharmaceuticals, catalysts, advanced materials, and other chemical domains.

The system also provides uncertainty estimates, helping scientists prioritise experiments most likely to succeed.
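The expert-routing and uncertainty-ranking ideas described above can be illustrated with a small sketch. Everything here is hypothetical (the expert names, the keyword router, the fixed scores); it is not MOSAIC's implementation, only the general pattern: route each query to a specialised scorer, then rank candidate experiments by prediction minus uncertainty so the most reliable procedures come first.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical stand-ins for specialised experts: each maps a proposed
# procedure to (predicted_yield, uncertainty).
Expert = Callable[[str], Tuple[float, float]]

EXPERTS: Dict[str, Expert] = {
    "amide_coupling":  lambda _q: (0.85, 0.05),
    "suzuki_coupling": lambda _q: (0.70, 0.20),
    "generic":         lambda _q: (0.50, 0.30),
}

def route(query: str) -> str:
    """Pick the expert whose keyword appears in the query (naive router)."""
    for name in EXPERTS:
        if name.split("_")[0] in query.lower():
            return name
    return "generic"

def rank_experiments(queries: List[str]) -> List[Tuple[str, float]]:
    """Score each proposed experiment with its routed expert and rank by a
    lower confidence bound (prediction minus uncertainty), so reliable
    high-yield procedures come first."""
    scored = [(q, *_score(q)) for q in queries]
    return sorted(((q, p - u) for q, p, u in scored),
                  key=lambda t: t[1], reverse=True)

def _score(query: str) -> Tuple[float, float]:
    return EXPERTS[route(query)](query)

ranking = rank_experiments([
    "Suzuki cross-coupling of aryl bromide",
    "Amide bond formation with EDC",
    "Unknown photoredox cascade",
])
```

The uncertainty term is what lets such a system tell scientists which procedures to attempt first: a lower-confidence-bound ranking penalises predictions the experts are unsure about.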

Designed as an open-source framework, MOSAIC aims to move AI beyond prediction and into hands-on laboratory support. Developers say the platform could cut research bottlenecks, improve reproducibility, and widen access to advanced chemical synthesis.


World Economic Forum 2026 highlights human-centred AI at work

Global leaders at the World Economic Forum 2026 are emphasising how AI can strengthen, rather than diminish, human work. Discussions are centred on workforce resilience as economies adapt to rapid technological and structural change.

AI is increasingly taking on routine tasks while providing clearer insights, allowing employees to focus on creativity, judgement, and higher-value activities.

Rather than replacing workers, intelligent tools are reshaping job design, career paths, and leadership expectations, particularly as labour shortages intensify across many developed economies.

Attention is also turning to leadership in an AI-driven workplace. Executives are expected to anticipate risks, spot emerging patterns, and guide teams through change, supported by AI systems that offer earlier and more accurate insights.

Clear communication, upskilling, and trust-building have emerged as core priorities for successful adoption.

Human oversight remains vital as AI enters HR and payroll systems, where errors carry regulatory and reputational risks. Speakers stressed that involving employees directly in AI design improves trust, reduces risk, and ensures intelligent systems address real operational challenges.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.


OECD says generative AI reshapes education with mixed results

Generative AI has entered classrooms worldwide, with students using chatbots for assignments and teachers adopting AI tools for lesson planning. Adoption has been rapid, driven by easy access, intuitive design, and minimal technical barriers.

The OECD’s new Digital Education Outlook 2026 highlights both opportunities and risks linked to this shift. AI can support learning when aligned with clear goals, but replacing productive struggle may weaken deep understanding and student focus.

Research cited in the report suggests that general-purpose AI tools may improve the quality of written work without boosting exam performance. Education-specific AI grounded in learning science appears more effective as a collaborative partner or research assistant.

Early trials also indicate that GenAI-powered tutoring tools can enhance teacher capacity and improve student outcomes, particularly in mathematics. Policymakers are urged to prioritise pedagogically sound AI that is rigorously evaluated to strengthen learning.
