Stanford and Swiss institutes unite on open AI models

Stanford University, ETH Zurich, and EPFL have launched a transatlantic partnership to develop open-source AI models prioritising societal values over commercial interests.

The partnership was formalised through a memorandum of understanding signed during the World Economic Forum meeting in Davos.

The agreement establishes long-term cooperation in AI research, education, and innovation, with a focus on large-scale multimodal models. The initiative aims to strengthen academia’s influence over global AI by promoting transparency, accountability, and inclusive access.

Joint projects will develop open datasets, evaluation benchmarks, and responsible deployment frameworks, alongside researcher exchanges and workshops. The effort aims to embed human-centred principles into technical progress while supporting interdisciplinary discovery.

Academic leaders said the alliance reinforces open science and cultural diversity amid growing corporate influence over foundation models. The collaboration positions universities as central drivers of ethical, trustworthy, and socially grounded AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Education for Countries programme signals OpenAI push into public education policy

OpenAI has launched the Education for Countries programme, a new global initiative designed to support governments in modernising education systems and preparing workforces for an AI-driven economy.

The programme responds to a widening gap between rapid advances in AI capabilities and people’s ability to use them effectively in everyday learning and work.

Education systems are positioned at the centre of closing that gap, as research suggests a significant share of core workplace skills will change by the end of the decade.

By integrating AI tools, training and research into schools and universities, national education frameworks can evolve alongside technological change and better equip students for future labour markets.

The programme combines access to tools such as ChatGPT Edu and advanced language models with large-scale research on learning outcomes, tailored national training schemes and internationally recognised certifications.

A global network of governments, universities and education leaders will also share best practices and shape responsible approaches to AI use in classrooms.

Initial partners include Estonia, Greece, Italy, Jordan, Kazakhstan, Slovakia, Trinidad and Tobago and the United Arab Emirates. Early national rollouts, particularly in Estonia, already involve tens of thousands of students and educators, with further countries expected to join later in 2026.

Cambodia Internet Governance Forum marks major step toward inclusive digital policy

The first national Internet Governance Forum in Cambodia has taken place, establishing a new platform for digital policy dialogue. The Cambodia Internet Governance Forum (CamIGF) included civil society, private sector and youth participants.

The forum follows an Internet Universality Indicators assessment led by UNESCO and national partners. The assessment recommended a permanent multistakeholder platform for digital governance, grounded in human rights, openness, accessibility and participation.

Opening remarks from national and international stakeholders framed the CamIGF as a move toward people-centred and rights-based digital transformation. Speakers stressed the need for cross-sector cooperation to ensure connectivity, innovation and regulation deliver public benefit.

Discussions focused on online safety in the age of AI, meaningful connectivity, youth participation and digital rights. The programme also included Cambodia’s Youth Internet Governance Forum, highlighting young people’s role in addressing data protection and digital skills gaps.

By institutionalising a national IGF, Cambodia joins a growing global network using multistakeholder dialogue to guide digital policy. UNESCO confirmed continued support for implementing assessment recommendations and strengthening inclusive digital governance.

TikTok restructures operations for US market

TikTok has finalised a deal allowing the app to continue operating in the United States by separating its US business from its global operations. The agreement follows years of political pressure in the US over national security concerns.

Under the arrangement, a new entity will manage TikTok’s US operations, with user data and algorithms handled inside the US. The recommendation algorithm has been licensed and will now be trained only on US user data to meet American regulatory requirements.

Ownership of TikTok’s US business is shared among American and international investors, while China-based ByteDance retains a minority stake. Oracle will oversee data security and cloud infrastructure for users in the US.

Analysts say the changes could alter how the app functions for the roughly 200 million users in the US. Questions remain over whether a US-trained algorithm will perform as effectively as the global version.

EU cyber rules target global tech dependence

The European Union has proposed new cybersecurity rules aimed at reducing reliance on high-risk technology suppliers, particularly from China. Policymakers argue that existing voluntary measures have failed to curb dependence on vendors such as Huawei and ZTE.

The proposal would introduce binding obligations for telecom operators across the bloc to phase out Chinese equipment. At the same time, officials have warned that reliance on US cloud and satellite services also poses security risks for Europe.

Despite increased funding and expanded certification plans, divisions remain among member states. Countries including Germany and France support stricter sovereignty rules, while others favour continued partnerships with US technology firms.

Analysts say the lack of consensus could weaken the impact of the reforms. Without clear enforcement and investment in European alternatives, Europe may struggle to reduce dependence on both China and the US.

OpenAI ads in ChatGPT signal a shift in conversational advertising

OpenAI plans to introduce advertising within ChatGPT for logged-in adult users, marking a structural shift in how brands engage audiences through conversational interfaces.

Ads would be clearly labelled and positioned alongside responses, aiming to replace interruption-driven formats with context-aware brand suggestions delivered during moments of active user intent.

Industry executives describe conversational AI advertising as a shift from exposure to earned presence, in which brands must provide clarity or utility to justify inclusion.

Experts warn that trust remains fragile, as AI recommendations carry the weight of personal consultation, and undisclosed commercial influence could prompt rapid user disengagement instead of passive ad avoidance.

Regulators and marketers alike highlight risks linked to dark patterns, algorithmic framing and subtle manipulation within AI-mediated conversations.

As conversational systems begin to shape discovery and decision-making, media planning is expected to shift toward intent-led engagement, authority-building, and transparency, reshaping digital advertising economics beyond search rankings and impression-based buying.

Ransomware attack on Under Armour leads to massive customer data exposure

Under Armour is facing growing scrutiny following the publication of customer data linked to a ransomware attack disclosed in late 2025.

According to breach verification platform Have I Been Pwned, a dataset associated with the incident appeared on a hacking forum in January, exposing information tied to tens of millions of customers.

The leaked material reportedly includes 72 million email addresses alongside names, dates of birth, location details and purchase histories. Security analysts warn that such datasets pose risks that extend far beyond immediate exposure, particularly when personal identifiers and behavioural data are combined.

Experts note that verified customer information linked to a recognised brand can enable compelling phishing and fraud campaigns powered by AI tools.

Messages referencing real transactions or purchase behaviour can blur the boundary between legitimate communication and malicious activity, increasing the likelihood of delayed victimisation.

The incident has also led to legal action against Under Armour, with plaintiffs alleging failures in safeguarding sensitive customer information. The case highlights how modern data breaches increasingly generate long-term consequences rather than immediate technical disruption.

Higher education urged to lead on AI skills and ethics

AI is reshaping how people work, learn and participate in society, prompting calls for universities to take a more active leadership role. A new book by Juan M. Lavista Ferres of Microsoft’s AI Economy Institute argues that higher education institutions must move faster to prepare students for an AI-driven world.

Balancing technical training with long-standing academic values remains a central challenge. Institutions are encouraged to teach practical AI skills while continuing to emphasise critical thinking, communication and ethical reasoning.

AI literacy is increasingly seen as essential for both employment and daily life. Early labour market data suggests that AI proficiency is already linked to higher wages, reinforcing calls for higher education institutions to embed AI education across disciplines rather than treating it as a specialist subject.

Developers, educators and policymakers are also urged to improve their understanding of each other’s roles. Technical knowledge must be matched with awareness of AI’s social impact, while non-technical stakeholders need clearer insight into how AI systems function.

Closer cooperation between universities, industry and governments is expected to shape the next phase of AI adoption. Higher education institutions are being asked to set recognised standards for AI credentials, expand access to training, and ensure inclusive pathways for diverse learners.

Generative AI fuels surge in online fraud risks in 2026

Online scams, driven by the growing use of generative AI, are expected to surge in 2026 and overtake ransomware as the top cyber-risk, the World Economic Forum has warned.

Executives are increasingly concerned about AI-driven scams that are easier to launch and harder to detect than traditional cybercrime. WEF managing director Jeremy Jurgens said leaders now face the challenge of acting collectively to protect trust and stability in an AI-driven digital environment.

Consumers are also feeling the impact. An Experian report found 68% of people now see identity theft as their main concern, while US Federal Trade Commission data shows consumer fraud losses reached $12.5 billion in 2024, up 25% year on year.

Generative AI is enabling more convincing phishing, voice cloning, and impersonation attempts. The WEF reported that 62% of executives experienced phishing attacks, 37% encountered invoice fraud, and 32% reported identity theft, with vulnerable groups increasingly targeted through synthetic content abuse.

Experts warn that many organisations still lack the skills and resources to defend against evolving threats. Consumer groups advise slowing down, questioning urgent messages, avoiding unsolicited requests for information, and verifying contacts independently to reduce the risk of generative AI-powered scams.

UK launches software security ambassadors scheme

The UK government has launched the Software Security Ambassadors Scheme to promote stronger software security practices nationwide. The initiative is led by the Department for Science, Innovation and Technology and the National Cyber Security Centre.

Participating organisations commit to championing the new Software Security Code of Practice within their industries. Signatories agree to lead by example through secure development, procurement and advisory practices, while sharing lessons learned to strengthen national cyber resilience.

The scheme aims to improve transparency and risk management across UK digital supply chains. Software developers are encouraged to embed security throughout the whole lifecycle, while buyers are expected to incorporate security standards into procurement processes.

Officials say the approach supports the UK’s broader economic and security goals by reducing cyber risks and increasing trust in digital technologies. The government believes that better security practices will help UK businesses innovate safely and withstand cyber incidents.
