Cambodia Internet Governance Forum marks major step toward inclusive digital policy

The first national Internet Governance Forum in Cambodia has taken place, establishing a new platform for digital policy dialogue. The Cambodia Internet Governance Forum (CamIGF) brought together participants from civil society, the private sector and youth groups.

The forum follows an Internet Universality Indicators assessment led by UNESCO and national partners. The assessment recommended a permanent multistakeholder platform for digital governance, grounded in human rights, openness, accessibility and participation.

Opening remarks from national and international stakeholders framed the CamIGF as a move toward people-centred and rights-based digital transformation. Speakers stressed the need for cross-sector cooperation to ensure connectivity, innovation and regulation deliver public benefit.

Discussions focused on online safety in the age of AI, meaningful connectivity, youth participation and digital rights. The programme also included Cambodia’s Youth Internet Governance Forum, highlighting young people’s role in addressing data protection and digital skills gaps.

By institutionalising a national IGF, Cambodia joins a growing global network using multistakeholder dialogue to guide digital policy. UNESCO confirmed continued support for implementing assessment recommendations and strengthening inclusive digital governance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WEF paper warns of widening AI investment gap

Policy-makers are being urged to take a more targeted approach to ‘sovereign AI’ spending, as a new paper released alongside the World Economic Forum meeting in Davos argues that no country can realistically build every part of the AI stack alone. Instead, the authors recommend treating AI sovereignty as ‘strategic interdependence’, combining selective domestic investment with trusted partnerships and alliances.

The paper, co-authored by the World Economic Forum and Bain & Co, highlights how heavily the United States and China dominate the global AI landscape. It estimates that the two countries capture around 65% of worldwide investment across the AI value chain, reflecting a full-stack model spanning chips, cloud infrastructure and applications that most other economies cannot match at the same scale.

For smaller and mid-sized economies, that imbalance can translate into a competitive disadvantage, because AI infrastructure, such as data centres and computing capacity, is increasingly viewed as the backbone of national AI capability. Still, the report argues that faster-moving countries can carve out a niche by focusing on a few priority areas, pooling regional capacity, or securing access through partnerships rather than trying to replicate the US-China approach.

The message was echoed in Davos by Nvidia chief executive Jensen Huang, who said every country should treat AI as essential infrastructure, comparable to electricity grids and transport networks. He argued that building AI data centres could drive demand for well-paid skilled trades, from electricians and plumbers to network engineers, framing the boom as a major job creator rather than a trigger for widespread job losses.

At the same time, the paper warns that physical constraints could slow expansion, including the availability of land, energy and water, as well as shortages of highly skilled workers. It also notes that local regulation can delay projects, although some industry groups argue that regulatory and cost pressures may push countries to innovate sooner in efficiency and greener data-centre design.

In the UK, industry body UKAI says high energy prices, limited grid capacity, complex planning rules and public scrutiny already create the same hurdles many other countries may soon face. It argues these constraints are helping drive improvements in efficiency, system design and coordination, seen as building blocks for more sustainable AI infrastructure.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tata’s $11 billion Innovation City plan gains global visibility at Davos

Tata Sons plans to invest $11 billion to build a large ‘Innovation City’ near the upcoming Navi Mumbai International Airport, according to Maharashtra Chief Minister Devendra Fadnavis, speaking at the World Economic Forum (WEF) in Davos. He said the project has drawn strong interest from international investors and will include major infrastructure upgrades alongside a data centre.

Fadnavis said the aim is to turn Mumbai and its wider region into a global, ‘plug-and-play’ innovation hub where companies can quickly set up and scale new technologies. He described the initiative as the first of its kind in India and said work is expected to begin within six to eight months.

The location next to the Adani Group–developed Navi Mumbai Airport is being positioned as an advantage, linking global connectivity with the high-tech industry. The project also reflects a broader global rush to expand data centres as companies roll out AI services, with firms such as Microsoft, Alphabet, and Amazon investing heavily in new capacity worldwide.

Maharashtra, which contributes more than 10 percent of India’s GDP and hosts the country’s financial capital, is also pushing a wider infrastructure drive, including a $30 billion plan to upgrade Mumbai. State leaders have framed these investments as part of an effort to boost growth and respond to economic pressures, including unemployment.

The Innovation City is expected to support India’s ambitions in AI and semiconductors, with national officials pointing to a public-private partnership approach rather than leaving development solely to big tech companies. Alongside this, the state is exploring energy innovation, including potential collaborations on small modular nuclear reactors, following recent legislative support for smaller-scale nuclear projects.

Taken together, the plan is being presented as a bid to attract global investment, accelerate high-tech development, and strengthen India’s role in emerging industrial and technology shifts centred on AI, advanced manufacturing, and digital infrastructure.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Act strengthens training rules despite 2025 Digital Omnibus reforms

The European AI Regulation reinforces training and awareness as core compliance requirements, even as the EU considers simplifications through the proposed Digital Omnibus. Regulation (EU) 2024/1689, the AI Act, establishes a risk-based framework for AI systems.

AI literacy is promoted through a multi-level approach. The EU institutions focus on public awareness, national authorities support voluntary codes of conduct, and organisations are currently required under the AI Act to ensure adequate AI competence among staff and third parties involved in system use.

A proposed amendment to Article 4, submitted in November 2025 under the Digital Omnibus, would replace mandatory internal competence requirements with encouragement-based measures. The change seeks to reduce administrative burden without removing AI Act risk management duties.

Even if adopted, the amendment would not eliminate the practical need for AI training. Competence in AI systems remains essential for governance, transparency, monitoring, and incident handling, particularly for high-risk use cases regulated by the AI Act.

Companies are therefore expected to continue investing in tailored AI training across management, technical, legal, and operational roles. Embedding awareness and competence into risk management frameworks remains critical to compliance and risk mitigation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Humanoid robots and AI take centre stage as Musk joins Davos 2026

Elon Musk made his first appearance at the World Economic Forum in Davos despite years of public criticism of the gathering, arguing that AI and robotics represent the only realistic route to global abundance.

Speaking alongside BlackRock chief executive Larry Fink, Musk framed robotics as a civilisational shift rather than a niche innovation, claiming widespread automation will raise living standards and reshape economic growth.

Musk predicted a future where robots outnumber humans, with humanoid systems embedded across industry, healthcare and domestic life.

He highlighted elder care as a key use case in ageing societies facing labour shortages, suggesting that robotics could compensate for demographic decline rather than relying solely on migration or extended working lives.

Tesla’s Optimus humanoid robots are already performing simple factory tasks, with more complex functions expected within a year.

Musk indicated public sales could begin by 2027 once reliability thresholds are met. He also argued autonomous driving is largely resolved, pointing to expanding robotaxi deployments in the US and imminent regulatory decisions in Europe and China.

The global market for humanoid robotics remains relatively small, but analysts expect rapid expansion as AI capabilities improve and costs fall.

Musk at Davos 2026 presented robotics as an engine for economic acceleration, suggesting ubiquitous automation could unlock productivity gains on a scale comparable to past industrial revolutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI ads in ChatGPT signal a shift in conversational advertising

AI firm OpenAI plans to introduce advertising within ChatGPT for logged-in adult users, marking a structural shift in how brands engage audiences through conversational interfaces.

Ads would be clearly labelled and positioned alongside responses, aiming to replace interruption-driven formats with context-aware brand suggestions delivered during moments of active user intent.

Industry executives describe conversational AI advertising as a shift from exposure to earned presence, in which brands must provide clarity or utility to justify inclusion.

Experts warn that trust remains fragile, as AI recommendations carry the weight of personal consultation, and undisclosed commercial influence could prompt rapid user disengagement instead of passive ad avoidance.

Regulators and marketers alike highlight risks linked to dark patterns, algorithmic framing and subtle manipulation within AI-mediated conversations.

As conversational systems begin to shape discovery and decision-making, media planning is expected to shift toward intent-led engagement, authority-building, and transparency, reshaping digital advertising economics beyond search rankings and impression-based buying.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware attack on Under Armour leads to massive customer data exposure

Under Armour is facing growing scrutiny following the publication of customer data linked to a ransomware attack disclosed in late 2025.

According to breach verification platform Have I Been Pwned, a dataset associated with the incident appeared on a hacking forum in January, exposing information tied to tens of millions of customers.

The leaked material reportedly includes 72 million email addresses alongside names, dates of birth, location details and purchase histories. Security analysts warn that such datasets pose risks that extend far beyond immediate exposure, particularly when personal identifiers and behavioural data are combined.

Experts note that verified customer information linked to a recognised brand can enable compelling phishing and fraud campaigns powered by AI tools.

Messages referencing real transactions or purchase behaviour can blur the boundary between legitimate communication and malicious activity, increasing the likelihood of delayed victimisation.

The incident has also led to legal action against Under Armour, with plaintiffs alleging failures in safeguarding sensitive customer information. The case highlights how modern data breaches increasingly generate long-term consequences rather than immediate technical disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Glasses Impact Grants by Meta aim to boost social projects

Meta has launched a new AI Glasses Impact Grants programme to support US-based organisations using its AI-powered glasses for social and economic benefit. The initiative aims to scale existing projects and encourage new applications through financial support and technical access.

Grant recipients will be selected under two tracks. Accelerator Grants target organisations already using Meta’s AI glasses to expand their impact, while Catalyst Grants support new use cases developed with the Wearables Device Access Toolkit.

More than 30 organisations will receive funding, with awards ranging from $25,000 to $200,000 depending on project scope. Successful applicants will also join the Meta Wearables Community, a network of developers, researchers, and innovators focused on advancing wearable technology.

Practical use cases already include agricultural monitoring, sports injury documentation, and film education. Farmers use the glasses for real-time crop diagnostics, athletic trainers capture injury data hands-free, and film students record footage and pre-visualise shoots more easily.

Meta says the grants are designed to help organisations turn experimental ideas into scalable solutions. The company aims to expand the real-world impact of its AI glasses across education, accessibility, and community development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI method boosts reasoning without extra training

Researchers at the University of California, Riverside, have introduced a technique that improves AI reasoning without requiring additional training data. The approach, called Test-Time Matching, enhances performance by letting models adapt dynamically at inference time.

The method addresses a persistent weakness in multimodal AI systems, which often struggle to interpret unfamiliar combinations of images and text. Traditional evaluation metrics rely on isolated comparisons that can obscure deeper reasoning capabilities.

By replacing these with a group-based matching approach, the researchers uncovered hidden model potential and achieved markedly stronger results.
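The core contrast between isolated comparisons and group-based matching can be illustrated with a toy sketch (an illustrative assumption, not the researchers' code): instead of each image independently picking its highest-scoring caption, the system scores all image–caption assignments jointly and keeps the one-to-one pairing with the highest total similarity.

```python
from itertools import permutations

def pairwise_argmax(sim):
    # Isolated comparison: each image picks its best caption on its own,
    # so two images may claim the same caption.
    return [max(range(len(row)), key=row.__getitem__) for row in sim]

def group_match(sim):
    # Group-based matching: choose the one-to-one assignment of captions
    # to images that maximises total similarity (brute force for clarity).
    n = len(sim)
    best = max(permutations(range(n)),
               key=lambda p: sum(sim[i][p[i]] for i in range(n)))
    return list(best)

# Toy similarity matrix: rows = images, columns = candidate captions.
sim = [[0.9, 0.8],
       [0.7, 0.2]]

print(pairwise_argmax(sim))  # → [0, 0]: both images prefer caption 0
print(group_match(sim))      # → [1, 0]: joint matching resolves the clash
```

In the toy example, independent scoring lets both images grab caption 0, while the group view notices that assigning caption 1 to the first image and caption 0 to the second yields a higher combined score, recovering the correct pairing.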

Test-Time Matching lets AI systems refine predictions through repeated self-correction. Tests on SigLIP-B16 showed substantial gains, with performance surpassing larger models, including GPT-4.1, on key reasoning benchmarks.

The findings suggest that smarter evaluation and adaptation strategies may unlock powerful reasoning abilities even in smaller models. Researchers say the approach could speed AI deployment across robotics, healthcare, and autonomous systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Higher education urged to lead on AI skills and ethics

AI is reshaping how people work, learn and participate in society, prompting calls for universities to take a more active leadership role. A new book by Juan M. Lavista Ferres of Microsoft’s AI Economy Institute argues that higher education institutions must move faster to prepare students for an AI-driven world.

Balancing technical training with long-standing academic values remains a central challenge. Institutions are encouraged to teach practical AI skills while continuing to emphasise critical thinking, communication and ethical reasoning.

AI literacy is increasingly seen as essential for both employment and daily life. Early labour market data suggests that AI proficiency is already linked to higher wages, reinforcing calls for higher education institutions to embed AI education across disciplines rather than treating it as a specialist subject.

Developers, educators and policymakers are also urged to improve their understanding of each other’s roles. Technical knowledge must be matched with awareness of AI’s social impact, while non-technical stakeholders need clearer insight into how AI systems function.

Closer cooperation between universities, industry and governments is expected to shape the next phase of AI adoption. Higher education institutions are being asked to set recognised standards for AI credentials, expand access to training, and ensure inclusive pathways for diverse learners.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!