Humanoid robots and AI take centre stage as Musk joins Davos 2026

Elon Musk made his first appearance at the World Economic Forum in Davos despite years of public criticism of the gathering, arguing that AI and robotics represent the only realistic route to global abundance.

Speaking alongside BlackRock chief executive Larry Fink, Musk framed robotics as a civilisational shift rather than a niche innovation, claiming widespread automation will raise living standards and reshape economic growth.

Musk predicted a future where robots outnumber humans, with humanoid systems embedded across industry, healthcare and domestic life.

He highlighted elder care as a key use case in ageing societies facing labour shortages, suggesting that robotics could compensate for demographic decline rather than relying solely on migration or extended working lives.

Tesla’s Optimus humanoid robots are already performing simple factory tasks, with more complex functions expected within a year.

Musk indicated public sales could begin by 2027 once reliability thresholds are met. He also argued that autonomous driving is largely a solved problem, pointing to expanding robotaxi deployments in the US and imminent regulatory decisions in Europe and China.

The global market for humanoid robotics remains relatively small, but analysts expect rapid expansion as AI capabilities improve and costs fall.

At Davos 2026, Musk presented robotics as an engine of economic acceleration, suggesting ubiquitous automation could unlock productivity gains on a scale comparable to past industrial revolutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI ads in ChatGPT signal a shift in conversational advertising

OpenAI plans to introduce advertising within ChatGPT for logged-in adult users, marking a structural shift in how brands engage audiences through conversational interfaces.

Ads would be clearly labelled and positioned alongside responses, aiming to replace interruption-driven formats with context-aware brand suggestions delivered during moments of active user intent.

Industry executives describe conversational AI advertising as a shift from exposure to earned presence, in which brands must provide clarity or utility to justify inclusion.

Experts warn that trust remains fragile, as AI recommendations carry the weight of personal consultation, and undisclosed commercial influence could prompt rapid user disengagement instead of passive ad avoidance.

Regulators and marketers alike highlight risks linked to dark patterns, algorithmic framing and subtle manipulation within AI-mediated conversations.

As conversational systems begin to shape discovery and decision-making, media planning is expected to shift toward intent-led engagement, authority-building, and transparency, reshaping digital advertising economics beyond search rankings and impression-based buying.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rethinking the digital embassy concept

The term ‘digital embassy’ has been floating around for years, but it often adds more confusion than clarity. In his blog post ‘What is a “digital embassy”? (Spoiler: It’s not an embassy)’, Jovan Kurbalija argues that the phrase is a misnomer in a field already crowded with overlapping labels such as digital diplomacy, cyber diplomacy, and tech diplomacy.

The expression became popular after Estonia, in 2017, set up an offshore backup of national data on servers in Luxembourg under diplomatic protection. The idea was innovative, but Kurbalija stresses that a protected data vault is not an embassy in the traditional sense; it does not represent a country, negotiate on its behalf, or engage with the host society.

He points to the 1961 Vienna Convention on Diplomatic Relations, which defines an embassy as a state’s official presence on foreign territory, tasked with representation, negotiation, and the safeguarding of national interests. While states have experimented with online forms of presence, such as official websites or even ‘virtual embassies’ in platforms like Second Life, the core function remains political and relational, not simply technical.

Calling a remote server a ‘digital embassy,’ Kurbalija warns, can mislead the public and muddy policymaking. An embassy suggests diplomacy and interaction; a backup facility is about continuity, resilience, and the preservation of state records.

Estonia’s motivation, he notes, was shaped by history, specifically the fear of losing national archives and collective memory, echoing the seizure of state records during the Soviet occupation in 1940.

The push for more precise terminology may become even more important if these facilities evolve. A proposal raised during a World Economic Forum panel suggested adding AI-based processing capabilities to such offshore data sites, an idea that could shift them from passive storage toward something closer to strategic infrastructure linked to ‘AI sovereignty.’

Kurbalija suggests that instead of stretching the word ‘embassy,’ governments could borrow more precise historical concepts for protected foreign facilities, such as a ‘diplomatic enclave,’ ‘diplomatic funduq,’ or ‘diplomatic sanctuary.’ His broader point is that as countries invest in digital resilience and sovereignty, the language used to describe these arrangements should keep pace, because legitimacy and legal clarity often begin with accurate naming.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware attack on Under Armour leads to massive customer data exposure

Under Armour is facing growing scrutiny following the publication of customer data linked to a ransomware attack disclosed in late 2025.

According to breach verification platform Have I Been Pwned, a dataset associated with the incident appeared on a hacking forum in January, exposing information tied to tens of millions of customers.

The leaked material reportedly includes 72 million email addresses alongside names, dates of birth, location details and purchase histories. Security analysts warn that such datasets pose risks that extend far beyond immediate exposure, particularly when personal identifiers and behavioural data are combined.

Experts note that verified customer information linked to a recognised brand can enable compelling phishing and fraud campaigns powered by AI tools.

Messages referencing real transactions or purchase behaviour can blur the boundary between legitimate communication and malicious activity, increasing the likelihood of delayed victimisation.

The incident has also led to legal action against Under Armour, with plaintiffs alleging failures in safeguarding sensitive customer information. The case highlights how modern data breaches increasingly generate long-term consequences rather than immediate technical disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Glasses Impact Grants by Meta aim to boost social projects

Meta has launched a new AI Glasses Impact Grants programme to support US-based organisations using its AI-powered glasses for social and economic benefit. The initiative aims to scale existing projects and encourage new applications through financial support and technical access.

Grant recipients will be selected under two tracks. Accelerator Grants target organisations already using Meta’s AI glasses to expand their impact, while Catalyst Grants support new use cases developed with the Wearables Device Access Toolkit.

More than 30 organisations will receive funding, with awards ranging from $25,000 to $200,000 depending on project scope. Successful applicants will also join the Meta Wearables Community, a network of developers, researchers, and innovators focused on advancing wearable technology.

Practical use cases already include agricultural monitoring, sports injury documentation, and film education. Farmers use the glasses for real-time crop diagnostics, athletic trainers capture injury data hands-free, and film students record footage and pre-visualise shoots more easily.

Meta says the grants are designed to help organisations turn experimental ideas into scalable solutions. The company aims to expand the real-world impact of its AI glasses across education, accessibility, and community development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI method boosts reasoning without extra training

Researchers at the University of California, Riverside, have introduced a technique that improves AI reasoning without requiring additional training data. Called Test-Time Matching, the approach lets a model adapt to each test case at inference time instead of relying on new labelled examples.

The method addresses a persistent weakness in multimodal AI systems, which often struggle to interpret unfamiliar combinations of images and text. Traditional evaluation metrics rely on isolated comparisons that can obscure deeper reasoning capabilities.

By replacing these with a group-based matching approach, the researchers uncovered hidden model potential and achieved markedly stronger results.

Test-Time Matching lets AI systems refine predictions through repeated self-correction. Tests on SigLIP-B16 showed substantial gains, with performance surpassing larger models, including GPT-4.1, on key reasoning benchmarks.
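To make the distinction concrete, the sketch below contrasts scoring image-caption pairs in isolation with evaluating them through group-based matching, loosely following the description above. The group size, similarity scores, and helper functions are illustrative assumptions rather than the researchers’ actual implementation.

```python
# Minimal, hypothetical sketch: isolated pairwise scoring vs. group-based
# matching of images and captions. Scores and helpers are illustrative
# assumptions, not the researchers' implementation.
from itertools import permutations

import numpy as np


def isolated_accuracy(scores: np.ndarray) -> float:
    """Score each image on its own: counted correct if its true caption
    (the diagonal entry) outscores every other caption for that image."""
    correct = scores.argmax(axis=1) == np.arange(scores.shape[0])
    return float(correct.mean())


def group_match_is_correct(scores: np.ndarray) -> bool:
    """Group-based matching: the group counts as correct only if the best
    joint one-to-one assignment of captions to images is the identity,
    i.e. every image ends up paired with its own caption."""
    n = scores.shape[0]
    best = max(permutations(range(n)),
               key=lambda p: sum(scores[i, p[i]] for i in range(n)))
    return list(best) == list(range(n))


# A hypothetical 2x2 group (two images, two captions), the typical layout of
# compositional benchmarks: rows are images, columns are captions, entries
# are image-text similarity scores from a model such as SigLIP.
scores = np.array([[0.62, 0.58],
                   [0.60, 0.57]])

print(isolated_accuracy(scores))       # 0.5  -- the second image picks the wrong caption
print(group_match_is_correct(scores))  # True -- the joint assignment is still correct
```

In the full method as described above, the confident group assignments would then serve as a training signal for repeated self-correction; that adaptation loop is omitted here for brevity.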

The findings suggest that smarter evaluation and adaptation strategies may unlock powerful reasoning abilities even in smaller models. Researchers say the approach could speed AI deployment across robotics, healthcare, and autonomous systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Higher education urged to lead on AI skills and ethics

AI is reshaping how people work, learn and participate in society, prompting calls for universities to take a more active leadership role. A new book by Juan M. Lavista Ferres of Microsoft’s AI Economy Institute argues that higher education institutions must move faster to prepare students for an AI-driven world.

Balancing technical training with long-standing academic values remains a central challenge. Institutions are encouraged to teach practical AI skills while continuing to emphasise critical thinking, communication and ethical reasoning.

AI literacy is increasingly seen as essential for both employment and daily life. Early labour market data suggests that AI proficiency is already linked to higher wages, reinforcing calls for higher education institutions to embed AI education across disciplines rather than treating it as a specialist subject.

Developers, educators and policymakers are also urged to improve their understanding of each other’s roles. Technical knowledge must be matched with awareness of AI’s social impact, while non-technical stakeholders need clearer insight into how AI systems function.

Closer cooperation between universities, industry and governments is expected to shape the next phase of AI adoption. Higher education institutions are being asked to set recognised standards for AI credentials, expand access to training, and ensure inclusive pathways for diverse learners.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Adobe upgrades Premiere and After Effects with new AI features

Adobe has unveiled new AI-powered upgrades for video creators, expanding tools in Premiere, After Effects, and Firefly Boards ahead of the Sundance Film Festival. The updates aim to streamline post-production, improve collaboration, and enhance creative control.

Premiere now offers AI-assisted object selection, redesigned shape masks, and tighter integration with Firefly Boards. Editors can brainstorm ideas, explore visuals, and move assets into workflows using AI models from Adobe, Google, OpenAI, and others.

After Effects is also receiving major updates, including native 3D parametric meshes, access to more than 1,300 Substance 3D materials, improved vector workflows, and expanded variable-font animation tools. The additions are designed to support more advanced motion design and visual storytelling.

Alongside the product upgrades, Adobe announced an extra $10 million in funding through its Film & TV Fund to support emerging filmmakers from underserved communities. New partners include Rideback RISE and Dimz Inc., with existing collaborations continuing.

According to the Sundance Institute, 85% of films submitted to the 2026 festival were created using Creative Cloud tools. Adobe said it will continue investing in AI-driven workflows, professional training, and industry partnerships to support the next generation of storytellers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI tools reshape legal research and court efficiency in India

AI is rapidly reshaping India’s legal sector, as law firms and research platforms deploy conversational tools to address mounting caseloads and administrative strain.

SCC Online has launched an AI-powered legal research assistant that enables lawyers to ask complex questions in plain language, replacing rigid keyword-based searches and significantly reducing research time.

The need for speed and accuracy is pressing. India’s courts face a backlog exceeding 46 million cases, driven by procedural delays, documentation gaps, and limited judicial capacity.

Legal professionals routinely lose hours navigating precedents, limiting time for strategy, analysis, and client engagement.

Law firms are responding by embedding AI into everyday workflows. At Trilegal, AI supports drafting, document management, analytics, and collaboration, enabling lawyers to prioritise judgment and case strategy.

Secure AI platforms process high-volume legal material in minutes, improving productivity while preserving confidentiality and accuracy.

Beyond private practice, AI adoption is reshaping court operations and public access to justice. Real-time transcription, multilingual translation, and automated document analysis are shortening timelines and improving comprehension.

Incremental efficiency gains are beginning to translate into faster proceedings and broader legal accessibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI fuels surge in online fraud risks in 2026

Online scams, driven by the growing use of generative AI, are expected to surge in 2026 and overtake ransomware as the top cyber-risk, the World Economic Forum has warned.

Executives are increasingly concerned about AI-driven scams that are easier to launch and harder to detect than traditional cybercrime. WEF managing director Jeremy Jurgens said leaders now face the challenge of acting collectively to protect trust and stability in an AI-driven digital environment.

Consumers are also feeling the impact. An Experian report found 68% of people now see identity theft as their main concern, while US Federal Trade Commission data shows consumer fraud losses reached $12.5 billion in 2024, up 25% year on year.

Generative AI is enabling more convincing phishing, voice cloning, and impersonation attempts. The WEF reported that 62% of executives experienced phishing attacks, 37% encountered invoice fraud, and 32% reported identity theft, with vulnerable groups increasingly targeted through synthetic content abuse.

Experts warn that many organisations still lack the skills and resources to defend against evolving threats. Consumer groups advise slowing down, questioning urgent messages, avoiding unsolicited requests for information, and verifying contacts independently to reduce the risk of generative AI-powered scams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!