From chips to jobs: Huang’s vision for AI at Davos 2026

AI is evolving into a foundational economic system rather than a standalone technology, according to NVIDIA chief executive Jensen Huang, who described AI as a five-layer infrastructure spanning energy, hardware, data centres, models and applications.

Speaking at the World Economic Forum in Davos, Huang argued that building and operating each layer is triggering what he called the most significant infrastructure expansion in human history, with job creation stretching from power generation and construction to cloud operations and software development.

Investment patterns suggest a structural shift rather than a speculative cycle. Venture capital funding in 2025 reached record levels, largely flowing into AI-native firms across healthcare, manufacturing, robotics and financial services.

Huang stressed that the application layer will deliver the most significant economic return as AI moves from experimentation to core operational use across industries.

Huang framed concerns about job displacement as misplaced, arguing that AI automates tasks rather than replacing professional judgement, freeing workers to focus on higher-value activities.

In healthcare, he argued, productivity gains from AI-assisted diagnostics and documentation are already increasing demand for radiologists and nurses rather than reducing headcount, as improved efficiency enables institutions to treat more patients.

Huang positioned AI as critical national infrastructure, urging governments to develop domestic capabilities aligned with local language, culture and industrial strengths.

He described AI literacy as an essential skill, comparable to leadership or management, while arguing that accessible AI tools could narrow global technology divides rather than widen them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea sets the global standard for frontier AI regulation

South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to impose formal safety requirements on high-performance, or frontier, AI systems and reshaping the global regulatory landscape.

The law establishes a national AI governance framework, led by the Presidential Council on National Artificial Intelligence Strategy, and creates an AI Safety Institute to oversee safety and trust assessments.

Alongside regulatory measures, the government is rolling out broad support for research, data infrastructure, talent development, startups, and overseas expansion, signalling a growth-oriented policy stance.

To minimise early disruption, authorities will introduce a minimum one-year grace period centred on guidance, consultation, and education rather than enforcement.

Obligations cover three areas: high-impact AI in critical sectors, safety rules for frontier models, and transparency requirements for generative AI, including disclosure of realistic synthetic content.

Enforcement remains light-touch, prioritising corrective orders over penalties, with fines capped at 30 million won for persistent noncompliance. Officials said the framework aims to build public trust while supporting innovation, serving as a foundation for ongoing policy development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube’s 2026 strategy places AI at the heart of moderation and monetisation

As announced yesterday, YouTube is expanding its response to synthetic media by introducing experimental likeness detection tools that allow creators to identify videos in which their face has been altered or generated by AI.

The system, modelled conceptually on Content ID, scans newly uploaded videos for visual matches linked to enrolled creators, enabling them to review content and pursue privacy or copyright complaints when misuse is detected.

Participation requires identity verification through government-issued identification and a biometric reference video, positioning facial data as both a protective and governance mechanism.
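YouTube has not published the technical details of the matching system. Purely as an illustration, the sketch below shows how embedding-based likeness matching generally works: faces detected in a new upload are compared against an enrolled creator's reference embedding, and matches above a similarity threshold are flagged for review. Every name, the threshold value, and the data flow here are hypothetical assumptions, not YouTube's implementation.

```python
# Illustrative sketch of embedding-based likeness matching.
# YouTube has not published its pipeline; the threshold, data model,
# and function names below are hypothetical.
from dataclasses import dataclass

import numpy as np


@dataclass
class EnrolledCreator:
    creator_id: str
    reference_embedding: np.ndarray  # face embedding from the verified reference video


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def flag_likeness_matches(
    frame_embeddings: list[np.ndarray],  # embeddings of faces detected in a new upload
    enrolled: list[EnrolledCreator],
    threshold: float = 0.85,             # assumed cut-off; a real system would tune this
) -> set[str]:
    """Return IDs of enrolled creators whose likeness appears in the upload."""
    matches = set()
    for face in frame_embeddings:
        for creator in enrolled:
            if cosine_similarity(face, creator.reference_embedding) >= threshold:
                matches.add(creator.creator_id)
    return matches
```

In a scheme like this, flagged matches would only queue the video for the creator's review, mirroring the consent-centred, human-in-the-loop design the platform describes.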

While the platform stresses consent and limited scope, the approach reflects a broader shift towards biometric enforcement as platforms attempt to manage deepfakes, impersonation, and unauthorised synthetic content at scale.

Alongside likeness detection, YouTube’s 2026 strategy places AI at the centre of content moderation, creator monetisation, and audience experience.

AI tools already shape recommendation systems, content labelling, and automated enforcement, while new features aim to give creators greater control over how their image, voice, and output are reused in synthetic formats.

The move highlights growing tensions between creative empowerment and platform authority, as safeguards against AI misuse increasingly rely on surveillance, verification, and centralised decision-making.

As regulators debate digital identity, biometric data, and synthetic media governance, YouTube’s model signals how private platforms may effectively set standards ahead of formal legislation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Snapchat settles social media addiction lawsuit as landmark trial proceeds

Snapchat’s parent company has settled a social media addiction lawsuit in California just days before the first major trial examining platform harms was set to begin.

The agreement removes Snapchat from one of the three bellwether cases consolidating thousands of claims, while Meta, TikTok and YouTube remain defendants.

These lawsuits mark a legal shift away from debates over user content and towards scrutiny of platform design choices, including recommendation systems and engagement mechanics.

A US judge has already ruled that such features may be responsible for harm, opening the door to liability that Section 230 protections may not cover.

Legal observers compare the proceedings to historic litigation against tobacco and opioid companies, warning of substantial damages and regulatory consequences.

A ruling against the remaining platforms could force changes in how social media products are designed, particularly in relation to minors and mental health risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why AI systems privilege Western perspectives: ‘The Silicon Gaze’

A new study from the University of Oxford argues that large language models reproduce a distinctly Western hierarchy when asked to evaluate countries, reinforcing long-standing global inequalities through automated judgment.

Analysing more than 20 million English-language responses from ChatGPT’s GPT-4o-mini model, researchers found consistent favouring of wealthy Western nations across subjective comparisons such as intelligence, happiness, creativity, and innovation.
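The study's exact prompting and aggregation pipeline is more elaborate than can be shown here, but the sketch below illustrates the general approach of eliciting pairwise subjective comparisons from a model and ranking countries by win rate. The prompt wording, model identifier, and scoring scheme are illustrative assumptions, not the researchers' method.

```python
# Hypothetical sketch of collecting pairwise country comparisons from a model
# and aggregating them into a ranking. This is NOT the Oxford study's pipeline;
# the prompt, model name, and scoring are illustrative assumptions.
from collections import Counter
from itertools import combinations

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_comparison(country_a: str, country_b: str, trait: str) -> str:
    """Ask the model to pick one country for a subjective trait."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Answer with one word, {country_a} or {country_b}: "
                       f"which country is more {trait}?",
        }],
    )
    return response.choices[0].message.content.strip()


def rank_by_win_rate(countries: list[str], trait: str, rounds: int = 3) -> list[str]:
    """Rank countries by how often the model prefers them in pairwise comparisons."""
    wins: Counter[str] = Counter()
    for a, b in combinations(countries, 2):
        for _ in range(rounds):
            answer = ask_comparison(a, b, trait)
            if a in answer:
                wins[a] += 1
            elif b in answer:
                wins[b] += 1
    return sorted(countries, key=lambda c: wins[c], reverse=True)
```

Repeated over millions of responses, a procedure of this kind makes systematic preferences visible that no single answer would reveal.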

Low-income countries, particularly across Africa, were systematically placed at the bottom of rankings, while Western Europe, the US, and parts of East Asia dominated positive assessments.

According to the study, generative models rely heavily on data availability and dominant narratives, leading to flattened representations that recycle familiar stereotypes instead of reflecting social complexity or cultural diversity.

The researchers describe the phenomenon as the ‘silicon gaze’, a worldview shaped by the priorities of platform owners, developers, and historically uneven training data.

Because large language models are trained on material produced across centuries of structural exclusion, bias emerges not as a malfunction but as an embedded feature of contemporary AI systems.

The findings intensify global debates around AI governance, accountability, and cultural representation, particularly as such systems increasingly influence healthcare, employment screening, education, and public decision-making.

While models are continuously updated, the study underlines the limits of technical mitigation without broader political, regulatory, and epistemic interventions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Horizon1000 aims to bring powerful AI healthcare tools to Africa

The Gates Foundation and OpenAI have launched a joint healthcare initiative, Horizon1000, to expand the use of AI across primary care systems in Sub-Saharan Africa. The partnership includes a $50 million commitment covering funding, technology, and technical support to equip 1,000 clinics with AI tools by 2028.

Horizon1000’s operations will begin in Rwanda, where local authorities will work with the two organisations to deploy AI systems in frontline healthcare settings. The initiative reflects the Foundation’s long-standing aim to ensure that new technologies reach lower-income regions without long delays.

Bill Gates said the project responds to a critical shortage of healthcare workers, which threatens to undermine decades of progress in global health. Sub-Saharan Africa currently faces a shortfall of nearly six million medical professionals, limiting the capacity of overstretched clinics to deliver consistent care.

Low-quality healthcare contributes to between six and eight million deaths annually in low- and middle-income countries, according to the World Health Organization. Rwanda, the first pilot country, has only one healthcare worker per 1,000 people, far below the WHO’s recommended level.

AI tools under Horizon1000 are intended to support, rather than replace, health workers by assisting with clinical guidance, administration, and patient interactions. The Gates Foundation said it will continue working with regional governments and innovators to scale the programme.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tech-dense farms emerge as a new model for future agriculture

A BBC report examines the rise of so-called ‘tech-dense’ farms, where digital tools such as AI-powered sensors, satellite imagery, and farm management software are increasingly central to agricultural operations.

While the total number of farms is declining, those that remain are investing heavily in technology to stay competitive, improve precision, and reduce input costs such as pesticides and water.

Farmers interviewed describe using smart spraying systems, data analytics, and predictive software to optimise planting, monitor crop health, and respond to weather or pest risks in real time.

Agronomists suggest that these innovations could stabilise food supplies and potentially lower consumer prices, though adoption varies with farmers’ age, costs, and willingness to change. The trend highlights a broader transition toward treating farming as a data-driven business.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Davos hears Fink warn AI could deepen inequality

BlackRock CEO Larry Fink used his Davos speech to put AI at the centre of a broader warning: in the AI era, trust may become the world’s ‘hardest currency’.

Speaking at the World Economic Forum, he argued that new technologies will only strengthen societies if people believe the benefits are real, fairly shared, and not decided solely by a small circle of insiders.

Fink said AI is already showing a familiar pattern. The earliest gains are flowing mainly to those who control the models, data, and infrastructure. He cautioned that without deliberate choices, AI could deepen inequality in advanced economies, noting that decades of wealth creation after the fall of the Berlin Wall still ended up concentrating prosperity among a narrower share of people than a ‘healthy society’ can sustain.

He also raised a specific fear for the workforce, asking whether AI will do to white-collar jobs what globalisation did to blue-collar work: automate, outsource, and reshape employment faster than institutions can protect workers and communities. That risk, he said, is why leaders need to move beyond slogans and produce a credible plan for broad participation in the gains AI can deliver.

The stakes, Fink argued, go beyond economic statistics. Prosperity should not be judged only by GDP or soaring market values, he said, but by whether people can ‘see it, touch it, and build a future on it’, a test that becomes more urgent as AI changes how value is created and who captures it.

Fink tied the AI debate to the legitimacy crisis facing Davos itself, acknowledging that elite institutions are widely distrusted and that many people most affected by these decisions will never enter the conference. If the WEF wants to shape the next phase of the AI transition, he said, it must rebuild trust by listening outside the usual circles and engaging with communities where the modern economy is actually built.

He also urged a different style of conversation about AI, less staged agreement and more serious disagreement, aimed at understanding. In that spirit, he called for the forum to take its discussions beyond Davos, to places such as Detroit, Dublin, Jakarta and Buenos Aires, arguing that only real dialogue, grounded in lived economic realities, can give AI governance and AI-driven growth the legitimacy to last.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Davos 2026 reveals competing visions for AI

AI has dominated debates at Davos 2026, rivalling traditional concerns such as geopolitics and global trade while prompting deeper reflection on how the technology is reshaping work, governance, and society.

Political leaders, executives, and researchers agreed that AI development has moved beyond experimentation towards widespread implementation.

Microsoft chief executive Satya Nadella argued that AI should deliver tangible benefits for communities and economies, while warning that adoption will remain uneven due to disparities in infrastructure and investment.

Access to energy networks, telecommunications, and capital was identified as a decisive factor in determining which regions can fully deploy advanced systems.

Other voices at Davos 2026 struck a more cautious tone. AI researcher Yoshua Bengio warned against designing systems that appear too human-like, stressing that people may overestimate machine understanding.

Philosopher Yuval Noah Harari echoed those concerns, arguing that societies lack experience in managing human and AI coexistence and should prepare mechanisms to correct failures.

The debate also centred on labour and global competition.

Anthropic’s Dario Amodei highlighted geopolitical risks and predicted disruption to entry-level white-collar jobs, while Google DeepMind chief Demis Hassabis forecast new forms of employment and called for shared international safety standards.

Together, the discussions underscored growing recognition that AI governance will shape economic and social outcomes for years ahead.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI models embedded into ServiceNow for enterprise automation

ServiceNow has announced a multi-year agreement positioning OpenAI as a preferred intelligence capability across its enterprise platform, extending access to frontier AI models for organisations running tens of billions of workflows each year.

The partnership reflects a broader shift towards operational AI embedded directly within business systems, rather than confined to experimental deployments.

By integrating OpenAI models such as GPT-5.2 into the ServiceNow AI Platform, enterprises can embed reasoning and automation into secure workflows spanning IT, finance, human resources, and customer operations.

AI tools are designed to analyse context, recommend actions, and execute tasks within existing governance frameworks instead of functioning as standalone assistants.
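ServiceNow’s internal workflow APIs are not described in the announcement, so the sketch below uses a hypothetical permission gate around a standard OpenAI chat-completion call to illustrate the pattern: the model recommends an action, but only pre-approved actions are executed. The action list, fallback behaviour, and model availability are assumptions.

```python
# Minimal sketch of embedding a model call inside a governed workflow step.
# The permission gate and action names are hypothetical stand-ins;
# only the OpenAI client call is a real API.
from openai import OpenAI  # pip install openai

client = OpenAI()

ALLOWED_ACTIONS = {"escalate_ticket", "request_approval", "close_ticket"}  # assumed policy


def recommend_action(ticket_summary: str) -> str:
    """Ask the model to choose one permitted action for an IT ticket."""
    response = client.chat.completions.create(
        model="gpt-5.2",  # model named in the announcement; availability assumed
        messages=[
            {"role": "system",
             "content": f"Pick exactly one action from: {sorted(ALLOWED_ACTIONS)}."},
            {"role": "user", "content": ticket_summary},
        ],
    )
    return response.choices[0].message.content.strip()


def execute_within_governance(ticket_summary: str) -> str:
    """Execute the recommendation only if it passes the permission check."""
    action = recommend_action(ticket_summary)
    if action not in ALLOWED_ACTIONS:   # governance gate: reject free-form output
        return "request_approval"       # safe fallback for unrecognised actions
    return action
```

Constraining the model's output to a fixed action set is one simple way to keep agentic behaviour inside the "permissioned infrastructures" both companies describe.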

Executives from both companies emphasised that the collaboration aims to deliver measurable outcomes at scale.

ServiceNow highlighted its role in coordinating complex enterprise environments, while OpenAI stressed the importance of deploying agentic AI capable of handling work end to end within permissioned infrastructures.

Looking ahead, the partnership plans to expand towards multimodal and voice-based interactions, enabling employees to communicate with AI systems through speech, text, and visual inputs.

The initiative strengthens OpenAI’s enterprise footprint while reinforcing ServiceNow’s ambition to act as a central control layer for AI-driven business operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!