Generative AI fuels surge in online fraud risks in 2026

Online scams, fuelled by the growing use of generative AI, are expected to surge in 2026 and overtake ransomware as the top cyber-risk, the World Economic Forum has warned.

Executives are increasingly concerned about AI-driven scams that are easier to launch and harder to detect than traditional cybercrime. WEF managing director Jeremy Jurgens said leaders now face the challenge of acting collectively to protect trust and stability in an AI-driven digital environment.

Consumers are also feeling the impact. An Experian report found 68% of people now see identity theft as their main concern, while US Federal Trade Commission data shows consumer fraud losses reached $12.5 billion in 2024, up 25% year on year.
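
As a quick sanity check on those figures, a 25% year-on-year rise to $12.5 billion implies losses of roughly $10 billion the year before. The article does not state the 2023 total, so this back-calculation is only an illustration:

```python
# Back-calculate the implied prior-year total from the figures above.
# Assumption (not stated in the article): the 25% rise is measured
# against 2023's full-year total.
losses_2024 = 12.5          # billions of USD, per the FTC data cited
yoy_growth = 0.25           # 25% year-on-year increase
losses_2023 = losses_2024 / (1 + yoy_growth)
print(f"Implied 2023 losses: ${losses_2023:.1f}B")  # → $10.0B
```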

Generative AI is enabling more convincing phishing, voice cloning, and impersonation attempts. The WEF reported that 62% of executives experienced phishing attacks, 37% encountered invoice fraud, and 32% reported identity theft, with vulnerable groups increasingly targeted through synthetic content abuse.

Experts warn that many organisations still lack the skills and resources to defend against evolving threats. Consumer groups advise slowing down, questioning urgent messages, avoiding unsolicited requests for information, and verifying contacts independently to reduce the risk of generative AI-powered scams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Siri to receive major AI upgrade

Apple is reportedly preparing a major overhaul of Siri by replacing the current system with an AI chatbot powered by Google’s Gemini technology. The change could mark the most significant upgrade to the assistant since its original launch.

Internal reports suggest the project aims to make Siri more conversational, capable of handling complex requests and sustained dialogue, rather than simple commands.

Future versions of iOS, iPadOS, and macOS are expected to introduce the new system. Users would still activate Siri with familiar voice commands or device buttons, regardless of the underlying technology.

Improved understanding of personal data could allow the assistant to manage calendars, photos, files, and settings more intuitively. Content creation features such as email drafting and note summarisation are also expected.

Growing competition from AI chatbots like ChatGPT and Gemini has increased pressure on Apple to modernise its digital assistant. Reports suggest a formal reveal could take place at a future developer event, followed by a broader rollout with upcoming iPhone releases.

AI expands healthcare access in Africa

Healthcare in Africa is set to benefit from AI through a new initiative by the Gates Foundation and OpenAI. Horizon1000 aims to expand AI-powered support across 1,000 primary care clinics in Rwanda by 2028.

Severe shortages of health workers in Sub-Saharan Africa have limited access to quality care, with the region facing a shortfall of nearly six million professionals. AI tools will assist doctors and nurses by handling administrative tasks and providing clinical guidance.

Rwanda has launched an AI Health Intelligence Centre to make better use of limited resources and improve patient outcomes. The initiative will deploy AI in communities and homes, ensuring support reaches beyond clinic walls.

Experts believe AI represents a major medical breakthrough, comparable to vaccines and antibiotics. By helping health workers focus on patient care, the technology could reduce preventable deaths and transform health systems across low- and middle-income countries.

Kashi Vishwanath Temple launches AI chatbot

Shri Kashi Vishwanath Temple in India has launched an AI-powered chatbot to help devotees access services from anywhere in the world. The tool provides quick information on rituals, bookings, and temple timings.

Devotees can now book darshan, special aartis, and order prasad online. The chatbot also guides pilgrims on guesthouse availability and directions around Varanasi.

Supporting Hindi, English, and regional languages, the AI ensures smooth communication for global visitors. The initiative aims to simplify temple visits, especially during festivals and crowded periods.

Microsoft restores Exchange and Teams after Microsoft 365 disruption

US tech giant Microsoft investigated a service disruption affecting Exchange Online, Teams and other Microsoft 365 services after users reported access and performance problems.

An incident that began late on Wednesday affected core communication tools used by enterprises for daily operations.

Engineers initially focused on diagnosing the fault, with Microsoft indicating that a potential third-party networking issue may have interfered with access to Outlook and Teams.

During the disruption, users experienced intermittent connectivity failures, latency and difficulties signing in across parts of the Microsoft 365 ecosystem.

Microsoft later confirmed that service access had been restored, although no detailed breakdown of the outage scope was provided.

The incident underlined the operational risks associated with cloud productivity platforms and the importance of transparency and resilience in enterprise digital infrastructure.

From chips to jobs: Huang’s vision for AI at Davos 2026

AI is evolving into a foundational economic system rather than a standalone technology, according to NVIDIA chief executive Jensen Huang, who described AI as a five-layer infrastructure spanning energy, hardware, data centres, models and applications.

Speaking at the World Economic Forum in Davos, Huang argued that building and operating each layer is triggering what he called the most significant infrastructure expansion in human history, with job creation stretching from power generation and construction to cloud operations and software development.

Investment patterns suggest a structural shift instead of a speculative cycle. Venture capital funding in 2025 reached record levels, largely flowing into AI-native firms across healthcare, manufacturing, robotics and financial services.

Huang stressed that the application layer will deliver the most significant economic return as AI moves from experimentation to core operational use across industries.

Huang framed concerns about job displacement as misplaced, arguing that AI automates tasks rather than replacing professional judgement, enabling workers to focus on higher-value activities.

In healthcare, productivity gains from AI-assisted diagnostics and documentation are already increasing demand for radiologists and nurses rather than reducing headcount, as improved efficiency enables institutions to treat more patients.

Huang positioned AI as critical national infrastructure, urging governments to develop domestic capabilities aligned with local language, culture and industrial strengths.

He described AI literacy as an essential skill, comparable to leadership or management, while arguing that accessible AI tools could narrow global technology divides rather than widen them.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

GPT-5.2 shows how AI can generate real-world cyber exploits

Advanced language models have demonstrated the ability to generate working exploits for previously unknown software vulnerabilities. Security researcher Sean Heelan tested two systems built on GPT-5.2 and Opus 4.5 by challenging them to exploit a zero-day flaw in the QuickJS JavaScript interpreter.

Across multiple scenarios with varying security protections, GPT-5.2 completed every task, while Opus 4.5 failed only two. The systems produced more than 40 functional exploits, ranging from basic shell access to complex file-writing operations that bypassed modern defences.

Most challenges were solved in under an hour, with standard attempts costing around $30. Even the most complex exploit, which bypassed protections such as address space layout randomisation, non-executable memory, and seccomp sandboxing, was completed in just over three hours for roughly $50.

The most advanced task required GPT-5.2 to write a specific string to a protected file path without access to operating system functions. The model achieved this by chaining seven function calls through the glibc exit handler mechanism, bypassing shadow stack protections.
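
The mechanism abused here is the process's exit-handler list: functions registered during execution are invoked, in reverse order of registration, when the process terminates, and glibc's implementation of that list can be made to call attacker-chosen functions. A benign sketch of the underlying idea, using Python's analogous atexit module (this illustrates only the handler-chaining concept, not the exploit itself):

```python
# Benign illustration of the exit-handler mechanism described above.
# Like glibc, Python keeps a list of functions registered via atexit
# and runs them in reverse order at interpreter shutdown. The exploit
# abused glibc's equivalent list to chain calls to chosen functions;
# here we only register ordinary functions to show the chaining.
import atexit

def step(label):
    print(f"handler {label}")

atexit.register(step, "A")
atexit.register(step, "B")
atexit.register(step, "C")

print("main body done")  # handlers then run in order C, B, A at exit
```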

The findings suggest exploit development may increasingly depend on computational resources rather than human expertise. While QuickJS is less complex than browsers such as Chrome or Firefox, the approach demonstrated could scale to larger and more secure software environments.

YouTube’s 2026 strategy places AI at the heart of moderation and monetisation

As announced yesterday, YouTube is expanding its response to synthetic media by introducing experimental likeness detection tools that allow creators to identify videos where their face appears altered or generated by AI.

The system, modelled conceptually on Content ID, scans newly uploaded videos for visual matches linked to enrolled creators, enabling them to review content and pursue privacy or copyright complaints when misuse is detected.

Participation requires identity verification through government-issued identification and a biometric reference video, positioning facial data as both a protective and governance mechanism.

While the platform stresses consent and limited scope, the approach reflects a broader shift towards biometric enforcement as platforms attempt to manage deepfakes, impersonation, and unauthorised synthetic content at scale.

Alongside likeness detection, YouTube’s 2026 strategy places AI at the centre of content moderation, creator monetisation, and audience experience.

AI tools already shape recommendation systems, content labelling, and automated enforcement, while new features aim to give creators greater control over how their image, voice, and output are reused in synthetic formats.

The move highlights growing tensions between creative empowerment and platform authority, as safeguards against AI misuse increasingly rely on surveillance, verification, and centralised decision-making.

As regulators debate digital identity, biometric data, and synthetic media governance, YouTube’s model signals how private platforms may effectively set standards ahead of formal legislation.

Snapchat settles social media addiction lawsuit as landmark trial proceeds

Snapchat’s parent company has settled a social media addiction lawsuit in California just days before the first major trial examining platform harms was set to begin.

The agreement removes Snapchat from one of the three bellwether cases consolidating thousands of claims, while Meta, TikTok and YouTube remain defendants.

These lawsuits mark a legal shift away from debates over user content and towards scrutiny of platform design choices, including recommendation systems and engagement mechanics.

A US judge has already ruled that such design features may be responsible for harm, opening the door to liability that Section 230 protections may not cover.

Legal observers compare the proceedings to historic litigation against tobacco and opioid companies, warning of substantial damages and regulatory consequences.

A ruling against the remaining platforms could force changes in how social media products are designed, particularly in relation to minors and mental health risks.

Why AI systems privilege Western perspectives: ‘The Silicon Gaze’

A new study from the University of Oxford argues that large language models reproduce a distinctly Western hierarchy when asked to evaluate countries, reinforcing long-standing global inequalities through automated judgment.

Analysing more than 20 million English-language responses from ChatGPT’s 4o-mini model, researchers found consistent favouring of wealthy Western nations across subjective comparisons such as intelligence, happiness, creativity, and innovation.
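
One way such a finding can be aggregated is to tally, over many pairwise "which country is more X?" prompts, how often the model picks each country. The sketch below is purely illustrative; the country names and toy data are assumptions, not results from the Oxford study:

```python
# Hypothetical sketch of tallying pairwise-comparison results from a
# model. All data here is invented for illustration; it is not drawn
# from the study's 20 million responses.
from collections import Counter

responses = [  # (country_a, country_b, model_pick) — toy data
    ("Norway", "Chad", "Norway"),
    ("Norway", "Chad", "Norway"),
    ("Chad", "Norway", "Norway"),
    ("Japan", "Mali", "Japan"),
]

wins = Counter(pick for _, _, pick in responses)
totals = Counter()
for a, b, _ in responses:
    totals[a] += 1
    totals[b] += 1

# Share of comparisons each country "won" when it appeared.
win_rate = {c: wins[c] / totals[c] for c in totals}
print(win_rate)
```

A consistently skewed win rate across many subjective dimensions is the kind of pattern the study describes as systematic favouring.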

Low-income countries, particularly across Africa, were systematically placed at the bottom of rankings, while Western Europe, the US, and parts of East Asia dominated positive assessments.

According to the study, generative models rely heavily on data availability and dominant narratives, leading to flattened representations that recycle familiar stereotypes instead of reflecting social complexity or cultural diversity.

The researchers describe the phenomenon as the ‘silicon gaze’, a worldview shaped by the priorities of platform owners, developers, and historically uneven training data.

Because large language models are trained on material shaped by centuries of structural exclusion, bias emerges not as a malfunction but as an embedded feature of contemporary AI systems.

The findings intensify global debates around AI governance, accountability, and cultural representation, particularly as such systems increasingly influence healthcare, employment screening, education, and public decision-making.

While models are continuously updated, the study underlines the limits of technical mitigation without broader political, regulatory, and epistemic interventions.
