Generative AI fuels surge in online fraud risks in 2026

Online scams are expected to surge in 2026, overtaking ransomware as the top cyber-risk, the World Economic Forum has warned, with the growing use of generative AI driving the rise.

Executives are increasingly concerned about AI-driven scams that are easier to launch and harder to detect than traditional cybercrime. WEF managing director Jeremy Jurgens said leaders now face the challenge of acting collectively to protect trust and stability in an AI-driven digital environment.

Consumers are also feeling the impact. An Experian report found that 68% of people now rank identity theft as their top concern, while US Federal Trade Commission data shows consumer fraud losses reached $12.5 billion in 2024, up 25% year on year.

Generative AI is enabling more convincing phishing, voice cloning, and impersonation attempts. The WEF reported that 62% of executives experienced phishing attacks, 37% encountered invoice fraud, and 32% reported identity theft, with vulnerable groups increasingly targeted through synthetic content abuse.

Experts warn that many organisations still lack the skills and resources to defend against evolving threats. Consumer groups advise slowing down, questioning urgent messages, ignoring unsolicited requests for personal information, and verifying contacts independently to reduce the risk of generative AI-powered scams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Siri to receive major AI upgrade with powerful enhancements

Apple is reportedly preparing a major overhaul of Siri by replacing the current system with an AI chatbot powered by Google’s Gemini technology. The change could mark the most significant upgrade to the assistant since its original launch.

Internal reports suggest the project aims to make Siri more conversational, capable of handling complex requests and sustained dialogue rather than just simple commands.

Future versions of iOS, iPadOS, and macOS are expected to introduce the new system. Users would still activate Siri with familiar voice commands or device buttons, regardless of the underlying technology.

Improved understanding of personal data could allow the assistant to manage calendars, photos, files, and settings more intuitively. Content creation features such as email drafting and note summarisation are also expected.

Growing competition from AI chatbots like ChatGPT and Gemini has increased pressure on Apple to modernise its digital assistant. Reports suggest a formal reveal could take place at a future developer event, followed by a broader rollout with upcoming iPhone releases.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI expands healthcare access in Africa

Healthcare in Africa is set to benefit from AI through Horizon1000, a new initiative by the Gates Foundation and OpenAI that aims to expand AI-powered support across 1,000 primary care clinics in Rwanda by 2028.

Severe shortages of health workers in Sub-Saharan Africa have limited access to quality care, with the region facing a shortfall of nearly six million professionals. AI tools will assist doctors and nurses by handling administrative tasks and providing clinical guidance.

Rwanda has launched an AI Health Intelligence Centre to make better use of limited resources and improve patient outcomes. The initiative will deploy AI in communities and homes, ensuring support reaches beyond clinic walls.

Experts believe AI represents a major medical breakthrough, comparable to vaccines and antibiotics. By helping health workers focus on patient care, the technology could reduce preventable deaths and transform health systems across low- and middle-income countries.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Advanced Linux malware framework VoidLink likely built with AI

Security researchers at Check Point have uncovered VoidLink, an advanced, modular Linux malware framework developed predominantly with AI assistance, likely by a single individual rather than a well-resourced threat group.

VoidLink’s development process, exposed by the developer’s operational security (OPSEC) failures, indicates that AI models were used not just to write parts of the code but to orchestrate the entire project plan, documentation, and implementation.

According to analysts, the malware framework reached a functional state in under a week with more than 88,000 lines of code, compressing what would traditionally take weeks or months into days.

While no confirmed in-the-wild attacks have yet been reported, researchers caution that the advent of AI-assisted malware represents a significant cybersecurity shift, lowering the barrier to creating sophisticated threats and potentially enabling widespread future misuse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI automation poses a major challenge for transport jobs

The transport sector is expected to be the first industry to face large-scale AI automation, particularly in frontline driving roles. Buses, taxis, trains, coaches and heavy goods vehicles are seen as especially vulnerable as autonomous technologies continue to mature.

Employers are increasingly attracted to AI automation because automated vehicles can operate continuously, free of the driving-time limits imposed on human workers. This makes automation economically appealing, especially in freight and logistics, where efficiency and round-the-clock operation are critical.

The shift could lead to the displacement of hundreds of thousands, or even millions, of transport workers. Concerns are growing over the lack of alternative job opportunities, as investment in reskilling across the UK has remained limited despite ongoing discussions about labour shortages.

Beyond employment, AI automation may have broader economic implications. Large-scale job losses would reduce tax revenues, potentially forcing governments to reconsider taxation policies, including taxing currently untaxed activities to offset lost revenue from employment income.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI becomes mainstream in UK auto buying behaviour, survey shows

A recent survey reported by AM-Online reveals that approximately 66% of UK car buyers use artificial intelligence in some form as part of their vehicle research and buying process.

AI applications cited include chatbots for questions and comparisons, recommendation systems for model selection, and virtual advisors that help consumers weigh options based on preferences and budget.

Industry commentators suggest that this growing adoption reflects broader digital transformation trends in automotive retail, with dealerships and manufacturers increasingly deploying AI technologies to personalise sales experiences, streamline research and nurture leads.

The integration of AI tools is seen as boosting customer engagement and efficiency, but it also raises questions about privacy and data protection, transparency and the future role of human sales advisors as digital tools become more capable.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Enterprise voice AI reaches new benchmark in India’s first live unscripted TV debate

Blue Machines AI set a new benchmark for enterprise voice AI by taking part in a 60-minute, live, unscripted debate on Indian national television. Aired in a single take, the broadcast tested whether voice AI could perform reliably under real-world pressure and national scrutiny.

During the debate, the system demonstrated enterprise-grade reliability and strong governance. It maintained contextual continuity, ultra-low latency, and disciplined responses while managing interruptions and rapid topic shifts, without producing speculative or unsafe outputs.

The discussion spanned complex and sensitive issues, including geopolitics, national security, AI ethics, trade policy, and India’s deep-technology ambitions. Performance across such a broad range of topics highlighted the system’s maturity and its ability to operate in unpredictable conversational environments.

Observers noted that such performance signals readiness for deployment in high-stakes sectors such as banking, insurance, aviation, and large digital platforms. The event also highlighted the strength of India’s deep-tech engineering ecosystem, marking a shift of voice AI from novelty to stable, governed, and scalable application.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Amazon One Medical launches health AI assistant

One Medical has launched a Health AI assistant in its mobile app, offering personalised health guidance at any time. The tool uses verified medical records to support everyday healthcare decisions.

Patients can use the assistant to explain lab results, manage prescriptions, and book virtual or in-person appointments. Clinical safeguards ensure users are referred to human clinicians when medical judgement is required.

Powered by Amazon Bedrock, the assistant operates under HIPAA-compliant privacy standards and does not sell personal health data. Amazon says clinician and member feedback will shape future updates.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Kashi Vishwanath Temple launches AI chatbot

Shri Kashi Vishwanath Temple in India has launched an AI-powered chatbot to help devotees access services from anywhere in the world. The tool provides quick information on rituals, bookings, and temple timings.

Devotees can now book darshan and special aartis, and order prasad online. The chatbot also guides pilgrims on guesthouse availability and directions around Varanasi.

Supporting Hindi, English, and regional languages, the AI ensures smooth communication for global visitors. The initiative aims to simplify temple visits, especially during festivals and crowded periods.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Davos roundtable calls for responsible AI growth

Leaders from the tech industry, academia, and policy circles met at a TIME100 roundtable in Davos, Switzerland, on 21 January to discuss how to pursue rapid AI progress without sacrificing safety and accountability. The conversation, hosted by TIME CEO Jessica Sibley, focused on how AI should be built, governed, and used as it becomes more embedded in everyday life.

A major theme was the impact of AI-enabled technology on children. Jonathan Haidt, an NYU Stern professor and author of The Anxious Generation, argued that the key issue is not total avoidance but the timing and habits of exposure. He suggested children do not need smartphones until at least high school, emphasising that delaying access can help protect brain development and executive function.

Yoshua Bengio, a professor at the Université de Montréal and founder of LawZero, said responsible innovation depends on a deeper scientific understanding of AI risks and stronger safeguards built into systems from the start. He pointed to two routes: consumer and societal demand for ‘built-in’ protections, and government involvement that could include indirect regulation through liability frameworks, such as requiring insurance for AI developers and deployers.

Participants also challenged the idea that geopolitical competition should justify weaker guardrails. Bengio argued that even rivals share incentives to prevent harmful outcomes, such as AI being used for cyberattacks or the development of biological weapons, and said coordination between major powers is possible, drawing a comparison to Cold War-era cooperation on nuclear risk reduction.

The roundtable linked AI risks to lessons from social media, particularly around attention-driven business models. Bill Ready, CEO of Pinterest, said engagement optimisation can amplify divisions and ‘prey’ on negative human impulses, and described Pinterest’s shift away from maximising view time toward maximising user outcomes, even if it hurts short-term metrics.

Several speakers argued that today’s alignment approach is too reactive. Stanford computer scientist Yejin Choi warned that models trained on the full internet absorb harmful patterns and then require patchwork fixes, urging exploration of systems that learn moral reasoning and human values more directly from the outset.

Kay Firth-Butterfield, CEO of Good Tech Advisory, added that wider AI literacy, shaped by input from workers, parents, and other everyday users, should underpin future certification and trust in AI tools.

Diplo is reporting live on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!