AI-driven Christmas scams surge online

Cybersecurity researchers are urging greater caution as Christmas approaches, warning that seasonal scams are multiplying rapidly. Check Point has recorded over 33,500 festive phishing emails and more than 10,000 deceptive social ads within two weeks.

AI tools are helping criminals craft convincing messages that mirror trusted brands and local languages. Attackers are also deploying fake e-commerce sites with AI chatbots, as well as deepfake audio and scripted calls to strengthen vishing attempts.

Smishing alerts imitating delivery firms are becoming more widespread, with recent months showing a marked rise in fraudulent parcel scams. Victims are often tricked into sharing payment details through links that imitate genuine logistics updates.

Experts say fake shops and giveaway scams remain persistent risks, frequently launched from accounts created within the past three months. Users are being advised to ignore unsolicited links, verify retailers and treat unexpected offers with scepticism.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI reporting playbook published by Google

Google has released a new AI playbook aimed at helping organisations streamline and improve sustainability reporting, sharing lessons learned from integrating AI into its own environmental disclosure processes.

In a blog post published on The Keyword, Google states that corporate sustainability reporting is often hindered by fragmented data and labour-intensive workflows. After two years of using AI internally, the company is now open-sourcing its approach to help others reduce reporting burdens.

The AI Playbook for Sustainability Reporting is presented as a practical, implementation-focused toolkit. It includes a structured framework for auditing reporting processes, along with ready-made prompt templates for common sustainability reporting tasks.

Google also highlights real-world examples that demonstrate how tools such as Gemini and NotebookLM can be used to validate sustainability claims, respond to information requests, and support internal review, moving AI use beyond experimentation.

The company says the playbook is intended to support transparency and strategic decision-making, and has invited organisations and practitioners to explore the resource and provide feedback.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Open-source scheduler Slurm moves under NVIDIA ownership

NVIDIA has announced the acquisition of SchedMD, the developer of Slurm, a widely used open-source workload manager for high-performance computing and AI environments.

The company stated that Slurm will continue to be developed and distributed as open-source, vendor-neutral software, with support maintained across a broad range of hardware and software platforms used by the HPC and AI communities.

Slurm plays a central role in managing complex workloads on large computing clusters, handling job scheduling, queuing, and resource allocation. It is used by more than half of the top 10 and top 100 systems on the TOP500 supercomputer list, reflecting its widespread adoption and significant impact.

NVIDIA stated that the software is also critical infrastructure for generative AI, helping developers manage large-scale model training and inference. The company has collaborated with SchedMD for over a decade and plans to increase investment in Slurm’s ongoing development.

SchedMD said the deal will enable Slurm to evolve in tandem with accelerated computing demands while remaining open source. NVIDIA said it will continue to provide support, training, and development to existing customers across various use cases, including research, industry, and public sectors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Banks and fintechs turn to Visa as stablecoin infrastructure matures

Visa has launched a Stablecoins Advisory Practice through its Visa Consulting & Analytics unit, reflecting rising institutional interest in stablecoin-based payment infrastructure. The service aims to help banks, fintech firms, merchants, and enterprises assess strategy, market fit and implementation.

The move comes as the global stablecoin market exceeds $250 billion in value and emerging reports of an annualised stablecoin settlement run rate of $3.5 billion as of late November. According to the company, demand is rising among financial institutions exploring faster and lower-cost payment rails.

Visa Consulting & Analytics will offer services ranging from market education and strategy development to use case sizing and technical integration. The programme draws on Visa’s network of consultants, data scientists and product specialists to support clients navigating regulatory and operational complexity.

Several financial institutions have already participated in early engagements, citing the need for clearer frameworks as stablecoins gain traction in cross-border payments and digital finance. The advisory practice reflects broader efforts to support responsible adoption alongside emerging standards.

Visa has previously piloted stablecoin settlement using USDC and now supports more than 130 stablecoin-linked card programmes across 40 countries. The company is also testing stablecoin-based pre-funding for international payouts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI tool links genetic mutations to diseases with improved accuracy

Scientists at the Icahn School of Medicine at Mount Sinai have created an AI tool called Variant to Phenotype (V2P) that can identify genetic mutations and predict the diseases they may cause, bolstering the field of genetic diagnostics.

The V2P method is designed to accelerate diagnosis and facilitate the discovery of new treatments for complex and rare diseases by comprehensively interpreting genomic data, surpassing the limitations of traditional techniques that often focus solely on mutation detection without predicting phenotypic effects.

This innovation could enhance clinical decision-making by linking specific genetic variants directly to disease risk, helping clinicians prioritise variants for further study and informing patients about likely outcomes sooner.

The findings were published online in Nature Communications, marking a notable advancement in how AI can support precision medicine and research for rare diseases.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Libraries lead UK government push to improve digital inclusion and AI confidence

Libraries Connected, supported by a £310,400 grant from the UK Government’s Digital Inclusion Innovation Fund administered by the Department for Science, Industry and Technology (DSIT), is launching Innovating in Trusted Spaces: Libraries Advancing the Digital Inclusion Action Plan.

The programme will run from November 2025 to March 2026 across 121 library branches in Newcastle, Northumberland, Nottingham City and Nottinghamshire, targeting older people, low-income families and individuals with disabilities to ensure they are not left behind amid rapid digital and AI-driven change.

Public libraries are already a leading provider of free internet access and basic digital skills support, offering tens of thousands of public computers and learning opportunities each year. However, only around 27 percent of UK adults currently feel confident in recognising AI-generated content online, underscoring the need for improved digital and media literacy.

The project will create and test a new digital inclusion guide for library staff, focusing on the benefits and risks of AI tools, misinformation and emerging technologies, as well as building a national network of practice for sharing insights.

Partners in the programme include Good Things Foundation and WSA Community, which will help co-design materials and evaluate the initiative’s impact to inform future digital inclusion efforts across communities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI content flood drives ‘slop’ to word of the year

Merriam-Webster has chosen ‘slop’ as its 2025 word of the year, reflecting the rise of low-quality digital content produced by AI. The term originally meant soft mud, but now describes absurd or fake online material.

Greg Barlow, Merriam-Webster’s president, said the word captures how AI-generated content has fascinated, annoyed and sometimes alarmed people. Tools like AI video generators can produce deepfakes and manipulated clips in seconds.

The spike in searches for ‘slop’ shows growing public awareness of poor-quality content and a desire for authenticity. People want real, genuine material rather than AI-driven junk content.

AI-generated slop includes everything from absurd videos to fake news and junky digital books. Merriam-Webster selects its word of the year by analysing search trends and cultural relevance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Streaming platforms face pressure over AI-generated music

Musicians are raising the alarm over AI-generated tracks appearing on their profiles without consent, presenting fraudulent work as their own. British folk artist Emily Portman discovered an AI-generated album, Orca, on Spotify and Apple Music, which copied her folk style and lyrics.

Fans initially congratulated her on a release she had not made since 2022.

Australian musician Paul Bender reported a similar experience, with four ‘bizarrely bad’ AI tracks appearing under his band, The Sweet Enoughs. Both artists said that weak distributor security allows scammers to easily upload content, calling it ‘the easiest scam in the world.’

A petition launched by Bender garnered tens of thousands of signatures, urging platforms to strengthen their protections.

AI-generated music has become increasingly sophisticated, making it nearly impossible for listeners to distinguish from genuine tracks. While revenues from such fraudulent streams are low individually, bots and repeated listening can significantly increase payouts.

Industry representatives note that the primary motive is to collect royalties from unsuspecting users.

Despite the threat of impersonation, Portman is continuing her creative work, emphasising human collaboration and authentic artistry. Spotify and Apple Music have pledged to collaborate with distributors to enhance the detection and prevention of AI-generated fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI generated podcasts flood platforms and disrupt the audio industry

Podcasts generated by AI are rapidly reshaping the audio industry, with automated shows flooding platforms such as Spotify, Apple Podcasts and YouTube.

Advances in voice cloning and speech synthesis have enabled the production to large volumes of content at minimal cost, allowing AI hosts to compete directly with human creators in an already crowded market.

Some established podcasters are experimenting cautiously, using cloned voices for translation, post-production edits or emergency replacements. Others have embraced full automation, launching synthetic personalities designed to deliver commentary, biographies and niche updates at speed.

Studios, such as Los Angeles-based Inception Point AI, have scaled the model to scale, producing hundreds of thousands of episodes by targeting micro-audiences and trending searches instead of premium advertising slots.

The rapid expansion is fuelling concern across the industry, where trust and human connection remain central to listener loyalty.

Researchers and networks warn that large-scale automation risks devaluing premium content, while creators and audiences question how far AI voices can replace authenticity without undermining the medium itself.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study warns that LLMs are vulnerable to minimal tampering

Researchers from Anthropic, the UK AI Security Institute and the Alan Turing Institute have shown that only a few hundred crafted samples can poison LLM models. The tests revealed that around 250 malicious entries could embed a backdoor that triggers gibberish responses when a specific phrase appears.

Models ranging from 600 million to 13 billion parameters (such as Pythia) were affected, highlighting the scale-independent nature of the weakness. A planted phrase such as ‘sudo’ caused output collapse, raising concerns about targeted disruption and the ease of manipulating widely trained systems.

Security specialists note that denial-of-service effects are worrying, yet deceptive outputs pose far greater risk. Prior studies already demonstrated that medical and safety-critical models can be destabilised by tiny quantities of misleading data, heightening the urgency for robust dataset controls.

Researchers warn that open ecosystems and scraped corpora make silent data poisoning increasingly feasible. Developers are urged to adopt stronger provenance checks and continuous auditing, as reliance on LLMs continues to expand for AI purposes across technical and everyday applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot