Study warns that LLMs are vulnerable to minimal tampering

Researchers from Anthropic, the UK AI Security Institute and the Alan Turing Institute have shown that only a few hundred crafted samples can poison LLM models. The tests revealed that around 250 malicious entries could embed a backdoor that triggers gibberish responses when a specific phrase appears.

Models ranging from 600 million to 13 billion parameters (such as Pythia) were affected, highlighting the scale-independent nature of the weakness. A planted phrase such as ‘sudo’ caused output collapse, raising concerns about targeted disruption and the ease of manipulating widely trained systems.

Security specialists note that denial-of-service effects are worrying, yet deceptive outputs pose far greater risk. Prior studies already demonstrated that medical and safety-critical models can be destabilised by tiny quantities of misleading data, heightening the urgency for robust dataset controls.

Researchers warn that open ecosystems and scraped corpora make silent data poisoning increasingly feasible. Developers are urged to adopt stronger provenance checks and continuous auditing, as reliance on LLMs continues to expand for AI purposes across technical and everyday applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google boosts Translate with Gemini upgrades

Google is rolling out a major Translate upgrade powered by Gemini to improve text and speech translation. The update enhances contextual understanding so idioms, tone and intent are interpreted more naturally.

A beta feature for live headphone translation enables real-time speech-to-speech output. Gemini processes audio directly, preserving cadence and emphasis to improve conversations and lectures. Android users in the US, Mexico and India gain early access, with wider availability planned for 2026.

Translate is also gaining expanded language-learning tools for speaking practice and progress tracking. Additional language pairs, including English to German and Portuguese, broaden support for learners worldwide.

Google aims to reduce friction in global communication by focusing on meaning rather than literal phrasing. Engineers expect user feedback to shape the AI live translation beta across platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Building trustworthy AI for humanitarian response

A new vision for Humanitarian AI is emerging around a simple idea, and that is that technology should grow from local knowledge if it is to work everywhere. Drawing on the IFRC’s slogan ‘Local, everywhere,’ this approach argues that AI should not be driven by hype or raw computing power, but by the lived experience of communities and humanitarian workers on the ground. With millions of volunteers and staff worldwide, the Red Cross and Red Crescent Movement holds a vast reservoir of practical knowledge that AI can help preserve, organise, and share for more effective crisis response.

In a recent blog post, Jovan Kurbalija explains that this bottom-up approach is not only practical but also ethically sound. AI systems grounded in local humanitarian knowledge can better reflect cultural and social contexts, reduce bias and misinformation, and strengthen trust by being governed by humanitarian organisations rather than opaque commercial platforms. Trust, he argues, lies in people and institutions behind the technology, not in algorithms themselves.

Kurbalija also notes that developing such AI is technically and financially realistic. Open-source models, mobile and edge computing, and domain-specific AI tools enable the deployment to functional systems even in low-resource environments. Most humanitarian tasks, from decision support to translation or volunteer guidance, do not require massive infrastructure, but high-quality, well-structured knowledge rooted in real-world experience.

If developed carefully, Humanitarian AI could also support the IFRC’s broader renewal goals, from strengthening local accountability and collaboration to safeguarding independence and humanitarian principles. Starting with small pilot projects and scaling up gradually, the Movement could transform AI into a shared public good that not only enhances responses to today’s crises but also preserves critical knowledge for future generations.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

CES 2026 to feature LG’s new AI-driven in-car platform

LG Electronics will unveil a new AI Cabin Platform at CES 2026 in Las Vegas, positioning the system as a next step beyond today’s software-defined vehicles and toward what the company calls AI-defined mobility.

The platform is designed to run on automotive high-performance computing systems and is powered by Qualcomm Technologies’ Snapdragon Cockpit Elite. LG says it applies generative AI models directly to in-vehicle infotainment, enabling more context-aware and personalised driving experiences.

Unlike cloud-dependent systems, all AI processing occurs on-device within the vehicle. LG says this approach enables real-time responses while improving reliability, privacy, and data security by avoiding communication with external servers.

Using data from internal and external cameras, the system can assess driving conditions and driver awareness to provide proactive alerts. LG also demonstrated adaptive infotainment features, including AI-generated visuals and music suggestions that respond to weather, time, and driving context.

LG will showcase the AI Cabin Platform at a private CES event, alongside a preview of its AI-defined vehicle concept. The company says the platform builds on its expanding partnership with Qualcomm Technologies and on its earlier work integrating infotainment and driver-assistance systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Universities back generative AI but guidance remains uneven

A majority of leading US research universities are encouraging the use of generative AI in teaching, according to a new study analysing institutional policies and guidance documents across higher education.

The research reviewed publicly available policies from 116 R1 universities and found that 63 percent explicitly support the use of generative AI, while 41 percent provide detailed classroom guidance. More than half of the institutions also address ethical considerations linked to AI adoption.

Most guidance focuses on writing-related activities, with far fewer references to coding or STEM applications. The study notes that while many universities promote experimentation, expectations placed on faculty can be demanding, often implying significant changes to teaching practices.

US researchers also found wide variation in how universities approach oversight. Some provide sample syllabus language and assignment design advice, while others discourage the use of AI-detection tools, citing concerns around reliability and academic trust.

The authors caution that policy statements may not reflect real classroom behaviour and say further research is needed to understand how generative AI is actually being used by educators and students in practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Conduit revolutionises neuro-language research with 10,000-hour dataset

A San Francisco start-up, named Conduit, has spent six months building what it claims is the largest neural language dataset ever assembled, capturing around 10,000 hours of non-invasive brain recordings from thousands of participants.

The project aims to train thought-to-text AI systems that interpret semantic intent from brain activity moments before speech or typing occurs.

Participants take part in extended conversational sessions instead of rigid laboratory tasks, interacting freely with large language models through speech or simplified keyboards.

Engineers found that natural dialogue produced higher quality data, allowing tighter alignment between neural signals, audio and text while increasing overall language output per session.

Conduit developed its own sensing hardware after finding no commercial system capable of supporting large-scale multimodal recording.

Custom headsets combine multiple neural sensing techniques within dense training rigs, while future inference devices will be simplified once model behaviour becomes clearer.

Power systems and data pipelines were repeatedly redesigned to balance signal clarity with scalability, leading to improved generalisation across users and environments.

As data volume increased, operational costs fell through automation and real time quality control, allowing continuous collection across long daily schedules.

With data gathering largely complete, the focus has shifted toward model training, raising new questions about the future of neural interfaces, AI-mediated communication and cognitive privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe risks falling behind without telecom scale, Telefónica says

Telefónica has called for a shift in Europe’s telecommunications policy, arguing that market fragmentation is undermining investment, digital competitiveness, and the continent’s technological sovereignty, according to a new blog post from the company.

In the post, Telefónica says Europe’s emphasis on maximising retail competition has produced a highly fragmented operator landscape. It cites industry data showing the average European operator serves around five million customers, far fewer than peers in the United States or China.

The company argues that this lack of scale explains Europe’s lower per-capita investment in telecoms infrastructure and is slowing the rollout of technologies such as standalone 5G, fibre networks, and sovereign cloud and AI platforms.

Telefónica points to recent reports by Mario Draghi and Enrico Letta as signs of a policy shift, with EU institutions placing greater weight on investment capacity, resilience, and dynamic efficiency alongside traditional competition objectives.

The blog post concludes that Europe faces a strategic choice between preserving fragmented markets or enabling responsible consolidation. Telefónica says carefully regulated mergers could support sustainability, reduce regional digital divides, and strengthen Europe’s digital infrastructure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How data centres affect electricity, prices, water consumption and jobs

Data centres have become critical infrastructure for modern economies, supporting services ranging from digital communications and online commerce to emergency response systems and financial transactions.

As AI expands, demand for cloud computing continues to accelerate, increasing the need for additional data centre capacity worldwide.

Concerns about environmental impact often focus on electricity and water use, yet recent data indicate that data centres are not primary drivers of higher power prices and consume far less water than many traditional industries.

Studies show that rising electricity costs are largely linked to grid upgrades, climate-related damage and fuel prices instead of large-scale computing facilities, while water use by data centres remains a small fraction of overall consumption.

Technological improvements have further reduced resource intensity. Operators have significantly improved water efficiency per unit of computing power, adopting closed-loop liquid cooling and advanced energy management systems.

In many regions, water is required only intermittently, with consumption levels lower than those in sectors such as clothing manufacturing, agriculture and automotive services.

Beyond digital services, data centres deliver tangible economic benefits to local communities. Large-scale investments generate construction activity, long-term technical employment and stable tax revenues, while infrastructure upgrades and skills programmes support regional development.

As cloud computing and AI continue to shape everyday life, data centres are increasingly positioned as both economic and technological anchors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

BBVA deepens AI partnership with OpenAI

OpenAI and BBVA have agreed on a multi-year strategic collaboration designed to embed artificial intelligence across the global banking group.

An initiative that will expand the use of ChatGPT Enterprise to all 120,000 BBVA employees, marking one of the largest enterprise deployments of generative AI in the financial sector.

The programme focuses on transforming customer interactions, internal workflows and decision making.

BBVA plans to co-develop AI-driven solutions with OpenAI to support bankers, streamline risk analysis and redesign processes such as software development and productivity support, instead of relying on fragmented digital tools.

The rollout follows earlier deployments that demonstrated strong engagement and measurable efficiency gains, with employees saving hours each week on routine tasks.

ChatGPT Enterprise will be implemented with enterprise grade security and privacy safeguards, ensuring compliance within a highly regulated environment.

Beyond internal operations, BBVA is accelerating its shift toward AI native banking by expanding customer facing services powered by OpenAI models.

The collaboration reflects a broader move among major financial institutions to integrate AI at the core of products, operations and personalised banking experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes cybercrime investigations in India

Maharashtra police are expanding the use of an AI-powered investigation platform developed with Microsoft to tackle the rapid growth of cybercrime.

MahaCrimeOS AI, already in use across Nagpur district, will now be deployed to more than 1,100 police stations statewide, significantly accelerating case handling and investigation workflows.

The system acts as an investigation copilot, automating complaint intake, evidence extraction and legal documentation across multiple languages.

Officers can analyse transaction trails, request data from banks and telecom providers and follow standardised investigation pathways, instead of relying on slow manual processes.

Built using Microsoft Foundry and Azure OpenAI Service, MahaCrimeOS AI integrates policing protocols, criminal law references and open-source intelligence.

Investigators report major efficiency gains, handling several cases monthly where only one was previously possible, while maintaining procedural accuracy and accountability.

The initiative highlights how responsible AI deployment can strengthen public institutions.

By reducing administrative burden and improving investigative capacity, the platform allows officers to focus on victim support and crime resolution, marking a broader shift toward AI-assisted governance in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!