Study warns that LLMs are vulnerable to minimal tampering

Researchers from Anthropic, the UK AI Security Institute and the Alan Turing Institute have shown that only a few hundred crafted samples can poison LLM models. The tests revealed that around 250 malicious entries could embed a backdoor that triggers gibberish responses when a specific phrase appears.

Models ranging from 600 million to 13 billion parameters (such as Pythia) were affected, highlighting the scale-independent nature of the weakness. A planted phrase such as ‘sudo’ caused output collapse, raising concerns about targeted disruption and the ease of manipulating widely trained systems.

Security specialists note that denial-of-service effects are worrying, yet deceptive outputs pose far greater risk. Prior studies already demonstrated that medical and safety-critical models can be destabilised by tiny quantities of misleading data, heightening the urgency for robust dataset controls.

Researchers warn that open ecosystems and scraped corpora make silent data poisoning increasingly feasible. Developers are urged to adopt stronger provenance checks and continuous auditing, as reliance on LLMs continues to expand for AI purposes across technical and everyday applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google boosts Translate with Gemini upgrades

Google is rolling out a major Translate upgrade powered by Gemini to improve text and speech translation. The update enhances contextual understanding so idioms, tone and intent are interpreted more naturally.

A beta feature for live headphone translation enables real-time speech-to-speech output. Gemini processes audio directly, preserving cadence and emphasis to improve conversations and lectures. Android users in the US, Mexico and India gain early access, with wider availability planned for 2026.

Translate is also gaining expanded language-learning tools for speaking practice and progress tracking. Additional language pairs, including English to German and Portuguese, broaden support for learners worldwide.

Google aims to reduce friction in global communication by focusing on meaning rather than literal phrasing. Engineers expect user feedback to shape the AI live translation beta across platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

CES 2026 to feature LG’s new AI-driven in-car platform

LG Electronics will unveil a new AI Cabin Platform at CES 2026 in Las Vegas, positioning the system as a next step beyond today’s software-defined vehicles and toward what the company calls AI-defined mobility.

The platform is designed to run on automotive high-performance computing systems and is powered by Qualcomm Technologies’ Snapdragon Cockpit Elite. LG says it applies generative AI models directly to in-vehicle infotainment, enabling more context-aware and personalised driving experiences.

Unlike cloud-dependent systems, all AI processing occurs on-device within the vehicle. LG says this approach enables real-time responses while improving reliability, privacy, and data security by avoiding communication with external servers.

Using data from internal and external cameras, the system can assess driving conditions and driver awareness to provide proactive alerts. LG also demonstrated adaptive infotainment features, including AI-generated visuals and music suggestions that respond to weather, time, and driving context.

LG will showcase the AI Cabin Platform at a private CES event, alongside a preview of its AI-defined vehicle concept. The company says the platform builds on its expanding partnership with Qualcomm Technologies and on its earlier work integrating infotainment and driver-assistance systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Conduit revolutionises neuro-language research with 10,000-hour dataset

A San Francisco start-up, named Conduit, has spent six months building what it claims is the largest neural language dataset ever assembled, capturing around 10,000 hours of non-invasive brain recordings from thousands of participants.

The project aims to train thought-to-text AI systems that interpret semantic intent from brain activity moments before speech or typing occurs.

Participants take part in extended conversational sessions instead of rigid laboratory tasks, interacting freely with large language models through speech or simplified keyboards.

Engineers found that natural dialogue produced higher quality data, allowing tighter alignment between neural signals, audio and text while increasing overall language output per session.

Conduit developed its own sensing hardware after finding no commercial system capable of supporting large-scale multimodal recording.

Custom headsets combine multiple neural sensing techniques within dense training rigs, while future inference devices will be simplified once model behaviour becomes clearer.

Power systems and data pipelines were repeatedly redesigned to balance signal clarity with scalability, leading to improved generalisation across users and environments.

As data volume increased, operational costs fell through automation and real time quality control, allowing continuous collection across long daily schedules.

With data gathering largely complete, the focus has shifted toward model training, raising new questions about the future of neural interfaces, AI-mediated communication and cognitive privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

BBVA deepens AI partnership with OpenAI

OpenAI and BBVA have agreed on a multi-year strategic collaboration designed to embed artificial intelligence across the global banking group.

An initiative that will expand the use of ChatGPT Enterprise to all 120,000 BBVA employees, marking one of the largest enterprise deployments of generative AI in the financial sector.

The programme focuses on transforming customer interactions, internal workflows and decision making.

BBVA plans to co-develop AI-driven solutions with OpenAI to support bankers, streamline risk analysis and redesign processes such as software development and productivity support, instead of relying on fragmented digital tools.

The rollout follows earlier deployments that demonstrated strong engagement and measurable efficiency gains, with employees saving hours each week on routine tasks.

ChatGPT Enterprise will be implemented with enterprise grade security and privacy safeguards, ensuring compliance within a highly regulated environment.

Beyond internal operations, BBVA is accelerating its shift toward AI native banking by expanding customer facing services powered by OpenAI models.

The collaboration reflects a broader move among major financial institutions to integrate AI at the core of products, operations and personalised banking experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes cybercrime investigations in India

Maharashtra police are expanding the use of an AI-powered investigation platform developed with Microsoft to tackle the rapid growth of cybercrime.

MahaCrimeOS AI, already in use across Nagpur district, will now be deployed to more than 1,100 police stations statewide, significantly accelerating case handling and investigation workflows.

The system acts as an investigation copilot, automating complaint intake, evidence extraction and legal documentation across multiple languages.

Officers can analyse transaction trails, request data from banks and telecom providers and follow standardised investigation pathways, instead of relying on slow manual processes.

Built using Microsoft Foundry and Azure OpenAI Service, MahaCrimeOS AI integrates policing protocols, criminal law references and open-source intelligence.

Investigators report major efficiency gains, handling several cases monthly where only one was previously possible, while maintaining procedural accuracy and accountability.

The initiative highlights how responsible AI deployment can strengthen public institutions.

By reducing administrative burden and improving investigative capacity, the platform allows officers to focus on victim support and crime resolution, marking a broader shift toward AI-assisted governance in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New law requires AI disclosure in advertising in the US

A new law in New York, US, will require advertisers to disclose when AI-generated people appear in commercial content. Governor Kathy Hochul said the measure brings transparency and protects consumers as synthetic avatars become more widespread.

A second law now requires consent from heirs or executors when using a deceased person’s likeness for commercial purposes. The rule updates the state’s publicity rights, which previously lacked clarity in the context of the generative AI era.

Industry groups welcomed the move, saying it addresses the risks posed by unregulated AI usage, particularly for actors in the film and television industries. The disclosure must be conspicuous when an avatar does not correspond to a real human.

Specific expressive works such as films, games and shows are exempt when the avatar matches its use in the work. The laws arrive as national debate intensifies and President-elect Donald Trump signals potential attempts to limit state-level AI regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines safeguards as AI cyber capabilities advance

Cyber capabilities in advanced AI models are improving rapidly, delivering clear benefits for cyberdefence while introducing new dual-use risks that require careful management, according to OpenAI’s latest assessment.

The company points to sharp gains in capture-the-flag performance, with success rates rising from 27 percent in August to 76 percent by November 2025. OpenAI says future models could reach high cyber capability, including assistance with sophisticated intrusion techniques.

To address this, OpenAI says it is prioritising defensive use cases, investing in tools that help security teams audit code, patch vulnerabilities, and respond more effectively to threats. The goal is to give defenders an advantage in an often under-resourced environment.

OpenAI argues that cybersecurity cannot be governed through a single safeguard, as defensive and offensive techniques overlap. Instead, it applies a defence-in-depth approach that combines access controls, monitoring, detection systems, and extensive red teaming to limit misuse.

Alongside these measures, the company plans new initiatives, including trusted access programmes for defenders, agent-based security tools in private testing, and the creation of a Frontier Risk Council. OpenAI says these efforts reflect a long-term commitment to cyber resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tiiny AI unveils the Pocket Lab supercomputer

Tiiny AI has revealed the Pocket Lab, a palm-sized device recognised as the world’s smallest personal AI supercomputer. Guinness World Records confirmed the title, noting its ability to run models with up to 120 billion parameters.

The Pocket Lab uses an ARM v9.2 CPU, a discrete NPU delivering 190 TOPS and 80GB of LPDDR5X memory. Popular open-source models such as GPT-OSS, Llama, Qwen, Mistral, DeepSeek and Phi are supported. Tiiny AI says its hardware makes large-scale reasoning possible in a handheld format.

Two in-house technologies enhance efficiency by distributing workloads and reducing unnecessary activations. TurboSparse manages sparse neuron activity to preserve capability while improving speed, and PowerInfer splits computation across the CPU and NPU.

Tiiny AI plans a full showcase at CES 2026, with pricing and release information still pending. Analysts want to see how the device performs in real-world tasks compared with much larger systems. The company believes the Pocket Lab will shift expectations for personal AI hardware.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Reddit challenges Australia’s teen social media ban

The US social media company, Reddit, has launched legal action in Australia as the country enforces the world’s first mandatory minimum age for social media access.

Reddit argues that banning users under 16 prevents younger Australians from taking part in political debate, instead of empowering them to learn how to navigate public discussion.

Lawyers representing the company argue that the rule undermines the implied freedom of political communication and could restrict future voters from understanding the issues that will shape national elections.

Australia’s ban took effect on December 10 and requires major platforms to block underage users or face penalties that can reach nearly 50 million Australian dollars.

Companies are relying on age inference and age estimation technologies to meet the obligation, although many have warned that the policy raises privacy concerns in addition to limiting online expression.

The government maintains that the law is designed to reduce harm for younger users and has confirmed that the list of prohibited platforms may expand as new safety issues emerge.

Reddit’s filing names the Commonwealth of Australia and Communications Minister Anika Wells. The minister’s office says the government intends to defend the law and will prioritise the protection of young Australians, rather than allowing open access to high-risk platforms.

The platform’s challenge follows another case brought by an internet rights group that claims the legislation represents an unfair restriction on free speech.

A separate list identifies services that remain open for younger users, such as Roblox, Pinterest and YouTube Kids. At the same time, platforms including Instagram, TikTok, Snapchat, Reddit and X are blocked for those under sixteen.

The case is expected to shape future digital access rights in Australia, as online communities become increasingly central to political education and civic engagement among emerging voters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!