AI helps fight antibiotic-resistant superbugs

UK scientists are launching a three-year initiative to use AI in the fight against drug-resistant infections, a growing threat to public health.

The project, backed by £45 million from GSK and coordinated with the Fleming Initiative, aims to develop new tools against pathogens that currently evade treatment.

Researchers will focus on priority bacteria and fungi identified by the World Health Organization (WHO), including E. coli, Klebsiella pneumoniae, MRSA and Aspergillus.

These AI models will be used to design antibiotics and deepen the understanding of immune responses, with data shared globally to expedite drug development.

Experts warn that antimicrobial resistance could claim millions of lives by 2050 if new solutions are not found. The initiative reflects an urgent need to pool scientific expertise and technology to create next-generation treatments and vaccines for resistant infections.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Arizona astronomer creates ray-tracing method to make AI less overconfident

A University of Arizona astronomer, Peter Behroozi, has developed a novel technique to make AI systems more trustworthy by enabling them to quantify when they might be wrong.

Behroozi’s method adapts ray tracing, traditionally used in computer graphics, to explore the high-dimensional spaces in which AI models operate, thereby allowing the system to gauge uncertainty more effectively.

He uses a Bayesian-sampling approach: rather than relying on a single model, the system effectively consults a ‘whole range of experts’ by training many models in parallel and observing the diversity of their outputs.
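Behroozi's ray-tracing implementation is not reproduced here, but the 'whole range of experts' idea can be illustrated with a generic ensemble sketch: fit many models on bootstrap resamples of toy data and treat their disagreement as an uncertainty estimate. All names and numbers below are illustrative assumptions, not the published method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise, observed only for x in [0, 1]
x_train = rng.uniform(0, 1, 200)
y_train = 2 * x_train + rng.normal(0, 0.1, 200)

def fit_linear(x, y):
    """Least-squares fit of y = a*x + b, returning (a, b)."""
    A = np.vstack([x, np.ones_like(x)]).T
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

# "Consult a whole range of experts": train many models on bootstrap resamples
models = []
for _ in range(50):
    idx = rng.integers(0, len(x_train), len(x_train))
    models.append(fit_linear(x_train[idx], y_train[idx]))

def predict_with_uncertainty(x):
    """Mean prediction and spread (std) across the ensemble."""
    preds = np.array([a * x + b for a, b in models])
    return preds.mean(), preds.std()

mean_in, std_in = predict_with_uncertainty(0.5)    # inside the training range
mean_out, std_out = predict_with_uncertainty(10.0)  # far outside it
```

The ensemble agrees closely where data constrains it and diverges where it does not, so `std_out` greatly exceeds `std_in`: a crude stand-in for the 'wrong-but-confident' detection the article describes.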

This advance addresses a critical problem in modern AI: ‘wrong-but-confident’ outputs, situations where a model gives a single, confident answer that may be incorrect. According to Behroozi, his technique is orders of magnitude faster than traditional uncertainty-quantification methods, making it practical even for extensive neural networks.

The implications are broad, extending from healthcare to finance to autonomous systems: AI that knows its own limits could reduce risk and increase reliability. Behroozi hopes his code, now publicly available, will be adopted by other researchers working under high-stakes conditions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI promises better communication tools for disabled users

Students with disabilities met technology executives at National Star College in Gloucestershire, UK, to explain what they need from communication devices. Battery life emerged as the top priority, with users saying they need devices that last 24 hours without charging so they can communicate all day long.

One student who controls his device by moving his eyes said losing power during the day feels like having his voice ripped away from him. Another student with cerebral palsy wants her device to help her run a bath independently and eventually design fairground rides that disabled people can enjoy.

Technology companies responded by promising artificial intelligence improvements that will make the devices work much faster. The new AI features will help users type more quickly, correct mistakes automatically and even create personalised voices that sound like the actual person speaking.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA pushes forward with AI-ready data

Enterprises are facing growing pressure to prepare unstructured data for use in modern AI systems as organisations struggle to turn prototypes into production tools.

Only around 40 percent of AI projects advance beyond the pilot phase, largely because of limits in data quality and availability. Most organisational information now arrives in unstructured form, ranging from emails to video files, which offers little coherence and places a heavy load on governance systems.

AI agents need secure, recent and reliable data instead of fragmented information scattered across multiple storage silos. Preparing such data demands extensive curation, metadata work, semantic chunking and the creation of vector embeddings.
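The preparation steps named above can be sketched in miniature. The chunker below packs sentences into word-budgeted chunks, and the 'embedding' is a toy bag-of-words count standing in for the neural vector embeddings real pipelines produce; the function names, the word limit and the tiny vocabulary are all illustrative assumptions.

```python
import re
from collections import Counter

def chunk_text(text, max_words=40):
    """Naive semantic chunking: split on sentence boundaries, then pack
    whole sentences into chunks of at most max_words words."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current, count = [], [], 0
    for s in sentences:
        words = len(s.split())
        if current and count + words > max_words:
            chunks.append(' '.join(current))
            current, count = [], 0
        current.append(s)
        count += words
    if current:
        chunks.append(' '.join(current))
    return chunks

def embed(chunk, vocab):
    """Toy stand-in for a vector embedding: bag-of-words counts over a
    fixed vocabulary. Production systems use a neural embedding model."""
    counts = Counter(w.lower().strip('.,') for w in chunk.split())
    return [counts[w] for w in vocab]

doc = ("AI agents need secure data. Unstructured files dominate. "
       "Embeddings make text searchable. Governance still matters.")
chunks = chunk_text(doc, max_words=8)
vocab = ['data', 'text', 'agents', 'governance']
vectors = [embed(c, vocab) for c in chunks]
```

Splitting on sentence boundaries rather than fixed character offsets keeps each chunk self-contained, which is the point of the semantic chunking the article mentions.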

Enterprises also struggle with the rising speed of data creation and the spread of duplicate copies, which increases both operational cost and security concerns.

An emerging approach by NVIDIA, known as the AI data platform, aims to address these challenges by embedding GPU acceleration directly into the data path. The platform prepares and indexes information in place, allowing enterprises to reduce data drift, strengthen governance and avoid unnecessary replication.

Any change to a source document is immediately reflected in the associated AI representations, improving accuracy and consistency for business applications.

NVIDIA is positioning its own AI Data Platform reference design as a next step for enterprise storage. The design combines RTX PRO 6000 Blackwell Server Edition GPUs, BlueField-3 DPUs and integrated AI processing pipelines.

Leading technology providers including Cisco, Dell Technologies, IBM, HPE, NetApp, Pure Storage and others have adopted the model as they prepare storage systems for broader use of generative AI in the enterprise sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Intuit expand financial AI collaboration

Yesterday, OpenAI and Intuit announced a major strategic partnership aimed at reshaping how people manage their personal and business finances. The arrangement will allow Intuit apps to appear directly inside ChatGPT, enabling secure and personalised financial actions within a single environment.

The agreement is worth more than $100 million and reinforces Intuit's long-term push to strengthen its AI-driven expert platform.

Intuit will broaden its use of OpenAI’s most advanced models to support financial tasks across its products. Frontier models will help power AI agents that assist with tax preparation, cash flow forecasting, payroll management and wider financial planning.

Intuit will also continue using ChatGPT Enterprise internally so employees can work with greater speed and accuracy.

The partnership is expected to help consumers make more informed financial choices instead of relying on fragmented tools. Users will be able to explore suitable credit offers, receive clearer tax answers, estimate refunds and connect with tax specialists.

Businesses will gain tailored insights based on real-time data that can improve cash flow, automate customer follow-ups and support more effective outreach through email marketing.

Leaders from both companies argue that the collaboration will give people and firms a meaningful financial advantage. They say greater personalisation, deeper data analysis and more effortless decision making will support stronger household finances and more resilient small enterprises.

The deal expands the growing community of OpenAI enterprise customers and strengthens Intuit’s position in global financial technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Web services recover after Cloudflare restores its network systems

Cloudflare has resolved a technical issue that briefly disrupted access to major platforms, including X, ChatGPT, and Letterboxd. Users had earlier reported internal server error messages linked to Cloudflare’s network, indicating that pages could not be displayed.

The disruption began around midday UK time, with some sites loading intermittently as the problem spread across the company’s infrastructure. Cloudflare confirmed it was investigating an incident affecting multiple customers and issued rolling updates as engineers worked to identify the fault.

Outage tracker Down Detector also experienced difficulties during the incident, later showing a sharp rise in reports once it came back online. The pattern pointed to a broad network-level failure rather than isolated platform issues.

Users saw repeated internal server error warnings asking them to try again, though services began recovering as Cloudflare isolated the cause. The company has not yet released full technical details, but said the fault has been fixed and that systems are stabilising.

Cloudflare provides routing, security, and reliability tools for a wide range of online services, making a single malfunction capable of cascading globally. The company said it would share further information on the incident and steps taken to prevent similar failures.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Misconfigured database triggered global Cloudflare failure, CEO says

Cloudflare says its global outage on 18 November was caused by an internal configuration error, not a cyberattack. CEO Matthew Prince apologised to users after a permissions update to a ClickHouse cluster generated a malformed feature file that caused systems worldwide to crash.

The oversized file exceeded a hard limit in Cloudflare’s routing software, triggering failures across its global edge. Intermittent recoveries during the first hours of the incident led engineers to suspect a possible attack, as the network randomly stabilised when a non-faulty file propagated.

Confusion intensified when Cloudflare’s externally hosted status page briefly became inaccessible, raising fears of coordinated targeting. The root cause was later traced to metadata duplication from an unexpected database source, which doubled the number of machine-learning features in the file.
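The failure mode described above, duplicated metadata doubling a feature file until it breached a hard limit, can be sketched generically. This is not Cloudflare's actual code; the cap value, names and deduplication remedy are illustrative assumptions about how such a pipeline could fail safe instead of crashing downstream.

```python
FEATURE_LIMIT = 200  # hard cap the consuming software enforces (illustrative value)

def build_feature_file(rows):
    """Deduplicate feature metadata and enforce the size cap, failing
    loudly at build time instead of shipping an oversized file."""
    features = sorted(set(rows))  # duplicated metadata collapses here
    if len(features) > FEATURE_LIMIT:
        raise ValueError(
            f"{len(features)} features exceeds cap of {FEATURE_LIMIT}")
    return features

base = [f"feat_{i}" for i in range(150)]
duplicated_query = base + base  # a permissions change surfaces each row twice
safe_file = build_feature_file(duplicated_query)  # dedup keeps it under the cap
```

Validating (and deduplicating) at the point of generation, rather than letting the edge software hit its limit, is the kind of defence-in-depth a postmortem like this typically motivates.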

The outage affected Cloudflare’s CDN, security layers, and ancillary services, including Turnstile, Workers KV, and Access. Some legacy proxies kept limited traffic moving, but bot scores and authentication systems malfunctioned, causing elevated latencies and blocked requests.

Engineers halted the propagation of the faulty file by mid-afternoon and restored a clean version before restarting affected systems. Prince called it Cloudflare’s most serious failure since 2019 and said lessons learned will guide major improvements to the company’s infrastructure resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google enters a new frontier with Gemini 3

Google has begun a new phase of its AI strategy with the release of Gemini 3, its most advanced model to date.

The new system prioritises deeper reasoning and more subtle multimodal understanding, enabling users to approach difficult ideas with greater clarity instead of relying on repetitive prompting. It marks a major step for Google’s long-term project to integrate stronger intelligence into products used by billions.

Gemini 3 Pro is already available in preview across the Gemini app, AI Mode in Search, AI Studio, Vertex AI and Google’s new development platform known as Antigravity.

The model performs at the top of major benchmarks in reasoning, mathematics, tool use and multimodal comprehension, offering substantial improvements over Gemini 2.5 Pro.

Deep Think mode extends the model’s capabilities even further, reaching new records on demanding academic and AGI-oriented tests, although Google is delaying wider release until additional safety checks conclude.

Users can rely on Gemini 3 to learn complex topics, analyse handwritten material, decode long academic texts or translate lengthy videos into interactive guides instead of navigating separate tools.

Developers benefit from richer interactive interfaces, more autonomous coding agents and the ability to plan tasks over longer horizons.

Google Antigravity enhances this shift by giving agents direct control of the development environment, allowing them to plan, write and validate code independently while remaining under human supervision.

Google emphasises that Gemini 3 is its most extensively evaluated model, supported by independent audits and strengthened protections against manipulation. The system forms the foundation for Google’s next era of agentic, personalised AI and will soon expand with additional models in the Gemini 3 series.

The company expects the new generation to reshape how people learn, build and organise daily tasks instead of depending on fragmented digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok launches new tools to manage AI-generated content

TikTok has announced new tools to help users shape and understand AI-generated content (AIGC) in their feeds. A new ‘Manage Topics’ control will let users adjust how much AI content appears in their For You feeds alongside keyword filters and the ‘not interested’ option.

The aim is to personalise content rather than remove it entirely.

To strengthen transparency, TikTok is testing ‘invisible watermarking’ for AI-generated content created with TikTok tools or uploaded using C2PA Content Credentials. Combined with creator labels and AI detection, these watermarks help track and identify content even if edited or re-uploaded.

The platform has launched a $2 million AI literacy fund to support global experts in creating educational content on responsible AI. TikTok collaborates with industry partners and non-profits like Partnership on AI to promote transparency, research, and best practices.

Investments in AI extend beyond moderation and labelling. TikTok is developing innovative features such as Smart Split and AI Outline to enhance creativity and discovery, while using AI to protect user safety and improve the well-being of its trust and safety teams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poll manipulation by AI threatens democratic accuracy, according to a new study

Public opinion surveys face a growing threat as AI becomes capable of producing highly convincing fake responses. New research from Dartmouth shows that AI-generated answers can pass every quality check, imitate real human behaviour and alter poll predictions without leaving evidence.

In several major polls conducted before the 2024 US election, inserting only a few dozen synthetic responses would have reversed expected outcomes.
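The arithmetic behind that finding is simple to demonstrate. The sketch below uses hypothetical numbers (a 1,500-response poll with a two-point lead), not the actual 2024 polls from the Dartmouth study, to show how roughly forty synthetic responses can flip the reported leader.

```python
# Hypothetical close race: candidate A leads 51% to 49% in 1,500 honest responses
honest = ['A'] * 765 + ['B'] * 735

def leader(responses):
    """Return whichever candidate holds a strict majority of responses."""
    a = responses.count('A')
    return 'A' if a * 2 > len(responses) else 'B'

# Inject a few dozen synthetic responses, all favouring B
poisoned = honest + ['B'] * 40

print(leader(honest))    # prints "A"
print(leader(poisoned))  # prints "B"
```

Forty fabricated responses amount to under 3 percent of the sample, which is exactly why the study warns that such manipulation could go unnoticed by standard quality checks.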

The study reveals how easily malicious actors could influence democratic processes. AI models can operate in multiple languages yet deliver flawless English answers, allowing foreign groups to bypass detection.

An autonomous synthetic respondent created for the study passed nearly all attention tests, avoided errors in logic puzzles and adjusted its tone to match assigned demographic profiles without exposing its artificial nature.

The potential consequences extend far beyond electoral polling. Many scientific disciplines rely heavily on survey data to track public health risks, measure consumer behaviour or study mental wellbeing.

If AI-generated answers infiltrate such datasets, the reliability of thousands of studies could be compromised, weakening evidence used to shape policy and guide academic research.

Financial incentives further raise the risk. Human participants earn modest fees, while AI can produce survey responses at almost no cost. Existing detection methods failed to identify the synthetic respondent at any stage.

The researcher urges survey companies to adopt new verification systems that confirm the human identity of participants, arguing that stronger safeguards are essential to protect democratic accountability and the wider research ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!