MIT AI creates antibiotics to tackle resistant bacteria

MIT researchers have used generative AI to design novel antibiotics against drug-resistant bacteria, including Neisseria gonorrhoeae, which causes gonorrhoea, and MRSA. Laboratory tests show the compounds kill the bacteria without harming human cells, marking a potential breakthrough in antibiotic development.

The AI system generated and screened more than 36 million candidate compounds, producing entirely new molecules with mechanisms that bypass existing resistance. Unlike traditional screening, this approach enables faster discovery, potentially shortening early-stage development timelines from years to months.
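
The study’s screening pipeline has not been released; purely as a sketch of what one in silico filtering step in this kind of pipeline can look like, the snippet below uses the open-source RDKit toolkit, with made-up thresholds and stand-in molecules rather than MIT’s actual criteria or compounds.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

def passes_basic_filters(smiles: str) -> bool:
    """Keep only parseable molecules with roughly drug-like bulk properties."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                            # unparseable candidate string
        return False
    return (Descriptors.MolWt(mol) <= 500      # illustrative weight cut-off
            and Descriptors.MolLogP(mol) <= 5  # illustrative lipophilicity cut-off
            and QED.qed(mol) >= 0.5)           # illustrative drug-likeness score

# Stand-in candidates: aspirin and a deliberately invalid string.
candidates = ["CC(=O)Oc1ccccc1C(=O)O", "not-a-molecule"]
print([s for s in candidates if passes_basic_filters(s)])
```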

Drug resistance is a growing global threat, with the World Health Organisation warning that unchecked antimicrobial resistance could cause 10 million deaths a year by 2050. MIT’s AI-designed compounds sidestep existing resistance mechanisms, clearing infections in laboratory and animal tests with minimal toxicity.

Beyond antibiotics, this achievement highlights the broader potential of AI in pharmaceutical research. Smaller biotech firms could leverage AI for rapid drug design, reducing costs and opening new pathways for addressing urgent medical challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chinese researchers advance atom-based quantum computing with massive atom array

Chinese physicist Pan Jianwei’s team created the world’s largest atom array, arranging over 2,000 rubidium atoms for quantum computing. The breakthrough at the University of Science and Technology of China could enable atom-based quantum computers to scale to tens of thousands of qubits.

Researchers used AI and optical tweezers to position all the atoms simultaneously, completing the array in 60 milliseconds. The system achieved 99.97 percent fidelity for single-qubit operations and 99.5 percent for two-qubit operations, with 99.92 percent accuracy in qubit state detection.

Atom-based quantum computing is seen as more promising than superconducting circuits or trapped ions because of the stability and controllability of neutral atoms. Until now, arrays had been limited to a few hundred atoms, as moving each atom into position individually was slow and difficult.
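
For intuition about that bottleneck: conventional rearrangement protocols are often cast as an assignment problem, choosing which loaded atom to move to which target site so that total tweezer travel is minimised, and then executing the moves one by one. Below is a toy sketch using SciPy’s Hungarian-algorithm solver; the grid size and coordinates are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(seed=1)
loaded = rng.uniform(0, 60, size=(60, 2))   # randomly loaded atom positions

# Target sites: a 7x7 grid (49 sites), fewer sites than loaded atoms.
targets = np.array([(x, y) for x in range(0, 70, 10) for y in range(0, 70, 10)],
                   dtype=float)

# Cost matrix: squared distance each candidate tweezer move would cover.
cost = ((loaded[:, None, :] - targets[None, :, :]) ** 2).sum(axis=-1)

# The Hungarian algorithm picks which atom fills which site at minimum total cost.
atom_idx, site_idx = linear_sum_assignment(cost)
print(f"{len(site_idx)} sites filled, total squared travel: "
      f"{cost[atom_idx, site_idx].sum():.0f}")
```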

Future work aims to expand array sizes further using stronger lasers and faster light modulators. Researchers hope that the ability to arrange tens of thousands of atoms precisely will lead to fully reliable, scalable quantum computers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek delays next AI model amid Huawei chip challenges

Chinese AI company DeepSeek has postponed the launch of its R2 model after repeated technical problems using Huawei’s Ascend processors for training. The delay highlights Beijing’s ongoing struggle to replace US-made chips with domestic alternatives.

Authorities had encouraged DeepSeek to shift from Nvidia hardware to Huawei’s chips after the release of its R1 model in January. However, training failures, slower inter-chip connections, stability issues, and weaker software performance led the start-up to revert to Nvidia chips for training, while continuing to explore Ascend for inference tasks.

Despite Huawei deploying engineers to assist on-site, DeepSeek was unable to complete a successful training run using Ascend processors. The company is also contending with extended data-labelling timelines for its updated model, adding to the delays.

The situation underscores how far Chinese chip technology lags behind Nvidia for advanced AI development, even as Beijing pressures domestic firms to use local products. Industry observers say Huawei is facing “growing pains” but could close the gap over time. Meanwhile, competitors such as Alibaba, whose Qwen3 models have integrated elements of DeepSeek’s design more efficiently, are intensifying market pressure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple plans Siri upgrade with AI robots and smart displays

Apple is reportedly developing a tabletop robot, expected by 2027, that could follow users around with a moving display and an animated voice assistant. The company previewed the concept in research earlier this year, showing a dancing robot that mimics user movements.

Siri may soon take on a more visual, animated form, allowing natural conversations similar to ChatGPT’s voice mode. Apple is testing designs based on Memoji and the Finder icon.

A new smart home display will likely launch in 2026, featuring facial recognition and shared user access. Its design reportedly resembles Google’s Nest Hub.

Apple is also developing a range of home security products, including cameras and other devices, forming a new smart home ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets local pricing in India

OpenAI has introduced local pricing for ChatGPT in India, allowing users to pay in rupees instead of US dollars. The shift follows the release of GPT-5, which supports 12 Indian languages and offers improved relevance for local users.

India is now ChatGPT’s second-largest market after the US. The Plus plan now costs $24 per month, while the Pro plan is priced at $240 per month and the Team plan at $25 per seat.

OpenAI is also expected to launch a lower-cost option called ChatGPT Go, potentially priced at $5 to appeal to casual users. Competitors like Google and Perplexity AI have also responded by offering free access to students and telecom customers to boost adoption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Employees trust managers less when emails use AI

A new study has revealed that managers who use AI to write emails are often viewed as less sincere by their staff. Acceptance improved for emails focused on factual information, where employees were more forgiving of AI involvement.

Researchers found that employees judged AI use by their supervisors more harshly than their own use of it, even when the level of assistance was the same.

Only 40 percent of respondents rated managers as sincere when their emails involved high AI input, compared to 83 percent for lighter use.

Professionals did consider AI-assisted emails efficient and polished, but trust declined when messages were relationship-driven or motivational.

Researchers highlighted that managers’ heavier reliance on AI may undermine perceptions of trust, care, and authenticity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India pushes for safe AI use in financial sector

India’s central bank has proposed a national framework to guide the ethical and responsible use of AI in the financial sector.

The committee, set up by the Reserve Bank of India in December 2024, has made 26 recommendations across six focus areas, including infrastructure, governance, and assurance.

It advised establishing a digital backbone to support homegrown AI models and forming a multi-stakeholder body to evaluate risks.

A dedicated fund to boost domestic AI development tailored for finance was also proposed, alongside audit guidelines and policy frameworks.

The committee recommended integrating AI into platforms such as the Unified Payments Interface (UPI) while preserving public trust and ensuring security.

Led by IIT Bombay’s Pushpak Bhattacharyya, the panel noted the need to balance innovation with risk mitigation in regulatory design.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Igor Babuschkin leaves Elon Musk’s xAI for AI safety investment push

Igor Babuschkin, cofounder of Elon Musk’s AI startup xAI, has announced his departure to launch an investment firm dedicated to AI safety research. Musk created xAI in 2023 to rival Big Tech, criticising industry leaders for weak safety standards and excessive censorship.

Babuschkin revealed his new venture, Babuschkin Ventures, will fund AI safety research and startups developing responsible AI tools. Before leaving, he oversaw engineering across infrastructure, product, and applied AI projects, and built core systems for training and managing models.

His exit follows that of xAI’s legal chief, Robert Keele, earlier this month, highlighting churn at the company amid intense competition with OpenAI, Google, and Anthropic, all of which are investing heavily in developing and deploying advanced AI systems.

Babuschkin, a former researcher at Google DeepMind and OpenAI, recalled the early scramble at xAI to set up infrastructure and models, calling it a period of rapid, foundational development. He said he had created many core tools that the startup still relies on.

Last month, X CEO Linda Yaccarino also resigned, months after Musk folded the social media platform into xAI. The company’s leadership changes come as the global AI race accelerates.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How Anthropic trains and tests Claude for safe use

Anthropic has outlined a multi-layered safety plan for Claude, aiming to keep it useful while preventing misuse. Its Safeguards team blends policy experts, engineers, and threat analysts to anticipate and counter risks.

The Usage Policy establishes clear guidelines for sensitive areas, including elections, finance, and child safety. Guided by the Unified Harm Framework, the team assesses potential physical, psychological, and societal harms, drawing on external experts for stress tests.

During the 2024 US elections, for example, the team added a TurboVote banner after detecting that Claude could surface outdated voting information, ensuring users saw accurate, non-partisan updates.

Safety is built into development, with guardrails to block illegal or malicious requests. Partnerships like ThroughLine help Claude handle sensitive topics, such as mental health, with care rather than avoidance or refusal.

Before launch, Claude undergoes safety, risk, and bias evaluations with government and industry partners. Once live, classifiers scan for violations in real time, while analysts track patterns of coordinated misuse.
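
Anthropic has not published this tooling, but to make the idea of real-time classification concrete, here is a toy sketch in which a keyword stub stands in for a trained classifier; the policy labels, phrases, and threshold are all illustrative inventions.

```python
# Toy post-deployment screening pass. A production system would call a
# trained classifier; the keyword stub below only illustrates the control flow.
POLICY_KEYWORDS = {
    "elections": ["fake ballot", "voter suppression scheme"],
    "finance": ["guaranteed insider tip"],
    "child_safety": ["csam"],
}
FLAG_THRESHOLD = 0.5  # illustrative cut-off

def score_policy_risk(message: str) -> dict[str, float]:
    """Stand-in scorer: 1.0 if a flagged phrase appears, else 0.0."""
    text = message.lower()
    return {label: float(any(phrase in text for phrase in phrases))
            for label, phrases in POLICY_KEYWORDS.items()}

def flagged_policies(message: str) -> list[str]:
    """Return the policy areas a message should be routed to analysts under."""
    scores = score_policy_risk(message)
    return [label for label, score in scores.items() if score >= FLAG_THRESHOLD]

print(flagged_policies("I can sell you a guaranteed insider tip"))  # ['finance']
```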

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study warns AI chatbots exploit trust to gather personal data

According to a new King’s College London study, AI chatbots can easily manipulate people into divulging personal details. Chatbots like ChatGPT, Gemini, and Copilot are popular, but they raise privacy concerns, with experts warning that they can be co-opted for harm.

Researchers built AI models based on Mistral’s Le Chat and Meta’s Llama, programming them to extract private data directly, deceptively, or via reciprocity. Emotional appeals proved most effective, with users disclosing more while perceiving fewer safety risks.

The ‘friendliness’ of chatbots established trust, which was later exploited to breach privacy. Even direct requests yielded sensitive details, despite discomfort. Participants often shared their age, hobbies, location, gender, nationality, and job title, and sometimes also provided health or income data.

The study shows a gap between privacy risk awareness and behaviour. AI firms claim they collect data for personalisation, notifications, or research, but some are accused of using it to train models or breaching EU data protection rules.

Last week, OpenAI faced criticism after shared ChatGPT chats appeared in Google search results, revealing sensitive topics. Researchers suggest in-chat alerts about data collection and stronger regulation to stop covert harvesting.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!