AI becomes central to biotech discovery and drug development

The biotechnology industry is moving from early AI experimentation to fully integrated discovery systems that embed AI into everyday research operations.

According to the 2026 Biotech AI Report from Benchling, leading organisations are reshaping data environments and R&D structures, making AI a core part of the drug development process.

Predictive models, such as protein structure prediction and docking simulations, are accelerating early-stage discovery, helping scientists identify targets faster and with greater accuracy.

Challenges persist in generative design, biomarker analysis, and ADME prediction, where adoption lags due to fragmented or poor-quality data.

Organisations overcoming these hurdles invest in high-quality, well-annotated measurements and strong integration between wet- and dry-lab work. This creates a continuous learning cycle that drives faster insights and reduces experimental dead ends.

Talent strategies are evolving to place AI expertise directly in R&D teams. Many firms upskill existing scientific staff to act as ‘scientific translators,’ bridging biology, regulatory needs, and machine learning.

Embedding AI leadership within research teams or using hybrid models reduces handoffs and ensures AI tools remain practical in real-world experiments.

Biotech firms combine in-house development with commercial components, following a ‘build what differentiates, buy what scales’ strategy. Confidence in AI is rising, driving investment in infrastructure, modelling, and integrated AI workflows for research.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI use among students surges as chatbots reshape schoolwork

More than half of US teenagers use AI tools to help with schoolwork, according to a new Pew Research Center study. The survey found that 54% of students aged 13 to 17 have used chatbots such as OpenAI’s ChatGPT or Microsoft’s Copilot to research assignments or solve maths problems.

Usage has risen in recent years. In 2024, 26% of US teens reported using ChatGPT for schoolwork, up from 13% in 2023. The latest survey of 1,458 teens and parents found 44% use AI for some schoolwork, while 10% rely on chatbots for most tasks.

Researchers say AI assistance is becoming routine in classrooms. Colleen McClain, a senior researcher at Pew and co-author of the report, said chatbot use for schoolwork is now a common practice among teens.

The findings come amid an intensifying debate over generative AI in education. Supporters argue that schools should teach students to use and evaluate AI tools, while critics warn of misinformation, reduced critical thinking, and increased cheating.

Recent research has raised questions about learning outcomes. One study by Cambridge University Press & Assessment and Microsoft Research found that students who took notes without chatbot support showed stronger reading comprehension than those using AI assistance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT Health under fire after study finds major failures in emergency detection

A new evaluation of ChatGPT Health has raised major safety concerns after researchers found it frequently failed to recognise urgent medical emergencies.

The independent study, published in Nature Medicine, reported that the system under-triaged more than half of the clinical scenarios tested, giving advice that could have delayed life-saving treatment.

The research team, led by Ashwin Ramaswamy, created sixty patient simulations ranging from minor illnesses to life-threatening conditions.

Three doctors agreed on the appropriate urgency for each case before comparing their judgement with the model’s responses. The AI performed adequately in straightforward emergencies such as strokes, yet frequently minimised danger in more complex presentations, including severe asthma and diabetic crises.

Experts also warned that ChatGPT Health struggled to detect suicidal ideation reliably. Minor changes to scenario details, such as adding normal lab results, caused safeguards to disappear entirely.

Critics, including health-misinformation researcher Alex Ruani, described the behaviour as dangerously inconsistent and capable of creating a false sense of security.

OpenAI said the study did not reflect typical real-world use but acknowledged the need for continued research and improvement.

Policy specialists argue that the findings underline the need for clear safety standards, external audits and stronger transparency requirements for AI systems operating in sensitive medical contexts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Galaxy S26 series brings powerful AI and privacy features

Samsung Electronics has unveiled the Galaxy S26 series, featuring advanced AI experiences, powerful performance, and an industry-leading camera system designed to simplify everyday smartphone tasks.

The series, which includes the Galaxy S26, S26+, and S26 Ultra, handles complex processes in the background, allowing users to focus on results rather than device operations.

The Galaxy S26 Ultra introduces the world’s first built-in Privacy Display, a redesigned chipset, and improved thermal management. Together, these upgrades enhance AI performance, graphics, and CPU efficiency, while ensuring faster, cooler, and more reliable operation throughout the day.

Photography and videography are also upgraded with wider apertures, Nightography Video, Super Steady video, and AI-powered editing tools that make professional-quality content accessible to all users.

Galaxy AI streamlines daily experiences by proactively suggesting actions, organising information, and automating tasks. Features such as Now Nudge, Now Brief, Circle to Search, and upgraded Bixby allow users to interact naturally with their devices.

Integrated AI agents, including Gemini and Perplexity, support multi-step tasks across apps, from booking services to advanced searches, all with minimal input.

Samsung has embedded multiple layers of security and privacy in the Galaxy S26 series. From AI-powered Call Screening and Privacy Alerts to Knox Vault, Knox Matrix, and post-quantum cryptography, users can control data access and protect personal information.

With long-term security updates, seamless software, and Galaxy Buds4 integration, the S26 series aims to combine performance, convenience, and safety in a single, intuitive device.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scotland considers new offence for AI intimate images

The Scottish government has launched a consultation proposing a specific criminal offence for creating AI-generated intimate images without consent. Existing Scots law covers the sharing of such photos, but ministers in Scotland say gaps remain around their creation.

The consultation also seeks views on criminalising digital tools designed solely to produce intimate images and videos. Ministers aim to address harms linked to emerging AI technologies affecting women and girls across Scotland.

Additional proposals include a statutory aggravation where domestic abuse involves a pregnant woman, requiring courts to treat such cases more seriously at sentencing. Measures to strengthen protections against spiking offences are also under review.

Justice Secretary Angela Constance said responses would inform future action to reduce violence against women and girls. The consultation also considers changes to non-harassment orders and examines whether further laws on non-fatal strangulation are needed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Uni.lu expert urges schools to embrace AI

AI should be integrated into classrooms rather than avoided, according to Gilbert Busana of the University of Luxembourg. Speaking to RTL Today, he said ignoring AI would be a disservice to pupils and teachers alike.

Busana argued that AI should be taught both as a standalone subject and across disciplines in Luxembourg schools. Clear guidelines are needed to define when and how pupils may use AI, alongside transparency about its role in assignments.

He stressed that developing AI literacy is essential to protect critical thinking. Assessment methods may shift away from focusing solely on final outputs towards evaluating the learning process itself.

Teachers are increasingly becoming coaches rather than simple transmitters of knowledge. Busana said continuous professional training and collaboration within schools will be vital as AI reshapes education in Luxembourg.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI misuse in online scams involving OpenAI models

OpenAI has reported new instances of its models being exploited in online scams and coordinated information campaigns. The company detailed actions to remove offending accounts and strengthen safeguards, highlighting misuse in fraud and deceptive content creation.

Several cases involved romance and ‘task’ scams, in which AI-generated messages built emotional engagement before requesting payment. One network, dubbed ‘Operation Date Bait,’ used chatbots to promote a fictitious dating service targeting young men in Indonesia.

Another, ‘Operation False Witness,’ saw actors posing as legal professionals to solicit advance fees for non-existent recovery services.

The report also outlined coordinated campaigns leveraging AI to produce articles, social media posts, and comments on geopolitical topics. In ‘Operation Trolling Stone,’ AI-generated content on a Russian arrest in Argentina was shared widely in multiple languages to mimic grassroots engagement.

OpenAI stressed that while AI was sometimes used in these campaigns, engagement was driven largely by the reach and size of the accounts involved.

The company continues to monitor misuse and works with partners and authorities to curb fraudulent or deceptive activity. Its systems have been updated to decline policy-violating requests, and OpenAI noted that not all suspicious content online was generated with its tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Binance targets Greece as EU gateway

Efforts to secure a foothold in Europe have led Binance to select Greece as its entry point for operating under the EU’s Markets in Crypto-Assets framework. A licence would let the exchange offer services across the European Union when the rules take effect in July 2026.

Strategic considerations outweigh speed in the decision. Co-chief executive Richard Teng cited workforce quality, safety, and long-term growth potential as decisive factors, even though several larger EU economies have already issued more licences.

Regulatory attention continues to shape the company’s trajectory. Founder Changpeng Zhao remains a shareholder, as leadership says reforms aim to make the platform one of the most regulated exchanges globally.

Expansion plans unfold amid turbulent market conditions. Bitcoin's price remains well below last year's highs, dampening retail sentiment, yet institutional participation has remained resilient, supporting liquidity amid volatility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kyoto researchers introduce AI monk to support Buddhist rituals

Researchers at Kyoto University have presented an AI robot monk designed to assist with religious ceremonies and spiritual guidance. The prototype, revealed at Shoren-in temple, demonstrates how robotics and faith traditions may coexist.

Equipped with an AI system based on Buddhist scriptures, the robot answers questions about personal struggles and wider social concerns. During a demonstration, it offered reflective advice while performing gestures such as bowing and placing its palms together.

Developers combined a chatbot powered by modern language technology with movements from an existing humanoid robot built by a Chinese manufacturer. Careful programming aimed to reproduce calm behaviour associated with traditional monks.

Japan faces a gradual decline in the number of active temples and clergy, encouraging the exploration of technological support within religious life. Project leaders believe the AI monk could represent a significant shift in preserving spiritual services for future communities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AT&T data breach settlement wins preliminary approval in $177 million deal

A federal judge in Texas has preliminarily approved a $177 million settlement resolving claims that AT&T failed to safeguard consumer data in two separate breaches. The company denies wrongdoing but agreed to establish compensation funds covering affected customers nationwide.

The agreement creates two non-reversionary funds: $149 million for individuals whose personal data appeared on the dark web, and $28 million for customers whose call and text logs were accessed. It covers a March 2024 breach and a separate incident between May 2022 and early 2023.

Eligible class members may submit claims for cash payments, with amounts depending on the number of valid submissions, and may also receive up to 24 months of credit monitoring. The deadline to opt out or object is 17 October 2025, with a final approval hearing set for 3 December 2025.

Legal and administrative costs, attorneys’ fees, and service awards will be paid from the settlement funds. The case resolves claims brought on behalf of all living US residents whose data was exposed in the two AT&T breaches.

The settlement follows other recent legal challenges facing AT&T, including class actions filed by New York pensioners alleging the company misled investors about the environmental impact of its lead-sheathed cables.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!