AI becomes central to biotech discovery and drug development

The biotechnology industry is moving from early AI experimentation to fully integrated discovery systems that embed AI into everyday research operations.

According to the 2026 Biotech AI Report from Benchling, leading organisations are reshaping data environments and R&D structures, making AI a core part of the drug development process.

Predictive models, such as protein structure prediction and docking simulations, are accelerating early-stage discovery, helping scientists identify targets faster and improve accuracy.

Challenges persist in generative design, biomarker analysis, and ADME prediction, where adoption lags due to fragmented or poor-quality data.

Organisations overcoming these hurdles invest in high-quality, well-annotated measurements and strong integration between wet and dry lab work. This creates a continuous learning cycle that drives faster insights and reduces experimental dead ends.

Talent strategies are evolving to place AI expertise directly in R&D teams. Many firms upskill existing scientific staff to act as ‘scientific translators,’ bridging biology, regulatory needs, and machine learning.

Embedding AI leadership within research teams or using hybrid models reduces handoffs and ensures AI tools remain practical in real-world experiments.

Biotech firms combine in-house development with commercial components, following a ‘build what differentiates, buy what scales’ strategy. Confidence in AI is rising, driving investment in infrastructure, modelling, and integrated AI workflows for research.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI use among students surges as chatbots reshape schoolwork

More than half of US teenagers use AI tools to help with schoolwork, according to a new Pew Research Center study. The survey found that 54% of students aged 13 to 17 have used chatbots such as OpenAI’s ChatGPT or Microsoft’s Copilot to research assignments or solve maths problems.

Usage has risen in recent years. In 2024, 26% of US teens reported using ChatGPT for schoolwork, up from 13% in 2023. The latest survey of 1,458 teens and parents found 44% use AI for some schoolwork, while 10% rely on chatbots for most tasks.

Researchers say AI assistance is becoming routine in classrooms. Colleen McClain, a senior researcher at Pew and co-author of the report, said chatbot use for schoolwork is now a common practice among teens.

Findings come amid an intensifying debate over generative AI in education. Supporters argue that schools should teach students to use and evaluate AI tools, while critics warn of misinformation, reduced critical thinking, and increased cheating.

Recent research has raised questions about learning outcomes. One study by Cambridge University Press & Assessment and Microsoft Research found that students who took notes without chatbot support showed stronger reading comprehension than those using AI assistance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU faces renewed pressure to ease industrial AI rules

European governments are renewing pressure to scale back industrial AI rules rather than expand regulatory demands.

Ten countries, including Germany, France, Italy, Spain and Poland, have urged the EU to clarify how the AI Act overlaps with machinery law and to adopt more realistic implementation deadlines. Their position is notable, given that the legislation already outlines its relationship with existing industrial frameworks.

Parliament’s centre and centre-right groups are pushing for deeper cuts. The European People’s Party wants all industrial sectors to move to a lighter regime, while Renew is advocating broad exemptions for industrial and business-to-business AI.

The European Conservatives and Reformers are also seeking reductions for non-safety-related systems. Together, the three groups edge close to a parliamentary majority, signalling momentum for a broader deregulation push.

No sweeping changes have been added to the AI omnibus so far, yet policymakers expect more adjustments ahead. The package must be finalised by August, so legislators are focused on meeting the deadline rather than reopening fundamental debates.

Broader revisions to industrial AI rules are likely to reappear in the Commission’s forthcoming Digital Fitness Check, which will reassess how multiple EU tech laws interact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan probes Microsoft cloud licensing

Japan’s Fair Trade Commission has launched an investigation into Microsoft over suspected antitrust violations. Authorities conducted an on-site inspection of the company’s Japanese subsidiary in Tokyo on Wednesday, according to sources.

Regulators are examining whether Microsoft charged higher licensing fees to customers running Microsoft 365 and Windows on rival cloud platforms rather than on Microsoft Azure. The inquiry centres on concerns that software dominance may have restricted competition in Japan’s cloud market.

Microsoft’s Japanese unit said it would cooperate fully with the Fair Trade Commission. The watchdog is assessing whether pricing practices unfairly hindered rivals such as Amazon and Google, which also compete in Japan’s expanding cloud sector.

Japan’s Fair Trade Commission has intensified oversight of major technology firms in recent years. Previous actions include investigations into Amazon Japan and a 2025 order requiring Google to end certain preinstallation practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Scotland considers new offence for AI intimate images

The Scottish government has launched a consultation proposing a specific criminal offence for creating AI-generated intimate images without consent. Existing Scots law covers the sharing of such images, but ministers say gaps remain around their creation.

The consultation also seeks views on criminalising digital tools designed solely to produce intimate images and videos. Ministers aim to address harms linked to emerging AI technologies affecting women and girls across Scotland.

Additional proposals include a statutory aggravation where domestic abuse involves a pregnant woman, requiring courts to treat such cases more seriously at sentencing. Measures to strengthen protections against spiking offences are also under review.

Justice Secretary Angela Constance said responses would inform future action to reduce violence against women and girls. The consultation also considers changes to non-harassment orders and examines whether further laws on non-fatal strangulation are needed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Uni.lu expert urges schools to embrace AI

AI should be integrated into classrooms rather than avoided, according to Gilbert Busana of the University of Luxembourg. Speaking to RTL Today, he said ignoring AI would be a disservice to pupils and teachers alike.

Busana argued that AI should be taught both as a standalone subject and across disciplines in Luxembourg schools. Clear guidelines are needed to define when and how pupils may use AI, alongside transparency about its role in assignments.

He stressed that developing AI literacy is essential to protect critical thinking. Assessment methods may shift away from focusing solely on final outputs towards evaluating the learning process itself.

Teachers are increasingly becoming coaches rather than simple transmitters of knowledge. Busana said continuous professional training and collaboration within schools will be vital as AI reshapes education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Binance targets Greece as EU gateway

Efforts to secure a foothold in Europe have led Binance to select Greece as its entry point for operating under the EU’s Markets in Crypto-Assets framework. A licence would let the exchange offer services across the European Union when the rules take effect in July 2026.

Strategic considerations outweigh speed in the decision. Co-chief executive Richard Teng cited workforce quality, safety, and long-term growth potential as decisive factors, even though several larger EU economies have already issued more licences.

Regulatory attention continues to shape the company’s trajectory. Founder Changpeng Zhao remains a shareholder, while leadership says reforms aim to make the platform one of the most regulated exchanges globally.

Expansion plans unfold amid turbulent market conditions. Bitcoin’s price remains well below last year’s highs, dampening retail sentiment, yet institutional participation has remained resilient, supporting liquidity amid volatility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenClaw creator Peter Steinberger urges playful approach to AI coding

Peter Steinberger, creator of the viral AI agent OpenClaw and now at OpenAI, urged developers to approach AI experimentation with curiosity rather than rigid plans. On the Builders Unscripted podcast, he said progress often comes from exploration rather than expertise.

He said OpenClaw began without a roadmap. Early tests included a WhatsApp integration he paused, expecting major labs to build similar tools. When that did not happen, he developed his own prototype and refined it through real-world use.

Using the tool in low-connectivity environments helped clarify its value. Through trial and iteration, he observed how modern AI models can generate workable solutions without explicit programming, reshaping how developers think about problem-solving and workflows.

He cautioned that coding with AI is a skill that requires practice. Comparing it to learning guitar, Steinberger said early frustration is common, but persistence leads to improved intuition and efficiency over time.

Steinberger argued that developers who focus on solving problems and creating useful tools will remain in demand. Treating AI as a collaborative instrument rather than a shortcut, he said, is essential in a rapidly shifting technology landscape.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI transforming the factory floor with smart automation and real-time oversight

According to industrial technology reporting, AI is being integrated across factory floor operations to improve efficiency, safety and productivity. Key applications include predictive maintenance, quality inspection, workflow optimisation and human-AI collaboration tools.

Machine learning models analyse sensor data from equipment (motors, conveyors, robots) to forecast failures before they occur, reducing unplanned downtime and lowering maintenance costs. Computer vision AI inspects products at high speed, detecting defects with greater accuracy than human inspection and enabling real-time corrective action.
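The predictive-maintenance idea described above can be illustrated with a minimal sketch. The snippet below flags sensor readings that deviate sharply from a rolling baseline; the vibration values, window size, and threshold are hypothetical, and real deployments would use trained ML models on multichannel sensor streams rather than a simple statistical rule.

```python
from collections import deque

def rolling_anomalies(readings, window=5, threshold=3.0):
    """Flag indices where a reading deviates sharply from the rolling mean.

    A toy stand-in for the trained failure-forecasting models the article
    describes; `readings` here are hypothetical vibration amplitudes.
    """
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mean = sum(history) / window
            std = (sum((x - mean) ** 2 for x in history) / window) ** 0.5
            if std > 0 and abs(value - mean) / std > threshold:
                flagged.append(i)
        history.append(value)
    return flagged

# Steady vibration with one sudden spike, the kind of pattern that
# can precede a bearing failure.
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 9.0, 1.0]
print(rolling_anomalies(signal))  # → [6]
```

The same anomaly-scoring logic, applied continuously to live sensor feeds, is what lets maintenance teams intervene before an outright failure causes unplanned downtime.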

AI systems analyse production workflows to identify bottlenecks, recommend adjustments to schedules and resource allocation, and help balance workload across stations. Augmented reality and AI assistants support factory workers with contextual guidance, safety alerts and hands-free documentation during complex tasks.

Manufacturers adopting these systems report gains in production reliability, reduced scrap rates and more flexible responsiveness to demand variability. However, the report notes challenges around data quality, legacy equipment integration and workforce upskilling.

Ensuring that AI tools are transparent and explainable for operators, rather than opaque ‘black box’ systems, is also highlighted as necessary for trust and operational safety.

These trends reflect a broader shift toward ‘smart factories’ within the framework of Industry 4.0, where digital tools across hardware, networks, data analytics and AI collaborate to support lean, adaptive and resilient manufacturing systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy orders Amazon to stop processing sensitive employee data after privacy ruling

The Italian data protection authority has ordered Amazon Italia Logistics to halt processing of sensitive employee data after investigators found that the company gathered details ranging from health conditions to union involvement.

Information about workers’ private lives and family members had also been collected, often retained for a decade through internal tracking systems rather than limited to what Italian labour rules allow.

Regulators discovered that some data originated from cameras positioned near restrooms and staff break areas, a practice that breached EU privacy standards.

The watchdog concluded that the company’s monitoring went far beyond what employers are permitted to compile when assessing staff performance or workplace needs.

Amazon responded by stressing that protecting employee information remains a priority and said that internal rules and training programmes are designed to ensure compliance. The company added that any findings from the Italian authority would prompt a review of its procedures instead of being dismissed.

The order arrives as Amazon attempts to regain its lobby badges at the European Parliament.

Access was suspended in 2024 after senior representatives declined to attend hearings on warehouse working conditions, and opposition from MEPs continues to place pressure on Parliament President Roberta Metsola to reject reinstatement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!