Can AI really transform drug development?

The growing use of AI in drug development is dividing opinions among researchers and industry experts. Some believe AI can significantly reduce the time and cost of bringing new medicines to market, while others argue that it has yet to solve the high failure rates seen in clinical trials.

AI-driven tools have already helped identify potential drug candidates more quickly, with some companies reducing the preclinical testing period from several years to just 30 months. However, experts point out that these early successes don’t always translate to breakthroughs in human trials, where most drug failures occur.

Unlike fields such as image recognition, AI in pharmaceuticals faces unique challenges due to limited high-quality data. Experts say AI’s impact could improve if it focuses on understanding why drugs fail in trials, such as problems with dosage, safety, and efficacy. They also recommend new trial designs that incorporate AI to better predict which drugs will succeed in later stages.

While AI won’t revolutionise drug development overnight, researchers agree it can help tackle persistent problems and streamline the process. But achieving lasting results will require better collaboration between AI specialists and drug developers to avoid repeating past mistakes.

AI and speed cameras to tackle dangerous Devon road

A notorious stretch of the A361 in Devon will receive £1 million worth of AI and speed camera technology to improve road safety. The investment, part of a £5 million grant from the Department for Transport (DfT), comes after the road was identified as ‘high risk,’ with three fatalities and 30 serious injuries recorded between 2018 and 2022. AI-powered cameras will detect offences such as drivers using mobile phones and failing to wear seatbelts, while speed cameras will be installed at key locations.

A pilot scheme last August recorded nearly 1,800 potential offences along the route, highlighting the need for stricter enforcement. The latest plans include three fixed speed cameras at Ilfracombe, Knowle, and Ashford, as well as two average speed camera systems covering longer stretches of the road. AI cameras will be rotated between different locations to monitor driver behaviour more effectively.
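Average speed camera systems rest on simple arithmetic: paired cameras timestamp the same number plate at the start and end of a monitored stretch, and the average speed is the distance divided by the elapsed time. The sketch below is purely illustrative, with made-up figures rather than actual A361 distances or limits:

```python
# Toy illustration of average-speed enforcement: two cameras record when a
# plate enters and leaves a stretch; average speed = distance / elapsed time.
def average_speed_mph(distance_miles, entry_time_s, exit_time_s):
    hours = (exit_time_s - entry_time_s) / 3600
    return distance_miles / hours

# A car covers a hypothetical 2-mile monitored stretch in 100 seconds.
speed = average_speed_mph(2.0, 0.0, 100.0)
print(round(speed))  # 72 -- above a 60 mph limit, so the car would be flagged
```

Because the measurement spans the whole stretch, briefly slowing for each camera does not help, which is why such systems are used on longer runs of road.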

Councillor Stuart Hughes, Devon County Council’s cabinet member for highways, expressed pride in the region’s adoption of AI for road safety improvements. The remaining £4 million from the DfT grant will be allocated to upgrading junctions and improving access for pedestrians and cyclists along the A361.

Spain urges neutrality from social media platforms

The Spanish government has stressed that social media platforms must remain neutral and avoid interfering in political matters. The statement came after X’s owner, Elon Musk, commented on crime data involving foreigners in Catalonia.

Government spokesperson Pilar Alegria emphasised the need for absolute impartiality from such platforms when responding to questions about Musk’s remarks and his ongoing disagreements with European leaders like Keir Starmer and Emmanuel Macron.

Musk had reposted crime statistics from a Spanish newspaper, leading to criticism from Catalan officials. Catalonia’s Socialist leader Salvador Illa warned against using the region’s name to promote hate speech, while Spanish Prime Minister Pedro Sanchez rejected any link between immigration and crime rates.

The Spanish Interior Ministry previously reported stable or declining crime rates, affirming that immigration has no significant impact on criminal activity.

Faculty AI develops AI for military drones

Faculty AI, a consultancy company with significant experience in AI, has been developing AI technologies for both civilian and military applications. Known for its close work with the UK government on AI safety, the NHS, and education, Faculty is also exploring the use of AI in military drones. The company has been involved in testing AI models for the UK’s AI Safety Institute (AISI), which was established to study the safety implications of advanced AI.

While Faculty has worked extensively with AI in non-lethal areas, its military work raises concerns about the potential use of autonomous systems in weapons, including drones. Though Faculty has not disclosed whether its AI work extends to lethal drones, it continues to face scrutiny over its dual role of advising the government on AI safety while working with defence clients.

The company has also generated some controversy because of its growing influence in both the public and private sectors. Some experts, including Green Party members, have raised concerns about potential conflicts of interest due to Faculty’s widespread government contracts and its private sector involvement in AI, such as its collaborations with OpenAI and defence firms. Faculty’s work on AI safety is seen as crucial, but critics argue that its broad portfolio could create a risk of bias in the advice it provides.

Despite these concerns, Faculty maintains that its work is guided by strict ethical policies, and it has emphasised its commitment to ensuring AI is used safely and responsibly, especially in defence applications. As AI continues to evolve, experts call for caution, with discussions about the need for human oversight in the development of autonomous weapons systems growing more urgent.

UK develops first quantum clock for military use

The Ministry of Defence announced that the UK is developing its first quantum clock, a cutting-edge device designed to enhance military intelligence and reconnaissance. Created by the Defence Science and Technology Laboratory, the clock boasts unparalleled precision, losing less than one second over billions of years.
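The quoted precision can be sanity-checked with back-of-the-envelope arithmetic: losing one second over a billion years corresponds to a fractional timing error of roughly 3e-17, the regime in which optical and quantum clocks operate. A purely illustrative calculation:

```python
# Illustrative arithmetic only: convert "loses one second over a billion
# years" into a fractional timing error.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 3.16e7 seconds

def fractional_error(seconds_lost, years):
    return seconds_lost / (years * SECONDS_PER_YEAR)

err = fractional_error(1, 1e9)
print(f"{err:.1e}")  # about 3.2e-17
```

For comparison, everyday GPS timing depends on atomic clocks accurate to roughly one second in millions, not billions, of years, which is why this class of clock is described as far more precise.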

By leveraging quantum mechanics to measure atomic energy fluctuations, the technology reduces reliance on vulnerable GPS systems, offering greater resilience against disruption by adversaries. The project marks the UK’s first attempt at building such a device, with deployment anticipated within five years.

While not the world’s first quantum clock (similar technology was pioneered in the US 15 years ago), the UK effort highlights a growing global race in quantum advancements. Quantum clocks hold potential beyond military applications, impacting satellite navigation, telecommunications, and scientific research.

Countries like the United States and China are heavily investing in quantum technology, seeing its transformative potential. Future UK research aims to miniaturise the quantum clock for broader applications, including integration into military vehicles and aircraft, underscoring its strategic importance in defence and industry.

China unveils Rotunbot RT-G: A groundbreaking advancement in robotic policing technology

China has introduced a groundbreaking addition to its law enforcement toolkit – the Rotunbot RT-G, a spherical robot designed to aid police in high-speed chases and challenging terrains. Developed by Logon Technology, this 276-pound robotic marvel can travel up to 22 mph on land and water, navigate mud and rivers, and even withstand drops from ledges. Its rapid acceleration and amphibious capabilities make it a unique asset for pursuit scenarios.

Equipped with advanced technology, the RT-G boasts GPS for precise navigation, cameras, ultrasonic sensors, and systems for tracking and avoiding obstacles. Gyroscopic self-stabilisation ensures smooth operation, while a suite of non-lethal tools—including tear gas dispensers, net shooters, and acoustic crowd dispersal devices—enables it to handle diverse law enforcement tasks humanely and effectively.

The RT-G is already in use in Wenzhou, in China’s Zhejiang province, where it assists police in commercial zones. While its real-world performance shows promise, limitations such as instability during turns and difficulty navigating stairs reveal areas for improvement. Despite these challenges, the Rotunbot RT-G represents a significant leap in robotic policing technology, blending innovation with practicality.

Apheris revolutionises data privacy and AI in life sciences with federated computing

Privacy and regulatory concerns have long hindered AI’s reliance on data, especially in sensitive fields like healthcare and life sciences. Apheris, a German startup co-founded by Robin Röhm, aims to solve this problem using federated computing—a decentralised approach that trains AI models without moving sensitive data.
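The general idea behind federated computing can be illustrated with federated averaging: each data owner trains a model on its own data locally and shares only model parameters, which a coordinator combines, so raw records never leave the site. The sketch below is a generic, hypothetical illustration in plain NumPy, not Apheris’s actual API:

```python
# Minimal sketch of federated averaging (FedAvg): clients train locally and
# only parameters -- never raw data -- are sent to the coordinator.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: linear regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Coordinator step: weight each client's parameters by its dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two "hospitals" hold private data that never leaves their site.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

After the rounds complete, the shared model closely recovers the underlying relationship even though neither dataset was ever pooled, which is the property that makes the approach attractive in regulated fields like healthcare.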

The company’s approach is gaining traction among prominent clients like Roche and hospitals, and its technology is already being used in collaborative drug discovery efforts by pharmaceutical giants such as Johnson & Johnson and Sanofi. Apheris recently secured $8.25 million in Series A funding led by OTB Ventures and eCAPITAL, bringing its total funding to $20.8 million.

The funding follows a pivotal shift in 2023 to focus on the needs of data owners in the pharmaceutical and life sciences sectors. The pivot has paid off, quadrupling the company’s revenue since the launch of its redefined product, the Apheris Compute Gateway, which securely bridges local data and AI models.

With its new funding, Apheris plans to expand its team and refine its AI-driven solutions for complex challenges like protein prediction. By prioritising data security and privacy, the company aims to unlock previously inaccessible data for innovation, addressing a core barrier to AI’s transformative potential in life sciences.

Debate over AI regulation intensifies amidst innovation and safety concerns

In recent years, debates over AI have intensified, oscillating between catastrophic warnings and optimistic visions. Technologists, once at the forefront of calling for caution, have been overshadowed by the tech industry’s emphasis on generative AI’s lucrative potential.

Dismissed as ‘AI doomers,’ critics warn of existential threats—from mass harm to societal destabilisation—while Silicon Valley champions the transformative benefits of AI, urging fewer regulations to accelerate innovation. The year 2023 marked a pivotal moment for AI awareness, with luminaries like Elon Musk and over 1,000 experts calling for a development pause, citing profound risks.

US President Biden’s AI executive order aimed to safeguard Americans, and regulatory discussions gained mainstream traction. However, 2024 saw this momentum falter as investment in AI skyrocketed and safety-focused voices dwindled.

High-profile debates, like California’s SB 1047—a bill addressing catastrophic AI risks—ended in a veto, highlighting resistance from powerful tech entities. Critics argued that such legislation stifled innovation, while proponents lamented the lack of long-term safety measures.

Amid this tug-of-war, optimistic narratives, like Marc Andreessen’s essay ‘Why AI Will Save the World,’ gained prominence. Advocating rapid, unregulated AI development, Andreessen and others argued this approach would bolster competitiveness and prevent monopolisation.

Yet, detractors questioned the ethics of prioritising profit over societal concerns, especially as cases like AI-driven child safety failures underscored emerging risks.

Why does it matter?

Looking ahead to 2025, the AI safety movement faces an uphill battle. Policymakers hint at revisiting stalled regulations, signalling hope for progress. However, with influential players opposing stringent oversight, the path to balanced AI governance remains uncertain. As society grapples with AI’s rapid evolution, the challenge lies in addressing its vast potential and inherent risks.

Plans for major structural change announced by OpenAI

OpenAI has unveiled plans to transition its for-profit arm into a Delaware-based public benefit corporation (PBC). The move aims to attract substantial investment as the competition to develop advanced AI intensifies, and the proposed structure intends to prioritise societal interests alongside shareholder value, setting the company apart from traditional corporate models.

The shift marks a significant step for OpenAI, which started as a nonprofit in 2015 before establishing a for-profit division to fund high-cost AI development. Its latest funding round, which valued the company at $157 billion, necessitated the structural change to eliminate a profit cap for investors, enabling greater financial backing. The nonprofit will retain a substantial stake in the restructured company, ensuring alignment with its original mission.

OpenAI faces criticism and legal challenges over the move. Elon Musk, a co-founder and vocal critic, has filed a lawsuit claiming the changes prioritise profit over public interest. Meta Platforms has also urged regulatory intervention. Legal experts suggest the PBC status offers limited enforcement of its mission-focused commitments, relying on shareholder influence to maintain the balance between profit and purpose.

By adopting this structure, OpenAI aims to align with competitors like Anthropic and xAI, which have similarly raised billions in funding. Analysts view the move as essential for securing the resources needed to remain a leader in the AI sector, though significant hurdles remain.

AI robot stuns with record-breaking basketball shot

A humanoid robot named CUE6 has captivated audiences in Japan with its basketball prowess, achieving a Guinness World Record for the longest shot by a humanoid robot. Developed by Toyota engineers, the robot’s achievement highlights the potential of AI in mimicking human precision and adapting to complex tasks.

CUE6’s journey began in 2017 as an experimental project. Starting with LEGO-based prototypes, the team gradually refined the robot’s capabilities, culminating in its ability to dribble, handle balls, and adapt its movements based on real-time analysis. By 2019, the robot had already achieved a remarkable milestone: 2,020 consecutive free throws. The latest version, CUE6, demonstrated the power of AI by recalibrating its shot after a miss to secure the record on its second attempt.

Toyota engineers view CUE6 as more than a novelty. The project serves as a testing ground for AI systems capable of dynamic learning and adaptation. While the immediate goal of creating a robot that can dunk like Michael Jordan remains aspirational, the technologies developed for CUE6 in Japan have far-reaching implications beyond sports, from automation to healthcare.