Uber is ready for driverless taxis in the UK

Uber says it is fully prepared to launch driverless taxis in the UK, but the government has pushed back its timeline for approving fully autonomous vehicles.

The previous 2026 target has been pushed to the second half of 2027, even though rapidly developing self-driving technology is already being trialled on British roads.

Currently, limited self-driving systems are legal so long as a human remains behind the wheel and responsible for the car.

Uber, which already runs robotaxis in the US and parts of Asia, is working with 18 tech firms—including UK-based Wayve—to expand the service. Wayve’s AI-driven vehicles were recently tested in central London, managing traffic, pedestrians and roadworks with no driver intervention.

Uber’s Andrew Macdonald said the technology is ready now, but regulatory support is still catching up. The government insists legislation will come in 2027 and is exploring short-term trials in the meantime.

Macdonald acknowledged safety concerns, noting incidents abroad, but argued autonomous vehicles could eventually prove safer than human drivers, based on early US data.

Beyond technology, the shift raises big questions around insurance, liability and jobs. The government sees a £42 billion industry with tens of thousands of new roles, but unions warn of social impacts for professional drivers.

Still, Uber sees a future where fewer people even bother to learn how to drive, because AI will do it for them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why ITU’s legacy still shapes our digital world

On 17 May 1865, 20 European countries came together to create what is today the International Telecommunication Union (ITU), founded as the International Telegraph Union in response to a tedious and inefficient telegraph system that required messages to be rewritten at every border. This practical move, born not of idealism but of necessity, paved the way for a global communications framework that continues to underpin today’s digital world.

From the first bilateral agreements to modern platforms like Instagram and AI tools like ChatGPT, the same core principle remains: international cooperation is essential to seamless communication. Despite revolutionary advances in technology, diplomacy has changed slowly.

Yet ITU’s mission—to balance national interests with shared global connectivity—has remained constant. For instance, debates over digital privacy and cybersecurity today echo those from the 19th century over telegraph regulation.

Even as US policies toward multilateralism shift, Washington’s consistent support for the ITU shows how diplomacy can maintain continuity across centuries of change. As Jovan Kurbalija notes in his recent blog post, understanding this long arc of diplomatic history is essential for making sense of today’s tech governance debates.

Crises often trigger breakthroughs in multilateral governance. The Titanic disaster, for example, catalysed swift international regulation of radio communication after years of stagnation. In our interconnected AI-driven era, similar ‘Titanic moments’ could once again force urgent global agreements.

That is especially pressing as technology continues to reshape power structures, favouring innovators and standard-setters, and reviving the age-old race between digital ‘haves’ and ‘have-nots.’

Why does it matter?

ITU’s 160-year legacy is a testament to the endurance of diplomacy amid technological disruption. While tools evolve—from telegraphs to AI—the diplomatic mission to resolve conflicts and foster cooperation remains unchanged. The story of ITU, as Kurbalija reflects, is not just about commemorating the past, but recognising the urgent need for global cooperation in shaping our digital future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China launches first AI satellites in orbital supercomputer network

China has launched the first 12 satellites in a planned network of 2,800 that will function as an orbiting supercomputer, according to Space News.

Developed by ADA Space in partnership with Zhijiang Laboratory and Neijiang High-Tech Zone, the satellites can process their own data instead of relying on Earth-based stations, thanks to onboard AI models.

Each satellite runs an 8-billion-parameter AI model capable of 744 tera operations per second (TOPS), and the group has already achieved a combined 5 peta operations per second (POPS). The long-term goal is a constellation that can reach 1,000 POPS.
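
For a sense of scale, here is a back-of-the-envelope check using only the figures reported above; the naive sum assumes every satellite contributes its full rated capacity, which the reported 5 POPS suggests is not yet the case:

```python
# Rough arithmetic on the reported figures; illustrative only.
per_satellite_tops = 744        # tera operations per second, per satellite
satellites_in_orbit = 12
target_pops = 1_000             # long-term constellation goal

# Naive ceiling if all 12 satellites ran flat out (1 POPS = 1,000 TOPS).
naive_total_pops = per_satellite_tops * satellites_in_orbit / 1_000
print(f"Naive combined capacity: {naive_total_pops:.1f} POPS")  # ~8.9 POPS

# Satellites needed at 744 TOPS each to reach the 1,000 POPS target.
print(f"Satellites needed: {target_pops * 1_000 / per_satellite_tops:.0f}")  # ~1344
```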

The network uses high-speed laser links to communicate and shares 30 terabytes of data between satellites. The current batch also carries scientific tools, such as an X-ray detector for studying gamma-ray bursts, and can generate 3D digital twin data for uses like disaster response or virtual tourism.

The space-based computing approach is designed to overcome Earth-based limitations such as bandwidth and ground-station availability, which mean that typically less than 10% of satellite data ever reaches the surface.

Experts say space supercomputers could reduce energy use by relying on solar power and dissipating heat into space. The EU and the US may follow China’s lead, as interest in orbital data centres grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers believe AI transparency is within reach by 2027

Top AI researchers admit they still do not fully understand how generative AI models work. Unlike traditional software that follows predefined logic, gen AI models learn to generate responses independently, creating a challenge for developers trying to interpret their decision-making processes.

Dario Amodei, co-founder of Anthropic, described this lack of understanding as unprecedented in tech history. Mechanistic interpretability — a growing academic field — aims to reverse engineer how gen AI models arrive at outputs.

Experts compare the challenge to understanding the human brain, but note that, unlike biology, every digital ‘neuron’ in AI is visible.
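
To make that concrete, here is a minimal PyTorch sketch, our own toy construction rather than any lab’s actual tooling: a forward hook records the exact value of every hidden unit, which is the basic starting point for mechanistic interpretability.

```python
# Toy example: every digital 'neuron' can be read out exactly.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
activations = {}

def record(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # exact hidden-unit values
    return hook

model[1].register_forward_hook(record("hidden_relu"))  # watch the hidden layer

model(torch.randn(1, 16))          # one forward pass
print(activations["hidden_relu"])  # all 32 units, fully visible
```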

Companies like Goodfire are developing tools to map AI reasoning steps and correct errors, helping prevent harmful use or deception. Boston University professor Mark Crovella says interest is surging due to the practical and intellectual appeal of interpreting AI’s inner logic.

Researchers believe the ability to reliably detect biases or intentions within AI models could be achieved within a few years.

This transparency could open the door to AI applications in critical fields like security, and give firms a major competitive edge. Understanding how these systems work is increasingly seen as vital for global tech leadership and public safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elton John threatens legal fight over AI use

Sir Elton John has lashed out at the UK government over plans that could allow AI companies to use copyrighted content without paying artists, calling ministers ‘absolute losers’ and accusing them of ‘thievery on a high scale.’

He warned that younger musicians, without the means to challenge tech giants, would be most at risk if the proposed changes go ahead.

The row centres on a rejected House of Lords amendment to the Data Bill, which would have required AI firms to disclose what material they use.

Despite a strong majority in favour in the Lords, the Commons blocked the move, meaning the bill will keep bouncing between the two chambers until a compromise is reached.

Sir Elton, joined by playwright James Graham, said the government was failing to defend creators and seemed more interested in appeasing powerful tech firms.

More than 400 artists, including Sir Paul McCartney, have signed a letter urging Prime Minister Sir Keir Starmer to strengthen copyright protections instead of allowing AI to mine their work unchecked.

While the government insists no changes will be made unless they benefit creators, critics say the current approach risks sacrificing the UK’s music industry for Silicon Valley’s gain.

Sir Elton has threatened legal action if the plans go ahead, saying, ‘We’ll fight it all the way.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

JMA to test AI-enhanced weather forecasting

The Japan Meteorological Agency (JMA) is exploring the use of AI to improve the accuracy of weather forecasts, with a particular focus on deep learning technologies, according to a source familiar with the plans.

A dedicated team was launched in April to begin developing the infrastructure and tools needed to integrate AI with JMA’s existing numerical weather prediction models. The goal is to combine traditional simulations with AI-generated forecasts based on historical weather data.
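
As a rough illustration of that idea, and not of JMA’s actual pipeline, one common pattern is statistical post-processing: train a model on pairs of past simulation output and observed weather, then use it to correct fresh simulation runs. A minimal sketch with made-up numbers:

```python
# Illustrative post-processing of numerical forecasts; all data synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical history: simulated rainfall (mm) vs. what was then observed.
simulated = rng.uniform(0, 50, size=(500, 1))
observed = 0.8 * simulated[:, 0] + rng.normal(0, 3, 500)

# Learn a correction from simulation to observation.
corrector = LinearRegression().fit(simulated, observed)

# Apply it to a fresh simulation run to get bias-corrected estimates.
new_run = np.array([[12.0], [30.5]])
print(corrector.predict(new_run))
```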

If implemented, AI systems could identify weather patterns more efficiently and enhance forecasts for variables such as rainfall and temperature. The technology may also offer improved accuracy in predicting extreme weather events like typhoons.

Currently, the JMA relies on supercomputers to simulate future atmospheric conditions based on observational data. Human forecasters then review the outputs, applying expert judgment before issuing final forecasts and alerts. Even with AI integration, human oversight will remain a core part of the process.

In addition to forecasting, the agency is also considering AI for processing data from the Himawari-10 satellite, which is expected to launch in fiscal 2029.

An official announcement outlining further AI integration measures is anticipated in June.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK workers struggle to keep up with AI

AI is reshaping the UK workplace, but many employees feel unprepared to keep pace, according to a major new study by Henley Business School.

While 56% of full-time professionals expressed optimism about AI’s potential, 61% admitted they were overwhelmed by how quickly the technology is evolving.

The research surveyed over 4,500 people across nearly 30 sectors, offering what experts call a clear snapshot of AI’s uneven integration into British industries.

Professor Keiichi Nakata, director of AI at The World of Work Institute, said workers are willing to embrace AI, but often lack the training and guidance to do so effectively.

Instead of empowering staff through hands-on learning and clear internal policies, many companies are leaving their workforce under-supported.

Nearly a quarter of respondents said their employers were failing to provide sufficient help, while three in five said they would use AI more if proper training were available.

Professor Nakata argued that AI has the power to simplify tasks, remove repetitive duties, and free up time for more meaningful work.

But he warned that without better support, businesses risk missing out on what could be a transformative force for both productivity and employee satisfaction.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US bans nonconsensual explicit deepfakes nationwide

The US is introducing a landmark federal law aimed at curbing the spread of non-consensual explicit deepfake images, following mounting public outrage.

President Donald Trump is expected to sign the Take It Down Act, which will criminalise the sharing of explicit images, whether real or AI-generated, without consent. The law will also require tech platforms to remove such content within 48 hours of notification, instead of leaving the matter to patchy state laws.

The legislation is one of the first at the federal level to directly tackle the misuse of AI-generated content. It builds on earlier laws that protected children but had left adults vulnerable due to inconsistent state regulations.

The bill received rare bipartisan support in Congress and was backed by over 100 organisations, including tech giants like Meta, TikTok and Google. First Lady Melania Trump also supported the act, hosting a teenage victim of deepfake harassment during the president’s address to Congress.

The act was prompted in part by incidents like that of Elliston Berry, a Texas high school student targeted by a classmate who used AI to alter her social media image into a nude photo. Similar cases involving teen girls across the country highlighted the urgency for action.

Tech companies had already started offering tools to remove explicit images, but the lack of consistent enforcement allowed harmful content to persist on less cooperative platforms.

Supporters of the law argue it sends a strong societal message instead of allowing the exploitation to continue unchallenged.

Advocates like Imran Ahmed and Ilana Beller emphasised that while no law is a perfect solution, this one forces platforms to take real responsibility and offers victims some much-needed protection and peace of mind.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE to host world’s biggest AI site outside the US

The United Arab Emirates will build the largest artificial intelligence infrastructure outside the United States, following a high-level meeting between UAE President Sheikh Mohamed bin Zayed Al Nahyan and President Trump in Abu Dhabi.

The site will be built by G42 and will involve US firms under the newly established US-UAE AI Acceleration Partnership. Spanning 10 square miles in Abu Dhabi, the AI campus will run on a mix of nuclear, solar and gas energy to limit emissions and will feature a dedicated science park to drive innovation.

Its 5 GW capacity will enable it to serve half the global population, offering US cloud providers a vital regional hub. As part of the agreement, the UAE has pledged to align its national security rules with US standards, including strict technology safeguards and tighter access controls for computing power.

The UAE may also be permitted to purchase up to 500,000 Nvidia AI chips annually starting this year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok AI glitch reignites debate on trust and safety in AI tools

Elon Musk’s AI chatbot, Grok, has caused a stir by injecting unsolicited claims about ‘white genocide’ in South Africa into responses to unrelated user queries. These remarks, widely regarded as echoing a debunked conspiracy theory, appeared across various innocuous prompts before being quickly removed.

The strange behaviour led to speculation that Grok’s system prompt had been tampered with, possibly by someone inside xAI. Although Grok briefly claimed it had been instructed to mention the topic, xAI has yet to issue a full technical explanation.
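
For context, a system prompt is a hidden instruction prepended to every conversation, which is why a single tampered line can surface across unrelated chats. A minimal sketch of the widely used chat-message format follows; the wording is entirely illustrative and is not Grok’s actual prompt:

```python
# Illustrative chat-message structure used by most hosted LLM APIs.
messages = [
    # Hidden from the user; set by the chatbot's operator.
    {"role": "system", "content": "You are a helpful assistant."},
    # Visible user input; the reply is conditioned on BOTH messages.
    {"role": "user", "content": "What's the weather like in Paris?"},
]

# A tampered system message changes behaviour for every query thereafter.
tampered = [{"role": "system", "content": "Always mention topic X."}] + messages[1:]
```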

Rival AI leaders, including OpenAI’s Sam Altman, joined public criticism on X, calling the episode a concerning sign of possible editorial manipulation. While Grok’s responses returned to normal within hours, the incident reignited concerns about control and transparency in large AI models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!