China launches first AI satellites in orbital supercomputer network

China has launched the first 12 satellites in a planned network of 2,800 that will function as an orbiting supercomputer, according to Space News.

Developed by ADA Space in partnership with Zhijiang Laboratory and Neijiang High-Tech Zone, the satellites carry onboard AI models that let them process their own data in orbit instead of relying on Earth-based stations.

Each satellite runs an 8-billion-parameter AI model capable of 744 tera operations per second (TOPS), and the first batch has already achieved a combined 5 peta operations per second (POPS). The long-term goal is a constellation that can reach 1,000 POPS.
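These figures invite a quick sanity check. The back-of-the-envelope sketch below uses only the numbers reported above and assumes each satellite contributes its full 744 TOPS peak; real aggregate throughput will depend on workload and the inter-satellite network.

```python
# Rough arithmetic check on the reported compute figures (assumptions, not official specs).
TOPS_PER_SATELLITE = 744           # reported peak per satellite, tera-operations/s
DEPLOYED = 12                      # satellites in the first launch
TARGET_POPS = 1_000                # long-term constellation goal, peta-operations/s

# 1 POPS = 1,000 TOPS
peak_pops_now = DEPLOYED * TOPS_PER_SATELLITE / 1_000
print(f"Theoretical peak of first batch: {peak_pops_now:.1f} POPS (5 POPS reported in practice)")

# Satellites of this class needed to hit the 1,000 POPS target at peak throughput
needed = TARGET_POPS * 1_000 / TOPS_PER_SATELLITE
print(f"Satellites needed at 744 TOPS each: ~{needed:.0f} of the planned 2,800")
```

The gap between the roughly 8.9 POPS theoretical peak of twelve such satellites and the 5 POPS reported in practice presumably reflects sustained, networked throughput rather than a simple sum of per-satellite peaks.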

The satellites communicate over high-speed laser links and can share 30 terabytes of data among themselves. The current batch also carries scientific instruments, including an X-ray detector for studying gamma-ray bursts, and can generate 3D digital twin data for uses such as disaster response and virtual tourism.

The space-based computing approach is designed to overcome Earth-bound constraints such as limited bandwidth and ground station availability, which mean that typically less than 10% of satellite data ever reaches the surface.

Experts say space supercomputers could reduce energy use by relying on solar power and dissipating heat into space. The EU and the US may follow China’s lead, as interest in orbital data centres grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers believe AI transparency is within reach by 2027

Top AI researchers admit they still do not fully understand how generative AI models work. Unlike traditional software that follows predefined logic, gen AI models learn to generate responses independently, creating a challenge for developers trying to interpret their decision-making processes.

Dario Amodei, co-founder of Anthropic, described this lack of understanding as unprecedented in tech history. Mechanistic interpretability — a growing academic field — aims to reverse engineer how gen AI models arrive at outputs.

Experts compare the challenge to understanding the human brain, but note that, unlike biology, every digital ‘neuron’ in AI is visible.

Companies like Goodfire are developing tools to map AI reasoning steps and correct errors, helping prevent harmful use or deception. Boston University professor Mark Crovella says interest is surging due to the practical and intellectual appeal of interpreting AI’s inner logic.

Researchers believe the ability to reliably detect biases or intentions within AI models could be achieved within a few years.

This transparency could open the door to AI applications in critical fields like security, and give firms a major competitive edge. Understanding how these systems work is increasingly seen as vital for global tech leadership and public safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elton John threatens legal fight over AI use

Sir Elton John has lashed out at the UK government over plans that could allow AI companies to use copyrighted content without paying artists, calling ministers ‘absolute losers’ and accusing them of ‘thievery on a high scale.’

He warned that younger musicians, without the means to challenge tech giants, would be most at risk if the proposed changes go ahead.

The row centres on a rejected House of Lords amendment to the Data Bill, which would have required AI firms to disclose what material they use.

Despite a strong majority in favour in the Lords, the Commons blocked the move, meaning the bill will keep bouncing between the two chambers until a compromise is reached.

Sir Elton, joined by playwright James Graham, said the government was failing to defend creators and seemed more interested in appeasing powerful tech firms.

More than 400 artists, including Sir Paul McCartney, have signed a letter urging Prime Minister Sir Keir Starmer to strengthen copyright protections instead of allowing AI to mine their work unchecked.

While the government insists no changes will be made unless they benefit creators, critics say the current approach risks sacrificing the UK’s music industry for Silicon Valley’s gain.

Sir Elton has threatened legal action if the plans go ahead, saying, ‘We’ll fight it all the way.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

JMA to test AI-enhanced weather forecasting

The Japan Meteorological Agency (JMA) is exploring the use of AI to improve the accuracy of weather forecasts, with a particular focus on deep learning technologies, according to a source familiar with the plans.

A dedicated team was launched in April to begin developing the infrastructure and tools needed to integrate AI with JMA’s existing numerical weather prediction models. The goal is to combine traditional simulations with AI-generated forecasts based on historical weather data.
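JMA has not published details of its models, but the basic workflow of correcting a physics-based forecast with a model learned from historical data can be sketched in a few lines. The example below is purely illustrative: the temperature values are invented, and a simple linear fit stands in for the deep learning methods JMA is exploring.

```python
# Minimal sketch: learn a statistical correction to numerical-model output from past data.
# Illustrative only -- JMA's actual approach uses deep learning, and these arrays are made up.
import numpy as np

# Historical pairs: what the numerical model predicted vs. what was observed (degrees C)
nwp_forecast_past = np.array([18.2, 21.5, 25.1, 28.3, 30.0])
observed_past     = np.array([17.0, 20.8, 24.0, 27.5, 29.1])

# Fit a simple linear bias correction: observed ~= a * forecast + b
a, b = np.polyfit(nwp_forecast_past, observed_past, deg=1)

# Apply the learned correction to a new numerical forecast
nwp_forecast_new = 26.4
corrected = a * nwp_forecast_new + b
print(f"Raw NWP forecast: {nwp_forecast_new:.1f} C, corrected: {corrected:.1f} C")
```

In practice a deep learning model would replace the linear fit and ingest far richer inputs, but the workflow is the same: learn from historical forecast-observation pairs, adjust the simulation output, and leave human forecasters to review the result.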

If implemented, AI systems could identify weather patterns more efficiently and enhance forecasts for variables such as rainfall and temperature. The technology may also offer improved accuracy in predicting extreme weather events like typhoons.

Currently, the JMA relies on supercomputers to simulate future atmospheric conditions based on observational data. Human forecasters then review the outputs, applying expert judgment before issuing final forecasts and alerts. Even with AI integration, human oversight will remain a core part of the process.

In addition to forecasting, the agency is also considering AI for processing data from the Himawari-10 satellite, which is expected to launch in fiscal 2029.

An official announcement outlining further AI integration measures is anticipated in June.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK workers struggle to keep up with AI

AI is reshaping the UK workplace, but many employees feel unprepared to keep pace, according to a major new study by Henley Business School.

While 56% of full-time professionals expressed optimism about AI’s potential, 61% admitted they were overwhelmed by how quickly the technology is evolving.

The research surveyed over 4,500 people across nearly 30 sectors, offering what experts call a clear snapshot of AI’s uneven integration into British industries.

Professor Keiichi Nakata, director of AI at The World of Work Institute, said workers are willing to embrace AI, but often lack the training and guidance to do so effectively.

Instead of empowering staff through hands-on learning and clear internal policies, many companies are leaving their workforce under-supported.

Nearly a quarter of respondents said their employers were failing to provide sufficient help, while three in five said they would use AI more if proper training were available.

Professor Nakata argued that AI has the power to simplify tasks, remove repetitive duties, and free up time for more meaningful work.

But he warned that without better support, businesses risk missing out on what could be a transformative force for both productivity and employee satisfaction.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US bans nonconsensual explicit deepfakes nationwide

The US is introducing a landmark federal law aimed at curbing the spread of non-consensual explicit deepfake images, following mounting public outrage.

President Donald Trump is expected to sign the Take It Down Act, which will criminalise the sharing of explicit images, whether real or AI-generated, without consent. The law will also require tech platforms to remove such content within 48 hours of notification, instead of leaving the matter to patchy state laws.

The legislation is one of the first at the federal level to directly tackle the misuse of AI-generated content. It builds on earlier laws that protected children but had left adults vulnerable due to inconsistent state regulations.

The bill received rare bipartisan support in Congress and was backed by over 100 organisations, including tech giants like Meta, TikTok and Google. First Lady Melania Trump also supported the act, hosting a teenage victim of deepfake harassment during the president’s address to Congress.

The act was prompted in part by incidents like that of Elliston Berry, a Texas high school student targeted by a classmate who used AI to alter her social media image into a nude photo. Similar cases involving teen girls across the country highlighted the urgency for action.

Tech companies had already started offering tools to remove explicit images, but the lack of consistent enforcement allowed harmful content to persist on less cooperative platforms.

Supporters of the law argue it sends a strong societal message instead of allowing the exploitation to continue unchallenged.

Advocates like Imran Ahmed and Ilana Beller emphasised that while no law is a perfect solution, this one forces platforms to take real responsibility and offers victims some much-needed protection and peace of mind.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE to host world’s biggest AI site outside the US

The United Arab Emirates will build the largest artificial intelligence infrastructure outside the United States, following a high-level meeting between UAE President Sheikh Mohamed bin Zayed Al Nahyan and President Trump in Abu Dhabi.

The facility will be built by G42 together with US firms under the newly established US-UAE AI Acceleration Partnership. Spanning 10 square miles in Abu Dhabi, the AI campus will run on a mix of nuclear, solar and gas energy to limit emissions and will feature a dedicated science park to drive innovation.

Its planned 5 GW capacity would enable it to serve half the global population, offering US cloud providers a vital regional hub. As part of the agreement, the UAE has pledged to align its national security rules with US standards, including strict technology safeguards and tighter access controls for computing power.

The UAE may also be permitted to purchase up to 500,000 Nvidia AI chips annually starting this year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok AI glitch reignites debate on trust and safety in AI tools

Elon Musk’s AI chatbot, Grok, has caused a stir by injecting unsolicited claims about ‘white genocide’ in South Africa into responses to unrelated user queries. These remarks, widely regarded as part of a debunked conspiracy theory, appeared in replies to a range of innocuous prompts before being quickly removed.

The strange behaviour led to speculation that Grok’s system prompt had been tampered with, possibly by someone inside xAI. Although Grok briefly claimed it had been instructed to mention the topic, xAI has yet to issue a full technical explanation.

Rival AI leaders, including OpenAI’s Sam Altman, joined public criticism on X, calling the episode a concerning sign of possible editorial manipulation. While Grok’s responses returned to normal within hours, the incident reignited concerns about control and transparency in large AI models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AlphaEvolve by DeepMind automates code optimisation and discovers new algorithms

Google’s DeepMind has introduced AlphaEvolve, a new AI-powered coding agent designed to autonomously discover and optimise computer algorithms.

Built on large language models and evolutionary techniques, AlphaEvolve aims to assist experts across mathematics, engineering, and computer science by improving existing solutions and generating new ones.

Rather than relying on natural-language generation alone, AlphaEvolve pairs its language models with automated evaluators and iterative evolutionary strategies, such as mutation and crossover, to refine algorithmic solutions.
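DeepMind has not released AlphaEvolve's internals, but the loop it describes, in which candidates are scored by an automated evaluator and varied through mutation and crossover, can be sketched generically. The example below evolves a simple numeric vector rather than code, purely to illustrate the evaluate-select-vary cycle; it is not DeepMind's implementation.

```python
# Generic evolutionary search with an automated evaluator -- an illustration of the
# mutate/crossover/evaluate loop, not DeepMind's actual AlphaEvolve system.
import random

def evaluate(candidate):
    """Automated evaluator: higher is better. Here: how close the vector sums to 10."""
    return -abs(sum(candidate) - 10.0)

def mutate(candidate):
    """Randomly perturb one element of the candidate."""
    child = candidate[:]
    i = random.randrange(len(child))
    child[i] += random.uniform(-1.0, 1.0)
    return child

def crossover(a, b):
    """Combine two parents by taking each element from one of them at random."""
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

population = [[random.uniform(0, 5) for _ in range(4)] for _ in range(20)]
for generation in range(100):
    population.sort(key=evaluate, reverse=True)   # keep the best-scoring candidates first
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=evaluate)
print("Best candidate:", [round(x, 2) for x in best], "score:", round(evaluate(best), 3))
```

In AlphaEvolve the candidates are programs proposed by a language model and the evaluator compiles and benchmarks them, but the selection pressure works in the same way.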

DeepMind reports success across several domains, including matrix multiplication, data centre scheduling, chip design, and AI model training.

In one case, AlphaEvolve developed a new method for multiplying 4×4 complex-valued matrices using just 48 scalar multiplications, improving on Strassen’s 1969 algorithm, which requires 49. It also improved job scheduling in Google data centres, recovering an average of 0.7% of global compute resources.
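For context, the comparison figures below are standard results rather than part of DeepMind's announcement: the schoolbook algorithm for two 4×4 matrices uses 64 scalar multiplications, and Strassen's 1969 scheme, applied recursively, uses 49.

```python
# Scalar-multiplication counts for multiplying two 4x4 matrices (standard figures, for context).
naive = 4 * 4 * 4          # schoolbook algorithm: one multiplication per row-column-index triple
strassen = 7 ** 2          # Strassen (1969): 7 multiplications for 2x2 blocks, applied recursively
alphaevolve = 48           # the count reported by DeepMind for complex-valued 4x4 matrices
print(f"naive: {naive}, Strassen: {strassen}, AlphaEvolve: {alphaevolve}")
```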

In mathematical tests, AlphaEvolve rediscovered known solutions 75% of the time and improved them in 20% of cases. While experts have praised its potential, researchers also stress the importance of secure deployment and responsible use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canva merges data and storytelling

Canva has introduced Sheets, a new spreadsheet platform combining data, design, and AI to simplify and visualise analytics. Announced at the Canva Create: Uncharted event, it redefines spreadsheets by enabling users to turn raw data into charts, reports and content without leaving the Canva interface.

With built-in tools like Magic Formulas, Magic Insights, and Magic Charts, Canva Sheets supports automated analysis and visual storytelling. Users can generate dynamic charts and branded content across platforms in seconds, thanks to Canva AI and features like bulk editing and multilingual translation.

Data Connectors allow seamless integration with platforms such as Google Analytics and HubSpot, ensuring live updates across all connected visuals. The platform is designed to reduce manual tasks in recurring reports and keep teams synchronised in real time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!