China creates AI to detect real nuclear warheads

Chinese scientists have created the world’s first AI-based system capable of distinguishing real nuclear warheads from decoys, marking a significant step in arms control verification.

The breakthrough, developed by the China Institute of Atomic Energy (CIAE), could strengthen Beijing’s hand in stalled disarmament talks, although it also raises difficult questions about AI’s growing role in managing weapons of mass destruction.

The technology builds on a long-standing US–China proposal but faced key obstacles: how to train AI on sensitive nuclear data, how to gain military approval without risking leaks of classified information, and how to persuade sceptical nations such as the US to move past Cold War-era inspection methods.

So far, only the AI training has been completed, with the rest of the process still pending international acceptance.

The AI system uses deep learning and cryptographic protocols to analyse scrambled radiation signals from warheads behind a polythene wall, ensuring the weapons’ internal designs remain hidden.

The machine can verify a warhead’s chain-reaction potential without accessing classified details. According to CIAE, repeated randomised tests reduce the chance of deception to nearly zero.
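
As a rough illustration of why repeated randomised testing suppresses deception, consider a simplified model; the per-round detection rate and the figures below are illustrative assumptions, not CIAE’s published protocol. If each independent round exposes a decoy with probability p, the chance of a decoy surviving n rounds shrinks exponentially.

```python
# Illustrative only: CIAE has not published its protocol. Assume each
# randomised inspection round independently exposes a decoy with
# probability p; a decoy then survives all n rounds with probability (1 - p) ** n.

def decoy_survival_probability(p_detect_per_round: float, rounds: int) -> float:
    """Probability that a fake warhead passes every randomised test."""
    return (1 - p_detect_per_round) ** rounds

if __name__ == "__main__":
    # Hypothetical figures: even a 50% per-round detection rate leaves
    # roughly a one-in-a-million chance of deception after 20 rounds.
    for rounds in (1, 5, 10, 20):
        print(rounds, decoy_survival_probability(0.5, rounds))
```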

While both China and the US have pledged not to let AI control nuclear launch decisions, the new system underlines AI’s expanding role in national defence.

Beijing insists the AI can be jointly trained and sealed before use to ensure transparency, but sceptics remain wary about trust, potential backdoor access and the growing militarisation of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Uber’s product chief turns to AI for reports and research

Uber’s chief product officer, Sachin Kansal, is embracing AI to streamline his daily workflow—particularly through tools like ChatGPT, Google Gemini, and, soon, NotebookLM.

Speaking on ‘Lenny’s Podcast,’ Kansal revealed how AI summarisation helps him digest lengthy 50- to 100-page reports he otherwise wouldn’t have time to read. He uses AI to understand market trends and rider feedback across regions such as Brazil, South Korea, and South Africa.

Kansal also relies on AI as a research assistant. For instance, when exploring new driver features, he used ChatGPT’s deep research capabilities to simulate possible driver reactions and generate brainstorming ideas.

‘It’s an amazing research assistant,’ he said. ‘It’s absolutely a starting point for a brainstorm with my team.’

He’s now eyeing Google’s NotebookLM, a note-taking and research tool, as the next addition to his AI toolkit—especially its ‘Audio Overview’ feature, which turns documents into AI-generated podcast-style discussions.

Uber CEO Dara Khosrowshahi previously noted that too few of Uber’s 30,000+ employees are using AI and stressed that mastering AI tools, especially for coding, would soon be essential.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Students build world’s fastest Rubik’s Cube solver

A group of engineering students from Purdue University have built the world’s fastest Rubik’s Cube-solving robot, achieving a Guinness World Record time of just 0.103 seconds.

Rather than relying on faster motors alone, the team focused on improving nearly every aspect of the process, from image capture to cube construction.

Rather than processing full images, the robot uses low-resolution cameras aimed at opposite corners of the cube, capturing only the essential parts of the image to save time.

Instead of converting camera data into full digital pictures, the system directly reads colour data to identify the cube’s layout. Although slightly less accurate, the method allows quicker recognition and faster solving.
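
The team’s vision code has not been published, but the shortcut can be sketched in a few lines: classify sampled pixel values against nominal sticker colours by nearest match, rather than decoding and processing a full image. The reference colours and the sampled pixel below are illustrative assumptions, not the Purdue team’s values.

```python
# A sketch of the shortcut described above, not the Purdue team's code:
# classify sampled pixel RGB values against nominal sticker colours by
# nearest match, instead of decoding a full image.

REFERENCE_COLOURS = {  # nominal values; a real build would calibrate these
    "white":  (255, 255, 255),
    "yellow": (255, 213, 0),
    "red":    (196, 30, 58),
    "orange": (255, 88, 0),
    "blue":   (0, 70, 173),
    "green":  (0, 155, 72),
}

def classify_sticker(rgb: tuple[int, int, int]) -> str:
    """Return the reference colour closest to the sampled pixel (squared RGB distance)."""
    def distance(ref: tuple[int, int, int]) -> int:
        return sum((a - b) ** 2 for a, b in zip(rgb, ref))
    return min(REFERENCE_COLOURS, key=lambda name: distance(REFERENCE_COLOURS[name]))

# A slightly washed-out green sample still maps to "green".
print(classify_sticker((20, 150, 80)))
```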

The robot, known as Purdubik’s Cube, benefits from software designed specifically for machines, allowing it to perform overlapping turns using a technique called corner cutting. Instead of waiting for one rotation to finish, the next begins, shaving off valuable milliseconds.

To withstand the stress, the team designed a cube with extremely tight tension using reinforced nylon, making it nearly impossible to turn by hand.

High-speed motors controlled the robot’s movements, with a trapezoidal acceleration profile ensuring rapid but precise turns. The students believe the record could fall again—provided someone develops a stronger, lighter cube using materials like carbon fibre.
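
The report describes a trapezoidal acceleration profile; a closely related and widely used scheme is the trapezoidal velocity profile, in which the motor accelerates at a constant rate, cruises briefly at peak speed, then decelerates symmetrically. The sketch below uses that interpretation with invented motor limits, purely to show how the time for a single turn is budgeted; it is not the Purdue team’s code or parameters.

```python
# Illustrative sketch of a trapezoidal motion profile for one 90-degree face
# turn. The acceleration and velocity limits are invented, not Purdue's.

import math

def trapezoidal_turn_time(angle_rad: float, max_accel: float, max_vel: float) -> float:
    """Minimum time to rotate `angle_rad` with accelerate/cruise/decelerate phases."""
    # Angle covered if we accelerate straight to max_vel and brake immediately.
    accel_dist = max_vel ** 2 / max_accel
    if accel_dist >= angle_rad:
        # Triangular profile: peak velocity is never reached.
        return 2 * math.sqrt(angle_rad / max_accel)
    cruise_dist = angle_rad - accel_dist
    return 2 * (max_vel / max_accel) + cruise_dist / max_vel

# Hypothetical limits: a 90-degree turn completes in roughly 6.7 milliseconds.
print(trapezoidal_turn_time(math.pi / 2, max_accel=2.0e5, max_vel=300.0))
```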

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI copyright clash stalls UK data bill

A bitter standoff over AI and copyright has returned to the House of Lords, as ministers and peers clash over how to protect creative workers while fostering technological innovation.

At the centre of the debate is the proposed Data (Use and Access) Bill, which was expected to pass smoothly but is now stuck in parliamentary limbo due to growing resistance.

The bill would allow AI firms to access copyrighted material unless rights holders opt out, a proposal that many artists and peers believe threatens the UK’s £124bn creative industry.

Nearly 300 Lords have called for AI developers to disclose what content they use and seek licences instead of relying on blanket access. Former film director Baroness Kidron described the policy as ‘state-sanctioned theft’ and warned it would sacrifice British talent to benefit large tech companies.

Supporters of the bill, like former Meta executive Sir Nick Clegg, argue that forcing AI firms to seek individual permissions would severely damage the UK’s AI sector. The Department for Science, Innovation and Technology insists it will only consider changes if they are proven to benefit creators.

If no resolution is found, the bill risks being shelved entirely. That would also scrap unrelated proposals bundled into it, such as new NHS data-sharing rules and plans for a nationwide map of underground pipes and cables.

Despite the bill’s wide scope, the fight over copyright remains its most divisive and emotionally charged feature.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gmail adds automatic AI summaries

Gmail on mobile now displays AI-generated summaries by default, marking a shift in how Google’s Gemini assistant operates within inboxes.

Instead of relying on users to request a summary, Gemini will now decide when it’s useful—typically for long email threads with multiple replies—and present a brief summary card at the top of the message.

These summaries update automatically as conversations evolve, aiming to save users from scrolling through lengthy discussions.

The feature is currently limited to mobile devices and available only to users with Google Workspace accounts, Gemini Education add-ons, or a Google One AI Premium subscription. For the moment, summaries are confined to emails written in English.

Google expects the rollout to take around two weeks, though it remains unclear when, or if, the tool will extend to standard Gmail accounts or desktop users.

Anyone wanting to opt out must disable Gmail’s smart features entirely—giving up tools like Smart Compose, Smart Reply, and package tracking in the process.

While some may welcome the convenience, others may feel uneasy about their emails being analysed by large language models, especially since this process could contribute to further training of Google’s AI systems.

The move reflects a wider trend across Google’s products, where AI is becoming central to everyday user experiences.

Additional user controls and privacy commitments

According to the Google Workspace announcement, users have some control over the summary cards. They can collapse a Gemini summary card, and it will remain collapsed for that specific email thread.

Gmail will soon introduce further refinements, such as automatically collapsing summary cards for users who consistently collapse them, until they choose to expand them again. For emails that don’t display automatic summaries, Gmail still offers manual options.

Users can tap the ‘summarise this email’ chip at the top of the message or use the Gemini side panel to trigger a summary manually. Google also reaffirms its commitment to data protection and user privacy. All AI features in Gmail adhere to its privacy principles, with more details available on the Privacy Hub.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Thailand advances satellite rules

The Thai National Broadcasting and Telecommunications Commission (NBTC) has recently proposed a draft regulation titled ‘Criteria for Authorisation to Use Frequency Bands for Land, Aeronautical, and Maritime Earth Stations in FSS Services’. The regulation specifically targets the operation of Earth Stations in Motion (ESIMs), which include land-based stations on vehicles, aeronautical stations on aircraft, and maritime stations on ships and offshore platforms.

It defines dedicated frequency bands for both geostationary (GSO) and non-geostationary (NGSO) satellites, aligning closely with international best practices and recommendations from the International Telecommunication Union (ITU). The primary objective of this draft is to streamline the process for using specific radio frequencies by removing the need for individual frequency allocation for each ESIM deployment.

That approach aims to simplify and accelerate the rollout of high-speed satellite internet services for mobile users across various sectors, promoting innovation and economic development by enabling faster and broader adoption of advanced satellite communications. Overall, the NBTC’s initiative underscores how important it is for regulators worldwide to keep their spectrum management frameworks up to date.

Why does it matter?

In a rapidly evolving technological landscape, outdated or rigid regulations can obstruct innovation and economic growth. Effective spectrum management must strike a balance between preventing harmful interference and supporting the deployment of cutting-edge communication technologies like satellite-based internet services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini AI can now summarise videos in Google Drive

Google is expanding Gemini AI’s capabilities in Drive by enabling it to analyse video files and respond to user questions or generate concise summaries.

The new feature aims to save users time by providing quick insights from lengthy content such as meetings, classes or announcements, instead of requiring them to watch the entire video. Until now, Gemini could only summarise documents and PDFs stored in Drive.

According to a blog post published on 28 May 2025, the feature will support prompts like ‘Summarise the video’ or ‘List action items from the meeting.’ Users can access Gemini’s functionality either through Drive’s overlay previewer or a standalone viewer in a separate browser tab.

However, captions must be enabled within the user’s domain for the feature to work properly.

The update is being gradually rolled out and is expected to be available to all eligible users by 19 June. At the moment, it is limited to English and accessible only to users of Google Workspace and Google One AI Premium, or those with Gemini Business or Enterprise add-ons.

Administrators must activate smart features and personalisation settings to grant users access.

To use the new function, users can double-click on a video file in Drive and select the ‘Ask Gemini’ option marked by a star icon in the top right corner. Google says the upgrade reflects a broader effort to integrate AI seamlessly into everyday workflows by making content easier to navigate and understand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces backlash over open source AI claims

Meta is under renewed scrutiny for what critics describe as ‘open washing’ after sponsoring a Linux Foundation whitepaper on the benefits of open source AI.

The paper highlights how open models help reduce enterprise costs—claiming companies using proprietary AI tools spend over three times more. However, Meta’s involvement has raised questions, as its Llama AI models are presented as open source despite industry experts insisting otherwise.

Amanda Brock, head of OpenUK, argues that Llama does not meet accepted definitions of open source due to licensing terms that restrict commercial use.

She referenced the Open Source Initiative’s (OSI) standards, which Llama fails to meet, pointing to commercial limitations that contradict open source principles. Brock noted that open source should allow unrestricted use, which Llama’s licence does not permit.

Meta has long branded its Llama models as open source, but the OSI and other stakeholders have repeatedly pushed back, stating that the company’s licensing undermines the very foundation of open access.

While Brock acknowledged Meta’s contribution to the broader open source conversation, she also warned that such mislabelling could have serious consequences—especially as lawmakers and regulators increasingly reference open source in crafting AI legislation.

Other firms have faced similar allegations, including Databricks with its DBRX model in 2024, which was also criticised for failing to meet OSI standards. As the AI sector continues to evolve, the line between truly open and merely accessible models remains a point of growing tension.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic CEO warns of mass job losses from AI

Just one week after releasing its most advanced AI models to date — Opus 4 and Sonnet 4 — Anthropic CEO Dario Amodei warned in an interview with Axios that AI could soon reshape the job market in alarming ways.

AI, he said, may be responsible for eliminating up to half of all entry-level white-collar roles within the next one to five years, potentially driving unemployment as high as 10% to 20%.

Amodei’s goal in speaking publicly is to help workers prepare and to urge both AI companies and governments to be more transparent about coming changes. ‘Most of them [workers] are unaware that this is about to happen,’ he told Axios. ‘It sounds crazy, and people just don’t believe it.’

According to Amodei, the shift from AI augmenting jobs to fully automating them could begin as soon as two years from now. He highlighted how widespread displacement may threaten democratic stability and deepen inequality, as large groups of people lose the ability to generate economic value.

Despite these warnings, Amodei explained that competitive pressures prevent developers from slowing down. Regulatory caution in the US, he suggested, would only result in countries like China advancing more rapidly.

Still, not all implications are negative. Amodei pointed to major breakthroughs in other areas, such as healthcare, as part of the broader impact of AI.

‘Cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs,’ he said.

To prepare society, Amodei called for increased public awareness, encouraging individuals to reconsider career paths and avoid the most automation-prone fields.

He referenced the Anthropic Economic Index, which monitors how AI affects different occupations. At its launch in February, the index showed that 57% of AI use cases still supported human tasks rather than replacing them.

However, during a press-only session at Code with Claude, Amodei noted that augmentation is likely to be a short-term strategy. He described a ‘rising waterline’ — the gradual shift from assistance to full replacement — which may soon outpace efforts to retain human roles.

‘When I think about how to make things more augmentative, that is a strategy for the short and the medium term — in the long term, we are all going to have to contend with the idea that everything humans do is eventually going to be done by AI systems. That is a constant. That will happen,’ he said.

His other recommendations included boosting AI literacy and equipping public officials with a deeper understanding of superintelligent systems, so they can begin forming policy for a radically transformed economy.

While Amodei’s outlook may sound daunting, it echoes a pattern seen throughout history: every major technological disruption brings workforce upheaval. Though some roles vanish, others emerge. Several studies suggest AI may even highlight the continued relevance of distinctively human skills.

Regardless of the outcome, one thing remains clear — learning to work with AI has never been more important.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York Times partners with Amazon on AI integration

The New York Times Company and Amazon have signed a multi-year licensing agreement that will allow Amazon to integrate editorial content from The New York Times, NYT Cooking, and The Athletic into a range of its AI-powered services, the companies announced Wednesday.

Under the deal, Amazon will use licensed content for real-time display in consumer-facing products such as Alexa, as well as for training its proprietary foundation models. The agreement marks an expansion of the firms’ existing partnership.

‘The agreement expands the companies’ existing relationship, and will deliver additional value to Amazon customers while bringing Times journalism to broader audiences,’ the companies said in a joint statement.

According to the announcement, the licensing terms include ‘real-time display of summaries and short excerpts of Times content within Amazon products and services’ alongside permission to use the content in AI model development. Amazon platforms will also feature direct links to full Times articles.

Both companies described the partnership as a reflection of a shared commitment to delivering global news and information across Amazon’s AI ecosystem. Financial details of the agreement were not made public.

The announcement comes amid growing industry debate about the role of journalistic material in training AI systems.

By entering a formal licensing arrangement, The New York Times positions itself as one of the first major media outlets to publicly align with a technology company for AI-related content use.

The companies have yet to name additional Amazon products that will feature Times content, and no timeline has been disclosed for the rollout of the new integrations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!