DeepSeek claims R1 model matches OpenAI

Chinese AI start-up DeepSeek has announced a major update to its R1 reasoning model, claiming it now performs on par with leading systems from OpenAI and Google.

The R1-0528 version, released following the model’s initial launch in January, reportedly surpasses Alibaba’s Qwen3, which debuted only weeks earlier in April.

According to DeepSeek, the upgrade significantly enhances reasoning, coding, and creative writing while cutting hallucination rates by half.

These improvements stem largely from additional computational resources applied during the post-training phase, allowing the model to outperform domestic rivals in benchmark tests involving maths, logic, and programming.

Unlike many Western competitors, DeepSeek takes an open-source approach. The company recently shared eight GitHub projects detailing methods to optimise computing, communication, and storage efficiency during training.

Its transparency and resource-efficient design have attracted attention, especially since its smaller distilled model rivals Alibaba’s Qwen3-235B while being nearly 30 times lighter.
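
The arithmetic behind that claim is straightforward if the distilled model sits in the 8-billion-parameter class, as DeepSeek’s distilled R1-0528 release (built on a Qwen3-8B base) does; treat the exact figure as an assumption:

```python
# Rough parameter-count comparison behind the "nearly 30 times lighter" claim.
# The 8B figure assumes the distilled model uses a Qwen3-8B base.
qwen3_flagship_params = 235e9   # Qwen3-235B
distilled_params = 8e9          # assumed distilled model size

print(f"size ratio: {qwen3_flagship_params / distilled_params:.1f}x")  # ~29.4x
```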

Major Chinese tech firms, including Tencent, Baidu and ByteDance, plan to integrate R1-0528 into their cloud services for enterprise clients. DeepSeek’s progress signals China’s continued push into globally competitive AI, driven by a young team determined to offer high performance with fewer resources.

NSO asks court to overturn WhatsApp verdict

Israeli spyware company NSO Group has requested a new trial after a US jury ordered it to pay $168 million in damages to WhatsApp.

The company, which has faced mounting legal and financial troubles, filed a motion in a California federal court last week seeking to reduce the verdict or secure a retrial.

The May verdict awarded WhatsApp $444,719 in compensatory damages and $167.25 million in punitive damages. Jurors found that NSO exploited vulnerabilities in the encrypted platform and sold the exploit to clients who allegedly used it to target journalists, activists and political rivals.

WhatsApp, owned by Meta, filed the lawsuit in 2019.

NSO claims the punitive award is unconstitutional, arguing it is over 376 times greater than the compensatory damages and far exceeds the US Supreme Court’s general guidance of a 4:1 ratio.
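
The ratio NSO cites follows directly from the two awards:

```python
# Quick check of the punitive-to-compensatory ratio cited in NSO's motion.
compensatory = 444_719       # USD, compensatory damages
punitive = 167_250_000       # USD, punitive damages

print(f"ratio: {punitive / compensatory:.0f}:1")  # ~376:1, vs the ~4:1 guidance
```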

The firm also said it cannot afford the penalty, citing losses of $9 million in 2023 and $12 million in 2024. Its CEO testified that the company is ‘struggling to keep our heads above water’.

WhatsApp, responding to TechCrunch in a statement, said NSO was once again trying to evade accountability. The company vowed to continue its legal campaign, including efforts to secure a permanent injunction that would prevent NSO from ever targeting WhatsApp or its users again.

Microsoft takes down massive Lumma malware network

Microsoft has dismantled a major cybercrime operation centred around the Lumma Stealer malware, which had infected over 394,000 Windows devices globally.

In partnership with global law enforcement and industry partners, Microsoft seized more than 1,300 domains linked to the malware.

The malware was known for stealing sensitive data such as login credentials, bank details and cryptocurrency information, making it a go-to tool for cybercriminals since 2022.

The takedown followed a court order from a US federal court and included help from the US Department of Justice, Europol, and Japan’s cybercrime unit.

Microsoft’s Digital Crimes Unit also received assistance from firms like Cloudflare and Bitsight to disrupt the infrastructure that supported Lumma’s Malware-as-a-Service network.

The operation is being hailed as a significant win against a sophisticated threat that had evolved to target Windows and Mac users. Security experts urge users to adopt strong cyber hygiene, including antivirus software, two-factor authentication, and password managers.

Microsoft’s action is part of a broader effort to tackle infostealers, which have fuelled a surge in data breaches and identity theft worldwide.

Colt, Honeywell and Nokia to trial quantum cryptography in space

Colt Technology Services, Honeywell, and Nokia have joined forces to trial quantum key distribution (QKD) via satellites to develop quantum-safe networks. The trial builds on a previous Colt pilot focused on terrestrial quantum-secure networks.

The collaboration aims to tackle the looming cybersecurity risks of quantum computing, which threatens to break current encryption methods. The project seeks to deliver secure global communication beyond the current 100km terrestrial limit by trialling space-based and subsea QKD.

Low-Earth-orbit satellites will be used to trial QKD over ultra-long distances, including transatlantic spans. The initiative is designed to support sectors that handle sensitive data, such as finance, healthcare, and government, by offering encryption solutions resistant to quantum threats.
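
The partners have not published protocol details, but most QKD systems follow the BB84 pattern: the sender encodes random bits in randomly chosen measurement bases, and only the positions where sender and receiver happened to pick the same basis survive into the shared key. A minimal classical simulation of that sifting step, purely for illustration:

```python
import random

# Toy BB84 key-sifting simulation; illustrative only, not the trial's protocol.
n = 16
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]  # rectilinear / diagonal
bob_bases = [random.choice("+x") for _ in range(n)]

# Bob recovers Alice's bit only when his basis matches hers;
# mismatched positions give a random result and are discarded.
bob_results = [bit if ab == bb else random.randint(0, 1)
               for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

sifted_key = [bit for bit, ab, bb in zip(bob_results, alice_bases, bob_bases)
              if ab == bb]
print("shared key:", sifted_key)  # on average, half the positions survive
```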

Leaders from all three companies emphasised the urgency of developing safeguards to protect against future threats. A joint white paper, The Journey to Quantum-Safe Networking, has been released to outline the risks and technical roadmap for this new frontier in secure communications.

Courts consider limits on AI evidence

A newly proposed rule from the Judicial Conference of the United States could reshape how AI-generated evidence is treated in court. Dubbed Rule 707, it would allow machine-generated evidence to be admitted only if it meets the same reliability standards required of expert testimony under Rule 702.

However, it would not apply to outputs from simple scientific instruments or widely used commercial software. The rule aims to address concerns about the reliability and transparency of AI-driven analysis, especially when used without a supporting expert witness.

Critics argue that the limitation to non-expert presentation renders the rule overly narrow, as the underlying risks of bias and interpretability persist regardless of whether an expert is involved. They suggest that all machine-generated evidence in US courts should be subject to robust scrutiny.

The Advisory Committee is also considering the scope of terminology such as ‘machine learning’ to prevent Rule 707 from encompassing more than intended. Meanwhile, a separate proposed rule on deepfakes has been shelved, as courts already have tools to address such forgeries.

China creates AI to detect real nuclear warheads

Chinese scientists have created the world’s first AI-based system capable of identifying real nuclear warheads from decoys, marking a significant step in arms control verification.

The breakthrough, developed by the China Institute of Atomic Energy (CIAE), could strengthen Beijing’s hand in stalled disarmament talks, although it also raises difficult questions about AI’s growing role in managing weapons of mass destruction.

The technology builds on a long-standing US–China proposal but faced key obstacles: how to train the AI on sensitive nuclear data, how to gain military approval without risking leaks of secrets, and how to persuade sceptical nations like the US to move past Cold War-era inspection methods.

So far, only the AI training has been completed, with the rest of the process still pending international acceptance.

The AI system uses deep learning and cryptographic protocols to analyse scrambled radiation signals from warheads behind a polythene wall, ensuring the weapons’ internal designs remain hidden.

The machine can verify a warhead’s chain-reaction potential without accessing classified details. According to CIAE, repeated randomised tests reduce the chance of deception to nearly zero.
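
CIAE has not published the protocol, but the ‘nearly zero’ claim reflects the standard logic of repeated randomised challenges: if a single test exposes a decoy with some probability, independent repetitions drive the survival odds down geometrically. A hypothetical illustration:

```python
# Illustrative numbers only; the per-test detection rate is an assumption.
def decoy_survival(p_detect: float, n_tests: int) -> float:
    """Probability a decoy passes all n independent randomised tests."""
    return (1 - p_detect) ** n_tests

for n in (1, 10, 50):
    print(f"n={n:2d}: survival probability {decoy_survival(0.5, n):.2e}")
# n= 1: survival probability 5.00e-01
# n=10: survival probability 9.77e-04
# n=50: survival probability 8.88e-16
```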

While both China and the US have pledged not to let AI control nuclear launch decisions, the new system underlines AI’s expanding role in national defence.

Beijing insists the AI can be jointly trained and sealed before use to ensure transparency, but sceptics remain wary of trust, backdoor access and growing militarisation of AI.

Uber’s product chief turns to AI for reports and research

Uber’s chief product officer, Sachin Kansal, is embracing AI to streamline his daily workflow—particularly through tools like ChatGPT, Google Gemini, and, soon, NotebookLM.

Speaking on ‘Lenny’s Podcast,’ Kansal revealed how AI summarisation helps him digest lengthy 50- to 100-page reports he otherwise wouldn’t have time to read. He uses AI to understand market trends and rider feedback across regions such as Brazil, South Korea, and South Africa.

Kansal also relies on AI as a research assistant. For instance, when exploring new driver features, he used ChatGPT’s deep research capabilities to simulate possible driver reactions and generate brainstorming ideas.

‘It’s an amazing research assistant,’ he said. ‘It’s absolutely a starting point for a brainstorm with my team.’

He’s now eyeing Google’s NotebookLM, a note-taking and research tool, as the next addition to his AI toolkit—especially its ‘Audio Overview’ feature, which turns documents into AI-generated podcast-style discussions.

Uber CEO Dara Khosrowshahi previously noted that too few of Uber’s 30,000+ employees are using AI and stressed that mastering AI tools, especially for coding, would soon be essential.

Students build world’s fastest Rubik’s Cube solver

A group of engineering students from Purdue University have built the world’s fastest Rubik’s Cube-solving robot, achieving a Guinness World Record time of just 0.103 seconds.

The team focused on improving nearly every aspect of the process, from image capture to cube construction, rather than relying on faster motors alone.

Rather than processing full images, the robot uses low-resolution cameras aimed at opposite corners of the cube, capturing only the essential parts of the image to save time.

Instead of converting camera data into full digital pictures, the system directly reads colour data to identify the cube’s layout. Although slightly less accurate, the method allows quicker recognition and faster solving.

The robot, known as Purdubik’s Cube, benefits from software designed specifically for machines, allowing it to perform overlapping turns using a technique called corner cutting. Instead of waiting for one rotation to finish, the next begins, shaving off valuable milliseconds.

To withstand the stress, the team designed a cube with extremely tight tension using reinforced nylon, making it nearly impossible to turn by hand.

High-speed motors controlled the robot’s movements, with a trapezoidal acceleration profile ensuring rapid but precise turns. The students believe the record could fall again—provided someone develops a stronger, lighter cube using materials like carbon fibre.
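
A trapezoidal profile accelerates the motor at a constant rate, cruises at peak speed, then decelerates symmetrically, which keeps each move fast without overshooting the target position. A hypothetical sketch of the timing maths (the team’s actual firmware is not public, and these numbers are illustrative):

```python
# Hypothetical trapezoidal velocity profile for a 90-degree face turn.
# Constants chosen so the area under the curve equals exactly 90 degrees.
ACCEL = 5.625e6   # deg/s^2
V_MAX = 11_250.0  # deg/s
TOTAL = 0.010     # s (a 10 ms move)

def velocity(t: float) -> float:
    """Velocity at time t: ramp up, cruise, ramp down."""
    t_ramp = V_MAX / ACCEL                # 2 ms spent accelerating
    if t <= t_ramp:
        return ACCEL * t                  # ramp up
    if t >= TOTAL - t_ramp:
        return ACCEL * (TOTAL - t)        # ramp down
    return V_MAX                          # cruise at top speed

for ms in (0, 1, 2, 5, 8, 9, 10):
    print(f"t={ms:2d} ms  v={velocity(ms / 1000):8.0f} deg/s")

# Sweep check: trapezoid area = V_MAX * (TOTAL - t_ramp) = 90 degrees.
print(V_MAX * (TOTAL - V_MAX / ACCEL))   # 90.0
```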

AI copyright clash stalls UK data bill

A bitter standoff over AI and copyright has returned to the House of Lords, as ministers and peers clash over how to protect creative workers while fostering technological innovation.

At the centre of the debate is the proposed Data (Use and Access) Bill, which was expected to pass smoothly but is now stuck in parliamentary limbo due to growing resistance.

The bill would allow AI firms to access copyrighted material unless rights holders opt out, a proposal that many artists and peers believe threatens the UK’s £124bn creative industry.

Nearly 300 Lords have called for AI developers to disclose what content they use and seek licences instead of relying on blanket access. Former film director Baroness Kidron described the policy as ‘state-sanctioned theft’ and warned it would sacrifice British talent to benefit large tech companies.

Supporters of the bill, like former Meta executive Sir Nick Clegg, argue that forcing AI firms to seek individual permissions would severely damage the UK’s AI sector. The Department for Science, Innovation and Technology insists it will only consider changes if they are proven to benefit creators.

If no resolution is found, the bill risks being shelved entirely. That would also scrap unrelated proposals bundled into it, such as new NHS data-sharing rules and plans for a nationwide underground map.

Despite the bill’s wide scope, the fight over copyright remains its most divisive and emotionally charged feature.

Gmail adds automatic AI summaries

Gmail on mobile now displays AI-generated summaries by default, marking a shift in how Google’s Gemini assistant operates within inboxes.

Instead of relying on users to request a summary, Gemini will now decide when it’s useful—typically for long email threads with multiple replies—and present a brief summary card at the top of the message.

These summaries update automatically as conversations evolve, aiming to save users from scrolling through lengthy discussions.

The feature is currently limited to mobile devices and available only to users with Google Workspace accounts, Gemini Education add-ons, or a Google One AI Premium subscription. For the moment, summaries are confined to emails written in English.

Google expects the rollout to take around two weeks, though it remains unclear when, or if, the tool will extend to standard Gmail accounts or desktop users.

Anyone wanting to opt out must disable Gmail’s smart features entirely—giving up tools like Smart Compose, Smart Reply, and package tracking in the process.

While some may welcome the convenience, others may feel uneasy about their emails being analysed by large language models, especially since this process could contribute to further training of Google’s AI systems.

The move reflects a wider trend across Google’s products, where AI is becoming central to everyday user experiences.

Additional user controls and privacy commitments

According to Google Workspace, users have some control over the summary cards. They can collapse a Gemini summary card, and it will remain collapsed for that specific email thread.

Gmail will soon refine this behaviour: users who consistently collapse summary cards will see future cards collapsed by default until they choose to expand them again. For emails that don’t display automatic summaries, Gmail still offers manual options.

Users can tap the ‘summarise this email’ chip at the top of the message or use the Gemini side panel to trigger a summary manually. Google also reaffirms its commitment to data protection and user privacy. All AI features in Gmail adhere to its privacy principles, with more details available on the Privacy Hub.
