Google tests AI tool to automate software development

Google is internally testing an advanced AI tool designed to support software engineers through the entire development cycle, according to The Information. The firm is also expected to demonstrate integration between its Gemini chatbot in voice mode and Android-powered XR headsets.

The agentic AI assistant is said to handle tasks such as code generation and documentation, and has already been previewed to staff and developers ahead of Google’s I/O conference on 20 May. The move reflects a wider trend among tech giants racing to automate programming.

Amazon is developing its own coding assistant, Kiro, which can process both text and visual inputs, detect bugs, and auto-document code. While AWS initially targeted a June launch, the current release date remains uncertain.

Microsoft and Google have claimed that around 30% of their code is now AI-generated. OpenAI is also eyeing expansion, reportedly in talks to acquire AI coding start-up Windsurf for $3 billion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S urges password reset after major cyber incident

Marks & Spencer has confirmed that hackers accessed personal customer information in a cyber-attack that began in late April. The retailer stated that no payment details or account passwords were compromised, and there is currently no evidence the stolen data has been shared.

Customers will be prompted to reset their passwords as a precaution. Chief executive Stuart Machin called the breach a result of a sophisticated attack and apologised for the disruption, which has impacted online orders, app functionality, and some in-store services.

Although stores remain open, the company has been unable to process online purchases since 25 April. A hacking group known as Scattered Spider is believed to be behind the incident.

M&S has contacted affected customers and provided guidance on online safety. The company said it is working ‘around the clock’ to resolve the issue and restore normal operations. Customers are thanked for their patience and continued support.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Masked cybercrime groups rise as attacks escalate worldwide

Cybercrime is thriving like never before, with hackers launching attacks ranging from absurd ransomware demands of $1 trillion to large-scale theft of personal data. Despite efforts from Microsoft, Google and even the FBI, these threat actors continue to outpace defences.

A new report by Group-IB has analysed over 1,500 cybercrime investigations to uncover the most active and dangerous hacker groups operating today.

Rather than fading away after arrests or infighting, many cybercriminal gangs are re-emerging stronger than before.

Group-IB’s May 2025 report highlights a troubling increase in key attack types across 2024 — phishing rose by 22%, ransomware leak sites by 10%, and APT (advanced persistent threat) attacks by 58%. The United States was the most affected country by ransomware activity.

At the top of the cybercriminal hierarchy now sits RansomHub, a ransomware-as-a-service group that emerged from the collapsed ALPHV group and has already overtaken long-established players in attack numbers.

Behind it is GoldFactory, which developed the first iOS banking trojan and exploited facial recognition data. Lazarus, a well-known North Korean state-linked group, also remains highly active under multiple aliases.

Meanwhile, politically driven hacktivist group NoName057(16) has been targeting European institutions using denial-of-service attacks.

With jurisdictional gaps allowing cybercriminals to flourish, these masked hackers remain a growing concern for global cybersecurity, especially as new threat actors emerge from the shadows instead of disappearing for good.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US scraps Biden AI chip export rule

The US Department of Commerce has scrapped the Biden administration’s Artificial Intelligence Diffusion Rule just days before it was due to come into force.

Introduced in January, the rule would have restricted the export of US-made AI chips to many countries for the first time, while reinforcing existing controls.

Rather than enforcing broad restrictions, the Department now intends to pursue direct negotiations with individual countries.

The original rule divided the world into three tiers, with countries like Japan and South Korea spared restrictions, middle-tier countries such as Mexico and Portugal facing new limits, and nations like China and Russia subject to tighter controls.

According to Bloomberg, a replacement rule is expected at a later date.

Instead of issuing immediate new regulations, officials released industry guidance warning companies against using Huawei’s Ascend AI chips and highlighted the risks of allowing US chips to train AI in China.

Secretary Jeffrey Kessler criticised the Biden-era policy, promising a ‘bold, inclusive’ AI strategy that works with allies while limiting access for adversaries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU prolongs sanctions for cyberattackers until 2026

The EU Council has extended its sanctions against cyberattackers until 18 May 2026, with the legal framework for enforcing these measures now running until 2028. The sanctions target individuals and institutions involved in cyberattacks that pose a significant threat to the EU and its members.

The extended measures will allow the EU to impose restrictions on those responsible for cyberattacks, including freezing assets and blocking access to financial resources.

These actions may also apply to attacks against third countries or international organisations, if necessary for EU foreign and security policy objectives.

At present, sanctions are in place against 17 individuals and four institutions. The EU’s decision highlights its ongoing commitment to safeguarding its digital infrastructure and maintaining its foreign policy goals through legal actions against cyber threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

BlackRock raises concerns over quantum computing risks to Bitcoin ETFs

BlackRock has flagged quantum computing as a potential risk to its iShares Bitcoin ETF (IBIT) in a recent regulatory filing, highlighting the threat that emerging technologies, and quantum computing in particular, pose to the cryptographic security of Bitcoin and blockchain networks.

BlackRock warned that advances in quantum computing could undermine the cryptographic algorithms protecting digital assets like Bitcoin. It is the first time the firm has explicitly mentioned this risk in relation to IBIT, which holds $64 billion in net assets.

Despite the warnings, analysts suggest that such risk disclosures are standard practice for financial products. James Seyffart, an analyst at Bloomberg Intelligence, noted that firms are required to flag all possible risks, even those with a very low likelihood of occurring.

Meanwhile, Bitcoin ETFs have seen a surge in popularity, attracting over $41 billion in net inflows since their launch.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Instead of remaining silent, Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after an AI video falsely endorsed Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers despite her refusal to participate in a project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Lawmakers in California are not waiting for further harm: new legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the No Fakes Act seek to hold tech platforms accountable, possibly fining them thousands per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber attack disrupts Edinburgh school networks

Thousands of Edinburgh pupils were forced to attend school on Saturday after a phishing attack disrupted access to vital online learning resources.

The cyber incident, discovered on Friday, prompted officials to lock users out of the system as a precaution, just days before exams.

Approximately 2,500 students visited secondary schools to reset passwords and restore their access. Although the revision period was interrupted, the council confirmed that no personal data had been compromised.

Scottish Council staff acted swiftly to contain the threat, supported by national cyber security teams. Ongoing monitoring is in place, with authorities confident that exam schedules will continue unaffected.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Punycode scams steal crypto through lookalike URLs

Crypto holders are facing a growing threat from a sophisticated form of phishing that swaps letters in website addresses for nearly identical lookalikes, tricking users into handing over their digital assets.

Known as Punycode phishing, the tactic has led to significant losses, even for vigilant users, by mimicking legitimate cryptocurrency exchange sites with deceptive domain names.

Cybercriminals exploit the similarity between characters from different alphabets, such as replacing Latin letters with visually identical Cyrillic ones.

These fake websites are almost indistinguishable from real ones, making it extremely difficult to spot the fraud. Recent reports reveal that even browser recommendation systems, such as Google Chrome’s, have directed users to these deceptive domains.
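The substitution described above becomes visible once a suspect domain is converted to its ASCII (Punycode) form, where any disguised characters surface behind an `xn--` prefix. A minimal sketch in Python's standard library, using the classic lookalike `аpple.com` (with a Cyrillic 'а', U+0430) purely for illustration; the `inspect_domain` helper and its field names are hypothetical, not part of any real tool:

```python
import unicodedata

def inspect_domain(domain: str) -> dict:
    """Flag lookalike domains by encoding to Punycode and checking for mixed scripts."""
    # The 'idna' codec converts each label to its ASCII/Punycode form
    ascii_form = domain.encode("idna").decode("ascii")
    # Collect the Unicode script of every letter, e.g. 'LATIN', 'CYRILLIC'
    scripts = {
        unicodedata.name(ch).split()[0]
        for ch in domain
        if ch.isalpha()
    }
    return {
        "ascii_form": ascii_form,            # 'xn--' reveals hidden characters
        "punycode": ascii_form != domain,    # did any label need encoding?
        "mixed_scripts": len(scripts) > 1,   # Latin mixed with Cyrillic etc.
    }

print(inspect_domain("\u0430pple.com"))  # Cyrillic 'а' followed by Latin 'pple'
print(inspect_domain("apple.com"))       # all-Latin, unchanged by encoding
```

A domain that comes back with an `xn--` label or a mixed-script flag is not necessarily fraudulent (many legitimate internationalised domains exist), but it is exactly the pattern this class of phishing relies on.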

In one widely cited case, a user was guided to a fraudulent site impersonating the crypto exchange ChangeNOW and subsequently lost over $20,000. The incident has raised questions about browser accountability and the urgency of protective measures against increasingly advanced phishing strategies.

US regulators, including the Federal Trade Commission (FTC), the North American Securities Administrators Association (NASAA), and California’s Department of Financial Protection and Innovation (DFPI), have issued ongoing warnings about crypto scams.

While none have specifically addressed Punycode-based attacks, their standard advice remains critical: scrutinise URLs carefully, treat unsolicited links with scepticism, and report suspected fraud immediately.

As phishing methods evolve, users are urged to double-check domain names, avoid clicking unverified links, and consult tools like the DFPI Crypto Scam Tracker. Until browsers and platforms address the threat directly, user awareness remains the most effective defence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US senator calls for AI chip tracking to protect national security

A new bill introduced by Republican Senator Tom Cotton aims to bolster national security by requiring location verification features on American-made AI chips.

The Chip Security Act, announced on 9 May, would ensure such technology does not end up in the hands of foreign adversaries, particularly China.

Cotton urged the US Departments of Commerce and Defence to assess how tracking mechanisms could help detect and prevent illegal chip exports.

He also called for stricter obligations for companies exporting AI chips, including notifying authorities if devices are tampered with or redirected from their original destinations.

The proposed legislation follows a policy shift announced on 7 May by the Trump administration to ease restrictions on AI chip exports previously imposed under President Biden.

Cotton argued that better security practices could allow US firms to expand globally without undermining the country’s technological edge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!