Click To Do and Settings agent bring AI to Windows 11 beta

Microsoft has rolled out Windows 11 Insider Preview Build 26120.3964 to the Beta Channel for Insiders on Windows 11 version 24H2. Available to Insiders starting this week, the update delivers key AI-driven enhancements, most notably a new agent built into the Settings app and upgraded text actions.

The AI agent in Settings allows users to interact using natural language instead of simple keywords. Microsoft says users can ask questions like ‘how to control my PC by voice’ or ‘my mouse pointer is too small’ to receive personalised help navigating and adjusting system settings.

Initially, the feature is limited to Copilot+ PCs powered by Snapdragon processors with English set as the primary language. Microsoft plans to expand support to AMD and Intel devices in the near future.

The update also introduces a new FAQs section on the About page under Settings > System. The company says this addition will help users better understand their device’s configuration, performance, and compatibility.

Microsoft is also enhancing its ‘Click To Do’ feature. On Copilot+ PCs with AMD or Intel chips, users can now highlight text (10 words or more) and press Win + Click or Win + Q to access quick AI actions like Summarise, Rewrite, or Create a bulleted list.

These tools are powered by Phi Silica, an on-device small language model. The features require the system language to be English and the user to be signed in with a Microsoft account.

Microsoft notes that Rewrite is temporarily unavailable for users with French or Spanish as their default language but will return in a future update.

M&S urges password reset after major cyber incident

Marks & Spencer has confirmed that hackers accessed personal customer information in a cyber-attack that began in late April. The retailer stated that no payment details or account passwords were compromised, and there is currently no evidence the stolen data has been shared.

Customers will be prompted to reset their passwords as a precaution. Chief executive Stuart Machin called the breach a result of a sophisticated attack and apologised for the disruption, which has impacted online orders, app functionality, and some in-store services.

Although stores remain open, the company has been unable to process online purchases since 25 April. A hacking group known as Scattered Spider is believed to be behind the incident.

M&S has contacted affected customers and provided guidance on staying safe online. The company said it is working ‘around the clock’ to resolve the issue and restore normal operations, and thanked customers for their patience and continued support.

Masked cybercrime groups rise as attacks escalate worldwide

Cybercrime is thriving like never before, with hackers launching attacks ranging from absurd ransomware demands of $1 trillion to large-scale theft of personal data. Despite efforts from Microsoft, Google and even the FBI, these threat actors continue to outpace defences.

A new report by Group-IB has analysed over 1,500 cybercrime investigations to uncover the most active and dangerous hacker groups operating today.

Rather than fading away after arrests or infighting, many cybercriminal gangs are re-emerging stronger than before.

Group-IB’s May 2025 report highlights a troubling increase in key attack types across 2024 — phishing rose by 22%, ransomware leak sites by 10%, and APT (advanced persistent threat) attacks by 58%. The United States was the most affected country by ransomware activity.

At the top of the cybercriminal hierarchy now sits RansomHub, a ransomware-as-a-service group that emerged from the collapsed ALPHV group and has already overtaken long-established players in attack numbers.

Behind it is GoldFactory, which developed the first iOS banking trojan and exploited facial recognition data. Lazarus, a well-known North Korean state-linked group, also remains highly active under multiple aliases.

Meanwhile, politically driven hacktivist group NoName057(16) has been targeting European institutions using denial-of-service attacks.

With jurisdictional gaps allowing cybercriminals to flourish, these masked hackers remain a growing concern for global cybersecurity, especially as new threat actors emerge from the shadows instead of disappearing for good.

US scraps Biden AI chip export rule

The US Department of Commerce has scrapped the Biden administration’s Artificial Intelligence Diffusion Rule just days before it was due to come into force.

Introduced in January, the rule would have restricted the export of US-made AI chips to many countries for the first time, while reinforcing existing controls.

Rather than enforcing broad restrictions, the Department now intends to pursue direct negotiations with individual countries.

The original rule divided the world into three tiers, with countries like Japan and South Korea spared restrictions, middle-tier countries such as Mexico and Portugal facing new limits, and nations like China and Russia subject to tighter controls.

According to Bloomberg, a replacement rule is expected at a later date.

Instead of issuing immediate new regulations, officials released industry guidance warning companies against using Huawei’s Ascend AI chips and highlighting the risks of allowing US chips to be used to train AI models in China.

Under Secretary of Commerce for Industry and Security Jeffrey Kessler criticised the Biden-era policy, promising a ‘bold, inclusive’ AI strategy that works with allies while limiting access for adversaries.

EU prolongs sanctions against cyberattackers until 2026

The EU Council has extended its sanctions on cyberattacks until 18 May 2026, with the legal framework for enforcing these measures now lasting until 2028. The sanctions target individuals and institutions involved in cyberattacks that pose a significant threat to the EU and its members.

The extended measures will allow the EU to impose restrictions on those responsible for cyberattacks, including freezing assets and blocking access to financial resources.

These actions may also apply to attacks against third countries or international organisations, if necessary for EU foreign and security policy objectives.

At present, sanctions are in place against 17 individuals and four institutions. The EU’s decision highlights its ongoing commitment to safeguarding its digital infrastructure and maintaining its foreign policy goals through legal actions against cyber threats.

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training methods transform content enough to justify legal protection, and whether they harm creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four factors of fair use: the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office around the time of the report’s release.

Meanwhile, creators continue urging the government not to permit fair use in AI training, arguing it threatens the value of original work. The debate is now expected to unfold further in courtrooms instead of regulatory offices.

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Instead of remaining silent, Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after AI-generated content falsely depicted her endorsing Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers despite her refusal to participate in a project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Instead of waiting for further harm, lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the No Fakes Act seek to hold tech platforms accountable, possibly fining them thousands per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.

Cyber attack disrupts Edinburgh school networks

Thousands of Edinburgh pupils were forced to attend school on Saturday after a phishing attack disrupted access to vital online learning resources.

The cyber incident, discovered on Friday, prompted officials to lock users out of the system as a precaution, just days before exams.

Approximately 2,500 students visited secondary schools to reset passwords and restore their access. Although the revision period was interrupted, the council confirmed that no personal data had been compromised.

Council staff acted swiftly to contain the threat, supported by national cybersecurity teams. Ongoing monitoring is in place, with authorities confident that exam schedules will continue unaffected.

Some Google apps are better off without AI

With Google I/O 2025 around the corner, concerns are growing about artificial intelligence creeping into every corner of Google’s ecosystem. While AI has enhanced tools like Gmail and Photos, some users are urging Google to leave certain apps untouched.

These include fan favourites like Emoji Kitchen, Google Keep, and Google Wallet, which continue to shine due to their simplicity and human-focused design. Critics argue that introducing generative AI to these apps could diminish what makes them special.

Emoji Kitchen’s handcrafted stickers, for example, are widely praised compared to Apple’s AI-driven alternatives. Likewise, Google Keep and Wallet are valued for their light, efficient interfaces that serve clear purposes without AI interference.

Even in environments where AI might seem useful, such as Android Auto and Google Flights, the call is for restraint. Users appreciate clear menus and limited distractions over chatbots making unsolicited suggestions.

As AI continues to dominate tech conversations, a growing number of voices are asking Google to preserve the balance between innovation and usability.
