Cheshire’s new AI tool flags stalking before it escalates

Cheshire Police has become the first UK force to use AI in stalking investigations, aiming to identify harmful behaviours earlier. The AI will analyse reports in real time, even as victims speak with call handlers.

The system, trained on data from the force and the Suzy Lamplugh Trust, is designed to detect patterns of stalking even when victims never use the word itself. Currently, officers in the Harm Reduction Unit manually review 10 cases a day.

Det Ch Insp Danielle Knox said the AI will enhance rather than replace police work, and that ethical safeguards are in place. Police and Crime Commissioner Dan Price secured £300,000 to fund the initiative, saying it could be ‘25 times more effective’ than manual investigation.

Survivor ‘Amy’ said earlier intervention might have prevented her violent assault. Three-quarters of the unit’s cases already lead to charges, but police hope AI will improve that success rate and offer victims faster protection.

Google tests AI Mode on Search page

Google is experimenting with a redesigned version of its iconic Search homepage, swapping the familiar ‘I’m Feeling Lucky’ button for a new option called ‘AI Mode.’

The new feature, which began rolling out in limited tests earlier in May, is part of the company’s push to integrate more AI-driven capabilities into everyday search experiences.

According to a Google spokesperson, the change is currently being tested through the company’s experimental Labs platform, though there’s no guarantee it will become a permanent fixture.

The timing is notable, arriving just before Google I/O, the company’s annual developer conference where more AI-focused updates are expected.

Such changes to Google’s main Search page are rare, but the company may feel growing pressure to adapt. Just last week, an Apple executive revealed that Google searches on Safari had declined for the first time, linking the drop to the growing popularity of AI tools like ChatGPT.

By testing ‘AI Mode,’ Google appears to be responding to this shift, exploring ways to stay ahead in an increasingly AI-driven search landscape instead of sticking to its traditional layout.

Republicans seek to block state AI laws for a decade

Republican lawmakers in the US have introduced a proposal that would block states from regulating artificial intelligence for the next ten years. Critics argue the move is a handout to Big Tech and could stall protections already passed in states like California, Utah, and Colorado.

The measure, embedded in a budget reconciliation bill, would prevent states from enforcing rules on a wide range of automated systems, from AI chatbots to algorithms used in health and justice sectors.

Over 500 AI-related bills have been proposed this year at the state level, and many of them would be nullified if the federal ban succeeds. Supporters of the bill claim AI oversight should happen at the national level to avoid a confusing patchwork of state laws.

Opponents, including US Democrats and tech accountability groups, warn the ban could allow unchecked algorithmic discrimination, weaken privacy, and leave the public vulnerable to AI-driven harms.

TikTok unveils AI video feature

TikTok has launched ‘AI Alive,’ its first image-to-video feature that allows users to transform static photos into animated short videos within TikTok Stories.

Accessible only through the Story Camera, the tool applies AI-driven movement and effects—like shifting skies, drifting clouds, or expressive animations—to bring photos to life.

Unlike the text-to-image tools found on Instagram and Snapchat, TikTok’s latest feature goes a step further by generating full videos from single images. Snapchat plans to introduce a similar function, but TikTok has launched first.

All AI Alive videos will carry an AI-generated label and include C2PA metadata to ensure transparency, even when shared beyond the platform.

TikTok emphasises safety, noting that every AI Alive video undergoes several moderation checks before it appears to creators.

Uploaded photos, prompts, and generated videos are reviewed to prevent rule-breaking content. Users can report violations, and final safety reviews are conducted before public sharing.

Click To Do and Settings agent bring AI to Windows 11 beta

Microsoft has rolled out Windows 11 Insider Preview Build 26120.3964 to the Beta Channel for Insiders on version 24H2. Available starting this week, the update delivers key AI-driven enhancements, most notably a new agent built into the Settings app and upgraded text actions.

The AI agent in Settings lets users describe what they need in natural language instead of relying on simple keywords. Microsoft says users can type requests such as ‘how to control my PC by voice’ or ‘my mouse pointer is too small’ to receive personalised help navigating and adjusting system settings.

Initially, the feature is limited to Copilot+ PCs powered by Snapdragon processors and set to English as the primary language. Microsoft plans to expand support to AMD and Intel devices in the near future.

The update also introduces a new FAQs section on the About page under Settings > System. The company says this addition will help users better understand their device’s configuration, performance, and compatibility.

Microsoft is also enhancing its ‘Click To Do’ feature. On Copilot+ PCs with AMD or Intel chips, users can now highlight text (10 words or more) and press Win + Click or Win + Q to access quick AI actions like Summarise, Rewrite, or Create a bulleted list.

These tools are powered by Phi Silica, an on-device small language model. The features require the system language to be English and the user to be signed in with a Microsoft account.

Microsoft notes that Rewrite is temporarily unavailable for users with French or Spanish as their default language but will return in a future update.

SoftBank profit jumps on AI-driven rebound

SoftBank Group reported a 124% surge in quarterly profit, driven by booming AI demand that lifted chip sales and startup valuations. Net income reached ¥517.18 billion ($3.5 billion) in the fiscal fourth quarter, with the Vision Fund swinging back to a profit of ¥26.1 billion.

The results provide momentum for SoftBank’s ambitions to invest heavily in OpenAI and US-based AI infrastructure. Plans include a $30 billion stake in OpenAI and leading a $100 billion push into data centres under the Stargate project, which could eventually grow to $500 billion.

However, investor caution amid tariffs and tech protectionism has delayed detailed financing discussions. Despite these hurdles, SoftBank’s chip unit Arm Holdings has benefited from rising global AI investments, even as near-term forecasts remain mixed.

For the full year, SoftBank earned ¥1.15 trillion, reversing a significant loss from the previous year. The company continues to navigate risks tied to the volatile tech start-up market, especially as Vision Fund portfolio firms go public in India.

Google tests AI tool to automate software development

Google is internally testing an advanced AI tool designed to support software engineers through the entire development cycle, according to The Information. The firm is also expected to demonstrate integration between its Gemini chatbot in voice mode and Android-powered XR headsets.

The agentic AI assistant is said to handle tasks such as code generation and documentation, and has already been previewed to staff and developers ahead of Google’s I/O conference on 20 May. The move reflects a wider trend among tech giants racing to automate programming.

Amazon is developing its own coding assistant, Kiro, which can process both text and visual inputs, detect bugs, and auto-document code. While AWS initially targeted a June launch, the current release date remains uncertain.

Microsoft and Google have claimed that around 30% of their code is now AI-generated. OpenAI is also eyeing expansion, reportedly in talks to acquire AI coding start-up Windsurf for $3 billion.

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training transforms copyrighted content enough to qualify as fair use, and whether it harms creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four factors of fair use: the purpose, the nature of the work, the amount used, and the impact on the original market.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office in the days surrounding the report’s release.

Meanwhile, creators continue urging the government not to permit fair use in AI training, arguing it threatens the value of original work. The debate is now expected to unfold further in courtrooms instead of regulatory offices.

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after AI-generated content falsely depicted her endorsing Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers after she had declined to take part in the project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals such as the NO FAKES Act seek to hold tech platforms accountable, potentially fining them thousands of dollars per violation. As Curtis and others warn, without stronger protections the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.

Morrisons tests Tally robots amid job cut fears

Supermarket giant Morrisons has introduced shelf-scanning robots in several of its UK stores as part of a push to streamline operations and improve inventory accuracy.

The robots, known as Tally, are currently being trialled in three branches—Wetherby, Redcar, and Stockton—where they autonomously roam aisles to monitor product placement, stock levels, and pricing.

Developed by US-based Simbe Robotics, Tally is billed as the world’s first autonomous item-scanning robot, capable of scanning up to 30,000 items per hour with 99% accuracy.

Already in use by major international retailers including Carrefour and Kroger, the robot is designed to operate in a range of retail environments, from chilled aisles to traditional shelves.

Morrisons says the robots will enhance store efficiency and reduce out-of-stock issues, but the move has sparked concern after reports that as many as 365 employees could lose their jobs due to automation.

The robots are part of a broader trend in retail toward AI-powered tools that boost productivity—but often at the expense of human labour.

Tally units are slim, mobile, and equipped with friendly digital faces. They return automatically to their charging stations when power runs low, and operate with minimal staff intervention.

While Morrisons has not confirmed a wider rollout in the UK, the trial reflects a growing shift in retail automation. As AI technologies evolve, companies are weighing the balance between operational gains and workforce impact.
