Android adds new scam protection for phone calls

Google is introducing new protections on Android devices to combat phone call scams, particularly those involving screen-sharing and app installations. Users will see warning messages if they attempt to change settings during a call, and Android will also block attempts to deactivate Play Protect.

The system will now block users from sideloading apps or granting accessibility permissions while on a call with unknown contacts.

The new tools are available on devices running Android 16, and select protections are also rolling out to older versions, starting with Android 11.

A separate pilot in the UK will alert users trying to open banking apps during a screen-sharing call, prompting them to end the call or wait before proceeding.

These features expand Android’s broader efforts to prevent fraud, which already include AI-based scam detection for phone calls and messages.

Lawyers sanctioned over false AI-generated citations

A federal judge in California has sanctioned two law firms for submitting a legal brief containing fake citations generated by AI tools. Judge Michael Wilner described the AI-generated references as ‘bogus’ and fined the firms $31,000, criticising them for failing to properly check the sources.

The legal document in question was based on an outline created with Google Gemini and AI tools within Westlaw.

That draft was then handed off to another firm, K&L Gates, which included the fabricated citations without verifying their authenticity. Judge Wilner noted that at least two cases referenced in the filing did not exist at all.

He warned that undisclosed reliance on AI could mislead US courts and compromise legal integrity. This case adds to a growing list of incidents where lawyers misused AI, mistakenly treating chatbots as legitimate research tools.

The judge called the actions professionally reckless and said no competent attorney should outsource research to AI without careful oversight.

Cheshire’s new AI tool flags stalking before it escalates

Cheshire Police has become the first UK force to use AI in stalking investigations, aiming to identify harmful behaviours earlier. The AI will analyse reports in real time, even as victims speak with call handlers.

The system, trained using data from the force and the Suzy Lamplugh Trust, is designed to detect stalking patterns—even if the term isn’t used directly. Currently, officers in the Harm Reduction Unit manually review 10 cases a day.

Det Ch Insp Danielle Knox said AI will enhance, not replace, police work, and ethical safeguards are in place. Police and Crime Commissioner Dan Price secured £300,000 to fund the initiative, saying it could be ’25 times more effective’ than manual investigation.

Survivor ‘Amy’ said earlier intervention might have prevented her violent assault. Three-quarters of the unit’s cases already lead to charges, but police hope AI will improve that success rate and offer victims faster protection.

Google tests AI Mode on Search page

Google is experimenting with a redesigned version of its iconic Search homepage, swapping the familiar ‘I’m Feeling Lucky’ button for a new option called ‘AI Mode.’

The feature, which began rolling out in limited tests earlier in May, is part of the company’s push to integrate more AI-driven capabilities into everyday search experiences.

According to a Google spokesperson, the change is currently being tested through the company’s experimental Labs platform, though there’s no guarantee it will become a permanent fixture.

The timing is notable, arriving just before Google I/O, the company’s annual developer conference where more AI-focused updates are expected.

Such changes to Google’s main Search page are rare, but the company may feel growing pressure to adapt. Just last week, an Apple executive revealed that Google searches on Safari had declined for the first time, linking the drop to the growing popularity of AI tools like ChatGPT.

By testing ‘AI Mode,’ Google appears to be responding to this shift, exploring ways to stay ahead in an increasingly AI-driven search landscape instead of sticking to its traditional layout.

Republicans seek to block state AI laws for a decade

Republican lawmakers in the US have introduced a proposal that would block states from regulating artificial intelligence for the next ten years. Critics argue the move is a handout to Big Tech and could stall protections already passed in states like California, Utah, and Colorado.

The measure, embedded in a budget reconciliation bill, would prevent states from enforcing rules on a wide range of automated systems, from AI chatbots to algorithms used in health and justice sectors.

Over 500 AI-related bills have been proposed this year at the state level, and many of them would be nullified if the federal ban succeeds. Supporters of the bill claim AI oversight should happen at the national level to avoid a confusing patchwork of state laws.

Opponents, including US Democrats and tech accountability groups, warn the ban could allow unchecked algorithmic discrimination, weaken privacy, and leave the public vulnerable to AI-driven harms.

TikTok unveils AI video feature

TikTok has launched ‘AI Alive,’ its first image-to-video feature that allows users to transform static photos into animated short videos within TikTok Stories.

Accessible only through the Story Camera, the tool applies AI-driven movement and effects—like shifting skies, drifting clouds, or expressive animations—to bring photos to life.

Unlike text-to-image tools found on Instagram and Snapchat, TikTok’s latest feature takes visual storytelling further by enabling full video generation from single images. Although Snapchat plans to introduce a similar function, TikTok has moved ahead with this innovation.

All AI Alive videos will carry an AI-generated label and include C2PA metadata to ensure transparency, even when shared beyond the platform.
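
C2PA ‘Content Credentials’ travel with the file as a signed manifest that downstream tools can inspect. As a rough illustration only, the Python sketch below shows how a third party might check a downloaded clip for such a manifest using the open-source c2pa-python SDK; the read_file helper, its arguments, and the file name are assumptions based on the SDK’s documented usage and may differ across versions.

    # Illustrative sketch only: check a downloaded video for C2PA Content Credentials.
    # Assumes the c2pa-python SDK (pip install c2pa-python); read_file and its
    # signature follow the SDK's documented usage, which may vary by version.
    import json
    import c2pa

    def describe_credentials(path: str) -> None:
        try:
            # read_file is assumed to return the embedded manifest store as JSON;
            # the second argument is a directory where binary resources are extracted.
            manifest_store = json.loads(c2pa.read_file(path, "c2pa_resources"))
        except Exception as exc:  # no manifest, unsupported format, or validation failure
            print(f"No readable C2PA manifest in {path}: {exc}")
            return
        active = manifest_store.get("active_manifest", "")
        manifest = manifest_store.get("manifests", {}).get(active, {})
        print("Claim generator:", manifest.get("claim_generator"))
        for assertion in manifest.get("assertions", []):
            print("Assertion:", assertion.get("label"))

    describe_credentials("ai_alive_story.mp4")  # hypothetical file name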

TikTok emphasises safety, noting that every AI Alive video undergoes several moderation checks before it appears to creators.

Uploaded photos, prompts, and generated videos are reviewed to prevent rule-breaking content. Users can report violations, and final safety reviews are conducted before public sharing.

Click To Do and Settings agent bring AI to Windows 11 beta

Microsoft has rolled out Windows 11 Insider Preview Build 26120.3964 to the Beta Channel, marking the official start of the 24H2 version. Available to Insider users starting this week, the update delivers key AI-driven enhancements—most notably, a new agent built into the Settings app and upgraded text actions.

The AI agent in Settings allows users to interact using natural language instead of simple keywords. Microsoft says users can ask questions like ‘how to control my PC by voice’ or ‘my mouse pointer is too small’ to receive personalised help navigating and adjusting system settings.

Initially, the feature is limited to Copilot+ PCs powered by Snapdragon processors with English set as the primary language. Microsoft plans to expand support to AMD and Intel devices in the near future.

The update also introduces a new FAQs section on the About page under Settings > System. The company says this addition will help users better understand their device’s configuration, performance, and compatibility.

Microsoft is also enhancing its ‘Click To Do’ feature. On Copilot+ PCs with AMD or Intel chips, users can now highlight text (10 words or more) and press Win + Click or Win + Q to access quick AI actions like Summarise, Rewrite, or Create a bulleted list.

These tools are powered by Phi Silica, an on-device small language model. The features require the system language to be English and the user to be signed in with a Microsoft account.

Microsoft notes that Rewrite is temporarily unavailable for users with French or Spanish as their default language but will return in a future update.

SoftBank profit jumps on AI-driven rebound

SoftBank Group reported a 124% surge in quarterly profit, driven by booming AI demand that lifted chip sales and startup valuations. Net income reached ¥517.18 billion ($3.5 billion) in the fiscal fourth quarter, with the Vision Fund swinging back to a profit of ¥26.1 billion.

The results provide momentum for SoftBank’s ambitions to invest heavily in OpenAI and US-based AI infrastructure. Plans include a $30 billion stake in OpenAI and leading a $100 billion push into data centres under the Stargate project, which could eventually grow to $500 billion.

However, investor caution amid tariffs and tech protectionism has delayed detailed financing discussions. Despite these hurdles, SoftBank’s chip unit Arm Holdings has benefited from rising global AI investments, even as near-term forecasts remain mixed.

For the full year, SoftBank earned ¥1.15 trillion, reversing a significant loss from the previous year. The company continues to navigate risks tied to the volatile tech start-up market, especially as Vision Fund portfolio firms go public in India.

Google tests AI tool to automate software development

Google is internally testing an advanced AI tool designed to support software engineers through the entire development cycle, according to The Information. The firm is also expected to demonstrate integration between its Gemini chatbot in voice mode and Android-powered XR headsets.

The agentic AI assistant is said to handle tasks such as code generation and documentation, and has already been previewed to staff and developers ahead of Google’s I/O conference on 20 May. The move reflects a wider trend among tech giants racing to automate programming.

Amazon is developing its own coding assistant, Kiro, which can process both text and visual inputs, detect bugs, and auto-document code. While AWS initially targeted a June launch, the current release date remains uncertain.

Microsoft and Google have claimed that around 30% of their code is now AI-generated. OpenAI is also eyeing expansion, reportedly in talks to acquire AI coding start-up Windsurf for $3 billion.

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training methods transform content enough to justify legal protection, and whether they harm creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four factors of fair use: the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office days before the report’s release.

Meanwhile, creators continue urging the government not to permit fair use in AI training, arguing it threatens the value of original work. The debate is now expected to unfold further in courtrooms instead of regulatory offices.
