Android adds new scam protection for phone calls

Google is introducing new protections on Android devices to combat phone call scams, particularly those involving screen sharing and app installations. Users will see warning messages if they attempt to change sensitive settings during a call, and Android will also block attempts to deactivate Play Protect features.

The system will now block users from sideloading apps or granting accessibility permissions while on a call with unknown contacts.
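
In effect, the protection is a simple decision rule: a risky action attempted during a call with an unknown number is blocked or triggers a warning. Here is a minimal illustrative sketch of that rule — hypothetical names only, not the actual Android implementation or any real Android API:

```python
# Hypothetical sketch of the decision rule described above — not a real
# Android API. Risky actions are blocked during calls with unknown numbers.
RISKY_ACTIONS = {"sideload_app", "grant_accessibility", "disable_play_protect"}

def allow_action(action: str, in_call: bool, caller_is_known: bool) -> bool:
    """Return False (block and warn) for risky actions during calls with unknown contacts."""
    if action in RISKY_ACTIONS and in_call and not caller_is_known:
        return False
    return True
```

The same action remains permitted outside a call, or when the caller is a known contact, which matches how the feature is described.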

The new tools are available on devices running Android 16, and select protections are also rolling out to older versions, starting with Android 11.

A separate pilot in the UK will alert users trying to open banking apps during a screen-sharing call, prompting them to end the call or wait before proceeding.

These features expand Android’s broader efforts to prevent fraud, which already include AI-based scam detection for phone calls and messages.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kick faces investigation after ignoring Ofcom risk assessment request

Ofcom has launched two investigations into Kick Online Entertainment, the provider of a pornography website, over potential breaches of the Online Safety Act.

The regulator said the company failed to respond to a statutory request for a risk assessment related to illegal content appearing on the platform.

As a result, Ofcom is investigating whether Kick has failed to meet its legal obligations to complete and retain a record of such a risk assessment, as well as for not responding to the regulator’s information request.

Ofcom confirmed it had received complaints about potentially illegal material on the site, including child sexual abuse content and extreme pornography.

It is also considering a third investigation into whether the platform has implemented adequate safety measures to protect users from such material—another requirement under the Act.

Under the Online Safety Act, firms found in breach can face fines of up to £18 million or 10% of their global revenue, whichever is higher. In the most severe cases, Ofcom can pursue court orders to block UK access to the website or compel payment providers and advertisers to cut ties with the platform.

Cheshire’s new AI tool flags stalking before it escalates

Cheshire Police has become the first UK force to use AI in stalking investigations, aiming to identify harmful behaviours earlier. The AI will analyse reports in real time, even as victims speak with call handlers.

The system, trained using data from the force and the Suzy Lamplugh Trust, is designed to detect stalking patterns—even if the term isn’t used directly. Currently, officers in the Harm Reduction Unit manually review 10 cases a day.

Det Ch Insp Danielle Knox said AI will enhance, not replace, police work, and that ethical safeguards are in place. Police and Crime Commissioner Dan Price secured £300,000 to fund the initiative, saying it could be ‘25 times more effective’ than manual investigation.

Survivor ‘Amy’ said earlier intervention might have prevented her violent assault. Three-quarters of the unit’s cases already lead to charges, but police hope AI will improve that success rate and offer victims faster protection.

UK artists urge PM to shield creative work from AI exploitation

More than 400 prominent British artists, including Dua Lipa, Elton John, and Sir Ian McKellen, have signed a letter urging Prime Minister Keir Starmer to update UK copyright laws to protect their work from being used without consent in training AI systems. The signatories argue that current laws leave their creative output vulnerable to exploitation by tech companies, which could ultimately undermine the UK’s status as a global cultural leader.

The artists are backing a proposed amendment to the Data (Use and Access) Bill by Baroness Beeban Kidron, requiring AI developers to disclose when and how they use copyrighted materials. They believe this transparency could pave the way for licensing agreements that respect the rights of creators while allowing responsible AI development.

Nobel laureate Kazuo Ishiguro and music legends like Paul McCartney and Kate Bush have joined the call, warning that creators risk ‘giving away’ their life’s work to powerful tech firms. While the government insists it is consulting all parties to ensure a balanced outcome that supports both the creative sector and AI innovation, not everyone supports the amendment.

Critics, like Julia Willemyns of the Centre for British Progress, argue that stricter copyright rules could stifle technological growth, push development offshore, and damage the UK economy.

Why does it matter?

The debate reflects growing global tension between protecting intellectual property and enabling AI progress. With a key vote approaching in the House of Lords, artists are pressing for urgent action to secure a fair and sustainable path forward that upholds innovation and artistic integrity.

Google faces DOJ’s request to sell key ad platforms

The US Department of Justice (DOJ) has moved to break up Google’s advertising technology business after a federal judge ruled that the company holds illegal monopolies across two markets.

The DOJ is seeking the sale of Google’s AdX digital advertising marketplace and its DFP platform, which helps publishers manage their ad inventory.

It follows an April ruling by US District Judge Leonie Brinkema, who found that Google’s dominance in the online advertising market violated antitrust laws.

AdX and DFP stem from key acquisitions, most notably Google’s $3.1 billion purchase of DoubleClick in 2008. The DOJ argues that Google used monopolistic tactics, such as acquisitions and customer lock-ins, to control the ad tech market and stifle competition.

In response, Google has disputed the DOJ’s move, claiming the proposed sale of its advertising tools exceeds the court’s findings and could harm publishers and advertisers.

The DOJ’s latest filing also comes amid a separate legal action over Google’s Chrome browser, and the company is facing additional scrutiny in the UK for its dominance in the online search market.

The UK’s Competition and Markets Authority (CMA) has found that Google engaged in anti-competitive practices in open-display advertising technology.

MoJ explores AI for criminal court transcripts

The UK government is actively examining the use of AI to produce official transcripts of criminal court proceedings, but ministers have stressed that any technology must meet the high standards currently achieved by human professionals.

The Ministry of Justice (MoJ) is considering introducing AI-driven transcription services in the Crown Court to help reduce costs, according to Sarah Sackman, the minister responsible for court reform, AI, and digitisation.

Sackman, responding to a parliamentary question from MP David Davis, emphasised that accuracy remains the top priority. She explained that transcripts must be of an extremely high standard to protect the interests of parties, witnesses, and victims.

At present, transcription is delivered manually by third-party suppliers who are contractually required to achieve 99.5% accuracy.

AI-based solutions would need to meet a similar threshold before being adopted. Sackman added that while the MoJ is actively exploring the technology, reducing costs cannot come at the expense of reliability.
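
That 99.5% figure is a word-level accuracy target. One common way to measure such a target is via word error rate (WER), with accuracy taken as 1 − WER. The following sketch is illustrative only — the MoJ’s actual contractual metric is not specified in this detail:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit (Levenshtein) distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

ACCURACY_THRESHOLD = 0.995  # the contractual figure quoted above

def meets_threshold(reference: str, hypothesis: str) -> bool:
    return (1.0 - word_error_rate(reference, hypothesis)) >= ACCURACY_THRESHOLD
```

At this threshold, a transcript may contain at most one word-level error per 200 reference words, which illustrates how demanding the standard is for an AI system.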

In 2023, the Ministry established a four-year, £20 million framework agreement for court reporting and transcription services.

Eight suppliers, including Appen, Epiq, and Opus 2, are providing services across three categories: remote transcription from recordings, on-site transcription refined into final documents, and real-time transcription for instant use.

Although AI could eventually transform how transcripts are created, any new systems will need to prove they can match the performance and accuracy of human transcribers before replacing existing methods.

AI adoption soars in the UK but skills gap looms

AI adoption in the UK has grown rapidly, rising by 33% over the past year. According to a new report from AWS, 52% of UK businesses are now using AI technologies, up from 39% in the previous year.

Adoption has become so widespread that a business implements new AI tools or strategies every 60 seconds. The benefits are becoming more obvious too, with 92% of AI adopters reporting revenue increases, compared with 64% in 2024.

However, the report highlights a growing divide in AI readiness. While large enterprises and startups share similar adoption rates of 55% and 59% respectively, startups appear better prepared for technological shifts.

Twice as many startups (31%) have developed comprehensive AI strategies compared with larger companies (15%), suggesting agility and forward planning remain crucial.

Despite the progress, serious challenges remain. Skills shortages are slowing businesses down, with nearly 38% citing a lack of expertise as a major barrier, up from 29% last year.

Almost half report delays in hiring qualified talent, with recruitment taking an average of five and a half months. As AI becomes more integrated, it is expected that 47% of new jobs will require AI literacy in the next three years.

In response, AWS has launched a UK initiative to train 100,000 people in AI skills by 2030. The programme includes partnerships with universities such as Exeter and Manchester.

According to the UK Government’s own projections, improved AI adoption could unlock £45 billion per year in public sector savings and productivity. Still, AWS warns that unless skill gaps are addressed, the country risks developing a two-tier AI economy.

New AI app offers early support for parents of neurodivergent children

A new app called Hazel, developed by Bristol-based company Spicy Minds, offers parents a tool to better understand and support their neurodivergent children while they wait for a formal diagnosis. Using AI, the app runs a series of tests and then provides personalised strategies tailored to everyday challenges like school routines or holidays.

While it doesn’t replace a medical diagnosis, Hazel aims to fill a critical gap for families stuck on long waiting lists. Spicy Minds CEO Ben Cosh emphasised the need for quicker support, noting that many families wait years before receiving an autism diagnosis through the UK’s NHS.

‘Parents shouldn’t have to wait years to understand their child’s needs and get practical support,’ he said.

In Bristol alone, around 7,000 children are currently on waiting lists for an autism assessment, a number that continues to rise. Parents like Nicola Bennett, who waited five years for her son’s diagnosis, believe the app could be life-changing.

She praised Hazel for offering real-time guidance for managing sensory needs and daily planning—tools she wished she’d had much earlier. She also suggested integrating links to local support groups and services to make the app even more impactful.

By helping reduce stress and giving families a head start on understanding neurodiversity, Hazel represents a meaningful step toward more accessible, tech-driven support for parents navigating a complex and often delayed healthcare system.

Hackers target UK retailers with fake IT calls

British retailers are facing a new wave of cyberattacks as hackers impersonate IT help desk staff to infiltrate company systems. The National Cyber Security Centre (NCSC) has issued an urgent warning following breaches at major firms including Marks & Spencer, Co-op, and Harrods.

Attackers use sophisticated social engineering tactics—posing as locked-out employees or IT support staff—to trick individuals into giving up passwords and security details. The NCSC urges companies to strengthen how their IT help desks verify employee identities, particularly when handling password resets for senior staff.

Security experts in the UK recommend using multi-step verification methods and even code words to confirm identities over the phone. These additional layers are vital, as attackers increasingly exploit trust and human error rather than technical vulnerabilities.
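
The multi-step approach the experts describe amounts to a simple policy: a sensitive request, such as a password reset, proceeds only if a pre-agreed code word matches and several further independent checks succeed. A hypothetical sketch of such a policy — the names and the threshold here are assumptions, not any published NCSC standard:

```python
# Hypothetical help-desk verification policy — illustrative only.
MIN_EXTRA_CHECKS = 2  # independent checks required beyond the code word

def approve_reset(code_word_matches: bool, passed_checks: set) -> bool:
    """Approve a password reset only when the pre-agreed code word matches
    AND at least MIN_EXTRA_CHECKS other verification steps have succeeded."""
    return code_word_matches and len(passed_checks) >= MIN_EXTRA_CHECKS
```

Requiring the code word *and* additional checks means a single leaked detail is never enough, which is the point of layering defences against social engineering.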

While the NCSC hasn’t named any group officially, the style of attack closely resembles the methods of Scattered Spider, a loosely connected network of young, English-speaking hackers. Known for high-profile cyber incidents—including attacks on Las Vegas casinos and public transport systems—the group often coordinates via platforms like Discord and Telegram.

However, those claiming responsibility for the latest breaches deny links to Scattered Spider, calling themselves ‘DragonForce.’ Speaking to the BBC, the group claimed to have stolen significant customer and employee data from Co-op and hinted at more disruptions in the future.

The NCSC is investigating with law enforcement to determine whether DragonForce is a new player or simply a rebranded identity of the same well-known threat actors.

Cyber incident disrupts services at Marks & Spencer

Marks & Spencer has confirmed that a cyberattack has disrupted food availability in some stores and forced the temporary shutdown of online services. The company has not officially confirmed the nature of the breach, but cybersecurity experts suspect a ransomware attack.

The retailer paused clothing and home orders on its website and app after issues arose over the Easter weekend, affecting contactless payments and click-and-collect systems. M&S said it took some systems offline as a precautionary measure.

Reports have linked the incident to the hacking group Scattered Spider, although M&S has declined to comment further or provide a timeline for the resumption of online orders. The disruption has already led to minor product shortages, and analysts anticipate a short-term hit to profits.

Still, M&S’s food division had been performing strongly, with grocery spending rising 14.4% year-on-year, according to Kantar. The retailer, which operates around 1,000 UK stores, earns about one-third of its non-food sales online. Shares dropped earlier in the week but closed Tuesday slightly up.
