Elton John threatens legal fight over AI use

Sir Elton John has lashed out at the UK government over plans that could allow AI companies to use copyrighted content without paying artists, calling ministers ‘absolute losers’ and accusing them of ‘thievery on a high scale.’

He warned that younger musicians, without the means to challenge tech giants, would be most at risk if the proposed changes go ahead.

The row centres on a rejected House of Lords amendment to the Data Bill, which would have required AI firms to disclose what material they use.

Despite a strong majority in favour in the Lords, the Commons blocked the move, meaning the bill will keep bouncing between the two chambers until a compromise is reached.

Sir Elton, joined by playwright James Graham, said the government was failing to defend creators and seemed more interested in appeasing powerful tech firms.

More than 400 artists, including Sir Paul McCartney, have signed a letter urging Prime Minister Sir Keir Starmer to strengthen copyright protections instead of allowing AI to mine their work unchecked.

While the government insists no changes will be made unless they benefit creators, critics say the current approach risks sacrificing the UK’s music industry for Silicon Valley’s gain.

Sir Elton has threatened legal action if the plans go ahead, saying, ‘We’ll fight it all the way.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK workers struggle to keep up with AI

AI is reshaping the UK workplace, but many employees feel unprepared to keep pace, according to a major new study by Henley Business School.

While 56% of full-time professionals expressed optimism about AI’s potential, 61% admitted they were overwhelmed by how quickly the technology is evolving.

The research surveyed over 4,500 people across nearly 30 sectors, offering what experts call a clear snapshot of AI’s uneven integration into British industries.

Professor Keiichi Nakata, director of AI at The World of Work Institute, said workers are willing to embrace AI, but often lack the training and guidance to do so effectively.

Instead of empowering staff through hands-on learning and clear internal policies, many companies are leaving their workforce under-supported.

Nearly a quarter of respondents said their employers were failing to provide sufficient help, while three in five said they would use AI more if proper training were available.

Professor Nakata argued that AI has the power to simplify tasks, remove repetitive duties, and free up time for more meaningful work.

But he warned that without better support, businesses risk missing out on what could be a transformative force for both productivity and employee satisfaction.

Android adds new scam protection for phone calls

Google is introducing new protections on Android devices to combat phone call scams, particularly those involving screen sharing and app installations. Users will see warning messages if they attempt to change settings during a call, and Android will also block the deactivation of Play Protect features.

The system will now block users from sideloading apps or granting accessibility permissions while on a call with unknown contacts.

The new tools are available on devices running Android 16, and select protections are also rolling out to older versions, starting with Android 11.

A separate pilot in the UK will alert users trying to open banking apps during a screen-sharing call, prompting them to end the call or wait before proceeding.

These features expand Android’s broader efforts to prevent fraud, which already include AI-based scam detection for phone calls and messages.

Kick faces investigation after ignoring Ofcom risk assessment request

Ofcom has launched two investigations into Kick Online Entertainment, the provider of a pornography website, over potential breaches of the Online Safety Act.

The regulator said the company failed to respond to a statutory request for a risk assessment related to illegal content appearing on the platform.

As a result, Ofcom is investigating whether Kick has failed to meet its legal obligations to complete and retain a record of such a risk assessment, and whether it failed to respond to the regulator’s information request.

Ofcom confirmed it had received complaints about potentially illegal material on the site, including child sexual abuse content and extreme pornography.

It is also considering a third investigation into whether the platform has implemented adequate safety measures to protect users from such material—another requirement under the Act.

Under the Online Safety Act, firms found in breach can face fines of up to £18 million or 10% of their global revenue, whichever is higher. In the most severe cases, Ofcom can pursue court orders to block UK access to the website or compel payment providers and advertisers to cut ties with the platform.

Cheshire’s new AI tool flags stalking before it escalates

Cheshire Police has become the first UK force to use AI in stalking investigations, aiming to identify harmful behaviours earlier. The AI will analyse reports in real time, even as victims speak with call handlers.

The system, trained using data from the force and the Suzy Lamplugh Trust, is designed to detect stalking patterns—even if the term isn’t used directly. Currently, officers in the Harm Reduction Unit manually review 10 cases a day.

Det Ch Insp Danielle Knox said AI will enhance, not replace, police work, and ethical safeguards are in place. Police and Crime Commissioner Dan Price secured £300,000 to fund the initiative, saying it could be ‘25 times more effective’ than manual investigation.

Survivor ‘Amy’ said earlier intervention might have prevented her violent assault. Three-quarters of the unit’s cases already lead to charges, but police hope AI will improve that success rate and offer victims faster protection.

UK artists urge PM to shield creative work from AI exploitation

More than 400 prominent British artists, including Dua Lipa, Elton John, and Sir Ian McKellen, have signed a letter urging Prime Minister Keir Starmer to update UK copyright laws to protect their work from being used without consent in training AI systems. The signatories argue that current laws leave their creative output vulnerable to exploitation by tech companies, which could ultimately undermine the UK’s status as a global cultural leader.

The artists are backing a proposed amendment to the Data (Use and Access) Bill by Baroness Beeban Kidron, requiring AI developers to disclose when and how they use copyrighted materials. They believe this transparency could pave the way for licensing agreements that respect the rights of creators while allowing responsible AI development.

Nobel laureate Kazuo Ishiguro and music legends like Paul McCartney and Kate Bush have joined the call, warning that creators risk ‘giving away’ their life’s work to powerful tech firms. While the government insists it is consulting all parties to ensure a balanced outcome that supports both the creative sector and AI innovation, not everyone supports the amendment.

Critics, like Julia Willemyns of the Centre for British Progress, argue that stricter copyright rules could stifle technological growth, push development offshore, and damage the UK economy.

Why does it matter?

The debate reflects growing global tension between protecting intellectual property and enabling AI progress. With a key vote approaching in the House of Lords, artists are pressing for urgent action to secure a fair and sustainable path forward that upholds innovation and artistic integrity.

Google faces DOJ’s request to sell key ad platforms

The US Department of Justice (DOJ) has moved to break up Google’s advertising technology business after a federal judge ruled that the company holds illegal monopolies in two online advertising markets.

The DOJ is seeking the sale of Google’s AdX digital advertising marketplace and its DFP platform, which helps publishers manage their ad inventory.

It follows a ruling in April by US District Judge Leonie Brinkema, who found that Google’s dominance in the online advertising market violated antitrust laws.

AdX and DFP were key acquisitions for Google, particularly the purchase of DoubleClick in 2008 for $3.1 billion. The DOJ argues that Google used monopolistic tactics, such as acquisitions and customer lock-ins, to control the ad tech market and stifle competition.

In response, Google has disputed the DOJ’s move, claiming the proposed sale of its advertising tools exceeds the court’s findings and could harm publishers and advertisers.

The DOJ’s latest filing also comes amid a separate legal action over Google’s Chrome browser, and the company is facing additional scrutiny in the UK for its dominance in the online search market.

The UK’s Competition and Markets Authority (CMA) has found that Google engaged in anti-competitive practices in open-display advertising technology.

MoJ explores AI for criminal court transcripts

The UK government is actively examining the use of AI to produce official transcripts of criminal court proceedings, but ministers have stressed that any technology must meet the high standards currently achieved by human professionals.

The Ministry of Justice (MoJ) is considering introducing AI-driven transcription services in the Crown Court to help reduce costs, according to Sarah Sackman, the minister responsible for court reform, AI, and digitisation.

Sackman, responding to a parliamentary question from MP David Davis, emphasised that accuracy remains the top priority. She explained that transcripts must be of an extremely high standard to protect the interests of parties, witnesses, and victims.

At present, transcription is delivered manually by third-party suppliers who are contractually required to achieve 99.5% accuracy.

AI-based solutions would need to meet a similar threshold before being adopted. Sackman added that while the MoJ is actively exploring the technology, reducing costs cannot come at the expense of reliability.

In 2023, the Ministry established a four-year, £20 million framework agreement for court reporting and transcription services.

Eight suppliers, including Appen, Epiq, and Opus 2, are providing services across three categories: remote transcription from recordings, on-site transcription refined into final documents, and real-time transcription for instant use.

Although AI could eventually transform how transcripts are created, any new systems will need to prove they can match the performance and accuracy of human transcribers before replacing existing methods.

AI adoption soars in the UK but skills gap looms

AI adoption in the UK has grown rapidly, rising by 33% over the past year. According to a new report from AWS, 52% of UK businesses are now using AI technologies, up from 39% in the previous year.

Adoption has become so widespread that a business implements new AI tools or strategies every 60 seconds. The benefits are becoming more obvious too, with 92% of AI adopters reporting revenue increases, compared with 64% in 2024.

However, the report highlights a growing divide in AI readiness. While large enterprises and startups share similar adoption rates of 55% and 59% respectively, startups appear better prepared for technological shifts.

Twice as many startups (31%) have developed comprehensive AI strategies compared with larger companies (15%), suggesting agility and forward planning remain crucial.

Despite the progress, serious challenges remain. Skills shortages are slowing businesses down, with nearly 38% citing a lack of expertise as a major barrier, up from 29% last year.

Almost half report delays in hiring qualified talent, with recruitment taking an average of five and a half months. As AI becomes more integrated, it is expected that 47% of new jobs will require AI literacy in the next three years.

In response, AWS has launched a UK initiative to train 100,000 people in AI skills by 2030. The programme includes partnerships with universities such as Exeter and Manchester.

According to the UK Government’s own projections, improved AI adoption could unlock £45 billion per year in public sector savings and productivity. Still, AWS warns that unless skill gaps are addressed, the country risks developing a two-tier AI economy.

New AI app offers early support for parents of neurodivergent children

A new app called Hazel, developed by Bristol-based company Spicy Minds, offers parents a powerful tool to better understand and support their neurodivergent children while waiting for formal diagnoses. Using AI, the app runs a series of tests and then provides personalised strategies tailored to everyday challenges like school routines or holidays.

While it doesn’t replace a medical diagnosis, Hazel aims to fill a critical gap for families stuck on long waiting lists. Spicy Minds CEO Ben Cosh emphasised the need for quicker support, noting that many families wait years before receiving an autism diagnosis through the UK’s NHS.

‘Parents shouldn’t have to wait years to understand their child’s needs and get practical support,’ he said.

In Bristol alone, around 7,000 children are currently on waiting lists for an autism assessment, a number that continues to rise. Parents like Nicola Bennett, who waited five years for her son’s diagnosis, believe the app could be life-changing.

She praised Hazel for offering real-time guidance for managing sensory needs and daily planning—tools she wished she’d had much earlier. She also suggested integrating links to local support groups and services to make the app even more impactful.

By helping reduce stress and giving families a head start on understanding neurodiversity, Hazel represents a meaningful step toward more accessible, tech-driven support for parents navigating a complex and often delayed healthcare system.
