National Crime Agency responds to AI crime warning

The National Crime Agency (NCA) has pledged to ‘closely examine’ recommendations from the Alan Turing Institute after a recent report highlighted the UK’s insufficient preparedness for AI-enabled crime.

The report, from the Centre for Emerging Technology and Security (CETaS), urges the NCA to create a task force to address AI crime within the next five years.

Although AI-enabled crime is still in its early stages, the report warns that criminals are rapidly advancing their use of AI and outpacing law enforcement’s ability to respond.

CETaS claims that UK police forces have been slow to adopt AI themselves, which could leave them vulnerable to increasingly sophisticated crimes, such as child sexual abuse, cybercrime, and fraud.

The Alan Turing Institute emphasises that although AI-specific legislation may be needed eventually, the immediate priority is for law enforcement to integrate AI into their crime-fighting efforts.

An initiative like this would involve using AI tools to combat AI-enabled crimes effectively, as fraudsters and criminals exploit AI’s potential to deceive.

While AI crime remains a relatively new phenomenon, recent examples such as the $25 million deepfake CFO fraud show the growing threat.

The report also highlights the role of AI in phishing scams, romance fraud, and other deceptive practices, warning that future AI-driven crimes may become harder to detect as technology evolves.

Authors in London protest Meta’s alleged copyright violations

A wave of protest has hit Meta’s London headquarters today as authors and publishing professionals gather to voice their outrage over the tech giant’s reported use of pirated books to develop AI tools.

Among the protesters are acclaimed novelists Kate Mosse and Tracy Chevalier and poet Daljit Nagra, who assembled in Granary Square near Meta’s King’s Cross office to deliver a complaint letter from the Society of Authors (SoA).

At the heart of the protest is Meta’s alleged reliance on LibGen, a so-called ‘shadow library’ known for hosting over 7.5 million books, many without the consent of their authors.

A recent searchable database published by The Atlantic revealed that thousands of copyrighted works, including those by renowned authors, may have been used to train Meta’s AI models, provoking public outcry and legal action in the US.

Vanessa Fox O’Loughlin, chair of the SoA, condemned Meta’s reported actions as ‘illegal, shocking, and utterly devastating for writers,’ arguing that such practices devalue authors’ time and creativity.

‘A book can take a year or longer to write. Meta has stolen books so that their AI can reproduce creative content, potentially putting these same authors out of business,’ she said.

Meta has denied any wrongdoing, with a spokesperson stating that the company respects intellectual property rights and believes its AI training practices comply with existing laws.

Still, the damage to trust within the creative community appears significant. Author AJ West, who discovered his novels were listed on LibGen, described the experience as a personal violation:

‘I was horrified to see that my novels were on the LibGen database, and I’m disgusted by the government’s silence on the matter,’ he said, adding, ‘To have my beautiful books ripped off like this without my permission and without a penny of compensation then fed to the AI monster feels like I’ve been mugged.’

Legal action is already underway in the US, where a group of high-profile writers, including Ta-Nehisi Coates, Junot Díaz, and Sarah Silverman, have filed a lawsuit against Meta for copyright infringement.

The suit alleges that Meta CEO Mark Zuckerberg and other top executives knew that LibGen hosted pirated content when they greenlit its use for AI development.

The protest is also aimed at UK lawmakers. Authors like Richard Osman and Kazuo Ishiguro have joined the call for British officials to summon Meta executives before parliament.

The Society of Authors has launched a petition on Change.org that has already attracted over 7,000 signatures.

Demonstrators were urged to bring placards and spread their message online using hashtags like #MetaBookThieves and #MakeItFair as they rallied against alleged copyright violations and called for broader protection of creative work in the age of AI.

The case is one of many that illustrate the increasingly tense relationship between the tech industry and policies governing the content and data used to train AI systems. Such systems depend heavily on the written word, drawing on a wide range of literature, facts, and other material from the written tradition in order to respond to the most varied user requests and to do so accurately.

UK government announces new cyber bill to strengthen national defences and protect critical infrastructure

The UK government has unveiled plans for a new Cyber Security and Resilience Bill aimed at enhancing the country’s ability to defend against the growing risk of cyber threats. Scheduled to be introduced later this year, the Bill forms a key part of the government’s broader strategy to protect critical national infrastructure (CNI), support economic growth, and ensure the resilience of the UK’s digital landscape.

The forthcoming legislation will focus on bolstering the cyber resilience of essential services—such as healthcare, energy, and IT providers—that underpin the economy and daily life. Around 1,000 vital service providers will be required to meet strengthened cyber security standards under the new rules. These measures are designed to safeguard supply chains and key national functions from increasingly sophisticated cyber attacks affecting both public and private sectors.

In addition, the government is considering extending cyber security regulations to over 200 data centres across the country. These centres are integral to the functioning of modern finance, e-commerce, and digital communication. By improving their security, the government hopes to safeguard services that rely heavily on data, such as online banking, shopping platforms, and social media.

If adopted, the Bill would introduce the following measures:

  • Expanding the scope of the NIS Regulations. The scope of the Network and Information Systems (NIS) Regulations would be broadened to include a wider range of organisations and suppliers. This expansion would bring data centres, Managed Service Providers (MSPs), and other critical suppliers under the regulatory framework, ensuring that more entities are held to high standards of cyber security and resilience.
  • Enhanced regulatory powers. Regulators would be equipped with additional tools to strengthen cyber resilience within the sectors they oversee. This includes new obligations for organisations to report a broader range of significant cyber incidents, enabling faster and more informed responses to emerging threats.
  • Greater flexibility to adapt. The government would gain increased flexibility to update the framework in line with the evolving threat landscape. This means regulations could be swiftly extended to cover new and emerging sectors, ensuring the UK remains agile in the face of dynamic cyber risks.
  • New executive powers for national security. In circumstances where national security is at stake, the government would be granted new executive powers to act decisively in response to serious cyber threats.

Alphawave acquisition eyed by Arm for AI advancements

Arm Holdings, owned by SoftBank, recently considered acquiring UK-based semiconductor IP supplier Alphawave to bolster its artificial intelligence processor technology.

The focus was on Alphawave’s ‘serdes’ (serialiser-deserialiser) technology, essential for rapid data transfer in AI applications requiring interconnected chips.

Despite initial discussions, Arm decided against pursuing the acquisition. Alphawave had been exploring a sale after attracting interest from Arm and other potential buyers.

Alphawave’s joint venture in China, WiseWave, added complexity to the potential deal due to national security concerns raised by US officials.

NHS contractor fined after ransomware attack

The tech firm Advanced, which provides services to the NHS, has been fined over £3 million by the UK data watchdog following a major ransomware attack in 2022.

The breach disrupted NHS systems and exposed personal data from tens of thousands across the country.

Originally facing a £6 million penalty, Advanced saw the fine halved after settling with the Information Commissioner’s Office.

Regulators said the firm failed to implement multi-factor authentication, allowing hackers to access systems using stolen login details.
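
The regulator’s point is that valid credentials alone should never be enough to reach systems holding patient data. As a rough, generic illustration (an RFC 6238-style sketch, not a description of Advanced’s or the NHS’s actual setup), a login check that also demands a time-based one-time code blocks an attacker who holds only stolen usernames and passwords:

```python
# Illustrative sketch only: a minimal time-based one-time password (TOTP) check,
# showing why stolen credentials alone should not be enough to log in.
# Generic RFC 6238-style example; not Advanced's or the NHS's actual system.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared Base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


def login(password_ok: bool, submitted_code: str, secret_b32: str) -> bool:
    """Grant access only if both the password and the second factor are valid."""
    if not password_ok:
        return False
    # Even with a stolen password, an attacker still needs the current TOTP code.
    return hmac.compare_digest(submitted_code, totp(secret_b32))
```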

The LockBit attack caused widespread outages, including the loss of access to UK patient data. While Advanced acknowledged the resolution, it declined to offer further comment or name a spokesperson when contacted.

OnlyFans faces penalty in UK for age check inaccuracy

OnlyFans’ parent company, Fenix, has been fined £1.05 million by UK regulator Ofcom for providing inaccurate information about how it verifies users’ ages. The platform, known for hosting adult content, had claimed its age-checking technology flagged anyone under 23 for additional ID checks.

However, it was later revealed the system was set to flag those under 20, prompting Ofcom to take enforcement action. Ofcom said Fenix failed in its legal obligation to provide accurate details, undermining the regulator’s ability to assess platform safety.
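
To see why the difference between the two settings matters, consider a generic sketch of how a ‘challenge age’ buffer works in facial age estimation: users whose estimated age falls below the buffer are escalated to a documentary ID check. The function and numbers below are assumptions for illustration, not Fenix’s actual system.

```python
# Illustrative sketch only: a "challenge age" threshold in facial age estimation.
# Names and values are generic assumptions, not Fenix's or Ofcom's implementation.
def needs_id_check(estimated_age: float, challenge_age: int) -> bool:
    """Ask for documentary ID whenever the estimated age falls below the buffer."""
    return estimated_age < challenge_age


# With the claimed setting, anyone estimated under 23 is escalated to an ID check.
assert needs_id_check(estimated_age=21.4, challenge_age=23) is True

# With the setting actually in place, the same user would pass unchallenged,
# because the buffer above the legal minimum of 18 is much smaller.
assert needs_id_check(estimated_age=21.4, challenge_age=20) is False
```

A lower challenge age therefore narrows the safety margin that is meant to compensate for the error range of age-estimation technology.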

While Fenix accepted the penalty — leading to a 30% reduction in the fine — Ofcom stressed the importance of holding platforms to high standards, especially when protecting minors online. The investigation began in 2022 under UK regulations that predate the Online Safety Act, which is due to take full effect this year.

Why does it matter?

The act will require stronger age verification measures from platforms like OnlyFans, with a July 2025 deadline for full compliance. OnlyFans responded by affirming its commitment to transparency and welcomed the resolution of the case. While best known for adult subscriptions, the platform hosts mainstream content and launched a non-pornographic streaming service in 2023.

UK AI plans risk delay due to outdated public sector IT

Government plans to embed AI across public services face serious challenges due to outdated IT systems, insufficient funding, and a shortage of skilled workers, MPs have warned.

A report by the Public Accounts Committee (PAC) revealed that over 20 legacy systems still await financial support for upgrades, with nearly a third of central government systems considered obsolete as of 2024.

While the government has outlined an ambitious AI strategy to improve efficiency and stimulate economic growth, including the recruitment of 2,000 new tech apprentices, the PAC report casts doubt on the public sector’s readiness.

The committee highlighted ongoing digital skills shortages, driven partly by uncompetitive pay compared to the private sector, and raised concerns over the lack of transparent systems to track and assess AI-driven decisions.

The PAC urged the Department for Science, Innovation and Technology to set a clear funding plan within six months for the most at-risk systems and warned that failing to act could lead to greater costs down the line.

It also called for stronger leadership, better oversight of AI pilot schemes, and increased public transparency to build trust in how government uses AI.

Mobile coverage from space may soon be reality

Satellite-based mobile coverage could arrive in the UK by the end of 2025, with Ofcom launching a consultation on licensing direct-to-smartphone services.

The move would allow users to stay connected in areas without mast coverage using an ordinary mobile phone.

The proposal favours mobile networks teaming up with satellite operators to share frequencies in unserved regions, offering limited services like text messaging at first, with voice and data to follow.

Ofcom plans strict interference controls, and Vodafone is among those preparing to roll out such technology.

If approved, the service would be available across the UK mainland and surrounding seas, but not yet in places like the Channel Islands.

The public has until May to respond, as Ofcom seeks to modernise mobile access and help close the digital divide.

AI physiotherapy service helps UK patients manage back pain

Lower back pain, one of the world’s leading causes of disability, has left hundreds of thousands of people in the UK stuck on long waiting lists for treatment. To address the crisis, the NHS is trialling a new solution: Flok Health, the first AI-powered physiotherapy clinic approved by the Care Quality Commission.

The app offers patients immediate access to personalised treatment plans through pre-recorded videos driven by artificial intelligence.

Created by former Olympic rower Finn Stevenson and tech expert Ric da Silva, Flok aims to treat straightforward cases that don’t require scans or hands-on intervention.

Patients interact with an AI-powered virtual physio, responding to questions that tailor the treatment pathway, with over a billion potential combinations. Unlike generative AI, Flok uses a more controlled system, eliminating the risk of fabricated medical advice.
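
A rules-based system of this kind can be pictured as a fixed decision tree in which every answer routes the patient towards one of a finite set of pre-approved, pre-recorded clips, so nothing is generated on the fly. The sketch below is purely illustrative; the questions, answers, and clip names are invented and do not reflect Flok Health’s actual clinical logic.

```python
# Illustrative sketch only: a deterministic triage flow in which every possible
# outcome is a pre-approved video clip, so no advice is generated on the fly.
from dataclasses import dataclass


@dataclass
class Node:
    question: str
    # Maps each allowed answer either to another Node or to a pre-recorded clip ID.
    branches: dict


PATHWAY = Node(
    question="Is your pain worse when sitting for long periods?",
    branches={
        "yes": Node(
            question="Does the pain travel below the knee?",
            branches={"yes": "clip_referral_advice", "no": "clip_flexion_programme"},
        ),
        "no": "clip_extension_programme",
    },
)


def select_clip(node: Node, answers: list[str]) -> str:
    """Walk the fixed decision tree; only whitelisted clips can ever be returned."""
    current = node
    for answer in answers:
        current = current.branches[answer]
        if isinstance(current, str):  # reached a pre-approved clip
            return current
    raise ValueError("More answers needed to reach a recommendation")


print(select_clip(PATHWAY, ["yes", "no"]))  # -> clip_flexion_programme
```

Because the tree is fixed and every leaf is an approved clip, the worst failure mode is an unhelpful recommendation rather than fabricated medical advice.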

The service has already launched in Scotland and is expanding across England, with ambitions to cover half the UK within a year. Flok is also adding treatment for conditions like hip and knee osteoarthritis, and women’s pelvic health.

While promising, the system depends on patients correctly following instructions, as the AI cannot monitor physical movements. Real physiotherapists are available to answer questions, but they do not provide live feedback during exercises.

Though effective for some, not all users find AI a perfect fit. Some, like the article’s author, prefer the hands-on guidance and posture corrections of human therapists.

Experts agree AI has potential to make healthcare more accessible and efficient, but caution that these tools must be rigorously evaluated, continuously monitored, and designed to support – not replace – clinical care.

Meta agrees to halt targeted ads in landmark UK privacy case

Meta, the owner of Facebook and Instagram, has agreed to stop targeting a UK citizen with personalised adverts as part of a settlement in a landmark privacy case.

The case, which avoided a high court trial, was brought in 2022 by human rights campaigner Tanya O’Carroll, who claimed Meta had violated UK data laws by processing her personal data for targeted advertising without her consent.

O’Carroll’s case received support from the UK’s data watchdog, the Information Commissioner’s Office (ICO), which stated that users have the right to opt out of targeted ads.

The settlement has been hailed as a victory for O’Carroll, with potential implications for millions of social media users in the UK. Meta, however, disagreed with the claims and said it was instead considering introducing a subscription model in the UK for users who want an advert-free version of its platforms.

The ICO’s stance in favour of privacy rights could prompt similar lawsuits in the future, as users are increasingly demanding control over how their data is used online.

O’Carroll argued that the case demonstrated the growing desire for more control over surveillance advertising and said that the ICO’s support could encourage more people to object to targeted ads.

Meta, which generates most of its revenue from advertising, emphasised that it took its privacy obligations seriously and was exploring the option of a paid, ad-free service for UK users.

For more information on these topics, visit diplomacy.edu.