UK judges issue warning on unchecked AI use by lawyers

A senior UK judge has warned that lawyers may face prosecution if they continue citing fake legal cases generated by AI without verifying their accuracy.

High Court judge Victoria Sharp called the misuse of AI a threat to justice and public trust, after lawyers in two recent cases relied on false material created by generative tools.

In one £90 million lawsuit involving Qatar National Bank, a lawyer submitted 18 cases that did not exist. The client later admitted to supplying the false information, but Justice Sharp criticised the lawyer for depending on the client’s research instead of conducting proper legal checks.

In another case, five fabricated cases were used in a housing claim against the London Borough of Haringey. The barrister denied using AI but failed to provide a clear explanation.

Both incidents have been referred to professional regulators. Sharp warned that submitting false information could amount to contempt of court or, in severe cases, perverting the course of justice — an offence that can lead to life imprisonment.

While recognising AI as a useful legal tool, Sharp stressed the need for oversight and regulation. She said AI’s risks must be managed with professional discipline if public confidence in the legal system is to be preserved.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK teams with tech giants on AI training

The UK government is launching a nationwide AI skills initiative aimed at both workers and schoolchildren, with Prime Minister Keir Starmer announcing partnerships with major tech companies including Google, Microsoft and Amazon.

The £187 million TechFirst programme will provide AI education to one million secondary students and train 7.5 million workers over the next five years.

Rather than keeping such tools limited to specialists, the government plans to make AI training accessible across classrooms and businesses. Companies involved will make learning materials freely available to boost digital skills and productivity, particularly in using chatbots and large language models.

Starmer said the scheme is designed to empower the next generation to shape AI’s future instead of being shaped by it. He called it the start of a new era of opportunity and growth, as the UK aims to strengthen its global leadership in AI.

The initiative arrives as the country’s AI sector, currently worth £72 billion, is projected to grow to more than £800 billion by 2035.

The government also signed two agreements with NVIDIA to support a nationwide AI talent pipeline, reinforcing efforts to expand both the workforce and innovation in the sector.

Nvidia and FCA open AI sandbox for UK fintechs

Financial firms across the UK will soon be able to experiment with AI in a new regulatory sandbox, launched by the Financial Conduct Authority (FCA) in partnership with Nvidia.

Known as the Supercharged Sandbox, it offers a secure testing ground for firms that want to explore AI tools but lack advanced computing resources of their own.

Set to begin in October, the initiative is open to any financial services company testing AI-driven ideas. Firms will have access to Nvidia’s accelerated computing platform and tailored AI software, helping them work with complex data, improve automation, and enhance risk management in a controlled setting.

The FCA said the sandbox is designed to support firms lacking the in-house capacity to test new technology.

It aims to provide not only computing power but also regulatory guidance and access to better datasets, creating an environment where innovation can flourish while remaining compliant with rules.

The move forms part of a wider push by the UK government to foster economic growth through innovation. Finance minister Rachel Reeves has urged regulators to clear away obstacles to growth and praised the FCA and Bank of England for acting on her call to cut red tape.

Odyssey presents immersive AI-powered streaming

Odyssey, a startup founded by self-driving car veterans Oliver Cameron and Jeff Hawke, has unveiled an AI model that allows users to interact with streaming video in real time.

The technology generates a new video frame every 40 milliseconds (25 frames per second), enabling users to move through scenes as in a 3D video game instead of passively watching. A demo is currently available online, though it is still in its early stages.

The system relies on a new kind of ‘world model’ that predicts future visual states based on previous actions and environments. Odyssey claims its model can maintain spatial consistency, learn motion from video, and sustain coherent video output for five minutes or more.

Unlike models trained solely on internet data, Odyssey captures real-world environments using a custom 360-degree, backpack-mounted camera to build higher-fidelity simulations.

Tech giants and AI startups are exploring world models to power next-generation simulations and interactive media. Yet creative professionals remain wary. A 2024 study commissioned by the Animation Guild predicted significant job disruptions across film and animation.

Game studios like Activision Blizzard have been scrutinised for using AI while cutting staff.

Odyssey, however, insists its goal is collaboration instead of replacement. The company is also developing software to let creators edit scenes using tools like Unreal Engine and Blender.

Backed by $27 million in funding and supported by Pixar co-founder Ed Catmull, Odyssey aims to transform video content across entertainment, education, and advertising through on-demand interactivity.

ChatGPT adds meeting recording and cloud access

OpenAI has launched new features for ChatGPT that allow it to record meetings, transcribe conversations, and pull information directly from cloud platforms like Google Drive and SharePoint.

Instead of relying on typed input alone, users can now speak to ChatGPT, which records audio, creates editable summaries, and helps generate follow-up content such as emails or project outlines.

‘Record’ is currently available to Team users via the macOS app and will soon expand to Enterprise and Edu accounts.

The recording tool automatically deletes the audio after transcription and applies existing workspace data rules, ensuring recordings are not used for training.

Instead of leaving notes scattered across different platforms, users gain a structured and searchable history of conversations, voice notes, or brainstorming sessions, which ChatGPT can recall and apply during future interactions.

At the same time, OpenAI has introduced new connectors for business users that let ChatGPT access files from cloud services like Dropbox, OneDrive, Box, and others.

These connectors allow ChatGPT to search and summarise information from internal documents, rather than depending only on web search or user uploads. The update also includes beta support for Deep Research agents that can work with tools like GitHub and HubSpot.

OpenAI has embraced the Model Context Protocol, an open standard allowing organisations to build their own custom connectors for proprietary tools.

Rather than serving purely as a general-purpose chatbot, ChatGPT is evolving into a workplace assistant capable of tapping into and understanding a company’s complete knowledge base.

M&S CEO targeted by hackers in abusive ransom email

Marks & Spencer has been directly targeted by a ransomware group calling itself DragonForce, which sent a vulgar and abusive ransom email to CEO Stuart Machin using a compromised employee email address.

The message, laced with offensive language and racist terms, demanded that Machin engage via a darknet portal to negotiate payment. It also claimed that the hackers had encrypted the company’s servers and stolen customer data, a claim M&S eventually acknowledged weeks later.

The email, dated 23 April, appears to have been sent from the account of an Indian IT worker employed by Tata Consultancy Services (TCS), a long-standing M&S tech partner.

TCS has denied involvement and stated that its systems were not the source of the breach. M&S has remained silent publicly, neither confirming the full scope of the attack nor disclosing whether a ransom was paid.

The cyber attack has caused major disruption, costing M&S an estimated £300 million and halting online orders for over six weeks.

DragonForce has also claimed responsibility for a simultaneous attack on the Co-op, which left some shelves empty for days. While nothing has yet appeared on DragonForce’s leak site, the group claims it will publish stolen information soon.

Investigators believe DragonForce operates as a ransomware-as-a-service collective, offering tools and platforms to cybercriminals in exchange for a 20% share of any ransom.

Some experts suspect the real perpetrators may be young hackers from the West, linked to a loosely organised online community called Scattered Spider. The UK’s National Crime Agency has confirmed it is focusing on the group as part of its inquiry into the recent retail hacks.

Reddit accuses Anthropic of misusing user content

Reddit has taken legal action against AI startup Anthropic, alleging that the company scraped its platform without permission and used the data to train and commercialise its Claude AI models.

The lawsuit, filed in San Francisco Superior Court, accuses Anthropic of breach of contract, unjust enrichment, and interference with Reddit’s operations.

According to Reddit, Anthropic accessed the platform more than 100,000 times despite publicly claiming to have stopped doing so.

The complaint claims Anthropic ignored Reddit’s technical safeguards, such as robots.txt files, and bypassed the platform’s user agreement to extract large volumes of user-generated content.
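
The robots.txt safeguard Reddit cites is advisory: it only works if a crawler consults it before fetching. Here is a minimal Python sketch, using the standard library’s urllib.robotparser with a hypothetical robots.txt and bot name (not Reddit’s actual file or Anthropic’s actual crawler), of how a compliant scraper would honour such a file:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt of the kind a site might serve to bar
# all unnamed crawlers. (Illustrative content only.)
ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks the parser before each request.
allowed = parser.can_fetch("ExampleBot", "https://example.com/r/somepost")
print(allowed)  # a blanket Disallow means this is False
```

The complaint’s point is that this check is purely voluntary: nothing technically prevents a crawler from skipping it.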

Reddit argues that Anthropic’s actions undermine its licensing deals with companies like OpenAI and Google, which have agreed to strict content usage and deletion protocols.

The filing asserts that Anthropic intentionally used personal data from Reddit without ever seeking user consent, calling the company’s conduct deceptive. Despite public statements suggesting respect for privacy and web-scraping limitations, Anthropic is portrayed as having disregarded both.

The lawsuit even cites Anthropic’s own 2021 research that acknowledged Reddit content as useful in training AI models.

Reddit is now seeking damages, repayment of profits, and a court order to stop Anthropic from using its data further. The market responded positively, with Reddit’s shares closing nearly 7% higher at $118.21, indicating investor support for the company’s aggressive stance on data protection.

Google email will reply by using your voice

Google is building a next-generation email system that uses generative AI to reply to mundane messages in your own voice, according to DeepMind CEO Demis Hassabis.

Speaking at SXSW London, Hassabis said the system would handle everyday emails instead of requiring users to write repetitive responses themselves.

Hassabis called email ‘the thing I really want to get rid of,’ and joked he’d pay thousands each month for that luxury. He emphasised that while AI could help cure diseases or combat climate change, it should also solve smaller daily annoyances first—like managing inbox overload.

The upcoming feature aims to identify routine emails and draft replies that reflect the user’s writing style, potentially making decisions on simpler matters.

While details are still limited, the project remains under development and could debut as part of Google’s premium AI subscription model before reaching free-tier users.

Gmail already includes generative tools that adjust message tone, but the new system goes further—automating replies instead of just suggesting edits.

Hassabis also envisioned a universal AI assistant that protects users’ attention and supports digital well-being, offering personalised recommendations and taking care of routine digital tasks.

Salt Typhoon and Silk Typhoon reveal weaknesses

Recent revelations about Salt Typhoon and Silk Typhoon have exposed severe weaknesses in how organisations secure their networks.

These state-affiliated hacking groups have demonstrated that modern cyber threats come from well-resourced and coordinated actors instead of isolated individuals.

Salt Typhoon, responsible for one of the largest cyber intrusions into US infrastructure, exploited cloud network vulnerabilities targeting telecom giants like AT&T and Verizon, forcing companies to reassess their reliance on traditional private circuits.

Many firms continue to believe private circuits offer better protection simply because they are off the public internet. Some even add MACsec encryption for extra defence. However, MACsec’s ‘hop-by-hop’ design introduces new risks—data is repeatedly decrypted and re-encrypted at each routing point.

Every one of these hops becomes a possible target for attackers, who can intercept, manipulate, or exfiltrate data without detection, especially when third-party infrastructure is involved.

Beyond its security limitations, MACsec presents high operational complexity and cost, making it unsuitable for today’s cloud-first environments. In contrast, solutions like Internet Protocol Security (IPSec) offer simpler, end-to-end encryption.

Although not perfect in cloud settings, IPSec can be enhanced through parallel connections or expert guidance. The Cybersecurity and Infrastructure Security Agency (CISA) urges organisations to prioritise complete encryption of all data in transit, regardless of the underlying network.
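
The contrast the article draws can be made concrete with a toy model (illustrative only, not an implementation of MACsec or IPSec): under hop-by-hop encryption every intermediate device briefly handles plaintext, while under end-to-end encryption none do.

```python
# Toy comparison of where data is exposed in plaintext along a network
# path. Hop names are hypothetical; this models the article's point,
# not the actual protocols.

path = ["sender", "carrier-switch-1", "carrier-switch-2", "receiver"]

# MACsec-style hop-by-hop: each link is secured separately, so every
# intermediate device decrypts and re-encrypts the traffic.
hop_by_hop_plaintext = path[1:-1]   # every relay sees plaintext

# IPSec-style end-to-end: a single security association between the
# endpoints; relays only ever forward ciphertext.
end_to_end_plaintext = []           # no relay sees plaintext

print(hop_by_hop_plaintext)  # ['carrier-switch-1', 'carrier-switch-2']
print(end_to_end_plaintext)  # []
```

Each entry in the first list is one of the interception points the article warns about; lengthening the path adds exposure under the hop-by-hop model but not under the end-to-end one.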

Silk Typhoon has further amplified concerns by exploiting privileged credentials and cloud APIs to infiltrate both on-premises and cloud systems. These actors use covert networks to maintain long-term access while remaining hidden.

As threats evolve, companies must adopt Zero Trust principles, strengthen identity controls, and closely monitor their cloud environments instead of relying on outdated security models.

Collaborating with cloud security experts can help shut down exposure risks and protect sensitive data from sophisticated and persistent threats.

HMRC targeted in £47 million UK fraud

A phishing scheme run by organised crime groups cost the UK government £47 million, according to officials from His Majesty’s Revenue and Customs.

Criminals posed as taxpayers to claim payments using fake or hijacked credentials. Rather than a cyberattack, the operation relied on impersonation and did not involve the theft of taxpayer data.

Angela MacDonald, HMRC’s deputy chief executive, confirmed to Parliament’s Treasury Committee that the fraud took place in 2024. The funds were stolen through three separate payments, though HMRC managed to block a further £1.9 million attempt.

Officials began a cross-border criminal investigation soon after discovering the scam, which has led to arrests.

Around 100,000 PAYE accounts — typically used by employers for employee tax and national insurance payments — were either created fraudulently or accessed illegally.

Banks were also targeted through the use of HMRC-linked identity information. Customers first flagged the issue when they noticed unusual activity.

HMRC has shut down the fake accounts and removed false data as part of its response. John-Paul Marks, HMRC’s chief executive, assured the committee that the incident is now under control and contained. ‘That is a lot of money and unacceptable,’ MacDonald told MPs.
