M&S resumes online orders after cyberattack

Marks & Spencer has resumed online clothing orders following a 46-day pause triggered by a cyberattack. The retailer restarted standard home delivery across England, Scotland and Wales, focusing initially on best-selling and new items instead of the full range.

A spokesperson stated that additional products will be added daily, enabling customers to gradually access a wider selection. Services such as click and collect, next-day delivery, and international orders are expected to be reintroduced in the coming weeks, while deliveries to Northern Ireland will resume soon.

The disruption began on 25 April when M&S halted clothing and home orders after issues with contactless payments and app services during the Easter weekend. The company revealed that the breach was caused by hackers who deceived staff at a third-party contractor, bypassing security defences.

M&S had warned that the incident could reduce its 2025/26 operating profit by around £300 million, though it aims to limit losses through insurance and internal cost measures. Shares rose 3 per cent as online orders resumed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit targets AI firm over scraped sports posts

Reddit has taken legal action against AI company Anthropic, accusing it of scraping content from the platform’s sports-focused communities.

The lawsuit claims Anthropic violated Reddit’s user agreement by collecting posts without permission, particularly from fan-driven discussions that are central to how sports content is shared online.

Reddit argues the scraping undermines its obligations to over 100 million daily users, especially around privacy and user control. According to the filing, Anthropic’s actions override assurances that users can manage or delete their content as they see fit.

The platform emphasises that users gain no benefit from technology built using their contributions.

These online sports communities are rich sources of original fan commentary and analysis. At scale, such content could enable AI models to imitate the voice and behaviour of sports fans with striking accuracy.

While teams or platforms might use such models to enhance engagement or communication, Reddit warns that unauthorised use brings serious ethical and legal risks.

The case could influence how AI companies handle user-generated content across the internet, not just in sports. As web scraping grows more common, the outcome of the dispute may shape future standards for AI training practices and online content rights.

Cybersecurity alarm after 184 million credentials exposed

A vast unprotected database containing over 184 million credentials from major platforms and sectors has highlighted severe weaknesses in data security worldwide.

The leaked credentials, harvested by infostealer malware and stored in plain text, pose significant risks to consumers and businesses, underscoring an urgent need for stronger cybersecurity and better data governance.

Cybersecurity researcher Jeremiah Fowler discovered the 47 GB database exposing emails, passwords, and authorisation URLs from tech giants like Google, Microsoft, Apple, Facebook, and Snapchat, as well as banking, healthcare, and government accounts.

The data was left accessible without any encryption or authentication, making it vulnerable to anyone with the link.

The credentials were reportedly collected by infostealer malware such as Lumma Stealer, which silently steals sensitive information from infected devices. The stolen data fuels a thriving underground economy involving identity theft, fraud, and ransomware.

The breach’s scope extends beyond tech, affecting critical infrastructure like healthcare and government services, raising concerns over personal privacy and national security. With recurring data breaches becoming the norm, industries must urgently reinforce security measures.

Chief Data Officers and IT risk leaders face mounting pressure as regulatory scrutiny intensifies. The leak highlights the need for proactive data stewardship through encryption, access controls, and real-time threat detection.

Many organisations struggle with legacy systems, decentralised data, and cloud adoption, complicating governance efforts.

Enterprise leaders must treat data as both a strategic asset and a potential liability, embedding cybersecurity into business processes and supply chains. Beyond technology, cultivating a culture of accountability and vigilance is essential to prevent costly breaches and protect brand trust.

The massive leak signals a new era in data governance where transparency and relentless improvement are critical. The message is clear: there is no room for complacency in safeguarding the digital world’s most valuable assets.

China proposes rare earth export relief for EU

China has proposed creating a ‘green channel’ for rare earth exports to the EU, aiming to ease the impact of its recent restrictions. These materials, vital to electric vehicles and household appliances, have been under stricter export controls since April.

During recent talks, Trade Commissioner Maroš Šefčovič warned Chinese officials that the curbs had caused major disruptions across Europe, describing the situation as alarming. While some progress in licence approvals has been noted, businesses argue it remains inadequate.

The talks come as both sides prepare for a high-stakes EU-China summit and continue negotiations over tariffs on Chinese electric vehicles.

Brussels has imposed duties of up to 35.3%, citing unfair subsidies, while Beijing is pushing for a deal involving minimum pricing to avoid the tariffs.

China’s commerce ministry confirmed the discussions are in their final stage but acknowledged that more work is needed to reach a resolution.

UAE AI megaproject faces US chip export concerns

Plans for a vast AI data hub in the UAE have raised security concerns in Washington due to the country’s close ties with China.

The $100 billion Stargate UAE campus aims to deploy advanced US chips, but US officials are scrutinising the risk of technology leakage.

Although the Trump administration supports the project, bipartisan fears remain about whether the UAE can safeguard US-developed AI and chips from foreign adversaries.

A final agreement has not been reached as both sides negotiate export conditions, with possible restrictions on Nvidia’s hardware.

The initial phase of the Stargate project will activate 200 megawatts of capacity by 2026, but the deal’s future may depend on the UAE’s willingness to accept strict US oversight.

Talks over potential amendments continue, delaying approval of what could become a $500 billion venture.

UK judges issue warning on unchecked AI use by lawyers

A senior UK judge has warned that lawyers may face prosecution if they continue citing fake legal cases generated by AI without verifying their accuracy.

High Court justice Victoria Sharp called the misuse of AI a threat to justice and public trust, after lawyers in two recent cases relied on false material created by generative tools.

In one £90 million lawsuit involving Qatar National Bank, a lawyer submitted 18 cases that did not exist. The client later admitted to supplying the false information, but Justice Sharp criticised the lawyer for depending on the client’s research instead of conducting proper legal checks.

In another case, five fabricated cases were used in a housing claim against the London Borough of Haringey. The barrister denied using AI but failed to provide a clear explanation.

Both incidents have been referred to professional regulators. Sharp warned that submitting false information could amount to contempt of court or, in severe cases, perverting the course of justice — an offence that can lead to life imprisonment.

While recognising AI as a useful legal tool, Sharp stressed the need for oversight and regulation. She said AI’s risks must be managed with professional discipline if public confidence in the legal system is to be preserved.

UK teams with tech giants on AI training

The UK government is launching a nationwide AI skills initiative aimed at both workers and schoolchildren, with Prime Minister Keir Starmer announcing partnerships with major tech companies including Google, Microsoft and Amazon.

The £187 million TechFirst programme will provide AI education to one million secondary students and train 7.5 million workers over the next five years.

Rather than keeping such tools limited to specialists, the government plans to make AI training accessible across classrooms and businesses. Companies involved will make learning materials freely available to boost digital skills and productivity, particularly in using chatbots and large language models.

Starmer said the scheme is designed to empower the next generation to shape AI’s future instead of being shaped by it. He called it the start of a new era of opportunity and growth, as the UK aims to strengthen its global leadership in AI.

The initiative arrives as the country’s AI sector, currently worth £72 billion, is projected to grow to more than £800 billion by 2035.

The government also signed two agreements with NVIDIA to support a nationwide AI talent pipeline, reinforcing efforts to expand both the workforce and innovation in the sector.

Nvidia and FCA open AI sandbox for UK fintechs

Financial firms across the UK will soon be able to experiment with AI in a new regulatory sandbox, launched by the Financial Conduct Authority (FCA) in partnership with Nvidia.

Known as the Supercharged Sandbox, it offers a secure testing ground for firms wanting to explore AI tools without needing their own advanced computing resources.

Set to begin in October, the initiative is open to any financial services company testing AI-driven ideas. Firms will have access to Nvidia’s accelerated computing platform and tailored AI software, helping them work with complex data, improve automation, and enhance risk management in a controlled setting.

The FCA said the sandbox is designed to support firms lacking the in-house capacity to test new technology.

It aims to provide not only computing power but also regulatory guidance and access to better datasets, creating an environment where innovation can flourish while remaining compliant with rules.

The move forms part of a wider push by the UK government to foster economic growth through innovation. Finance minister Rachel Reeves has urged regulators to clear away obstacles to growth and praised the FCA and Bank of England for acting on her call to cut red tape.

Elon Musk’s X tightens control on AI data use

Social media platform X has updated its developer agreement to prohibit the use of its content for training large language models.

The new clause, added under the restrictions section, forbids any attempt to use X’s API or content to fine-tune or train foundation or frontier AI models.

The move follows Elon Musk’s acquisition of X through his AI company xAI, which is developing its own models.

By restricting external access, the company aims to prevent competitors from freely using X’s data while maintaining control over a valuable resource for training AI systems.

X joins a growing list of platforms, including Reddit and The Browser Company, that have introduced terms blocking unauthorised AI training.

The shift reflects a broader industry trend towards limiting open data access amid the rising value of proprietary content in the AI arms race.

FBI warns BADBOX 2.0 malware is infecting millions

The FBI has issued a warning about the resurgence of BADBOX 2.0, a dangerous form of malware infecting millions of consumer electronics globally.

Often preloaded onto low-cost smart TVs, streaming boxes, and other IoT devices manufactured mainly in China, the malware grants cybercriminals backdoor access, enabling theft, surveillance, and fraud while remaining essentially undetectable.

BADBOX 2.0 forms part of a massive botnet and can also infect devices through malicious apps and drive-by downloads, especially from unofficial Android stores.

Once activated, the malware enables a range of attacks, including click fraud, fake account creation, DDoS attacks, and the theft of one-time passwords and personal data.

Removing the malware is extremely difficult, as it typically requires flashing new firmware, an option unavailable for most of the affected devices.

Users are urged to check their hardware against a published list of compromised models and to avoid sideloading apps or purchasing unverified connected tech.
