M&S halts meal deals amid ongoing cyber attack disruption

Marks & Spencer has temporarily suspended some of its popular meal deal offers as the retailer continues to grapple with the fallout from a serious cyber attack.

Signs in stores, including at major transport hubs such as Victoria Station, explain that availability issues have made it impossible to fulfil certain promotions, and ask customers for patience while the company works through the disruption.

M&S has been unable to offer its usual lunchtime combinations and dine-in meal deals, priced between £6 and £15, because of stock shortfalls caused by the hack, which is now in its third week.

The attack is reportedly linked to a group of teenage hackers using ransomware tactics: locking computer systems and demanding payment for their release.

The breach has already caused significant operational challenges, with fears internally that the disruption could drag on for weeks. Sources suggest the financial impact could run into tens of millions of pounds in lost orders, as systems remain frozen and supply chains struggle to recover.

Meal deal suspensions are the latest sign of the broader strain the retailer is under as it scrambles to restore normal service.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers target UK retailers with fake IT calls

British retailers are facing a new wave of cyberattacks as hackers impersonate IT help desk staff to infiltrate company systems. The National Cyber Security Centre (NCSC) has issued an urgent warning following breaches at major firms including Marks & Spencer, Co-op, and Harrods.

Attackers use sophisticated social engineering tactics—posing as locked-out employees or IT support staff—to trick individuals into giving up passwords and security details. The NCSC urges companies to strengthen how their IT help desks verify employee identities, particularly when handling password resets for senior staff.

Security experts in the UK recommend using multi-step verification methods and even code words to confirm identities over the phone. These additional layers are vital, as attackers increasingly exploit trust and human error rather than technical vulnerabilities.

While the NCSC hasn’t named any group officially, the style of attack closely resembles the methods of Scattered Spider, a loosely connected network of young, English-speaking hackers. Known for high-profile cyber incidents—including attacks on Las Vegas casinos and public transport systems—the group often coordinates via platforms like Discord and Telegram.

However, those claiming responsibility for the latest breaches deny links to Scattered Spider, calling themselves ‘DragonForce.’ Speaking to the BBC, the group claimed to have stolen significant customer and employee data from Co-op and hinted at more disruptions in the future.

The NCSC is investigating with law enforcement to determine whether DragonForce is a new player or simply a rebranded identity of the same well-known threat actors.


How digital twins are being weaponised in crypto scams

Digital twins are virtual models of real-world objects, systems, or processes. They enable real-time simulations, monitoring, and predictions, helping industries like healthcare and manufacturing optimise resources. In the crypto world, cybercriminals have found a way to exploit this technology for fraudulent activities.

Scammers create synthetic identities by gathering personal data from various sources. These digital twins are used to impersonate influencers or executives, promoting fake investment schemes or stealing funds. The unregulated nature of crypto platforms makes it easier for criminals to exploit users.

Real-world scams are already happening. Deepfake CEO videos have tricked executives into transferring funds under false pretences. Counterfeit crypto platforms have also stolen sensitive information from users. These scams highlight the risks of AI-powered digital twins in the crypto space.

Blockchain offers potential defences against these frauds. Decentralised identifiers (DIDs) and NFT-based identity markers can help verify that interactions involve the genuine person. Blockchain’s immutable audit trails and smart contracts can also help secure transactions and protect users from digital twin scams.


Cyber incident disrupts services at Marks & Spencer

Marks & Spencer has confirmed that a cyberattack has disrupted food availability in some stores and forced the temporary shutdown of online services. The company has not officially confirmed the nature of the breach, but cybersecurity experts suspect a ransomware attack.

The retailer paused clothing and home orders on its website and app after issues arose over the Easter weekend, affecting contactless payments and click-and-collect systems. M&S said it took some systems offline as a precautionary measure.

Reports have linked the incident to the hacking group Scattered Spider, although M&S has declined to comment further or provide a timeline for the resumption of online orders. The disruption has already led to minor product shortages, and analysts anticipate a short-term hit to profits.

Still, M&S’s food division had been performing strongly, with grocery spending rising 14.4% year-on-year, according to Kantar. The retailer, which operates around 1,000 UK stores, earns about one-third of its non-food sales online. Shares dropped earlier in the week but closed Tuesday slightly up.


France accuses Russia of cyberattacks on Olympic and election targets

France has publicly accused Russia’s military intelligence agency of launching cyberattacks against key French institutions, including the 2017 presidential campaign of Emmanuel Macron and organisations tied to the Paris 2024 Olympics.

The allegations were presented by Foreign Minister Jean-Noël Barrot at the UN Security Council, where he condemned the attacks as violations of international norms. French authorities linked the operations to APT28, a well-known Russian hacking group connected to the GRU.

The group also allegedly orchestrated the 2015 cyberattack on TV5Monde and attempted to manipulate voters during the 2017 French election by leaking thousands of campaign documents. A rise in attacks has been noted ahead of major events such as the Olympics and upcoming elections.

France’s national cybersecurity agency recorded a 15% increase in Russia-linked attacks in 2024, targeting ministries, defence firms, and cultural venues. French officials warn the hacks aim to destabilise society and erode public trust.

France plans closer cooperation with Poland and has pledged to counter Russia’s cyber operations with all available means.


UK refuses to include Online Safety Act in US trade talks

The UK government has ruled out watering down the Online Safety Act as part of any trade negotiations with the US, despite pressure from American tech giants.

Speaking to MPs on the Science, Innovation and Technology Committee, Baroness Jones of Whitchurch, the parliamentary under-secretary for online safety, stated unequivocally that the legislation was ‘not up for negotiation’.

‘There have been clear instructions from the Prime Minister,’ she said. ‘The Online Safety Act is not part of the trade deal discussions. It’s a piece of legislation — it can’t just be negotiated away.’

Reports had suggested that President Donald Trump’s administration might seek to make loosening the UK’s online safety rules a condition of a post-Brexit trade agreement, following lobbying from large US-based technology firms.

However, Baroness Jones said the legislation was well into its implementation phase and that ministers were ‘happy to reassure everybody’ that the government is sticking to it.

The Online Safety Act will require tech platforms that host user-generated content, such as social media firms, to take active steps to protect users — especially children — from harmful and illegal content.

Non-compliant companies may face fines of up to £18 million or 10% of global turnover, whichever is greater. In extreme cases, platforms could be blocked from operating in the UK.

Mark Bunting, a representative of Ofcom, which is overseeing enforcement of the new rules, said the regulator would have taken action had the legislation been in force during last summer’s riots in Southport, which were exacerbated by online misinformation.

His comments contrasted with tech firms including Meta, TikTok and X, which claimed in earlier hearings that little would have changed under the new rules.


OpenAI’s CEO Altman confirms rollback of GPT-4o after criticism

OpenAI has reversed a recent update to its GPT-4o model after users complained it had become overly flattering and blindly agreeable. The behaviour, widely mocked online, saw ChatGPT praising dangerous or clearly misguided user ideas, leading to concerns over the model’s reliability and integrity.

The change had been part of a broader attempt to make GPT-4o’s default personality feel more ‘intuitive and effective’. However, OpenAI admitted the update relied too heavily on short-term user feedback and failed to consider how interactions evolve over time.

In a blog post published Tuesday, OpenAI said the model began producing responses that were ‘overly supportive but disingenuous’. The company acknowledged that sycophantic interactions could feel ‘uncomfortable, unsettling, and cause distress’.

Following CEO Sam Altman’s weekend announcement of an impending rollback, OpenAI confirmed that the previous, more balanced version of GPT-4o had been reinstated.

It also outlined steps to avoid similar problems in future, including refining model training, revising system prompts, and expanding safety guardrails to improve honesty and transparency.

Further changes in development include real-time feedback mechanisms and allowing users to choose between multiple ChatGPT personalities. OpenAI says it aims to incorporate more diverse cultural perspectives and give users greater control over the assistant’s behaviour.


4chan returns after major cyberattack

After suffering what it called a ‘catastrophic’ cyberattack earlier this month, controversial image board 4chan has returned online, admitting its systems were breached through outdated software.

The attacker, reportedly using a UK-based IP address, gained entry by uploading a malicious PDF, allowing access to 4chan’s database and administrative dashboard. The intruder exfiltrated source code and sensitive data before vandalising the site, which led to its temporary shutdown on 14 April.

Although 4chan avoided directly naming the software vulnerability, it indirectly confirmed suspicions that a severely outdated backend—possibly an old version of PHP—was at fault. The site confessed that slow progress in updating its infrastructure resulted from a chronic lack of funds and technical support.

It blamed years of financial instability on advertisers, payment processors, and providers pulling away under external pressure, leaving it dependent on second-hand hardware and a stretched, largely volunteer development team.

Despite purchasing new servers in mid-2024, the transition was slow and incomplete, meaning key services still ran on legacy equipment when the breach occurred. Following the attack, 4chan replaced the compromised server and implemented necessary software updates.

PDF uploads have been suspended, and the Flash board has been permanently closed because of the difficulty of preventing similar exploits through .swf files.

Now relying on volunteer tech workers to support its recovery efforts, the site insists it won’t be shut down. ‘4chan is back,’ it declared, claiming no other site could replace its unique community, despite long-standing criticism over its content and lax moderation.


FBI warns users not to click on suspicious messages

Cybersecurity experts are raising fresh alarms following an FBI warning that clicking on a single link could lead to disaster.

With cyberattacks becoming more sophisticated, hackers now need just 60 seconds to compromise a victim’s device after launching an attack.

Techniques range from impersonating trusted brands like Google to deploying advanced malware and using AI tools to scale attacks even further.

The FBI has revealed that internet crimes caused $16 billion in losses during 2024 alone, with more than 850,000 complaints recorded.

Criminals exploit emotional triggers like fear and urgency in phishing emails, often sent from what appear to be genuine business accounts. A single click could expose sensitive data, install malware automatically, or hand attackers access to personal accounts by stealing browser session cookies.

To make matters worse, many attacks now originate from smartphone farms targeting both Android and iPhone users. Given the evolving threat landscape, the FBI has urged everyone to be extremely cautious.

The agency’s key advice is clear: do not click on anything received via unsolicited emails or text messages, no matter how legitimate it might appear.

Remaining vigilant, avoiding interaction with suspicious messages, and reporting any potential threats are critical steps in combating the growing tide of cybercrime.


Deepfake victims gain new rights with House-approved bill

The US House of Representatives has passed the ‘Take It Down’ Act with overwhelming bipartisan support, aiming to protect Americans from the spread of deepfake and revenge pornography.

The bill, approved by a 409-2 vote, criminalises the distribution of non-consensual intimate imagery—including AI-generated content—and now heads to President Donald Trump for his signature.

First Lady Melania Trump, who returned to public advocacy earlier this year, played a key role in supporting the legislation. She lobbied lawmakers last month and celebrated the bill’s passage, saying she was honoured to help guide it through Congress.

The White House confirmed she will attend the signing ceremony.

The law requires social media platforms and similar websites to remove such content upon a victim’s request, rather than allowing it to remain online unchecked.

Victims of deepfake pornography have included both public figures such as Taylor Swift and Alexandria Ocasio-Cortez, and private individuals like high school students.

Introduced by Republican Senator Ted Cruz and backed by Democratic lawmakers including Amy Klobuchar and Madeleine Dean, the bill reflects growing concern across party lines about online abuse.

Melania Trump, echoing her earlier ‘Be Best’ initiative, stressed the need to ensure young people—especially girls—can navigate the internet safely instead of being left vulnerable to digital exploitation.
