Barcelona court investigates football stars in NFT scam

A Barcelona court has launched a criminal probe into Shirtum Europa SLU, a crypto firm accused of defrauding investors of $3.4 million through a failed NFT scheme. Several elite footballers, including World Cup winners and former Barcelona stars, are named in the case after promoting the venture.

The NFTs, tied to footballer image rights and sold via the $SHI token, were marketed as exclusive collectables but were never tradable or backed by a functioning platform.

Founders allegedly used a complex corporate structure to evade taxes and siphon funds, with footballers acting as public faces to boost credibility.

The footballers named are ‘Papu’ Gómez, Lucas Ocampos, Ivan Rakitić, Javier Saviola, Nico Pareja, and Alberto Moreno. Reports suggest that Gómez, who presented himself as a company founder, recruited the others; all promotional material has since been removed from social media.

The scandal exposes weaknesses in Spanish football’s crypto partnerships: a ban on gambling advertising left a sponsorship gap that crypto firms rushed to fill. Many clubs now face unpaid sponsorship fees, and experts warn that endorsements by big-name players can mislead investors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Massive leak exposes data of millions in China

Cybersecurity researchers have uncovered a brief but significant leak of over 600 gigabytes of data, exposing information on millions of Chinese citizens.

The haul, containing WeChat, Alipay, banking, and residential records, appears to come from a centralised system, suggesting large-scale surveillance rather than a random data breach.

According to research by Cybernews and cybersecurity consultant Bob Diachenko, the data was likely used to build detailed behavioural, social and economic profiles of individuals.

They warned that the information could be exploited for phishing, fraud, blackmail or even disinformation campaigns. Although only 16 datasets were reviewed before the database vanished, the sample pointed to a highly organised and purposeful collection effort.

The source of the leak remains unknown, but the scale and nature of the data suggest it may involve government-linked or state-backed entities rather than lone hackers.

The exposed information could allow malicious actors to track residence locations, financial activity and personal identifiers, placing millions of people at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital Social Security cards coming this summer

The US Social Security Administration is launching digital access to Social Security numbers in the summer of 2025 through its ‘My Social Security’ portal. The initiative aims to improve convenience, reduce physical card replacement delays, and protect against identity theft.

The digital rollout responds to outdated paper cards, rising fraud risks, and growing demand for remote access to US government services.

While the initiative promises faster and more secure access, experts urge users to bolster account protection with strong passwords and two-factor authentication, and to avoid public Wi-Fi when accessing sensitive data. Cybersecurity experts also recommend VPNs, antivirus software, and identity monitoring services to guard against phishing scams and data breaches.

Users should regularly check their credit reports and SSA records and consider requesting an IRS Identity Protection PIN to prevent tax-related fraud. The SSA says the move will make Social Security more efficient without compromising safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK regulator probes 4chan over online safety rules

The UK communications regulator Ofcom has launched an investigation into the controversial message board 4chan for potentially breaching new online safety laws. Under the Online Safety Act, platforms must assess and manage risks related to illegal content affecting UK users.

Ofcom stated that it requested 4chan’s risk assessment in April but received no response, prompting a formal inquiry into whether the site failed to meet its duty to protect users. The nature of the illegal content being scrutinised has not been disclosed.

The regulator emphasised that it can fine companies up to £18 million or 10% of their global revenue, whichever is higher. The move marks a significant test of the UK’s stricter regulatory powers to hold online services accountable.

The watchdog’s concerns stem from user anonymity on 4chan, which has historically made the platform a hotspot for controversial, offensive, and often extreme content. A recent cyberattack further complicated matters, knocking parts of the site offline for over a week.

Alongside 4chan, Ofcom is also investigating the pornographic site First Time Videos for failing to demonstrate that robust age verification systems are in place to block access by under-18s. The inquiry is part of a broader crackdown: platforms hosting age-restricted content face a July deadline to implement effective safeguards, which may include facial age-estimation technology.

Additionally, seven lesser-known file-sharing services, including Krakenfiles and Yolobit, are being scrutinised for potentially hosting child sexual abuse material. Like 4chan, these platforms reportedly failed to respond to Ofcom’s information requests. The regulator’s growing list of investigations signals a tougher era for digital platforms operating in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NHS launches urgent blood donor appeal

The NHS is appealing for one million new blood donors during National Blood Week, following a significant cyberattack on London hospitals that severely impacted blood stocks.

This urgent plea comes after an analysis by NHS Blood and Transplant revealed a shortfall of 200,000 donors, with only two percent of the population currently sustaining the nation’s blood supply.

The ongoing shortage stems from a June 2024 cyberattack, linked to the Russia-based Qilin ransomware group, which crippled the networks of Synnovis, a major NHS lab partner.

The disruption affected crucial pathology services at hospitals including King’s College Hospital and Guy’s and St Thomas’ NHS Foundation Trust, leading to postponed operations and procedures. The incident, declared a ‘critical incident’ by the NHS, prompted an Amber alert for severe blood shortages.

With blood stocks remaining low, exacerbated by recent bank holidays, the NHS is now facing a pressing need to prevent a ‘Red Alert,’ which would signify demand far exceeding capacity and threaten public safety.

Particular emphasis is being placed on finding more O-negative and Ro donors, and the public is urged to come forward to help replenish vital supplies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta boosts AGI efforts with new team

Mark Zuckerberg, Meta Platforms CEO, is reportedly building a new team dedicated to achieving artificial general intelligence (AGI), aiming for machines that can match or exceed human intellect.

The initiative is linked to an investment exceeding $10 billion in Scale AI, whose founder, Alexandr Wang, is expected to join the AGI group. Meta has not yet commented on these reports.

Zuckerberg’s personal involvement in recruiting around 50 experts, including a new head of AI research, is partly driven by dissatisfaction with Meta’s recent large language model, Llama 4. Last month, Meta even delayed the release of its flagship ‘Behemoth’ AI model due to internal concerns about its performance.

The move signals an intensifying race in the AI sector, as rivals like OpenAI are also making strategic adjustments to attract further investment in their pursuit of AGI. This highlights a clear push by major tech players towards developing more advanced and capable AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s AI tools disabled for gaokao exam

As millions of high school students across China began the rigorous ‘gaokao’ college entrance exam, the country’s leading tech companies took unprecedented action by disabling AI features on their popular platforms.

Apps from Tencent, ByteDance, and Moonshot AI temporarily blocked functionalities like photo recognition and real-time question answering. This move aimed to prevent students from using AI chatbots to cheat during the critical national examination, which largely dictates university admissions in China.

This year, approximately 13.4 million students are participating in the ‘gaokao,’ a multi-day test that serves as a pivotal determinant for social mobility, particularly for those from rural or lower-income backgrounds.

The immense pressure associated with the exam has historically fuelled intense test preparation. Screenshots circulating on the Chinese social media app Rednote confirmed that AI chatbots such as Tencent’s YuanBao, ByteDance’s Doubao, and Moonshot AI’s Kimi displayed messages indicating that exam-relevant features had been temporarily disabled to ensure fairness.

The ‘gaokao’ highlights China’s balanced approach to AI: promoting AI education from a young age, with compulsory instruction in Beijing schools from this autumn, while insisting that AI is for learning, not cheating. Regulators draw a clear line: AI should aid development but never compromise academic integrity.

This coordinated action by major tech firms reinforces the message that AI has no place in the examination hall, despite China’s broader push to cultivate an AI-literate generation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Growing push in Europe to regulate children’s social media use

Several European countries, led by Denmark, France, and Greece, are intensifying efforts to shield children from the potentially harmful effects of social media. With Denmark taking over the EU Council presidency from July, its Digital Minister, Caroline Stage Olsen, has made clear that her country will push for a ban on social media for children under 15.

Olsen criticises current platforms for failing to remove illegal content and relying on addictive features that encourage prolonged use. She also warned that platforms prioritise profit and data harvesting over the well-being of young users.

The initiative builds on growing concern across the EU about the mental and physical toll social media may take on children, including exposure to dangerous content, disinformation, cyberbullying, and unrealistic body image standards. France, for instance, has already passed legislation requiring parental consent for users under 15 and is pressing platforms to verify users’ ages more rigorously.

While the European Commission has issued draft guidelines to improve online safety for minors, such as making children’s accounts private by default, some countries are calling for tougher enforcement under the EU’s Digital Services Act. Despite these moves, there is currently no consensus across the EU for an outright ban.

Cultural differences and practical hurdles, like implementing consistent age verification, remain significant challenges. Still, proposals are underway to introduce a unified age of digital adulthood and a continent-wide age verification application, possibly even embedded into devices, to limit access by minors.

Olsen and her allies remain adamant, planning to dedicate the October summit of the EU digital ministers entirely to the issue of child online safety. They are also looking to future legislation, like the Digital Fairness Act, to enforce stricter consumer protection standards that explicitly account for minors. Meanwhile, age verification and parental controls are seen as crucial first steps toward limiting children’s exposure to addictive and damaging online environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Workers struggle as ChatGPT goes down

The temporary outage of ChatGPT this morning left thousands of users struggling with their daily tasks, highlighting a growing reliance on AI.

Social media was flooded with humorous yet telling posts from users expressing their inability to perform even basic functions without AI. This incident has reignited concerns about society’s increasing dependence on closed-source AI tools for work and everyday life.

OpenAI, the developer of ChatGPT, is currently investigating the technical issues that led to ‘elevated error rates and latency.’ The widespread disruption underscores a broader debate about AI’s impact on critical thinking and productivity.

While some research suggests AI chatbots can enhance efficiency, commentators such as Paul Armstrong argue that frequent reliance on generative tools may erode critical thinking skills and understanding.

The discussion around AI’s role in the workplace was a key theme at the recent SXSW London event. Despite concerns about job displacement, exemplified by redundancies at Canva, firms like Lloyd’s Market Association are increasingly adopting AI, with 40% of London market companies now using it.

Industry leaders maintain that AI aims to rethink workflows and empower human creativity, with a ‘human layer’ remaining essential for refining and adding nuanced value.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S resumes online orders after cyberattack

Marks & Spencer has resumed online clothing orders following a 46-day pause triggered by a cyberattack. The retailer restarted standard home delivery across England, Scotland and Wales, focusing initially on best-selling and new items instead of the full range.

A spokesperson stated that additional products will be added daily, enabling customers to gradually access a wider selection. Services such as click and collect, next-day delivery, and international orders are expected to be reintroduced in the coming weeks, while deliveries to Northern Ireland will resume soon.

The disruption began on 25 April when M&S halted clothing and home orders after issues with contactless payments and app services during the Easter weekend. The company revealed that the breach was caused by hackers who deceived staff at a third-party contractor, bypassing security defences.

M&S had warned that the incident could reduce its 2025/26 operating profit by around £300 million, though it aims to limit losses through insurance and internal cost measures. Shares rose 3 per cent as online orders resumed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!