Growing push in Europe to regulate children’s social media use

Several European countries, led by Denmark, France, and Greece, are intensifying efforts to shield children from the potentially harmful effects of social media. With Denmark taking over the EU Council presidency from July, its Digital Minister, Caroline Stage Olsen, has made clear that her country will push for a ban on social media for children under 15.

Olsen has criticised current platforms for failing to remove illegal content and for relying on addictive features that encourage prolonged use. She has also warned that platforms prioritise profit and data harvesting over the well-being of young users.

That initiative builds on growing concern across the EU about the mental and physical toll social media may take on children, including the spread of dangerous content, disinformation, cyberbullying, and unrealistic body image standards. France, for instance, has already passed legislation requiring parental consent for users under 15 and is pressing platforms to verify users’ ages more rigorously.

While the European Commission has issued draft guidelines to improve online safety for minors, such as making children’s accounts private by default, some countries are calling for tougher enforcement under the EU’s Digital Services Act. Despite these moves, there is currently no consensus across the EU for an outright ban.

Cultural differences and practical hurdles, like implementing consistent age verification, remain significant challenges. Still, proposals are underway to introduce a unified age of digital adulthood and a continent-wide age verification application, possibly even embedded into devices, to limit access by minors.

Olsen and her allies remain adamant, planning to dedicate the October summit of the EU digital ministers entirely to the issue of child online safety. They are also looking to future legislation, like the Digital Fairness Act, to enforce stricter consumer protection standards that explicitly account for minors. Meanwhile, age verification and parental controls are seen as crucial first steps toward limiting children’s exposure to addictive and damaging online environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple reveals new AI features at WWDC

Apple has unveiled a range of AI features at its annual Worldwide Developers Conference, focusing on tighter privacy, enhanced user tools and broader integration with OpenAI’s ChatGPT. These updates will appear across iOS 26, iPadOS 26, macOS 26 and visionOS 26, set to launch in autumn.

Apple Intelligence was first teased last year; now, for the first time, the company is opening its on-device AI models to third-party developers.

CEO Tim Cook and software chief Craig Federighi outlined how these features are intended to offer more personalised, efficient apps. Users of newer iPhones will benefit from tools such as live translation in Messages and FaceTime, and AI-powered image analysis via Visual Intelligence.

Apple also enables users to blend emojis creatively and use ChatGPT through its Image Playground to stylise photos. Enhancements to the Wallet app will help summarise order tracking from emails, and AI-generated voices will offer fitness updates.

Despite these innovations, Apple’s redesign of Siri remains incomplete and is not expected to launch soon.

The event failed to deliver major surprises, as many details had already been leaked. Investors responded cautiously, sending Apple shares down by 1.2%. The firm has lost 20% of its value so far this year and no longer holds the top spot as the world’s most valuable company.

Nonetheless, Apple is expected to reveal more AI advancements in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit targets AI firm over scraped sports posts

Reddit has taken legal action against AI company Anthropic, accusing it of scraping content from the platform’s sports-focused communities.

The lawsuit claims Anthropic violated Reddit’s user agreement by collecting posts without permission, particularly from fan-driven discussions that are central to how sports content is shared online.

Reddit argues the scraping undermines its obligations to over 100 million daily users, especially around privacy and user control. According to the filing, Anthropic’s actions disregard the assurances Reddit gives users that they can manage or delete their content as they see fit.

The platform emphasises that users gain no benefit from technology built using their contributions.

These online sports communities are rich sources of original fan commentary and analysis. On a large scale, such content could enable AI models to imitate sports fan behaviour with impressive accuracy.

While teams or platforms might use such models to enhance engagement or communication, Reddit warns that unauthorised use brings serious ethical and legal risks.

The case could influence how AI companies handle user-generated content across the internet, not just in sports. As web scraping grows more common, the outcome of the dispute may shape future standards for AI training practices and online content rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity alarm after 184 million credentials exposed

A vast unprotected database containing over 184 million credentials from major platforms and sectors has highlighted severe weaknesses in data security worldwide.

The leaked credentials, harvested by infostealer malware and stored in plain text, pose significant risks to consumers and businesses, underscoring an urgent need for stronger cybersecurity and better data governance.

Cybersecurity researcher Jeremiah Fowler discovered the 47 GB database exposing emails, passwords, and authorisation URLs from tech giants like Google, Microsoft, Apple, Facebook, and Snapchat, as well as banking, healthcare, and government accounts.

The data was left accessible without any encryption or authentication, making it vulnerable to anyone with the link.

The credentials were reportedly collected by infostealer malware such as Lumma Stealer, which silently steals sensitive information from infected devices. The stolen data fuels a thriving underground economy involving identity theft, fraud, and ransomware.

The leak’s scope extends beyond tech, affecting critical infrastructure such as healthcare and government services and raising concerns over personal privacy and national security. With recurring data breaches becoming the norm, industries must urgently reinforce security measures.

Chief Data Officers and IT risk leaders face mounting pressure as regulatory scrutiny intensifies. The leak highlights the need for proactive data stewardship through encryption, access controls, and real-time threat detection.
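
One concrete element of that stewardship is never keeping credentials in a recoverable form. The sketch below, written in Python with only the standard library, shows the widely recommended pattern of salted, slow password hashing with constant-time verification; the iteration count is illustrative and not tied to any platform named above.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor for PBKDF2

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted, slow hash instead of storing the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

if __name__ == "__main__":
    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("wrong guess", salt, digest)
```

Stored this way, a leaked table yields only salts and digests rather than usable passwords, which is precisely what the plain-text database described above failed to ensure.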

Many organisations struggle with legacy systems, decentralised data, and cloud adoption, complicating governance efforts.

Enterprise leaders must treat data as a strategic asset and liability, embedding cybersecurity into business processes and supply chains. Beyond technology, cultivating a culture of accountability and vigilance is essential to prevent costly breaches and protect brand trust.

The massive leak signals a new era in data governance where transparency and relentless improvement are critical. The message is clear: there is no room for complacency in safeguarding the digital world’s most valuable assets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK judges issue warning on unchecked AI use by lawyers

A senior UK judge has warned that lawyers may face prosecution if they continue citing fake legal cases generated by AI without verifying their accuracy.

High Court justice Victoria Sharp called the misuse of AI a threat to justice and public trust, after lawyers in two recent cases relied on false material created by generative tools.

In one £90 million lawsuit involving Qatar National Bank, a lawyer submitted 18 cases that did not exist. The client later admitted to supplying the false information, but Justice Sharp criticised the lawyer for depending on the client’s research instead of conducting proper legal checks.

In another case, five fabricated cases were used in a housing claim against the London Borough of Haringey. The barrister denied using AI but failed to provide a clear explanation.

Both incidents have been referred to professional regulators. Sharp warned that submitting false information could amount to contempt of court or, in severe cases, perverting the course of justice — an offence that can lead to life imprisonment.

While recognising AI as a useful legal tool, Sharp stressed the need for oversight and regulation. She said AI’s risks must be managed with professional discipline if public confidence in the legal system is to be preserved.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK teams with tech giants on AI training

The UK government is launching a nationwide AI skills initiative aimed at both workers and schoolchildren, with Prime Minister Keir Starmer announcing partnerships with major tech companies including Google, Microsoft and Amazon.

The £187 million TechFirst programme will provide AI education to one million secondary students and train 7.5 million workers over the next five years.

Rather than keeping such tools limited to specialists, the government plans to make AI training accessible across classrooms and businesses. Companies involved will make learning materials freely available to boost digital skills and productivity, particularly in using chatbots and large language models.

Starmer said the scheme is designed to empower the next generation to shape AI’s future instead of being shaped by it. He called it the start of a new era of opportunity and growth, as the UK aims to strengthen its global leadership in AI.

The initiative arrives as the country’s AI sector, currently worth £72 billion, is projected to grow to more than £800 billion by 2035.

The government also signed two agreements with NVIDIA to support a nationwide AI talent pipeline, reinforcing efforts to expand both the workforce and innovation in the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia and FCA open AI sandbox for UK fintechs

Financial firms across the UK will soon be able to experiment with AI in a new regulatory sandbox, launched by the Financial Conduct Authority (FCA) in partnership with Nvidia.

Known as the Supercharged Sandbox, it offers a secure testing ground for firms wanting to explore AI tools without needing advanced computing resources of their own.

Set to begin in October, the initiative is open to any financial services company testing AI-driven ideas. Firms will have access to Nvidia’s accelerated computing platform and tailored AI software, helping them work with complex data, improve automation, and enhance risk management in a controlled setting.

The FCA said the sandbox is designed to support firms lacking the in-house capacity to test new technology.

It aims to provide not only computing power but also regulatory guidance and access to better datasets, creating an environment where innovation can flourish while remaining compliant with rules.

The move forms part of a wider push by the UK government to foster economic growth through innovation. Finance minister Rachel Reeves has urged regulators to clear away obstacles to growth and praised the FCA and Bank of England for acting on her call to cut red tape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Odyssey presents immersive AI-powered streaming

Odyssey, a startup founded by self-driving veterans Oliver Cameron and Jeff Hawke, has unveiled an AI model that allows users to interact with streaming video in real time.

The technology generates video frames every 40 milliseconds, enabling users to move through scenes like a 3D video game instead of passively watching. A demo is currently available online, though it is still in its early stages.

The system relies on a new kind of ‘world model’ that predicts future visual states based on previous actions and environments. Odyssey claims its model can maintain spatial consistency, learn motion from video, and sustain coherent video output for five minutes or more.
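
To make the idea concrete, the sketch below shows, in deliberately simplified Python, the kind of real-time loop a frame-by-frame world model implies: each frame is predicted from the accumulated history plus the latest user action, within roughly the 40-millisecond budget reported above. None of the names come from Odyssey; the model, actions, and renderer are all placeholders.

```python
import time

FRAME_BUDGET_S = 0.040  # ~40 ms per frame, per the figure reported above

class ToyWorldModel:
    """Placeholder for a learned model mapping (frame history, action) -> next frame."""

    def predict_next_frame(self, history: list[str], action: str) -> str:
        # A real model would emit pixels; this stub just labels the step.
        return f"frame {len(history)} after action '{action}'"

def stream(model: ToyWorldModel, get_action, render, seconds: float = 2.0) -> None:
    """Run an autoregressive prediction loop at a fixed frame budget."""
    history: list[str] = []
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        start = time.monotonic()
        frame = model.predict_next_frame(history, get_action())
        history.append(frame)  # retained context is what keeps scenes spatially consistent
        render(frame)
        # Sleep off any remaining budget so output holds a steady ~25 fps.
        time.sleep(max(0.0, FRAME_BUDGET_S - (time.monotonic() - start)))

if __name__ == "__main__":
    stream(ToyWorldModel(), get_action=lambda: "move_forward", render=print)
```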

Unlike models trained solely on internet data, Odyssey captures real-world environments using a custom 360-degree, backpack-mounted camera to build higher-fidelity simulations.

Tech giants and AI startups are exploring world models to power next-generation simulations and interactive media. Yet creative professionals remain wary. A 2024 study commissioned by the Animation Guild predicted significant job disruptions across film and animation.

Game studios like Activision Blizzard have been scrutinised for using AI while cutting staff.

Odyssey, however, insists its goal is collaboration instead of replacement. The company is also developing software to let creators edit scenes using tools like Unreal Engine and Blender.

Backed by $27 million in funding and supported by Pixar co-founder Ed Catmull, Odyssey aims to transform video content across entertainment, education, and advertising through on-demand interactivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT adds meeting recording and cloud access

OpenAI has launched new features for ChatGPT that allow it to record meetings, transcribe conversations, and pull information directly from cloud platforms like Google Drive and SharePoint.

Instead of relying on typed input alone, users can now speak to ChatGPT, which records audio, creates editable summaries, and helps generate follow-up content such as emails or project outlines.

‘Record’ is currently available to Team users via the macOS app and will soon expand to Enterprise and Edu accounts.

The recording tool automatically deletes the audio after transcription and applies existing workspace data rules, ensuring recordings are not used for training.

Instead of leaving notes scattered across different platforms, users gain a structured and searchable history of conversations, voice notes, or brainstorming sessions, which ChatGPT can recall and apply during future interactions.

At the same time, OpenAI has introduced new connectors for business users that let ChatGPT access files from cloud services like Dropbox, OneDrive, Box, and others.

These connectors allow ChatGPT to search and summarise information from internal documents, rather than depending only on web search or user uploads. The update also includes beta support for Deep Research agents that can work with tools like GitHub and HubSpot.

OpenAI has embraced the Model Context Protocol, an open standard allowing organisations to build their own custom connectors for proprietary tools.
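
For readers unfamiliar with the protocol, a custom connector is essentially a small server that exposes an organisation’s own tools and data to the model. The sketch below uses the open-source MCP Python SDK’s FastMCP helper; the connector name, the document store, and the search function are hypothetical, and a real connector would query an internal system rather than a hard-coded dictionary.

```python
# A hypothetical internal-documents connector, sketched with the MCP Python SDK (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-docs")  # hypothetical connector name

# Stand-in for an internal document store.
DOCS = {
    "onboarding": "Steps for setting up a new starter's accounts and hardware.",
    "expenses": "How to file travel expenses and the approval thresholds.",
}

@mcp.tool()
def search_documents(query: str) -> str:
    """Return internal documents whose text mentions the query."""
    hits = [f"{name}: {text}" for name, text in DOCS.items() if query.lower() in text.lower()]
    return "\n".join(hits) or "No matching documents."

if __name__ == "__main__":
    mcp.run()  # serves the connector so an MCP-aware client can call the tool
```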

Rather than serving purely as a general-purpose chatbot, ChatGPT is evolving into a workplace assistant capable of tapping into and understanding a company’s complete knowledge base.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S CEO targeted by hackers in abusive ransom email

Marks & Spencer has been directly targeted by a ransomware group calling itself DragonForce, which sent a vulgar and abusive ransom email to CEO Stuart Machin using a compromised employee email address.

The message, laced with offensive language and racist terms, demanded that Machin engage via a darknet portal to negotiate payment. It also said the hackers had encrypted the company’s servers and stolen customer data, a claim M&S acknowledged only weeks later.

The email, dated 23 April, appears to have been sent from the account of an Indian IT worker employed by Tata Consultancy Services (TCS), a long-standing M&S tech partner.

TCS has denied involvement and stated that its systems were not the source of the breach. M&S has remained silent publicly, neither confirming the full scope of the attack nor disclosing whether a ransom was paid.

The cyber attack has caused major disruption, costing M&S an estimated £300 million and halting online orders for over six weeks.

DragonForce has also claimed responsibility for a simultaneous attack on the Co-op, which left some shelves empty for days. While nothing has yet appeared on DragonForce’s leak site, the group claims it will publish stolen information soon.

Investigators believe DragonForce operates as a ransomware-as-a-service collective, offering tools and platforms to cybercriminals in exchange for a 20% share of any ransom.

Some experts suspect the real perpetrators may be young hackers from the West, linked to a loosely organised online community called Scattered Spider. The UK’s National Crime Agency has confirmed it is focusing on the group as part of its inquiry into the recent retail hacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!