Growing push in Europe to regulate children’s social media use

Several European countries, led by Denmark, France, and Greece, are intensifying efforts to shield children from the potentially harmful effects of social media. With Denmark taking over the EU Council presidency from July, its Digital Minister, Caroline Stage Olsen, has made clear that her country will push for a ban on social media for children under 15.

Olsen criticises current platforms for failing to remove illegal content and for relying on addictive features that encourage prolonged use. She also warns that platforms prioritise profit and data harvesting over the well-being of young users.

That initiative builds on growing concern across the EU about the mental and physical toll social media may take on children, including the spread of dangerous content, disinformation, cyberbullying, and unrealistic body image standards. France, for instance, has already passed legislation requiring parental consent for users under 15 and is pressing platforms to verify users’ ages more rigorously.

While the European Commission has issued draft guidelines to improve online safety for minors, such as making children’s accounts private by default, some countries are calling for tougher enforcement under the EU’s Digital Services Act. Despite these moves, there is currently no consensus across the EU for an outright ban.

Cultural differences and practical hurdles, like implementing consistent age verification, remain significant challenges. Still, proposals are underway to introduce a unified age of digital adulthood and a continent-wide age verification application, possibly even embedded into devices, to limit access by minors.

Olsen and her allies remain adamant, planning to dedicate the October summit of EU digital ministers entirely to the issue of child online safety. They are also looking to future legislation, like the Digital Fairness Act, to enforce stricter consumer protection standards that explicitly account for minors. Meanwhile, age verification and parental controls are seen as crucial first steps toward limiting children’s exposure to addictive and damaging online environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple reveals new AI features at WWDC

Apple has unveiled a range of AI features at its annual Worldwide Developers Conference, focusing on tighter privacy, enhanced user tools and broader integration with OpenAI’s ChatGPT. These updates will appear across iOS 26, iPadOS 26, macOS 26 and visionOS 26, set to launch in autumn.

Apple Intelligence was first teased last year, but the company is now allowing third-party developers to access its on-device AI models for the first time.

CEO Tim Cook and software chief Craig Federighi outlined how these features are intended to offer more personalised, efficient apps. Users of newer iPhones will benefit from tools such as live translation in Messages and FaceTime, and AI-powered image analysis via Visual Intelligence.

Apple is also letting users blend emojis creatively and stylise photos with ChatGPT through its Image Playground. Enhancements to the Wallet app will help summarise order tracking from emails, and AI-generated voices will offer fitness updates.

Despite these innovations, Apple’s redesign of Siri remains incomplete and is not expected to launch soon.

The event failed to deliver major surprises, as many details had already been leaked. Investors responded cautiously, sending Apple shares down by 1.2%. The firm has lost 20% of its value so far this year and no longer holds the top spot as the world’s most valuable company.

Nonetheless, Apple is expected to reveal more AI advancements in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity alarm after 184 million credentials exposed

A vast unprotected database containing over 184 million credentials from major platforms and sectors has highlighted severe weaknesses in data security worldwide.

The leaked credentials, harvested by infostealer malware and stored in plain text, pose significant risks to consumers and businesses, underscoring an urgent need for stronger cybersecurity and better data governance.

Cybersecurity researcher Jeremiah Fowler discovered the 47 GB database exposing emails, passwords, and authorisation URLs from tech giants like Google, Microsoft, Apple, Facebook, and Snapchat, as well as banking, healthcare, and government accounts.

The data was left accessible without any encryption or authentication, making it vulnerable to anyone with the link.

The credentials were reportedly collected by infostealer malware such as Lumma Stealer, which silently steals sensitive information from infected devices. The stolen data fuels a thriving underground economy involving identity theft, fraud, and ransomware.

The exposure extends beyond the tech sector, affecting critical infrastructure such as healthcare and government services and raising concerns over personal privacy and national security. With recurring data breaches becoming the norm, industries must urgently reinforce security measures.

Chief Data Officers and IT risk leaders face mounting pressure as regulatory scrutiny intensifies. The leak highlights the need for proactive data stewardship through encryption, access controls, and real-time threat detection.

Many organisations struggle with legacy systems, decentralised data, and cloud adoption, complicating governance efforts.

Enterprise leaders must treat data as a strategic asset and liability, embedding cybersecurity into business processes and supply chains. Beyond technology, cultivating a culture of accountability and vigilance is essential to prevent costly breaches and protect brand trust.

The massive leak signals a new era in data governance where transparency and relentless improvement are critical. The message is clear: there is no room for complacency in safeguarding the digital world’s most valuable assets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hong Kong builds AI tool for breast cancer diagnosis

Researchers at the Hong Kong University of Science and Technology have unveiled a pioneering AI model called MOME for non-invasive breast cancer diagnosis.

Using China’s largest multiparametric MRI breast cancer dataset, MOME performs at a level comparable to seasoned radiologists and is currently undergoing clinical trials in more than ten hospitals.

Among the institutions participating in the validation phase are Shenzhen People’s Hospital, Guangzhou First Municipal People’s Hospital, and Yunnan Cancer Center. Early results show that MOME excels in predicting response to pre-surgical chemotherapy.

The development highlights the region’s growing capabilities in medtech innovation and could reshape diagnostic strategies for breast cancer across Asia. MOME’s clinical success may also pave the way for similar AI-led models in oncology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI cracks down on misuse of ChatGPT by foreign threat actors

OpenAI has shut down a network of ChatGPT accounts allegedly linked to nation-state actors from Russia, China, Iran, North Korea, and others after uncovering their use in cyber and influence operations.

The banned accounts were used to assist in developing malware, automate social media content, and conduct reconnaissance on sensitive technologies.

According to OpenAI’s latest threat report, a Russian-speaking group used the chatbot to iteratively improve malware code written in Go. Each account was used only once to refine the code before being abandoned, a tactic highlighting the group’s emphasis on operational security.

The malicious software was later disguised as a legitimate gaming tool and distributed online, infecting victims’ devices to exfiltrate sensitive data and establish long-term access.

Chinese-linked groups, including APT5 and APT15, were found using OpenAI’s models for a range of technical tasks—from researching satellite communications to developing scripts for Android app automation and penetration testing.

Other accounts were linked to influence campaigns that generated propaganda or polarising content in multiple languages, including efforts to pose as journalists and simulate public discourse around elections and geopolitical events.

The banned activities also included scams, social engineering, and politically motivated disinformation. OpenAI stressed that although some misuse was detected, none involved sophisticated or large-scale attacks enabled solely by its tools.

The company said it is continuing to improve detection and mitigation efforts to prevent abuse of its models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk’s X tightens control on AI data use

Social media platform X has updated its developer agreement to prohibit the use of its content for training large language models.

The new clause, added under the restrictions section, forbids any attempt to use X’s API or content to fine-tune or train foundational or frontier AI models.

The move follows the acquisition of X by Elon Musk’s AI company xAI, which is developing its own models.

By restricting external access, the company aims to prevent competitors from freely using X’s data while maintaining control over a valuable resource for training AI systems.

X joins a growing list of platforms, including Reddit and The Browser Company, that have introduced terms blocking unauthorised AI training.

The shift reflects a broader industry trend towards limiting open data access amid the rising value of proprietary content in the AI arms race.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OQC outlines bold 50,000 qubit quantum computing vision

Oxford Quantum Circuits (OQC) has revealed plans to develop a 50,000 qubit fault-tolerant quantum computer by 2034, using its proprietary ‘Dimon’ superconducting transmon technology.

Achieving such scale would require millions of physical qubits but promises to outperform global rivals, including Google and IBM, with real-world applications ranging from cyber threat detection to drug discovery.

The roadmap includes a significant push to reduce error rates and optimise chip materials, with recent breakthroughs enabling error detection at the hardware level. OQC claims it achieves a 99.8% gate fidelity in just 25 nanoseconds and a tenfold improvement in qubit efficiency compared to competitors.

Interim CEO Gerald Mullally said the roadmap marks a turning point, calling on finance and national security organisations to prepare for a quantum-driven future.

Now seeking $100 million in Series B funding, the firm plans to install its first quantum system in New York later this year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic debuts AI tools for US national security

Anthropic has launched a new line of AI models, Claude Gov, explicitly tailored for US national security operations. Built with direct input from government clients, the models are already in use by top-tier agencies.

These classified-use models were developed with enhanced safety testing and are optimised for handling sensitive material, including improved handling of classified data, rare language proficiency, and defence-specific document comprehension.

The Claude Gov models reflect Anthropic’s broader move into government partnerships, building on its collaboration with Palantir and AWS.

As competition in defence-focused AI intensifies, rivals including OpenAI, Meta, and Google are also adapting their models for secure environments.

The sector’s growing interest in custom, security-conscious AI tools marks a shift in how leading labs seek stable revenue streams and deeper ties with government agencies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT adds meeting recording and cloud access

OpenAI has launched new features for ChatGPT that allow it to record meetings, transcribe conversations, and pull information directly from cloud platforms like Google Drive and SharePoint.

Instead of relying on typed input alone, users can now speak to ChatGPT, which records audio, creates editable summaries, and helps generate follow-up content such as emails or project outlines.

‘Record’ is currently available to Team users via the macOS app and will soon expand to Enterprise and Edu accounts.

The recording tool automatically deletes the audio after transcription and applies existing workspace data rules, ensuring recordings are not used for training.

Instead of leaving notes scattered across different platforms, users gain a structured and searchable history of conversations, voice notes, or brainstorming sessions, which ChatGPT can recall and apply during future interactions.

At the same time, OpenAI has introduced new connectors for business users that let ChatGPT access files from cloud services like Dropbox, OneDrive, Box, and others.

These connectors allow ChatGPT to search and summarise information from internal documents, rather than depending only on web search or user uploads. The update also includes beta support for Deep Research agents that can work with tools like GitHub and HubSpot.

OpenAI has embraced the Model Context Protocol, an open standard allowing organisations to build their own custom connectors for proprietary tools.
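To illustrate what such a custom connector can involve, below is a minimal sketch of an MCP server exposing a single internal-search tool. It assumes the official MCP Python SDK and its FastMCP helper; the server name, tool, and document store are hypothetical placeholders for illustration, not part of any OpenAI product.

```python
# Minimal MCP server sketch (assumes the official MCP Python SDK: pip install mcp).
# The tool below is a hypothetical example of an internal connector an
# organisation might expose; the document store is an in-memory stand-in.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-docs")  # server name advertised to the client

# Hypothetical stand-in for a proprietary document system.
DOCUMENTS = {
    "onboarding": "New-hire onboarding checklist and first-week tasks.",
    "security-policy": "Password, device, and access-control policy.",
}

@mcp.tool()
def search_documents(query: str) -> list[str]:
    """Return titles of internal documents that mention the query."""
    q = query.lower()
    return [
        title
        for title, text in DOCUMENTS.items()
        if q in title.lower() or q in text.lower()
    ]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```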

Rather than serving purely as a general-purpose chatbot, ChatGPT is evolving into a workplace assistant capable of tapping into and understanding a company’s complete knowledge base.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York Times sues OpenAI over data use

The New York Times has launched legal action against OpenAI, accusing the company of using its news articles without permission to train AI language models.

The NYT has asked the court to require OpenAI to retain all ChatGPT user data indefinitely so it can search for evidence to support its case.

OpenAI’s Chief Operating Officer, Brad Lightcap, criticised the demand, saying it conflicts with privacy commitments and longstanding industry standards. OpenAI is appealing the order, arguing it represents an excessive overreach that weakens user privacy protections.

Despite the ongoing appeal, OpenAI must comply with the court’s directive until further notice. A limited, audited legal and security team will manage the stored data securely and only use it to meet legal obligations.

The data retention order impacts over 400 million weekly ChatGPT users, including those on Free, Plus, Pro, Teams, and many API plans. However, Enterprise and Zero Data Retention users remain unaffected.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!