EU strengthens IP enforcement under Digital Services Act

The European Commission has signed an agreement with the European Union Intellectual Property Office to support enforcement of the Digital Services Act in relation to intellectual property rights.

The agreement takes effect immediately and focuses on strengthening the Commission’s enforcement capacity.

Cooperation will target systemic risks linked to very large online platforms and search engines, particularly the spread of intellectual property-infringing content. Such risks include counterfeit goods and online piracy, which fall within the scope of the DSA’s oversight framework.

The EUIPO is expected to expand its activities to support judicial and enforcement authorities, as well as online intermediaries that are not classified as very large platforms. Intellectual property rights holders are also included in the broader effort to address infringement risks.

The Digital Services Act establishes rules aimed at creating a safer and more transparent online environment across the European Union. Cooperation between the EU institutions and specialised bodies is presented as a key element in safeguarding users’ rights, including those linked to intellectual property.

Strengthening enforcement mechanisms in areas such as intellectual property links platform governance with broader policy objectives, including user protection, accountability of online intermediaries, and the functioning of the digital single market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK’s Ofcom report reveals evolving online habits and growing AI reliance

New Ofcom research suggests that UK adults are becoming more cautious and passive in their use of social media, even as interest in AI tools grows, pointing to a wider shift in how people experience digital life.

While social media remains widely used, the report indicates that users are participating less actively and becoming more selective about what they share and how visible they are online.

That shift is tied in part to growing unease about digital well-being. Concerns about screen time and the wider effects of online platforms are rising, with fewer adults convinced that the benefits of being online outweigh the risks. Many say they are actively trying to limit their usage, reflecting broader anxieties about the impact of digital media on mental health and everyday life.

At the same time, AI adoption is accelerating, especially among younger users. Ofcom’s findings suggest that people are using AI not only for productivity and creative tasks, but also, in some cases, for conversational and emotional support, pointing to a changing relationship between users and digital tools.

Other findings reinforce the sense of a more fragmented digital environment. Trust in news remains uneven: mainstream sources still hold a central place but face growing scepticism, and confidence in digital skills does not always translate into an ability to identify misinformation, scams, or other online risks.

Taken together, the findings suggest that the UK’s digital habits are not simply expanding but changing in character. Users appear to be growing more wary of social platforms, more alert to digital harms, and more open to new forms of interaction through AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK regulator orders revised safety assessments under Online Safety Act

Ofcom has ordered more than 40 online services to submit revised risk assessments under the UK’s Online Safety Act, increasing pressure on platforms to show how they identify and reduce illegal content and other user harms.

The move marks a tougher phase in the UK’s online safety regime, with the regulator signalling that incomplete or delayed submissions could trigger enforcement action.

Ofcom said earlier reviews had identified weaknesses in several assessments, prompting companies to strengthen their approach and improve safeguards.

The requirement is especially significant for services likely to be accessed by children, which must also examine the risk of exposure to harmful content and demonstrate what protective measures they have in place. In that sense, the regulator is pushing platforms to treat safety not as a reactive moderation issue, but as a design and compliance obligation.

Ofcom has also indicated that major platforms will eventually have to publish summaries of their risk assessments, adding a transparency layer to the regime.

The latest demands suggest that the UK is moving beyond setting out online safety expectations and into a more interventionist stage focused on supervision, accountability, and enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT develops AI framework to test ethics in autonomous systems

Researchers at MIT have introduced a new framework designed to evaluate the ethical impact of autonomous systems used in high-stakes environments. The approach aims to identify cases where AI-driven decisions may be technically efficient but fail to meet fairness expectations.

Growing reliance on AI in areas such as energy distribution and traffic management has raised concerns about unintended bias. Cost-optimised systems can still disadvantage communities, especially when ethical factors are hard to measure.

The framework, known as SEED-SET, separates objective performance metrics from subjective human values. A large language model is used to simulate stakeholder preferences, enabling the system to compare scenarios and detect where outcomes diverge from ethical expectations.
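
To make the idea concrete, the sketch below separates an objective cost metric from a simulated stakeholder fairness score and flags scenarios that look efficient but fall short of fairness expectations. It is a minimal illustration of the concept rather than the actual SEED-SET implementation: the names, thresholds and scoring proxy are invented for the example, and the large language model that would simulate stakeholder preferences is replaced by a simple stand-in function.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    cost: float            # objective performance metric (e.g. energy cost)
    group_outcomes: dict   # share of burden borne by each community group

def simulated_stakeholder_score(scenario: Scenario) -> float:
    """Stand-in for an LLM prompted to rate fairness from stakeholders' point of view.

    Proxy used here: the more unevenly burdens are spread across groups,
    the lower the fairness score (0 = very unfair, 1 = fair).
    """
    burdens = list(scenario.group_outcomes.values())
    spread = max(burdens) - min(burdens)
    return max(0.0, 1.0 - spread)

def flag_divergent(scenarios: list, fairness_floor: float = 0.6) -> list:
    """Return names of scenarios that look efficient but score below the fairness floor."""
    best_cost = min(s.cost for s in scenarios)
    flagged = []
    for s in scenarios:
        efficient = s.cost <= best_cost * 1.1   # within 10% of the cheapest option
        fair = simulated_stakeholder_score(s) >= fairness_floor
        if efficient and not fair:
            flagged.append(s.name)
    return flagged

if __name__ == "__main__":
    scenarios = [
        Scenario("A", cost=100.0, group_outcomes={"urban": 0.5, "rural": 0.5}),
        Scenario("B", cost=95.0,  group_outcomes={"urban": 0.2, "rural": 0.8}),
    ]
    print(flag_divergent(scenarios))  # ['B']: cheapest, but burdens fall mostly on one group
```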

Testing shows the method generates more relevant scenarios while reducing manual analysis. Findings highlight its potential to improve transparency and support more balanced decision-making before AI systems are deployed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Experts warn YouTube AI slop harms children and demand action

Fairplay and more than 200 experts have urged YouTube to address the spread of ‘AI slop’ targeting children. The letter, accompanied by a petition, was sent to Sundar Pichai and Neal Mohan.

The signatories state that AI-generated videos harm children’s development by distorting reality and overwhelming learning processes. They also warn that such content captures attention and is being recommended to young users, including infants and toddlers.

The letter cites findings that 40% of videos recommended after shows such as Cocomelon contained AI-generated content. It also states that 21% of Shorts recommendations included similar material and that misleading science videos were shown to older children.

Fairplay and its partners propose measures, including labelling AI content and banning it from YouTube Kids. They also call for restrictions on recommendations to under-18s and for tools that allow parents to turn off such content.

The initiative was organised by Fairplay and supported by organisations and experts, including Jonathan Haidt. The group says platforms must ensure content is safe and appropriate for children.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

UK expands efforts to boost digital inclusion

More than one million people have been helped to get online through a national digital inclusion plan led by the Department for Science, Innovation and Technology. The initiative targets groups including older people, jobseekers and rural communities.

The programme has delivered over 22,000 donated devices and funded more than 80 local projects with £11.9 million. Support includes improved connectivity, access to affordable services and training to build essential digital skills.

Efforts also focus on strengthening long-term capabilities, with the government taking control of the national digital skills framework. Updates will reflect changing needs, such as online safety and the growing role of AI in everyday life.

British officials say the plan is helping people find work, manage finances and access services more easily. Further expansion is expected as authorities work with industry and charities to reach more communities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

France moves toward social media restrictions for children under 15

Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.

The proposal forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.

The draft law in France extends beyond access restrictions, proposing a digital curfew for older teenagers and expanding existing school phone bans to include high schools.

These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.

Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.

The debate is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.

The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dutch court bans harmful Grok AI-generated images

A judge in Amsterdam has ordered AI chatbot Grok and platform X to stop generating and distributing explicit deepfake images. The ruling targets so-called ‘undressing’ content and illegal material involving minors.

The case was brought by Offlimits, which argued that safeguards were failing. The court found sufficient evidence that harmful images could still be created despite existing restrictions.

The court imposed a penalty of €100,000 per day for violations, with a maximum of €10 million. Access to Grok on X must also be suspended if the system does not comply with the order.

The decision highlights growing legal pressure on AI platforms to control the misuse of generative tools. Regulators and courts are increasingly demanding stronger protections against online abuse and illegal content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

California challenges federal approach with new AI rules

The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.

An executive order signed by Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and protecting fundamental rights.

The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.

It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.
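
As a rough illustration of what labelling AI-generated media can look like at its simplest, the snippet below embeds a provenance note in a PNG file's metadata using the Pillow library. This is plain metadata tagging, assumed here purely for illustration; the robust or cryptographic watermarking schemes that policies of this kind typically envisage are considerably more involved, and the function name and metadata keys are invented for the example.

```python
from PIL import Image, PngImagePlugin  # requires the Pillow package

def label_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Embed a simple AI-provenance note in a PNG's metadata (illustrative only)."""
    img = Image.open(in_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical key chosen for this example
    meta.add_text("generator", generator)   # e.g. the model or tool that produced the image
    img.save(out_path, pnginfo=meta)

# Usage: label_as_ai_generated("image.png", "image_labelled.png", "example-model")
```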

Federal guidance has discouraged state-level intervention, framing such efforts as obstacles to technological leadership.

The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.

California's initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on balancing innovation with accountability in digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Healthcare data breach raises concerns over cloud security

A cybersecurity incident involving CareCloud has exposed vulnerabilities in the protection of sensitive medical information, following unauthorised access to patient records stored within its systems.

The breach was detected on 16 March, after attackers had been able to access electronic health records for several hours, raising concerns about potential data exposure.

The company has stated that the intrusion was contained on the same day, with systems restored and an external investigation launched.

However, uncertainty remains about whether any data were extracted and the scale of the potential impact, particularly given the company’s role in supporting tens of thousands of healthcare providers and millions of patients.

Such an incident reflects broader structural risks within digital healthcare infrastructures, where centralised storage of highly sensitive data increases the potential impact of cyberattacks.

Cloud environments, including services provided by Amazon Web Services, are increasingly integral to such systems, amplifying both efficiency and exposure.

The breach follows a pattern of escalating cyber threats targeting healthcare data, driven by its high value in criminal markets.

As investigations continue, the case underscores the need for stronger data protection measures, enhanced monitoring systems and more robust regulatory oversight to safeguard patient information.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!