EU lawmakers call for stronger copyright safeguards in AI training

The European Parliament has adopted a report urging policymakers to establish a long-term framework protecting copyrighted works used in AI training.

These recommendations aim to ensure transparency and fair treatment for creative industries as generative AI technologies expand.

Among the central proposals is the creation of a European register managed by the European Union Intellectual Property Office. The database would list copyrighted works used to train AI systems and identify creators who have chosen to exclude their content from such use.

Lawmakers in the EU are also calling for greater transparency from AI developers, including disclosure of the websites from which training data has been collected. According to the report, failing to meet transparency requirements could raise questions about compliance with existing copyright rules.

The recommendations have received mixed reactions from industry stakeholders.

Organisations representing creators argue that stronger safeguards are necessary to ensure fair remuneration and legal clarity, while technology sector groups caution that additional requirements could create complexity for companies developing AI systems.

The report is not legally binding but signals the political direction of ongoing European discussions on copyright and AI governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lawmakers urged to rethink rules on private messaging

Policymakers are being urged to rethink the regulation of private messaging platforms as disinformation campaigns increasingly spread through closed digital networks. Researchers say messaging apps now play a major role in political communication and crisis information flows.

Evidence from elections and conflicts highlights the challenge. During Brazil’s 2024 municipal elections, manipulated political content spread widely through WhatsApp groups, while authorities in Ukraine reported Telegram being used for both emergency communication and disinformation.

Experts argue that current laws often fail to address messaging platforms, such as Telegram, because regulation typically targets public social media spaces. Analysts say modern messaging services combine private chats with broadcast channels and other features that allow content to reach large audiences.

Policy specialists propose regulating specific platform features rather than entire services. Governments and technology companies are also encouraged to protect encryption while expanding transparency tools, media literacy programmes and user safeguards.

Writers publish protest book to challenge AI use of copyrighted works

Thousands of writers have joined a symbolic protest against AI companies by publishing a book that contains no traditional content.

The work, titled “Don’t Steal This Book,” lists only the names of roughly 10,000 contributors who oppose the use of their writing to train AI systems without their permission.

The initiative was organised by composer and campaigner Ed Newton-Rex, and the book was distributed during the London Book Fair. Contributors include prominent authors such as Kazuo Ishiguro, Philippa Gregory and Richard Osman, along with thousands of other writers and creative professionals.

Campaigners argue that generative AI systems are trained on vast collections of copyrighted material gathered from the internet without authorisation or compensation.

According to organisers, such practices allow AI tools to compete with the creators whose works were used to develop them.

The protest arrives as the UK Government prepares an economic assessment of potential copyright reforms related to AI. Proposals under discussion include allowing AI developers to use copyrighted material unless rights holders explicitly opt out.

Many writers and artists oppose that approach and demand stronger copyright protections. In parallel, the publishing sector is preparing a licensing initiative through Publishers’ Licensing Services to provide AI developers with legal access to books while ensuring authors receive compensation.

The dispute reflects a growing global debate over how copyright law should apply to generative AI systems that rely on massive datasets to develop chatbots and other digital tools.

Amazon launches Health AI to assist with medical queries

Amazon has launched a new AI-powered assistant, Health AI, on its website and mobile app. The tool is designed to answer health questions, explain medical records, manage prescriptions, and connect users with healthcare providers.

Health AI can also book appointments and guide users based on their health information if they grant access to their records. The feature is currently limited to the US, with a wider rollout planned in the coming weeks.

The assistant is linked with One Medical, Amazon’s healthcare service, allowing users to communicate with licensed professionals through messages, video consultations, or in-person visits. It can also send prescription renewal requests and suggest relevant health products.

Users can create an Amazon Health Profile and enable two-step authentication to start using Health AI. By allowing the AI to access their medical records, including medications, lab results, and diagnoses, users can receive more personalised responses.

Amazon emphasises that Health AI is a support tool rather than a replacement for doctors. It helps users understand health information and prepare for discussions with healthcare providers, but it does not provide independent diagnoses or treatment.

As part of an introductory offer, eligible US Prime members can receive up to five free message consultations with One Medical providers. The system runs on Amazon Bedrock and uses multiple AI agents to manage tasks, monitor interactions, and escalate to human professionals when necessary.

AI deepfake detection expands on YouTube for politicians and journalists

YouTube is expanding its likeness-detection technology designed to identify AI-generated deepfakes, extending access to a pilot group of government officials, political candidates, and journalists.

The tool allows participants to detect unauthorised AI-generated videos that simulate their faces and request removal if the content violates YouTube policies. The system builds on technology launched last year for around four million creators in the YouTube Partner Program.

Similar to YouTube’s Content ID system, which detects copyrighted material in uploaded videos, the likeness detection feature scans for AI-generated faces created with deepfake tools. Such technologies are increasingly used to spread misinformation or manipulate public perception by making prominent figures appear to say or do things they never did.

According to YouTube, the pilot programme aims to balance free expression with safeguards against AI impersonation, particularly in sensitive civic contexts.

‘This expansion is really about the integrity of the public conversation,’ said Leslie Miller, YouTube’s vice president of Government Affairs and Public Policy. ‘We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it.’

Removal requests will be assessed individually under YouTube’s privacy policy rules to determine whether the content constitutes parody or political critique, which remain protected forms of expression. Participants must verify their identity by uploading a selfie and a government-issued ID before accessing the tool. Once verified, they can review detected matches and submit removal requests for content they believe violates policy.

YouTube also said it supports the proposed NO FAKES Act in the United States, which aims to regulate the unauthorised use of an individual’s voice or visual likeness in AI-generated media. AI-generated videos on the platform are already labelled, though label placement varies depending on the topic’s sensitivity.

‘There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,’ said Amjad Hanif, YouTube’s vice president of Creator Products. The company said it plans to expand the technology over time to detect AI-generated voices and other intellectual property.

EU explores AI image generation safeguards

The Council of the European Union is examining a compromise proposal that could introduce restrictions on certain AI systems capable of generating sensitive synthetic images.

The discussions form part of ongoing adjustments to the EU AI Act.

The proposed measure would primarily address AI tools that generate illegal material, particularly content involving the exploitation of minors.

Policymakers are considering ways to prevent the development or deployment of systems that could produce such material while maintaining proportionate rules for legitimate AI applications.

Early indications suggest the proposal may not apply to images depicting people in standard clothing contexts, such as swimwear. The distinction reflects policymakers’ effort to define the scope of restrictions without imposing unnecessary limits on common image-generation uses.

The debate highlights broader regulatory challenges linked to generative AI technologies. European institutions are seeking to strengthen protections against harmful uses of AI while preserving space for innovation and lawful digital services.

Further negotiations among the EU institutions are expected as lawmakers continue refining how these provisions could fit within the broader European framework governing AI.

Moltbook founders join Meta’s AI research lab

Meta Platforms has acquired Moltbook, a social networking platform designed for AI agents. The deal brings co-founders Matt Schlicht and Ben Parr into Meta’s AI research division, the Superintelligence Labs, led by Alexandr Wang.

Financial terms of the acquisition were not disclosed, and the founders are expected to start on 16 March.

Moltbook, launched in January, allows AI-powered bots to exchange code and interact socially in a Reddit-like environment. The platform has sparked debate on AI autonomy and real-world capabilities, highlighting growing competition among tech giants for AI talent and technology.

Industry figures have offered differing views on the platform’s significance. OpenAI CEO Sam Altman called Moltbook a potential fad but acknowledged its underlying technology hints at the future of AI agents.

Meanwhile, Anthropic’s chief product officer, Mike Krieger, noted that most users are not ready to grant AI full autonomy over their systems.

The platform’s growth also highlighted security risks. Cybersecurity firm Wiz reported a vulnerability that exposed private messages, email addresses, and credentials, which was resolved after the owners were notified.

Google adds option to disable AI search in Google Photos

Users of Google Photos now have greater control over how they search their images, after Google introduced a visible toggle that restores the traditional search experience.

The update follows complaints about the AI-powered Ask Photos feature.

Ask Photos was designed to allow users to search for images using natural language queries rather than simple keywords. The tool aimed to make photo searches more flexible, enabling complex queries such as descriptions of people, events or locations captured in images.

However, some users reported that the AI system produced slower results and occasionally failed to locate images that the classic search had previously found more reliably.

Although an option to turn off the AI feature already existed, it was hidden within settings and often overlooked.

The new update introduces a visible switch directly on the search interface. Users can now easily alternate between the AI-powered search and the traditional search system depending on their preferences.

Google said improvements have also been made to the quality of common searches following user feedback. The company emphasised that search remains one of the most frequently used functions within Google Photos and that ongoing updates will continue to refine the experience.

Malicious npm package targets developers with Openclaw impersonation

Security researchers uncovered a malicious npm package impersonating an Openclaw AI installer, designed to infect developer machines with credential-stealing malware.

JFrog Security Research identified the attack in early March 2026 after the package appeared on the npm registry and was downloaded roughly 178 times.

The deceptive package mimics legitimate Openclaw tools and contains ordinary-looking JavaScript files and documentation. Hidden scripts run during installation, displaying a fake command-line interface and a fabricated system prompt that requests the user’s password.
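The install-time execution described above relies on npm's standard lifecycle scripts, which run automatically when a package is installed. As an illustration only, a hypothetical malicious package might declare a hook like this in its package.json (the package and file names here are invented for the sketch, not taken from the actual attack):

```json
{
  "name": "example-ai-installer",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node setup.js"
  }
}
```

Because `postinstall` runs without any prompt during `npm install`, a script referenced this way can execute arbitrary code on the developer's machine before the package is ever imported.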

Entering the password grants the malware elevated access and allows it to download an encrypted payload from a remote command server. Once installed, the payload deploys Ghostloader, a remote access trojan that persists on the system and communicates with attacker servers.

Researchers say the malware targets sensitive information, including saved passwords, browser cookies, SSH keys, and cryptocurrency wallet files. Developers are advised to remove the package immediately, rotate credentials, and install software only from verified sources.
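One general mitigation consistent with this advice is to stop lifecycle scripts from running automatically on install, using npm's real `ignore-scripts` setting. A minimal sketch of the relevant .npmrc entry (a general hardening measure, not guidance specific to this incident):

```ini
# .npmrc — prevent packages from running install-time lifecycle scripts
ignore-scripts=true
```

With this set, scripts such as `postinstall` must be invoked explicitly, which gives developers a chance to inspect a package before any of its code executes.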
