AI copyright warning as 5 major risks outlined in UK Lords report

Concerns about AI copyright are rising after a House of Lords committee report. The report warns that unlicensed use of creative works for AI training threatens the UK’s creative industries.

Large AI systems rely on vast amounts of human-created content, often used without clear consent or compensation. Such developments have intensified debates around AI copyright protections.

The committee argues that the problem is not the copyright framework itself, but the widespread unlicensed use of protected works and AI developers’ lack of transparency.

This lack of transparency prevents rightsholders from knowing whether their works are being used and from enforcing their rights, raising critical questions about how AI copyright rules apply in practice.

The report urges the government to reject the proposed commercial text and data mining exception, introduce stronger protections against unauthorised digital replicas, and safeguard against AI outputs that imitate a creator’s style, voice, or identity.

The committee also calls for legally enforceable transparency around AI training data, backs the development of a licensing market, and recommends standards for rights reservation, data provenance and the labelling of AI-generated content, along with support for UK-governed AI models within a robust AI copyright framework.

Baroness Keeley, committee chair, warned: ‘Our creative industries face a clear and present danger from uncredited and unremunerated use of copyrighted material to train AI models.

‘Photographers, musicians, authors, and publishers are seeing their work fed into AI models, which then produce imitations that take employment and earning opportunities from original creators.’

Keeley added: ‘AI may contribute to our future economic growth, but the UK creative industries create jobs and economic value now.

‘In 2023, the creative industries delivered £124 billion of economic value to the UK, and this is set to grow to £141 billion by 2030. Watering down the protections in our existing copyright regime to lure the biggest US tech companies is a race to the bottom that does not serve UK interests. We should not sacrifice our creative industries for the AI jam tomorrow.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU and Canada begin negotiations on a digital trade agreement

The European Commission and Canada have launched negotiations on a new Digital Trade Agreement to strengthen the rules governing cross-border digital commerce.

The initiative was announced in Toronto by the EU Trade Commissioner Maroš Šefčovič and Canadian International Trade Minister Maninder Sidhu.

The agreement will expand the digital dimension of the existing Comprehensive Economic and Trade Agreement, which has already increased trade in goods and services between the two partners.

Officials say the new negotiations aim to create clearer rules for businesses and consumers engaging in cross-border digital transactions.

Proposals under discussion include promoting paperless trade systems, recognising electronic signatures and digital contracts, and prohibiting customs duties on electronic transmissions.

The agreement between the EU and Canada will also seek to prevent protectionist practices such as unjustified data localisation requirements or forced transfers of software source code.

European officials argue that the negotiations reflect a broader effort to develop international standards for digital trade governance while preserving governments’ ability to regulate emerging challenges in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Human workers behind AI training raise new privacy concerns

AI systems rely heavily on human labour to train and improve algorithms. Images and videos collected by AI-powered devices are often reviewed and labelled by human annotators so that systems can better recognise objects, environments, and context.

This work is frequently outsourced to data annotation companies such as Sama, which provides training data services for large technology firms, including Meta Platforms. Many of these tasks are carried out by contract workers in Nairobi, Kenya, where employees review large volumes of visual data under strict confidentiality agreements.

Recent investigations have raised concerns about privacy and data governance linked to AI wearables such as the Ray-Ban Meta smart glasses, developed in partnership with EssilorLuxottica. Some device features rely on cloud processing, meaning that captured images and voice inputs may be transmitted and analysed remotely.

Workers involved in the annotation process report regularly encountering sensitive material. Footage can include scenes recorded inside private homes, bedrooms, or bathrooms, as well as images that unintentionally reveal personal or financial information.

These practices raise broader questions about transparency and cross-border data transfers, particularly when data originating in Europe or the United States is processed in other countries. They also highlight the often-hidden human role behind AI systems that are frequently presented as fully automated technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data breach hits fintech lender Figure exposing nearly 1 million accounts

Fintech lender Figure Technology Solutions has disclosed a data breach after hackers exposed personal information from nearly one million accounts. Details from 967,200 accounts, including names, email addresses, phone numbers, home addresses, and dates of birth, were compromised.

Figure Technology Solutions, founded in 2018, operates a blockchain-based lending platform built on the Provenance blockchain. The company says it has facilitated more than $22 billion in home equity transactions through partnerships with banks, credit unions, and fintech firms. Despite blockchain security claims, attackers reportedly gained access by manipulating a staff member rather than breaking the underlying technology.

‘We recently identified that an employee was socially engineered, and that allowed an actor to download a limited number of files through their account,’ a company spokesperson said. ‘We acted quickly to block the activity and retained a forensic firm to investigate what files were affected. We understand the importance of these matters and are communicating with partners and those impacted as appropriate.’

Security researchers say the data breach follows a pattern used by groups such as ShinyHunters, who impersonate IT support staff and pressure employees into revealing login credentials through convincing phishing portals.

Once attackers obtain access to a corporate single sign-on system, which allows users to log in to multiple internal applications with a single set of credentials, they can move across internal platforms, often including services linked to major providers such as Microsoft and Google.
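Figure has not described its internal systems, so the sketch below is purely illustrative: several hypothetical services all accept tokens signed by the same identity provider, which is why a single phished credential can open many internal doors at once. All names, keys, and the signing scheme are invented for illustration.

```python
import hashlib
import hmac

# Minimal illustration (all names and keys invented): every internal
# service trusts tokens signed by the same identity provider, so one
# phished login yields access to all of them.

SSO_SECRET = b"shared-idp-signing-key"  # held by the identity provider

def issue_token(username: str) -> str:
    """Identity provider signs the username once, at login."""
    sig = hmac.new(SSO_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}.{sig}"

def service_accepts(token: str) -> bool:
    """Any internal app verifies the signature; no second password."""
    username, _, sig = token.rpartition(".")
    expected = hmac.new(SSO_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# One credential, many doors: the same token is valid everywhere.
token = issue_token("jdoe")
for app in ("mail", "files", "hr-portal"):
    print(app, "->", "granted" if service_accepts(token) else "denied")
```

Real deployments rely on protocols such as SAML or OpenID Connect rather than this toy scheme, but the shared trust relationship is the same.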

Experts warn that the data breach highlights a wider cybersecurity problem: even advanced technologies such as blockchain cannot prevent attacks that target human behaviour. Criminals can use exposed personal information to launch convincing phishing campaigns or financial scams, reinforcing the need for stronger employee training and security awareness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI feature keeps Roblox chat respectful and flowing

Roblox Corporation has unveiled an AI-powered real-time chat rephrasing feature designed to maintain civility while keeping in-game conversations fluid. Previously, messages containing profanity were blocked and masked with hash symbols, disrupting gameplay.

The new system automatically rephrases inappropriate language into more respectful alternatives while preserving the original meaning. Users in the chat are notified when their messages are rephrased, ensuring transparency.
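Roblox has not published the details of its rephrasing model. Purely as an illustration of the flow described above (detect, rephrase, notify the user), the toy Python sketch below uses a small lookup table in place of the AI model; the word list is invented.

```python
# Toy sketch of the rephrase-and-notify flow described above.
# Roblox's real system uses an AI model; a small lookup table
# stands in for it here, and the word list is purely illustrative.

REPHRASE = {
    "trash": "not great",
    "stupid": "frustrating",
}

def moderate(message: str) -> tuple[str, bool]:
    """Return the (possibly rephrased) message and whether it changed."""
    rephrased = [REPHRASE.get(w.lower(), w) for w in message.split()]
    out = " ".join(rephrased)
    return out, out != message

msg, changed = moderate("this map is trash")
if changed:
    # Transparency notice shown to the sender, as the article describes.
    print("Your message was rephrased to keep chat civil.")
print(msg)  # -> this map is not great
```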

The feature supports in-game chat between age-verified users and works across all languages covered by Roblox’s automatic translation. The company consulted its Teen Council when designing the system to ensure it reflects how teens naturally communicate.

Earlier experiments with real-time warnings and notifications reduced filtered messages and abuse reports by 5–6%, indicating the approach’s effectiveness.

Roblox is also enhancing its text filters to detect complex attempts to bypass its Community Standards, such as leet-speak or symbol substitutions. Testing shows a 20-fold reduction in missed cases involving the sharing of personal information, such as social media handles or phone numbers.
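Roblox has not disclosed how its upgraded filters work. A common generic approach, sketched below under that assumption, is to normalise leet-speak substitutions and strip symbol padding before running standard checks; the character mappings, blocked term, and patterns here are illustrative only, not Roblox’s.

```python
import re

# Generic sketch (not Roblox's published method): normalise common
# leet-speak substitutions, strip separator symbols, then run the
# usual filters over the cleaned text.

LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                      "5": "s", "7": "t", "@": "a", "$": "s"})

BLOCKED_TERMS = {"telegram"}               # illustrative placeholder list
PHONE = re.compile(r"(?:\d[\s\-.]?){7,}")  # 7+ digits, optional separators

def violates(message: str) -> bool:
    cleaned = message.translate(LEET).lower()
    cleaned = re.sub(r"[^a-z0-9\s]", "", cleaned)  # drop symbol padding
    if any(term in cleaned for term in BLOCKED_TERMS):
        return True
    # Check digits on the original text, since LEET rewrites them.
    return bool(PHONE.search(message))

print(violates("add me on t3l3gr4m"))  # True: leet-speak normalised
print(violates("call 5551234567"))     # True: phone-number pattern
print(violates("nice build!"))         # False
```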

These upgrades represent a significant step toward safer, more natural in-game chat.

The company plans to continue refining these tools, aiming to minimise disruptions further while promoting civil communication. Users can expect iterative improvements and additional controls in the future to enhance chat safety and overall user experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy lawsuit targets Meta AI glasses after reports of footage review

Meta is facing a new lawsuit in the US over privacy concerns tied to its AI smart glasses.

The legal complaint follows investigative reporting indicating that contractors working for a Kenya-based subcontractor reviewed footage captured by users’ devices, including sensitive personal scenes.

The lawsuit alleges that some of the reviewed material included nudity and other intimate activities recorded by the glasses’ cameras.

According to the complaint, the footage formed part of a data review process designed to improve the AI system integrated into the wearable device.

Plaintiffs claim Meta marketed the product as prioritising user privacy, citing advertisements suggesting that the glasses were ‘designed for privacy’ and that users remained in control of their personal data.

The complaint argues that such messaging could mislead consumers if the footage were subject to human review without clear disclosure.

The legal action also names eyewear manufacturer Luxottica, which partnered with Meta to produce the glasses.

Meanwhile, the UK’s Information Commissioner’s Office has begun examining the issue after reports that face-blurring safeguards may not have consistently protected individuals captured in the recordings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Debate grows over the future of privacy

Experts gathered in London to examine how the concept of privacy has evolved over centuries. The discussions highlighted that privacy was only widely recognised as a legal and social norm after the Second World War.

Speakers noted that earlier societies often viewed privacy with suspicion or did not recognise it at all. Historical examples discussed included practices from Roman society and the French monarchy.

Modern legal protections expanded rapidly in recent decades, with privacy laws now covering about 80 percent of the global population. Scholars said the concept remains relatively new despite its central role in modern democracies.

The debate also explored whether privacy will remain a stable social value as technology evolves. Analysts said emerging technologies such as AI are reshaping debates over personal data and surveillance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU competition scrutiny pushes Meta to reopen WhatsApp AI access

Meta has announced that third-party AI chatbots will again be allowed to operate through WhatsApp in Europe, reversing restrictions introduced earlier this year.

The decision follows pressure from the European Commission, which had warned it could impose interim competition measures.

Earlier in 2026, Meta limited access to rival chatbot services on the messaging platform, prompting regulators to examine whether the move unfairly restricted competition in the rapidly expanding AI market.

WhatsApp remains one of the most widely used messaging applications across European countries, making platform access critical for emerging AI services.

Under the new arrangement, companies will be able to distribute general-purpose AI chatbots via the WhatsApp Business API for 12 months.

The change is intended to give European regulators time to complete their investigation while allowing competing AI services to operate within the platform ecosystem.

Meta has also indicated that businesses offering chatbots through WhatsApp will be required to pay fees to access the system.

The European Commission is now assessing whether these adjustments sufficiently address competition concerns surrounding the integration of AI services inside major digital platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Calls grow to strengthen New Zealand privacy law

Pressure is growing in New Zealand to strengthen the Privacy Act following several high-profile data breaches. The debate intensified after a cyberattack exposed medical records from the Manage My Health patient portal.

The breach affected about 120,000 patients and involved threats to release documents on the dark web. Another incident forced the MediMap medication platform offline after unauthorised changes were detected in patient records.

Privacy specialists argue that current enforcement powers are too weak to deter serious failures. The Privacy Act allows only limited financial penalties, with fines generally capped at NZD 10,000.

Officials are now considering reforms, including stronger penalties for privacy violations. Policymakers also warn that failure to strengthen the law could threaten the country’s EU data adequacy status.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU launches panel on child safety online and social media age rules

The European Commission has convened a new expert panel tasked with examining how children can be better protected across digital platforms, including social media, gaming environments and AI tools.

The initiative reflects growing concern across Europe regarding the psychological and safety risks associated with young users’ online behaviour.

Announced during the 2025 State of the Union Address by Commission President Ursula von der Leyen, the panel will evaluate evidence on both the opportunities and harms linked to children’s digital engagement.

Specialists from health, computer science, child rights and digital literacy will work alongside youth representatives to assess current research and policy responses.

Discussions during the first meeting centred on platform responsibility, including age-appropriate safety-by-design features, algorithmic amplification and addictive product design.

The initiative also addresses digital literacy for children, parents and educators, while considering how regulatory measures can reduce risks without undermining the benefits of online participation.

The panel’s work complements the enforcement of the Digital Services Act and related European policies designed to strengthen protections for minors online.

Among the tools under development is an EU age-verification application currently tested in several member states, intended to support privacy-preserving checks compatible with the future EU digital identity framework.

The panel is expected to deliver policy recommendations to the Commission by summer 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!