EU faces challenges in curbing digital abuse against women

Researchers and policymakers are raising concerns about how new technologies may put women at risk online, despite existing EU rules designed to ensure safer digital spaces.

AI-powered tools and smart devices have been linked to incidents of harassment and the creation of non-consensual sexualised imagery, highlighting gaps in enforcement and compliance.

The European Commission’s Gender Equality Strategy 2026–2030 notes that women are disproportionately targeted by online gender-based violence, including harassment, doxing, and AI-generated deepfakes.

Investigations into tools such as Elon Musk’s Grok AI and Meta’s Ray-Ban smart glasses have drawn attention to how digital platforms and wearable technologies can be misused, even where legal frameworks like the Digital Services Act (DSA) are in place.

Experts emphasise that while the EU’s rules offer a foundation to regulate online content, significant challenges remain. Advocates and lawmakers say enforcement gaps let harmful AI functions like nudification persist.

Commissioners have pointed to ongoing cooperation with tech companies and to upcoming guidelines that would prioritise content flagged by independent organisations, as part of efforts to address gender-based cyber violence.

Authorities are also monitoring new technologies closely. In the case of wearable devices, regulators are considering how users and bystanders are informed about recording features.

Ongoing discussions aim to strengthen compliance under existing legislation and ensure that digital spaces become safer and more accountable for all users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers stronger child protection in Digital Fairness Act

Capitals across the EU are being asked to discuss how stronger child protection measures should be incorporated into the upcoming Digital Fairness Act (DFA).

The initiative comes as policymakers attempt to address growing concerns about how online platforms expose minors to harmful content, manipulative design practices, and unsafe digital environments.

According to a document circulated during Cyprus’s Council presidency of the European Union, member states are expected to debate which concrete safeguards should be introduced as part of the broader consumer protection framework.

Officials are exploring whether new rules should require platforms to adopt stricter safeguards when designing digital services used by children.

The discussions are part of the European Union’s broader effort to strengthen digital governance and consumer protection across online platforms. Policymakers are increasingly focusing on how platform design, recommendation algorithms, and monetisation models may affect younger users.

The proposals could complement existing EU regulations targeting large digital platforms, while expanding protections specifically focused on minors.


AI legal advice case asks whether ChatGPT crosses legal boundaries

A newly filed lawsuit against OpenAI raises a key issue: Does allowing generative AI systems like ChatGPT to provide legal advice violate laws that bar the unauthorised practice of law (UPL)? UPL means providing legal services, such as drafting filings or giving advice, without the required legal qualifications or a state licence.

The case claims an individual used ChatGPT to prepare legal filings in a dispute with Nippon Life Insurance, prompting the company to argue OpenAI should be held responsible for the outcome.

The lawsuit claims ChatGPT helped the user challenge a settled legal dispute, forcing the insurer to spend additional time and resources responding to filings produced with ChatGPT. The claim alleges tortious interference with a contract, which is the unlawful disruption of an existing agreement between two parties by causing one of the parties to breach or alter it.

The suit also claims unauthorised practice of law and abuse of the judicial process, meaning improper use of the legal system to gain an advantage. It argues OpenAI should be liable because ChatGPT operates under its control. The dispute centres on whether AI systems should be allowed to analyse disputes and offer legal advice as a lawyer would.

Advocates argue the tools could widen access to legal advice. They could make legal support more accessible and affordable for those who cannot easily hire a lawyer. However, US legal frameworks restrict the provision of legal advice to licensed lawyers. The rules are designed to protect consumers and ensure professional accountability.

Critics argue that limiting legal advice to licensed lawyers preserves an expensive monopoly and hinders access to justice. AI-driven legal tools highlight this tension over the future of legal services.

The outcome of this lawsuit will likely hinge on whether AI-generated responses constitute intentional legal advice and if OpenAI can be held liable for such outputs. Even if it fails, the case foregrounds the broader debate about granting generative AI a legitimate role in legal guidance.


AI copyright warning as 5 major risks outlined in UK Lords report

Concerns about AI copyright are rising after a House of Lords committee report. The report warns that unlicensed use of creative works for AI training threatens the UK’s creative industries.

Large AI systems rely on vast amounts of human-created content, often used without clear consent or compensation. Such developments have intensified debates around AI copyright protections.

The committee argues that the key issues are not the copyright framework itself, but the widespread unlicensed use of protected works and AI developers’ lack of transparency.

The lack of clarity prevents rightsholders from knowing whether their works are being used or from enforcing their rights, raising critical questions about the practical application of AI copyright rules.

The report urges the government to reject the proposed commercial text and data mining exception, introduce stronger protections against unauthorised digital replicas, and safeguard against AI outputs that imitate a creator’s style, voice, or identity.

The committee also calls for legally mandated transparency around AI training data, backs the development of a licensing market, and recommends standards for rights reservation, data provenance, and the labelling of AI-generated content, along with support for UK-governed AI models within a robust AI copyright framework.

Baroness Keeley, committee chair, warned: ‘Our creative industries face a clear and present danger from uncredited and unremunerated use of copyrighted material to train AI models.

‘Photographers, musicians, authors, and publishers are seeing their work fed into AI models, which then produce imitations that take employment and earning opportunities from original creators.’

Keeley added: ‘AI may contribute to our future economic growth, but the UK creative industries create jobs and economic value now.

‘In 2023, the creative industries delivered £124 billion of economic value to the UK, and this is set to grow to £141 billion by 2030. Watering down the protections in our existing copyright regime to lure the biggest US tech companies is a race to the bottom that does not serve UK interests. We should not sacrifice our creative industries for the AI jam tomorrow.’


EU and Canada begin negotiations on a digital trade agreement

The European Commission and Canada have launched negotiations on a new Digital Trade Agreement to strengthen the rules governing cross-border digital commerce.

The initiative was announced in Toronto by EU Trade Commissioner Maroš Šefčovič and Canadian International Trade Minister Maninder Sidhu.

The agreement will expand the digital dimension of the existing Comprehensive Economic and Trade Agreement, which has already increased trade in goods and services between the two partners.

Officials say the new negotiations aim to create clearer rules for businesses and consumers engaging in cross-border digital transactions.

Proposals under discussion include promoting paperless trade systems, recognising electronic signatures and digital contracts, and prohibiting customs duties on electronic transmissions.

The agreement between the EU and Canada will also seek to prevent protectionist practices such as unjustified data localisation requirements or forced transfers of software source code.

European officials argue that the negotiations reflect a broader effort to develop international standards for digital trade governance while preserving governments’ ability to regulate emerging challenges in the digital economy.


Anthropic job losses study finds no evidence of AI-driven unemployment

A new Anthropic report finds AI has not yet caused significant job losses, introducing ‘observed exposure’ to measure actual workplace AI use.

Researchers combined language model capabilities with workplace data to identify occupations at risk of disruption. The study’s main finding is that although AI can perform many tasks, its actual adoption remains much lower across most industries.

Even in highly digital professions, only a fraction of potential automation results from AI use. For instance, computer and mathematics occupations rank among the most AI-exposed groups. Despite AI’s capability to assist with many tasks, it currently covers only about 33% of them in these fields.

Another key finding is that, across the broader economy, many roles experience little or no impact from AI. About 30% of workers are in jobs such as cooking, bartending, mechanics, and lifeguarding, where physical tasks dominate and measured AI exposure is almost zero.

The report also finds no clear evidence that AI adoption has increased unemployment or caused a spike in job losses since generative AI tools began spreading widely in 2022. Rather than triggering sudden job losses, researchers suggest labour-market effects emerge gradually, through slower hiring, shifting skill requirements, and changes in job composition.


AI chips exports face tighter US oversight under new proposal

Washington is considering rules that would require US government approval for overseas purchases of AI chips, tightening control over the global semiconductor supply chain. Draft proposals would make foreign buyers seek Department of Commerce authorisation before acquiring AI chips from US suppliers.

Under the draft, scrutiny would vary by order size, giving US authorities more oversight of international demand for advanced processors. The proposed rules could significantly expand oversight of leading semiconductor manufacturers such as NVIDIA and AMD, whose AI chips underpin many advanced AI systems.

The new approach to regulating exports of AI chips marks a shift toward a more interventionist strategy. Previously, during the Biden administration, an AI diffusion regulation was finalised to control the global spread of AI technology. Yet, before this rule could take effect, the current administration scrapped it. Building on these developments, the current proposed rules represent a new chapter in US AI export policy.

A US Department of Commerce spokesperson said the agency remains committed to ‘promoting secure exports of the American tech stack,’ but rejected claims that the government is reviving the earlier diffusion framework, calling it ‘burdensome, overreaching, and disastrous.’

Meanwhile, critics warn that tighter controls could have unintended effects. Restrictions on AI chip exports may drive international buyers to non-US suppliers, potentially weakening US leadership in advanced semiconductor technology as global AI hardware competition intensifies.


AI exposure highlights jobs most at risk

A new study introduces observed exposure, a measure that combines theoretical AI capability with real-world use to estimate which jobs are most susceptible to automation. Tasks that LLMs are capable of performing and that are actively automated in workplaces receive higher exposure scores.
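The article does not give the study’s actual formula, but the idea of combining capability with real-world use can be sketched roughly as follows. This is an illustrative assumption only: the function name, the per-task boolean representation, and the example occupations are all hypothetical, not taken from the study.

```python
def observed_exposure(tasks):
    """Illustrative sketch of an 'observed exposure' score.

    Assumes each task is a (capable, used) pair of booleans:
    capable = an LLM could perform the task in principle,
    used    = the task is actually being automated in practice.
    The score is the share of tasks where both hold, so theoretical
    capability alone does not raise exposure without real adoption.
    """
    if not tasks:
        return 0.0
    hits = sum(1 for capable, used in tasks if capable and used)
    return hits / len(tasks)

# Hypothetical occupations, each a list of (capable, used) task pairs.
programmer = [(True, True), (True, True), (True, False), (False, False)]
lifeguard = [(False, False), (False, False), (True, False)]

print(observed_exposure(programmer))  # 0.5: capable and adopted on half the tasks
print(observed_exposure(lifeguard))   # 0.0: physical tasks, no adoption
```

The second example mirrors the report’s observation that physically dominated roles such as lifeguarding show near-zero measured exposure even where some tasks are theoretically automatable.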

Computer programmers, customer service representatives, and financial analysts rank among the most exposed occupations.

The analysis finds that AI is far from reaching its full potential, with many tasks still beyond current capabilities. Occupations with higher observed exposure tend to grow more slowly, and workers in these roles are more likely to be older, female, highly educated, and earn higher wages.

Despite concerns, no systematic rise in unemployment has been detected among highly exposed workers since late 2022.

Early evidence suggests that the hiring of younger workers aged 22-25 may be slowing in highly exposed occupations. While these effects are small, they may indicate initial labour market adjustments as AI tools become more integrated into workplace tasks.

Researchers emphasise that observed exposure provides a framework for tracking AI’s economic impact over time, helping policymakers and businesses identify potential vulnerabilities.

The study underscores the gap between AI’s theoretical capabilities and actual usage, highlighting the importance of monitoring adoption patterns. The framework uses task automation and job data to track AI’s impact on the workforce.


Privacy lawsuit targets Meta AI glasses after reports of footage review

Meta is facing a new lawsuit in the US over privacy concerns tied to its AI smart glasses.

The legal complaint follows investigative reporting indicating that contractors working for a Kenya-based subcontractor reviewed footage captured by users’ devices, including sensitive personal scenes.

The lawsuit alleges that some of the reviewed material included nudity and other intimate activities recorded by the glasses’ cameras.

According to the complaint, the footage formed part of a data review process designed to improve the AI system integrated into the wearable device.

Plaintiffs claim Meta marketed the product as prioritising user privacy, citing advertisements suggesting that the glasses were ‘designed for privacy’ and that users remained in control of their personal data.

The complaint argues that such messaging could mislead consumers if the footage were subject to human review without clear disclosure.

The legal action also names eyewear manufacturer Luxottica, which partnered with Meta to produce the glasses.

Meanwhile, the UK’s Information Commissioner’s Office has begun examining the issue after reports that face-blurring safeguards may not have consistently protected individuals captured in the recordings.


EU competition scrutiny pushes Meta to reopen WhatsApp AI access

Meta has announced that third-party AI chatbots will again be allowed to operate through WhatsApp in Europe, reversing restrictions introduced earlier this year.

The decision follows pressure from the European Commission, which had warned it could impose interim competition measures.

Earlier in 2026, Meta limited access to rival chatbot services on the messaging platform, prompting regulators to examine whether the move unfairly restricted competition in the rapidly expanding AI market.

WhatsApp remains one of the most widely used messaging applications across European countries, making platform access critical for emerging AI services.

Under the new arrangement, companies will be able to distribute general-purpose AI chatbots via the WhatsApp Business API for 12 months.

The change is intended to give European regulators time to complete their investigation while allowing competing AI services to operate within the platform ecosystem.

Meta has also indicated that businesses offering chatbots through WhatsApp will be required to pay fees to access the system.

The European Commission is now assessing whether these adjustments sufficiently address competition concerns surrounding the integration of AI services inside major digital platforms.
