AI fuels online abuse of women in public life

Generative AI is increasingly being weaponised to harass women in public roles, according to a new report commissioned by UN Women. Journalists, activists, and human rights defenders face AI-assisted abuse that endangers personal safety and democratic freedoms.

The study surveyed 641 women from 119 countries and found that nearly one in four of those experiencing online violence reported AI-generated or amplified abuse.

Writers, communicators, and influencers reported the highest exposure, with human rights defenders and journalists also at significant risk. Rapidly developing AI tools, including deepfakes, facilitate the creation of harmful content that spreads quickly on social media.

Online attacks often escalate into offline harm, with 41% of women linking online abuse to physical harassment, stalking, or intimidation. Female journalists are particularly affected, with offline attacks more than doubling over five years.

Experts warn that such violence threatens freedom of expression and democratic processes, particularly in authoritarian contexts.

Researchers call for urgent legal frameworks, platform accountability, and technological safeguards to prevent AI-assisted attacks on women. They advocate for human rights-focused AI design and stronger support systems to protect women in public life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK plans ban on deepfake AI nudification apps

Britain plans to ban AI-nudification apps that digitally remove clothing from images. Creating or supplying these tools would become illegal under new proposals.

The offence would build on existing UK laws covering non-consensual sexual deepfakes and intimate image abuse. Technology Secretary Liz Kendall said developers and distributors would face harsh penalties.

Experts warn that nudification apps cause serious harm, mainly when used to create child sexual abuse material. Children’s Commissioner Dame Rachel de Souza has called for a total ban on the technology.

Child protection charities welcomed the move but want more decisive action from tech firms. The government said it would work with companies to stop children from creating or sharing nude images.

UK launches taskforce to boost women in tech

The UK government has formed a Women in Tech taskforce to help more women enter, remain in, and lead the technology sector. Technology Secretary Liz Kendall will guide the group alongside industry figures determined to narrow long-standing representation gaps highlighted by recent BCS data.

Members include Anne-Marie Imafidon, Allison Kirkby and Francesca Carlesi, who will advise ministers on boosting diversity and supporting economic growth. Leaders stress that better representation enables more inclusive decision-making and encourages technology built with wider perspectives in mind.

The taskforce plans to address barriers affecting women’s progression, ranging from career access to investment opportunities. Organisations such as techUK and the Royal Academy of Engineering argue that gender imbalance limits innovation, particularly as the UK pursues ambitious AI goals.

UK officials expect working groups to develop proposals over the coming months, focusing on practical steps that broaden the talent pool. Advocates say the initiative arrives at a crucial moment as emerging technologies reshape employment and demand more inclusive leadership.

EU faces new battles over digital rights

EU policy debates intensified after Denmark abandoned plans for mandatory mass scanning in the draft Child Sexual Abuse Regulation. Advocates welcomed the shift yet warned that new age checks and potential app bans still threaten privacy.

France and the UK advanced consultations on good practice guidelines for cyber intrusion firms, seeking more explicit rules for industry responsibility. Civil society groups also marked two years of the Digital Services Act by reflecting on enforcement experience and future challenges.

Campaigners highlighted rising concerns about tech-facilitated gender violence during the 16 Days initiative. The Centre for Democracy and Technology launched fresh resources stressing encryption protection, effective remedies and more decisive action against gendered misinformation.

CDT Europe also criticised the Commission’s digital omnibus package for weakening safeguards under several laws, including the AI Act. The group urged firm enforcement of existing frameworks while exploring better redress options for AI-related harms under EU legislation.

Australia strengthens parent support for new social media age rules

Yesterday, Australia entered a new phase of its online safety framework after the introduction of the Social Media Minimum Age policy.

eSafety has established a new Parent Advisory Group to support families as the country transitions to enhanced safeguards for young people. The group held its first meeting, with the Commissioner underlining the need for practical and accessible guidance for carers.

The initiative brings together twelve organisations representing a broad cross-section of communities in Australia, including First Nations families, culturally diverse groups, parents of children with disability and households in regional areas.

Their role is to help eSafety refine its approach, so parents can navigate social platforms with greater confidence, rather than feeling unsupported during rapid regulatory change.

The group will advise on parent engagement, offer evidence-informed insights, and test updated resources such as the redeveloped Online Safety Parent Guide.

Their advice will aim to ensure materials remain relevant, inclusive and able to reach priority communities that often miss out on official communications.

Members will serve voluntarily until June 2026 and will work with eSafety to improve distribution networks and strengthen the national conversation on digital literacy. Their collective expertise is expected to shape guidance that reflects real family experiences instead of abstract policy expectations.

AI and anonymity intensify online violence against women

Digital violence against women is rising sharply, fuelled by AI, online anonymity, and weak legal protections, leaving millions exposed.

UN Women warns that abuse on digital platforms often spills into real life, threatening women’s safety, livelihoods, and ability to participate freely in public life.

Public figures, journalists, and activists are increasingly targeted with deepfakes, coordinated harassment campaigns, and gendered disinformation designed to silence and intimidate.

One in four women journalists report receiving online death threats, highlighting the urgent scale and severity of the problem.

Experts call for stronger laws, safer digital platforms, and more women in technology to address AI-driven abuse effectively. Investments in education, digital literacy, and culture-change programmes are also vital to challenge toxic online communities and ensure digital spaces promote equality rather than harm.

AI use rises among Portuguese youth

A recent survey reveals that 38.7% of Portuguese individuals aged 16 to 74 used AI tools in the three months preceding the interview, primarily for personal purposes. Usage is particularly high among 16 to 24-year-olds (76.5%) and students (81.5%).

Internet access remains widespread, with 89.5% of residents going online recently. Nearly half (49.6%) placed orders online, primarily for clothing, footwear, and fashion accessories, while 74.2% accessed public service websites, often using a Citizen Card or Digital Mobile Key for authentication.

Digital skills are growing, with 59.2% of the population reaching basic or above basic levels. Young adults and tertiary-educated individuals show the highest digital proficiency, at 83.4% and 88.4% respectively.

Household internet penetration stands at 90.9%, predominantly via fixed connections.

Concerns about online safety are on the rise, as 45.2% of internet users reported encountering aggressive or discriminatory content, up from 35.5% in 2023. Reported issues include discrimination based on nationality, politics, and sexual identity.

The future of EU data protection under the Omnibus Package

Introduction and background information

The Commission claims that the Omnibus Package aims to simplify certain European Union legislation to strengthen the Union’s long-term competitiveness. A total of six omnibus packages have been announced.

The latest (no. 4) targets small mid-caps and digitalisation. Package no. 4 covers data legislation, cookies and tracking technologies (i.e. the General Data Protection Regulation (GDPR) and ePrivacy Directive (ePD)), as well as cybersecurity incident reporting and adjustments to the Artificial Intelligence Act (AIA).

That ‘simplification’ is part of a broader agenda to appease business, industry and governments who argue that the EU has too much red tape. In her September 2025 speech to German economic and business associations, Ursula von der Leyen sided with industry and stated that simplification is ‘the only way to remain competitive’.

As for why these particular laws were selected, the rationale is unclear. One stated motivation for including the GDPR is its mention in Mario Draghi’s 2024 report on ‘The Future of European Competitiveness’.

Draghi, the former President of the European Central Bank, focused on innovation in advanced technologies, decarbonisation and competitiveness, as well as security. Yet, the report does not outline any concrete way in which the GDPR allegedly reduces competitiveness or requires revision.

The GDPR appears only twice in the report. First, as a brief reference to regulatory fragmentation affecting the reuse of sensitive health data across Member States (MS).

Second, in the concluding remarks, it is claimed that ‘the GDPR in particular has been implemented with a large degree of fragmentation which undermines the EU’s digital goals’. There is, however, no explanation of this ‘large fragmentation’, no supporting evidence, and no dedicated section on the GDPR, whose first mention is buried in the R&I (research and innovation) context.

It is therefore unclear what legal or analytical basis the Commission relies on to justify including the GDPR in this simplification exercise.

The current debate

The debate around this Omnibus has two main sides: a privacy-forward camp and a competitiveness/SME camp. The two need not be mutually exclusive, but civil society warns that ‘simplification’ risks eroding privacy protection. Privacy advocates across civil society expressed strong concern and opposition to simplification in their responses to the European Commission’s recent call for evidence.

Industry positions vary in tone and ambition. For example, CrowdStrike calls for greater legal certainty under the Cybersecurity Act, such as making recital 55 binding rather than merely guiding and introducing a one-stop-shop mechanism for incident reporting.

Meta, by contrast, urges the Commission to go beyond ‘easing administrative burdens’, calling for a pause in AI Act enforcement and a sweeping reform of the EU data protection law. On the civil society side, Access Now argues that fundamental rights protections are at stake.

It warns that any reduction in consent prompts could allow tracking technologies to operate without users ever being given a real opportunity to refuse. A more balanced, yet cautious line can be found in the EDPB and EDPS joint opinion regarding easing records of processing activities for SMEs.

Similar to the industry, they support reducing administrative burdens, but with the caveat that amendments should not compromise the protection of fundamental rights, echoing key concerns of civil society.

Regarding Member State support, Estonia, France, Austria and Slovenia are firmly against any reopening of the GDPR. By contrast, the Czech Republic, Finland and Poland propose targeted amendments while Germany proposes a more systematic reopening of the GDPR.

Individual Members of the European Parliament have also come out in favour of reopening, notably Aura Salla, a Finnish centre-right MEP who previously headed Meta’s Brussels lobbying office.

Given these varied opinions, it cannot yet be said what the final version of the Omnibus will look like. Yet a leaked draft of the GDPR’s potential modifications suggests otherwise: on examination, it is hard to dispute that the views of less privacy-friendly entities have served as a strong guiding path.

Leaked draft document main changes

The leaked draft introduces several core changes.

Those changes include a new definition of personal and sensitive data, the use of legitimate interest (LI) for AI processing, an intertwining of the ePrivacy Directive (ePD) and GDPR, data breach reforms, a centralised data protection impact assessment (DPIA) whitelist/blacklist, and access rights being conditional on motive for use.

A new definition of personal data

The draft redefines personal data so that ‘information is not personal data for everyone merely because another entity can identify that natural person’. That directly contradicts established EU case law, which holds that if an entity can, with reasonable means, identify a natural person, then the information is personal data, regardless of who else can identify that person.

A new definition of sensitive data

Under current rules, inferred information can be sensitive personal data. If a political opinion is inferred from browsing history, that inference is protected.

The draft would narrow this by limiting sensitive data to information that ‘directly reveals’ special categories (political views, health, religion, sexual orientation, race/ethnicity, trade union membership). That would remove protection from data derived through profiling and inference.

Detected patterns, such as visits to a health clinic or political website, would no longer be treated as sensitive, and only explicit statements similar to ‘I support the EPP’ or ‘I am Muslim’ would remain covered.
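The practical effect of this narrowing can be illustrated with a toy sketch. The function names and example records below are hypothetical simplifications for illustration only, not a legal test: under current rules both directly stated and inferred special-category data are protected, while the leaked draft would cover only data that ‘directly reveals’ a special category.

```python
# Toy illustration of the draft's narrowed notion of sensitive data.
# Categories and examples are hypothetical and heavily simplified;
# a real legal assessment is far more nuanced.

def is_sensitive_current(item: dict) -> bool:
    """Current rules: data is sensitive whether it directly reveals
    a special category or merely allows it to be inferred."""
    return item["reveals_directly"] or item["inferred_special_category"]

def is_sensitive_draft(item: dict) -> bool:
    """Leaked draft: only data that 'directly reveals' a special
    category would remain sensitive."""
    return item["reveals_directly"]

examples = [
    {"desc": "post stating 'I support the EPP'",
     "reveals_directly": True, "inferred_special_category": False},
    {"desc": "browsing history implying a political opinion",
     "reveals_directly": False, "inferred_special_category": True},
]

for ex in examples:
    print(ex["desc"],
          "| current:", is_sensitive_current(ex),
          "| draft:", is_sensitive_draft(ex))
```

The second example shows the gap: an inferred political opinion is protected today but would fall outside the draft’s definition.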

Intertwining article 5(3) ePD and the GDPR

Article 5(3) ePD is effectively copied into the GDPR as a new Article 88a. Article 88a would allow the processing of personal data ‘on or from’ terminal equipment where necessary for transmission, service provision, creating aggregated information (e.g. statistics), or for security purposes, alongside the existing legal bases in Articles 6(1) and 9(2) of the GDPR.

That generates confusion about how these legal bases interact, especially when combined with AI processing under LI. Would this mean that processing personal data ‘on or from’ terminal equipment may be allowed when it is carried out by AI?

The scope is widened. The original ePD covered ‘storing of information, or gaining access to information already stored, in the terminal equipment’. The draft instead regulates any processing of personal data ‘on or from’ terminal equipment. That significantly expands the ePD’s reach and would force controllers to reassess and potentially adapt a broad range of existing operations.

LI for AI personal data processing

A new Article 88c GDPR, ‘Processing in the context of the development and operation of AI’, would allow controllers to rely on LI to process personal data for AI processing. That move would largely sideline data subject control. Businesses could train AI systems on individuals’ images, voices or creations without obtaining consent.

A centralised data breach portal, deadline extension and change in threshold reporting

The draft introduces three main changes to data breach reporting.

  • Extending the notification deadline from 72 to 96 hours, giving privacy teams more time to investigate and report.
  • A single EU-level reporting portal, simplifying reporting for organisations active in multiple MS.
  • Raising the notification threshold from a ‘risk’ to a ‘high risk’ to the rights and freedoms of data subjects.

The first two changes are industry-friendly measures designed to streamline operations. The third is more contentious. While industry welcomes fewer reporting obligations, civil society warns that a ‘high-risk’ threshold could leave many incidents unreported. Taken together, these reforms simplify obligations, albeit at the potential cost of reducing transparency.
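The combined effect of the deadline and threshold changes can be sketched as a toy decision helper. The function names and the simplified risk labels are hypothetical illustrations; in practice, classifying a breach’s risk level is itself the hard legal question.

```python
from datetime import timedelta

# Toy sketch of the draft's breach-notification changes:
# a longer deadline (72 -> 96 hours) and a higher notification
# threshold ('risk' -> 'high risk' to data subjects' rights).

def notification_required(risk_level: str, draft_rules: bool) -> bool:
    """Whether the supervisory authority must be notified.
    risk_level is a simplified label: 'none', 'risk', or 'high risk'."""
    threshold = "high risk" if draft_rules else "risk"
    order = {"none": 0, "risk": 1, "high risk": 2}
    return order[risk_level] >= order[threshold]

def notification_deadline(draft_rules: bool) -> timedelta:
    """Deadline for notifying after becoming aware of the breach."""
    return timedelta(hours=96 if draft_rules else 72)

# A breach posing an ordinary 'risk' must be reported under current
# rules but would fall below the draft's 'high risk' threshold.
print(notification_required("risk", draft_rules=False))  # True
print(notification_required("risk", draft_rules=True))   # False
print(notification_deadline(draft_rules=True))
```

The middle case is exactly the one civil society worries about: incidents that meet today’s ‘risk’ threshold would simply go unreported under the draft.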

Centralised processing activity (PA) list requiring a DPIA

This is another welcome change, as it would clarify which PAs automatically require a DPIA and which do not. The list would be updated every three years.

However, some controllers may not find their PA on the list and assume, or argue, that no DPIA is required. The language should therefore make clear that the list is not closed.

Access requests denials

Currently, a data subject may request a copy of their data regardless of the motive. Under the draft, if a data subject exploits the right of access, for example by using that material against the controller, the controller may charge a fee or refuse the request.

That is problematic for the protection of rights as it impacts informational self-determination and weakens an important enforcement tool for individuals.

For more information, see the in-depth analysis carried out by noyb.

The Commission’s updated version

On 19 November, the Commission published its digital omnibus proposal. Most of the amendments in the leaked draft have remained. One of the measures dropped is the new definition of sensitive data, which means that inferences can still amount to sensitive data.

However, the final document keeps three key changes that erode fundamental rights protections:

  • Changing the definition of personal data to be a subjective and narrow one;
  • An intertwining of the ePD and the GDPR which also allows for processing based on aggregated and security purposes;
  • LI being relied upon as a legal basis for AI processing of personal data.

Still, positive changes remain:

  • A single entry point for EU data breach reporting. This is a welcome measure that streamlines reporting and eases some compliance obligations for EU businesses.
  • The white/blacklist of processing activities that would or would not require a DPIA. The earlier caveat about the list’s language still applies: it should be clear that the list is not closed.

Overall, these two measures are examples of simplification with concrete benefits.

The European Parliament now has the task of dissecting this proposal and debating what to keep and what to reject. Some experts have suggested that this may take at least a year given how many changes there are, but this is not certain.

We can also expect a revised version of the Commission’s proposal to be published, owing to the errors in language, numbering and article referencing that have been observed. This would not, however, entail any changes to the content.

Final remarks

Simplification in itself is a good idea, and businesses need to have enough freedom to operate without being suffocated with red tape. However, changing a cornerstone of data protection law to such an extent that it threatens fundamental rights protections is just cause for concern.

Alarms were already raised after a previous Omnibus package scrapped green due diligence obligations. We may now be witnessing a similar rollback, this time targeting digital rights.

As a result, all eyes are now on the proposal of 19 November, which could reshape not only EU privacy standards but also global data protection norms.

Meta rejects French ruling over gender bias in Facebook job ads

Meta has rejected a decision by France’s Défenseur des Droits that found its Facebook algorithm discriminates against users based on gender in job advertising. The case was brought by Global Witness and women’s rights groups Fondation des Femmes and Femmes Ingénieures, who argued that Meta’s ad system violates French anti-discrimination law.

The regulator ruled that Facebook’s system treats users differently according to gender when displaying job opportunities, amounting to indirect discrimination. It recommended Meta Ireland and Facebook France make adjustments within three months to prevent gender-based bias.

A Meta spokesperson said the company disagrees with the finding and is ‘assessing its options.’ The complainants welcomed the decision, saying it confirms that platforms are not exempt from laws prohibiting gender-based distinctions in recruitment advertising.

Lawyer Josephine Shefet, representing the groups, said the ruling marks a key precedent. ‘The decision sends a strong message to all digital platforms: they will be held accountable for such bias,’ she said.

Google removes Gemma AI model following defamation claims

Google has removed its Gemma AI model from AI Studio after US Senator Marsha Blackburn accused it of producing false sexual misconduct claims about her. The senator said Gemma fabricated an incident allegedly from her 1987 campaign, citing nonexistent news links to support the claim.

Blackburn described the AI’s response as defamatory and demanded action from Google.

The controversy follows a similar case involving conservative activist Robby Starbuck, who claims Google’s AI tools made false accusations about him. Google acknowledged that AI ‘hallucinations’ are a known issue but insisted it is working to mitigate such errors.

Blackburn argued these fabrications go beyond harmless mistakes and represent real defamation from a company-owned AI model.

Google stated that Gemma was never intended as a consumer-facing tool, noting that some non-developers misused it to ask factual questions. The company confirmed it would remove the model from AI Studio while keeping it accessible via API for developers.

The incident has reignited debates over AI bias and accountability. Blackburn highlighted what she sees as a consistent pattern of conservative figures being targeted by AI systems, amid wider political scrutiny over misinformation and AI regulation.
