Digital violence targeting women and girls is spreading across Europe, according to new research highlighting cyberstalking, surveillance and online threats as the most common reported abuses.
Digital tools have expanded opportunities for communication, yet online environments increasingly expose women to persistent harassment rather than offering safety and accountability.
Image-based abuse has grown sharply, with deepfake pornography now dominating synthetic sexual content and almost exclusively targeting women.
Algorithmic systems accelerate the circulation of misogynistic material, creating enclosed digital spaces where abuse is normalised rather than challenged. Researchers warn that automated recommendation mechanisms can quickly spread harmful narratives, particularly among younger audiences.
Recent generative technologies have further intensified concerns by enabling sexualised image manipulation with limited safeguards.
Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.
The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.
Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.
MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.
The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.
If adopted, the Parliament's position would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe's push to assert control over data use, content value and democratic safeguards.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Several major AI companies appear slow to meet EU transparency obligations, raising concerns over compliance with the AI Act.
Under the regulation, developers of large foundation models must disclose information about training data sources, allowing creators to assess whether copyrighted material has been used.
Such disclosures are intended to offer a minimal baseline of transparency, covering the use of public datasets, licensed material and scraped websites.
While open-source providers such as Hugging Face have already published detailed templates, leading commercial developers have so far provided only broad descriptions of data usage instead of specific sources.
Formal enforcement of the rules will not begin until later in the year, extending a grace period for companies that released models after August 2025.
The European Commission has indicated willingness to impose fines if necessary, although it continues to assess whether newer models fall under immediate obligations.
The issue is likely to become politically sensitive, as stricter enforcement could affect US-based technology firms and intensify transatlantic tensions over digital regulation.
Transparency under the AI Act may therefore test both regulatory resolve and international relations as implementation moves closer.
Pressure is growing on Keir Starmer after more than 60 Labour MPs called for a UK ban on social media use for under-16s, arguing that children’s online safety requires firmer regulation instead of voluntary platform measures.
The signatories span Labour’s internal divides, including senior parliamentarians and former frontbenchers, signalling broad concern over the impact of social media on young people’s well-being, education and mental health.
Supporters of the proposal point to Australia’s recently implemented ban as a model worth following, suggesting that early evidence could guide UK policy development rather than prolonged inaction.
Starmer is understood to favour a cautious approach, preferring to assess the Australian experience before endorsing legislation, as peers prepare to vote on related measures in the coming days.
California has ordered Elon Musk’s AI company xAI to stop creating and sharing non-consensual sexual deepfakes immediately. The move follows a surge in explicit AI-generated images circulating on X.
Attorney General Rob Bonta said xAI’s Grok tool enabled the manipulation of images of women and children without consent. Authorities argue that such activity breaches state decency laws and a new deepfake pornography ban.
The California investigation began after researchers found Grok users shared more non-consensual sexual imagery than users of other platforms. xAI introduced partial restrictions, though regulators said their real-world impact remains unclear.
Lawmakers say the case highlights growing risks linked to AI image tools. California officials warned companies could face significant penalties if deepfake creation and distribution continue unchecked.
A major data breach at Finnish psychotherapy provider Vastaamo exposed the private therapy records of around 33,000 patients in 2020. Hackers demanded bitcoin payments and threatened to publish deeply personal notes if victims refused to pay.
Among those affected was Meri-Tuuli Auer, who described intense fear after learning her confidential therapy details could be accessed online. Stolen records included discussions of mental health, abuse, and suicidal thoughts, causing nationwide shock.
The breach triggered the largest criminal investigation in Finnish history, prompting emergency government talks led by then prime minister Sanna Marin. Despite efforts to stop the leak, the full database had already circulated on the dark web.
Finnish courts later convicted cybercriminal Julius Kivimäki, sentencing him to more than six years in prison. Many victims say the damage remains permanent, with trust in therapy and digital health systems severely weakened.
France’s data protection regulator CNIL has fined telecom operators Free Mobile and Free a combined €42 million over a major customer data breach. The sanctions follow an October 2024 cyberattack that exposed personal data linked to 24 million subscriber contracts.
Investigators found security safeguards were inadequate, allowing attackers to access sensitive personal data, including bank account details. Weak VPN authentication and poor detection of abnormal system activity were highlighted as key failures under the GDPR.
The French regulator also ruled that affected customers were not adequately informed about the risks they faced. Notification emails lacked sufficient detail to explain potential consequences or protective steps, thereby breaching obligations to clearly communicate data breach impacts.
Free Mobile faced an additional penalty for retaining former customer data longer than permitted. Authorities ordered both companies to complete security upgrades and data clean-up measures within strict deadlines.
An artist called Sienna Rose has drawn millions of streams on Spotify, despite strong evidence suggesting she is AI-generated. Several of her jazz-influenced soul tracks have gone viral, with one surpassing five million plays.
Streaming platform Deezer says many of her songs have been flagged as AI-made by detection tools that identify technical artefacts in the audio. Other signs include an unusually high volume of releases, generic sound patterns and a complete absence of live performances or online presence.
The mystery intensified after pop star Selena Gomez briefly shared one of Rose’s tracks on social media, only for it to be removed amid growing scrutiny. Record labels linked to Rose have declined to clarify whether a human performer exists.
The case highlights mounting concern across the industry as AI music floods streaming services. Artists including Raye and Paul McCartney have urged audiences to keep valuing emotional authenticity over algorithmic output.
The European Commission plans to revise the Cybersecurity Act to expand certification schemes beyond ICT products and services. Future assessments would also cover companies’ overall risk-management posture, including governance and supply-chain practices.
Only one EU-wide scheme, the Common Criteria framework, has been formally adopted since 2019. Cloud, 5G, and digital identity certifications remain stalled due to procedural complexity and limited transparency under the current Cybersecurity Act framework.
The reforms aim to introduce clearer rules and a rolling work programme to support long-term planning. Managed security services, including incident response and penetration testing, would become eligible for EU certification.
ENISA would take on a stronger role as the central technical coordinator across member states. Additional funding and staff would be required to support its expanding mandate under the newer cybersecurity laws.
Stakeholders broadly support harmonisation to reduce administrative burden and regulatory fragmentation. The European Commission says organisational certification would assess cybersecurity maturity alongside technical product compliance.
Canada’s investment regulator has confirmed a major data breach affecting around 750,000 people after a phishing attack in August 2025.
The Canadian Investment Regulatory Organization (CIRO) said threat actors accessed and copied a limited set of investigative, compliance, and market surveillance data. Some internal systems were taken offline as a precaution, but core regulatory operations continued across the country.
CIRO reported that personal and financial information was exposed, including income details, identification records, contact information, account numbers, and financial statements collected during regulatory activities in Canada.
No passwords or PINs were compromised, and the organisation said there is no evidence that the stolen data has been misused or shared on the dark web.
Affected individuals are being offered two years of free credit monitoring and identity theft protection as CIRO continues to monitor for further malicious activity nationwide.