Digital violence targeting women and girls is spreading across Europe, according to new research highlighting cyberstalking, surveillance and online threats as the most common reported abuses.
Digital tools have expanded opportunities for communication, yet online environments increasingly expose women to persistent harassment instead of safety and accountability.
Image-based abuse has grown sharply, with deepfake pornography now dominating synthetic sexual content and almost exclusively targeting women.
Algorithmic systems accelerate the circulation of misogynistic material, creating enclosed digital spaces where abuse is normalised rather than challenged. Researchers warn that automated recommendation mechanisms can quickly spread harmful narratives, particularly among younger audiences.
Recent generative technologies have further intensified concerns by enabling sexualised image manipulation with limited safeguards.
A vulnerability in Google Calendar allowed attackers to bypass privacy controls by embedding hidden instructions in standard calendar invitations. The issue exploited how Gemini interprets natural language when analysing user schedules.
Researchers at Miggo found that malicious prompts could be placed inside event descriptions. When Gemini scanned calendar data to answer routine queries, it unknowingly processed the embedded instructions.
The exploit used indirect prompt injection, a technique in which harmful commands are hidden within legitimate content. The AI model treated the text as trusted context rather than a potential threat.
In the proof-of-concept attack, Gemini was instructed to summarise a user’s private meetings and store the information in a new calendar event. The attacker could then access the data without alerting the victim.
Google confirmed the findings and deployed a fix after responsible disclosure. The case highlights growing security risks linked to how AI systems interpret natural language inputs.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.
The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.
Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.
MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.
The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.
If adopted, the position of the Parliament would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe’s push to assert control over data use, content value and democratic safeguards.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Several major AI companies appear slow to meet EU transparency obligations, raising concerns over compliance with the AI Act.
Under the regulation, developers of large foundation models must disclose information about training data sources, allowing creators to assess whether copyrighted material has been used.
Such disclosures are intended to offer a minimal baseline of transparency, covering the use of public datasets, licensed material and scraped websites.
While open-source providers such as Hugging Face have already published detailed templates, leading commercial developers have so far provided only broad descriptions of data usage instead of specific sources.
Formal enforcement of the rules will not begin until later in the year, extending a grace period for companies that released models after August 2025.
The European Commission has indicated willingness to impose fines if necessary, although it continues to assess whether newer models fall under immediate obligations.
The issue is likely to become politically sensitive, as stricter enforcement could affect US-based technology firms and intensify transatlantic tensions over digital regulation.
Transparency under the AI Act may therefore test both regulatory resolve and international relations as implementation moves closer.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new report by Anthropic suggests fears that AI will replace jobs remain overstated, with current use showing AI supporting workers rather than eliminating roles.
Analysis of millions of anonymised conversations with the Claude assistant indicates technology is mainly used to assist with specific tasks rather than full job automation.
The research shows AI affects occupations unevenly, reshaping work depending on role and skill level. Higher-skilled tasks, particularly in software development, dominate use, while some roles automate simpler activities rather than core responsibilities.
Productivity gains remain limited when tasks grow more complex, as reliability declines and human correction becomes necessary.
Geographic differences also shape adoption. Wealthier countries tend to use AI more frequently for work and personal activities, while lower-income economies rely more heavily on AI for education. Such patterns reflect different stages of adoption instead of a uniform global transformation.
Anthropic argues that understanding how AI is used matters as much as measuring adoption rates. The report suggests future economic impact will depend on experimentation, regulation and the balance between automation and collaboration, rather than widespread job displacement.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Pressure is growing on Keir Starmer after more than 60 Labour MPs called for a UK ban on social media use for under-16s, arguing that children’s online safety requires firmer regulation instead of voluntary platform measures.
The signatories span Labour’s internal divides, including senior parliamentarians and former frontbenchers, signalling broad concern over the impact of social media on young people’s well-being, education and mental health.
Supporters of the proposal point to Australia’s recently implemented ban as a model worth following, suggesting that early evidence could guide UK policy development rather than prolonged inaction.
Starmer is understood to favour a cautious approach, preferring to assess the Australian experience before endorsing legislation, as peers prepare to vote on related measures in the coming days.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The team behind the Astro web framework is joining Cloudflare, strengthening long-term support for open-source tools used to build fast, content-driven websites.
Major brands and developers widely use Astro to create pages that load quickly by limiting the amount of JavaScript that runs during initial rendering, improving performance and search visibility.
Cloudflare said Astro will remain open source and continue to be developed independently, ensuring long-term stability for the framework and its global user community.
Astro’s creators said the move will allow faster development and broader infrastructure support, while keeping the framework available to developers regardless of hosting provider.
The company added that Astro already underpins platforms such as Webflow and Wix, and that recent updates have expanded runtime support and improved build speeds.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
New US tariffs on advanced AI chips are drawing scrutiny over their impact on global supply chains, with South Korea monitoring potential effects on its semiconductor industry.
The US administration has approved a 25 percent tariff on advanced chips that are imported into the US and then re-exported to third countries. The measure is widely seen as aimed at restricting the flow of AI accelerators to China.
The tariff thresholds are expected to cover processors such as Nvidia’s H200 and AMD’s MI325X, which rely on high-bandwidth memory supplied by Samsung Electronics and SK hynix.
Industry officials say most memory exports from South Korea to the US are used in domestic data centres, which are exempt under the proclamation, reducing direct exposure for suppliers.
South Korea’s trade ministry has launched consultations with industry leaders and US counterparts to assess risks and ensure Korean firms receive equal treatment to competitors in Taiwan, Japan and the EU.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A major data breach at Finnish psychotherapy provider Vastaamo exposed the private therapy records of around 33,000 patients in 2020. Hackers demanded bitcoin payments and threatened to publish deeply personal notes if victims refused to pay.
Among those affected was Meri-Tuuli Auer, who described intense fear after learning her confidential therapy details could be accessed online. Stolen records included discussions of mental health, abuse, and suicidal thoughts, causing nationwide shock.
The breach became the largest criminal investigation in Finland, prompting emergency government talks led by then prime minister Sanna Marin. Despite efforts to stop the leak, the full database had already circulated on the dark web.
Finnish courts later convicted cybercriminal Julius Kivimäki, sentencing him to more than six years in prison. Many victims say the damage remains permanent, with trust in therapy and digital health systems severely weakened.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
South Africa’s rate of AI implementation is roughly half that of the US, according to insights from Specno. Analysts attribute the gap to shortages in skills, weak data infrastructure and limited alignment between AI projects and core business strategy.
Despite moderate AI readiness levels, execution remains a major challenge across South African organisations. Skills shortages, insufficient workforce training and weak organisational readiness continue to prevent AI systems from moving beyond pilot stages.
Industry experts say many executives recognise the value of AI but struggle to adopt it in practice. Constraints include low IT maturity, risk aversion and organisational cultures that resist large-scale transformation.
By contrast, companies in the US are embedding AI into operations, talent development and decision-making. Analysts say South Africa must rapidly improve executive literacy, data ecosystems and practical skills to close the gap.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!