European Parliament moves to force AI companies to pay news publishers

Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.

The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.

Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.

MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.

The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.

If adopted, the Parliament's position would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe's push to assert control over data use, content value and democratic safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI firms fall short of EU transparency rules on training data

Several major AI companies appear slow to meet EU transparency obligations, raising concerns over compliance with the AI Act.

Under the regulation, developers of large foundation models must disclose information about training data sources, allowing creators to assess whether copyrighted material has been used.

Such disclosures are intended to offer a minimal baseline of transparency, covering the use of public datasets, licensed material and scraped websites.

While open-source providers such as Hugging Face have already published detailed templates, leading commercial developers have so far provided only broad descriptions of data usage instead of specific sources.

Formal enforcement of the rules will not begin until later in the year, extending a grace period for companies that released models after August 2025.

The European Commission has indicated willingness to impose fines if necessary, although it continues to assess whether newer models fall under immediate obligations.

The issue is likely to become politically sensitive, as stricter enforcement could affect US-based technology firms and intensify transatlantic tensions over digital regulation.

Transparency under the AI Act may therefore test both regulatory resolve and international relations as implementation moves closer.


Anthropic report shows AI is reshaping work instead of replacing jobs

A new report by Anthropic suggests fears that AI will replace jobs are overstated, with current usage showing AI supporting workers rather than eliminating roles.

Analysis of millions of anonymised conversations with the Claude assistant indicates the technology is mainly used to assist with specific tasks rather than to automate entire jobs.

The research shows AI affects occupations unevenly, reshaping work depending on role and skill level. Higher-skilled tasks, particularly in software development, dominate usage, while in other roles AI automates simpler activities rather than core responsibilities.

Productivity gains remain limited when tasks grow more complex, as reliability declines and human correction becomes necessary.

Geographic differences also shape adoption. Wealthier countries tend to use AI more frequently for work and personal activities, while lower-income economies rely more heavily on AI for education. Such patterns reflect different stages of adoption instead of a uniform global transformation.

Anthropic argues that understanding how AI is used matters as much as measuring adoption rates. The report suggests future economic impact will depend on experimentation, regulation and the balance between automation and collaboration, rather than widespread job displacement.


Labour MPs press Starmer to consider UK under-16s social media ban

Pressure is growing on Keir Starmer after more than 60 Labour MPs called for a UK ban on social media use for under-16s, arguing that children’s online safety requires firmer regulation instead of voluntary platform measures.

The signatories span Labour’s internal divides, including senior parliamentarians and former frontbenchers, signalling broad concern over the impact of social media on young people’s well-being, education and mental health.

Supporters of the proposal point to Australia’s recently implemented ban as a model worth following, suggesting that early evidence could guide UK policy development rather than prolonged inaction.

Starmer is understood to favour a cautious approach, preferring to assess the Australian experience before endorsing legislation, as peers prepare to vote on related measures in the coming days.


Finnish data breach exposed thousands of patients

A major data breach at Finnish psychotherapy provider Vastaamo exposed the private therapy records of around 33,000 patients in 2020. Hackers demanded bitcoin payments and threatened to publish deeply personal notes if victims refused to pay.

Among those affected was Meri-Tuuli Auer, who described intense fear after learning her confidential therapy details could be accessed online. Stolen records included discussions of mental health, abuse, and suicidal thoughts, causing nationwide shock.

The breach triggered the largest criminal investigation in Finnish history, prompting emergency government talks led by then prime minister Sanna Marin. Despite efforts to stop the leak, the full database had already circulated on the dark web.

Finnish courts later convicted cybercriminal Julius Kivimäki, sentencing him to more than six years in prison. Many victims say the damage remains permanent, with trust in therapy and digital health systems severely weakened.


Researchers report increased ransomware and hacktivist activities targeting industrial systems in 2025

Industrial technology environments experienced a higher volume of cyber incidents in 2025, alongside a reported doubling in the exploitation of industrial control system (ICS) vulnerabilities.

According to the Cyble Research & Intelligence Labs Annual Threat Landscape Report 2025, manufacturing and healthcare (both highly dependent on ICS) were the sectors most affected by ransomware. The report recorded a 37% increase in total ransomware incidents between 2024 and 2025.

The analysis shows that the increase in reported ICS vulnerabilities is partly linked to greater exploitation by threat actors targeting human–machine interfaces (HMIs) and supervisory control and data acquisition (SCADA) systems. Over the reporting period, 600 manufacturing entities and 477 healthcare organizations were affected by ransomware incidents.

In parallel, hacktivist activity targeting IT- and OT-reliant sectors, including energy, utilities, and transportation, increased in 2025. Several groups focused on ICS environments, primarily by exposing internet-accessible HMIs and other operational interfaces. Cyble further noted that 27 of the disclosed ICS vulnerabilities involved internet-exposed assets across multiple critical infrastructure sectors.

The report assessed hacktivism as increasingly coordinated across borders, with activity patterns aligning with geopolitical developments. Cyber operations linked to tensions between Israel and Iran involved 74 hacktivist groups, while India–Pakistan tensions were associated with approximately 1.5 million intrusion attempts.

Based on these observations, Cyble researchers assess that in 2026, threat actors are likely to continue focusing on exposed HMI and SCADA systems, including through virtual network computing (VNC) access, where such systems remain reachable from the internet.


Energy-efficient AI training with memristors

Scientists in China have developed an error-aware probabilistic update (EaPU) method to improve neural network training on memristor hardware. The method tackles accuracy and stability limits in analog computing.

Training inefficiency caused by noisy weight updates has slowed progress beyond inference tasks. EaPU applies probabilistic, threshold-based updates that preserve learning and sharply reduce write operations.
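The general idea of a probabilistic, threshold-based update can be illustrated with a short sketch. This is not the published EaPU algorithm, just a generic stochastic-rounding-style rule under the assumption that weight writes below a threshold are committed only with some probability; the function name and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def probabilistic_threshold_update(weights, grads, lr=0.1, threshold=0.05):
    """Commit each weight write only with probability |step| / threshold.

    Small proposed updates are usually skipped, cutting the number of
    physical write operations; committed sub-threshold writes are scaled
    up so the update remains unbiased in expectation.
    """
    step = lr * grads
    # Write probability, capped at 1 for updates at or above the threshold.
    p = np.clip(np.abs(step) / threshold, 0.0, 1.0)
    mask = rng.random(step.shape) < p
    # Scale committed writes by 1/p so E[write] equals the full step.
    committed = np.where(mask, step / np.maximum(p, 1e-12), 0.0)
    return weights - committed, int(mask.sum())
```

Updates at or above the threshold are always written; smaller ones are written rarely but at threshold magnitude, so the average trajectory matches plain gradient descent while the number of device writes falls sharply.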

Experiments and simulations show major gains in energy efficiency, accuracy and device lifespan across vision models. Results suggest broader potential for sustainable AI training using emerging memory technologies.


French regulator fines Free and Free Mobile €42 million

France’s data protection regulator CNIL has fined telecom operators Free Mobile and Free a combined €42 million over a major customer data breach. The sanctions follow an October 2024 cyberattack that exposed personal data linked to 24 million subscriber contracts.

Investigators found security safeguards were inadequate, allowing attackers to access sensitive personal data, including bank account details. Weak VPN authentication and poor detection of abnormal system activity were highlighted as key failures under the GDPR.

The French regulator also ruled that affected customers were not adequately informed about the risks they faced. Notification emails lacked sufficient detail to explain potential consequences or protective steps, thereby breaching obligations to clearly communicate data breach impacts.

Free Mobile faced an additional penalty for retaining former customer data longer than permitted. Authorities ordered both companies to complete security upgrades and data clean-up measures within strict deadlines.


AI power demand pushes nuclear energy back into focus

Rising AI-driven electricity demand is straining power grids and renewing focus on nuclear energy as a stable, low-carbon solution. Data centres powering AI systems already consume electricity at the scale of small cities, and demand is accelerating rapidly.

Global electricity consumption could rise by more than 10,000 terawatt-hours by 2035, largely driven by AI workloads. In advanced economies, data centres are expected to drive over a fifth of electricity-demand growth by 2030, outpacing many traditional industries.

Nuclear energy is increasingly positioned as a reliable backbone for this expansion, offering continuous power, high energy density, and grid stability.

Governments, technology firms, and nuclear operators are advancing new reactor projects, while long-term power agreements between tech companies and nuclear plants are becoming more common.

Alongside large reactors, interest is growing in small modular reactors designed for faster deployment near data centres. Supporters say these systems could ease grid bottlenecks and deliver dedicated power for AI, strengthening nuclear energy’s role in the digital economy.


xAI faces stricter pollution rules for Memphis data centre

US regulators have closed a loophole that allowed Elon Musk’s AI company, xAI, to operate gas-burning turbines at its Memphis data centre without full air pollution permits. The move follows concerns over emissions and local health impacts.

The US Environmental Protection Agency clarified that mobile gas turbines cannot be classified as ‘non-road engines’ to avoid Clean Air Act requirements. Companies must now obtain permits if their combined emissions exceed regulatory thresholds.

Local authorities had previously allowed the turbines to operate without public consultation or environmental review. The updated federal rule may slow xAI’s expansion plans in the Memphis area.

The Colossus data centre, opened in 2024, supports training and inference for Grok AI models and other services linked to Musk’s X platform. NVIDIA hardware is used extensively at the site.

Residents and environmental groups have raised concerns about air quality, particularly in nearby communities. Legal advocates say xAI’s future operations will be closely monitored for regulatory compliance.
