AI firms fall short of EU transparency rules on training data

Several major AI companies appear slow to meet EU transparency obligations, raising concerns over compliance with the AI Act.

Under the regulation, developers of large foundation models must disclose information about training data sources, allowing creators to assess whether copyrighted material has been used.

Such disclosures are intended to offer a minimal baseline of transparency, covering the use of public datasets, licensed material and scraped websites.

While open-source providers such as Hugging Face have already published detailed templates, leading commercial developers have so far provided only broad descriptions of data usage instead of specific sources.

Formal enforcement of the rules will not begin until later in the year, extending a grace period for companies that released models after August 2025.

The European Commission has indicated willingness to impose fines if necessary, although it continues to assess whether newer models fall under immediate obligations.

The issue is likely to become politically sensitive, as stricter enforcement could affect US-based technology firms and intensify transatlantic tensions over digital regulation.

Transparency under the AI Act may therefore test both regulatory resolve and international relations as implementation moves closer.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic report shows AI is reshaping work instead of replacing jobs

A new report by Anthropic suggests that fears of AI replacing jobs are overstated, with current usage showing AI supporting workers rather than eliminating roles.

Analysis of millions of anonymised conversations with the Claude assistant indicates the technology is mainly used to assist with specific tasks rather than to automate entire jobs.

The research shows AI affects occupations unevenly, reshaping work depending on role and skill level. Higher-skilled tasks, particularly in software development, dominate use, while in some roles AI automates simpler activities rather than core responsibilities.

Productivity gains remain limited when tasks grow more complex, as reliability declines and human correction becomes necessary.

Geographic differences also shape adoption. Wealthier countries tend to use AI more frequently for work and personal activities, while lower-income economies rely more heavily on AI for education. Such patterns reflect different stages of adoption instead of a uniform global transformation.

Anthropic argues that understanding how AI is used matters as much as measuring adoption rates. The report suggests future economic impact will depend on experimentation, regulation and the balance between automation and collaboration, rather than widespread job displacement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New IBM offering blends expert teams and AI digital workers for enterprise scale

IBM has unveiled a new consulting service designed to help organisations deploy and scale enterprise AI by pairing human experts with digital workers powered by AI.

The approach aims to address common challenges in AI adoption, such as skills gaps, governance, and integration with legacy systems, by combining domain expertise with automated AI capabilities that can execute repetitive and data-intensive tasks.

The service positions digital workers as extensions of human teams, enabling enterprises to accelerate workflows across areas such as finance, supply chain, customer service and IT operations. IBM emphasises that human specialists remain central to strategy, oversight and ethical use of AI, while digital workers support execution and scalability.

The offering includes guidance on governance frameworks, model choice, data architecture and change management to ensure responsible, secure and efficient deployment of AI technologies at scale.

IBM’s hybrid model reflects a broader industry trend toward human-AI collaboration, where AI amplifies professional capabilities while preserving human decision-making and oversight.

The company believes this hybrid approach will help organisations achieve measurable business outcomes faster than traditional AI implementations that rely solely on technology teams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea faces mounting pressure from US AI chip tariffs

New US tariffs on advanced AI chips are drawing scrutiny over their impact on global supply chains, with South Korea monitoring potential effects on its semiconductor industry.

The US administration has approved a 25 percent tariff on advanced chips that are imported into the US and then re-exported to third countries. The measure is widely seen as aimed at restricting the flow of AI accelerators to China.

The tariff thresholds are expected to cover processors such as Nvidia’s H200 and AMD’s MI325X, which rely on high-bandwidth memory supplied by Samsung Electronics and SK hynix.

Industry officials say most memory exports from South Korea to the US are used in domestic data centres, which are exempt under the proclamation, reducing direct exposure for suppliers.

South Korea’s trade ministry has launched consultations with industry leaders and US counterparts to assess risks and ensure Korean firms receive equal treatment to competitors in Taiwan, Japan and the EU.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

California moves to halt X AI deepfakes

California has ordered Elon Musk’s AI company xAI to stop creating and sharing non-consensual sexual deepfakes immediately. The move follows a surge in explicit AI-generated images circulating on X.

Attorney General Rob Bonta said xAI’s Grok tool enabled the manipulation of images of women and children without consent. Authorities argue that such activity breaches state decency laws and a new deepfake pornography ban.

The Californian investigation began after researchers found Grok users shared more non-consensual sexual imagery than users of other platforms. xAI introduced partial restrictions, though regulators said the real-world impact remains unclear.

Lawmakers say the case highlights growing risks linked to AI image tools. California officials warned companies could face significant penalties if deepfake creation and distribution continue unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Energy-efficient AI training with memristors

Scientists in China have developed an error-aware probabilistic update (EaPU) method to improve neural network training on memristor hardware. The method tackles accuracy and stability limits in analog computing.

Training inefficiency caused by noisy weight updates has slowed progress beyond inference tasks. EaPU applies probabilistic, threshold-based updates that preserve learning and sharply reduce write operations.
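A minimal sketch of such a probabilistic, threshold-based update rule, assuming a fixed device pulse size; the function name, parameters and numbers here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def probabilistic_threshold_update(w, grad, lr=0.1, pulse=0.05):
    """One training step on an analog device: each weight either
    receives a fixed-size write pulse or is left untouched, so most
    small, noisy writes are skipped entirely."""
    step = lr * grad
    # Write probability = min(1, |step| / pulse). Sub-pulse steps are
    # applied stochastically, which keeps the update unbiased in
    # expectation while sharply reducing the number of write operations.
    p = np.minimum(1.0, np.abs(step) / pulse)
    write = rng.random(step.shape) < p
    # Written weights move by exactly one pulse in the step's direction.
    return w - np.where(write, np.sign(step) * pulse, 0.0), int(write.sum())
```

With a gradient whose scaled step is 2% of the pulse size, only about 2% of weights are written per step, illustrating how thresholded, probabilistic writes trade per-step precision for device lifespan and energy savings.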

Experiments and simulations show major gains in energy efficiency, accuracy and device lifespan across vision models. Results suggest broader potential for sustainable AI training using emerging memory technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

FDA clears AI software for fetal ultrasound

BioticsAI has received FDA approval for its AI software that detects fetal abnormalities in ultrasound images. The technology aims to improve diagnostic accuracy and clinical workflows.

Founded by CEO Robhy Bustami, the company applies computer vision to enhance ultrasound image quality and automate reporting. Development focused on consistent performance across diverse patient populations.

The software helps assess image quality and anatomical completeness, and generates automated reports. Bustami emphasised the importance of reliable performance for high-risk demographics.

With regulatory approval, BioticsAI plans nationwide adoption across health systems. Additional features for fetal medicine and reproductive health are also under development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI guidance released for UK tax professionals by leading bodies

Several UK professional organisations for tax practitioners, including the Chartered Institute of Taxation (CIOT) and the Society of Trust and Estate Practitioners (STEP), have published new AI guidance for members.

The documents aim to help tax professionals understand how to adopt AI tools securely and responsibly while maintaining professional standards and compliance with legal and regulatory frameworks.

The guidance stresses that members should be aware of risks associated with AI, including data quality, bias, model limitations and the need for human oversight. It encourages firms to implement robust governance, clear policies on use, appropriate training and verification processes where outputs affect client advice or statutory obligations.

By highlighting best practices, the professional bodies seek to balance the benefits of generative AI, such as improved efficiency and research assistance, with ethical considerations and core professional responsibilities.

The guidance also points to data-protection obligations under UK law and the importance of maintaining client confidentiality when using third-party AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WordPress AI team outlines SEO shifts

Industry expectations around SEO are shifting as AI agents increasingly rely on existing search infrastructure, according to James LePage, co-lead of the WordPress AI team at Automattic.

Search discovery for AI systems continues to depend on classic signals such as links, authority and indexed content, suggesting no structural break from traditional search engines.

Publishers are therefore being encouraged to focus on semantic markup, schema and internal linking, with AI optimisation closely aligned to established long-tail search strategies.
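As an illustration of the kind of semantic markup involved, a minimal schema.org Article snippet built with Python's json module; all field values are hypothetical examples, not from any real page:

```python
import json

# A schema.org Article description of the kind publishers embed so that
# search engines and AI agents can read headline, summary and authorship
# as structured data rather than inferring them from page layout.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "description": "A one-sentence summary an AI agent can reuse.",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2025-01-01",
}
jsonld = json.dumps(article, indent=2)
# Embedded in a page as: <script type="application/ld+json"> ... </script>
```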

Future-facing content strategies prioritise clear summaries, ranked information and progressive detail, enabling AI agents to reuse and interpret material independently of traditional websites.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Why AI adoption trails in South Africa

South Africa’s rate of AI implementation is roughly half that of the US, according to insights from Specno. Analysts attribute the gap to shortages in skills, weak data infrastructure and limited alignment between AI projects and core business strategy.

Despite moderate AI readiness scores, execution remains a major challenge across South African organisations. Skills shortages, insufficient workforce training and limited organisational preparedness continue to prevent AI systems from moving beyond pilot stages.

Industry experts say many executives recognise the value of AI but struggle to adopt it in practice. Constraints include low IT maturity, risk aversion and organisational cultures that resist large-scale transformation.

By contrast, companies in the US are embedding AI into operations, talent development and decision-making. Analysts say South Africa must rapidly improve executive literacy, data ecosystems and practical skills to close the gap.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!