Indian creators embrace Adobe AI tools

Adobe says generative AI is rapidly reshaping India’s creator economy, with 97% of surveyed creators reporting a positive impact. The findings come from the company’s inaugural Creators’ Toolkit Report, which covers more than 16,000 creators worldwide.

Adoption levels in India are among the highest globally, with almost all creators reporting that AI tools are embedded in their daily workflows. Adobe’s tools are commonly used for editing, content enhancement, asset generation and idea development across video, image and social media formats.

Despite enthusiasm, concerns remain around trust and transparency. Many creators fear their work may be used to train AI models without consent, while cost, unclear training methods and inconsistent outputs also limit wider confidence.

Interest in agentic AI is also growing, with most Indian creators expressing optimism about systems that automate tasks and adapt to personal creative styles. Mobile devices continue to gain importance, with creators expecting phone output to increase further.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Forced labour data opened to the public

Exiger has launched a free online tool designed to help organisations identify links to forced labour in global supply chains. The platform, called forcedlabor.ai, was unveiled during the annual meeting of the World Economic Forum in Davos.

The tool allows users to search suppliers and companies to assess potential exposure to state-sponsored forced labour, with an initial focus on risks linked to China. Exiger says the database draws on billions of records and is powered by proprietary AI to support compliance and ethical sourcing.

US lawmakers and human rights groups have welcomed the initiative, arguing that companies face growing legal and reputational risks if their supply chains rely on forced labour. The platform highlights risks linked to US import restrictions and enforcement actions.

Exiger says making the data freely available aims to level the playing field for smaller firms with limited compliance budgets. The company argues that greater transparency can help reduce modern slavery across industries, from retail to agriculture.

European Parliament moves to force AI companies to pay news publishers

Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.

The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.

Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.

MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.

The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.

If adopted, the position of the Parliament would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe’s push to assert control over data use, content value and democratic safeguards.

Anthropic report shows AI is reshaping work instead of replacing jobs

A new report by Anthropic suggests that fears of AI replacing jobs are overstated, with current usage showing AI supporting workers rather than eliminating roles.

Analysis of millions of anonymised conversations with the Claude assistant indicates the technology is mainly used to assist with specific tasks rather than to automate jobs in full.

The research shows AI affects occupations unevenly, reshaping work depending on role and skill level. Higher-skilled tasks, particularly in software development, dominate use, while in some roles AI automates simpler activities rather than core responsibilities.

Productivity gains remain limited when tasks grow more complex, as reliability declines and human correction becomes necessary.

Geographic differences also shape adoption. Wealthier countries tend to use AI more frequently for work and personal activities, while lower-income economies rely more heavily on AI for education. Such patterns reflect different stages of adoption instead of a uniform global transformation.

Anthropic argues that understanding how AI is used matters as much as measuring adoption rates. The report suggests future economic impact will depend on experimentation, regulation and the balance between automation and collaboration, rather than widespread job displacement.

New IBM offering blends expert teams and AI digital workers for enterprise scale

IBM has unveiled a new consulting service designed to help organisations deploy and scale enterprise AI by pairing human experts with digital workers powered by AI.

The approach aims to address common challenges in AI adoption, such as skills gaps, governance, and integration with legacy systems, by combining domain expertise with automated AI capabilities that can execute repetitive and data-intensive tasks.

The service positions digital workers as extensions of human teams, enabling enterprises to accelerate workflows across areas such as finance, supply chain, customer service and IT operations. IBM emphasises that human specialists remain central to strategy, oversight and ethical use of AI, while digital workers support execution and scalability.

The offering includes guidance on governance frameworks, model choice, data architecture and change management to ensure responsible, secure and efficient deployment of AI technologies at scale.

IBM’s hybrid model reflects a broader industry trend toward human-AI collaboration, where AI amplifies professional capabilities while preserving human decision-making and oversight.

The company believes this will help organisations achieve measurable business outcomes faster than traditional AI implementations that rely solely on technology teams.

California moves to halt X AI deepfakes

California has ordered Elon Musk’s AI company xAI to stop creating and sharing non-consensual sexual deepfakes immediately. The move follows a surge in explicit AI-generated images circulating on X.

Attorney General Rob Bonta said xAI’s Grok tool enabled the manipulation of images of women and children without consent. Authorities argue that such activity breaches state decency laws and a new deepfake pornography ban.

The California investigation began after researchers found that Grok users shared more non-consensual sexual imagery than users of other platforms. xAI has introduced partial restrictions, though regulators said their real-world impact remains unclear.

Lawmakers say the case highlights growing risks linked to AI image tools. California officials warned companies could face significant penalties if deepfake creation and distribution continue unchecked.

Energy-efficient AI training with memristors

Scientists in China have developed an error-aware probabilistic update (EaPU) method to improve neural network training on memristor hardware. The method tackles the accuracy and stability limits of analogue computing.

Training inefficiency caused by noisy weight updates has slowed progress beyond inference tasks. EaPU applies probabilistic, threshold-based updates that preserve learning while sharply reducing write operations.
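
The paper’s exact algorithm is not reproduced here, but the general idea of probabilistic, threshold-based updates can be sketched in plain NumPy. This is a hypothetical illustration, not the authors’ implementation: the function name `eapu_style_update`, the threshold value and the probability rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def eapu_style_update(weights, grads, lr=0.1, threshold=0.05):
    """Sketch of a threshold-gated, probabilistic weight update.

    Small proposed steps are skipped entirely (saving a memristor
    write); larger steps are applied with a probability that grows
    with their magnitude, so only the most significant updates
    actually trigger a device write.
    """
    step = lr * grads           # proposed update for each weight
    mag = np.abs(step)
    eligible = mag >= threshold # sub-threshold updates: no write at all
    # Larger eligible steps are more likely to trigger a real write.
    prob = np.clip(mag / (mag.max() + 1e-12), 0.0, 1.0)
    apply_mask = eligible & (rng.random(weights.shape) < prob)
    new_weights = np.where(apply_mask, weights - step, weights)
    return new_weights, int(apply_mask.sum())  # weights + write count

w = np.zeros(4)
g = np.array([0.0, 0.001, 0.5, 1.0])
w_new, writes = eapu_style_update(w, g)  # at most 2 of 4 weights written
```

The point of the sketch is the write budget: the two small gradients never touch the (simulated) device, which is the mechanism the researchers credit for the energy and lifespan gains.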

Experiments and simulations show major gains in energy efficiency, accuracy and device lifespan across vision models. Results suggest broader potential for sustainable AI training using emerging memory technologies.

FDA clears AI software for fetal ultrasound

BioticsAI has received FDA approval for its AI software that detects fetal abnormalities in ultrasound images. The technology aims to improve diagnostic accuracy and clinical workflows.

Founded by CEO Robhy Bustami, the company applies computer vision to improve ultrasound image quality and automate reporting. Development focused on consistent performance across diverse patient populations.

The software helps assess image quality and anatomical completeness, and generates automated reports. Bustami emphasised the importance of reliable performance for high-risk demographics.

With regulatory approval, BioticsAI plans nationwide adoption across health systems. Additional features for fetal medicine and reproductive health are also under development.

AI guidance released for UK tax professionals by leading bodies

Several UK professional organisations for tax practitioners, including the Chartered Institute of Taxation (CIOT) and the Society of Trust and Estate Practitioners (STEP), have published new AI guidance for members.

The documents aim to help tax professionals understand how to adopt AI tools securely and responsibly while maintaining professional standards and compliance with legal and regulatory frameworks.

The guidance stresses that members should be aware of risks associated with AI, including data quality, bias, model limitations and the need for human oversight. It encourages firms to implement robust governance, clear policies on use, appropriate training and verification processes where outputs affect client advice or statutory obligations.

By highlighting best practices, the professional bodies seek to balance the benefits of generative AI, such as improved efficiency and research assistance, with ethical considerations and core professional responsibilities.

The guidance also points to data-protection obligations under UK law and the importance of maintaining client confidentiality when using third-party AI systems.

WordPress AI team outlines SEO shifts

Industry expectations around SEO are shifting as AI agents increasingly rely on existing search infrastructure, according to James LePage, co-lead of the WordPress AI team at Automattic.

Search discovery for AI systems continues to depend on classic signals such as links, authority and indexed content, suggesting no structural break from traditional search engines.

Publishers are therefore being encouraged to focus on semantic markup, schema and internal linking, with AI optimisation closely aligned to established long-tail search strategies.

Future-facing content strategies prioritise clear summaries, ranked information and progressive detail, enabling AI agents to reuse and interpret material independently of traditional websites.
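
As a concrete illustration of the semantic markup being recommended, schema.org metadata is commonly published as JSON-LD. The sketch below builds a minimal Article object in Python; all field values are placeholders and this is not a WordPress-specific API.

```python
import json

# Hypothetical schema.org Article markup of the kind publishers are
# encouraged to add; every value here is a placeholder.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "description": "A one-sentence summary that an AI agent can reuse.",
    "author": {"@type": "Person", "name": "Example Author"},
    "datePublished": "2025-01-01",
}

# Serialised, this would typically sit inside a
# <script type="application/ld+json"> tag in the page head.
json_ld = json.dumps(article_schema, indent=2)
```

Structured fields like `headline` and `description` map directly onto the "clear summaries" the strategy above calls for, which is why schema markup and AI optimisation end up aligned.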
