Indian creators embrace Adobe AI tools

Adobe says generative AI is rapidly reshaping India’s creator economy, with 97% of surveyed creators reporting a positive impact. The findings come from the company’s inaugural Creators’ Toolkit Report, which surveyed more than 16,000 creators worldwide.

Adoption levels in India are among the highest globally, with almost all creators reporting that AI tools are embedded in their daily workflows. Adobe is commonly used for editing, content enhancement, asset generation and idea development across video, image and social media formats.

Despite enthusiasm, concerns remain around trust and transparency. Many creators fear their work may be used to train AI models without consent, while cost, unclear training methods and inconsistent outputs also limit wider confidence.

Interest in agentic AI is also growing, with most Indian creators expressing optimism about systems that automate tasks and adapt to personal creative styles. Mobile devices continue to gain importance, with creators expecting phone output to increase further.

Forced labour data opened to the public

Exiger has launched a free online tool designed to help organisations identify links to forced labour in global supply chains. The platform, called forcedlabor.ai, was unveiled during the annual meeting of the World Economic Forum in Davos.

The tool allows users to search suppliers and companies to assess potential exposure to state-sponsored forced labour, with an initial focus on risks linked to China. Exiger says the database draws on billions of records and is powered by proprietary AI to support compliance and ethical sourcing.

US lawmakers and human rights groups have welcomed the initiative, arguing that companies face growing legal and reputational risks if their supply chains rely on forced labour. The platform highlights risks linked to US import restrictions and enforcement actions.

Exiger says making the data freely available aims to level the playing field for smaller firms with limited compliance budgets. The company argues that greater transparency can help reduce modern slavery across industries, from retail to agriculture.

AI firms fall short of EU transparency rules on training data

Several major AI companies appear slow to meet EU transparency obligations, raising concerns over compliance with the AI Act.

Under the regulation, developers of large foundation models must disclose information about training data sources, allowing creators to assess whether copyrighted material has been used.

Such disclosures are intended to offer a minimal baseline of transparency, covering the use of public datasets, licensed material and scraped websites.

While open-source providers such as Hugging Face have already published detailed templates, leading commercial developers have so far provided only broad descriptions of data usage instead of specific sources.

Formal enforcement of the rules will not begin until later in the year, extending a grace period for companies that released models after August 2025.

The European Commission has indicated willingness to impose fines if necessary, although it continues to assess whether newer models fall under immediate obligations.

The issue is likely to become politically sensitive, as stricter enforcement could affect US-based technology firms and intensify transatlantic tensions over digital regulation.

Transparency under the AI Act may therefore test both regulatory resolve and international relations as implementation moves closer.

South Korea faces mounting pressure from US AI chip tariffs

New US tariffs on advanced AI chips are drawing scrutiny over their impact on global supply chains, with South Korea monitoring potential effects on its semiconductor industry.

The US administration has approved a 25 percent tariff on advanced chips that are imported into the US and then re-exported to third countries. The measure is widely seen as aimed at restricting the flow of AI accelerators to China.

The tariff thresholds are expected to cover processors such as Nvidia’s H200 and AMD’s MI325X, which rely on high-bandwidth memory supplied by Samsung Electronics and SK hynix.

Industry officials say most memory exports from South Korea to the US are used in data centres inside the US, which are exempt under the proclamation, reducing direct exposure for suppliers.

South Korea’s trade ministry has launched consultations with industry leaders and US counterparts to assess risks and ensure Korean firms receive equal treatment to competitors in Taiwan, Japan and the EU.

WordPress AI team outlines SEO shifts

Industry expectations around SEO are shifting as AI agents increasingly rely on existing search infrastructure, according to James LePage, co-lead of the WordPress AI team at Automattic.

Search discovery for AI systems continues to depend on classic signals such as links, authority and indexed content, suggesting no structural break from traditional search engines.

Publishers are therefore being encouraged to focus on semantic markup, schema and internal linking, with AI optimisation closely aligned to established long-tail search strategies.
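
To make the advice concrete, the minimal sketch below generates a schema.org Article description as JSON-LD, the structured-data format most crawlers and AI agents already parse. Every value in it is a placeholder chosen for illustration, not guidance taken from LePage or Automattic.

```python
import json

# Minimal, illustrative schema.org Article markup as JSON-LD.
# All field values are placeholders; a real page would use its own
# headline, author, dates and canonical URL.
article_jsonld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline written for AI-readable reuse",
    "author": {"@type": "Person", "name": "Example Author"},
    "datePublished": "2025-01-01",
    "description": "A short, clear summary that an AI agent can lift directly.",
    "mainEntityOfPage": "https://example.com/articles/example-article",
}

# Embed the JSON-LD in a <script> tag inside the page's <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(article_jsonld, indent=2)
    + "\n</script>"
)
print(snippet)
```

The same pattern extends to other schema.org types, such as FAQPage or Product, depending on the content being marked up.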

Future-facing content strategies prioritise clear summaries, ranked information and progressive detail, enabling AI agents to reuse and interpret material independently of traditional websites.

Why AI adoption trails in South Africa

South Africa’s rate of AI implementation is roughly half that of the US, according to insights from Specno. Analysts attribute the gap to shortages in skills, weak data infrastructure and limited alignment between AI projects and core business strategy.

Despite moderate AI readiness levels, execution remains a major challenge across South African organisations. Skills shortages, insufficient workforce training and weak organisational readiness continue to prevent AI systems from moving beyond pilot stages.

Industry experts say many executives recognise the value of AI but struggle to adopt it in practice. Constraints include low IT maturity, risk aversion and organisational cultures that resist large-scale transformation.

By contrast, companies in the US are embedding AI into operations, talent development and decision-making. Analysts say South Africa must rapidly improve executive literacy, data ecosystems and practical skills to close the gap.

New Steam rules redefine when AI use must be disclosed

Steam has clarified its position on AI in video games by updating the disclosure rules developers must follow when publishing titles on the platform.

The revision arrives after months of industry debate over whether generative AI usage should be publicly declared, particularly as storefronts face growing pressure to balance transparency with practical development realities.

Under the updated policy, disclosure requirements apply exclusively to AI-generated material consumed by players.

Artwork, audio, localisation, narrative elements, marketing assets and content visible on a game’s Steam page fall within scope, while AI tools used purely during development fall outside the disclosure requirement.

Developers using code assistants, concept ideation tools or AI-enabled software features without integrating outputs into the final player experience no longer need to declare such usage.

Valve’s clarification signals a more nuanced stance than earlier guidance introduced in 2024, which drew criticism for failing to reflect how AI tools are used in modern workflows.

By formally separating player-facing content from internal efficiency tools, Steam acknowledges common industry practices without expanding disclosure obligations unnecessarily.

The update offers reassurance to developers concerned about stigma surrounding AI labels while preserving transparency for consumers.

Although enforcement may remain largely procedural, the written clarification establishes clearer expectations and reduces uncertainty as generative technologies continue to shape game production.

Smarter interconnects become essential for AI processors

AI workloads are placing unprecedented strain on system-on-chip interconnects. Designers face complexity that exceeds the limits of traditional manual engineering approaches.

Semiconductor engineers are increasingly turning to automated network-on-chip (NoC) design. Algorithms now generate interconnect topologies optimised for bandwidth, latency, power and area.
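
As a rough illustration of the trade-off these tools automate, the toy sketch below scores a handful of hypothetical candidate topologies against weighted latency, power and area targets subject to a bandwidth floor, then selects the cheapest. The numbers and weights are invented for the example; real NoC generators work on full traffic graphs and physical floorplans rather than four summary metrics.

```python
from dataclasses import dataclass

# Toy model: each candidate topology is summarised by four aggregate
# metrics. All figures below are made up purely for illustration.
@dataclass
class Topology:
    name: str
    bandwidth_gbps: float   # sustained aggregate bandwidth
    latency_ns: float       # average packet latency
    power_mw: float         # estimated interconnect power
    area_mm2: float         # silicon area of routers and links

def cost(t: Topology, w_lat=1.0, w_pow=0.5, w_area=0.5, min_bw=200.0) -> float:
    """Lower is better; candidates below the bandwidth floor are rejected."""
    if t.bandwidth_gbps < min_bw:
        return float("inf")
    return w_lat * t.latency_ns + w_pow * t.power_mw + w_area * t.area_mm2

candidates = [
    Topology("ring",        bandwidth_gbps=180, latency_ns=42, power_mw=310, area_mm2=1.1),
    Topology("2d-mesh",     bandwidth_gbps=260, latency_ns=35, power_mw=420, area_mm2=1.6),
    Topology("custom-tree", bandwidth_gbps=240, latency_ns=28, power_mw=380, area_mm2=1.4),
]

best = min(candidates, key=cost)
print(f"selected topology: {best.name} (cost={cost(best):.1f})")
```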

Physically aware automation reduces wirelengths, congestion and timing failures. Industry specialists report dramatically shorter design cycles and more predictable performance outcomes.

As AI spreads from data centres to edge devices, interconnect automation is becoming essential. The shift enables smaller teams to deliver powerful, energy efficient processors.

AI becomes the starting point for everyday online tasks

Consumers across the US are increasingly starting everyday digital tasks with AI, rather than search engines or individual apps, according to new research tracking changes in online behaviour.

Dedicated AI platforms are becoming the first place where intent is expressed, whether users are planning travel, comparing products, seeking advice on purchases or managing budgets.

Research shows more than 60% of US adults used a standalone AI platform last year, with younger generations especially likely to begin personal tasks through conversational tools rather than traditional search.

Businesses face growing pressure to adapt as AI reshapes how decisions begin, encouraging companies to rethink marketing, commerce and customer journeys around dialogue rather than clicks.

Cloudflare acquires Human Native to build a fair AI content licensing model

San Francisco-based company Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.

The move reflects growing pressure to establish clearer economic rules for how AI systems use online content.

The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.
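
For context, the simplest way a publisher can already block AI access is a robots.txt file that disallows publicly documented AI crawler user agents; the sketch below generates one. The crawler names listed (GPTBot, CCBot, Google-Extended, ClaudeBot) are real, published identifiers, but the list is illustrative, and the approach is independent of whatever controls Cloudflare and Human Native build.

```python
# Minimal sketch: a robots.txt that opts a site out of crawling by
# several publicly documented AI crawlers. The list is illustrative,
# not exhaustive, and is unrelated to Cloudflare's own tooling.
AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended", "ClaudeBot"]

def build_robots_txt(crawlers: list[str]) -> str:
    blocks = [f"User-agent: {name}\nDisallow: /" for name in crawlers]
    # Everything else remains crawlable.
    blocks.append("User-agent: *\nAllow: /")
    return "\n\n".join(blocks) + "\n"

if __name__ == "__main__":
    print(build_robots_txt(AI_CRAWLERS))
```

Compliance with robots.txt is voluntary on the crawler’s side, which is part of why paid licensing mechanisms of the kind Human Native is building are attracting interest.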

Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.

Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed and purchased by AI developers through standardised channels.

The team includes researchers and engineers with experience across AI research, design platforms and financial media.

Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.
