Chinese AI video tool unsettles Hollywood

A new AI video model developed by ByteDance has unsettled Hollywood after generating cinema-quality clips from brief text prompts. Seedance 2.0, launched in 2025, went viral for producing realistic action scenes featuring Western cinematic characters such as Spider-Man and Deadpool.

In response, major studios, including Disney and Paramount, issued cease-and-desist letters over alleged copyright infringement. Japan has also begun investigating ByteDance after AI-generated anime videos spread widely online.

Industry experts say Seedance 2.0 stands out for combining text, visuals and audio within a single system. Analysts in Singapore and Melbourne argue that Chinese AI models are now matching US competitors at the technological frontier.

As Seedance 2.0 gains traction, Beijing continues to prioritise AI and robotics in its economic strategy. The rise of tools from China has intensified debate in the US and beyond over copyright, regulation and the future of creative work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bremen trials AI-based safety system ‘AI Watch’ on city trams

The city of Bremen, Germany, has begun piloting an AI-based safety system called AI Watch on its tram fleet. The technology uses onboard cameras and computer vision models to automatically detect potential safety issues, such as passengers too close to doors, objects on the tracks, or unexpected pedestrian behaviour, and alerts tram operators in real time.
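At its core, a system like this runs a per-frame detection loop and only escalates confident hazard detections to the operator. The sketch below is a hypothetical illustration, not AI Watch's actual implementation: the hazard labels and the confidence threshold are assumptions.

```python
from dataclasses import dataclass

# Hypothetical hazard classes of the kind the pilot reportedly targets.
HAZARDS = {"passenger_near_door", "object_on_track", "unexpected_pedestrian"}

@dataclass
class Detection:
    label: str
    confidence: float

def alerts_for_frame(detections, threshold=0.8):
    """Return hazard alerts for one camera frame.

    Only detections above the confidence threshold trigger an alert,
    which keeps false alarms down while the driver stays in the loop.
    """
    return [d for d in detections
            if d.label in HAZARDS and d.confidence >= threshold]

# Example frame: one confident hazard, one low-confidence detection.
frame = [Detection("object_on_track", 0.93),
         Detection("passenger_near_door", 0.41)]
print([d.label for d in alerts_for_frame(frame)])  # → ['object_on_track']
```

Thresholding detections before alerting is one common way to trade detection coverage against the false-alarm rate that the pilot is explicitly tuning.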

The goal is to reduce accidents and enhance situational awareness without replacing human oversight.

Developed with transport and AI specialists, AI Watch integrates with vehicles’ existing sensor suites and is designed to function in real-time operational environments. During the pilot, the system has been tested under various traffic and lighting conditions to refine hazard recognition accuracy and minimise false alarms.

BSAG representatives say the AI support tool complements human judgement, helping drivers focus on decision-making rather than continuously scanning for hazards.

The initiative comes as cities explore AI applications in urban mobility, from predictive maintenance to intelligent traffic management and automated incident detection, to improve safety, efficiency and passenger experience.

Bremen’s pilot will be evaluated for scalability across additional routes and potentially other types of public transport vehicles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adoption of agentic AI slowed by data readiness and governance gaps

Agentic AI is emerging as a new stage of enterprise automation, enabling systems to reason, plan, and act across workflows. Adoption, however, remains uneven, with far fewer organisations scaling deployments beyond pilots.

Unlike traditional analytics or generative tools, agentic systems make decisions rather than simply producing insights. Without sufficient context, they struggle to align actions with real business conditions, revealing a persistent context gap.

Recent survey data highlights this disconnect. Although executives express confidence in AI ambitions, significant shares cite data readiness, infrastructure, and skills as barriers. Many identify AI as central to strategy, yet only a limited proportion tie deployments to measurable business outcomes.

Effective agentic AI depends on layered data foundations. Public data provides baseline capability, organisational data enables operational competence, and third-party context supports differentiation. Weak governance or integration can undermine autonomy at scale.

Enterprises that align data governance, enrichment, and AI oversight are more likely to scale beyond pilots. Progress depends less on model sophistication than on trusted data foundations that support transparency and measurable outcomes.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK sets 48-hour deadline for removing intimate images

The UK government plans to require technology platforms to remove intimate images shared without consent within 48 hours instead of allowing such content to remain online for days.

Through an amendment to the Crime and Policing Bill, firms that fail to comply could face fines amounting to ten percent of their global revenue or risk having their services blocked in the UK.

The move reflects ministers’ commitment to treating intimate image abuse with the same seriousness as child sexual abuse material and extremist content.

The action follows mounting concern after non-consensual sexual deepfakes produced by Grok circulated widely, prompting investigations by Ofcom and political pressure on platforms owned by Elon Musk.

The government now intends victims to report an image once instead of repeating the process across multiple services. Once flagged, the content should disappear across all platforms and be blocked automatically on future uploads through hash-matching or similar detection tools.
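The "report once, blocked everywhere" model rests on hash-matching: a reported image is reduced to a digest, and future uploads are checked against a shared blocklist. The sketch below is illustrative only; it uses an exact cryptographic digest for simplicity, whereas production systems typically use perceptual hashes that survive resizing and re-encoding, and the function names are assumptions.

```python
import hashlib

# Shared blocklist of digests of reported images (in practice, a
# database shared across platforms, not an in-memory set).
blocklist: set[str] = set()

def image_hash(data: bytes) -> str:
    # Exact-match digest for illustration; real deployments use
    # perceptual hashing so edited copies still match.
    return hashlib.sha256(data).hexdigest()

def flag(image: bytes) -> None:
    """Victim reports an image once; its hash joins the blocklist."""
    blocklist.add(image_hash(image))

def allow_upload(image: bytes) -> bool:
    """Every future upload is checked against the blocklist."""
    return image_hash(image) not in blocklist

reported = b"...reported image bytes..."
flag(reported)
print(allow_upload(reported))        # → False (blocked on re-upload)
print(allow_upload(b"other image"))  # → True
```

Because only digests are stored and compared, platforms can block known material automatically without retaining or redistributing the image itself.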

Ministers also aim to address content hosted outside the reach of the Online Safety Act by issuing guidance requiring internet providers to block access to sites that refuse to comply.

Keir Starmer, Liz Kendall and Alex Davies-Jones emphasised that no woman should be forced to pursue platform after platform to secure removal and that the online environment must offer safety and respect.

The package of reforms forms part of a broader pledge to halve violence against women and girls during the next decade.

Alongside tackling intimate image abuse, the government is legislating against nudification tools and ensuring AI chatbots fall within regulatory scope, using this agenda to reshape online safety instead of relying on voluntary compliance from large technology firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reload launches Epic to bring shared memory and structure to AI agents

Founders of the Reload platform say AI is moving from simple automation toward something closer to teamwork.

Newton Asare and Kiran Das noticed that AI agents were completing tasks normally handled by employees, which pushed them to design a system that treats digital workers as part of a company’s structure instead of disposable tools.

Their platform, Reload, offers a way for organisations to manage these agents across departments, assign responsibilities and monitor performance. The firm has secured $2.275 million in new funding led by Anthemis, with several other investors joining the round.

The shift toward agent-driven development exposed a recurring limitation. Most agents retain only short-term memory, which means they often lose context about a product or forget why a task matters.

Reload’s answer is Epic, a new product built on its platform that acts as an architect alongside coding agents. Epic defines requirements and constraints at the start of a project, then continuously preserves the shared understanding that agents need as software evolves.

Epic integrates with popular AI-assisted code editors such as Cursor and Windsurf, allowing developers to keep a consistent system memory without changing their workflow.

The tool generates key project artefacts from the outset, including data models and technical decisions, then carries them forward even when teams switch agents. It creates a single source of truth so that engineers and digital workers develop against the same structure.
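The "single source of truth" idea can be illustrated with a minimal shared context store. Everything below is a hypothetical sketch, not Reload's actual API: artefacts such as data models and technical decisions are recorded once and handed over intact to whichever agent works next.

```python
class ProjectContext:
    """Minimal sketch of a persistent, shared project memory.

    Class and method names are illustrative assumptions. The point is
    that artefacts survive when a team switches coding agents, so every
    agent develops against the same structure.
    """
    def __init__(self) -> None:
        self.artefacts: dict[str, str] = {}

    def record(self, key: str, value: str) -> None:
        self.artefacts[key] = value

    def handoff(self) -> dict[str, str]:
        # A newly attached agent receives the full accumulated
        # context rather than starting from a blank slate.
        return dict(self.artefacts)

ctx = ProjectContext()
ctx.record("data_model", "User(id, email); Order(id, user_id, total)")
ctx.record("decision", "Use PostgreSQL; no ORM in hot paths")

# Simulate switching to a different coding agent mid-project.
agent_b_view = ctx.handoff()
print(sorted(agent_b_view))  # → ['data_model', 'decision']
```

Contrast this with agents that keep only short-term memory: without an external store like this, each new agent would have to rediscover why earlier decisions were made.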

Competing systems such as LangChain and CrewAI also offer support for managing agents, but Reload argues that Epic’s ability to maintain project-level context sets it apart.

Asare and Das, who already built and sold a previous company together, plan to use the fresh capital to grow their team and expand the infrastructure needed for a future in which human workers manage AI employees instead of the other way around.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Greece positions itself as a global AI bridge

The Prime Minister of Greece, Kyriakos Mitsotakis, took part in the India AI Impact Summit in New Delhi as part of a two-day visit that highlighted the country’s ambition to deepen its presence in global technology governance.

The gathering focuses on creating a coherent international approach to AI under the theme ‘People-Planet-Progress’, with an emphasis on practical outcomes instead of abstract commitments.

Greece presents itself as a link between Europe and the Global South, seeking a larger role in debates over AI policy and geoeconomic strategy.

Mitsotakis is joined by Minister of Digital Governance Dimitris Papastergiou, underscoring Athens’ intention to strengthen partnerships that support technological development.

During the visit, Mitsotakis attended an official dinner hosted by Narendra Modi.

On Thursday, he will address the summit at Bharat Mandapam before holding a scheduled meeting with his Indian counterpart, reinforcing efforts to expand cooperation between Greece and India in emerging technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO expands multilingual learning through LearnBig

The LearnBig digital application is expanding access to learning, with UNESCO supporting educational materials in national and local languages instead of relying solely on dominant teaching languages.

The project aligns with International Mother Language Day and reflects long-standing research showing that children learn more effectively when taught in languages they understand from an early age.

The programme supports communities along the Thailand–Myanmar border, where children gain literacy and numeracy skills in both Thai and their mother tongues.

Young learners can make more substantial academic progress with this approach, which allows them to remain connected to their cultural identity rather than being pushed into unfamiliar linguistic environments. More than 2,000 digital books are available in languages such as Karen, Myanmar, and Pattani Malay.

LearnBig was developed within the ‘Mobile Literacy for Out-of-School Children’ programme, backed by partners including Microsoft, True Corporation, POSCO 1% Foundation and the Ministry of Education of Thailand.

The UNESCO initiative has reached more than 526,000 learners, with young people in Yala using tablets to access digital books, while learners in Mae Hong Son study through content presented in their local languages.

The project illustrates the potential of digital innovation to bridge linguistic, social, and geographic divides.

By supporting children who often fall outside formal education systems, LearnBig demonstrates how technology can help build a more inclusive and equitable learning environment rather than reinforcing existing barriers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Universities in India partner with OpenAI to scale AI education

OpenAI is expanding its footprint in India by partnering with leading higher-education institutions to integrate AI into teaching and research. The initiative aims to reach more than 100,000 students, faculty, and staff over the next year as India seeks to scale domestic AI skills.

Six public and private institutions, spanning engineering, management, medicine, and design, will participate in the first phase. Partners include the Indian Institute of Technology Delhi, the Indian Institute of Management Ahmedabad, and the All India Institute of Medical Sciences, New Delhi.

The programme focuses on embedding AI into core academic workflows rather than consumer experimentation. Campus-wide access to ChatGPT Edu, faculty training, and responsible-use frameworks will support applications in coding, research, analytics, and case analysis.

Two institutions will introduce OpenAI-backed certifications, while ed-tech platforms including Physics Wallah, upGrad, and HCL GUVI will extend structured AI training beyond campuses. The move coincides with broader investment by global AI firms as India hosts the AI Impact Summit in New Delhi.

With India now OpenAI’s second-largest user base after the US, the company is positioning universities as a long-term channel for adoption. The expansion reflects a wider contest over who shapes how AI is taught, governed, and embedded across one of the world’s largest education systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia partnerships bolster India’s bid to become AI hub

US chipmaker Nvidia unveiled partnerships with Indian computing and infrastructure firms at the AI Impact Summit in New Delhi, as technology companies announced fresh investments. The agreements aim to expand AI data centre capacity and bolster India’s position in the global AI race.

Larsen & Toubro said it would work with Nvidia to build what it described as India’s largest gigawatt-scale AI factory, with planned sites in Chennai and Mumbai. Nvidia is also partnering with Yotta Data Services, which plans to deploy more than 20,000 Blackwell processors as part of a $2 billion investment.

The summit has drawn dozens of world leaders and ministerial delegations to discuss AI’s economic potential and associated risks, including job displacement and misinformation. India recently rose to third place in Stanford University’s annual AI competitiveness ranking, behind only the US and China.

Other deals followed. The Adani Group pledged $100 billion by 2035 for hyperscale AI-ready data centres, while Microsoft outlined plans to invest $50 billion to expand AI adoption in developing markets. Anthropic and Infosys also agreed to collaborate on AI agents for the telecoms industry.

Indian Prime Minister Narendra Modi and leaders, including Emmanuel Macron and Luiz Inacio Lula da Silva, are expected to issue a joint statement on AI governance. Analysts caution that nonbinding declarations may shape norms, but rapid industry advances could outpace legislative safeguards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI agent autonomy rises as users gain trust in Anthropic’s Claude Code

A new study from Anthropic offers an early picture of how people allow AI agents to work independently in real conditions.

By examining millions of interactions across its public API and its coding agent Claude Code, the company explored how long agents operate without supervision and how users change their behaviour as they gain experience.

The analysis shows a sharp rise in the longest autonomous sessions, with top users permitting the agent to work for more than forty minutes instead of cutting tasks short.

Experienced users appear more comfortable letting the AI agent proceed on its own, shifting towards auto-approve instead of checking each action.

At the same time, these users interrupt more often when something seems unusual, which suggests that trust develops alongside a more refined sense of when oversight is required.

The agent also demonstrates its own form of caution by pausing to ask for clarification more frequently than humans interrupt it as tasks become more complex.
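A metric like the "longest autonomous session" described above can be derived from interaction logs by measuring the span between consecutive user interventions. The log shape below is an assumption for illustration, not Anthropic's actual schema.

```python
# Events: (minutes since session start, actor). A run of "agent"
# events bounded by "user" events is one autonomous stretch.
events = [
    (0, "user"), (1, "agent"), (5, "agent"), (12, "agent"),
    (55, "user"), (56, "agent"), (60, "agent"),
]

def longest_autonomous_minutes(events):
    """Longest span between two user interventions (hypothetical log)."""
    longest, last_user = 0, None
    for t, actor in events:
        if actor == "user":
            if last_user is not None:
                longest = max(longest, t - last_user)
            last_user = t
    # Tail: the agent may still be running after the final user touch.
    if last_user is not None and events:
        longest = max(longest, events[-1][0] - last_user)
    return longest

print(longest_autonomous_minutes(events))  # → 55
```

Tracking how this number shifts per user over time is one simple way to quantify the growing comfort with auto-approve that the study reports.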

The research identifies a broad spread of domains that rely on agents, with software engineering dominating usage but early signs of adoption emerging in healthcare, cybersecurity and finance.

Most actions remain low-risk and reversible, supported by safeguards such as restricted permissions or human involvement instead of fully automated execution. Only a tiny fraction of actions reveal irreversible consequences such as sending messages to external recipients.

Anthropic notes that real-world autonomy remains far below the potential suggested by external capability evaluations, including those by METR.

The company argues that safer deployment will depend on stronger post-deployment monitoring systems and better design for human-AI cooperation so that autonomy is managed jointly rather than granted blindly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!