China issues action plan for global AI governance and proposes global AI cooperation organisation

At the 2025 World AI Conference in Shanghai, Chinese Premier Li Qiang urged the international community to prioritise joint efforts in governing AI, pointing to the need for a global framework and set of rules widely accepted by the global community. He unveiled a proposal by the Chinese government to create a global AI cooperation organisation to foster international collaboration, innovation, and inclusivity in AI.

He added that China attaches great importance to global AI governance and has been actively promoting multilateral and bilateral cooperation, expressing a willingness to offer more ‘Chinese solutions’.

An Action Plan for AI Global Governance was also presented at the conference. The plan outlines, in its introduction, a call for ‘all stakeholders to take concrete and effective actions based on the principles of serving the public good, respecting sovereignty, development orientation, safety and controllability, equity and inclusiveness, and openness and cooperation, to jointly advance the global development and governance of AI’.

The document includes 13 points related to key areas of international AI cooperation, including promoting inclusive infrastructure development, fostering open innovation ecosystems, ensuring high-quality data supply, and advancing sustainability through green AI practices. It also calls for consensus-building around technical standards, advancing international cooperation on AI safety governance, and supporting countries – especially those in the Global South – in ‘developing AI technologies and services suited to their national conditions’.

Notably, the plan indicates China’s support for multilateralism when it comes to the governance of AI, calling for active implementation of commitments made by UN member states in the Pact for the Future and the Global Digital Compact, and expressing support for the establishment of the International AI Scientific Panel and a Global Dialogue on AI Governance (whose terms of reference are currently being negotiated by UN member states in New York).

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UBTech’s Walker S2 marks a leap towards uninterrupted robotic work

The paradigm of robotic autonomy is undergoing a profound transformation with the advent of UBTech’s new humanoid, the Walker S2. Traditionally, robots have been tethered to human assistance for power, requiring manual plugging in or lengthy recharges.

UBTech, a pioneering robotics company, is now dismantling these limitations with a groundbreaking feature in the Walker S2: the ability to swap its battery autonomously. The innovation promises to reshape the landscape of factory work and potentially many other industries, enabling near-continuous, 24/7 operation without human intervention.

The core of this advancement lies in the Walker S2’s sophisticated self-charging mechanism. When a battery begins to deplete, the robot does not power down. Instead, it intelligently navigates to a strategically placed battery swap station.

Once positioned, the robot executes a precise sequence of movements: it twists its torso, deploys built-in tools on its arms to unfasten and remove the drained battery from its back cavity, places it into an empty bay on the swap station, and then expertly retrieves a fresh, fully charged module.

The new battery is then securely plugged into one of its dual battery bays. The process is remarkably swift, taking approximately three minutes, allowing the robot to return to its tasks almost immediately.
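
For a rough sense of how such a routine might be orchestrated in software, the sketch below models the swap as an ordered sequence of steps. It is purely illustrative: the step names and per-step durations are assumptions made for this example, not details of UBTech’s actual control system; only the roughly three-minute total comes from the description above.

# Illustrative sketch only: the battery-swap routine described above as an
# ordered sequence of steps. Names and durations are assumptions, not
# UBTech's actual control software.
from enum import Enum, auto


class SwapStep(Enum):
    NAVIGATE_TO_STATION = auto()   # walk to the battery swap station
    REMOVE_DEPLETED = auto()       # unfasten and remove the drained battery
    DEPOSIT_IN_BAY = auto()        # place it into an empty bay on the station
    RETRIEVE_FRESH = auto()        # pick up a fully charged module
    INSTALL_FRESH = auto()         # plug it into the free battery bay
    RESUME_TASKS = auto()          # return to work


# Assumed per-step durations in seconds; the article only states that the
# whole procedure takes roughly three minutes.
ASSUMED_DURATIONS = {
    SwapStep.NAVIGATE_TO_STATION: 60,
    SwapStep.REMOVE_DEPLETED: 35,
    SwapStep.DEPOSIT_IN_BAY: 20,
    SwapStep.RETRIEVE_FRESH: 20,
    SwapStep.INSTALL_FRESH: 35,
    SwapStep.RESUME_TASKS: 10,
}


def run_swap_cycle() -> int:
    """Step through the swap sequence and return total elapsed seconds."""
    elapsed = 0
    for step in SwapStep:                       # Enum preserves definition order
        elapsed += ASSUMED_DURATIONS[step]
        print(f"{step.name:<22} cumulative: {elapsed:>4} s")
    return elapsed


if __name__ == "__main__":
    total = run_swap_cycle()
    print(f"Total swap time: ~{total / 60:.0f} minutes")   # ~3 minutes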

The hot-swappable system mirrors the convenience of advanced electric vehicle technology, but its application to humanoid robotics unlocks unprecedented operational efficiency. Standing at 5 feet, 3 inches (approximately 160 cm) tall and weighing 95 pounds (about 43 kg), the Walker S2 is designed to integrate seamlessly into environments built for humans.

It has two 48-volt lithium batteries, ensuring a continuous power supply during the brief swapping procedure. While one battery powers the robot’s ongoing operations, the other can be exchanged.

Each battery provides approximately two hours of operation while walking or up to four hours when the robot stands still and performs tasks. The battery swap stations are not merely power hubs; they also meticulously monitor the health of each battery.

Should a battery show signs of degradation, a technician can be alerted to replace it promptly, further optimising the robot’s longevity and performance.

UBTech claims the Walker S2 is not a mere laboratory prototype but a robust solution engineered for real-world industrial deployment. Extensive testing has been conducted in the highly demanding environments of car factories operated by major Chinese electric vehicle manufacturers, including BYD, Nio, and Zeekr.

The trials validate the robot’s ability to operate effectively in dynamic production lines. The Walker S2 incorporates advanced vision systems, allowing it to detect battery levels and identify fully charged units, indicated by a green light on the stacked battery packs.

The robot autonomously reads the visual cues, ensuring precise selection and connection via a simple USB-style connector. Furthermore, the robot features a display face, enabling it to communicate its operational status to human workers, fostering a collaborative and transparent work environment. For safety, a prominent emergency stop button is also integrated.

China’s strategic investment in robotics is a driving force behind such innovations. Shenzhen, UBTech’s home base, is a thriving hub for robotics, boasting over 1,600 companies in the sector.

The nation’s broader push towards automation, part of its ‘Made in China 2025’ strategy, is a clear statement of global competitiveness, with China betting on AI and robotics to spearhead the next manufacturing era.

The coordinated industrial policy has led to China becoming the world’s largest market for industrial robots and a significant innovator in the field. The implications of robots like the Walker S2, built for non-stop operation, extend far beyond traditional factory floors.

Their ability to manage physical tasks continuously could redefine work in various sectors. Industries such as logistics, with vast warehouses requiring constant material handling, or airports, where baggage and cargo movement is ceaseless, could benefit immensely.

Hospitals could also see these humanoids assisting with logistical duties, allowing human staff to concentrate on direct patient care. For businesses, the promise of 24/7 automation translates directly into increased output without additional human resources, ensuring operations move seamlessly day and night.

The Walker S2 exemplifies how advanced automation is rapidly moving beyond research labs into practical, demanding workplaces. With autonomous battery swapping, humanoid robots are poised to work extended hours that far exceed human capacity.

The robots need neither coffee breaks nor sleep; they are designed for relentless productivity, marking a significant step towards a future where machines play an even more integral role in daily industrial and societal functions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netflix’s AI-driven VFX marks industry milestone

In a significant leap into generative AI, Netflix has incorporated the technology into the post-production of its original series The Eternaut, marking the first time the streaming giant has used AI-generated content in a final scene.

The sequence in question, a dramatic depiction of a building collapsing in Buenos Aires, was created using generative AI, allowing for a rapid and cost-effective production process.

Co-CEO Ted Sarandos emphasised that the AI-generated sequence was completed 10 times faster and more affordably than traditional visual effects methods.

He noted that AI enabled the production team to achieve high-quality visual effects that would have been unfeasible within the show’s budget constraints.

The development highlights Netflix’s commitment to exploring innovative technologies to enhance its content creation processes.

By integrating generative AI, the company aims to streamline production workflows and expand creative possibilities. At the same time, moves like this raise questions about the implications of AI in the entertainment industry, particularly its potential impact on jobs and the authenticity of creative work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Z.ai unveils cheaper, advanced AI model GLM-4.5

Chinese AI startup Z.ai, formerly Zhipu, is increasing pressure on global competitors with its latest model, GLM-4.5. The company has adopted an aggressive open-source strategy to attract developers. Anyone can download and use the model without licensing fees or platform restrictions.

GLM-4.5 is designed for agentic AI, breaking tasks into smaller components for improved performance. By approaching problems step by step, the model delivers more accurate and efficient outcomes. Z.ai aims to stand out through both technical sophistication and affordability.

CEO Zhang Peng says the model runs on only eight Nvidia H20 chips, while DeepSeek’s model needs sixteen. Nvidia developed the H20 to comply with US export controls aimed at China. Reducing chip demand significantly lowers the model’s operational footprint.

Zhang said the company has enough computing power and is not seeking further hardware now. Z.ai plans to charge 11 cents per million input tokens, undercutting DeepSeek R1’s 14 cents. Output tokens will cost 28 cents per million, compared to DeepSeek’s 2.19 dollars.
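
To put those rates in perspective, here is a back-of-envelope cost comparison using the per-million-token prices quoted above; the daily workload figures are invented purely for illustration.

# Back-of-envelope comparison using the per-million-token prices quoted
# above. The workload (tokens per day) is a made-up illustration.

PRICES_PER_MILLION_TOKENS = {        # US dollars
    "GLM-4.5":     {"input": 0.11, "output": 0.28},
    "DeepSeek R1": {"input": 0.14, "output": 2.19},
}


def daily_cost(model: str, input_tokens: float, output_tokens: float) -> float:
    """Dollar cost of processing the given numbers of input and output tokens."""
    price = PRICES_PER_MILLION_TOKENS[model]
    return (input_tokens / 1e6) * price["input"] + (output_tokens / 1e6) * price["output"]


if __name__ == "__main__":
    # Hypothetical workload: 50 million input and 10 million output tokens per day.
    for model in PRICES_PER_MILLION_TOKENS:
        print(f"{model}: ${daily_cost(model, 50e6, 10e6):.2f} per day")
    # Prints roughly $8.30 for GLM-4.5 versus $28.90 for DeepSeek R1, with
    # nearly all of the difference coming from output-token pricing.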

Such pricing could reshape large language model deployment expectations, especially in resource-limited environments. High costs have long been a barrier to broader AI adoption. Z.ai appears to be positioning itself as a more accessible alternative.

Founded in 2019, Z.ai has raised more than 1.5 billion dollars from investors including Alibaba, Tencent, and Qiming Venture Partners. It has grown quickly from a research-focused lab to one of China’s most prominent AI contenders. A public listing in Greater China is reportedly being prepared.

OpenAI recently named Zhipu among the Chinese firms it considers strategically significant in global AI development. US authorities responded by restricting American companies from working with Z.ai. The startup has nonetheless continued to expand its model lineup and partnerships.

Chinese firms increasingly invest in open-source models, often with domestic hardware compatibility in mind. Moonshot, another Alibaba-backed company, released the Kimi K2 model. Kimi K2 has received praise for its performance in coding and mathematical tasks.

Tencent has joined the race with its HunyuanWorld-1.0 model, which is built to generate immersive 3D environments. HunyuanWorld-1.0 can accelerate game development, virtual reality design, and simulation work. Cutting-edge features are being paired with highly efficient architectures.

Alibaba also introduced its Qwen3-Coder model to assist in code generation and debugging. Such AI tools are seeing increasing use in software engineering and education. Chinese developers are positioning themselves to compete with Western offerings such as OpenAI’s Codex and Anthropic’s Claude.

The momentum within China’s AI sector is accelerating despite geopolitical and trade restrictions. A clear shift is underway from imitation to innovation, with local startups advancing independent research. Many models are trained on China-specific datasets to optimise relevance and performance.

Z.ai’s strategy combines cost reduction, efficient chip use, and broad availability. The company can build community trust and encourage ecosystem growth by open-sourcing its tools. At the same time, pricing undercuts major rivals and could disrupt the market.

Global AI development is increasingly decentralised, with Chinese firms no longer just playing catch-up. Large-scale funding and state support are helping to close gaps in hardware and training infrastructure. Z.ai is one of several firms pushing toward greater technological autonomy.

Open-source AI development is also helping Chinese companies win favour with developers outside their borders. Many international teams are experimenting with Chinese models to diversify risk and reduce reliance on US tech. Z.ai’s GLM-4.5 is among the models gaining traction globally.

By offering a powerful, lightweight, and affordable model, Z.ai is setting a new benchmark in the industry. The combination of technical refinement and strategic pricing draws attention from investors and users. A new era of AI competition is emerging.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Robot artist Ai-Da explores human self-perception

The world’s first ultra-realistic robot artist, Ai-Da, has been prompting profound questions about human-robot interactions, according to her creator.

Designed in Oxford by Aidan Meller, a modern and contemporary art specialist, and built in the UK by Engineered Arts, Ai-Da is a humanoid robot specifically engineered for artistic creation. She recently unveiled a portrait of King Charles III, adding to her notable portfolio.

Aidan Meller, Ai-Da’s creator, stated that working with the robot has evoked ‘lots of questions about our relationship with ourselves.’ He highlighted how Ai-Da’s artwork ‘drills into some of our time’s biggest concerns and thoughts.’

Ai-Da uses cameras in her eyes to capture images, which are then processed by AI algorithms and converted into real-time coordinates for her robotic arm, enabling her to paint and draw.
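
In very broad strokes, that capture-process-draw pipeline can be sketched as follows. Every function here is a hypothetical stand-in for components whose internals have not been published; the sketch only mirrors the stages named above (camera image, AI processing, coordinates for the arm).

# Hypothetical, highly simplified sketch of the pipeline described above:
# camera image -> AI processing -> real-time coordinates for the robotic arm.
# None of these functions reflect Ai-Da's actual implementation.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Stroke:
    """One drawing stroke as a list of (x, y) points in the arm's workspace."""
    points: List[Tuple[float, float]]


def capture_image() -> List[List[float]]:
    """Stand-in for grabbing a frame from the eye cameras (dummy 4x4 frame)."""
    return [[0.0] * 4 for _ in range(4)]


def plan_strokes(image: List[List[float]]) -> List[Stroke]:
    """Stand-in for the AI models that turn an image into drawable strokes."""
    # Toy output: one horizontal stroke per image row, purely illustrative.
    return [Stroke(points=[(0.0, float(y)), (1.0, float(y))]) for y in range(len(image))]


def draw(strokes: List[Stroke]) -> None:
    """Stand-in for streaming stroke coordinates to the robotic arm."""
    for stroke in strokes:
        start, end = stroke.points[0], stroke.points[-1]
        print(f"move arm from {start} to {end}")


if __name__ == "__main__":
    draw(plan_strokes(capture_image()))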

Mr Meller explained, ‘You can meet her, talk to her using her language model, and she can then paint and draw you from sight.’

He also observed that people’s preconceptions about robots are often outdated: ‘It’s not until you look a robot in the eye and they say your name that the reality of this new sci-fi world that we are now in takes hold.’

Ai-Da’s contributions to the art world continue to grow. She produced and showcased work at the AI for Good Global Summit 2024 in Geneva, Switzerland, an event held under the auspices of the UN. That same year, her triptych of Enigma code-breaker Alan Turing sold for over £1 million at auction.

Her focus this year shifted to King Charles III, chosen because, as Mr Meller noted, ‘With extraordinary strides that are taking place in technology and again, always questioning our relationship to the environment, we felt that King Charles was an excellent subject.’

Buckingham Palace authorised the display of Ai-Da’s portrait of the King, even though the robot has never met him. Ai-Da, connected to the internet, uses extensive data to inform her choice of subjects, with Mr Meller revealing, ‘Uncannily, and rather nerve-rackingly, we just ask her.’

The conversations generated inform the artwork. Ai-Da also painted a portrait of King Charles’s mother, Queen Elizabeth II, in 2023. Mr Meller shared that the most significant realisation from six years of working with Ai-Da was ‘not so much about how human she is but actually how robotic we are.’

He concluded, ‘We hope Ai-Da’s artwork can be a provocation for that discussion.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshaping the US labour market

AI is often seen as a job destroyer, but it’s also emerging as a significant source of new employment, according to a new Brookings report. The number of job postings mentioning AI has more than doubled in the past year, with demand continuing to surge across various industries and regions.

Over the past 15 years, AI-related job listings have grown nearly 29% annually, far outpacing the 11% growth rate of overall job postings in the broader economy.

Brookings based its findings on data from Lightcast, a labour market analytics firm, and noted rising demand for AI skills across sectors, including manufacturing. According to the US Census Bureau’s Business Trends Survey, the share of manufacturers using AI has jumped from 4% in early 2023 to 9% by mid-2025.

Yet, AI jobs still form a small part of the market. Goldman Sachs predicts widespread AI adoption will peak in the early 2030s, with a slower near-term influence on jobs. ‘AI is visible in the micro labour market data, but it doesn’t dominate broader job dynamics,’ said Joseph Briggs, an economist at Goldman Sachs.

Roles range from AI engineers and data scientists to consultants and marketers learning to integrate AI into business operations responsibly and ethically. In 2025, over 80,000 job postings cited generative AI skills—up from fewer than 4,000 in 2010, Brookings reported, indicating explosive long-term growth.

Job openings involving ‘responsible AI’—those addressing ethical AI use in business and society—are also rising, according to data from Indeed and Lightcast. ‘As AI evolves, so does what counts as an AI job,’ said Cory Stahle of the Indeed Hiring Lab, noting that definitions shift with new business applications.

AI skills carry financial value, too. Lightcast found that jobs requiring AI expertise offer an average salary premium of $18,000, or 28% more annually. Unsurprisingly, tech hubs like Silicon Valley and Seattle dominate AI hiring, but job growth is spreading to regions like the Sunbelt and the East Coast.

Mark Muro of Brookings noted that universities play a key role in AI job growth across new regions by fuelling local innovation. AI is also entering non-tech fields such as finance, human resources, and marketing, with more than half of AI-related postings now being outside IT roles.

Muro expects more widespread AI adoption in the next few years, as employers gain clarity on its value, limitations and potential for productivity. ‘There’s broad consensus that AI boosts productivity and economic competitiveness,’ he said. ‘It energises regional leaders and businesses to act more quickly.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts urge broader values in AI development

Since the launch of ChatGPT in late 2022, the private sector has led AI innovation. Major players like Microsoft, Google, and Alibaba—alongside emerging firms such as Anthropic and Mistral—are racing to monetise AI and secure long-term growth in the technology-driven economy.

But during the Fortune Brainstorm AI conference in Singapore this week, experts stressed the importance of human values in shaping AI’s future. Anthea Roberts, founder of Dragonfly Thinking, argued that AI must be built not just to think faster or cheaper, but also to think better.

She highlighted the risk of narrow thinking—national, disciplinary or algorithmic—and called for diverse, collaborative thinking to counter it. Roberts sees potential in human-AI collaboration, which can help policymakers explore different perspectives, boosting the chances of sound outcomes.

Russell Wald, executive director at Stanford’s Institute for Human-Centred AI, called AI a civilisation-shifting force. He stressed the need for an interdisciplinary ecosystem—combining academia, civil society, government and industry—to steer AI development.

‘Industry must lead, but so must academia,’ Wald noted, pointing to universities’ contributions to early research, training, and transparency. Despite widespread adoption, AI scepticism persists due to issues like bias, hallucination, and unpredictable or inappropriate language.

Roberts said most people fall into two camps: those who use AI uncritically, such as students and tech firms, and those who reject it entirely.

She labelled the latter as practising ‘critical non-use’ due to concerns over bias, authenticity and ethical shortcomings in current models. Roberts urged a broader demographic, especially people outside tech hubs like Silicon Valley, to take part in shaping AI’s governance and future.

Wald noted that in designing AI, developers must reflect the best of humanity: ‘Not the crazy uncle at the Thanksgiving table.’

Both experts believe the stakes are high, and the societal benefits of getting AI right are too great to ignore or mishandle. ‘You need to think not just about what people want,’ Roberts said, ‘but what they want to want—their more altruistic instincts.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teachers and students warn: AI is eroding engagement

A student from San Jose and an English teacher in Chicago co-authored a Boston Globe opinion warning that widespread use of AI in schools damages the vital student-teacher bond.

While marketed as efficiency boosters, AI tools encourage students to forgo independent thinking.

Many students simply generate entire assignments with AI and reformat the text to avoid detection, undermining honest academic interaction.

Educators report feeling increasingly marginalised as AI handles much of their workload, including grading, lesson planning, and feedback within classrooms.

Though schools and tech companies promote these tools as educational enhancements, trust has eroded in many schools as teachers struggle to assess real student ability.

The authors call for a return to supervised, pen-and-paper in-class assignments, strict scrutiny of AI vendors in education, and outright bans on unsupervised classroom AI tools to help reset the learning relationship.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube Shorts brings image-to-video AI tool

Google has rolled out new AI features for YouTube Shorts, including an image-to-video tool powered by its Veo 2 model. The update lets users convert still images into six-second animated clips, such as turning a static group photo into a dynamic scene.

Creators can also experiment with immersive AI effects that stylise selfies or simple drawings into themed short videos. These features aim to enhance creative expression and are currently available in the US, Canada, Australia and New Zealand, with global rollout expected later this year.

A new AI Playground hub has also been launched to house all generative tools, including video effects and inspiration prompts. Users can find the hub by tapping the Shorts camera’s ‘create’ button and then the sparkle icon in the top corner.

Google plans to introduce even more advanced tools with the upcoming Veo 3 model, which will support synchronised audio generation. The company is positioning YouTube Shorts as a key platform for AI-driven creativity in the video content space.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teens turn to AI for advice and friendship

A growing number of US teens rely on AI for daily decision-making and emotional support, turning to chatbots such as ChatGPT, Character.AI and Replika. One Kansas student admits she uses AI to simplify everyday tasks, from choosing clothes to planning events, while avoiding schoolwork.

A survey by Common Sense Media reveals that over 70 per cent of teenagers have tried AI companions, with around half using them regularly. Roughly a third reported discussing serious issues with AI, sometimes finding it as satisfying as, or more satisfying than, talking with friends.

Experts express concern that such frequent AI interactions could hinder the development of creativity, critical thinking and social skills in young people. The study warns that adolescents may grow accustomed to the constant validation AI provides, missing out on real-world emotional growth.

Educators caution that while AI offers constant, non‑judgemental feedback, it is not a replacement for authentic human relationships. They recommend AI use be carefully supervised to ensure it complements rather than replaces real interaction.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!