Z.ai unveils cheaper, advanced AI model GLM-4.5

Chinese AI startup Z.ai, formerly Zhipu, is increasing pressure on global competitors with its latest model, GLM-4.5. The company has adopted an aggressive open-source strategy to attract developers. Anyone can download and use the model without licensing fees or platform restrictions.

GLM-4.5 is built for agentic AI: it breaks tasks into smaller components and works through them step by step, delivering more accurate and efficient outcomes. Z.ai aims to stand out through both technical sophistication and affordability.

CEO Zhang Peng says the model runs on only eight Nvidia H20 chips, while DeepSeek’s model needs sixteen. Nvidia developed the H20 to comply with US export controls aimed at China. Reducing chip demand significantly lowers the model’s operational footprint.

Zhang said the company has enough computing power and is not seeking further hardware now. Z.ai plans to charge 11 cents per million input tokens, undercutting DeepSeek R1’s 14 cents. Output tokens will cost 28 cents per million, compared to DeepSeek’s 2.19 dollars.
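Taken at face value, those rates compound quickly at scale. A minimal sketch comparing the two published price lists for a hypothetical monthly workload (the workload sizes below are illustrative, not from the article):

```python
# Published per-million-token rates in USD, as cited in the article.
ZAI = {"input": 0.11, "output": 0.28}
DEEPSEEK_R1 = {"input": 0.14, "output": 2.19}

def cost(rates, input_tokens, output_tokens):
    """Total cost in USD for a workload, given per-million-token rates."""
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Hypothetical workload: 50M input tokens and 10M output tokens per month.
zai = cost(ZAI, 50_000_000, 10_000_000)          # 50*0.11 + 10*0.28 = 8.30
r1 = cost(DEEPSEEK_R1, 50_000_000, 10_000_000)   # 50*0.14 + 10*2.19 = 28.90
print(f"Z.ai: ${zai:.2f}  DeepSeek R1: ${r1:.2f}")
```

Most of the gap comes from output pricing, which dominates once responses grow long.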

Such pricing could reshape large language model deployment expectations, especially in resource-limited environments. High costs have long been a barrier to broader AI adoption. Z.ai appears to be positioning itself as a more accessible alternative.

Founded in 2019, Z.ai has raised more than 1.5 billion dollars from investors including Alibaba, Tencent, and Qiming Venture Partners. It has grown quickly from a research-focused lab to one of China’s most prominent AI contenders. A public listing in Greater China is reportedly being prepared.

OpenAI recently named Zhipu among the Chinese firms it considers strategically significant in global AI development. US authorities responded by restricting American companies from working with Z.ai. The startup has nonetheless continued to expand its model lineup and partnerships.

Chinese firms increasingly invest in open-source models, often with domestic hardware compatibility in mind. Moonshot, another Alibaba-backed company, released the Kimi K2 model. Kimi K2 has received praise for its performance in coding and mathematical tasks.

Tencent has joined the race with its HunyuanWorld-1.0 model, which is built to generate immersive 3D environments. HunyuanWorld-1.0 can accelerate game development, virtual reality design, and simulation work. Cutting-edge features are being paired with highly efficient architectures.

Alibaba also introduced its Qwen3-Coder model to assist in code generation and debugging. Such AI tools are seeing increasing use in software engineering and education. Chinese developers are positioning themselves to compete with Western offerings such as OpenAI’s Codex and Anthropic’s Claude.

The momentum within China’s AI sector is accelerating despite geopolitical and trade restrictions. A clear shift is underway from imitation to innovation, with local startups advancing independent research. Many models are trained on China-specific datasets to optimise relevance and performance.

Z.ai’s strategy combines cost reduction, efficient chip use, and broad availability. The company can build community trust and encourage ecosystem growth by open-sourcing its tools. At the same time, pricing undercuts major rivals and could disrupt the market.

Global AI development is increasingly decentralised, with Chinese firms no longer just playing catch-up. Large-scale funding and state support are helping to close gaps in hardware and training infrastructure. Z.ai is one of several firms pushing toward greater technological autonomy.

Open-source AI development is also helping Chinese companies win favour with developers outside their borders. Many international teams are experimenting with Chinese models to diversify risk and reduce reliance on US tech. Z.ai’s GLM-4.5 is among the models gaining traction globally.

By offering a powerful, lightweight, and affordable model, Z.ai is setting a new benchmark in the industry. The combination of technical refinement and strategic pricing draws attention from investors and users. A new era of AI competition is emerging.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Robot artist Ai-Da explores human self-perception

The world’s first ultra-realistic robot artist, Ai-Da, has been prompting profound questions about human-robot interactions, according to her creator.

Designed in Oxford by Aidan Meller, a modern and contemporary art specialist, and built in the UK by Engineered Arts, Ai-Da is a humanoid robot specifically engineered for artistic creation. She recently unveiled a portrait of King Charles III, adding to her notable portfolio.

Meller said that working with the robot has evoked ‘lots of questions about our relationship with ourselves.’ He highlighted how Ai-Da’s artwork ‘drills into some of our time’s biggest concerns and thoughts.’

Ai-Da uses cameras in her eyes to capture images, which are then processed by AI algorithms and converted into real-time coordinates for her robotic arm, enabling her to paint and draw.
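The capture-process-draw loop described above can be sketched as a toy pipeline. This is purely illustrative: Ai-Da’s actual algorithms are not public, and the thresholding approach below is a hypothetical stand-in for mapping an image to arm coordinates.

```python
# Toy illustration of the pipeline the article describes:
# an image is analysed and converted into coordinates an arm could draw.
# Ai-Da's real software is not public; this is a hypothetical sketch.

def image_to_strokes(image, threshold=128):
    """Map dark pixels of a grayscale image (list of rows, values 0-255)
    to (x, y) coordinates a drawing arm could visit."""
    strokes = []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value < threshold:   # dark pixel -> place a mark here
                strokes.append((x, y))
    return strokes

# A 3x3 "image" with a dark diagonal line.
img = [
    [0, 255, 255],
    [255, 0, 255],
    [255, 255, 0],
]
print(image_to_strokes(img))  # -> [(0, 0), (1, 1), (2, 2)]
```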

Mr Meller explained, ‘You can meet her, talk to her using her language model, and she can then paint and draw you from sight.’

He also observed that people’s preconceptions about robots are often outdated: ‘It’s not until you look a robot in the eye and they say your name that the reality of this new sci-fi world that we are now in takes hold.’

Ai-Da’s contributions to the art world continue to grow. She produced and showcased her work at the AI for Good Global Summit 2024 in Geneva, Switzerland, an event under the auspices of the UN. That same year, her triptych of Enigma code-breaker Alan Turing sold for over £1 million at auction.

Her focus this year shifted to King Charles III, chosen because, as Mr Meller noted, ‘With extraordinary strides that are taking place in technology and again, always questioning our relationship to the environment, we felt that King Charles was an excellent subject.’

Buckingham Palace authorised the display of Ai-Da’s portrait of the King, even though the robot has never met him. Ai-Da, connected to the internet, uses extensive data to inform her choice of subjects, with Mr Meller revealing, ‘Uncannily, and rather nerve-rackingly, we just ask her.’

The conversations generated inform the artwork. Ai-Da also painted a portrait of King Charles’s mother, Queen Elizabeth II, in 2023. Mr Meller shared that the most significant realisation from six years of working with Ai-Da was ‘not so much about how human she is but actually how robotic we are.’

He concluded, ‘We hope Ai-Da’s artwork can be a provocation for that discussion.’


AI reshaping the US labour market

AI is often seen as a job destroyer, but it’s also emerging as a significant source of new employment, according to a new Brookings report. The number of job postings mentioning AI has more than doubled in the past year, with demand continuing to surge across various industries and regions.

Over the past 15 years, AI-related job listings have grown nearly 29% annually, far outpacing the 11% growth rate of overall job postings in the broader economy.
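The gap those two rates imply widens dramatically once compounded over 15 years. A quick illustrative calculation, using the rates cited above and an arbitrary baseline of 100 postings each:

```python
# Compounding the Brookings-cited annual growth rates over 15 years,
# from an arbitrary baseline index of 100 postings each.
ai_rate, overall_rate = 0.29, 0.11
years = 15

ai_postings = 100 * (1 + ai_rate) ** years            # roughly 45x the baseline
overall_postings = 100 * (1 + overall_rate) ** years  # roughly 4.8x the baseline

print(f"AI postings index after {years} years: {ai_postings:.0f}")
print(f"Overall postings index after {years} years: {overall_postings:.0f}")
```

At those rates, AI-related listings end the period nearly ten times larger, relative to their start, than listings overall.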

Brookings based its findings on data from Lightcast, a labour market analytics firm, and noted rising demand for AI skills across sectors, including manufacturing. According to the US Census Bureau’s Business Trends Survey, the share of manufacturers using AI has jumped from 4% in early 2023 to 9% by mid-2025.

Yet, AI jobs still form a small part of the market. Goldman Sachs predicts widespread AI adoption will peak in the early 2030s, with a slower near-term influence on jobs. ‘AI is visible in the micro labour market data, but it doesn’t dominate broader job dynamics,’ said Joseph Briggs, an economist at Goldman Sachs.

Roles range from AI engineers and data scientists to consultants and marketers learning to integrate AI into business operations responsibly and ethically. In 2025, over 80,000 job postings cited generative AI skills—up from fewer than 4,000 in 2010, Brookings reported, indicating explosive long-term growth.

Job openings involving ‘responsible AI’—those addressing ethical AI use in business and society—are also rising, according to data from Indeed and Lightcast. ‘As AI evolves, so does what counts as an AI job,’ said Cory Stahle of the Indeed Hiring Lab, noting that definitions shift with new business applications.

AI skills carry financial value, too. Lightcast found that jobs requiring AI expertise offer an average salary premium of $18,000, or 28% more annually. Unsurprisingly, tech hubs like Silicon Valley and Seattle dominate AI hiring, but job growth spreads to regions like the Sunbelt and the East Coast.

Mark Muro of Brookings noted that universities play a key role in AI job growth across new regions by fuelling local innovation. AI is also entering non-tech fields such as finance, human resources, and marketing, with more than half of AI-related postings now being outside IT roles.

Muro expects more widespread AI adoption in the next few years, as employers gain clarity on its value, limitations and potential for productivity. ‘There’s broad consensus that AI boosts productivity and economic competitiveness,’ he said. ‘It energises regional leaders and businesses to act more quickly.’


Experts urge broader values in AI development

Since the launch of ChatGPT in late 2022, the private sector has led AI innovation. Major players like Microsoft, Google, and Alibaba, alongside emerging firms such as Anthropic and Mistral, are racing to monetise AI and secure long-term growth in the technology-driven economy.

But during the Fortune Brainstorm AI conference in Singapore this week, experts stressed the importance of human values in shaping AI’s future. Anthea Roberts, founder of Dragonfly Thinking, argued that AI must be built not just to think faster or cheaper, but also to think better.

She highlighted the risk of narrow thinking—national, disciplinary or algorithmic—and called for diverse, collaborative thinking to counter it. Roberts sees potential in human-AI collaboration, which can help policymakers explore different perspectives, boosting the chances of sound outcomes.

Russell Wald, executive director at Stanford’s Institute for Human-Centred AI, called AI a civilisation-shifting force. He stressed the need for an interdisciplinary ecosystem—combining academia, civil society, government and industry—to steer AI development.

‘Industry must lead, but so must academia,’ Wald noted, pointing to universities’ contributions to early research, training, and transparency. Despite widespread adoption, AI scepticism persists, due to issues like bias, hallucination, and unpredictable or inappropriate language.

Roberts said most people fall into two camps: those who use AI uncritically, such as students and tech firms, and those who reject it entirely.

She labelled the latter as practising ‘critical non-use’ due to concerns over bias, authenticity and ethical shortcomings in current models. Inviting a broader demographic into AI governance, Roberts urged more people—especially those outside tech hubs like Silicon Valley—to shape its future.

Wald noted that in designing AI, developers must reflect the best of humanity: ‘Not the crazy uncle at the Thanksgiving table.’

Both experts believe the stakes are high, and the societal benefits of getting AI right are too great to ignore or mishandle. ‘You need to think not just about what people want,’ Roberts said, ‘but what they want to want—their more altruistic instincts.’


Teachers and students warn: AI is eroding engagement

A student from San Jose and an English teacher in Chicago co-authored a Boston Globe opinion piece warning that widespread use of AI in schools damages the vital student-teacher bond.

While marketed as efficiency boosters, AI tools encourage students to forgo independent thinking.

Many students simply generate entire assignments with AI and reformat the text to evade detection, undermining honest academic interaction.

Educators report feeling increasingly marginalised as AI handles much of their workload, including grading, lesson planning, and feedback within classrooms.

Though schools and tech companies promote these tools as educational enhancements, trust has eroded in many classrooms as teachers struggle to assess real student ability.

The authors call for a return to supervised, pen-and-paper in-class assignments, strict scrutiny of AI vendors in education, and outright bans on unsupervised AI classroom tools to help reset the learning relationship.


YouTube Shorts brings image-to-video AI tool

Google has rolled out new AI features for YouTube Shorts, including an image-to-video tool powered by its Veo 2 model. The update lets users convert still images into six-second animated clips, such as turning a static group photo into a dynamic scene.

Creators can also experiment with immersive AI effects that stylise selfies or simple drawings into themed short videos. These features aim to enhance creative expression and are currently available in the US, Canada, Australia and New Zealand, with global rollout expected later this year.

A new AI Playground hub has also been launched to house all generative tools, including video effects and inspiration prompts. Users can find the hub by tapping the Shorts camera’s ‘create’ button and then the sparkle icon in the top corner.

Google plans to introduce even more advanced tools with the upcoming Veo 3 model, which will support synchronised audio generation. The company is positioning YouTube Shorts as a key platform for AI-driven creativity in the video content space.


Teens turn to AI for advice and friendship

A growing number of US teens rely on AI for daily decision‑making and emotional support, turning to chatbots such as ChatGPT, Character.AI and Replika. One Kansas student says she uses AI to simplify everyday tasks, such as choosing clothes or planning events, while avoiding it for schoolwork.

A survey by Common Sense Media reveals that over 70 per cent of teenagers have tried AI companions, with around half using them regularly. Roughly a third reported discussing serious issues with AI, sometimes finding it as satisfying as, or more satisfying than, talking with friends.

Experts express concern that such frequent AI interactions could hinder the development of creativity, critical thinking and social skills in young people. The study warns that adolescents may grow dependent on AI validation, missing out on real‑world emotional growth.

Educators caution that while AI offers constant, non‑judgemental feedback, it is not a replacement for authentic human relationships. They recommend AI use be carefully supervised to ensure it complements rather than replaces real interaction.


New benchmark exposes limits of current AI tools

A new coding competition has exposed the limitations of current AI models, with the winner solving just 7.5% of programming problems. The K Prize, launched by Databricks and Perplexity co-founder Andy Konwinski, aims to challenge smaller models using real-world GitHub issues in a contamination-free format.

Despite the low score, Eduardo Rocha de Andrade took home the $50,000 top prize. Konwinski says the intentionally tough benchmark helps avoid inflated results and encourages realistic assessments of AI capability.

Unlike the better-known SWE-Bench, which may allow models to train on test material, the K Prize uses only new issues submitted after a set deadline. Its design prevents exposure during training, making it a more reliable measure of generalisation.

A $1 million prize remains for any open-source model that scores over 90%. The low results are being viewed as a necessary wake-up call in the race to build competent AI software engineers.


Amazon buys Bee AI, the startup that listens to your day

Amazon has acquired Bee AI, a San Francisco-based startup known for its $50 wearable that listens to conversations and provides AI-generated summaries and reminders.

The deal was confirmed by Bee co-founder Maria de Lourdes Zollo in a LinkedIn post on Wednesday, but the acquisition terms were not disclosed. Bee gained attention earlier this year at CES in Las Vegas, where it unveiled a Fitbit-like bracelet using AI to deliver personal insights.

The device received strong feedback for its ability to analyse conversations and create to-do lists, reminders, and daily summaries. Bee also offers a $19-per-month subscription and an Apple Watch app. It raised $7 million before being acquired by Amazon.

‘When we started Bee, we imagined a world where AI is truly personal,’ Zollo wrote. ‘That dream now finds a new home at Amazon.’ Amazon confirmed the acquisition and is expected to integrate Bee’s technology into its expanding AI device strategy.

The company recently updated Alexa with generative AI and added similar features to Ring, its home security brand. Amazon’s hardware division is now led by Panos Panay, the former Microsoft executive who led Surface and Windows 11 development.

Bee’s acquisition suggests Amazon is exploring its own AI-powered wearable to compete in the rapidly evolving consumer tech space. It remains unclear whether Bee will operate independently or be folded into Amazon’s existing device ecosystem.

Privacy concerns have surrounded Bee, as its wearable records audio in real time. The company claims no recordings are stored or used for AI training. Bee insists that users can delete their data at any time. However, privacy groups have flagged potential risks.

The AI hardware market has seen mixed success. Meta’s Ray-Ban smart glasses gained traction, but others like the Rabbit R1 flopped. The Humane AI Pin also failed commercially and was recently sold to HP. Consumers remain cautious of always-on AI devices.

OpenAI is also moving into hardware. In May, it acquired Jony Ive’s AI startup, io, for a reported $6.4 billion. OpenAI has hinted at plans to develop a screenless wearable, joining the race to create ambient AI tools for daily life.

Bee’s transition from startup to Amazon acquisition reflects how big tech is absorbing innovation in ambient, voice-first AI. Amazon’s plans for Bee remain to be seen, but the move could mark a turning point for AI wearables if executed effectively.


US researchers expose watermark flaws

A team at the University of Maryland found that adversarial attacks easily defeat most watermarking technologies designed to label AI‑generated images. Their study reveals that even visible watermarks fail to indicate content provenance reliably.

The US researchers tested low‑perturbation invisible watermarks and more robust visible ones, demonstrating that adversaries can easily remove or forge marks. Lead author Soheil Feizi noted the technology is far from foolproof, warning that ‘we broke all of them’.

Despite these concerns, experts argue that watermarking can still be helpful in a broader detection strategy. UC Berkeley professor Hany Farid said robust watermarking is ‘part of the solution’ when combined with other forensic methods.

Tech giants and researchers continue to develop watermarking tools like Google DeepMind’s SynthID, though such systems are not considered infallible. The consensus emerging from recent tests is that watermarking alone cannot be relied upon to counter deepfake threats.
