AI bands rise as real musicians struggle to compete

AI is quickly transforming the music industry, with AI-generated bands now drawing millions of plays on platforms like Spotify.

While these acts may sound like traditional musicians, they are entirely digital creations. Streaming services rarely label AI music clearly, and the producers behind these tracks often remain anonymous and unreachable. Human artists, meanwhile, are quietly watching their workload dry up.

Music professionals are beginning to express concern. Composer Leo Sidran believes AI is already taking work away from creators like him, noting that many former clients now rely on AI-generated solutions instead of original compositions.

Unlike previous tech innovations, which empowered musicians, AI risks erasing job opportunities entirely, according to Berklee College of Music professor George Howard, who warns it could become a zero-sum game.

AI music is especially popular for passive listening—background tracks for everyday life. In contrast, real musicians still hold value among fans who engage more actively with music.

However, AI is cheap, fast, and royalty-free, making it attractive to publishers and advertisers. From film soundtracks to playlists filled with faceless artists, synthetic sound is rapidly replacing human creativity in many commercial spaces.

Experts urge musicians to double down on what makes them unique instead of mimicking trends that AI can easily replicate. Live performance remains one of the few areas where AI has yet to gain traction. Until synthetic bands take the stage, artists may still find refuge in concerts and personal connection with fans.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Flipkart employee deletes ChatGPT over emotional dependency

ChatGPT has become an everyday tool for many, serving as a homework partner, a research aid, and even a comforting listener. But questions are beginning to emerge about the emotional bonds users form with it. A recent LinkedIn post has reignited the debate around AI overuse.

Simrann M Bhambani, a marketing professional at Flipkart, publicly shared her decision to delete ChatGPT from her devices. In a post titled ‘ChatGPT is TOXIC! (for me)’, she described how casual interaction escalated into emotional dependence. The platform began to resemble a digital therapist.

Bhambani admitted to confiding every minor frustration and emotional spiral to the chatbot. Its constant availability and non-judgemental replies gave her a false sense of security. Even with supportive friends, she felt drawn to the machine’s quiet reliability.

What began as curiosity turned into compulsion. She found herself spending hours feeding the bot intrusive thoughts and endless questions. ‘I gave my energy to something that wasn’t even real,’ she wrote. The experience led to more confusion instead of clarity.

Rather than offering mental relief, the chatbot fuelled her overthinking. The emotional noise grew louder, eventually becoming overwhelming. She realised that the problem wasn’t the technology itself, but how it quietly replaced self-reflection.

Deleting the app marked a turning point. Bhambani described the decision as a way to reclaim mental space and reduce digital clutter. She warned others that AI tools, while useful, can easily replace human habits and emotional processing if left unchecked.

Many users may not notice such patterns until they are deeply entrenched. AI chatbots are designed to be helpful and responsive, but they lack the nuance and care of human conversation. Their steady presence can foster a deceptive sense of intimacy.

People increasingly rely on digital tools to navigate their daily emotions, often without understanding the consequences. Some may find themselves withdrawing from human relationships or journalling less often. Emotional outsourcing to machines can significantly change how people process personal experiences.

Industry experts have warned about the risks of emotional reliance on generative AI. Chatbots are known to produce inaccurate or hallucinated responses, especially when asked to provide personal advice. Sole dependence on such tools can lead to misinformation or emotional confusion.

Companies like OpenAI have stressed that ChatGPT is not a substitute for professional mental health support. While the bot is trained to provide helpful and empathetic responses, it cannot replace human judgement or real-world relationships. Boundaries are essential.

Mental health professionals also caution against using AI as an emotional crutch. Reflection and self-awareness take time and require discomfort, which AI often smooths over. The convenience can dull long-term growth and self-understanding.

Bhambani’s story has resonated with many who have quietly developed similar habits. Her openness has sparked important discussions on emotional hygiene in the age of AI. More users are starting to reflect on their relationship with digital tools.

Social media platforms are also witnessing an increased number of posts about AI fatigue and cognitive overload. People are beginning to question how constant access to information and feedback affects emotional well-being. There is growing awareness around the need for balance.

AI is expected to become even more integrated into daily life, from virtual assistants to therapy bots. Recognising the line between convenience and dependency will be key. Tools are meant to serve, not dominate, personal reflection.

Developers and users alike must remain mindful of how often and why they turn to AI. Chatbots can complement human support systems, but they are not replacements. Bhambani’s experience serves as a cautionary tale in the age of machine intimacy.

Tech giants back Trump’s AI deregulation plan amid public concern over societal impacts

Donald Trump recently hosted an AI summit in Washington, titled ‘Winning the AI Race’, aimed at fostering a deregulated environment for AI innovation. Key figures from the tech industry, including Nvidia’s CEO Jensen Huang and Palantir’s CTO Shyam Sankar, attended the event.

Co-hosted by the Hill and Valley Forum and the Silicon Valley All-In Podcast, the summit was a platform for Trump to introduce his ‘AI Action Plan’, comprising three executive orders focused on deregulation. Trump’s objective is to dismantle regulatory restrictions he perceives as obstacles to innovation, aiming to re-establish the US as a global leader in AI exports.

The executive orders announced target the elimination of ‘ideological dogmas such as diversity, equity, and inclusion (DEI)’ in AI models developed by federally funded companies. Additionally, one order promotes exporting US-developed AI technologies internationally, while another seeks to lessen environmental restrictions and speed up approvals for energy-intensive data centres.

These measures are seen as reversing the Biden administration’s policies, which stressed the importance of safety and security in AI development. Technology giants Apple, Meta, Amazon, and Alphabet have shown significant support for Trump’s initiatives, contributing to his inauguration fund and engaging with him at his Mar-a-Lago estate. Leaders like OpenAI’s Sam Altman and Nvidia’s Jensen Huang have also pledged substantial investments in US AI infrastructure.

Despite this backing, over 100 groups, including labour, environmental, civil rights, and academic organisations, have voiced their opposition through a ‘People’s AI action plan’. These groups warn of the potential risks of unregulated AI, which they fear could undermine civil liberties, equality, and environmental safeguards.

They argue that public welfare should not be compromised for corporate gains, highlighting the dangers of allowing tech giants to dominate policy-making. That discourse illustrates the divide between industry aspirations and societal consequences.

The tech industry’s influence on AI legislation through lobbying is noteworthy, with a report from Issue One indicating that eight of the largest tech companies spent a collective $36 million on lobbying in 2025 alone. Meta led with $13.8 million, employing 86 lobbyists, while Nvidia and OpenAI saw significant increases in their expenditure compared to previous years. The substantial financial outlay reflects the industry’s vested interest in shaping regulatory frameworks to favour business interests, igniting a debate over the ethical responsibilities of unchecked AI progress.

As tech companies and pro-business entities laud Trump’s deregulation efforts, concerns persist over the societal impacts of such policies.

China issues action plan for global AI governance and proposes global AI cooperation organisation

At the 2025 World AI Conference in Shanghai, Chinese Premier Li Qiang urged the international community to prioritise joint efforts in governing AI, making reference to a need to establish a global framework and set of rules widely accepted by the global community. He unveiled a proposal by the Chinese government to create a global AI cooperation organisation to foster international collaboration, innovation, and inclusivity in AI across nations.

China, he said, attaches great importance to global AI governance and has been actively promoting multilateral and bilateral cooperation, with a willingness to offer more ‘Chinese solutions’.

An Action Plan for AI Global Governance was also presented at the conference. The plan outlines, in its introduction, a call for ‘all stakeholders to take concrete and effective actions based on the principles of serving the public good, respecting sovereignty, development orientation, safety and controllability, equity and inclusiveness, and openness and cooperation, to jointly advance the global development and governance of AI’.

The document includes 13 points related to key areas of international AI cooperation, including promoting inclusive infrastructure development, fostering open innovation ecosystems, ensuring high-quality data supply, and advancing sustainability through green AI practices. It also calls for consensus-building around technical standards, advancing international cooperation on AI safety governance, and supporting countries – especially those in the Global South – in ‘developing AI technologies and services suited to their national conditions’.

Notably, the plan indicates China’s support for multilateralism when it comes to the governance of AI, calling for an active implementation of commitments made by UN member states in the Pact for the Future and the Global Digital Compact, and expressing support for the establishment of the International AI Scientific Panel and a Global Dialogue on AI Governance (whose terms of reference are currently being negotiated by UN member states in New York).

UBTech’s Walker S2 marks a leap towards uninterrupted robotic work

The paradigm of robotic autonomy is undergoing a profound transformation with the advent of UBTech’s new humanoid, the Walker S2. Traditionally, robots have been tethered to human assistance for power, requiring manual plugging in or lengthy recharges.

UBTech, a pioneering robotics company, is now dismantling these limitations with a groundbreaking feature in the Walker S2: the ability to swap its battery autonomously. The innovation promises to reshape the landscape of factory work and potentially many other industries, enabling near-continuous, 24/7 operation without human intervention.

The core of this advancement lies in the Walker S2’s sophisticated self-charging mechanism. When a battery begins to deplete, the robot does not power down. Instead, it intelligently navigates to a strategically placed battery swap station.

Once positioned, the robot executes a precise sequence of movements: it twists its torso, deploys built-in tools on its arms to unfasten and remove the drained battery from its back cavity, places it into an empty bay on the swap station, and then expertly retrieves a fresh, fully charged module.

The new battery is then securely plugged into one of its dual battery bays. The process is remarkably swift, taking approximately three minutes, allowing the robot to return to its tasks almost immediately.

The hot-swappable system mirrors the convenience of advanced electric vehicle technology, but its application to humanoid robotics unlocks unprecedented operational efficiency. Standing at 5 feet, 3 inches (approximately 160 cm) tall and weighing 95 pounds (about 43 kg), the Walker S2 is designed to integrate seamlessly into environments built for humans.

It has two 48-volt lithium batteries, ensuring a continuous power supply during the brief swapping procedure. While one battery powers the robot’s ongoing operations, the other can be exchanged.

Each battery provides approximately two hours of operation while walking or up to four hours when the robot stands still and performs tasks. The battery swap stations are not merely power hubs; they also meticulously monitor the health of each battery.

Should a battery show signs of degradation, a technician can be alerted to replace it promptly, further optimising the robot’s longevity and performance.

UBTech claims the Walker S2 is not a mere laboratory prototype but a robust solution engineered for real-world industrial deployment. Extensive testing has been conducted in the highly demanding environments of car factories operated by major Chinese electric vehicle manufacturers, including BYD, Nio, and Zeekr.

The trials validate the robot’s ability to operate effectively in dynamic production lines. The Walker S2 incorporates advanced vision systems, allowing it to detect battery levels and identify fully charged units, indicated by a green light on the stacked battery packs.

The robot autonomously reads the visual cues, ensuring precise selection and connection via a simple USB-style connector. Furthermore, the robot features a display face, enabling it to communicate its operational status to human workers, fostering a collaborative and transparent work environment. For safety, a prominent emergency stop button is also integrated.

China’s strategic investment in robotics is a driving force behind such innovations. Shenzhen, UBTech’s home base, is a thriving hub for robotics, boasting over 1,600 companies in the sector.

The nation’s broader push towards automation, part of its ‘Made in China 2025’ strategy, is a clear statement of global competitiveness, with China betting on AI and robotics to spearhead the next manufacturing era.

The coordinated industrial policy has led to China becoming the world’s largest market for industrial robots and a significant innovator in the field. The implications of robots like the Walker S2, built for non-stop operation, extend far beyond traditional factory floors.

Their ability to manage physical tasks continuously could redefine work in various sectors. Industries such as logistics, with vast warehouses requiring constant material handling, or airports, where baggage and cargo movement is ceaseless, could benefit immensely.

Hospitals could also see these humanoids assisting with logistical duties, allowing human staff to concentrate on direct patient care. For businesses, the promise of 24/7 automation translates directly into increased output without additional human resources, ensuring operations move seamlessly day and night.

The Walker S2 exemplifies how advanced automation rapidly moves beyond research labs into practical, demanding workplaces. With its autonomous battery-swapping capability, humanoid robots are poised to work extended hours that far exceed human capacity.

The robots do not require coffee breaks or need sleep; they are designed for relentless productivity, marking a significant step towards a future where machines play an even more integral role in daily industrial and societal functions.

Netflix’s AI-driven VFX marks industry milestone

In a significant leap into generative AI, Netflix has incorporated AI-generated footage into the post-production of its original series The Eternaut, marking the first time the streaming giant has used such content in a final scene.

The sequence in question, a dramatic depiction of a building collapsing in Buenos Aires, was created using generative AI, allowing for a rapid and cost-effective production process.

Co-CEO Ted Sarandos emphasised that the AI-generated sequence was completed 10 times faster and more affordably than traditional visual effects methods.

He noted that AI enabled the production team to achieve high-quality visual effects that would have been unfeasible within the show’s budget constraints.

The development highlights Netflix’s commitment to exploring innovative technologies to enhance its content creation processes.

The company aims to streamline production workflows and expand creative possibilities by integrating generative AI. At the same time, the move raises questions about the implications of AI in the entertainment industry, particularly its potential impact on jobs and the authenticity of creative work.

Z.ai unveils cheaper, advanced AI model GLM-4.5

Chinese AI startup Z.ai, formerly Zhipu, is increasing pressure on global competitors with its latest model, GLM-4.5. The company has adopted an aggressive open-source strategy to attract developers. Anyone can download and use the model without licensing fees or platform restrictions.

GLM-4.5 is designed with agentic AI, breaking tasks into smaller components for improved performance. By approaching problems step by step, the model delivers more accurate and efficient outcomes. Z.ai aims to stand out through both technical sophistication and affordability.

CEO Zhang Peng says the model runs on only eight Nvidia H20 chips, while DeepSeek’s model needs sixteen. Nvidia developed the H20 to comply with US export controls aimed at China. Reducing chip demand significantly lowers the model’s operational footprint.

Zhang said the company has enough computing power and is not seeking further hardware now. Z.ai plans to charge 11 cents per million input tokens, undercutting DeepSeek R1’s 14 cents. Output tokens will cost 28 cents per million, compared to DeepSeek’s $2.19.

Such pricing could reshape large language model deployment expectations, especially in resource-limited environments. High costs have long been a barrier to broader AI adoption. Z.ai appears to be positioning itself as a more accessible alternative.
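Taken at face value, the rates above make the cost gap easy to quantify. The sketch below is illustrative arithmetic only: the per-million-token prices come from the figures quoted here, while the workload sizes are assumptions chosen for the example.

```python
def llm_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Total cost in USD, given per-million-token rates."""
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

# Per-million-token rates quoted above (USD)
zai = {"in_rate": 0.11, "out_rate": 0.28}
deepseek = {"in_rate": 0.14, "out_rate": 2.19}

# Hypothetical monthly workload: 100M input tokens, 20M output tokens
print(f"Z.ai:     ${llm_cost(100e6, 20e6, **zai):.2f}")
print(f"DeepSeek: ${llm_cost(100e6, 20e6, **deepseek):.2f}")
```

On this assumed workload, most of the difference comes from output tokens, where the quoted gap (28 cents versus $2.19 per million) is nearly eightfold.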

Founded in 2019, Z.ai has raised more than $1.5 billion from investors including Alibaba, Tencent, and Qiming Venture Partners. It has grown quickly from a research-focused lab to one of China’s most prominent AI contenders. A public listing in Greater China is reportedly being prepared.

OpenAI recently named Zhipu among the Chinese firms it considers strategically significant in global AI development. US authorities responded by restricting American companies from working with Z.ai. The startup has nonetheless continued to expand its model lineup and partnerships.

Chinese firms increasingly invest in open-source models, often with domestic hardware compatibility in mind. Moonshot, another Alibaba-backed company, released the Kimi K2 model. Kimi K2 has received praise for its performance in coding and mathematical tasks.

Tencent has joined the race with its HunyuanWorld-1.0 model, which is built to generate immersive 3D environments. HunyuanWorld-1.0 can accelerate game development, virtual reality design, and simulation work. Cutting-edge features are being paired with highly efficient architectures.

Alibaba also introduced its Qwen3-Coder model to assist in code generation and debugging. Such AI tools are seeing increasing use in software engineering and education. Chinese developers are positioning themselves to compete with Western offerings such as OpenAI’s Codex and Anthropic’s Claude.

The momentum within China’s AI sector is accelerating despite geopolitical and trade restrictions. A clear shift is underway from imitation to innovation, with local startups advancing independent research. Many models are trained on China-specific datasets to optimise relevance and performance.

Z.ai’s strategy combines cost reduction, efficient chip use, and broad availability. The company can build community trust and encourage ecosystem growth by open-sourcing its tools. At the same time, pricing undercuts major rivals and could disrupt the market.

Global AI development is increasingly decentralised, with Chinese firms no longer just playing catch-up. Large-scale funding and state support are helping to close gaps in hardware and training infrastructure. Z.ai is one of several firms pushing toward greater technological autonomy.

Open-source AI development is also helping Chinese companies win favour with developers outside their borders. Many international teams are experimenting with Chinese models to diversify risk and reduce reliance on US tech. Z.ai’s GLM-4.5 is among the models gaining traction globally.

By offering a powerful, lightweight, and affordable model, Z.ai is setting a new benchmark in the industry. The combination of technical refinement and strategic pricing draws attention from investors and users. A new era of AI competition is emerging.

Robot artist Ai-Da explores human self-perception

The world’s first ultra-realistic robot artist, Ai-Da, has been prompting profound questions about human-robot interactions, according to her creator.

Designed in Oxford by Aidan Meller, a modern and contemporary art specialist, and built in the UK by Engineered Arts, Ai-Da is a humanoid robot specifically engineered for artistic creation. She recently unveiled a portrait of King Charles III, adding to her notable portfolio.

Meller stated that working with the robot has evoked ‘lots of questions about our relationship with ourselves.’ He highlighted how Ai-Da’s artwork ‘drills into some of our time’s biggest concerns and thoughts.’

Ai-Da uses cameras in her eyes to capture images, which are then processed by AI algorithms and converted into real-time coordinates for her robotic arm, enabling her to paint and draw.

Mr Meller explained, ‘You can meet her, talk to her using her language model, and she can then paint and draw you from sight.’

He also observed that people’s preconceptions about robots are often outdated: ‘It’s not until you look a robot in the eye and they say your name that the reality of this new sci-fi world that we are now in takes hold.’

Ai-Da’s contributions to the art world continue to grow. She produced and showcased work at the AI for Good Global Summit 2024 in Geneva, Switzerland, an event held under the auspices of the UN. That same year, her triptych of Enigma code-breaker Alan Turing sold for over £1 million at auction.

Her focus this year shifted to King Charles III, chosen because, as Mr Meller noted, ‘With extraordinary strides that are taking place in technology and again, always questioning our relationship to the environment, we felt that King Charles was an excellent subject.’

Buckingham Palace authorised the display of Ai-Da’s portrait of the King, despite the robot never having met him. Ai-Da, connected to the internet, uses extensive data to inform her choice of subjects, with Mr Meller revealing, ‘Uncannily, and rather nerve-rackingly, we just ask her.’

The conversations generated inform the artwork. Ai-Da also painted a portrait of King Charles’s mother, Queen Elizabeth II, in 2023. Mr Meller shared that the most significant realisation from six years of working with Ai-Da was ‘not so much about how human she is but actually how robotic we are.’

He concluded, ‘We hope Ai-Da’s artwork can be a provocation for that discussion.’

AI reshaping the US labour market

AI is often seen as a job destroyer, but it’s also emerging as a significant source of new employment, according to a new Brookings report. The number of job postings mentioning AI has more than doubled in the past year, with demand continuing to surge across various industries and regions.

Over the past 15 years, AI-related job listings have grown nearly 29% annually, far outpacing the 11% growth rate of overall job postings in the broader economy.
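The gap between those two annual rates compounds dramatically over a 15-year window. A quick sketch of the implied cumulative multiples (illustrative arithmetic only, using the rates cited above):

```python
# Cumulative growth implied by the annual rates in the Brookings figures.
ai_rate, overall_rate, years = 0.29, 0.11, 15

ai_multiple = (1 + ai_rate) ** years
overall_multiple = (1 + overall_rate) ** years

print(f"AI postings:      ~{ai_multiple:.0f}x over {years} years")
print(f"Overall postings: ~{overall_multiple:.0f}x over {years} years")
```

At 29% a year, postings multiply roughly 46-fold over 15 years, against roughly 5-fold at 11%.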

Brookings based its findings on data from Lightcast, a labour market analytics firm, and noted rising demand for AI skills across sectors, including manufacturing. According to the US Census Bureau’s Business Trends Survey, the share of manufacturers using AI has jumped from 4% in early 2023 to 9% by mid-2025.

Yet, AI jobs still form a small part of the market. Goldman Sachs predicts widespread AI adoption will peak in the early 2030s, with a slower near-term influence on jobs. ‘AI is visible in the micro labour market data, but it doesn’t dominate broader job dynamics,’ said Joseph Briggs, an economist at Goldman Sachs.

Roles range from AI engineers and data scientists to consultants and marketers learning to integrate AI into business operations responsibly and ethically. In 2025, over 80,000 job postings cited generative AI skills—up from fewer than 4,000 in 2010, Brookings reported, indicating explosive long-term growth.

Job openings involving ‘responsible AI’—those addressing ethical AI use in business and society—are also rising, according to data from Indeed and Lightcast. ‘As AI evolves, so does what counts as an AI job,’ said Cory Stahle of the Indeed Hiring Lab, noting that definitions shift with new business applications.

AI skills carry financial value, too. Lightcast found that jobs requiring AI expertise offer an average salary premium of $18,000, or 28% more annually. Unsurprisingly, tech hubs like Silicon Valley and Seattle dominate AI hiring, but job growth spreads to regions like the Sunbelt and the East Coast.

Mark Muro of Brookings noted that universities play a key role in AI job growth across new regions by fuelling local innovation. AI is also entering non-tech fields such as finance, human resources, and marketing, with more than half of AI-related postings now being outside IT roles.

Muro expects more widespread AI adoption in the next few years, as employers gain clarity on its value, limitations and potential for productivity. ‘There’s broad consensus that AI boosts productivity and economic competitiveness,’ he said. ‘It energises regional leaders and businesses to act more quickly.’

Experts urge broader values in AI development

Since the launch of ChatGPT in late 2022, the private sector has led AI innovation. Major players like Microsoft, Google, and Alibaba—alongside emerging firms such as Anthropic and Mistral—are racing to monetise AI and secure long-term growth in the technology-driven economy.

But during the Fortune Brainstorm AI conference in Singapore this week, experts stressed the importance of human values in shaping AI’s future. Anthea Roberts, founder of Dragonfly Thinking, argued that AI must be built not just to think faster or cheaper, but also to think better.

She highlighted the risk of narrow thinking—national, disciplinary or algorithmic—and called for diverse, collaborative thinking to counter it. Roberts sees potential in human-AI collaboration, which can help policymakers explore different perspectives, boosting the chances of sound outcomes.

Russell Wald, executive director at Stanford’s Institute for Human-Centred AI, called AI a civilisation-shifting force. He stressed the need for an interdisciplinary ecosystem—combining academia, civil society, government and industry—to steer AI development.

‘Industry must lead, but so must academia,’ Wald noted, pointing to universities’ contributions to early research, training, and transparency. Despite widespread adoption, AI scepticism persists due to issues like bias, hallucination, and unpredictable or inappropriate language.

Roberts said most people fall into two camps: those who use AI uncritically, such as students and tech firms, and those who reject it entirely.

She labelled the latter as practising ‘critical non-use’ due to concerns over bias, authenticity and ethical shortcomings in current models. Roberts urged a broader demographic, especially people outside tech hubs like Silicon Valley, to take part in shaping AI’s future.

Wald noted that in designing AI, developers must reflect the best of humanity: ‘Not the crazy uncle at the Thanksgiving table.’

Both experts believe the stakes are high, and the societal benefits of getting AI right are too great to ignore or mishandle. ‘You need to think not just about what people want,’ Roberts said, ‘but what they want to want—their more altruistic instincts.’
