Spotify removes 75 million tracks in AI crackdown

Spotify has confirmed that it removed 75 million tracks in the past year as part of a crackdown on AI-generated spam, deepfakes, and fake artist uploads. The removals, amounting to almost half of everything ever uploaded to the platform, highlight the scale of the problem facing music streaming.

Executives say they are not banning AI outright. Instead, the company is targeting misuse, such as cloned voices of real artists without permission, fake profiles, and mass-uploaded spam designed to siphon royalties.

New measures include a music spam filter, stricter rules on vocal deepfakes, and tools allowing artists to flag impersonation before publication. Spotify is also testing an AI-disclosure standard developed through DDEX, so creators can indicate whether and how AI was used in their work.
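
For a rough sense of what such a disclosure record might carry, here is a minimal Python sketch. The field names are hypothetical stand-ins for illustration only, not DDEX's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical AI-use disclosure record, loosely inspired by the idea of
# DDEX-style metadata. All field names are illustrative, not the real standard.
@dataclass
class AIDisclosure:
    track_id: str
    ai_used: bool
    # Which parts of the work involved AI, e.g. "vocals", "instrumentation",
    # "post-production"; empty when ai_used is False.
    ai_contributions: list[str] = field(default_factory=list)
    # Free-text note on how AI was applied (tool used, extent of human editing).
    description: str = ""

disclosure = AIDisclosure(
    track_id="TRK-0001",
    ai_used=True,
    ai_contributions=["instrumentation"],
    description="Backing track generated with an AI tool, then edited by hand.",
)
print(disclosure)
```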

Despite the scale of the removals, Spotify insists that engagement with AI-generated music remains minimal and has not significantly affected human artists’ revenue. The platform now faces the challenge of balancing innovation with transparency, while protecting both listeners and musicians.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this fall, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Claude Sonnet 4.5 expands developer options with rollbacks and longer-running agents

Anthropic has released Claude Sonnet 4.5, featuring a suite of upgrades designed to enhance coding, automation, and creativity. The update improves Claude Code, extends Computer Use, and introduces experimental tools to boost productivity and support real-world applications.

Claude Code now features checkpoints, allowing developers to roll back projects to earlier versions. The Claude API has also been expanded to support longer-running agents, and Claude can now generate files such as slides, spreadsheets, and documents directly within chats.
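
As a minimal sketch of what calling the new model through Anthropic's Python SDK could look like (the model identifier below is an assumption based on Anthropic's usual naming and should be checked against the official model list):

```python
import anthropic

# Minimal sketch: a single Messages API call to Claude Sonnet 4.5.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-5",  # assumed alias; verify against Anthropic's docs
    max_tokens=1024,
    messages=[
        {"role": "user",
         "content": "Outline a three-slide deck on EU semiconductor policy."}
    ],
)

# The response is a list of content blocks; the first holds the text.
print(message.content[0].text)
```

Longer-running agent workflows build on this same call pattern, with the model invoking tools across many such turns.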

The model’s Computer Use function has been strengthened, enabling agents to operate applications for up to 30 hours autonomously. Anthropic says Claude Sonnet 4.5 built a Slack-style app with 11,000 lines of code in one session.

A new feature, Imagine with Claude, focuses on generating creative software. The system produced a Shakespeare-themed desktop with customised scripts and performance schedules from a single prompt, highlighting its versatility.

Anthropic has maintained steady pricing for free and premium users, positioning Sonnet 4.5 as its most practical and feature-rich release yet, combining reliability with expanded creative and developer-friendly tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour, incident categories not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Semicon Coalition unites EU on chip strategy and autonomy

European ministers have signed the Declaration of the Semicon Coalition, calling for a revised EU Chips Act 2.0 to boost semiconductor resilience, innovation, and competitiveness. The declaration outlines five priorities: collaboration, investment, skills, sustainability, and global partnerships.

The coalition, launched by the Netherlands in March, includes Austria, Belgium, Finland, France, Germany, Italy, Poland, and Spain. Other EU states joined today in Brussels, where Dutch minister Vincent Karremans presented the declaration to the European Commission.

Over fifty leading European and international semiconductor players have endorsed the declaration. This support strengthens momentum for placing end-markets at the core of the EU’s semiconductor strategy and aligns with Mario Draghi’s report on competitiveness.

The priorities include aligning EU and national funding, accelerating approvals for strategic projects, building a skilled talent pipeline, and promoting circular, energy-efficient manufacturing. International partnerships will also be deepened while safeguarding European strategic autonomy.

Minister Karremans said the strategy demonstrates Europe’s response to global tensions and its commitment to boosting semiconductor capacity, research funding, and readiness for demand in AI, automotive, energy, and defence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sam Altman predicts AGI could arrive before 2030

OpenAI CEO Sam Altman has warned that AI could soon automate up to 40 per cent of the tasks humans currently perform. He made the remarks in an interview with the German newspaper Die Welt, highlighting the potential economic shift AI will trigger.

Altman described OpenAI’s latest model, GPT-5, as the most advanced yet and claimed it is ‘smarter than me and most people’. He said artificial general intelligence (AGI), capable of outperforming humans in all areas, could arrive before 2030.

Instead of focusing on job losses, Altman suggested examining the percentage of tasks that AI will automate. He predicted that 30 to 40 per cent of tasks currently carried out by humans may soon be completed by AI systems.

These comments contribute to the growing debate about the societal impact of AI, with mass layoffs already being linked to automation. Altman emphasised that this wave of change will reshape economies and workplaces, requiring businesses and governments to prepare for disruption.

As AGI approaches, Altman urged individuals to focus on acquiring in-demand skills to stay relevant in an AI-enabled economy. The relationship between humans and machines, he said, will be permanently reshaped by these developments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Qwen3-Omni tops Hugging Face as China’s open AI challenge grows

Alibaba’s Qwen3-Omni multimodal AI system has quickly risen to the top of Hugging Face’s trending model list, challenging closed systems from OpenAI and Google. The series unifies text, image, audio, and video processing in a single model, signalling the rapid growth of Chinese open-source AI.

Qwen3-Omni-30B-A3B currently leads Hugging Face’s list, followed by the image-editing model Qwen-Image-Edit-2509. Alibaba’s cloud division describes Qwen3-Omni as the first fully integrated multimodal AI framework built for real-world applications.

Self-reported benchmarks suggest Qwen3-Omni outperforms Qwen2.5-Omni-7B, OpenAI’s GPT-4o, and Google’s Gemini-2.5-Flash in audio recognition, comprehension, and video understanding tasks.

Open-source dominance is growing, with Alibaba’s models taking half of the top 10 spots in Hugging Face’s rankings. Tencent, DeepSeek, and OpenBMB fill most of the remaining positions, leaving IBM as the only Western representative.

The ATOM Project warned that US leadership in AI could erode as open models from China gain adoption. It argued that China’s approach draws businesses and researchers away from American systems, which have become increasingly closed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The strategic shift toward open-source AI

The release of DeepSeek’s open-source reasoning model in January 2025, followed by the Trump administration’s July endorsement of open-source AI as a national priority, has marked a turning point in the global AI race, writes Jovan Kurbalija in his blog ‘The strategic imperative of open source AI’.

What once seemed an ideological stance is now being reframed as a matter of geostrategic necessity. Despite their historical reliance on proprietary systems, China and the United States have embraced openness as the key to competitiveness.

Kurbalija adds that history offers clear lessons that open systems tend to prevail. Just as TCP/IP defeated OSI in the 1980s and Linux outpaced costly proprietary operating systems in the 1990s, today’s open-source AI models are challenging closed platforms. Companies like Meta and DeepSeek have positioned their tools as the new foundations of innovation, while proprietary players such as OpenAI are increasingly seen as constrained by their closed architectures.

The advantages of open-source AI are not only philosophical but practical. Open models evolve faster through global collaboration, lower costs by sharing development across vast communities, and attract younger talent motivated by purpose and impact.

They are also more adaptable, making them easier to integrate into industries, education, and governance. Importantly, breakthroughs in efficiency show that smaller, smarter models can now rival giant proprietary systems, further broadening access.

The momentum is clear. Open-source AI is emerging as the dominant paradigm. Like the internet protocols and operating systems that shaped previous digital eras, openness is proving both more ethical and more strategically effective than closed alternatives. As researchers, governments, and companies increasingly adopt this approach, open-source AI could become the backbone of the next phase of the digital world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk escalates legal battle with new lawsuit against OpenAI

Elon Musk’s xAI has sued OpenAI, alleging a coordinated and unlawful campaign to steal its proprietary technology. The complaint claims OpenAI targeted xAI employees to obtain source code, training methods, and data centre strategies.

The lawsuit claims OpenAI recruiter Tifa Chen offered large compensation packages to engineers who then allegedly uploaded xAI’s source code to personal devices. Notable incidents include Xuechen Li confessing to code theft and Jimmy Fraiture allegedly transferring confidential files repeatedly via AirDrop.

Legal experts note the case centres on employee poaching and the definition of xAI’s ‘secret sauce’, including GPU racking, vendor contracts, and operational playbooks.

Liability may depend on whether OpenAI knowingly directed recruiters, while the company could defend itself by showing independent creation with time-stamped records.

xAI is seeking damages, restitution, and injunctions requiring OpenAI to remove its materials and destroy models built using them. The lawsuit is Musk’s latest legal action against OpenAI, following a recent antitrust suit against Apple and OpenAI over alleged market dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants warn Digital Markets Act is failing

Apple and Google have urged the European Union to revisit its Digital Markets Act, arguing the law is damaging users and businesses.

Apple said the rules have forced delays to new features for European customers, including live translation on AirPods and improvements to Apple Maps. It warned that competition requirements could weaken security and slow innovation without boosting the EU economy.

Google raised concerns that its search results must now prioritise intermediary travel sites, leading to higher costs for consumers and fewer direct sales for airlines and hotels. It added that AI services may arrive in Europe up to a year later than elsewhere.

Both firms stressed that enforcement should be more consistent and user-focused. The European Commission is reviewing the Act, with formal submissions under consideration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!