Spotify removes 75 million tracks in AI crackdown

Spotify has confirmed that it removed 75 million tracks in the past year as part of a crackdown on AI-generated spam, deepfakes, and fake artist uploads. The purge, almost half of its total archive, highlights the scale of the problem facing music streaming.

Executives say they are not banning AI outright. Instead, the company is targeting misuse, such as cloned voices of real artists without permission, fake profiles, and mass-uploaded spam designed to siphon royalties.

New measures include a music spam filter, stricter rules on vocal deepfakes, and tools allowing artists to flag impersonation before publication. Spotify is also testing the DDEX disclosure system so creators can indicate whether and how AI was used in their work.

Despite the scale of removals, Spotify insists AI music engagement remains minimal and has not significantly impacted human artists’ revenue. The platform now faces the challenge of balancing innovation with transparency, while protecting both listeners and musicians.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smarter Alexa+ powers Amazon’s new gadgets

Amazon has unveiled a refreshed lineup of devices in New York, designed to work with its new AI-powered assistant Alexa+. The showcase featured Echo speakers, Fire TV devices, a Kindle with a colour display and enhanced Ring and Blink cameras, all set to be released later this year.

After years of investment, the company is seeking to reignite interest in Alexa, adding AI to provide more personalisation and a natural conversational style instead of the more mechanical responses of earlier versions.

New silicon chips promise faster processing across Echo devices, while Ring cameras can now use AI to distinguish between a courier and a potential intruder.

Ring’s founder, Jamie Siminoff, who recently returned to Amazon, demonstrated how updated cameras can assist communities by helping to identify missing dogs through neighbourhood alerts. Siminoff described the effort as turning individual concerns into community action.

Ring devices will be priced between $60 and $350, depending on features, while Blink cameras now offer sharper resolution for indoor and outdoor monitoring.

Amazon’s device chief, Panos Panay, presented the new Kindle Scribe, a $630 tablet with stylus support, and the first Kindle with a colour screen, which offered a paper-like writing feel.

Updated Fire TV sets and a $40 streaming stick also integrate Alexa+, enabling users to search scenes or retrieve information about actors through voice commands instead of traditional menus.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI impact on employment still limited

A new study by Yale’s Budget Lab suggests AI has yet to cause major disruption in the US labour market. Researchers found little evidence that generative AI has significantly altered employment patterns since the launch of ChatGPT nearly three years ago.

The report shows that the mix of occupations has shifted slightly faster than in previous periods, but not dramatically. Many of the changes appear to have begun before generative AI became widely available, suggesting broader economic trends are at play.

US-based industry-level analysis revealed more noticeable shifts in information and financial services. Yet these changes, too, reflect longer-term developments rather than sudden AI-driven disruption. Overall, employment and unemployment levels show no clear link to AI exposure or adoption.

The researchers stress that impacts may take longer to materialise, as seen with past technologies such as computers and the internet. They call for better data from AI developers and continued monitoring to capture longer-term effects on workers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sora 2.0 release reignites debate on intellectual property in AI video

OpenAI has launched Sora 2.0, the latest version of its video generation model, alongside an iOS app available by invitation in the US and Canada. The tool offers advances in physical realism, audio-video synchronisation, and multi-shot storytelling, with built-in safeguards for security and identity control.

The app allows users to create, remix, or appear in clips generated from text or images. A Pro version, web interface, and developer API are expected soon, extending access to the model.

Sora 2.0 has reignited debate over intellectual property. According to The Wall Street Journal, OpenAI has informed studios and talent agencies that their universes could appear in generated clips unless they opt out.

The company defends its approach as an extension of fan creativity, while stressing that real people’s images and voices require prior consent, validated through a verified cameo system.

By combining new creative tools with identity safeguards, OpenAI aims to position Sora 2.0 as a leading platform in the fast-growing market for AI-generated video.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How AI is transforming healthcare and patient management

AI is moving from theory to practice in healthcare. Hospitals and clinics are adopting AI to improve diagnostics, automate routine tasks, support overworked staff, and cut costs. A recent GoodFirms survey shows strong confidence that AI will become essential to patient care and health management.

Survey findings reveal that nearly all respondents believe AI will transform healthcare. Robotic surgery, predictive analytics, and diagnostic imaging are gaining momentum, while digital consultations and wearable monitors are expanding patient access.

AI-driven tools are also helping reduce human errors, improve decision-making, and support clinicians with real-time insights.

Challenges remain, particularly around data privacy, transparency, and the risk of over-reliance on technology. Concerns about misdiagnosis, lack of human empathy, and job displacement highlight the need for responsible implementation.

Even so, the direction is clear: AI is set to be a defining force in healthcare’s future, enabling more efficient, accurate, and equitable systems worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Doctors and nurses outperform AI in patient triage

Human staff are more accurate than AI in assessing patient urgency in emergency departments, according to research presented at the European Emergency Medicine Congress in Barcelona.

The study, led by Dr Renata Jukneviciene of Vilnius University, tested ChatGPT 3.5 against doctors and nurses using real case studies.

Doctors achieved an overall accuracy of 70.6% and nurses 65.5%, compared with 50.4% for AI. Doctors also outperformed AI in surgical and therapeutic cases, while nurses were more reliable than the AI overall.

AI did show strength in recognising the most critical cases, surpassing nurses in both accuracy and specificity. Researchers suggested that AI may help prioritise life-threatening situations and support less experienced staff instead of acting as a replacement.

However, over-triaging by AI could lead to inefficiencies, making human oversight essential.

Future studies will explore newer AI models, ECG interpretation, and integration into nurse training, particularly in mass-casualty scenarios.

Commenting on the findings, Dr Barbra Backus from Amsterdam said AI has value in certain areas, such as interpreting scans, but it cannot yet replace trained staff for triage decisions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Athens Democracy Forum highlights AI challenge for democracy

The 2025 Athens Democracy Forum opened in Athens with a dedicated session on AI, ethics and democracy, co-organised by Kathimerini in partnership with The New York Times.

Held at the Athens Conservatoire, the event placed AI at the heart of discussions on the future of democratic governance.

Speakers underlined the urgency of addressing systemic challenges created by AI.

Achilleas Tsaltas, president of the Democracy & Culture Foundation, described AI as the central concern of the era. At the same time, Greece’s minister of digital governance, Dimitris Papastergiou, warned that AI should remain a servant instead of becoming a master.

Axel Dauchez, founder of Make.org, pointed to the conflict between democratic and authoritarian governance models and called for stronger civic education.

The opening panel brought together academics such as Oxford’s Stathis Kalyvas and Yale’s Hélène Landemore, who examined how AI affects liberal democracies, global inequalities and political accountability.

Discussions concluded with a debate on Aristotle’s ethics as a framework for evaluating opportunities and risks in AI development, moderated by Stephen Dunbar-Johnson of The New York Times.

The session continues with panels on the AI transformation blueprint of Greece, regulation of AI, and the emerging concept of AI sovereignty as a business model.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital on Day 6 of UNGA80: Global AI governance, technology equity, and closing the digital divide


Welcome to the fifth daily report from the General Debate at the 80th session of the UN General Assembly (UNGA80). Our daily hybrid AI–human reports bring you a concise overview of how world leaders are framing the digital future.

Day 6 discussions centred on the transformative potential and urgent risks of AI, emphasising that while AI can boost development, health, education, and productivity, especially in least developed countries, it must be governed responsibly to prevent inequality, bias, and insecurity. Calls for a global AI framework were echoed in various statements, alongside broader appeals for inclusive digital cooperation, accelerated technology transfer, and investment in infrastructure, literacy, and talent development. Speakers warned that digital disruption is deepening geopolitical divides, with smaller and developing nations demanding a voice in shaping emerging governance regimes. Bridging the digital divide, advancing secure and rights-based technologies, and protecting against cybercrime were framed as essential.

To keep the highlights clear and accessible, we leave them in bullet points — capturing the key themes and voices as they emerge.


Artificial intelligence

Responsible AI governance

  • AI presents both unprecedented opportunities and profound challenges, and if harnessed responsibly, it can accelerate development, improve health and education, and unlock economic growth. Without clear governance, AI risks deepening inequalities and undermining security. A global framework is called for to ensure AI is ethical, inclusive, and accessible to all nations, enabling it to serve as a force for development rather than division. (Malawi)
  • AI is a tool that must be harnessed for all humankind, equally and in a controlled manner, as opportunities are vast, including for farmers, city planning, and disaster risk management. (President of the General Assembly)
  • The risks of AI are becoming more prevalent, and age-old biases are being perpetuated by algorithms, as seen in the targeting of women and girls by sexually related deepfakes. (President of the General Assembly)
  • Discussions on AI lend further prudence to the argument that ‘we are better together,’ and few would be comfortable leaving the benefits or risks of this immense resource in the hands of a few. (President of the General Assembly)
  • International cooperation remains essential to establishing comprehensive regulations governing the use and development of AI. (Timor-Leste)

AI for development and growth

  • The transformative potential of science, technology, and AI should be harnessed for national and global development. Malawi is optimistic that AI will usher in a new era of enhanced productivity for its citizens, helping to propel the country’s development trajectory. (Malawi)
  • Advancing AI and digital capabilities in LDCs is imperative, requiring investment in digital infrastructure and enhancing digital literacy, implementing e-government initiatives, promoting AI research and innovation, cultivating talent and establishing a policy framework. (Timor-Leste)
  • Making AI a technology that benefits all is an important issue agreed upon in the Global Digital Compact, which also covers peace and security, sustainable development, climate change, and digital cooperation. (Djibouti) 
  • Emphasis was placed on national strength in AI, clean technologies, critical minerals and digital innovation. (Canada)

Global digital governance

  • Nepal advocates for a global digital cooperation framework that ensures access to infrastructures, digital literacy, and data protection for all. (Nepal)
  • Digital and technological disruption is converging with other crises, such as climate catastrophe and widening inequality, and demands renewed collective action: a resolve to fortify the founding values of the UN and a revitalised, transformed organisation. (Malawi, Nepal, Holy See)

Digital technologies and development

Addressing the digital divide and inequality

  • Rapid technological, geopolitical, and environmental shifts are ushering in a new, multipolar global order that offers both opportunities and risks, and smaller states must not be sidelined but fully heard in shaping it. (Benin)
  • The development gap has expanded between the North and the South despite technological revolutions. (Algeria)
  • Digital transformations deserve urgent global attention, and technology must be inclusive, secure, and rights-based. (Nepal)
  • It is crucial to narrow the digital divide within and among countries to create a peaceful and equitable society. (Nepal)
  • Policies and programmes for technologies and progress should be within the reach of everyone for the good of everyone. (Nicaragua)

Technology transfer 

  • The gap between rich and poor nations continues to widen, and developing countries struggle with limited technology transfer and low productivity. (Malawi)
  • The full and effective implementation of the Paris Agreement should include ensuring equitable access to sustainable technologies. (Malawi)
  • The international community is called upon to foster an environment that supports inclusive growth and harnesses the transformative potential of science and technology, and AI. (Malawi)
  • A comprehensive and inclusive approach is needed to address the pressing challenges in the Mediterranean, making economic development on the Southern Front a shared priority through investment and technology transfer. (Algeria)
  • Technology transfer must be accelerated, with calls for scaled-up, predictable and accessible technology transfer and capacity building for countries on the front line, particularly LDCs. (Nepal)

Cybersecurity

  • Safeguarding cybersecurity is imperative alongside the advancement of AI and digital capabilities in LDCs. (Timor-Leste)
  • Russia has sought to undermine Moldova’s sovereignty through illicit financing, disinformation, cyberattacks, and voter intimidation. (Moldova)

For other topics discussed, head over to our dedicated UNGA80 page, where you can explore more insights from the General Debate.

The General Debate at the 80th session of the UN General Assembly brings together high-level representatives from across the globe to discuss the most pressing issues of our time. The session took place against the backdrop of the UN’s 80th anniversary, serving as a moment for both reflection and a forward-looking assessment of the organisation’s role and relevance.

Anthropic unveils Claude Sonnet 4.5 as the best AI coding model yet

Anthropic has released Claude Sonnet 4.5, its most advanced AI model yet, claiming state-of-the-art results in coding benchmarks. The company says the model can build production-ready applications, rather than limited prototypes, making it more reliable than earlier versions.

Claude Sonnet 4.5 is available through the Claude API and chatbot at the same price as its predecessor, with $3 per million input tokens and $15 per million output tokens.
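
For readers wanting a sense of what those rates mean in practice, here is a minimal, purely illustrative Python sketch that converts the quoted per-million-token prices into a per-request cost estimate; the token counts in the example are hypothetical, not figures from Anthropic.

# Illustrative estimate of a single request's cost under Claude Sonnet 4.5's
# quoted API pricing: $3 per million input tokens, $15 per million output tokens.
# The token counts below are hypothetical examples, not Anthropic figures.

INPUT_USD_PER_MILLION = 3.00
OUTPUT_USD_PER_MILLION = 15.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in US dollars for one API request."""
    return (input_tokens / 1_000_000) * INPUT_USD_PER_MILLION + \
           (output_tokens / 1_000_000) * OUTPUT_USD_PER_MILLION

# Example: a 20,000-token prompt producing 4,000 tokens of output
# works out to roughly $0.12.
print(f"${estimate_cost(20_000, 4_000):.2f}")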

Early enterprise tests suggest the model can autonomously code for extended periods, integrate databases, secure domains, and perform compliance checks such as SOC 2 audits.

Industry leaders have endorsed the launch, with Cursor and Windsurf calling it a new generation of AI coding models. Anthropic also emphasises stronger alignment, noting reduced risks of deception and sycophancy and improved resistance to prompt injection attacks.

Alongside the model, the company has introduced a Claude Agent SDK to let developers build customised agents, and launched ‘Imagine with Claude’, a research preview showing real-time code generation.

The release highlights the intense competition in AI, with Anthropic pushing frequent updates to keep pace with rivals such as OpenAI, which has recently gained ground on coding performance with GPT-5.

Claude Sonnet 4.5 follows just weeks after Anthropic’s Claude Opus 4.1, underlining the rapid development cycles driving the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour, categories not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!