Labels and Spotify align on artist-first AI safeguards

Spotify partners with major labels on artist-first AI tools, putting consent and copyright at the centre of product design. The plan aims to align new features with transparent labelling and fair compensation while addressing concerns about generative music flooding platforms.

The collaboration with Sony, Universal, Warner, and Merlin will give artists control over participation in AI experiences and how their catalogues are used. Spotify says it will prioritise consent, clearer attribution, and rights management as it builds new tools.

Early direction points to expanded labelling via DDEX, stricter controls against mass AI uploads, and protections against search and recommendation manipulation. Spotify’s AI DJ and prompt-based playlists hint at how engagement features could evolve without sidelining creators.
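
As a rough illustration of how such labelling could travel with release metadata, the Python sketch below mocks up an AI-disclosure tag in a DDEX-style feed. The element names (AiDisclosure, GenerativeAiUsed, ArtistConsent) are hypothetical placeholders, not the real DDEX ERN schema.

```python
# Hypothetical sketch of an AI-disclosure tag in a DDEX-style release
# message. Element names are illustrative, not the actual DDEX ERN schema.
import xml.etree.ElementTree as ET

release = ET.Element("Release")
ET.SubElement(release, "Title").text = "Example Track"

ai = ET.SubElement(release, "AiDisclosure")          # hypothetical element
ET.SubElement(ai, "GenerativeAiUsed").text = "true"  # whole or partial AI use
ET.SubElement(ai, "ArtistConsent").text = "granted"  # consent status flag

print(ET.tostring(release, encoding="unicode"))
```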

Future products are expected to let artists opt in, monitor usage, and manage when their music feeds AI-generated works. Rights holders and distributors would gain better tracking and payment flows as transparency improves across the ecosystem.

Industry observers say the tie-up could set a benchmark for responsible AI in music if enforcement matches ambition. By moving in step with labels, Spotify is pitching a path where innovation and artist advocacy reinforce rather than undermine each other.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft warns of a surge in ransomware and extortion incidents

Financially motivated cybercrime now accounts for the majority of global digital threats, according to Microsoft’s latest Digital Defense Report.

The company’s analysts found that over half of all cyber incidents with known motives in the past year were driven by extortion or ransomware, while espionage represented only a small fraction.

Microsoft warns that automation and accessible off-the-shelf tools have allowed criminals with limited technical skills to launch widespread attacks, making cybercrime a constant global threat.

The report reveals that attackers increasingly target critical services such as hospitals and local governments, where weak security and pressing operational demands leave them especially vulnerable.

Cyberattacks on these sectors have already led to real-world harm, from disrupted emergency care to halted transport systems. Microsoft highlights that collaboration between governments and private industry is essential to protect vulnerable sectors and maintain vital services.

While profit-seeking criminals dominate by volume, nation-state actors are also expanding their reach. State-sponsored operations are growing more sophisticated and unpredictable, with espionage often intertwined with financial motives.

Some state actors even exploit the same cybercriminal networks, complicating attribution and increasing risks for global organisations.

Microsoft notes that AI is being used by both attackers and defenders. Criminals are employing AI to refine phishing campaigns, generate synthetic media and develop adaptive malware, while defenders rely on AI to detect threats faster and close security gaps.

The report urges leaders to prioritise cybersecurity as a strategic responsibility, adopt phishing-resistant multifactor authentication, and build strong defences across industries.
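
To illustrate why phishing-resistant MFA such as WebAuthn/FIDO2 defeats credential theft, the toy Python sketch below checks the origin that the browser records during authentication. The origin value and flow are simplified assumptions; real deployments also verify a cryptographic signature via a vetted library.

```python
# Toy sketch of the origin-binding check at the heart of phishing-resistant
# MFA (e.g. WebAuthn/FIDO2). It only illustrates why a lookalike domain
# cannot replay a credential; production code must also verify signatures.
import json

EXPECTED_ORIGIN = "https://bank.example"  # assumed relying-party origin

def verify_assertion(client_data_json: bytes, expected_challenge: str) -> bool:
    client_data = json.loads(client_data_json)
    # The browser, not the user, records the origin, so a phishing page
    # at a lookalike domain produces a mismatch and verification fails.
    if client_data.get("origin") != EXPECTED_ORIGIN:
        return False
    # The challenge binds the response to this login attempt (no replay).
    return client_data.get("challenge") == expected_challenge

# A response captured on a phishing domain is rejected:
phished = json.dumps({"origin": "https://bank-example.evil",
                      "challenge": "abc123"}).encode()
assert verify_assertion(phished, "abc123") is False
```

Unlike one-time codes, which a user can be tricked into typing on a fake page, this check fails automatically on any domain other than the genuine one.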

Security, Microsoft concludes, must now be treated as a shared societal duty rather than an isolated technical task.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Capita hit with £14 million fine after major data breach

The UK outsourcing firm Capita has been fined £14 million after a cyber-attack exposed the personal data of 6.6 million people. Sensitive information, including financial details, home addresses, passport images, and criminal records, was compromised.

Initially, the fine was £45 million, but it was reduced after Capita improved its cybersecurity, supported affected individuals, and engaged with regulators.

The breach affected 325 of the 600 pension schemes Capita manages, highlighting the risks for organisations that handle sensitive data at scale.

The Information Commissioner’s Office (ICO) criticised Capita for failing to secure personal information, emphasising that proper security measures could have prevented the incident.

Experts note that holding companies financially accountable reinforces the importance of data protection and sends a message to the market.

Capita’s CEO said the company has strengthened its cyber defences and remains vigilant to prevent future breaches.

The UK government has advised companies like Capita to prepare contingency plans following a rise in nationally significant cyberattacks, a trend also seen at Co-op, M&S, Harrods, and Jaguar Land Rover earlier in the year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI at scale with Salesforce and AWS

Salesforce and AWS outlined a tighter partnership on agentic AI, citing rapid growth in enterprise agents and usage. They set four pillars for the ‘Agentic Enterprise’: unified data, interoperable agents, modernised contact centres and streamlined procurement via AWS Marketplace.

Data 360 ‘Zero Copy’ accesses Amazon Redshift without duplication, while Data 360 Clean Rooms integrate with AWS Clean Rooms for privacy-preserving collaboration. 1-800Accountant reports that agents resolve most routine inquiries, freeing human experts to focus on higher-value work.

Agentforce supports open standards such as Model Context Protocol and Agent2Agent to coordinate multi-vendor agents. Pilots link Bedrock-based agents and Slack integrations that surface Quick Suite tools, with Anthropic and Amazon Nova models available inside Salesforce’s trust boundary.
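
For readers curious what a Model Context Protocol tool looks like in practice, here is a minimal sketch assuming the official `mcp` Python SDK. The server name and `lookup_order` tool are invented for illustration and are not part of any Salesforce or AWS product.

```python
# Minimal sketch of a tool server speaking the Model Context Protocol,
# assuming the official `mcp` Python SDK (pip install mcp).
from mcp.server.fastmcp import FastMCP

server = FastMCP("catalogue-agent")

@server.tool()
def lookup_order(order_id: str) -> str:
    """Return the status of an order; a real agent would query a backend."""
    return f"Order {order_id}: shipped"  # stubbed response for the sketch

if __name__ == "__main__":
    server.run()  # serves over stdio so any MCP-capable agent can call the tool
```

Because the protocol is an open standard, a tool exposed this way can, in principle, be called by agents from different vendors rather than being locked to one platform.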

Contact centres extend agentic workflows through Salesforce Contact Center with Amazon Connect, adding voice self-service plus real-time transcription and sentiment analysis. Complex issues hand off to human representatives with full context, and Toyota Motor North America plans to automate service tasks.

Procurement scales via AWS Marketplace, where Salesforce surpassed $2bn in lifetime sales across 30 countries. AgentExchange listings provide prebuilt, customisable agents and workflows, helping enterprises adopt agentic AI faster with governance and security intact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ISO 27701 update strengthens privacy compliance

The International Organization for Standardization has released a major update to ISO 27701, the global standard for managing privacy compliance programmes. The revised version, published in 2025, establishes the Privacy Information Management System (PIMS) as a standalone framework rather than an extension of ISO 27001.

The updated standard introduces detailed clauses defining how organisations should establish, implement and continually improve their PIMS. It places strong emphasis on leadership accountability, risk assessment, performance evaluation and continual improvement.

Annex A of the standard sets out new control tables for both data controllers and processors. The update also refines terminology and aligns more closely with the principles of the EU GDPR and UK GDPR, making it suitable for multinational organisations seeking a unified privacy management approach.

Experts say the revised ISO 27701 offers a flexible structure but should not be seen as a substitute for legal compliance. Instead, it provides a foundation for building stronger, auditable privacy frameworks that align global business operations with evolving regulatory standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vietnam unveils draft AI law inspired by EU model

Vietnam is preparing to become one of Asia’s first nations with a dedicated AI law, following the release of a draft bill that mirrors key elements of the EU’s AI Act. The proposal aims to consolidate rules for AI use, strengthen rights protections and promote innovation.

The law introduces a four-tier system for classifying risks, from banned applications such as manipulative facial recognition to low-risk uses subject to voluntary standards. High-risk systems, including those in healthcare or finance, would require registration, oversight and incident reporting to a national database.
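
As a loose illustration of how such a tier system might be encoded, the Python sketch below maps example use cases to obligations. The tier names and mappings are assumptions drawn from this article, not the draft bill's legal text.

```python
# Illustrative sketch of a four-tier AI risk scheme. Tier names and
# example mappings are assumptions, not the draft law's actual wording.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"            # e.g. manipulative facial recognition
    HIGH = "registration + oversight"  # e.g. healthcare, finance
    MEDIUM = "transparency duties"     # assumed middle tier
    LOW = "voluntary standards"

USE_CASE_TIER = {
    "manipulative_facial_recognition": RiskTier.UNACCEPTABLE,
    "clinical_diagnosis": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "spam_filtering": RiskTier.LOW,
}

def obligations(use_case: str) -> str:
    tier = USE_CASE_TIER.get(use_case, RiskTier.MEDIUM)
    return f"{use_case}: {tier.name} -> {tier.value}"

print(obligations("clinical_diagnosis"))  # clinical_diagnosis: HIGH -> ...
```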

Under the proposal, companies deploying powerful general-purpose AI models must meet strict transparency, safety and intellectual property standards. The law would also create a National AI Commission and a National AI Development Fund to support local research, sandboxes and tax incentives for emerging businesses.

Violations involving unsafe AI systems could lead to revenue-based fines and suspensions. The phased rollout begins in January 2026, with full compliance for high-risk systems expected by mid-2027. The government of Vietnam says the initiative reflects its ambition to build a trustworthy AI ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government uses AI to boost efficiency and save taxpayer money

The UK government has developed an AI tool, named ‘Consult’, which analysed over 50,000 responses to the Independent Water Commission review in just two hours. The system matched human accuracy and could save 75,000 days of work annually, worth £20 million in staffing costs.

Consult sorted responses into key themes at a cost of just £240, with experts needing only 22 hours to verify the results. The AI agreed with human experts 83% of the time, versus 55% agreement between the human reviewers themselves, letting officials focus on policy instead of administrative work.
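
The 83% and 55% figures read as simple percent agreement, i.e. the share of responses two reviewers file under the same theme. A toy Python sketch, with made-up labels:

```python
# Back-of-envelope sketch of the percent-agreement figures the article
# cites (83% AI-vs-human, 55% human-vs-human). Theme labels are made up.
def percent_agreement(a: list[str], b: list[str]) -> float:
    """Share of items two reviewers placed in the same theme."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

ai    = ["pricing", "quality", "pricing", "access",  "quality", "access"]
human = ["pricing", "quality", "pricing", "quality", "quality", "access"]
print(f"{percent_agreement(ai, human):.0%}")  # 83% on this toy sample
```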

The technology has also been used to analyse consultations for the Scottish government on non-surgical cosmetics and the Digital Inclusion Action Plan. Part of the Humphrey suite, the tool helps government act faster and deliver better value for taxpayers.

Digital Government Minister Ian Murray highlighted the potential of AI to deliver efficient services and save costs. Engineers are using insights from Consult and Redbox to develop new tools, including GOV.UK Chat, a generative AI chatbot soon to be trialled in the GOV.UK App.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Humanity AI launches $500M initiative to build a people-centred future

A coalition of ten leading philanthropic foundations has pledged $500 million over five years to ensure that AI evolves in ways that strengthen humanity rather than marginalise it.

The initiative, called Humanity AI, brings together organisations such as the Ford, MacArthur, Mellon, and Mozilla foundations to promote a people-driven vision for AI that enhances creativity, democracy, and security.

As AI increasingly shapes every aspect of daily life, the coalition seeks to place citizens at the centre of the conversation instead of leaving decisions to a few technology firms.

It plans to support new research, advocacy, and partnerships that safeguard democratic rights, protect creative ownership, and promote equitable access to education and employment.

The initiative also prioritises the ethical use of AI in safety and economic systems, ensuring innovation does not come at the expense of human welfare.

John Palfrey, president of the MacArthur Foundation, said Humanity AI aims to shift power back to the public by funding technologists and advocates committed to responsible innovation.

Michele Jawando of the Omidyar Network added that the future of AI should be designed by people collectively, not predetermined by algorithms or corporate agendas.

Rockefeller Philanthropy Advisors will oversee the fund, which begins issuing grants in 2026. Humanity AI invites additional partners to join in creating a future where people shape technology instead of being shaped by it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adult erotica tests OpenAI’s safety claims

OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. CEO Sam Altman framed the shift as a pledge to ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.

The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.

Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy tensions around document uploads and potential errors.

It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.

Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI forms Expert Council to guide well-being in AI

OpenAI has announced the establishment of an Expert Council on Well-Being and AI to help it shape ChatGPT, Sora and other products in ways that promote healthier interactions and better emotional support.

The council comprises eight distinguished figures from psychology, psychiatry, human-computer interaction, developmental science and clinical practice.

Members include David Bickham (Digital Wellness Lab, Harvard), Munmun De Choudhury (Georgia Tech), Tracy Dennis-Tiwary (Hunter College), Sara Johansen (Stanford), Andrew K. Przybylski (University of Oxford), David Mohr (Northwestern), Robert K. Ross (public health) and Mathilde Cerioli (everyone.AI).

OpenAI says this new body will meet regularly with internal teams to examine how AI should function in ‘complex or sensitive situations’, advise on guardrails, and explore what constitutes well-being in human-AI interaction. For example, the council has already influenced how parental controls and distress notifications for teen users were prioritised.

OpenAI emphasises that it remains accountable for its decisions, but commits to ongoing learning through this council, the Global Physician Network, policymakers and experts. The company notes that different age groups, especially teenagers, use AI tools differently, hence the need for tailored insights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!