French streaming platform Deezer has opened access to its AI music detection tool for rival services, including Spotify. The move follows mounting concern in France and across the industry over the rapid rise of synthetic music uploads.
Deezer said around 60,000 AI-generated tracks are uploaded daily, with 13.4 million detected in 2025. In France, the company has already demonetised 85% of AI-generated streams to redirect royalties to human artists.
The tool automatically tags fully AI-generated tracks, removes them from recommendations and flags fraudulent streaming activity. Spotify, which also operates widely in France, has introduced its own measures but relies more heavily on creator disclosure.
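Deezer has not published its implementation, but the three actions described above (tag, exclude from recommendations, flag fraud) amount to a simple classify-then-act pipeline. The sketch below is purely illustrative; the `Track` fields, threshold, and signal names are all hypothetical, not Deezer's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    title: str
    ai_score: float             # hypothetical detector confidence, 0.0 to 1.0
    stream_spike: bool = False  # hypothetical fraud signal
    tags: list = field(default_factory=list)
    recommendable: bool = True

AI_THRESHOLD = 0.95  # assumed cutoff for "fully AI-generated"

def moderate(track: Track) -> Track:
    """Apply the three actions described above: tag, de-recommend, flag."""
    if track.ai_score >= AI_THRESHOLD:
        track.tags.append("ai-generated")
        track.recommendable = False  # drop from recommendations
    if track.stream_spike:
        track.tags.append("fraud-review")  # flag suspicious streaming activity
    return track

synthetic = moderate(Track("Untitled", ai_score=0.99, stream_spike=True))
human = moderate(Track("Live Session", ai_score=0.10))
print(synthetic.tags, synthetic.recommendable)  # ['ai-generated', 'fraud-review'] False
print(human.tags, human.recommendable)          # [] True
```

Note that this binary-threshold shape is exactly why hybrid human/AI tracks are hard: a mid-range score fits neither branch cleanly.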
Challenges remain for Deezer in France and beyond, as the system struggles to identify hybrid tracks mixing human and AI elements. Industry pressure continues to grow for shared standards that balance innovation, transparency and fair payment.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Pentagon officials are at odds with AI developer Anthropic over restrictions designed to prevent autonomous weapons targeting and domestic surveillance. The disagreement has stalled discussions under a $200 million contract.
Anthropic has expressed concern about its tools being used in ways that could harm civilians or breach privacy. The company emphasises that human oversight is essential for national security applications.
The dispute reflects broader tensions between Silicon Valley firms and government use of AI. Pentagon officials argue that commercial AI can be deployed as long as it follows US law, regardless of corporate guidelines.
Anthropic’s stance may affect its Pentagon contracts as the firm prepares for a public offering. The company continues to engage with officials while advocating for ethical AI deployment in defence operations.
Director Darren Aronofsky’s creative studio, Primordial Soup, has released the first episodes of On This Day… 1776, a short-form animated series that uses generative AI technology from Google DeepMind to visualise pivotal events from the American Revolution ahead of the 250th anniversary of the Declaration of Independence.
Episodes are published weekly on TIME’s YouTube channel throughout 2026, with each one focusing on a specific date in 1776.
The project combines AI-generated visuals with traditional post-production elements, including colour grading and voice performances by SAG-AFTRA actors, to expand narrative possibilities while retaining human creative input.
Aronofsky and collaborators describe the series as an example of how thoughtful, artist-led AI use can enhance storytelling rather than replace artistic craft.
The initiative is part of a broader trend in entertainment where AI tools are being explored as creative accelerators, though reactions have been mixed on social media, with some viewers questioning the quality and artistic decisions in early episodes.
Millions of South Africans are set to gain access to AI and digital skills through a partnership between Microsoft South Africa and the national broadcaster SABC Plus. The initiative will deliver online courses, assessments, and recognised credentials directly to learners’ devices.
Building on Microsoft Elevate and the AI Skills Initiative, the programme extends efforts that have trained 1.4 million people and credentialed nearly half a million citizens since 2025. SABC Plus, with over 1.9 million registered users, provides an ideal platform to reach diverse communities nationwide.
AI and data skills are increasingly critical for employability, with global demand for AI roles growing rapidly. Microsoft and SABC aim to equip citizens with practical, future-ready capabilities, ensuring learning opportunities are not limited by geography or background.
The collaboration also complements Microsoft’s broader initiatives in South Africa, including Ikamva Digital, ElevateHer, Civic AI, and youth certification programmes, all designed to foster inclusion and prepare the next generation for a digital economy.
OpenAI has developed an internal AI data agent designed to help employees move from complex questions to reliable insights in minutes. The tool allows teams to analyse vast datasets using natural language instead of manual SQL-heavy workflows.
Across engineering, finance, research and product teams, the agent reduces friction by locating the right tables, running queries and validating results automatically. Built on GPT-5.2, it adapts as it works, correcting errors and refining its approach without constant human input.
Context plays a central role in the system’s accuracy, combining metadata, human annotations, code-level insights and institutional knowledge. A built-in memory function stores non-obvious corrections, helping the agent improve over time and avoid repeated mistakes.
To maintain trust, OpenAI evaluates the agent continuously using automated tests that compare generated results with verified benchmarks. Strong access controls and transparent reasoning ensure the system remains secure, reliable and aligned with existing data permissions.
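OpenAI has not released the agent itself, but the loop described here (translate a question, run the query, validate, remember corrections) can be illustrated with a toy sketch. Everything below is an assumption for illustration: the table, the question-to-SQL lookup standing in for the model, and the fallback logic are all hypothetical, not OpenAI's system:

```python
import sqlite3

# Toy warehouse: one table, a few rows (all names hypothetical).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE signups (day TEXT, n INTEGER)")
db.executemany("INSERT INTO signups VALUES (?, ?)",
               [("mon", 10), ("tue", 12), ("wed", 9)])

# Stand-in for the model: maps a natural-language question to SQL.
# A real agent would generate this; here it is a fixed lookup.
NL_TO_SQL = {"total signups": "SELECT SUM(n) FROM signups"}

memory: dict = {}  # stores corrections so mistakes are not repeated

def answer(question: str) -> int:
    """Run the question's SQL, falling back and remembering on failure."""
    sql = memory.get(question) or NL_TO_SQL[question]
    try:
        (value,) = db.execute(sql).fetchone()
    except sqlite3.Error:
        # A real agent would revise its own query here; this sketch
        # falls back to a known-good form and stores the correction.
        sql = "SELECT SUM(n) FROM signups"
        memory[question] = sql
        (value,) = db.execute(sql).fetchone()
    return value

print(answer("total signups"))  # 31
```

The `memory` dictionary mirrors the article's point about stored corrections: once a fix is recorded, subsequent runs use it directly instead of repeating the failure.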
A survey of contact centre and customer experience (CX) leaders finds that AI has become ‘non-negotiable’ for organisations seeking to deliver efficient, personalised, and data-driven customer service.
Respondents reported widespread use of AI-enabled tools such as chatbots, virtual agents, and conversational analytics to handle routine queries, triage requests and surface insights from large volumes of interaction data.
CX leaders emphasised AI’s ability to boost service quality and reduce operational costs, enabling faster response times and better outcomes across channels.
Many organisations are investing in AI platforms that integrate with existing systems to automate workflows, assist human agents, and personalise interactions based on real-time customer context.
Despite optimism, leaders also noted challenges, including data quality, governance, skills gaps and maintaining human oversight, and stressed that AI should augment, not replace, human agents.
The article underscores that today’s competitive CX landscape increasingly depends on strategic AI adoption rather than optional experimentation.
A coalition of researchers and experts has identified future research directions aimed at enhancing AI safety, robustness and quality as systems are increasingly integrated into critical functions.
The work highlights the need for improved tools to evaluate, verify and monitor AI behaviour across diverse real-world contexts, including methods to detect harmful outputs, mitigate bias and ensure consistent performance under uncertainty.
The discussion emphasises that technical quality attributes such as reliability, explainability, fairness and alignment with human values should be core areas of focus, especially for high-stakes applications in healthcare, transport, finance and public services.
Researchers advocate for interdisciplinary approaches, combining insights from computer science, ethics, and the social sciences to address systemic risks and to design governance frameworks that balance innovation with public trust.
The article also notes emerging strategies such as formal verification techniques, benchmarks for robustness and continuous post-deployment auditing, which could help contain unintended consequences and improve the safety of AI models before and after deployment at scale.
AI is often criticised for its growing electricity and water use, but experts argue it can also support sustainability. AI can reduce emissions, save energy, and optimise resource use across multiple sectors.
In agriculture, AI-powered irrigation helps farmers use water more efficiently. In Chile, precision systems reduced water consumption by up to 30%, while farmers earned extra income from verified savings.
Data centres and energy companies are deploying AI to improve efficiency, predict workloads, optimise cooling, monitor methane leaks, and schedule maintenance. These measures help reduce emissions and operational costs.
Buildings and aviation are also benefiting from AI. AI-driven systems manage heating, cooling, and appliances more efficiently, while flight-route optimisation reduces fuel consumption and contrail formation, showing that wider adoption could help fight climate change.
The exposure of more than 50,000 children’s chat logs by AI toy company Bondu highlights serious gaps in child data protection. Sensitive personal information, including names, birth dates, and family details, was accessible through a poorly secured parental portal, raising immediate concerns about children’s privacy and safety.
The incident points to the absence of mandatory security-by-design standards for AI products aimed at children, with weak safeguards enabling unauthorised access and exposing vulnerable users to serious risks.
Beyond the specific flaw, the case raises wider concerns about AI toys used by children. Researchers warned that the exposed data could be misused, strengthening calls for stricter rules and closer oversight of AI systems designed for minors.
Concerns also extend to transparency around data handling and AI supply chains. Uncertainty over whether children’s data was shared with third-party AI model providers points to the need for clearer rules on data flows, accountability, and consent in AI ecosystems.
Finally, the incident has added momentum to policy discussions on restricting or pausing the sale of interactive AI toys. Lawmakers are increasingly considering precautionary measures while more robust child-focused AI safety frameworks are developed.
The Enforcement Directorate (ED) has alleged in a prosecution complaint before a special court in Bengaluru that WinZO, an online real-money gaming platform with millions of users, manipulated outcomes in its games, contrary to public assurances of fairness and transparency.
According to the ED complaint, WinZO deployed AI-powered bots, algorithmic player profiles and simulated gameplay data to control game outcomes. The platform hosted over 100 games on its mobile app and claimed a large user base, especially in smaller cities.
The ED's probe found that until late 2023, bots directly competed against real users, and that from May 2024 to August 2025 the company used simulated profiles based on historical user data without disclosing this to players.
These practices were allegedly concealed within internal terminology such as ‘Engagement Play’ and ‘Past Performance of Player’.