EU instructs X to keep all Grok chatbot records

The European Commission has ordered X to retain all internal documents and data on its AI chatbot Grok until the end of 2026. The order was issued under the Digital Services Act after concerns that Grok’s ‘spicy’ mode enabled sexualised deepfakes of minors.

The move continues EU oversight, recalling a January 2025 order to preserve X’s recommender system documents amid claims it amplified far-right content during German elections. EU regulators emphasised that platforms must manage the content generated by their AI responsibly.

Earlier this week, X submitted responses to the Commission regarding Grok’s outputs following concerns over Holocaust denial content. While the deepfake scandal has prompted calls for further action, the Commission has not launched a formal investigation into Grok.

Regulators reiterated that it remains X’s responsibility to ensure the chatbot’s outputs meet European standards, and retention of all internal records is crucial for ongoing monitoring and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers launch AURA to protect AI knowledge graphs

Researchers have unveiled a novel framework called AURA, which safeguards proprietary knowledge graphs in AI systems by deliberately corrupting stolen copies with realistic yet false data.

Instead of relying solely on traditional encryption or watermarking, the approach is designed to preserve full utility for authorised users while rendering illicit copies ineffective.

AURA works by injecting ‘adulterants’ into critical nodes of knowledge graphs, chosen using advanced algorithms to minimise changes while maximising disruption for unauthorised users.
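The article does not give AURA’s actual algorithm, but the idea of corrupting high-value nodes while keeping a secret means of restoration can be sketched roughly. The following toy is an assumption-laden illustration, not the published method: it uses simple degree centrality to pick ‘critical’ nodes, swaps in plausible decoy facts, and keeps a private patch that restores the truth for authorised users.

```python
import random

def degree_centrality(edges):
    """Count how many edges touch each node (a crude 'criticality' score)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def adulterate(facts, edges, k, decoys, seed=0):
    """Corrupt facts attached to the k most-connected nodes.

    Returns the tampered graph plus a secret patch that lets
    authorised users undo every change.
    """
    rng = random.Random(seed)
    deg = degree_centrality(edges)
    targets = sorted(deg, key=deg.get, reverse=True)[:k]
    tampered = dict(facts)
    patch = {}
    for node in targets:
        if node in tampered:
            patch[node] = tampered[node]          # remember the truth
            tampered[node] = rng.choice(decoys)   # plausible but false
    return tampered, patch

# Toy pharmaceutical knowledge graph (hypothetical facts for illustration).
facts = {"aspirin": "inhibits COX-1", "warfarin": "inhibits VKORC1",
         "ibuprofen": "inhibits COX-2"}
edges = [("aspirin", "warfarin"), ("aspirin", "ibuprofen"),
         ("warfarin", "ibuprofen"), ("aspirin", "caffeine")]

stolen, patch = adulterate(facts, edges, k=1,
                           decoys=["activates CYP3A4", "binds albumin"])
authorised = {**stolen, **patch}   # the patch restores the original facts

assert authorised == facts                      # full utility when licensed
assert stolen["aspirin"] != facts["aspirin"]    # a thief gets a false fact
```

The reported 94–96% answer-flip rates suggest the real system targets nodes far more selectively than degree centrality; the point here is only the asymmetry between a corrupted stolen copy and an intact authorised view.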

Tests with GPT-4o, Gemini-2.5, Qwen-2.5, and Llama2-7B showed that 94–96% of correct answers in stolen data were flipped, while authorised access remained unaffected.

The framework protects valuable intellectual property in sectors such as pharmaceuticals and manufacturing, where knowledge graphs power advanced AI applications.

Unlike passive watermarking or offensive poisoning, AURA actively degrades stolen datasets, offering robust security against offline and private-use attacks.

With GraphRAG applications proliferating, major technology firms, including Microsoft, Google, and Alibaba, are evaluating AURA to defend critical AI-driven knowledge.

The system demonstrates how active protection strategies can complement existing security measures, ensuring enterprises maintain control over their data in an AI-driven world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Universal Music Group partners with NVIDIA on AI music strategy

UMG has entered a strategic collaboration with NVIDIA to reshape how billions of fans discover, experience and engage with music by using advanced AI.

The initiative combines NVIDIA’s AI infrastructure with UMG’s extensive global catalogue, aiming to elevate music interaction beyond traditional search and recommendation systems.

The partnership will focus on AI-driven discovery and engagement that interprets music at a deeper cultural and emotional level.

By analysing full-length tracks, the technology is designed to surface music through narrative, mood and context, offering fans richer exploration while helping artists reach audiences more meaningfully.

Artist empowerment sits at the centre of the collaboration, with plans to establish an incubator where musicians and producers help co-design AI tools.

The goal is to enhance originality and creative control instead of producing generic outputs, while ensuring proper attribution and protection of copyrighted works.

Universal Music Group and NVIDIA also emphasise responsible AI development, combining technical safeguards with industry oversight.

By aligning innovation with artist rights and fair compensation, both companies aim to set new standards for how AI supports creativity across the global music ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI cheating drives ACCA to halt online exams

The Association of Chartered Certified Accountants (ACCA) has announced it will largely end remote examinations in the UK from March 2026, requiring students to sit tests in person unless exceptional circumstances apply.

The decision aims to address a surge in cheating, particularly facilitated by AI tools.

Remote testing was introduced during the Covid-19 pandemic to allow students to continue qualifying when in-person exams were impossible. The ACCA said online assessments have now become too difficult to monitor effectively, despite efforts to strengthen safeguards against misconduct.

Investigations show cheating has impacted major auditing firms, including the ‘big four’ and other top companies. High-profile cases, such as EY’s $100m (£74m) settlement in the US, highlight the risks posed by compromised professional examinations.

While other accounting bodies, including the Institute of Chartered Accountants in England and Wales, continue to allow some online exams, the ACCA has indicated that high-stakes assessments must now be conducted in person to maintain credibility and integrity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Millions watch AI-generated brainrot content on YouTube

Kapwing research reveals that AI-generated ‘slop’ and brainrot videos now make up a significant portion of YouTube feeds, accounting for 21–33% of the first 500 Shorts seen by new users.

These rapidly produced AI videos aim to grab attention but make it harder for traditional creators to gain visibility. Analysis of top trending channels shows Spain leads in AI slop subscribers with 20.22 million, while South Korea’s channels have amassed 8.45 billion views.

India’s Bandar Apna Dost is the most-viewed AI slop channel, earning an estimated $4.25 million annually and showing the profit potential of mass AI-generated content.

The prevalence of AI slop and brainrot has sparked debates over creativity, ethics, and advertiser confidence. YouTube CEO Neal Mohan calls generative AI transformative, but rising automated videos raise concerns over quality and brand safety.

Researchers warn that repeated exposure to AI-generated content can distort perception and contribute to information overload. Some AI content earns artistic respect, but much normalises low-quality videos, making it harder for users to tell meaningful content from repetitive or misleading material.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT-IBM researchers improve large language models with PaTH Attention

Researchers at MIT and the MIT-IBM Watson AI Lab have introduced a new attention mechanism designed to enhance the capabilities of large language models (LLMs) in tracking state and reasoning across long texts.

Unlike traditional positional encoding methods, the PaTH Attention system adapts to the content of words, enabling models to follow complex sequences more effectively.

PaTH Attention models sequences through data-dependent transformations, allowing LLMs to track how meaning changes between words instead of relying solely on relative distance.
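The article does not spell out the construction, but attention with data-dependent, accumulated transformations can be sketched in miniature. The snippet below is an illustrative assumption, not the published PaTH mechanism: each token contributes a Householder reflection derived from its own vector, and the relative ‘position’ between a query and a key is the product of the reflections along the path between them, so the encoding depends on content rather than on distance alone.

```python
import numpy as np

def householder(w):
    """Reflection H = I - 2ww^T (unit w): an orthogonal, data-dependent map."""
    w = w / np.linalg.norm(w)
    return np.eye(len(w)) - 2.0 * np.outer(w, w)

def path_scores(q, k, w):
    """Causal attention logits where relative position is a product of
    token-dependent reflections rather than a fixed rotation.

    q, k, w: (T, d) arrays; w[t] parameterises token t's transform.
    """
    T, d = q.shape
    H = [householder(w[t]) for t in range(T)]
    scores = np.full((T, T), -np.inf)   # future positions stay masked
    for i in range(T):
        P = np.eye(d)                   # accumulated transform back to key j
        for j in range(i, -1, -1):
            scores[i, j] = q[i] @ P @ k[j]
            P = P @ H[j]                # extend the path one token back
    return scores

rng = np.random.default_rng(0)
T, d = 4, 8
q, k, w = rng.normal(size=(3, T, d))
S = path_scores(q, k, w)
assert np.isinf(S[0, 1])   # causal mask: no attending to later tokens
```

Because each reflection is built from token content, swapping one token changes every path that crosses it, which is the intuition behind tracking ‘how meaning changes between words’; the exact bounds of the product and the efficient GPU-friendly computation are where the real method differs from this quadratic toy.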

The approach improves performance on long-context reasoning, multi-step recall, and language modelling benchmarks, all while remaining computationally efficient and compatible with GPUs.

Tests demonstrated consistent gains in perplexity and content-awareness compared with conventional methods. The team combined PaTH Attention with FoX to down-weight less relevant information, improving reasoning and long-sequence understanding.

According to senior author Yoon Kim, these advances represent the next step in developing general-purpose building blocks for AI, combining expressivity, scalability, and efficiency for broader applications in structured domains such as biology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan investigates AI search services over news use

The Japan Fair Trade Commission (JFTC) announced it will investigate AI-based online search services over concerns that using news articles without permission could violate antitrust laws.

Authorities said such practices may amount to an abuse of a dominant bargaining position under Japan’s antimonopoly regulations.

The inquiry is expected to examine services from global tech firms, including Google, Microsoft, and OpenAI’s ChatGPT, as well as US startup Perplexity AI and Japanese company LY Corp. AI search tools summarise online content, including news articles, raising concerns about their effect on media revenue.

The Japan Newspaper Publishers and Editors Association warned AI summaries may reduce website traffic and media revenue. JFTC Secretary General Hiroo Iwanari said generative AI is evolving quickly, requiring careful review to keep up with technological change.

The investigation reflects growing global scrutiny of AI services and their interaction with content providers, with regulators increasingly assessing the balance between innovation and fair competition in digital markets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Creators embrace AI music on YouTube

Increasingly, YouTube creators are using AI-generated music to enhance their videos while saving time and money. Selecting tracks that align with the content tone and audience expectations is crucial for engagement.

Subtle, balanced music supports narration without distraction and guides viewers through sections. Thoughtful use of intros, transitions and outros builds channel identity and reinforces branding.

Customisation tools allow creators to adjust tempo, mood and intensity for better pacing and cohesion with visuals. Testing multiple versions ensures the music feels natural and aligns with storytelling.

Understanding licensing terms protects monetisation and avoids copyright issues. Combining AI music with creative judgement keeps content authentic and original while maximising production impact.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI search services face competition probe in Japan

Japan’s competition authority will probe AI search services from major domestic and international tech firms. The investigation aims to identify potential antitrust violations rather than impose immediate sanctions.

The probe is expected to cover LY Corp., Google, Microsoft and AI providers such as OpenAI and Perplexity AI. Concerns centre on how AI systems present and utilise news content within search results.

Legal action by Japanese news organisations alleges unauthorised use of articles by AI services. Regulators are assessing whether such practices constitute abuse of market dominance.

The inquiry builds on a 2023 review of news distribution contracts that warned against the use of unfair terms for publishers. Similar investigations overseas, including within the EU, have guided the commission’s approach.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes media in North Macedonia with new regulatory guidance

A new analysis examines the impact of AI on North Macedonia’s media sector, offering guidance on ethical standards, human rights, and regulatory approaches.

Prepared in both Macedonian and English, the study benchmarks the country’s practices against European frameworks and provides actionable recommendations for future regulation and self-regulation.

The research, supported by the EU and Council of Europe’s PRO-FREX initiative and in collaboration with the Agency for Audio and Audiovisual Media Services (AVMU), was presented during Media Literacy Days 2025 in Skopje.

It highlights the relevance of EU and Council of Europe guidelines, including the Framework Convention on AI and Human Rights, and guidance on responsible AI in journalism.

AVMU’s involvement underlines its role in ensuring media freedom, fairness, and accountability amid rapid technological change. Participants highlighted the need for careful policymaking to manage AI’s impact, protecting media diversity, journalistic standards, and public trust online.

The analysis forms part of broader efforts under the Council of Europe and the EU’s Horizontal Facility for the Western Balkans and Türkiye, aiming to support North Macedonia in aligning media regulation with European standards while responsibly integrating AI technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!