European Commission launches Culture Compass to strengthen EU identity

The European Commission unveiled the Culture Compass for Europe, a framework designed to place culture at the heart of EU policies.

The initiative aims to foster the EU’s identity, celebrate diversity, and support excellence across the continent’s cultural and creative sectors.

The Compass addresses the challenges facing cultural industries, including restrictions on artistic expression, precarious working conditions for artists, unequal access to culture, and the transformative impact of AI.

It provides guidance along four key directions: upholding European values and cultural rights, empowering artists and professionals, enhancing competitiveness and social cohesion, and strengthening international cultural partnerships.

Several initiatives will support the Compass, including the EU Artists Charter for fair working conditions, a European Prize for Performing Arts, a Youth Cultural Ambassadors Network, a cultural data hub, and an AI strategy for the cultural sector.

The Commission will track progress through a new report on the State of Culture in the EU and is seeking a Joint Declaration with the European Parliament and the Council to reinforce political commitment.

Commission officials emphasised that the Culture Compass connects culture to Europe’s future, placing artists and creativity at the centre of policy and ensuring the sector contributes to social, economic, and international engagement.

Culture is portrayed not as a side story, but as the story of the EU itself.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU regulators, UK and eSafety lead the global push to protect children in the digital world

Children today spend a significant amount of their time online, from learning and playing to communicating.

To protect them in an increasingly digital world, Australia’s eSafety Commissioner, the European Commission’s DG CNECT, and the UK’s Ofcom have joined forces to strengthen global cooperation on child online safety.

The partnership aims to ensure that online platforms take greater responsibility for protecting and empowering children, recognising their rights under the UN Convention on the Rights of the Child.

The three regulators will continue to enforce their online safety laws to ensure platforms properly assess and mitigate risks to children. They will promote privacy-preserving age verification technologies and collaborate with civil society and academics to ensure that regulations reflect real-world challenges.

By supporting digital literacy and critical thinking, they aim to provide children and families with safer and more confident online experiences.

To advance the work, a new trilateral technical group will be established to deepen collaboration on age assurance. It will study the interoperability and reliability of such systems, explore the latest technologies, and strengthen the evidence base for regulatory action.

Through closer cooperation, the regulators hope to create a more secure and empowering digital environment for young people worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok faces scrutiny over AI moderation and UK staff cuts

TikTok has responded to the Science, Innovation and Technology Committee regarding proposed cuts to its UK Trust and Safety teams. The company claimed that reducing staff while expanding its use of AI, third-party specialists, and localised teams would improve moderation effectiveness.

The social media platform, however, did not provide any supporting data or risk assessment to justify these changes. MPs previously called for more transparency on content moderation data during an inquiry into social media, misinformation, and harmful algorithms.

TikTok’s increasing reliance on AI comes amid broader concerns over AI safety, following reports of chatbots encouraging harmful behaviours.

Committee Chair Dame Chi Onwurah expressed concern that AI cannot reliably replace human moderators. She warned AI could cause harm and criticised TikTok for not providing evidence that staff cuts would protect users.

The Committee has urged the Government and Ofcom to take action to ensure user safety before any staffing reductions are implemented. Dame Chi Onwurah emphasised that without credible data, it is impossible to determine whether the changes will effectively protect users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta, TikTok and Snapchat prepare to block under-16s as Australia enforces social media ban

Social media platforms, including Meta, TikTok and Snapchat, will begin sending notices to more than a million Australian teens, telling them to download their data, freeze their profiles or lose access when the national ban for under-16s comes into force on 10 December.

According to people familiar with the plans, platforms will deactivate accounts believed to belong to users under the age of 16. About 20 million older Australian users will not be affected. The move marks a shift after a year of opposition from tech firms, which warned the rules would be intrusive or unworkable.

Companies plan to rely on their existing age-estimation software, which predicts age from behaviour signals such as likes and engagement patterns. Only users who challenge a block will be pushed to the age assurance apps. These tools estimate age from a selfie and, if disputed, allow users to upload ID. Trials show they work, but accuracy drops for 16- and 17-year-olds.
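At its core, behavioural age estimation is a probabilistic classifier over engagement data. The sketch below is a purely hypothetical Python illustration of that idea, with invented feature names and synthetic labels; it is not any platform’s actual system, which would combine far richer signals and route low-confidence or disputed cases to the stronger age-assurance checks described above.

```python
# Toy illustration of behavioural age estimation; features and data are invented,
# NOT any platform's real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-account features: likes per day, share of short-video watch
# time, and night-time activity ratio. Labels are synthetic for the demo.
n = 1_000
under_16 = rng.integers(0, 2, size=n)
features = np.column_stack([
    rng.normal(40, 10, n) + 15 * under_16,
    np.clip(rng.normal(0.5, 0.15, n) + 0.2 * under_16, 0, 1),
    np.clip(rng.normal(0.3, 0.10, n) + 0.1 * under_16, 0, 1),
])

model = LogisticRegression().fit(features, under_16)

# Score a new account. A platform would act only above some confidence threshold
# and send disputed cases to selfie- or ID-based age assurance.
p = model.predict_proba([[60, 0.75, 0.45]])[0, 1]
print(f"Estimated probability the account holder is under 16: {p:.2f}")
```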

Yoti’s Chief Policy Officer, Julie Dawson, said disruption should be brief, with users adapting within a few weeks. Meta, Snapchat, TikTok and Google declined to comment. In earlier hearings, most respondents stated that they would comply.

The law bars under-16s from using mainstream platforms, with no parental override. It follows renewed concern over youth safety after internal Meta documents in 2021 revealed harm linked to heavy social media use.

A smooth rollout is expected to influence other countries as they explore similar measures. France, Denmark, Florida and the UK have pursued age checks with mixed results due to concerns over privacy and practicality.

Consultants say governments are watching to see whether Australia’s requirement for platforms to take ‘reasonable steps’ to block minors, including trying to detect VPN use, works in practice without causing significant disruption for other users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Language models mimic human belief reasoning

In a recent paper, researchers at Stevens Institute of Technology revealed that large language models (LLMs) use a small, specialised subset of their parameters to perform tasks associated with the psychological concept of ‘Theory of Mind’ (ToM), the human ability to infer others’ beliefs, intentions and perspectives.

The study found that although LLMs activate almost their whole network for each input, the ToM-related reasoning appears to rely disproportionately on a narrow internal circuit, particularly shaped by the model’s positional encoding mechanism.

This discovery matters because it highlights a significant efficiency gap between human brains and current AI systems: humans carry out social-cognitive tasks with only a tiny fraction of neural activity, whereas LLMs still consume substantial computational resources even for ‘simple’ reasoning.

The researchers suggest these findings could guide the design of more brain-inspired AI models that selectively activate only the parameters needed for a particular task.

From a policy and digital-governance perspective, this raises questions about how we interpret AI’s understanding and social cognition.

If AI can exhibit behaviour that resembles human belief-reasoning, oversight frameworks and transparency standards become all the more critical in assessing what AI systems are doing, and what they are capable of.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI faces major copyright setback in US court

A US federal judge has ruled that a landmark copyright case against OpenAI can proceed, rejecting the company’s attempt to dismiss claims brought by authors and the Authors Guild.

The authors argue that ChatGPT’s summaries of copyrighted works, including George R.R. Martin’s Game of Thrones, unlawfully replicate the original tone, plot, and characters, raising concerns about AI-generated content infringing on creative rights.

The Publishers Association (PA) welcomed the ruling, warning that generative AI could ‘devastate the market’ for books and other creative works by producing infringing content at scale.

It urged the UK government to strengthen transparency rules to protect authors and publishers, stressing that AI systems capable of reproducing an author’s style could undermine the value of original creation.

The case follows a $1.5bn settlement agreed by Anthropic earlier this year over its use of pirated books to train its models, and comes amid growing scrutiny of AI firms.

In Britain, Stability AI recently avoided a copyright ruling after a claim by Getty Images was dismissed on grounds of jurisdiction. Still, the PA stated that the outcome highlighted urgent gaps in UK copyright law regarding AI training and output.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brussels leak signals GDPR and AI Act adjustments

The European Commission is preparing a Digital Package on simplification for 19 November. A leaked draft outlines instruments covering GDPR, ePrivacy, Data Act and AI Act reforms.

Plans include a single breach portal and a higher reporting threshold. Authorities would receive notifications within 96 hours, with standardised forms and narrower triggers. Controllers could reject or charge for data subject access requests used to pursue disputes.

Cookie rules would shift toward browser-level preference signals respected across services. Aggregated measurement and security uses would not require popups, while the GDPR’s lawful bases would expand. News publishers could receive limited exemptions in recognition of their reliance on advertising revenues.
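The draft does not name a specific mechanism for those signals, but one existing example of a browser-level preference signal is the Global Privacy Control header (`Sec-GPC`). Purely as an illustration, the Python sketch below shows how a service might honour such a signal server-side instead of presenting a consent popup; the route and response fields are invented for the example.

```python
# Illustrative only: honouring a GPC-style browser preference signal ("Sec-GPC: 1")
# instead of showing a cookie consent popup. The leaked draft does not specify
# this mechanism; the route and fields here are invented for the example.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/")
def index():
    # If the browser signals an opt-out, skip the popup and fall back to
    # strictly necessary cookies and aggregated measurement only.
    opted_out = request.headers.get("Sec-GPC") == "1"
    return jsonify({
        "show_consent_popup": not opted_out,
        "tracking_enabled": not opted_out,
    })

if __name__ == "__main__":
    app.run()
```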

Drafting recognises legitimate interest for training AI models on personal data. Narrow allowances are provided for sensitive data during development, along with EU-wide data protection impact assessment templates. Critics warn proposals dilute safeguards and may soften the AI Act.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered Google Photos features land on iOS, search expands to 100+ countries

Google Photos is introducing prompt-based edits, an ‘Ask’ button, and style templates across iOS and Android. In the US, iPhone users can describe edits by voice or text, with a redesigned editor for faster controls. The rollout builds on prompt editing’s August debut on the Pixel 10.

Personalised edits now recognise people from face groups, so users can issue multi-person requests, such as removing sunglasses or opening eyes. The feature sits under ‘Help me edit’, where changes apply to each named person, and is designed for faster, more granular everyday fixes.

A new Ask button serves as a hub for AI requests, from questions about a photo to suggested edits and related moments. The interface surfaces chips that hint at actions users can take. The Ask experience is rolling out in the US on both iOS and Android.

Google is also adding AI templates that turn a single photo into set formats, such as retro portraits or comic-style panels. The company states that its Nano Banana model powers these creative styles and that templates will be available next week under the Create tab on Android in the US and India.

AI search in Google Photos, first launched in the US, is expanding to over 100 countries with support for 17 languages. Markets include Argentina, Australia, Brazil, India, Japan, Mexico, Singapore, and South Africa. Google says this brings natural-language photo search to a far greater number of users.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The AI soldier and the ethics of war

The rise of the machine soldier

For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, technological innovation has disrupted and redefined warfare, for better or worse. The next evolution, however, is not about weapons but about the soldier.

New AI-integrated systems such as Anduril’s EagleEye Helmet are transforming troops into data-driven nodes, capable of perceiving and responding with machine precision. This fusion of human and algorithmic capabilities is blurring the boundary between human roles and machine learning, redefining what it means to fight and to feel in war.

Today’s ‘AI soldier’ is more than just enhanced: networked, monitored, and optimised. Soldiers now have 3D optical displays that give them a god’s-eye view of combat, while real-time ‘guardian angel’ systems make decisions faster than any human brain can process.

Yet in this pursuit of efficiency, the soldier’s humanity and the rules-based order of war risk being sidelined in favour of computational power.

From soldier to avatar

In the emerging AI battlefield, the soldier increasingly resembles a character in a first-person shooter video game. There is an eerie overlap between AI soldier systems and the interface of video games, like Metal Gear Solid, where augmented players blend technology, violence, and moral ambiguity. The more intuitive and immersive the tech becomes, the easier it is to forget that killing is not a simulation.

By framing war through a heads-up display, AI gives troops an almost cinematic sense of control, and in turn, a detachment from their humanity, emotions, and the physical toll of killing. Soldiers with AI-enhanced senses operate through layers of mediated perception, acting on algorithmic prompts rather than their own moral intuition. When soldiers view the world through the lens of a machine, they risk feeling less like humans and more like avatars, designed to win, not to weigh the cost.

The integration of generative AI into national defence systems creates vulnerabilities, ranging from hacking decision-making systems to misaligned AI agents capable of escalating conflicts without human oversight. Ironically, the same guardrails that prevent civilian AI from encouraging violence cannot apply to systems built for lethal missions.

The ethical cost

Generative AI has redefined the nature of warfare, introducing lethal autonomy that challenges the very notion of ethics in combat. In theory, AI systems can uphold Western values and ethical principles, but in practice, the line between assistance and automation is dangerously thin.

When militaries walk this line, outsourcing their decision-making to neural networks, accountability becomes blurred. Without the basic principles and mechanisms of accountability in warfare, states risk the very foundation of rules-based order. AI may evolve the battlefield, but at the cost of diplomatic solutions and compliance with international law.  

AI does not experience fear, hesitation, or empathy, the very qualities that restrain human cruelty. By building systems that increase efficiency and reduce the soldier’s workload through automated targeting and route planning, we risk erasing the psychological distinction that once separated human war from machine-enabled extermination. Ethics, in this new battlescape, become just another setting in the AI control panel. 

The new war industry 

The defence sector is not merely adapting to AI. It is being rebuilt around it. Anduril, Palantir, and other defence tech corporations now compete with traditional military contractors by promising faster innovation through software.

As Anduril’s founder, Palmer Luckey, puts it, the goal is not to give soldiers a tool, but ‘a new teammate.’ The phrasing is telling, as it shifts the moral axis of warfare from command to collaboration between humans and machines.

The human-machine partnership built for lethality suggests that the military-industrial complex is evolving into a military-intelligence complex, where data is the new weapon, and human experience is just another metric to optimise.

The future battlefield 

If the past century’s wars were fought with machines, the next will likely be fought through them. Soldiers are becoming both operators and operated, which promises efficiency in war but comes at the cost of human empathy.

When soldiers see through AI’s lens, feel through sensors, and act through algorithms, they stop being fully human combatants and start becoming playable characters in a geopolitical simulation. The question is not whether this future is coming; it is already here. 

There is a clear policy path forward, as states remain tethered to their international obligations. Before AI blurs the line between soldier and system, international law could enshrine a human-in-the-loop requirement for all lethal actions, while defence firms could be compelled to meet high standards of ethical transparency.

The question now is whether humanity can still recognise itself once war feels like a game, or whether, without safeguards, it will remain present in war at all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5 outperformed by a Chinese startup model

A Chinese company has stunned the AI world after its new open-source model outperformed OpenAI’s GPT-5 and Anthropic’s Claude Sonnet 4.5 in key benchmarks.

Moonshot AI’s Kimi K2 Thinking model achieved the best reasoning and coding scores yet, shaking confidence in American dominance over advanced AI systems.

The Beijing-based startup, backed by Alibaba and Tencent, released Kimi K2 Thinking on 6 November. It scored 44.9 percent in Humanity’s Last Exam and 60.2 percent in BrowseComp, both surpassing leading US models.

Analysts dubbed it another ‘DeepSeek moment’, echoing China’s earlier success in breaking AI cost barriers.

Moonshot AI trained the trillion-parameter system for just US$4.6 million, roughly a tenth of GPT-5’s reported training cost, using a Mixture-of-Experts architecture and advanced quantisation for faster generation.
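Moonshot has not detailed its layer design in these reports, but as a rough illustration of the general idea behind a Mixture-of-Experts block, the PyTorch sketch below routes each token to only a few expert feed-forward networks; the dimensions, expert count and top-k value are arbitrary and are not Kimi K2’s actual configuration (quantisation is omitted).

```python
# Minimal, generic Mixture-of-Experts layer with top-k routing (PyTorch).
# Purely illustrative; dimensions, expert count and gating details are
# arbitrary and NOT Kimi K2's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)  # router scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                              # x: (tokens, d_model)
        scores = self.gate(x)                          # (tokens, n_experts)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)       # normalise over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):                     # only k experts run per token
            idx = topk_idx[:, slot]
            w = weights[:, slot].unsqueeze(-1)
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    out[mask] += w[mask] * expert(x[mask])
        return out

x = torch.randn(4, 512)
print(MoELayer()(x).shape)  # torch.Size([4, 512])
```

The efficiency claim behind such designs is that only k of the n experts run for each token, so the active compute per token is a small fraction of the model’s total parameter count.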

The fully open-weight model, released under a Modified MIT License, adds commercial flexibility and intensifies competition with US labs.

Industry observers called it a turning point. Hugging Face’s Thomas Wolf said the achievement shows how open-source models can now rival closed systems.

Researchers from the Allen Institute for AI noted that Chinese innovation is narrowing the gap faster than expected, driven by efficiency and high-quality training data rather than raw computing power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!