European Commission launches Culture Compass to strengthen the EU identity

The European Commission unveiled the Culture Compass for Europe, a framework designed to place culture at the heart of EU policies.

The initiative aims to foster the EU's identity, celebrate diversity, and support excellence across the continent's cultural and creative sectors.

The Compass addresses the challenges facing cultural industries, including restrictions on artistic expression, precarious working conditions for artists, unequal access to culture, and the transformative impact of AI.

It provides guidance along four key directions: upholding European values and cultural rights, empowering artists and professionals, enhancing competitiveness and social cohesion, and strengthening international cultural partnerships.

Several initiatives will support the Compass, including the EU Artists Charter for fair working conditions, a European Prize for Performing Arts, a Youth Cultural Ambassadors Network, a cultural data hub, and an AI strategy for the cultural sector.

The Commission will track progress through a new report on the State of Culture in the EU and seeks a Joint Declaration with the European Parliament and Council to reinforce political commitment.

Commission officials emphasised that the Culture Compass connects culture to Europe’s future, placing artists and creativity at the centre of policy and ensuring the sector contributes to social, economic, and international engagement.

Culture is portrayed not as a side story, but as the story of the EU itself.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU regulators, UK and eSafety lead the global push to protect children in the digital world

Children today spend a significant amount of their time online, from learning and playing to communicating.

To protect them in an increasingly digital world, Australia’s eSafety Commissioner, the European Commission’s DG CNECT, and the UK’s Ofcom have joined forces to strengthen global cooperation on child online safety.

The partnership aims to ensure that online platforms take greater responsibility for protecting and empowering children, recognising their rights under the UN Convention on the Rights of the Child.

The three regulators will continue to enforce their online safety laws to ensure platforms properly assess and mitigate risks to children. They will promote privacy-preserving age verification technologies and collaborate with civil society and academics to ensure that regulations reflect real-world challenges.

By supporting digital literacy and critical thinking, they aim to provide children and families with safer and more confident online experiences.

To advance the work, a new trilateral technical group will be established to deepen collaboration on age assurance. It will study the interoperability and reliability of such systems, explore the latest technologies, and strengthen the evidence base for regulatory action.
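The privacy-preserving age assurance the regulators describe can be sketched in miniature. In the toy scheme below (an illustration of the general idea, not any regulator's or platform's actual system), a trusted issuer signs a bare over-16 attestation, and a platform verifies the signature without ever seeing a birthdate; real deployments would use public-key credentials rather than a shared secret.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-shared-secret"  # illustrative only; real schemes use public-key signatures

def issue_age_token(user_id: str, over_16: bool) -> dict:
    """Issuer (e.g. a verified ID provider) attests only the boolean claim."""
    claim = json.dumps({"sub": user_id, "over_16": over_16}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def platform_accepts(token: dict) -> bool:
    """Platform checks the signature and reads the claim; it never learns a birthdate."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered or forged token
    return json.loads(token["claim"])["over_16"]
```

Interoperability, in this framing, is largely a matter of regulators and platforms agreeing on the claim format and the list of trusted issuers.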

Through closer cooperation, the regulators hope to create a more secure and empowering digital environment for young people worldwide.

Denmark’s new chat control plan raises fresh privacy concerns

Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.

Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.

He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.

Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.

Inside OpenAI’s battle to protect AI from prompt injection attacks

OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.

These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.

To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.

Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.
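The two ideas in play here, an instruction hierarchy and automated monitoring, can be illustrated with a deliberately naive sketch (the trust levels, tags, and patterns below are hypothetical, not OpenAI's implementation): content fetched from the web is labelled as low-trust data so it can never be promoted to an instruction, and a simple monitor flags injection-like phrasing in that content.

```python
import re

# Toy instruction hierarchy: system and user turns are trusted;
# anything fetched from the web is treated as data only.
TRUST_LEVELS = {"system": 2, "user": 1, "web": 0}

# Naive monitor patterns for instruction-like phrasing in untrusted content.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your (system )?prompt",
]

def build_prompt(messages: list[dict]) -> str:
    """Wrap each message with its trust level so downstream logic can tell them apart."""
    parts = []
    for m in messages:
        level = TRUST_LEVELS[m["role"]]
        parts.append(f"<msg trust={level} role={m['role']}>{m['text']}</msg>")
    return "\n".join(parts)

def flag_injection(text: str) -> bool:
    """Flag untrusted content that tries to issue instructions to the model."""
    return any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)
```

Production systems rely on trained model behaviour rather than regex lists, but the separation of trusted instructions from untrusted data is the core of the approach.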

OpenAI also gives users greater control through built-in safeguards such as approval prompts before sensitive actions, sandboxing for code execution, and ‘Watch Mode’ when operating on financial or confidential sites.

These measures ensure that users remain aware of what actions AI agents perform on their behalf.
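The approval-prompt pattern is simple to sketch. The action names and categories below are assumptions for illustration, not OpenAI's API: the agent pauses and asks for explicit confirmation whenever an action falls in a sensitive category.

```python
# Minimal sketch of an approval gate before sensitive agent actions
# (action names and categories are hypothetical).
SENSITIVE = {"send_payment", "delete_file", "share_data"}

def run_action(action: str, approve=input):
    """Execute an action, pausing for explicit user approval if it is sensitive."""
    if action in SENSITIVE:
        answer = approve(f"Agent wants to '{action}'. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked"
    return "executed"
```

Defaulting to blocked unless the user explicitly types "y" is the fail-safe choice: an ignored or mistyped prompt never authorises the action.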

While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.

How Google uses AI to support teachers and inspire students

Google is redefining education with AI designed to enhance learning, rather than replace teachers. The company has unveiled new tools grounded in learning science to support both educators and students, aiming to make learning more effective, efficient and engaging.

Through its Gemini platform, users can follow guided learning paths that encourage discovery rather than passive answers.

YouTube and Search now include conversational features that allow students to ask questions as they learn, while NotebookLM can transform personal materials into quizzes or immersive study aids.

Instructors can also utilise Google Classroom’s free AI tools for lesson planning and administrative support, thereby freeing up time for direct student engagement.

Google emphasises that its goal is to preserve the human essence of education while using AI to expand understanding. The company also addresses challenges linked to AI in learning, such as cheating, fairness, accuracy and critical thinking.

It is exploring assessment models that cannot be easily replicated by AI, including debates, projects, and oral examinations.

The firm pledges to develop its tools responsibly by collaborating with educators, parents and policymakers. By combining the art of teaching with the science of AI-driven learning, Google seeks to make education more personal, equitable and inspiring for all.

OpenAI outlines roadmap for AI safety, accountability and global cooperation

OpenAI has published new recommendations for managing rapid advances in AI, stressing the need for shared safety standards, public accountability, and resilience frameworks.

The company warned that while AI systems are increasingly capable of solving complex problems and accelerating discovery, they also pose significant risks that must be addressed collaboratively.

According to OpenAI, the next few years could bring systems capable of discoveries once thought centuries away.

The firm expects AI to transform health, materials science, drug development and education, while acknowledging that economic transitions may be disruptive and could require a rethinking of social contracts.

To ensure safe development, OpenAI proposed shared safety principles among frontier labs, new public oversight mechanisms proportional to AI capabilities, and the creation of a resilience ecosystem similar to cybersecurity.

It also called for regular reporting on AI’s societal impact to guide evidence-based policymaking.

OpenAI reiterated that the goal should be to empower individuals by making advanced AI broadly accessible, within limits defined by society, and to treat access to AI as a foundational public utility in the years ahead.

LIBE backs new Europol Regulation despite data protection and discrimination warnings

The European Parliament’s civil liberties committee (LIBE) voted to endorse a new Europol Regulation, part of the ‘Facilitators Package’, by 59–10 with four abstentions.

Rights groups and the European Data Protection Supervisor had urged MEPs to reject the proposal, arguing the law fuels discrimination and grants Europol and Frontex unprecedented surveillance capabilities with insufficient oversight.

If approved in plenary later this month, the reform would grant Europol broader powers to collect, process and share data, including biometrics such as facial recognition, and enable exchanges with non-EU states.

Campaigners note the proposal advanced without an impact assessment, contrary to the Commission’s Better Regulation guidance.

Civil society groups warn that the changes risk normalising surveillance in migration management. Access Now’s Caterina Rodelli said MEPs had ‘greenlighted the European Commission’s long-term plan to turn Europe into a digital police state’, while Equinox’s Sarah Chander called the vote proof the EU has ‘abandoned’ humane, evidence-based policy.

EDRi’s Chloé Berthélémy said the reform legitimises ‘unaccountable and opaque data practices’, creating a ‘data black hole’ that undermines rights and the rule of law. More than 120 organisations called on MEPs to reject the text, arguing it is ‘unlawful, unsafe, and unsubstantiated’.

Snap brings Perplexity’s answer engine into Chat for nearly a billion users

Starting in early 2026, Perplexity’s AI will be integrated into Snapchat’s Chat, accessible to nearly 1 billion users. Snapchatters can ask questions and receive concise, cited answers in-app. Snap says the move reinforces its position as a trusted, mobile-first AI platform.

Under the deal, Perplexity will pay Snap $400 million in cash and equity over a one-year period, tied to the global rollout. Revenue contribution is expected to begin in 2026. Snap points to its 943 million monthly active users and its reach of over 75% of 13–34-year-olds in 25+ countries.

Perplexity frames the move as meeting curiosity where it occurs, within everyday conversations. Evan Spiegel says Snap aims to make AI more personal, social, and fun, woven into friendships and conversations. Both firms pitch the partnership as enhancing discovery and learning on Snapchat.

Perplexity joins, rather than replaces, Snapchat’s existing My AI. Messages sent to Perplexity will inform personalisation on Snapchat, similar to My AI’s current behaviour. Snap claims the approach is privacy-safe and designed to provide credible, real-time answers from verifiable sources.

Snap casts this as a first step toward a broader AI partner platform inside Snapchat. The companies plan creative, trusted ways for leading AI providers to reach Snap’s global community. The integration aims to enable seamless, in-chat exploration while keeping users within Snapchat’s product experience.

Social media platforms ordered to enforce minimum age rules in Australia

Australia’s eSafety Commissioner has formally notified major social media platforms, including Facebook, Instagram, TikTok, Snapchat, and YouTube, that they must comply with new minimum age restrictions from 10 December.

The rule will require these services to prevent social media users under 16 from creating accounts.

eSafety determined that nine popular services currently meet the definition of age-restricted platforms since their main purpose is to enable online social interaction. Platforms that fail to take reasonable steps to block underage users may face enforcement measures, including fines of up to AU$49.5 million.

The agency clarified that the list of age-restricted platforms will not remain static, as new services will be reviewed and reassessed over time. Others, such as Discord, Google Classroom, and WhatsApp, are excluded for now as they do not meet the same criteria.

Commissioner Julie Inman Grant said the new framework aims to delay children’s exposure to social media and limit harmful design features such as infinite scroll and opaque algorithms.

She emphasised that age limits are only part of a broader effort to build safer, more age-appropriate online environments supported by education, prevention, and digital resilience.

OpenAI introduces IndQA to test AI on Indian languages and culture

The US AI research company OpenAI has introduced IndQA, a new benchmark designed to test how well AI systems understand and reason across Indian languages and cultural contexts. The benchmark covers 2,278 questions in 12 languages and 10 cultural domains, from literature and food to law and spirituality.

Developed with input from 261 Indian experts, IndQA evaluates AI models through rubric-based grading that assesses accuracy, cultural understanding, and reasoning depth. Questions were created to challenge leading OpenAI models, including GPT-4o and GPT-5, ensuring space for future improvement.
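Rubric-based grading of this kind can be sketched as a weighted aggregate of per-criterion scores. The criteria names and weights below are hypothetical, chosen to mirror the dimensions the article mentions, and are not IndQA's actual rubric.

```python
# Illustrative sketch of rubric-based grading (criteria and weights are
# hypothetical, not IndQA's published rubric).
def grade(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-criterion scores, each in [0, 1]."""
    total = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total

weights = {"accuracy": 0.5, "cultural_understanding": 0.3, "reasoning_depth": 0.2}
```

Grading each criterion separately, rather than asking for a single holistic score, is what lets a benchmark distinguish a factually correct answer from one that also shows cultural understanding and reasoning depth.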

India was chosen as the first region for the initiative, reflecting its linguistic diversity and its position as ChatGPT’s second-largest market.

OpenAI aims to expand the approach globally, using IndQA as a model for building culturally aware benchmarks that help measure real progress in multilingual AI performance.
