European Commission launches Culture Compass to strengthen the EU’s identity

The European Commission unveiled the Culture Compass for Europe, a framework designed to place culture at the heart of EU policies.

The initiative aims to foster the EU’s identity, celebrate diversity, and support excellence across the continent’s cultural and creative sectors.

The Compass addresses the challenges facing cultural industries, including restrictions on artistic expression, precarious working conditions for artists, unequal access to culture, and the transformative impact of AI.

It provides guidance along four key directions: upholding European values and cultural rights, empowering artists and professionals, enhancing competitiveness and social cohesion, and strengthening international cultural partnerships.

Several initiatives will support the Compass, including the EU Artists Charter for fair working conditions, a European Prize for Performing Arts, a Youth Cultural Ambassadors Network, a cultural data hub, and an AI strategy for the cultural sector.

The Commission will track progress through a new report on the State of Culture in the EU and seeks a Joint Declaration with the European Parliament and Council to reinforce political commitment.

Commission officials emphasised that the Culture Compass connects culture to Europe’s future, placing artists and creativity at the centre of policy and ensuring the sector contributes to social, economic, and international engagement.

Culture is portrayed not as a side story, but as the story of the EU itself.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU regulators, UK and eSafety lead the global push to protect children in the digital world

Children today spend a significant amount of their time online, from learning and playing to communicating.

To protect them in an increasingly digital world, Australia’s eSafety Commissioner, the European Commission’s DG CNECT, and the UK’s Ofcom have joined forces to strengthen global cooperation on child online safety.

The partnership aims to ensure that online platforms take greater responsibility for protecting and empowering children, recognising their rights under the UN Convention on the Rights of the Child.

The three regulators will continue to enforce their online safety laws to ensure platforms properly assess and mitigate risks to children. They will promote privacy-preserving age verification technologies and collaborate with civil society and academics to ensure that regulations reflect real-world challenges.

By supporting digital literacy and critical thinking, they aim to provide children and families with safer and more confident online experiences.

To advance the work, a new trilateral technical group will be established to deepen collaboration on age assurance. It will study the interoperability and reliability of such systems, explore the latest technologies, and strengthen the evidence base for regulatory action.

Through closer cooperation, the regulators hope to create a more secure and empowering digital environment for young people worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok faces scrutiny over AI moderation and UK staff cuts

TikTok has responded to the Science, Innovation and Technology Committee regarding proposed cuts to its UK Trust and Safety teams. The company claimed that reducing staff while expanding AI, third-party specialists, and more localised teams would improve moderation effectiveness.

The social media platform, however, did not provide any supporting data or risk assessment to justify these changes. MPs previously called for more transparency on content moderation data during an inquiry into social media, misinformation, and harmful algorithms.

TikTok’s increasing reliance on AI comes amid broader concerns over AI safety, following reports of chatbots encouraging harmful behaviours.

Committee Chair Dame Chi Onwurah expressed concern that AI cannot reliably replace human moderators. She warned AI could cause harm and criticised TikTok for not providing evidence that staff cuts would protect users.

The Committee urged the Government and Ofcom to take action to ensure user safety before any staffing reductions are implemented. Dame Chi Onwurah emphasised that without credible data, it is impossible to determine whether the changes will effectively protect users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Police warn of scammers posing as AFP officers in crypto fraud

Cybercriminals are exploiting Australia’s national cybercrime reporting platform, ReportCyber, to trick people into handing over cryptocurrency. The AFP-led Joint Policing Cybercrime Coordination Centre (JPC3) warns scammers are posing as police and using stolen data to file fake reports.

In one recent case, a victim was contacted by someone posing as an AFP officer and informed that their details had been found in a data breach linked to cryptocurrency. The impersonator provided an official reference number, which appeared genuine when checked on the ReportCyber portal.

A second caller, pretending to be from a crypto platform, then urged the target to transfer funds to a so-called ‘Cold Storage’ account. The victim realised the deception and ended the call before losing money.

Detective Superintendent Marie Andersson said the scam’s sophistication lay in its false sense of legitimacy and urgency. Criminals verify personal data and act quickly to pressure victims, she explained. However, growing awareness within the community has helped authorities detect such scams sooner.

Authorities are reminding the public that legitimate officers will never request access to wallets, bank accounts, or seed phrases. Australians should remain cautious, verify unexpected calls, and report any suspicious activity through official channels.

The AFP reaffirmed that ReportCyber remains a safe platform for genuine reports and continues to be a vital tool in tracking and preventing cybercrime nationwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta, TikTok and Snapchat prepare to block under-16s as Australia enforces social media ban

Social media platforms, including Meta, TikTok and Snapchat, will begin sending notices to more than a million Australian teens, telling them to download their data, freeze their profiles or lose access when the national ban for under-16s comes into force on 10 December.

According to people familiar with the plans, platforms will deactivate accounts believed to belong to users under 16; the roughly 20 million older Australians will be unaffected. Compliance marks a shift after a year of opposition from tech firms, which warned the rules would be intrusive or unworkable.

Companies plan to rely on their existing age-estimation software, which predicts age from behavioural signals such as likes and engagement patterns. Only users who challenge a block will be pushed to age assurance apps, which estimate age from a selfie and, if the result is disputed, allow users to upload ID. Trials show the tools work, but accuracy drops for 16- and 17-year-olds.
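The tiered flow described above amounts to a simple decision cascade: behavioural estimation by default, a selfie check on challenge, and ID upload as a last resort. A minimal sketch in Python, with all names, fields and thresholds purely illustrative rather than any platform’s actual system:

```python
# Hypothetical sketch of a tiered age-assurance cascade; not any
# platform's real API. Checks run from least to most intrusive.
from dataclasses import dataclass

AGE_LIMIT = 16  # Australia's ban applies to under-16s

@dataclass
class User:
    behavioural_age: float            # estimate from likes/engagement patterns
    selfie_age: float | None = None   # facial estimate, only if user challenges
    verified_age: int | None = None   # from uploaded ID, only if still disputed

def is_blocked(user: User) -> bool:
    """Most intrusive evidence wins, but it is collected only on demand."""
    if user.verified_age is not None:        # ID document is authoritative
        return user.verified_age < AGE_LIMIT
    if user.selfie_age is not None:          # selfie estimate on challenge
        return user.selfie_age < AGE_LIMIT
    return user.behavioural_age < AGE_LIMIT  # default: behavioural signals

# A user flagged by behavioural signals stays blocked unless a selfie
# or ID check indicates they are 16 or over.
print(is_blocked(User(behavioural_age=14.8)))                   # True
print(is_blocked(User(behavioural_age=14.8, selfie_age=17.2)))  # False
```

The ordering reflects the privacy trade-off regulators are weighing: the least intrusive check runs for everyone, while the accuracy problems reported for 16- and 17-year-olds arise precisely at the threshold where these estimates must make their finest distinctions.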

Yoti’s Chief Policy Officer, Julie Dawson, said disruption should be brief, with users adapting within a few weeks. Meta, Snapchat, TikTok and Google declined to comment. In earlier hearings, most respondents stated that they would comply.

The law blocks teenagers from using mainstream platforms without any parental override. It follows renewed concern over youth safety after internal Meta documents in 2021 revealed harm linked to heavy social media use.

A smooth rollout is expected to influence other countries as they explore similar measures. France, Denmark, Florida and the UK have pursued age checks with mixed results due to concerns over privacy and practicality.

Consultants say governments are watching to see whether Australia’s requirement for platforms to take ‘reasonable steps’ to block minors, including trying to detect VPN use, works in practice without causing significant disruption for other users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Private AI Compute for secure cloud-AI

In a move that underscores the evolving balance between capability and privacy in AI, Google today introduced Private AI Compute. This new cloud-based processing platform supports its most advanced models, such as those in the Gemini family, while maintaining what it describes as on-device-level data security.

The blog post explains that many emerging AI tasks now exceed the capabilities of on-device hardware alone. To solve this, Google built Private AI Compute to offload heavy computation to its cloud, powered by custom Tensor Processing Units (TPUs) and wrapped in a fortified enclave environment called Titanium Intelligence Enclaves (TIE).

The system uses remote attestation, encryption and IP-blinding relays to ensure user data remains private and inaccessible; not even Google itself is supposed to gain access.
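Google has not published a client API for Private AI Compute, but the combination it describes implies a characteristic ordering: verify the enclave’s attestation before releasing any data, encrypt so that only the attested enclave can read it, and route the request through a relay that strips the sender’s identity. A toy Python simulation of that flow, with every name hypothetical and reversible byte-flipping standing in for real cryptography:

```python
# Toy simulation of the attest-then-encrypt-then-relay pattern described
# above. All names are hypothetical and the "crypto" is a stand-in.
import hashlib
from dataclasses import dataclass

TRUSTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-build-v1").hexdigest()

@dataclass
class AttestationReport:
    code_measurement: str  # hash of the code the enclave claims to run

class Enclave:
    """Stand-in for an attested enclave such as Google's TIE."""
    def attest(self) -> AttestationReport:
        return AttestationReport(TRUSTED_MEASUREMENT)
    def handle(self, ciphertext: bytes) -> bytes:
        prompt = ciphertext[::-1].decode()       # stand-in "decryption"
        return f"reply to: {prompt}".encode()[::-1]

class Relay:
    """IP-blinding hop: forwards ciphertext without client metadata."""
    def __init__(self, enclave: Enclave):
        self.enclave = enclave
    def forward(self, ciphertext: bytes) -> bytes:
        return self.enclave.handle(ciphertext)   # sees only ciphertext

def run_private_inference(prompt: str, enclave: Enclave, relay: Relay) -> str:
    # 1. Attestation gate: send nothing to an unverified enclave.
    if enclave.attest().code_measurement != TRUSTED_MEASUREMENT:
        raise RuntimeError("enclave failed attestation")
    # 2. "Encrypt" so only the attested enclave can read the prompt.
    ciphertext = prompt.encode()[::-1]
    # 3. The relay learns who is asking but not what; the enclave
    #    learns what is asked but not by whom.
    return relay.forward(ciphertext)[::-1].decode()

enclave = Enclave()
print(run_private_inference("summarise my notes", enclave, Relay(enclave)))
```

The split of knowledge in step 3 is the core privacy claim: no single party other than the user’s device sees both the identity of the requester and the content of the request.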

Google identifies initial use-cases in its Pixel devices: features such as Magic Cue and Recorder will benefit from the extra compute, enabling more timely suggestions, multilingual summarisation and advanced context-aware assistance.

At the same time, the company says this platform ‘opens up a new set of possibilities for helpful AI experiences’ that go beyond what on-device AI alone can fully achieve.

This announcement is significant from both a digital policy and platform economy perspective. It illustrates how major technology firms are reconciling user privacy demands with the computational intensity of next-generation AI.

For organisations and governments focused on AI governance and digital diplomacy, the move raises questions about data sovereignty, the transparency of remote enclaves and the true nature of ‘secure’ cloud processing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI faces major copyright setback in US court

A US federal judge has ruled that a landmark copyright case against OpenAI can proceed, rejecting the company’s attempt to dismiss claims brought by authors and the Authors Guild.

The authors argue that ChatGPT’s summaries of copyrighted works, including George R.R. Martin’s A Game of Thrones, unlawfully replicate the original tone, plot, and characters, raising concerns about AI-generated content infringing on creative rights.

The Publishers Association (PA) welcomed the ruling, warning that generative AI could ‘devastate the market’ for books and other creative works by producing infringing content at scale.

It urged the UK government to strengthen transparency rules to protect authors and publishers, stressing that AI systems capable of reproducing an author’s style could undermine the value of original creation.

The case follows a $1.5bn settlement against Anthropic earlier this year over its use of pirated books to train models and comes amid growing scrutiny of AI firms.

In Britain, Stability AI recently avoided a copyright ruling after a claim by Getty Images was dismissed on grounds of jurisdiction. Still, the PA stated that the outcome highlighted urgent gaps in UK copyright law regarding AI training and output.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brussels leak signals GDPR and AI Act adjustments

The European Commission is preparing a Digital Package on simplification, due on 19 November. A leaked draft outlines instruments covering GDPR, ePrivacy, Data Act and AI Act reforms.

Plans include a single breach portal and a higher reporting threshold. Authorities would receive notifications within 96 hours, with standardised forms and narrower triggers. Controllers could reject or charge for data subject access requests used to pursue disputes.

Cookie rules would shift toward browser-level preference signals respected across services. Aggregated measurement and security uses would not require consent popups, and the GDPR’s set of lawful bases would be expanded. News publishers could receive limited exemptions in recognition of their reliance on advertising revenues.
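The draft does not specify how such a preference signal would work, but the Sec-GPC header defined by the existing Global Privacy Control proposal illustrates the pattern: the browser transmits the user’s choice with every request, and services consult it instead of showing a banner. A minimal sketch, with the banner fallback hypothetical:

```python
# Sketch of honouring a browser-level preference signal instead of a
# cookie popup. Sec-GPC comes from the Global Privacy Control proposal;
# the leaked draft does not name a specific mechanism.

def consent_to_tracking(headers: dict[str, str]) -> bool:
    """Check the browser-level signal before falling back to a banner."""
    if headers.get("Sec-GPC") == "1":
        return False               # preference set once, respected everywhere
    return ask_user_via_banner()   # hypothetical fallback when no signal is set

def ask_user_via_banner() -> bool:
    return False  # stand-in: no tracking without an explicit opt-in

# Aggregated measurement and security logging would proceed regardless,
# since the draft would exempt those uses from consent requirements.
print(consent_to_tracking({"Sec-GPC": "1"}))  # False: signal respected
print(consent_to_tracking({}))                # False: banner stand-in declines
```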

The draft recognises legitimate interest as a lawful basis for training AI models on personal data. Narrow allowances are provided for sensitive data during development, along with EU-wide data protection impact assessment templates. Critics warn the proposals dilute safeguards and may soften the AI Act.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google and Cassava expand Gemini access in Africa

Google announced a partnership with Cassava Technologies to widen access to Gemini across Africa. The deal includes data-free Gemini usage for eligible users coordinated through Cassava’s network partners. The initiative aims to address affordability and adoption barriers for mobile users.

A six-month trial of the Google AI Plus plan is part of the package. Benefits include access to more capable Gemini models and added cloud storage. Regional tech outlets have confirmed the core details.

Education features were highlighted, including NotebookLM for study aids and Gemini in Docs for writing support. Google said the offer aims to help students, teachers, and creators work without worrying about data usage. Reports highlight a focus on youth and skills development.

Cassava’s role aligns with broader investments in AI infrastructure and services across the continent; recent announcements reference model exchanges and planned AI facilities that support regional development. Observers see momentum behind accessible AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Northern Ireland teachers reclaim hours with AI

A six-month pilot across Northern Ireland put Gemini and Workspace into classrooms. One hundred teachers participated under the Education Authority’s C2k programme. Reported benefits centred on time savings and practical support for everyday teaching.

Participants said they saved around ten hours per week on routine tasks, with the freed time redirected to pupil engagement and professional development. More than six hundred use cases from the one hundred participants were documented during the trial period.

Teachers cited varied applications, from drafting parent letters to generating risk assessments quickly. NotebookLM helped transform curriculum materials into podcasts and interactive mind maps. Inclusive lessons were tailored, including Irish language activities and support for neurodivergent learners.

C2k plans wider training so more Northern Ireland educators can adopt the tools responsibly. Leadership framed AI as a collaborator, not a replacement for teachers. Further partnerships are expected to align products with established pedagogical principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!