EU investigates Google over potential Digital Markets Act breach

The European Commission has opened an investigation into whether Google may be breaching the Digital Markets Act by unfairly demoting news publishers in search results.

The inquiry centres on Google’s ‘site reputation abuse’ policy, which appears to lower rankings for publishers that host content from commercial partners, even when those partnerships support legitimate ways of monetising online journalism.

The Commission is examining whether Alphabet’s approach restricts publishers from conducting business, innovating, and cooperating with third-party content providers. Officials highlighted concerns that such demotions may undermine revenue at a difficult moment for the media sector.

These proceedings do not imply a final decision; instead, they allow the EU to gather evidence and assess Google’s practices in detail.

If the Commission finds evidence of non-compliance, it will present preliminary findings and request corrective measures. The investigation is expected to conclude within 12 months.

Under the DMA, infringements can lead to fines of up to 10% of a company’s worldwide turnover, rising to 20% for repeated violations, alongside possible structural remedies.
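
For a sense of scale, here is a minimal sketch of how those ceilings translate into figures; the turnover value is hypothetical, not Alphabet’s actual revenue:

```python
# Illustrative only: DMA fine ceilings as a function of worldwide turnover.

def dma_fine_cap(worldwide_turnover: float, repeat_offender: bool = False) -> float:
    """Maximum DMA fine: 10% of worldwide turnover, 20% for repeated infringements."""
    rate = 0.20 if repeat_offender else 0.10
    return worldwide_turnover * rate

turnover = 300e9  # hypothetical annual worldwide turnover, in euros
print(f"First infringement cap:  EUR {dma_fine_cap(turnover):,.0f}")
print(f"Repeat infringement cap: EUR {dma_fine_cap(turnover, repeat_offender=True):,.0f}")
```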

Senior Commissioners stressed that gatekeepers must offer fair and non-discriminatory access to their platforms. They argued that protecting publishers’ ability to reach audiences supports media pluralism, innovation, and democratic resilience.

Google Search, designated as a core platform service under the DMA, has been required to comply fully with the regulation since March 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York Times lawsuit prompts OpenAI to strengthen privacy protections

OpenAI says a New York Times demand to hand over 20 million private ChatGPT conversations threatens user privacy and breaks with established security norms. The request forms part of the Times’ lawsuit over alleged misuse of its content.

The company argues the demand would expose highly personal chats from people with no link to the case. It previously resisted broader requests, including one seeking more than a billion conversations, and says the latest move raises similar concerns about proportionality.

OpenAI says it offered privacy-preserving alternatives, such as targeted searches and high-level usage data, but these were rejected. It adds that chats covered by the order are being de-identified and stored in a secure, legally restricted environment.
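
OpenAI has not published the pipeline it uses, but de-identification typically means stripping direct identifiers and pseudonymising account references. A minimal sketch, assuming a simple redact-and-pseudonymise approach (the patterns and function names are illustrative, not OpenAI’s):

```python
import hashlib
import re

# Hypothetical de-identification pass: redact obvious identifiers and
# replace the user ID with a stable pseudonym. Illustrative only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def de_identify(user_id: str, text: str) -> tuple[str, str]:
    """Replace direct identifiers with placeholders and pseudonymise the user ID."""
    scrubbed = EMAIL.sub("[EMAIL]", text)
    scrubbed = PHONE.sub("[PHONE]", scrubbed)
    pseudonym = hashlib.sha256(user_id.encode()).hexdigest()[:12]
    return pseudonym, scrubbed

print(de_identify("user-42", "Reach me at jane@example.com or +1 555 010 7788."))
```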

The dispute arises as OpenAI accelerates its security roadmap, which includes plans for client-side encryption and automated systems that detect serious safety risks without requiring broad human access. These measures aim to ensure private conversations remain inaccessible to external parties.
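
OpenAI has not detailed its encryption design. As a rough illustration of the client-side principle (the key stays on the user’s device, so the server holds only ciphertext), a sketch using the widely available `cryptography` package:

```python
from cryptography.fernet import Fernet

# Client-side principle: the key is generated and kept on the user's device,
# so the server (or any party subpoenaing it) sees only ciphertext.
# Illustrative only; this is not OpenAI's published design.
client_key = Fernet.generate_key()   # stays on the device
cipher = Fernet(client_key)

message = b"A private conversation with the assistant."
ciphertext = cipher.encrypt(message)  # what the server would store

# Only the holder of client_key can recover the plaintext.
assert cipher.decrypt(ciphertext) == message
```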

OpenAI maintains that strong privacy protections are essential as AI tools handle increasingly sensitive tasks. It says it will challenge any attempt to make private conversations public and will continue to update users as the legal process unfolds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI credentials grow as AWS launches practical training pathway

AWS is launching four solutions to help close the AI skills gap as demand rises and job requirements shift. The company positions the new tools as a comprehensive learning journey, offering structured pathways that progress from foundational knowledge to hands-on practice and formal validation.

AWS Skill Builder now hosts over 220 free AI courses, ranging from beginner introductions to advanced topics in generative and agentic AI. The platform enables learners to build skills at their own pace, with flexible study options that accommodate work schedules.

Practical experience anchors the new suite. The Meeting Simulator helps learners explain AI concepts to realistic personas and refine communication with instant feedback. Cohorts Studio offers team-based training through study groups, boot camps, and game-based challenges.

AWS is expanding its credential portfolio with the AWS Certified Generative AI Developer – Professional certification. The exam helps cloud practitioners demonstrate proficiency in foundation models, RAG architectures, and responsible deployment, supported by practice tasks and simulated environments.
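
For a flavour of one exam topic, here is a toy retrieval-augmented generation (RAG) loop. Real systems use vector embeddings and a foundation-model call, both stubbed out here; this sketch is not drawn from AWS exam material:

```python
# Toy RAG pattern: retrieve the most relevant document, then build a prompt.
DOCS = [
    "Foundation models are trained on broad data and adapted to many tasks.",
    "RAG retrieves relevant documents and adds them to the model's prompt.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the query (stand-in for vector search)."""
    words = set(query.lower().split())
    return max(docs, key=lambda d: len(words & set(d.lower().split())))

def answer(query: str) -> str:
    context = retrieve(query, DOCS)
    # In a real pipeline, this prompt would be sent to a foundation model.
    return f"Context: {context}\nQuestion: {query}"

print(answer("How does RAG work?"))
```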

Learners can validate hands-on capability through new microcredentials that require troubleshooting and implementation in real AWS settings. Combined credentials signal both conceptual understanding and task-ready skills, with Skill Builder’s broad course library offering a clear starting point for career progression.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Irish regulator opens DSA probe into X

Ireland’s media watchdog has opened a formal investigation into X under the EU’s Digital Services Act. Regulators will assess appeal rights and internal complaint handling after reports of inaccessible processes for users.

Irish officials will examine whether users can challenge refusals to remove reported content and receive clear outcomes. Potential penalties reach up to 6% of global turnover for confirmed breaches.

The case stems from ongoing supervision, a user complaint, and information from HateAid, marking the first such probe by Ireland. Wider EU scrutiny continues across very large online platforms.

Other platforms, including Meta and TikTok, have faced DSA actions, underscoring tighter enforcement across the bloc. Remedial measures and transparency improvements could follow if non-compliance is found.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Commission launches Culture Compass to strengthen the EU identity

The European Commission unveiled the Culture Compass for Europe, a framework designed to place culture at the heart of EU policies.

The initiative aims to foster the EU’s identity, celebrate diversity, and support excellence across the continent’s cultural and creative sectors.

The Compass addresses the challenges facing cultural industries, including restrictions on artistic expression, precarious working conditions for artists, unequal access to culture, and the transformative impact of AI.

It provides guidance along four key directions: upholding European values and cultural rights, empowering artists and professionals, enhancing competitiveness and social cohesion, and strengthening international cultural partnerships.

Several initiatives will support the Compass, including the EU Artists Charter for fair working conditions, a European Prize for Performing Arts, a Youth Cultural Ambassadors Network, a cultural data hub, and an AI strategy for the cultural sector.

The Commission will track progress through a new report on the State of Culture in the EU and seeks a Joint Declaration with the European Parliament and Council to reinforce political commitment.

Commission officials emphasised that the Culture Compass connects culture to Europe’s future, placing artists and creativity at the centre of policy and ensuring the sector contributes to social, economic, and international engagement.

Culture is portrayed not as a side story, but as the story of the EU itself.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU regulators, UK and eSafety lead the global push to protect children in the digital world

Children today spend a significant amount of their time online, from learning and playing to communicating.

To protect them in an increasingly digital world, Australia’s eSafety Commissioner, the European Commission’s DG CNECT, and the UK’s Ofcom have joined forces to strengthen global cooperation on child online safety.

The partnership aims to ensure that online platforms take greater responsibility for protecting and empowering children, recognising their rights under the UN Convention on the Rights of the Child.

The three regulators will continue to enforce their online safety laws to ensure platforms properly assess and mitigate risks to children. They will promote privacy-preserving age verification technologies and collaborate with civil society and academics to ensure that regulations reflect real-world challenges.

By supporting digital literacy and critical thinking, they aim to provide children and families with safer and more confident online experiences.

To advance the work, a new trilateral technical group will be established to deepen collaboration on age assurance. It will study the interoperability and reliability of such systems, explore the latest technologies, and strengthen the evidence base for regulatory action.

Through closer cooperation, the regulators hope to create a more secure and empowering digital environment for young people worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok faces scrutiny over AI moderation and UK staff cuts

TikTok has responded to the Science, Innovation and Technology Committee regarding proposed cuts to its UK Trust and Safety teams. The company claimed that reducing staff while expanding its use of AI, third-party specialists, and more localised teams would improve moderation effectiveness.

The social media platform, however, did not provide any supporting data or risk assessment to justify these changes. MPs previously called for more transparency on content moderation data during an inquiry into social media, misinformation, and harmful algorithms.

TikTok’s increasing reliance on AI comes amid broader concerns over AI safety, following reports of chatbots encouraging harmful behaviours.

Committee Chair Dame Chi Onwurah expressed concern that AI cannot reliably replace human moderators. She warned AI could cause harm and criticised TikTok for not providing evidence that staff cuts would protect users.

The Committee urged the Government and Ofcom to take action to ensure user safety before the staffing reductions are implemented. Dame Chi Onwurah emphasised that, without credible data, it is impossible to determine whether the changes will effectively protect users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta, TikTok and Snapchat prepare to block under-16s as Australia enforces social media ban

Social media platforms, including Meta, TikTok and Snapchat, will begin sending notices to more than a million Australian teens, telling them to download their data, freeze their profiles or lose access when the national ban for under-16s comes into force on 10 December.

According to people familiar with the plans, platforms will deactivate accounts believed to belong to users under the age of 16; the roughly 20 million older Australians will not be affected. The move marks a shift after a year of opposition from tech firms, which warned the rules would be intrusive or unworkable.

Companies plan to rely on their existing age-estimation software, which predicts age from behavioural signals such as likes and engagement patterns. Only users who challenge a block will be directed to dedicated age assurance apps. These tools estimate age from a selfie and, if the estimate is disputed, allow users to upload ID. Trials show they work, but accuracy drops for 16- and 17-year-olds.
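
A minimal sketch of that tiered flow, assuming hypothetical thresholds and labels (each platform’s actual pipeline is proprietary):

```python
from enum import Enum

# Tiered age-assurance flow as described above: behavioural estimate first,
# escalation to selfie-based checks (then ID) only on challenge.
# Threshold and scores are hypothetical.
class Decision(Enum):
    ALLOW = "allow"
    DEACTIVATE = "deactivate"
    ESCALATE = "escalate to age assurance app"

def screen_account(behavioural_age: float, challenged: bool) -> Decision:
    if behavioural_age >= 16:
        return Decision.ALLOW
    if not challenged:
        return Decision.DEACTIVATE  # account frozen by default
    return Decision.ESCALATE        # selfie estimate, then ID upload if disputed

print(screen_account(behavioural_age=14.2, challenged=True))
```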

Yoti’s Chief Policy Officer, Julie Dawson, said disruption should be brief, with users adapting within a few weeks. Meta, Snapchat, TikTok and Google declined to comment. In earlier hearings, most respondents stated that they would comply.

The law bars under-16s from mainstream platforms, with no parental override available. It follows renewed concern over youth safety after internal Meta documents revealed in 2021 harms linked to heavy social media use.

A smooth rollout is expected to influence other countries as they explore similar measures. France, Denmark, Florida and the UK have pursued age checks with mixed results due to concerns over privacy and practicality.

Consultants say governments are watching to see whether Australia’s requirement for platforms to take ‘reasonable steps’ to block minors, including trying to detect VPN use, works in practice without causing significant disruption for other users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Language models mimic human belief reasoning

In a recent paper, researchers at Stevens Institute of Technology revealed that large language models (LLMs) use a small, specialised subset of their parameters to perform tasks associated with the psychological concept of ‘Theory of Mind’ (ToM), the human ability to infer others’ beliefs, intentions and perspectives.

The study found that although LLMs activate almost their whole network for each input, the ToM-related reasoning appears to rely disproportionately on a narrow internal circuit, particularly shaped by the model’s positional encoding mechanism.
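
The paper’s methods are more involved, but the underlying probe can be caricatured as an ablation test: zero out a small subset of weights and see how much the output moves. A toy sketch, with a random linear map standing in for the model:

```python
import numpy as np

# Toy ablation probe, loosely in the spirit of the study. The "model" is a
# random linear map, not an LLM; the 2% ablation fraction is illustrative.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))   # stand-in for a model's weights
x = rng.normal(size=64)         # stand-in for a ToM-task input

baseline = W @ x

mask = rng.random(W.shape) < 0.02   # ablate ~2% of parameters
W_ablated = np.where(mask, 0.0, W)
ablated = W_ablated @ x

# Large drift after ablating few weights would suggest those weights carry the task.
drift = np.linalg.norm(baseline - ablated) / np.linalg.norm(baseline)
print(f"Relative output drift after ablating 2% of weights: {drift:.3f}")
```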

This discovery matters because it highlights a significant efficiency gap between human brains and current AI systems: humans carry out social-cognitive tasks with only a tiny fraction of neural activity, whereas LLMs still consume substantial computational resources even for ‘simple’ reasoning.

The researchers suggest these findings could guide the design of more brain-inspired AI models that selectively activate only the parameters needed for a particular task.

From a policy and digital-governance perspective, this raises questions about how we interpret AI’s understanding and social cognition.

If AI can exhibit behaviour that resembles human belief-reasoning, oversight frameworks and transparency standards become all the more critical in assessing what AI systems are doing, and what they are capable of.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI faces major copyright setback in US court

A US federal judge has ruled that a landmark copyright case against OpenAI can proceed, rejecting the company’s attempt to dismiss claims brought by authors and the Authors Guild.

The authors argue that ChatGPT’s summaries of copyrighted works, including George R.R. Martin’s A Game of Thrones, unlawfully replicate the original tone, plot, and characters, raising concerns about AI-generated content infringing on creative rights.

The Publishers Association (PA) welcomed the ruling, warning that generative AI could ‘devastate the market’ for books and other creative works by producing infringing content at scale.

It urged the UK government to strengthen transparency rules to protect authors and publishers, stressing that AI systems capable of reproducing an author’s style could undermine the value of original creation.

The case follows Anthropic’s $1.5bn settlement earlier this year over its use of pirated books to train its models, and comes amid growing scrutiny of AI firms.

In Britain, Stability AI recently avoided a copyright ruling after a claim by Getty Images was dismissed on grounds of jurisdiction. Still, the PA stated that the outcome highlighted urgent gaps in UK copyright law regarding AI training and output.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!