Noyb study points to demand for tracking-free option

A new study commissioned by noyb reports that most users favour a tracking-free advertising option when navigating Pay or Okay systems. Researchers found low genuine support for data collection when participants were asked without pressure.

Consent rates rose sharply when users were presented only with payment or agreement to tracking, leading most to select consent. Findings indicate that the absence of a realistic alternative shapes outcomes more than actual preference.

Introduction of a third option featuring advertising without tracking prompted a strong shift, with most participants choosing that route. Evidence suggests users accept ad-funded models provided their behavioural data remains untouched.

Researchers observed similar patterns on social networks, news sites and other platforms, undermining claims that certain sectors require special treatment. Debate continues as regulators assess whether Pay or Okay complies with EU data protection rules such as the GDPR.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

NITDA warns of prompt injection risks in ChatGPT models

Nigeria’s National Information Technology Development Agency (NITDA) has issued an urgent advisory on security weaknesses in OpenAI’s ChatGPT models. The agency warned that flaws affecting GPT-4o and GPT-5 could expose users to data leakage through indirect prompt injection.

According to NITDA’s Computer Emergency Readiness and Response Team, seven critical flaws were identified that allow hidden instructions to be embedded in web content. Malicious prompts can be triggered during routine browsing, search or summarisation without user interaction.

The advisory warned that attackers can bypass safety filters, exploit rendering bugs and manipulate conversation context. Some techniques allow injected instructions to persist across future interactions by interfering with the models’ memory functions.

While OpenAI has addressed parts of the issue, NITDA said large language models still struggle to reliably distinguish malicious data from legitimate input. Risks include unintended actions, information leakage and long-term behavioural influence.

NITDA urged users and organisations in Nigeria to apply updates promptly and limit browsing or memory features when not required. The agency said that exposing AI systems to external tools increases their attack surface and demands stronger safeguards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU gains stronger ad oversight after TikTok agreement

Regulators in the EU have accepted binding commitments from TikTok aimed at improving advertising transparency under the Digital Services Act.

An agreement that follows months of scrutiny and addresses concerns raised in the Commission’s preliminary findings earlier in the year.

TikTok will now provide complete versions of advertisements exactly as they appear in user feeds, along with associated URLs, targeting criteria and aggregated demographic data.

Researchers will gain clearer insight into how advertisers reach users, rather than relying on partial or delayed information. The platform has also agreed to refresh its advertising repository within 24 hours.

Further improvements include new search functions and filters that make it easier for the public, civil society and regulators to examine advertising content.

These changes are intended to support efforts to detect scams, identify harmful products and analyse coordinated influence operations, especially around elections.

TikTok must implement its commitments to the EU within deadlines ranging from two to twelve months, depending on each measure.

The Commission will closely monitor compliance while continuing broader investigations into algorithmic design, protection of minors, data access and risks connected to elections and civic discourse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU ministers call for faster action on digital goals

European ministers have adopted conclusions aimed to boosting the Union’s digital competitiveness, urging quicker progress toward the 2030 digital decade goals.

Officials called for stronger digital skills, wider adoption of technology, and a framework that supports innovation while protecting fundamental rights. Digital sovereignty remains a central objective, framed as open, risk-based and aligned with European values.

Ministers supported simplifying digital rules for businesses, particularly SMEs and start-ups, which face complex administrative demands. A predictable legal environment, less reporting duplication and more explicit rules were seen as essential for competitiveness.

Governments emphasised that simplification must not weaken data protection or other core safeguards.

Concerns over online safety and illegal content were a prominent feature in discussions on enforcing the Digital Services Act. Ministers highlighted the presence of harmful content and unsafe products on major marketplaces, calling for stronger coordination and consistent enforcement across member states.

Ensuring full compliance with EU consumer protection and product safety rules was described as a priority.

Cyber-resilience was a key focus as ministers discussed the increasing impact of cyberattacks on citizens and the economy. Calls for stronger defences grew as digital transformation accelerated, with several states sharing updates on national and cross-border initiatives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Australia introduces new codes to protect children online

Australian regulators have released new guidance ahead of the introduction of industry codes designed to protect children from exposure to harmful online material.

The Age Restricted Material Codes will apply to a wide range of online services, including app stores, social platforms, equipment providers, pornography sites and generative AI services, with the first tranche beginning on 27 December.

The rules require search engines to blur image results involving pornography or extreme violence to reduce accidental exposure among young users.

Search services must also redirect people seeking information related to suicide, self-harm or eating disorders to professional mental health support instead of allowing harmful spirals to unfold.

eSafety argues that many children unintentionally encounter disturbing material at very young ages, often through search results that act as gateways rather than deliberate choices.

The guidance emphasises that adults will still be able to access unblurred material by clicking through, and there is no requirement for Australians to log in or identify themselves before searching.

eSafety maintains that the priority lies in shielding children from images and videos they cannot cognitively process or forget once they have seen them.

These codes will operate alongside existing standards that tackle unlawful content and will complement new minimum age requirements for social media, which are set to begin in mid-December.

Authorities in Australia consider the reforms essential for reducing preventable harm and guiding vulnerable users towards appropriate support services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AMVER Awards honour UK maritime companies for search-and-rescue commitment

At an event hosted on 2 December by VIKAND in partnership with the United States Coast Guard (USCG), 30 UK maritime companies were awarded for their continued commitment to safety at sea through the AMVER system.

In total, 255 vessels under their operation represent 1,587 collective years of eligibility in AMVER. However, this reflects decades of voluntary participation in a global ship-reporting network that helps coordinate rescue operations far from shore using real-time vessel-position data.

Speakers at the ceremony emphasised that AMVER remains essential for ‘mariners helping mariners’, enabling merchant vessels to respond swiftly to distress calls anywhere in the world, regardless of nationality.

Representatives from maritime insurers, navigational-services firms and classification societies underscored the continuing importance of collaboration, readiness and mutual support across the global shipping industry.

This recognition illustrates how safety and solidarity at sea continue to matter deeply in an industry facing mounting pressures, from regulatory change to environmental and geopolitical risks. The awards reaffirm the UK fleet’s active role in keeping maritime trade not only productive, but also ready to save lives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe builds a laser ground station in Greenland to protect satellite links

Europe is building a laser-based ground station in Greenland to secure satellite links as Russian jamming intensifies. ESA and Denmark chose Kangerlussuaq for its clear skies and direct access to polar-orbit traffic.

The optical system uses Astrolight’s technology to transmit data markedly faster than radio signals. Narrow laser beams resist interference, allowing vast imaging sets to reach analysts with far fewer disruptions.

Developers expect terabytes to be downloaded in under a minute, reducing reliance on vulnerable Arctic radio sites. European officials say the upgrade strengthens autonomy as undersea cables and navigation systems face repeated targeting from countries such as Russia.

The Danish station will support defence monitoring, climate science and search-and-rescue operations across high latitudes. Work is underway, with completion planned for 2026 and ambitions for a wider global laser network.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Pope urges guidance for youth in an AI-shaped world

Pope Leo XIV urged global institutions to guide younger generations as they navigate the expanding influence of AI. He warned that rapid access to information cannot replace the deeper search for meaning and purpose.

Previously, the Pope had warned students not to rely solely on AI for educational support. He encouraged educators and leaders to help young people develop discernment and confidence when encountering digital systems.

Additionally, he called for coordinated action across politics, business, academia and faith communities to steer technological progress toward the common good. He argued that AI development should not be treated as an inevitable pathway shaped by narrow interests.

He noted that AI reshapes human relationships and cognition, raising concerns about its effects on freedom, creativity and contemplation. He insisted that safeguarding human dignity is essential to managing AI’s wide-ranging consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Ireland and Australia deepen cooperation on online safety

Ireland’s online safety regulator has agreed a new partnership with Australia’s eSafety Commissioner to strengthen global approaches to digital harm. The Memorandum of Understanding (MoU) reinforces shared ambitions to improve online protection for children and adults.

The Irish and Australian plan to exchange data, expertise and methodological insights to advance safer digital platforms. Officials describe the arrangement as a way to enhance oversight of systems used to minimise harmful content and promote responsible design.

Leaders from both organisations emphasised the need for accountability across the tech sector. Their comments highlighted efforts to ensure that platforms embed user protection into their product architecture, rather than relying solely on reactive enforcement.

The MoU also opens avenues for collaborative policy development and joint work on education programs. Officials expect a deeper alignment around age assurance technologies and emerging regulatory challenges as online risks continue to evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

LLM shortcomings highlighted by Gary Marcus during industry debate

Gary Marcus argued at Axios’ AI+ Summit that large language models (LLMs) offer utility but fall short of the transformative claims made by their developers. He framed their fundamental role as groundwork for future artificial general intelligence. He suggested that meaningful capability shifts lie beyond today’s systems.

Marcus said alignment challenges stem from LLMs lacking robust world models and reliable constraints. He noted that models still hallucinate despite explicit instructions to avoid errors. He described current systems as an early rehearsal rather than a route to AGI.

Concerns raised included bias, misinformation, environmental impact and implications for education. Marcus also warned about the decline of online information quality as automated content spreads. He believes structural flaws make these issues persistent.

Industry momentum remains strong despite unresolved risks. Developers continue to push forward without clear explanations for model behaviour. Investment flows remain focused on the promise of AGI, despite timelines consistently shifting.

Strategic competition adds pressure, with the United States seeking to maintain an edge over China in advanced AI. Political signals reinforce the drive toward rapid development. Marcus argued that stronger frameworks are needed before systems scale further.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!