Less personalised ads option coming to Meta platforms in the EU

Meta has agreed to introduce a less personalised ads option for Facebook and Instagram users in the EU, as part of efforts to comply with the bloc’s Digital Markets Act and address concerns over data use and user consent.

Under the revised model, users will be able to access Meta’s social media platforms without agreeing to extensive personal data processing for fully personalised ads. Instead, they can opt for an alternative experience based on significantly reduced data inputs, resulting in more limited ad targeting.

The option is set to roll out across the EU from January 2026. It marks the first time Meta has offered users a clear choice between highly personalised advertising and a reduced-data model across its core platforms.

The change follows months of engagement between Meta and Brussels after the European Commission ruled in April that the company had breached the DMA. Regulators stated that Meta’s previous approach had failed to provide users with a genuine and effective choice over how their data was used for advertising.

Once the option is implemented, the Commission said, it will gather evidence and feedback from Meta, advertisers, publishers, and other stakeholders. The goal is to assess the extent to which the new option is adopted and whether it significantly reshapes competition and data practices in the EU digital advertising market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google faces renewed EU scrutiny over AI competition

The European Commission has opened a formal antitrust investigation into whether AI features embedded in online search are being used to unfairly squeeze competitors in newly emerging digital markets shaped by generative AI.

The probe targets Alphabet-owned Google, focusing on allegations that the company imposes restrictive conditions on publishers and content creators while giving its own AI-driven services preferential placement over rival technologies and alternative search offerings.

Regulators are examining products such as AI Overviews and AI Mode, assessing how publisher content is reused within AI-generated summaries and whether media organisations are compensated in a clear, fair, and transparent manner.

EU competition chief Teresa Ribera said the European Commission’s action reflects a broader effort to protect online media and preserve competitive balance as artificial intelligence increasingly shapes how information is produced, discovered, and monetised.

The case adds to years of scrutiny by the European Commission over Google’s search and advertising businesses, even as the company proposes changes to its ad tech operations and continues to challenge earlier antitrust rulings.

New spyware threat alerts issued by Apple and Google

Apple and Google have issued a fresh round of cyber threat notifications, warning users worldwide they may have been targeted by sophisticated surveillance operations linked to state-backed actors.

Apple said it sent alerts on 2 December, confirming it has now notified users in more than 150 countries, though it declined to disclose how many people were affected or who was responsible.

Google followed on 3 December, announcing warnings for several hundred accounts targeted by Intellexa spyware across multiple countries in Africa, Central Asia, and the Middle East.

The Alphabet-owned company said Intellexa continues to evade restrictions despite US sanctions, highlighting persistent challenges in limiting the spread of commercial surveillance tools.

Researchers say such alerts raise costs for cyber spies by exposing victims, often triggering investigations that can lead to public scrutiny and accountability over spyware misuse.

Japan weighs easing rules on personal data use

Japan is preparing to relax restrictions on personal data use to support rapid AI development. Government sources say a draft bill aims to expand third-party access to sensitive information.

Plans include allowing medical histories and criminal records to be obtained without consent for statistical purposes. Japanese officials argue such access could accelerate research while strengthening domestic competitiveness.

New administrative fines would target companies that profit from unlawfully acquired data affecting large groups. Penalties would match any gains made through misconduct, reflecting growing concern over privacy abuses.

A government panel has been reviewing the law since 2023 and intends to present reforms soon. Debate is expected to intensify, as critics warn that loosening the rules to support AI development could heighten risks to individual rights.

Australia seals $4.6 billion deal for new AI hub

OpenAI has partnered with Australian data centre operator NextDC to build a major AI campus in western Sydney. The companies signed an agreement covering development, planning and long-term operation of the vast site.

NextDC said the project will include a supercluster of graphics processors to support advanced AI workloads. Both firms intend to create infrastructure capable of meeting rapid global demand for high-performance computing.

Australia estimates the development at A$7 billion and forecasts thousands of jobs during construction and ongoing roles across engineering and operations. Officials say the initiative aligns with national efforts to strengthen technological capability.

Plans feature renewable energy procurement and cooling systems that avoid drinking water use, addressing sustainability concerns. Treasurer Jim Chalmers said the project reflects growing confidence in Australia’s talent, clean energy capacity and emerging AI economy.

Noyb study points to demand for tracking-free option

A new study commissioned by noyb reports that most users favour a tracking-free advertising option when navigating Pay or Okay systems. Researchers found low genuine support for data collection when participants were asked without pressure.

Consent rates rose sharply when users were offered only a choice between paying and agreeing to tracking, with most selecting consent. The findings indicate that the absence of a realistic alternative shapes outcomes more than actual preference.

Introduction of a third option featuring advertising without tracking prompted a strong shift, with most participants choosing that route. Evidence suggests users accept ad-funded models provided their behavioural data remains untouched.

Researchers observed similar patterns on social networks, news sites and other platforms, undermining claims that certain sectors require special treatment. Debate continues as regulators assess whether Pay or Okay complies with EU data protection rules such as the GDPR.

NITDA warns of prompt injection risks in ChatGPT models

Nigeria’s National Information Technology Development Agency (NITDA) has issued an urgent advisory on security weaknesses in OpenAI’s ChatGPT models. The agency warned that flaws affecting GPT-4o and GPT-5 could expose users to data leakage through indirect prompt injection.

According to NITDA’s Computer Emergency Readiness and Response Team, seven critical flaws were identified that allow hidden instructions to be embedded in web content. Malicious prompts can be triggered during routine browsing, search or summarisation without user interaction.

The advisory warned that attackers can bypass safety filters, exploit rendering bugs and manipulate conversation context. Some techniques allow injected instructions to persist across future interactions by interfering with the models’ memory functions.

While OpenAI has addressed parts of the issue, NITDA said large language models still struggle to reliably distinguish malicious data from legitimate input. Risks include unintended actions, information leakage and long-term behavioural influence.
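The advisory's core point is that models treat text fetched from the web as trustworthy input. A minimal, illustrative sketch of the kind of pre-filtering a developer might apply to untrusted web content before passing it to a model (the patterns, function names, and delimiter tag here are hypothetical examples, not drawn from NITDA's advisory or OpenAI's actual defences):

```python
import re

# Illustrative phrases that often mark indirect prompt injection attempts
# hidden in fetched web content. A real deployment would need far more
# robust detection; regex matching alone is easily evaded.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) (prior |previous )?instructions",
    r"disregard (the )?(above|system) (prompt|instructions)",
    r"reveal (your )?(system prompt|hidden instructions)",
]


def flag_injection(text: str) -> list[str]:
    """Return the suspicious patterns matched in untrusted fetched text."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]


def sanitise_for_model(untrusted: str) -> str:
    """Reject clearly tainted input and wrap the rest in delimiters,
    signalling to the model that this is data, not instructions."""
    if flag_injection(untrusted):
        raise ValueError("possible prompt injection detected")
    return f"<untrusted_web_content>\n{untrusted}\n</untrusted_web_content>"
```

The delimiter-wrapping step reflects a common mitigation pattern: separating untrusted content from the instruction channel so downstream prompts can tell the model to treat the wrapped text purely as material to summarise, never as commands.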

NITDA urged users and organisations in Nigeria to apply updates promptly and limit browsing or memory features when not required. The agency said that exposing AI systems to external tools increases their attack surface and demands stronger safeguards.

EU gains stronger ad oversight after TikTok agreement

Regulators in the EU have accepted binding commitments from TikTok aimed at improving advertising transparency under the Digital Services Act.

The agreement follows months of scrutiny and addresses concerns raised in the Commission’s preliminary findings earlier in the year.

TikTok will now provide complete versions of advertisements exactly as they appear in user feeds, along with associated URLs, targeting criteria and aggregated demographic data.

Researchers will gain clearer insight into how advertisers reach users, rather than relying on partial or delayed information. The platform has also agreed to refresh its advertising repository within 24 hours.

Further improvements include new search functions and filters that make it easier for the public, civil society and regulators to examine advertising content.

These changes are intended to support efforts to detect scams, identify harmful products and analyse coordinated influence operations, especially around elections.

TikTok must implement its commitments to the EU within deadlines ranging from two to twelve months, depending on each measure.

The Commission will closely monitor compliance while continuing broader investigations into algorithmic design, protection of minors, data access and risks connected to elections and civic discourse.

EU ministers call for faster action on digital goals

European ministers have adopted conclusions aimed at boosting the Union’s digital competitiveness, urging quicker progress toward the 2030 Digital Decade targets.

Officials called for stronger digital skills, wider adoption of technology, and a framework that supports innovation while protecting fundamental rights. Digital sovereignty remains a central objective, framed as open, risk-based and aligned with European values.

Ministers supported simplifying digital rules for businesses, particularly SMEs and start-ups, which face complex administrative demands. A predictable legal environment, less reporting duplication and more explicit rules were seen as essential for competitiveness.

Governments emphasised that simplification must not weaken data protection or other core safeguards.

Concerns over online safety and illegal content were a prominent feature in discussions on enforcing the Digital Services Act. Ministers highlighted the presence of harmful content and unsafe products on major marketplaces, calling for stronger coordination and consistent enforcement across member states.

Ensuring full compliance with EU consumer protection and product safety rules was described as a priority.

Cyber-resilience was a key focus as ministers discussed the increasing impact of cyberattacks on citizens and the economy. Calls for stronger defences grew as digital transformation accelerated, with several states sharing updates on national and cross-border initiatives.

Australia introduces new codes to protect children online

Australian regulators have released new guidance ahead of the introduction of industry codes designed to protect children from exposure to harmful online material.

The Age Restricted Material Codes will apply to a wide range of online services, including app stores, social platforms, equipment providers, pornography sites and generative AI services, with the first tranche beginning on 27 December.

The rules require search engines to blur image results involving pornography or extreme violence to reduce accidental exposure among young users.

Search services must also redirect people seeking information related to suicide, self-harm or eating disorders to professional mental health support instead of allowing harmful spirals to unfold.

eSafety argues that many children unintentionally encounter disturbing material at very young ages, often through search results that act as gateways rather than deliberate choices.

The guidance emphasises that adults will still be able to access unblurred material by clicking through, and there is no requirement for Australians to log in or identify themselves before searching.

eSafety maintains that the priority lies in shielding children from images and videos they cannot cognitively process or forget once they have seen them.

These codes will operate alongside existing standards that tackle unlawful content and will complement new minimum age requirements for social media, which are set to begin in mid-December.

Authorities in Australia consider the reforms essential for reducing preventable harm and guiding vulnerable users towards appropriate support services.
