Transparency push for online advertising systems

Researchers from the University of California and the University of Iowa have warned that structural weaknesses in the digital advertising ecosystem continue to expose advertisers to hidden risks and fraud. The study highlights how complexity and limited transparency enable manipulation across the supply chain.

A key issue identified is ‘dark pooling’, in which lower-quality advertising inventory is bundled with premium placements, obscuring its true origin and value. The practice can mislead buyers and distort pricing across the market.

The authors argue that current safeguards fail to address these vulnerabilities effectively, as responsibilities are fragmented among multiple stakeholders. This lack of coordination allows systemic issues to persist.

To address the problem, the researchers propose a shared vulnerability notification framework involving advertisers, publishers and intermediaries. The study suggests such collaboration could strengthen accountability and improve trust in digital advertising markets in the US.

DMCC Act 2024 brings UK ADR reporting rules into force

UK regulations under the Digital Markets, Competition and Consumers Act 2024 (the DMCC Act 2024) are now in force, requiring accredited alternative dispute resolution (ADR) providers to report information to the ADR authority and to make it available to consumers on their websites.

Under the DMCC Act 2024 (Alternative Dispute Resolution) (Information) Regulations, an accredited ADR provider must submit an annual report to the ADR authority in writing on a durable medium.

An accredited ADR provider, defined as a person or entity that either conducts alternative dispute resolution for a consumer contract dispute or arranges for it to occur, must also publish the same information for consumers on its website within one month of each anniversary of accreditation.

Accredited ADR providers must also notify the ADR authority of any changes to the information listed in Part 2 of the Schedule. Former accredited ADR providers are required to submit a Part 1 report within one month after their accreditation ends.

Exempt ADR providers must provide the information in Parts 1 and 2 of the Schedule to the ADR authority to the extent that the same information is also supplied to a regulator, and must do so within one month of providing it to that regulator.

Why does it matter?

The DMCC Act 2024 regulations add transparency to the UK ADR system. Accredited providers must now report information to the ADR authority and publish it for consumers, creating clearer oversight and making it easier to see how accredited schemes operate.

Microsoft markets Copilot as a productivity boost but warns it is ‘for entertainment purposes only’

Microsoft has spent the past year pushing Copilot as a mainstream productivity tool, baking it into Windows 11 and promoting new hardware such as Copilot+ PCs, yet its own legal language urges caution. In Microsoft’s Copilot Terms of Use, updated in October last year, the company states Copilot is ‘for entertainment purposes only’, may ‘make mistakes’, and ‘may not work as intended’.

The terms warn users not to rely on Copilot for important advice and to ‘use Copilot at your own risk’, a caveat that sits uneasily alongside the product’s business-focused marketing.

The Tom’s Hardware article reporting on the terms argues that Microsoft is not unique in issuing such warnings; similar disclaimers are common across the generative AI industry. It points to xAI’s guidance that AI is ‘probabilistic in nature’ and may produce ‘hallucinations’, generate offensive or objectionable content, or fail to reflect real people, places or facts.

While these limitations are well known to those familiar with large language models, the piece notes that many users still treat AI output as authoritative, even in professional settings where scepticism should be standard.

To underline the risks of overreliance, the article cites reports of Amazon-related incidents allegedly linked to ‘Gen-AI assisted changes’. It says some AWS outages were reportedly caused after engineers let an AI coding bot address an issue without sufficient oversight, and that Amazon’s website experienced ‘high blast radius’ problems that required senior engineers to step in. These examples illustrate how AI-generated errors can propagate quickly in complex systems when humans fail to verify the output.

Why does it matter?

Overall, the article acknowledges that generative AI can boost productivity, but stresses that it remains a tool with no accountability for its mistakes, making verification essential. It warns that automation bias (the tendency to trust machine outputs over contradictory evidence) can be intensified by AI systems that produce plausible-sounding answers that pass casual inspection.

While such disclaimers help companies limit legal liability, the piece suggests aggressive marketing of AI as a productivity ‘hack’ may downplay real-world risks, particularly as firms seek returns on the billions invested in AI hardware and talent.

EU delegation in China calls for sustainable e-commerce and safety standards

Members of the European Parliament (MEPs) completed a visit to Beijing and Shanghai to address pressing e-commerce challenges affecting the European single market.

The delegation studied local business models and market supervision frameworks, engaging with Chinese regulators, e-commerce platforms, and representatives of EU companies.

The discussions highlighted the surge of parcels from China, which now account for 91% of small shipments to Europe, and the resulting pressures on fair competition.

MEPs stressed that regulatory compliance must be consistent across all operators, ensuring consumer protection is not compromised by disparities in market practices or enforcement gaps.

The delegation urged representatives of e-commerce platforms to implement preventive measures, reinforcing accountability in areas such as product safety, customs compliance, and the removal of unsafe goods from the market.

MEPs underscored that these standards are essential to maintaining a sustainable and secure e-commerce environment for European citizens.

The visit, the first in eight years, demonstrated the EU’s commitment to safeguarding consumer rights, strengthening international cooperation, and ensuring digital commerce evolves in a manner that is fair, transparent, and safe for all citizens.

Call to scrap cookie banners gains traction

A new study argues that cookie consent banners should be scrapped, claiming they fail to protect user privacy and instead create frustration. The research highlights how repeated pop-ups have become a defining feature of the modern internet.

The paper suggests that cookie banners, originally introduced under data protection laws, have led to ‘performative compliance’ rather than meaningful consent. Users often click through notices without understanding them, weakening the purpose of privacy regulation.

Researchers say the system may even normalise data tracking by encouraging habitual acceptance. Instead of improving transparency, the approach risks obscuring how personal data is collected and used across digital platforms.

The study calls for regulators to move beyond banner-based consent towards more effective privacy protections. It argues that current rules may hinder the development of better solutions by giving the impression that the problem has already been addressed.

Experts warn YouTube AI slop harms children and demand action

Fairplay and more than 200 experts have urged YouTube to address the spread of ‘AI slop’ targeting children. The letter, accompanied by a petition, was sent to Alphabet CEO Sundar Pichai and YouTube CEO Neal Mohan.

The signatories state that AI-generated videos harm children’s development by distorting reality and overwhelming learning processes. They also warn that such content captures attention and is being recommended to young users, including infants and toddlers.

The letter cites findings that 40% of videos recommended after shows such as Cocomelon contained AI-generated content, that 21% of Shorts recommendations included similar material, and that misleading science videos were shown to older children.

Fairplay and its partners propose measures, including labelling AI content and banning it from YouTube Kids. They also call for restrictions on recommendations to under-18s and for tools that allow parents to turn off such content.

The initiative was organised by Fairplay and supported by advocacy organisations and experts, including the social psychologist Jonathan Haidt. The group says platforms must ensure content is safe and appropriate for children.

France moves toward social media restrictions for children under 15

Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.

The proposal forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.

The draft law in France extends beyond access restrictions, proposing a digital curfew for older teenagers and expanding existing school phone bans to include high schools.

These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.

Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.

The debate is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.

The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.

Dutch court bans harmful Grok AI-generated images

A judge in Amsterdam has ordered AI chatbot Grok and platform X to stop generating and distributing explicit deepfake images. The ruling targets so-called ‘undressing’ content and illegal material involving minors.

The case was brought by Offlimits, the Dutch centre of expertise on online abuse, which argued that existing safeguards were failing. The court found sufficient evidence that harmful images could still be created despite those restrictions.

The court imposed a penalty of €100,000 per day for violations, with a maximum of €10 million. Access to Grok on X must also be suspended if the system does not comply with the order.

The decision highlights growing legal pressure on AI platforms to control the misuse of generative tools. Regulators and courts are increasingly demanding stronger protections against online abuse and illegal content.

California challenges federal approach with new AI rules

The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.

An executive order signed by Governor Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and protecting fundamental rights.

The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.

It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.

Federal guidance has discouraged state-level intervention, framing such efforts as obstacles to technological leadership.

The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.

California’s initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on how to balance innovation with accountability in digital governance.

Healthcare data breach raises concerns over cloud security

A cybersecurity incident involving CareCloud has exposed vulnerabilities in the protection of sensitive medical information, following unauthorised access to patient records stored within its systems.

The breach was detected on 16 March, after attackers had access to electronic health records for several hours, raising concerns about potential data exposure.

The company has stated that the intrusion was contained on the same day, with systems restored and an external investigation launched.

However, uncertainty remains about whether any data were extracted and the scale of the potential impact, particularly given the company’s role in supporting tens of thousands of healthcare providers and millions of patients.

The incident reflects broader structural risks within digital healthcare infrastructures, where centralised storage of highly sensitive data increases the potential impact of cyberattacks.

Cloud environments, including services provided by Amazon Web Services, are increasingly integral to such systems, amplifying both efficiency and exposure.

The breach follows a pattern of escalating cyber threats targeting healthcare data, driven by its high value in criminal markets.

As investigations continue, the case underscores the need for stronger data protection measures, enhanced monitoring systems and more robust regulatory oversight to safeguard patient information.
