ENISA opens public review of draft EUDI Wallet cybersecurity scheme

The European Union Agency for Cybersecurity has published a draft candidate scheme for the European Digital Identity Wallet and the electronic identity schemes under which it is provided. ENISA describes it as a draft version of the European Cybersecurity Certification Scheme for European Digital Identity Wallets.

ENISA states the draft addresses the certification of the cybersecurity of European Digital Identity Wallets and is being developed under Article 48(2) of Regulation (EU) 2019/881, the Cybersecurity Act.

As per ENISA, an ad hoc working group has been set up to prepare the candidate scheme. The agency says the public review is intended to validate the principles and general organisation of the proposed scheme and to gather feedback on the draft and its annexes.

ENISA also says the draft candidate scheme is accompanied by an early draft of a separate document, Wallet-Related Service Provider Security Requirements, version 0.5.614, which is provided as a reference and to gather early views on the approach used to define those requirements.

The public review will remain open until the end of April 2026. ENISA has also said it will organise a webinar on 8 April to provide information about the draft candidate scheme and answer questions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Transparency push for online advertising systems

Researchers from the University of California and the University of Iowa have warned that structural weaknesses in the digital advertising ecosystem continue to expose advertisers to hidden risks and fraud. The study highlights how complexity and limited transparency enable manipulation across the supply chain.

A key issue identified is ‘dark pooling’, in which lower-quality advertising inventory is bundled with premium placements, obscuring the inventory’s true value. This practice can mislead buyers and distort pricing across the market.

The authors argue that current safeguards fail to address these vulnerabilities effectively, as responsibilities are fragmented among multiple stakeholders. This lack of coordination allows systemic issues to persist.

To address the problem, the researchers propose a shared vulnerability notification framework involving advertisers, publishers and intermediaries. The study suggests such collaboration could strengthen accountability and improve trust in digital advertising markets in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CNN develops agent infrastructure for AI media trading

CNN is developing an internal agent infrastructure as part of a plan to begin AI-driven media trading by early 2027. The company aims to complete protocol scoping by the end of the second quarter before moving into testing phases later in the year.

Testing will focus on how properties are interpreted by large language models and how buyers allocate budgets to agent-based systems. Executives say the timeline may change as the technology and market conditions continue to evolve.

The initiative combines in-house development with external technology partners, while aligning with industry frameworks to ensure compatibility. CNN is also working with standards bodies so that agent communication produces accurate outcomes for buyers.

Agentic protocols enable systems to exchange information, negotiate pricing, and manage tasks autonomously between buyers and sellers. The company is prioritising consistent communication to support efficient and reliable transactions.
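To make the mechanics concrete, the sketch below shows one round of buyer-seller negotiation between two agents. It is a minimal illustration only: the message types, fields and pricing rule are hypothetical, not CNN’s protocol or any published agentic-trading standard.

```python
# Illustrative sketch only: message names, fields and the pricing rule are
# hypothetical, not CNN's actual protocol or an industry standard.
from dataclasses import dataclass

@dataclass
class BidRequest:          # buyer agent -> seller agent
    campaign_id: str
    placement: str         # e.g. a content property the seller offers
    impressions: int
    max_cpm: float         # buyer's ceiling, in dollars per 1,000 impressions

@dataclass
class BidResponse:         # seller agent -> buyer agent
    campaign_id: str
    offered_cpm: float
    accepted: bool

def seller_agent(request: BidRequest, floor_cpm: float) -> BidResponse:
    """Accept when the buyer's ceiling clears the seller's floor; counter at the floor otherwise."""
    if request.max_cpm >= floor_cpm:
        # Settle midway between floor and ceiling -- one possible negotiation rule.
        price = (floor_cpm + request.max_cpm) / 2
        return BidResponse(request.campaign_id, offered_cpm=price, accepted=True)
    return BidResponse(request.campaign_id, offered_cpm=floor_cpm, accepted=False)

# Example exchange: a buyer agent requests 500k impressions with a $12 CPM ceiling.
response = seller_agent(BidRequest("c-001", "homepage", 500_000, 12.0), floor_cpm=8.0)
print(response)  # BidResponse(campaign_id='c-001', offered_cpm=10.0, accepted=True)
```

In a real deployment the exchange would run over a network protocol and cover many more terms (targeting, flight dates, brand safety), but the core pattern of structured offers and responses between autonomous agents is the same.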

Early efforts are centred on learning and experimentation, even without immediate revenue generation. Initial use cases are expected to focus on performance-driven campaigns before expanding into broader advertising activities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DMCC Act 2024 brings UK ADR reporting rules into force

UK regulations under the Digital Markets, Competition and Consumers Act 2024 (DMCC Act 2024) are now in force, requiring accredited alternative dispute resolution (ADR) providers to report information to the ADR authority and to make it available to consumers on their websites.

Under the DMCC Act 2024 (Alternative Dispute Resolution) (Information) Regulations, an accredited ADR provider must submit an annual report to the ADR authority in writing on a durable medium.

An accredited ADR provider is a person or entity that either conducts alternative dispute resolution for a consumer contract dispute or arranges for it to occur. The information in the annual report must also be published for consumers on the provider’s website within one month of each anniversary of accreditation.

Accredited ADR providers must also notify the ADR authority of any changes to the information listed in Part 2 of the Schedule. Former accredited ADR providers are required to submit a Part 1 report within one month after their accreditation ends.

Exempt ADR providers must provide the information in Parts 1 and 2 of the Schedule to the ADR authority to the extent that the same information is also supplied to a regulator, and must do so within one month of providing it to that regulator.

Why does it matter?

The DMCC Act 2024 regulations add transparency to the UK ADR system. Accredited providers must now report information to the ADR authority and publish it for consumers, creating clearer oversight and making it easier to see how accredited schemes operate.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI presents policy proposals addressing AI’s economic and labour impacts

Policy proposals advanced by OpenAI outline a vision of economic restructuring in response to the growing influence of AI.

Framed within an emerging ‘intelligence age’, the approach reflects concerns that AI-driven productivity gains may concentrate wealth while undermining traditional labour-based economic models.

The proposals, therefore, attempt to reconcile market-led innovation with mechanisms aimed at broader distribution of economic benefits.

A central element involves shifting taxation away from labour towards capital, reflecting expectations that automation will reduce reliance on human work.

Instruments such as robot taxes and public wealth funds are presented as potential tools to redistribute gains generated by AI systems.

Such proposals by OpenAI indicate a policy direction where states may need to redefine fiscal structures to sustain social protection systems traditionally funded through employment-based taxation.

Labour market adaptation forms another key pillar, with suggestions including shorter working weeks, portable benefits, and increased corporate contributions to social welfare.

However, reliance on employer-linked mechanisms raises questions about coverage gaps, particularly for individuals displaced by automation. The proposals highlight ongoing tensions between corporate-led welfare models and the need for more comprehensive public safety nets.

Alongside economic measures, the framework addresses governance challenges linked to advanced AI systems, including systemic risks and misuse.

OpenAI’s proposals also call for oversight bodies, risk containment strategies, and infrastructure expansion, reflecting an effort to balance innovation with control.

Treating AI as a utility further signals a shift towards recognising digital infrastructure as a public good, though implementation will depend on political consensus and regulatory capacity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea advances energy transition strategy to strengthen resilience and green industry

South Korea has outlined an expansive energy transition strategy aimed at reshaping its national energy system around renewables, electrification and industrial transformation.

The plan responds directly to heightened geopolitical risks and supply vulnerabilities, signalling a shift from import-dependent energy security towards domestic resilience.

Central targets include exceeding a 20% renewable energy share and deploying 100GW of capacity by 2030, alongside accelerating the adoption of electric and hydrogen vehicles across both public and commercial fleets.

The strategy reflects structural change, combining large-scale renewable expansion with the phased retirement of the 60 currently operating coal-fired power plants by 2040 and the introduction of a ‘just transition’ framework to mitigate regional and labour impacts.

Industrial policy plays a central role, with support directed towards green manufacturing ecosystems, hydrogen-based steel production, carbon capture technologies and electrified industrial processes.

Rising electricity demand, driven in part by AI infrastructure and data centres, reinforces the need for grid modernisation, including decentralised and bidirectional systems designed to balance regional supply and demand more efficiently.

Governance mechanisms extend beyond infrastructure, incorporating market reforms, green finance instruments and subsidy reallocation away from fossil fuels.

Citizen participation is also embedded through ‘energy income’ models, enabling local investment in renewable projects.

South Korea positions the energy transition not only as a climate objective but as a broader economic and social restructuring agenda centred on resilience, competitiveness and public engagement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Power hardware shortages are delaying AI data centre expansion, despite record investment

US AI data-centre expansion is increasingly being constrained not by chips, servers or funding, but by the electrical hardware needed to connect new facilities to reliable power, Bloomberg reports. While the US–China trade war has pushed many server makers to move production out of China, the deeper dependency remains in power-delivery equipment.

China is still the world’s largest producer of electrical gear used to build and upgrade power infrastructure, both inside data centres and across the wider grid. Shortages of key components, especially transformers, switchgear and batteries, sourced from China and elsewhere, are now slowing project timelines.

The scale of planned build-outs is colliding with these supply limits. Bloomberg cites forecasts that Alphabet, Amazon, Meta and Microsoft will spend more than $650bn in 2026 to expand AI capacity, yet close to half of the planned US data-centre builds this year are expected to be delayed or cancelled.

The problem extends beyond the data-centre fence line. Companies must also fund and coordinate grid upgrades to supply enough electricity, competing for the same scarce equipment as utilities coping with rising demand from electric vehicles and electrified heating.

Sightline Climate data cited by Bloomberg suggests about 12GW of US data-centre capacity is expected to come online in 2026, but only around a third of that capacity is currently under active construction due to multiple constraints. Electrical infrastructure may represent less than 10% of total data-centre cost, but it is schedule-critical, because delays in any link of the power chain can halt an entire project.

Lead times for high-power transformers, in particular, have deteriorated sharply: typically 24 to 30 months before 2020, they now stretch to as long as five years, clashing with AI deployment cycles that can run under 18 months.

To cope, developers are turning to global suppliers, with Canada, Mexico and South Korea becoming major sources of high-power transformers. Even so, US imports of Chinese high-power transformers have surged from fewer than 1,500 units in 2022 to more than 8,000 units through October 2025, according to Wood Mackenzie data cited by Bloomberg. China also supplies over 40% of US battery imports and remains near 30% in some transformer and switchgear categories, underscoring continued reliance despite tariffs and security concerns.

Why does it matter?

Bloomberg’s central warning is that without easing bottlenecks in transformers, switchgear and batteries, and expanding US manufacturing capacity, trillions of dollars of AI investment may not translate into delivered AI capacity, because power infrastructure, not compute, is becoming the limiting factor.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft markets Copilot as a productivity boost but warns it is ‘for entertainment purposes only’

Microsoft has spent the past year pushing Copilot as a mainstream productivity tool, baking it into Windows 11 and promoting new hardware such as Copilot+ PCs, yet its own legal language urges caution. In Microsoft’s Copilot Terms of Use, updated in October last year, the company states Copilot is ‘for entertainment purposes only’, may ‘make mistakes’, and ‘may not work as intended’.

The terms warn users not to rely on Copilot for important advice and to ‘use Copilot at your own risk’, a caveat that sits uneasily alongside the product’s business-focused marketing.

The Tom’s Hardware article argues Microsoft is not unique in issuing such warnings. Similar disclaimers are common across the generative AI industry. It points to xAI’s guidance that AI is ‘probabilistic in nature’ and may produce ‘hallucinations’, generate offensive or objectionable content, or fail to reflect real people, places or facts.

While these limitations are well known to those familiar with large language models, the piece notes that many users still treat AI output as authoritative, even in professional settings where scepticism should be standard.

To underline the risks of overreliance, the text cites reports of Amazon-related incidents allegedly linked to ‘Gen-AI assisted changes’. It says some AWS outages were reportedly caused after engineers let an AI coding bot address an issue without sufficient oversight, and that Amazon’s website experienced ‘high blast radius’ problems that required senior engineers to step in. These examples are used to illustrate how AI-generated errors can propagate quickly in complex systems when humans fail to verify the output.

Why does it matter?

Overall, the article acknowledges that generative AI can boost productivity, but stresses it remains a tool with no accountability for mistakes, making verification essential. It warns that automation bias (people trusting machine outputs over contradictory evidence) can be intensified by AI systems that produce plausible-sounding answers that pass casual inspection.

While such disclaimers help companies limit legal liability, the piece suggests aggressive marketing of AI as a productivity ‘hack’ may downplay real-world risks, particularly as firms seek returns on the billions invested in AI hardware and talent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Will AI turn novel-writing into a collaborative process?

The article argues that a novel’s value cannot be judged solely by the quality of its prose, because many readers respond to other elements such as premise, ideas and character. It points to Amazon reviews of ‘Shy Girl’, which holds a four-out-of-five-star rating based on hundreds of reviews, with many praising its hook despite awareness of ‘the controversy’ around it. One reviewer writes, ‘The premise sucked me in.’

The broader point is that plenty of novels are poorly written yet still succeed, because fiction, like music, is forgiving: a song may have an irresistible beat even with a predictable melody, and a book can move readers through suspense, beauty, realism, fantasy, or a protagonist they recognise in themselves.

From that premise, the piece asks whether fiction’s ‘layers’ (premise, plot, style and voice) must all come from a single person. It notes that collaborative creation is already normal in many fields, even if audiences rarely state their expectations explicitly: readers tend to assume a Booker Prize-winning novel is written entirely by the named author, while journalism is understood to be shaped by both writers and editors, and television and film are widely accepted as products of writers’ rooms and heavy revision.

The article uses James Patterson as an example of industrial-scale collaboration in publishing, describing how he supplies collaborators with outlines and treatments and oversees many projects at once, an approach likened to a ‘novel factory’ that some argue distances him from ‘literary fiction’, yet may be the only practical way to sustain a decades-long series.

The author suggests AI will make such factories easier to create, citing a New York Times report on ‘Coral Hart’, a pseudonymous romance writer who uses AI to generate drafts in about 45 minutes, then revises them before self-publishing hundreds of books under dozens of names. Although not a bestseller, she reportedly earns ‘six figures’ and teaches others to do the same.

This points to a future in which authors act more like showrunners supervising AI-powered writers’ rooms, while raising a central risk: readers may not know who, or what, produced what they are reading, especially if AI use is not consistently disclosed despite platforms such as Amazon asking for it.

The piece ends by questioning whether AI necessarily implies high-volume, depersonalised production. Using a personal analogy from music-making, the author notes that technology can enable rapid output, but can also serve a more artistic purpose: helping a creator overcome technical limits and ‘realise a vision’.

Why does it matter?

The underlying argument is not that AI guarantees either shallow churn or genuine creativity, but that the most consequential issues may lie in intent, authorial expectations, and honest disclosure to readers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russian draft laws introduce licensing and limits on digital assets

The Russian government has approved a package of draft laws aimed at regulating digital currencies and digital rights, forming part of a broader effort to formalise and ‘de-shadow’ sections of the economy.

The legislation establishes a structured framework for crypto operations, including rules on trading, intermediaries, and market access.

Under the proposed system, transactions with digital currencies must be conducted through regulated intermediaries, while residents are permitted to buy crypto abroad and transfer funds under defined conditions.

Authorities also require taxpayers to report foreign crypto-related operations to the Federal Tax Service.

Access to digital assets will vary by investor type, with non-qualified investors limited to lower-risk assets and annual caps of up to 300,000 roubles per intermediary, subject to testing requirements. Qualified investors will face no such limits.
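The tiered access rule reduces to a simple conditional check. The sketch below encodes it as described above; the function and parameter names are illustrative, not drawn from the text of the draft laws.

```python
# Hypothetical sketch of the proposed access rule as reported; the function and
# parameter names are illustrative, not text from the Russian draft laws.
ANNUAL_CAP_ROUBLES = 300_000  # per intermediary, for non-qualified investors

def purchase_allowed(qualified: bool, passed_test: bool, lower_risk_asset: bool,
                     spent_with_intermediary: int, amount: int) -> bool:
    """Qualified investors face no limits; non-qualified investors must pass
    testing, stay within lower-risk assets, and keep annual spending per
    intermediary at or under the 300,000-rouble cap."""
    if qualified:
        return True
    if not passed_test or not lower_risk_asset:
        return False
    return spent_with_intermediary + amount <= ANNUAL_CAP_ROUBLES

# A non-qualified, tested investor who has spent 250,000 roubles with one
# intermediary this year could add at most 50,000 more there.
print(purchase_allowed(False, True, True, 250_000, 50_000))  # True
print(purchase_allowed(False, True, True, 250_000, 50_001))  # False
```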

The framework also introduces licensing requirements for exchanges, custodians, and other market participants.

The reforms further expand the regulation of digital financial assets and rights, allowing issuance and circulation in public blockchain networks. Administrative penalties will apply for violations, reinforcing compliance standards across the emerging digital asset sector.

The move signals a broader effort to bring Russia’s large and highly active digital asset market into a formal regulatory perimeter, potentially increasing state oversight, investor protection, and fiscal transparency in a strategically important sector.

At the same time, it reflects the ongoing challenge of balancing effective market regulation with the risk of overregulation that could limit innovation, reduce market participation, or push activity into less regulated channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!