Utah governor urges state control over AI rules

Utah’s governor, Spencer Cox, has again argued that states should retain authority over AI policy, warning that centralised national rules might fail to reflect local needs. He said state governments remain closer to communities and, therefore, better placed to respond quickly to emerging risks.

Cox explained that innovation often moves faster than federal intervention, and excessive national control could stifle responsible development. He also emphasised that different states face varied challenges, suggesting that tailored AI rules may be more effective in balancing safety and opportunity.

Debate across the US has intensified as lawmakers confront rapid advances in AI tools, with several states drafting their own frameworks. Cox suggested a cooperative model in which states lead and federal agencies play a supporting role without overriding regional safeguards.

Analysts say the governor’s comments highlight a growing split between national uniformity and local autonomy in technology governance. Supporters argue that adaptable state systems foster trust, while critics warn that a patchwork approach could complicate compliance for developers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

South Korea’s Hopae boosts EU presence with €5 million investment

Hopae is expanding into Europe with a €5 million investment as the region prepares for mandatory EUDI Wallet adoption. The company aims to help businesses navigate multiple electronic identity systems before new requirements take effect in 2026 and 2027.

The firm plans to offer an intermediary platform that unifies eIDs and wallet-based verification. It says the model can ease compliance for regulated sectors and Very Large Online Platforms, which will need to accept EUDI Wallets under the EU rules.

Hopae has already signed a partnership with Luxembourg’s INCERT, becoming the first officially registered intermediary service. It secured OIDC certification and opened a Luxembourg office, naming Bertrand Bouteloup to lead its European expansion and trust-service ambitions.

The company says its system already integrates more than 50 eIDs and wallets, with plans to reach 100 by mid-2026. CEO Ace Jaehoon Shim says demand for secure, wallet-based identity verification will require further investment across the continent.

Founded in 2022, Hopae previously developed the national vaccination pass in South Korea and has expanded into the United States. It is now contributing to the Korean Architecture Reference Framework while operating offices in Seoul, San Francisco, Paris, and Luxembourg.


Vietnam tops region in AI adoption and trust

Vietnam has emerged as Southeast Asia’s leader in AI readiness, with daily usage, upskilling rates and data-sharing willingness topping regional rankings. Survey data show 81 percent of users engage with AI tools each day, supported by widespread training and high trust levels.

Commercial activity reflects the shift, with AI-enhanced apps recording a 78 percent rise in revenue over the past year. Investors contributed 123 million dollars to local AI ventures, and most expect funding to grow further across software, services and deep-tech fields.

Vietnam’s digital economy is forecast to reach 39 billion dollars in 2025, fuelled by rapid expansion across e-commerce, online media, travel and digital finance. E-commerce continues to dominate, while gaming and online payments are accelerating notably.

Vietnamese government support for cashless payments and favourable travel measures further strengthens digital adoption. Analysts say that Vietnam’s combination of strong user trust, fast-growing platforms and rising investment positions the country as a regional technological powerhouse.


Huawei and ZTE expand 5G foothold in Vietnam amid US concern

Vietnam has moved to expand its use of Chinese 5G technology, awarding Huawei and ZTE a series of new contracts. Under recent deals, the two companies will supply advanced 5G radio equipment to strengthen network coverage, while European vendors remain responsible for core systems.

Vietnam, which borders China, Laos, and Cambodia, previously echoed allies’ warnings that Chinese-made 5G gear posed an unacceptable security risk. Recent tariff frictions with the United States and shifting economic priorities have since pushed officials to reconsider that stance.

According to local reports, Huawei and ZTE have together secured contracts worth about 43 million dollars for non-core 5G equipment. Ericsson and Nokia are expected to continue supplying the 5G core, with Chinese vendors focused on antennas and related infrastructure at the network edge.

In April, a consortium including Huawei won a 23 million dollar deal to provide 5G gear, shortly after new US tariffs on Vietnamese exports came into force. Analysts say those measures have strained ties between Hanoi and Washington while nudging Vietnam to deepen economic and technological links with Beijing.

Vietnamese supply chain specialist Nguyen Hung says Hanoi is prioritising its own strategic interests, seeing closer ties with Chinese vendors as a route to deeper regional integration. US officials warn the deals could damage network trust and limit access to advanced American technology.


EU moves forward on new online child protection rules

EU member states reached a common position on a regulation intended to reduce online child sexual abuse.

The proposal introduces obligations for digital service providers to prevent the spread of harmful content and to respond when national authorities require the removal, blocking or delisting of material.

The framework requires providers to assess how their services could be misused and to adopt measures that lower the risk.

Authorities will classify services into three categories based on objective criteria, allowing targeted obligations for higher-risk environments. Victims will be able to request assistance when seeking the removal or disabling of material that concerns them.

The regulation establishes an EU Centre on Child Sexual Abuse, which will support national authorities, process reports from companies and maintain a database of indicators. The Centre will also work with Europol to ensure that relevant information reaches law enforcement bodies in member states.

The Council position makes permanent the voluntary activities already carried out by companies, including scanning and reporting, which were previously supported by a temporary exemption.

Formal negotiations with the European Parliament can now begin with the aim of adopting the final regulation.


EU faces new battles over digital rights

EU policy debates intensified after Denmark abandoned plans for mandatory mass scanning in the draft Child Sexual Abuse Regulation. Advocates welcomed the shift yet warned that new age checks and potential app bans still threaten privacy.

France and the UK advanced consultations on good practice guidelines for cyber intrusion firms, seeking more explicit rules for industry responsibility. Civil society groups also marked two years of the Digital Services Act by reflecting on enforcement experience and future challenges.

Campaigners highlighted rising concerns about tech-facilitated gender violence during the 16 Days initiative. The Centre for Democracy and Technology launched fresh resources stressing encryption protection, effective remedies and more decisive action against gendered misinformation.

CDT Europe also criticised the Commission’s digital omnibus package for weakening safeguards under existing laws, including the AI Act. The group urged firm enforcement of current frameworks while exploring better redress options for AI-related harms in EU legislation.


Australia moves to curb nudify tools after eSafety action

A major provider of three widely used nudify services has cut off Australian access after enforcement action from eSafety.

The company received an official warning in September for allowing its tools to be used to produce AI-generated material that harmed children.

The withdrawal follows concerns about incidents involving school students and repeated reminders that online services must meet Australia’s mandatory safety standards.

eSafety stated that Australia’s codes and standards are encouraging companies to adopt stronger safeguards.

The Commissioner noted that preventing the misuse of consumer tools remains central to reducing the risk of harm and that more precise boundaries can lower the likelihood of abuse affecting young people.

Attention has also turned to underlying models and the hosting platforms that distribute them.

Hugging Face has updated its terms to require users to take steps to mitigate the risks associated with uploaded models, including preventing misuse for generating harmful content. The company is required to act when reports or internal checks reveal breaches of its policies.

eSafety indicated that failure to comply with industry codes or standards can lead to enforcement measures, including significant financial penalties.

The agency is working with the government on further reforms intended to restrict access to nudify tools and strengthen protections across the technology stack.


As AI agents proliferate, human purpose is being reconsidered

As AI agents rapidly evolve from tools to autonomous actors, experts are raising existential questions about human value and purpose.

These agents, equipped with advanced reasoning and decision-making capabilities, can now complete entire workflows with minimal human intervention.

The report notes that in corporate settings, AI agents are already being positioned to handle tasks such as client negotiations, quote generation, project coordination, or even strategic decision support. Some proponents foresee these agents climbing organisational charts, potentially serving as virtual CFOs or CEOs.

At the same time, sceptics warn that such a shift could hollow out traditional human roles. Research from the McKinsey Global Institute suggests that while many human skills remain relevant, the nature and context of work will change significantly, with humans increasingly collaborating with AI rather than performing traditional tasks directly.

The questions this raises extend beyond economics and efficiency: they touch on identity, dignity, and social purpose. If AI can handle optimisation and execution, what remains uniquely human, and how will societies value those capacities?

Some analysts suggest we shift from valuing output to valuing emotional leadership, creativity, ethical judgement and human connection.

The rise of AI agents thus invites a critical rethink of labour, value, and our roles in an AI-augmented world. As debates continue, it may become ever more crucial to define what we expect from people, beyond productivity.


Staffordshire Police trials AI agents on its 101 line

Staffordshire Police will trial AI-powered ‘agents’ on its 101 non-emergency service early next year, according to a recent BBC report.

The technology, known as Agentforce, is designed to resolve simple information requests without human intervention, allowing call handlers to focus on more complex or urgent cases. The force said the system aims to improve contact centre performance after past criticism over long wait times.

Senior officers explained that the AI agent will support queries where callers are seeking information rather than reporting crimes. If keywords indicating risk or vulnerability are detected, the system will automatically route the call to a human operator.

Thames Valley Police is already using the technology and has given ‘very positive reports’, according to acting Chief Constable Becky Riggs.

The force’s current average wait for 101 calls is 3.3 minutes, a marked improvement on the previous 7.1-minute average. Abandonment rates have also fallen from 29.2% to 18.7%. However, Commissioner Ben Adams noted that around 8% of callers still wait over an hour.

Officers say they have been calling back those affected, both to apologise and to gather ‘significant intelligence’ that has strengthened public confidence in the system.
