New Steam rules redefine when AI use must be disclosed

Steam has clarified its position on AI in video games by updating the disclosure rules developers must follow when publishing titles on the platform.

The revision arrives after months of industry debate over whether generative AI usage should be publicly declared, particularly as storefronts face growing pressure to balance transparency with practical development realities.

Under the updated policy, disclosure requirements apply exclusively to AI-generated material consumed by players.

Artwork, audio, localisation, narrative elements, marketing assets and content visible on a game’s Steam page fall within scope, while AI tools used purely during development remain outside the policy’s remit.

Developers using code assistants, concept ideation tools or AI-enabled software features without integrating outputs into the final player experience no longer need to declare such usage.

Valve’s clarification signals a more nuanced stance than earlier guidance introduced in 2024, which drew criticism for failing to reflect how AI tools are used in modern workflows.

By formally separating player-facing content from internal efficiency tools, Steam acknowledges common industry practices without expanding disclosure obligations unnecessarily.

The update offers reassurance to developers concerned about stigma surrounding AI labels while preserving transparency for consumers.

Although enforcement may remain largely procedural, the written clarification establishes clearer expectations and reduces uncertainty as generative technologies continue to shape game production.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ETSI standard defines cybersecurity rules for AI systems

ETSI has released ETSI EN 304 223, a new European Standard establishing baseline cybersecurity requirements for AI systems.

Approved by national standards bodies, the framework becomes the first globally applicable EN focused specifically on securing AI, extending its relevance beyond European markets.

The standard recognises that AI introduces security risks not found in traditional software. Threats such as data poisoning, indirect prompt injection and vulnerabilities linked to complex data management demand tailored defences instead of conventional approaches alone.

ETSI EN 304 223 combines established cybersecurity practices with targeted measures designed for the distinctive characteristics of AI models and systems.

Adopting a full lifecycle perspective, the ETSI framework defines thirteen principles across secure design, development, deployment, maintenance and end of life.

Alignment with internationally recognised AI lifecycle models supports interoperability and consistent implementation across existing regulatory and technical ecosystems.

ETSI EN 304 223 is intended for organisations across the AI supply chain, including vendors, integrators and operators, and covers systems based on deep neural networks, including generative AI.

Further guidance is expected through ETSI TR 104 159, which will focus on generative AI risks such as deepfakes, misinformation, confidentiality concerns and intellectual property protection.

How autonomous vehicles shape physical AI trust

Physical AI is increasingly embedded in public and domestic environments, from self-driving vehicles to delivery robots and household automation. As intelligent machines begin to operate alongside people in shared spaces, trust, rather than technological novelty alone, emerges as the central condition for adoption.

Autonomous vehicles provide the clearest illustration of how trust must be earned through openness, accountability, and continuous engagement.

Self-driving systems address long-standing challenges such as road safety, congestion, and unequal access to mobility by relying on constant perception, rule-based behaviour, and fatigue-free operation.

Trials and early deployments suggest meaningful improvements in safety and efficiency, yet public confidence remains uneven. Social acceptance depends not only on performance outcomes but also on whether communities understand how systems behave and why specific decisions occur.

Dialogue plays a critical role at two levels. Ongoing communication among policymakers, developers, emergency services, and civil society helps align technical deployment with social priorities such as safety, accessibility, and environmental impact.

At the same time, advances in explainable AI allow machines to communicate intent and reasoning directly to users, replacing opacity with interpretability and predictability.

The experience of autonomous vehicles suggests a broader framework for physical AI governance centred on demonstrable public value, transparent performance data, and systems capable of explaining behaviour in human terms.

As physical AI expands into infrastructure, healthcare, and domestic care, trust will depend on sustained dialogue and responsible design rather than the speed of deployment alone.

Verizon responds to major network outage

Verizon has confirmed a large-scale network disruption affecting wireless voice, messaging, and mobile data services, which left many customer devices operating in SOS mode across several regions.

The company acknowledged service interruptions during Wednesday afternoon and evening, while emergency calling capabilities remained available.

Additionally, the telecom provider issued multiple statements apologising for the disruption and pledged to provide account credits to impacted customers. Engineering teams were deployed throughout the incident, with service gradually restored later in the day.

Verizon advised users still experiencing connectivity problems to restart their devices once normal operations resumed.

Despite repeated updates, the company has not disclosed the underlying cause of the outage. Independent outage-tracking platforms described the incident as a severe breakdown in cellular connectivity, with most reports citing complete signal loss and mobile phone failures.

Verizon stated that further updates would be shared following internal reviews, while rival mobile networks reported no comparable disruptions during the same period.

Microsoft urges systems approach to AI skills in Europe

AI is increasingly reshaping European workplaces, though large-scale job losses have not yet materialised. Studies by labour bodies show that tasks change faster than roles disappear.

Policymakers and employers face pressure to expand AI skills while addressing unequal access to them. Researchers warn that the benefits and risks concentrate among already skilled workers and larger organisations.

Education systems across Europe are beginning to integrate AI literacy, including teacher training and classroom tools. Progress remains uneven between countries and regions.

Microsoft experts say workforce readiness will depend on evidence-based policy and sustained funding. Skills programmes alone may not offset broader economic and social disruption from AI adoption.

Indian companies remain committed to AI spending

Almost all Indian companies plan to sustain AI spending even without near-term financial returns. A BCG survey shows 97 percent will keep investing, higher than the 94 percent global rate.

Corporate AI budgets in India are expected to rise to about 1.7 percent of revenue in 2026. Leaders see AI as a long-term strategic priority rather than a short-term cost.

Around 88 percent of Indian executives express confidence in AI generating positive business outcomes. That is above the global average of 82 percent, reflecting strong optimism among local decision-makers.

Despite enthusiasm, fewer Indian CEOs personally lead AI strategy than their global peers, and workforce AI skills lag international benchmarks. Analysts say talent and leadership alignment remain key as spending grows.

AI hoax targets Kate Garraway and family

Presenter Kate Garraway has condemned a cruel AI-generated hoax that falsely showed her with a new boyfriend. The images appeared online shortly after the death of her husband, Derek Draper.

Fake images circulated mainly on Facebook through impersonation accounts using her name and likeness. Members of the public and even friends mistakenly believed the relationship was real.

The situation escalated when fabricated news sites began publishing false stories involving her teenage son Billy. Garraway described the experience as deeply hurtful during an already raw period.

Her comments followed renewed scrutiny of AI image tools and platform responsibility. Recent restrictions aim to limit harmful and misleading content generated using artificial intelligence.

Cloudflare acquires Human Native to build a fair AI content licensing model

San Francisco-based company Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.

The move reflects growing pressure to establish clearer economic rules for how AI systems use online content.

The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.

Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.

Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed and purchased by AI developers through standardised channels.

The team includes researchers and engineers with experience across AI research, design platforms and financial media.

Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.

Ofcom probes AI companion chatbot over age checks

Ofcom has opened an investigation into Novi Ltd over age checks on its AI companion chatbot. The probe focuses on duties under the Online Safety Act.

Regulators will assess whether children can access pornographic content without effective age assurance. Sanctions could include substantial fines or business disruption measures under the UK’s Online Safety Act.

In a separate case, Ofcom confirmed enforcement pressure led Snapchat to overhaul its illegal content risk assessment. Revised findings now require stronger protections for UK users.

Ofcom said accurate risk assessments underpin online safety regulation. Platforms must match safeguards to real-world risks, particularly where AI and children are concerned.

Regulators press on with Grok investigations in Britain and Canada

Britain and Canada are continuing regulatory probes into xAI’s Grok chatbot, signalling that official scrutiny will persist despite the company’s announcement of new safeguards. Authorities say concerns remain over the system’s ability to generate explicit and non-consensual images.

xAI said it had updated Grok to block edits that place real people in revealing clothing and restricted image generation in jurisdictions where such content is illegal. The company did not specify which regions are affected by the new limits.

Reuters testing found Grok was still capable of producing sexualised images, including in Britain. Social media platform X and xAI did not respond to questions about how effective the changes have been.

UK regulator Ofcom said its investigation remains ongoing, despite welcoming xAI’s announcement. A privacy watchdog in Canada also confirmed it is expanding an existing probe into both X and xAI.

Pressure is growing internationally, with countries including France, India, and the Philippines raising concerns. British Technology Secretary Liz Kendall said the Online Safety Act gives the government tools to hold platforms accountable for harmful content.
