OpenAI outlines advertising plans for ChatGPT access

The US AI firm, OpenAI, has announced plans to test advertising within ChatGPT as part of a broader effort to widen access to advanced AI tools.

An initiative that focuses on supporting the free version and the low-cost ChatGPT Go subscription, while paid tiers such as Plus, Pro, Business, and Enterprise will continue without advertisements.

According to the company, advertisements will remain clearly separated from ChatGPT responses and will never influence the answers users receive.

Responses will continue to be optimised for usefulness instead of commercial outcomes, with OpenAI emphasising that trust and perceived neutrality remain central to the product’s value.

User privacy forms a core pillar of the approach. Conversations will stay private, data will not be sold to advertisers, and users will retain the ability to disable ad personalisation or remove advertising-related data at any time.

During early trials, ads will not appear for accounts linked to users under 18, nor within sensitive or regulated areas such as health, mental wellbeing, or politics.

OpenAI describes advertising as a complementary revenue stream rather than a replacement for subscriptions.

The company argues that a diversified model can help keep advanced intelligence accessible to a wider population, while maintaining long term incentives aligned with user trust and product quality.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Steam rules redefine when AI use must be disclosed

Steam has clarified its position on AI in video games by updating the disclosure rules developers must follow when publishing titles on the platform.

The revision arrives after months of industry debate over whether generative AI usage should be publicly declared, particularly as storefronts face growing pressure to balance transparency with practical development realities.

Under the updated policy, disclosure requirements apply exclusively to AI-generated material consumed by players.

Artwork, audio, localisation, narrative elements, marketing assets and content visible on a game’s Steam page fall within scope, while AI tools used purely during development remain outside Valve’s interest.

Developers using code assistants, concept ideation tools or AI-enabled software features without integrating outputs into the final player experience no longer need to declare such usage.

Valve’s clarification signals a more nuanced stance than earlier guidance introduced in 2024, which drew criticism for failing to reflect how AI tools are used in modern workflows.

By formally separating player-facing content from internal efficiency tools, Steam acknowledges common industry practices without expanding disclosure obligations unnecessarily.

The update offers reassurance to developers concerned about stigma surrounding AI labels while preserving transparency for consumers.

Although enforcement may remain largely procedural, the written clarification establishes clearer expectations and reduces uncertainty as generative technologies continue to shape game production.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ETSI standard defines cybersecurity rules for AI systems

ETSI has released ETSI EN 304 223, a new European Standard establishing baseline cybersecurity requirements for AI systems.

Approved by national standards bodies, the framework becomes the first globally applicable EN focused specifically on securing AI, extending its relevance beyond European markets.

The standard recognises that AI introduces security risks not found in traditional software. Threats such as data poisoning, indirect prompt injection and vulnerabilities linked to complex data management demand tailored defences instead of conventional approaches alone.

ETSI EN 304 223 combines established cybersecurity practices with targeted measures designed for the distinctive characteristics of AI models and systems.

Adopting a full lifecycle perspective, the ETSI framework defines thirteen principles across secure design, development, deployment, maintenance and end of life.

Alignment with internationally recognised AI lifecycle models supports interoperability and consistent implementation across existing regulatory and technical ecosystems.

ETSI EN 304 223 is intended for organisations across the AI supply chain, including vendors, integrators and operators, and covers systems based on deep neural networks, including generative AI.

Further guidance is expected through ETSI TR 104 159, which will focus on generative AI risks such as deepfakes, misinformation, confidentiality concerns and intellectual property protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How autonomous vehicles shape physical AI trust

Physical AI is increasingly embedded in public and domestic environments, from self-driving vehicles to delivery robots and household automation. As intelligent machines begin to operate alongside people in shared spaces, trust emerges as a central condition for adoption instead of technological novelty alone.

Autonomous vehicles provide the clearest illustration of how trust must be earned through openness, accountability, and continuous engagement.

Self-driving systems address long-standing challenges such as road safety, congestion, and unequal access to mobility by relying on constant perception, rule-based behaviour, and fatigue-free operation.

Trials and early deployments suggest meaningful improvements in safety and efficiency, yet public confidence remains uneven. Social acceptance depends not only on performance outcomes but also on whether communities understand how systems behave and why specific decisions occur.

Dialogue plays a critical role at two levels. Ongoing communication among policymakers, developers, emergency services, and civil society helps align technical deployment with social priorities such as safety, accessibility, and environmental impact.

At the same time, advances in explainable AI allow machines to communicate intent and reasoning directly to users, replacing opacity with interpretability and predictability.

The experience of autonomous vehicles suggests a broader framework for physical AI governance centred on demonstrable public value, transparent performance data, and systems capable of explaining behaviour in human terms.

As physical AI expands into infrastructure, healthcare, and domestic care, trust will depend on sustained dialogue and responsible design rather than the speed of deployment alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Verizon responds to major network outage

A large-scale network disruption has been confirmed by Verizon, affecting wireless voice, messaging, and mobile data services and leaving many customer devices operating in SOS mode across several regions.

The company acknowledged service interruptions during Wednesday afternoon and evening, while emergency calling capabilities remained available.

Additionally, the telecom provider issued multiple statements apologising for the disruption and pledged to provide account credits to impacted customers. Engineering teams were deployed throughout the incident, with service gradually restored later in the day.

Verizon advised users still experiencing connectivity problems to restart their devices once normal operations resumed.

Despite repeated updates, the company has not disclosed the underlying cause of the outage. Independent outage-tracking platforms described the incident as a severe breakdown in cellular connectivity, with most reports citing complete signal loss and mobile phone failures.

Verizon stated that further updates would be shared following internal reviews, while rival mobile networks reported no comparable disruptions during the same period.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok faces perilous legal challenge over child safety concerns

British parents suing TikTok over the deaths of their children have called for greater accountability from the platform, as the case begins hearings in the United States. One of the claimants said social media companies must be held accountable for the content shown to young users.

Ellen Roome, whose son died in 2022, said the lawsuit is about understanding what children were exposed to online.

The legal filing claims the deaths were a foreseeable result of TikTok’s design choices, which allegedly prioritised engagement over safety. TikTok has said it prohibits content that encourages dangerous behaviour.

Roome is also campaigning for proposed legislation that would allow parents to access their children’s social media accounts after a death. She said the aim is to gain clarity and prevent similar tragedies.

TikTok said it removes most harmful content before it is reported and expressed sympathy for the families. The company is seeking to dismiss the case, arguing that the US court lacks jurisdiction.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Samsara turns operational data into real-world impact

Samsara has built a platform that helps companies with physical operations run more safely and efficiently. Founded in 2015 by MIT alumni John Bicket and Sanjit Biswas, the company connects workers, vehicles, and equipment through cloud-based analytics.

The platform combines sensors, AI cameras, GPS tracking, and real-time alerts to cut accidents, fuel use, and maintenance costs. Large companies across logistics, construction, manufacturing, and energy report cost savings and improved safety after adopting the system.

Samsara turns large volumes of operational data into actionable insights for frontline workers and managers. Tools like driver coaching, predictive maintenance, and route optimisation reduce risk at scale while recognising high-performing field workers.

The company is expanding its use of AI to manage weather risk, support sustainability, and enable the adoption of electric fleets. They position data-driven decision-making as central to modernising critical infrastructure worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft launches Elevate for Educators programme

Elevate for Educators, launched by Microsoft, is a global programme designed to help teachers build the skills and confidence to use AI tools in the classroom. The initiative provides free access to training, credentials, and professional learning resources.

The programme connects educators to peer networks, self-paced courses, and AI-powered simulations. The aim is to support responsible AI adoption while improving teaching quality and classroom outcomes.

New educator credentials have been developed in partnership with ISTE and ASCD. Schools and education systems can also gain recognition for supporting professional development and demonstrating impact in classrooms.

AI-powered education tools within Microsoft 365 have been expanded to support lesson planning and personalised instruction. New features help teachers adapt materials to different learning needs and provide students with faster feedback.

College students will also receive free access to Microsoft 365 Premium and LinkedIn Premium Career for 12 months. The offer includes AI tools, productivity apps, and career resources to support future employment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Sadiq Khan voices strong concerns over AI job impact

London Mayor Sir Sadiq Khan has warned that AI could become a ‘weapon of mass destruction of jobs‘ if its impact is not managed correctly. He said urgent action is needed to prevent large-scale unemployment.

Speaking at Mansion House in the UK capital, Khan said London is particularly exposed due to the concentration of finance, professional services, and creative industries. He described the potential impact on jobs as ‘colossal’.

Khan said AI could improve public services and help tackle challenges such as cancer care and climate change. At the same time, he warned that reckless use could increase inequality and concentrate wealth and power.

Polling by City Hall suggests more than half of London workers expect AI to affect their jobs within a year. Sadiq Khan said entry-level roles may disappear fastest, limiting opportunities for young people.

The mayor announced a new task force to assess how Londoners can be supported through the transition. His office will also commission free AI training for residents.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Technology is reshaping smoke alarm safety

Smoke alarms remain critical in preventing fatal house fires, according to fire safety officials. Real-life incidents show how early warnings can allow families to escape rapidly spreading blazes.

Modern fire risks are evolving, with lithium-ion batteries and e-bikes creating fast and unpredictable fires. These incidents can release toxic gases and escalate before flames are clearly visible.

Traditional smoke alarm technology continues to perform reliably despite changes in household risks. At the same time, intelligent and AI-based systems are being developed to detect danger sooner.

Reducing false alarms has become a priority, as nuisance alerts often lead people to turn off devices. Fire experts stress that a maintained, certified smoke alarm is far safer than no smoke alarm at all.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!