OpenAI outlines advertising plans for ChatGPT access

The US AI firm, OpenAI, has announced plans to test advertising within ChatGPT as part of a broader effort to widen access to advanced AI tools.

The initiative focuses on supporting the free version and the low-cost ChatGPT Go subscription, while paid tiers such as Plus, Pro, Business, and Enterprise will continue without advertisements.

According to the company, advertisements will remain clearly separated from ChatGPT responses and will never influence the answers users receive.

Responses will continue to be optimised for usefulness instead of commercial outcomes, with OpenAI emphasising that trust and perceived neutrality remain central to the product’s value.

User privacy forms a core pillar of the approach. Conversations will stay private, data will not be sold to advertisers, and users will retain the ability to disable ad personalisation or remove advertising-related data at any time.

During early trials, ads will not appear for accounts linked to users under 18, nor within sensitive or regulated areas such as health, mental wellbeing, or politics.

OpenAI describes advertising as a complementary revenue stream rather than a replacement for subscriptions.

The company argues that a diversified model can help keep advanced intelligence accessible to a wider population, while maintaining long-term incentives aligned with user trust and product quality.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Steam rules redefine when AI use must be disclosed

Steam has clarified its position on AI in video games by updating the disclosure rules developers must follow when publishing titles on the platform.

The revision arrives after months of industry debate over whether generative AI usage should be publicly declared, particularly as storefronts face growing pressure to balance transparency with practical development realities.

Under the updated policy, disclosure requirements apply exclusively to AI-generated material consumed by players.

Artwork, audio, localisation, narrative elements, marketing assets and content visible on a game’s Steam page fall within scope, while AI tools used purely during development remain outside the policy’s scope.

Developers using code assistants, concept ideation tools or AI-enabled software features without integrating outputs into the final player experience no longer need to declare such usage.

Valve’s clarification signals a more nuanced stance than earlier guidance introduced in 2024, which drew criticism for failing to reflect how AI tools are used in modern workflows.

By formally separating player-facing content from internal efficiency tools, Steam acknowledges common industry practices without expanding disclosure obligations unnecessarily.

The update offers reassurance to developers concerned about stigma surrounding AI labels while preserving transparency for consumers.

Although enforcement may remain largely procedural, the written clarification establishes clearer expectations and reduces uncertainty as generative technologies continue to shape game production.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kazakhstan adopts AI robotics for orthopaedic surgery

Kazakhstan has introduced an AI-enabled robotic system in Astana to improve the accuracy and efficiency of orthopaedic surgeries. The technology supports more precise surgical planning and execution.

The system was presented during an event highlighting growing cooperation between Kazakhstan and India in medical technologies. Officials from both countries emphasised knowledge exchange and joint progress in advanced healthcare solutions.

Health authorities say robotic assistance could help narrow the gap between performed joint replacements and unmet patient demand. Standardised procedures and improved precision are expected to raise treatment quality nationwide.

The initiative builds on recent medical advances, including Kazakhstan’s first robot-assisted heart surgery in Astana. Authorities view such technologies as part of broader efforts to modernise healthcare funding and expand access to high-tech treatment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How autonomous vehicles shape physical AI trust

Physical AI is increasingly embedded in public and domestic environments, from self-driving vehicles to delivery robots and household automation. As intelligent machines begin to operate alongside people in shared spaces, trust, rather than technological novelty alone, emerges as the central condition for adoption.

Autonomous vehicles provide the clearest illustration of how trust must be earned through openness, accountability, and continuous engagement.

Self-driving systems address long-standing challenges such as road safety, congestion, and unequal access to mobility by relying on constant perception, rule-based behaviour, and fatigue-free operation.

Trials and early deployments suggest meaningful improvements in safety and efficiency, yet public confidence remains uneven. Social acceptance depends not only on performance outcomes but also on whether communities understand how systems behave and why specific decisions occur.

Dialogue plays a critical role at two levels. Ongoing communication among policymakers, developers, emergency services, and civil society helps align technical deployment with social priorities such as safety, accessibility, and environmental impact.

At the same time, advances in explainable AI allow machines to communicate intent and reasoning directly to users, replacing opacity with interpretability and predictability.

The experience of autonomous vehicles suggests a broader framework for physical AI governance centred on demonstrable public value, transparent performance data, and systems capable of explaining behaviour in human terms.

As physical AI expands into infrastructure, healthcare, and domestic care, trust will depend on sustained dialogue and responsible design rather than the speed of deployment alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Law schools urged to embed practical AI training in legal education

With AI tools now widely available to legal professionals, educators and practitioners argue that law schools should integrate practical AI instruction into curricula rather than leave students to learn informally.

The article describes a semester-long experiment in an Entrepreneurship Clinic where students were trained on legal AI tools from platforms such as Bloomberg Law, Lexis and Westlaw, with exercises designed to show both advantages and limitations of these systems.

In structured exercises, students used different AI products to carry out tasks like drafting, research and client communication, revealing that tools vary widely in capabilities and reinforcing the importance of independent legal judgement.

Educators emphasise that AI should be taught as a complement to legal reasoning, not a substitute, and that understanding how and when to verify AI outputs is essential for responsible practice.

The article concludes that clarifying the distinction between AI as a tool and as a crutch will help prepare future lawyers to use technology ethically and competently in both transactional work and litigation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft urges systems approach to AI skills in Europe

AI is increasingly reshaping European workplaces, though large-scale job losses have not yet materialised. Studies by labour bodies show that tasks change faster than roles disappear.

Policymakers and employers face pressure to expand AI skills while addressing unequal access to them. Researchers warn that the benefits and risks concentrate among already skilled workers and larger organisations.

Education systems across Europe are beginning to integrate AI literacy, including teacher training and classroom tools. Progress remains uneven between countries and regions.

Microsoft experts say workforce readiness will depend on evidence-based policy and sustained funding. Skills programmes alone may not offset broader economic and social disruption from AI adoption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsara turns operational data into real-world impact

Samsara has built a platform that helps companies with physical operations run more safely and efficiently. Founded in 2015 by MIT alumni John Bicket and Sanjit Biswas, the company connects workers, vehicles, and equipment through cloud-based analytics.

The platform combines sensors, AI cameras, GPS tracking, and real-time alerts to cut accidents, fuel use, and maintenance costs. Large companies across logistics, construction, manufacturing, and energy report cost savings and improved safety after adopting the system.

Samsara turns large volumes of operational data into actionable insights for frontline workers and managers. Tools like driver coaching, predictive maintenance, and route optimisation reduce risk at scale while recognising high-performing field workers.

The company is expanding its use of AI to manage weather risk, support sustainability, and enable the adoption of electric fleets. It positions data-driven decision-making as central to modernising critical infrastructure worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indian companies remain committed to AI spending

Almost all Indian companies plan to sustain AI spending even without near-term financial returns. A BCG survey shows 97 percent will keep investing, higher than the 94 percent global rate.

Corporate AI budgets in India are expected to rise to about 1.7 percent of revenue in 2026. Leaders see AI as a long-term strategic priority rather than a short-term cost.

Around 88 percent of Indian executives express confidence in AI generating positive business outcomes. That is above the global average of 82 percent, reflecting strong optimism among local decision-makers.

Despite enthusiasm, fewer Indian CEOs personally lead AI strategy than their global peers, and workforce AI skills lag international benchmarks. Analysts say talent and leadership alignment remain key as spending grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smarter interconnects become essential for AI processors

AI workloads are placing unprecedented strain on system-on-chip interconnects. Designers face complexity that exceeds the limits of traditional manual engineering approaches.

Semiconductor engineers are increasingly turning to automated network-on-chip design. Algorithms now generate interconnect topologies optimised for bandwidth, latency, power and area.

Physically aware automation reduces wirelengths, congestion and timing failures. Industry specialists report dramatically shorter design cycles and more predictable performance outcomes.
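To give a flavour of what such multi-objective topology generation involves, here is a minimal, purely illustrative Python sketch. It is not any vendor's tooling, and the cost weights and mesh metrics are simplified assumptions: it enumerates candidate 2D-mesh layouts for a fixed number of endpoints, scores each with rough proxies for latency (average hop count), power and area (link count) and bandwidth (bisection width), and keeps the cheapest.

```python
# Hypothetical sketch of automated NoC topology selection.
# All formulas are rough analytical proxies and the weights are illustrative only.

from itertools import product

N_ENDPOINTS = 16  # hypothetical number of cores/IP blocks to connect


def mesh_metrics(rows: int, cols: int):
    """Rough proxies for a rows x cols 2D mesh."""
    avg_hops = (rows + cols) / 3                     # approximate average hop count
    links = rows * (cols - 1) + cols * (rows - 1)    # number of mesh links (area/power proxy)
    bisection = min(rows, cols)                      # links crossing the narrowest balanced cut
    return avg_hops, links, bisection


def cost(rows: int, cols: int) -> float:
    avg_hops, links, bisection = mesh_metrics(rows, cols)
    # Weighted sum: lower latency and fewer links are better, higher bisection bandwidth is better.
    return 1.0 * avg_hops + 0.2 * links - 0.5 * bisection


# Enumerate all rows x cols factorisations of the endpoint count and pick the cheapest.
candidates = [(r, c) for r, c in product(range(1, N_ENDPOINTS + 1), repeat=2)
              if r * c == N_ENDPOINTS]
rows, cols = min(candidates, key=lambda rc: cost(*rc))
print(f"best mesh for {N_ENDPOINTS} endpoints: {rows}x{cols}, cost={cost(rows, cols):.2f}")
```

For 16 endpoints this toy search favours a 4x4 mesh over elongated 2x8 or 1x16 layouts, which mirrors the intuition that balanced topologies trade link count against hop count; production tools add physical placement, congestion and timing models on top of this kind of search.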

As AI spreads from data centres to edge devices, interconnect automation is becoming essential. The shift enables smaller teams to deliver powerful, energy-efficient processors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sadiq Khan voices strong concerns over AI job impact

London Mayor Sir Sadiq Khan has warned that AI could become a ‘weapon of mass destruction of jobs’ if its impact is not managed correctly. He said urgent action is needed to prevent large-scale unemployment.

Speaking at Mansion House in the UK capital, Khan said London is particularly exposed due to the concentration of finance, professional services, and creative industries. He described the potential impact on jobs as ‘colossal’.

Khan said AI could improve public services and help tackle challenges such as cancer care and climate change. At the same time, he warned that reckless use could increase inequality and concentrate wealth and power.

Polling by City Hall suggests more than half of London workers expect AI to affect their jobs within a year. Sadiq Khan said entry-level roles may disappear fastest, limiting opportunities for young people.

The mayor announced a new task force to assess how Londoners can be supported through the transition. His office will also commission free AI training for residents.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!