Altman urges urgent AI regulation

OpenAI chief Sam Altman has called for urgent global regulation of AI, speaking at the AI Impact Summit in New Delhi. Addressing leaders and executives, he said the rapid pace of development demands coordinated international oversight.

Altman suggested creating a body similar to the International Atomic Energy Agency to oversee advanced AI systems. He warned that highly capable open-source biomodels could pose serious biosecurity risks if misused.

Altman argued that democratising AI is essential to prevent power from being concentrated in a single company or country. He added that safeguards are urgently required, even as the technology continues to disrupt labour markets.

During the summit, Altman said ChatGPT has 100 million weekly users in India, with more than a third of them students. OpenAI also announced plans with Tata Consultancy Services to build data centre infrastructure in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Digital addiction in Italy sparks debate over social media bans

Italy has warned that digital addiction among teenagers is rising sharply, as health authorities link excessive social media and gaming use to family and educational challenges. Officials say bans alone will not resolve the issue.

According to Italy’s National Institute of Health, about 100,000 young people aged 15 to 18 are at risk of social media addiction. A further 500,000 are estimated to suffer from gaming disorder, recognised by the World Health Organisation as a medical condition.

A survey by digital ethics group Social Warning found that 77 percent of Italian teenagers consider themselves addicted to their devices. However, many say they lack the tools or support to change their behaviour.

Research by ‘Con i Bambini’, which funds projects tackling educational poverty in Italy, links digital dependency to isolation and strained parental relationships. The organisation says legislative measures can protect minors but cannot replace structured education and family support.

The debate extends across the EU. The European Parliament has called for a minimum age of 16 for social media platforms, while France, Italy, and Spain are considering national restrictions. Experts argue that prevention and digital literacy must complement regulation.

AI-generated harmful imagery sparks alarming warning from 60 regulators

Nearly 60 privacy and data protection authorities issued a joint statement warning about the risks of AI-generated harmful and non-consensual imagery. The initiative was coordinated through the Global Privacy Assembly (GPA) and its International Enforcement Cooperation Working Group (IEWG), reflecting growing cross-border cooperation.

Regulators expressed concern about AI systems that create realistic but fabricated images and videos of identifiable individuals without their knowledge or consent. They warned that such tools can lead to serious privacy violations and reputational harm.

Hong Kong’s Office of the Privacy Commissioner for Personal Data (PCPD), which co-chairs the IEWG, highlighted the global dimension of the issue. The Privacy Commissioner stressed that children are particularly vulnerable to abusive AI-generated content.

Authorities called on organisations developing and using AI systems to introduce strong safeguards against the misuse of personal data. They also urged transparency, effective mechanisms for content removal, and enhanced, age-appropriate protections for children and other vulnerable groups.

Reddit tests AI shopping search

Reddit has begun testing an AI-powered shopping search tool with a limited group of users in the US. Search queries for product ideas now generate interactive carousels featuring prices, images and direct links to retailers.

Items appearing in the results are drawn from recommendations shared in posts and comments across the platform. Listings are connected to Reddit’s advertising and shopping partners, bringing community discussions closer to online purchasing.

The expansion into AI-led commerce builds on the company’s earlier launch of Dynamic Product Ads, designed to deliver personalised suggestions. Closer integration of search and shopping signals a broader effort to strengthen digital revenue streams.

Chief executive Steve Huffman recently described AI search as a significant business opportunity beyond product development alone. Weekly search users increased from 60 million to 80 million over the past year, while engagement with the AI-powered Reddit Answers tool rose sharply throughout 2025.

These developments place Reddit alongside other technology platforms investing in AI-driven retail features. Growing user engagement suggests the company sees search as central to its future commercial strategy.

Chinese AI video tool unsettles Hollywood

A new AI video model developed by ByteDance has unsettled Hollywood after generating cinema-quality clips from brief text prompts. Seedance 2.0, launched in 2025, went viral for producing realistic action scenes featuring Western cinematic characters such as Spider-Man and Deadpool.

In response, major studios, including Disney and Paramount, issued cease-and-desist letters over alleged copyright infringement. Japan has also begun investigating ByteDance after AI-generated anime videos spread widely online.

Industry experts say Seedance 2.0 stands out for combining text, visuals and audio within a single system. Analysts in Singapore and Melbourne argue that Chinese AI models are now matching US competitors at the technological frontier.

As Seedance 2.0 gains traction, Beijing continues to prioritise AI and robotics in its economic strategy. The rise of tools from China has intensified debate in the US and beyond over copyright, regulation and the future of creative work.

Google’s Lyria 3 advances generative AI music with transparency and copyright safeguards

Google has introduced Lyria 3 inside its Gemini app, marking its expansion into AI-generated music. The model enables users to create 30-second tracks from text prompts, images, or short videos. It also supports Dream Track on YouTube Shorts, strengthening AI integration in creator tools.

The development reflects the growing convergence of multimodal AI systems. Gemini can already generate text, images, and video, and music is now added to this ecosystem. This positions Google within the broader race to embed generative AI across digital content infrastructures.

Lyria 3 lowers technical barriers to music production. Users can generate instrumentals and lyrics without prior composition skills, simply by describing a mood, genre, or memory. This aligns with wider efforts to democratise creative expression through AI tools.

The model also introduces technical improvements over earlier audio systems. It offers greater control over tempo, vocals, and style, while producing more realistic and musically complex outputs. However, tracks are currently limited to 30 seconds, suggesting a phased rollout approach.

Transparency measures are embedded through SynthID watermarking technology. All AI-generated tracks include an imperceptible identifier to signal synthetic origin. Such mechanisms respond to increasing policy discussions on labelling and traceability of AI-generated content.

Google also emphasises safeguards related to intellectual property. The system is designed for original expression rather than direct imitation of specific artists. Prompts referencing known artists are treated as stylistic inspiration, and outputs are filtered against existing works, with reporting mechanisms available for potential rights violations.

Brand turns AI demon into marketing stunt

Beverage company Liquid Death triggered confusion during the Winter Olympics after airing an AI advert featuring a figure skater who transforms into a red-eyed demon. The commercial appeared on Peacock’s Olympics stream but was not posted online, leaving viewers questioning whether it was real.

The brand later confirmed the advert was intentional and designed to parody fears around AI. According to Liquid Death, the limited run and lack of online acknowledgement were meant to amplify the sense of unease during the Winter Olympics broadcast.

Marketing analysts said that brands are increasingly leaning into AI scepticism to build trust with wary consumers. Campaigns from Equinox and Almond Breeze have similarly contrasted human authenticity with AI-generated content.

Despite the strategy, the Winter Olympics stunt drew criticism on social media, with some users labelling the advert ‘AI slop’. The reaction highlights both the risks and rewards for brands experimenting with AI-themed messaging.

US freedom.gov and the EU’s DSA in a transatlantic fight over online speech

The transatlantic debate over ‘digital sovereignty’ is also, to a significant degree, about whose rules govern online speech. In the EU, digital sovereignty has essentially meant building enforceable guardrails for platforms, especially around illegal content, systemic risks, and transparency, through instruments such as the Digital Services Act (DSA) and its transparency mechanisms for content moderation decisions. In Washington, the emphasis has been shifting toward ‘free speech diplomacy’, framing some EU online-safety measures as de facto censorship that spills across borders when US-based platforms comply with EU requirements.

What is ‘freedom.gov’?

The newest flashpoint is a reported US State Department plan to develop an online portal, widely described as ‘freedom.gov’, intended to help users in the EU and elsewhere access content blocked under local rules; the plan aligns with Trump administration policy and a State Department programme called Internet Freedom. It reportedly includes VPN-like functionality so that traffic would appear to originate in the US, effectively sidestepping geographic enforcement of content restrictions. Under a US legal framing, the idea could be seen as a digital-rights tool, but experts warn it would export a US free-speech standard into jurisdictions that regulate hate speech and extremist material more tightly.

The ‘freedom.gov’ portal story occurs within a broader escalation that has already moved from rhetoric to sanctions. In late 2025, the US imposed visa bans on several EU figures it accused of pressuring platforms to suppress ‘American viewpoints’, a move EU governments and officials condemned as unjustified and politically coercive. The episode made clear that Washington is treating some foreign content-governance actions not as domestic regulation, but as a challenge to US speech norms and US technology firms.

The EU legal perspective

From the EU perspective, this framing misses the point of the DSA. The Commission argues that the DSA is about platform accountability, requiring large platforms to assess and mitigate systemic risks, explain moderation decisions, and provide users with avenues to appeal. The EU has also built new transparency infrastructure, such as the DSA Transparency Database, to make moderation decisions more visible and auditable. Civil-society groups broadly supportive of the DSA stress that it targets illegal content and opaque algorithmic amplification; critics, especially in US policy circles, argue that compliance burdens fall disproportionately on major US platforms and can chill lawful speech through risk-averse moderation.

That’s where the two sides’ risk models diverge most sharply. The EU rules are shaped by the view that disinformation, hate speech, and extremist propaganda can create systemic harms that platforms must proactively reduce. US critics counter that ‘harm’ categories can expand into viewpoint policing, and that tools like a government-backed portal or VPN could be portrayed as restoring access to lawful expression. Yet the same reporting that casts the portal as a speech workaround also notes it may facilitate access to content the EU considers dangerous, raising questions about whether the initiative is rights-protective ‘diplomacy’, a geopolitical pressure tactic, or something closer to state-enabled circumvention.

Why does it matter?

The dispute has gone from theoretical to practical, reshaping digital alliances, compliance strategies, and even travel rights for policy actors, not to mention digital sovereignty in the governance of online discourse and data. The EU’s approach is to make platforms responsible for systemic online risks through enforceable transparency and risk-reduction duties, while the US approach is increasingly to contest those duties as censorship with extraterritorial effects, using instruments ranging from public messaging to visa restrictions, and, potentially, state-backed bypass tools.

What, then, could we expect if not a more fragmented internet, with platforms pulled between competing legal expectations, users encountering different speech environments by region, and governments treating content policy as an extension of foreign policy, complete with retaliation, countermeasures, and escalating mistrust?

MIT study finds AI chatbots underperform for vulnerable users

Research from the MIT Center for Constructive Communication (CCC) finds that leading AI chatbots often provide lower-quality responses to users with lower English proficiency, less education, or who are outside the US.

The models tested, GPT-4, Claude 3 Opus, and Llama 3, sometimes refused to answer or responded condescendingly. Using the TruthfulQA and SciQ datasets, researchers added user biographies to simulate differences in education, language, and country.

Accuracy fell sharply among non-native English speakers and less-educated users, with the most significant drop among those affected by both; users from countries like Iran also received lower-quality responses.

Refusal behaviour was notable. Claude 3 Opus declined 11% of questions for less-educated, non-native English speakers versus 3.6% for control users. Manual review showed 43.7% of refusals contained condescending language.

Some users were refused answers on specific topics even though the models answered the same questions correctly for others.

The study echoes human sociocognitive biases, in which non-native speakers are often perceived as less competent. Researchers warn AI personalisation could worsen inequities, providing marginalised users with subpar or misleading information when they need it most.

Gemini 3.1 Pro brings advanced logic to developers and consumers

Google has launched Gemini 3.1 Pro, an upgraded AI model for solving complex science, research, and engineering challenges. Following the Gemini 3 Deep Think release, the update adds enhanced core reasoning for consumer, developer, and enterprise applications.

Developers can access 3.1 Pro in preview via the Gemini API, Google AI Studio, Gemini CLI, Antigravity, and Android Studio, while enterprise users can use it through Vertex AI and Gemini Enterprise.

Consumers can now try the upgrade through the Gemini app and NotebookLM, with higher limits for Google AI Pro and Ultra plan users.

Benchmarks show significant improvements in logic and problem-solving. On the ARC-AGI-2 benchmark, 3.1 Pro scored 77.1%, more than doubling the reasoning performance of its predecessor.

The upgrade is intended to make AI reasoning more practical, offering tools to visualise complex topics, synthesise data, and enhance creative projects.

Feedback from Gemini 3 Pro users has driven the rapid development of 3.1 Pro. The preview release allows Google to validate improvements and continue refining advanced agentic workflows before the model becomes widely available.
