Japan and OpenAI team up for public sector AI innovation

Japan’s Digital Agency partners with OpenAI to integrate AI into public services, enhancing efficiency and innovation. Gennai, an OpenAI-powered tool, will enable government employees to explore innovative public sector applications, supporting Japan’s modern governance vision.

The collaboration supports Japan’s leadership in the Hiroshima AI Process, backed by the OECD and G7. The framework sets global AI guidelines, ensuring safety, security, and trust while promoting inclusive governance across governments, industry, academia, and civil society in Asia and beyond.

OpenAI is committed to meeting Japan’s rigorous standards and pursuing ISMAP certification to ensure secure and reliable AI use in government operations. The partnership strengthens trust and transparency in AI deployment, aligning with Japan’s national policies.

OpenAI plans to strengthen ties with Japanese authorities, educational institutions, and industry stakeholders. The collaboration seeks to integrate AI into society responsibly, prioritising safety, transparency, and global cooperation for sustainable benefits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI’s Sora app raises tension between mission and profit

The US AI company OpenAI has entered the social media arena with Sora, a new app offering AI-generated videos in a TikTok-style feed.

The launch has stirred debate among current and former researchers, with some praising its technical achievement while others worry that it diverges from OpenAI’s nonprofit mission to develop AI for the benefit of humanity.

Researchers have expressed concerns about deepfakes, addictive loops and the ethical risks of AI-driven feeds. OpenAI insists Sora is designed for creativity rather than engagement, highlighting safeguards such as reminders for excessive scrolling and prioritisation of content from known contacts.

The company argues that revenue from consumer apps helps fund advanced AI research, including its pursuit of artificial general intelligence.

The debate reflects broader tensions within OpenAI: balancing commercial growth with its founding mission. Critics fear the consumer push could dilute its focus, while executives maintain that products like ChatGPT and Sora expand public access and provide essential funding.

Regulators are watching closely, questioning whether the company’s for-profit shift undermines its stated commitment to safety and ethical development.

Sora’s future remains uncertain, but its debut marks a significant expansion of AI-powered social platforms. Whether OpenAI can avoid the pitfalls that defined earlier social media models will be a key test of both its mission and its technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How OpenAI designs Sora’s recommendation feed for creativity and safety

OpenAI outlines the core principles behind Sora’s content feed in its Sora Feed Philosophy document. The company states that the feed is designed to spark creativity, foster connections, and maintain a safe user environment.

To achieve these goals, OpenAI says it prioritises creativity over passive consumption. The ranking is steered not simply for engagement, but to encourage active participation. Users can also influence what they see via steerable ranking controls.

Another guiding principle is putting users in control. For instance, parental settings let caregivers turn off feed personalisation or continuous scroll for teen accounts.

OpenAI also emphasises connection. The feed is biased toward content from people you know or connect with, rather than purely global content, so the experience feels more communal.

In terms of safety and expression, OpenAI embeds guardrails at the content creation level. Because every post is generated within Sora, the system can block disallowed content before it appears.

The feed layers additional filtering, removing or deprioritising harmful or unsafe material (e.g. violent, sexual, hateful, or self-harm content). At the same time, the design aims not to over-censor, allowing space for genuine expression and experimentation.

On how the feed works, OpenAI says it considers signals like user activity (likes, comments, remixes), location data, ChatGPT history (unless turned off), engagement metrics, and author-level data (e.g. follower counts). Safety signals also weigh in to suppress or filter content flagged as inappropriate.
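
OpenAI has not published the actual ranking logic, but the signals listed above suggest a familiar weighted-scoring design. The following is a minimal sketch, with every signal name and weight invented for illustration; it shows the general shape of such a ranker, not Sora’s real implementation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    comments: int
    remixes: int
    author_followers: int          # author-level signal
    from_known_contact: bool       # feed is biased toward people you know
    flagged_unsafe: bool           # set upstream by moderation filters

def feed_score(post: Post, user_affinity: float) -> float:
    """Toy relevance score over the signal types OpenAI lists.

    All weights are invented. Active participation (remixes, comments)
    outweighs passive likes, mirroring the stated preference for
    creativity over passive consumption.
    """
    if post.flagged_unsafe:            # safety signals suppress content
        return float("-inf")
    score = (
        3.0 * post.remixes
        + 2.0 * post.comments
        + 1.0 * post.likes
        + 0.1 * post.author_followers ** 0.5  # dampened author signal
    )
    if post.from_known_contact:        # communal bias toward known contacts
        score *= 1.5
    return score * user_affinity       # personal signals: activity, history
```

A real system would learn such weights from data and expose some of them through the steerable ranking controls mentioned above; the sketch only makes the trade-offs between the listed signals concrete.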

OpenAI describes the feed as a ‘living, breathing’ system. It expects to update and refine algorithms based on user behaviour and feedback while staying aligned with its founding principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sora 2.0 release reignites debate on intellectual property in AI video

OpenAI has launched Sora 2.0, the latest version of its video generation model, alongside an iOS app available by invitation in the US and Canada. The tool offers advances in physical realism, audio-video synchronisation, and multi-shot storytelling, with built-in safeguards for security and identity control.

The app allows users to create, remix, or appear in clips generated from text or images. A Pro version, web interface, and developer API are expected soon, extending access to the model.

Sora 2.0 has reignited debate over intellectual property. According to The Wall Street Journal, OpenAI has informed studios and talent agencies that their universes could appear in generated clips unless they opt out.

The company defends its approach as an extension of fan creativity, while stressing that real people’s images and voices require prior consent, validated through a verified cameo system.

By combining new creative tools with identity safeguards, OpenAI aims to position Sora 2.0 as a leading platform in the fast-growing market for AI-generated video.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this fall, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.
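
Neither company has described the routing mechanism in technical detail. One plausible shape, sketched below with hypothetical model names and a deliberately simplistic placeholder classifier, is a pre-dispatch check that escalates flagged conversations to a more capable model.

```python
DEFAULT_MODEL = "general-model"       # hypothetical model identifiers
SAFETY_MODEL = "careful-model"        # more capable, safety-tuned

SENSITIVE_TOPICS = {"suicide", "self_harm", "eating_disorder"}

def classify_topics(message: str) -> set[str]:
    """Placeholder classifier. A production system would use a trained
    model here, not keyword matching."""
    keywords = {
        "suicide": ("suicide", "end my life"),
        "self_harm": ("hurt myself",),
        "eating_disorder": ("stop eating",),
    }
    lowered = message.lower()
    return {
        topic
        for topic, terms in keywords.items()
        if any(term in lowered for term in terms)
    }

def pick_model(message: str) -> str:
    """Escalate sensitive conversations to the more capable model."""
    if classify_topics(message) & SENSITIVE_TOPICS:
        return SAFETY_MODEL
    return DEFAULT_MODEL

print(pick_model("what's the weather?"))    # general-model
print(pick_model("I want to end my life"))  # careful-model
```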

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches Instant Checkout to enable in-chat purchases

OpenAI has launched Instant Checkout, a feature that lets users make direct purchases within ChatGPT. The initial rollout applies to US Etsy sellers, with Shopify merchants to follow.

The system is powered by the Agentic Commerce Protocol, which OpenAI co-developed with Stripe, and currently supports single-item purchases. Future updates will add multi-item carts and expand to more regions.

According to OpenAI, product results in ChatGPT are organic and ranked for relevance. The e-commerce framework will be open-sourced to accelerate integrations for merchants and developers. Users can pay using cards already on file, and transactions involve explicit confirmation steps, scoped payment tokens, and limited data sharing to build trust.
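
The Agentic Commerce Protocol’s actual message formats are not reproduced here. The sketch below only illustrates the flow described above, a single-item purchase gated by explicit user confirmation and paid with a scoped, single-use token; every name and structure is invented for illustration.

```python
import secrets
from dataclasses import dataclass

@dataclass
class ScopedToken:
    """Single-use token bound to one merchant and a price ceiling,
    so the assistant never handles raw card details."""
    value: str
    merchant_id: str
    max_amount_cents: int

def issue_scoped_token(merchant_id: str, amount_cents: int) -> ScopedToken:
    # Stand-in for the payment processor minting a scoped token
    return ScopedToken(secrets.token_urlsafe(16), merchant_id, amount_cents)

def instant_checkout(item: dict, user_confirmed: bool) -> str:
    """Single-item purchase; nothing is charged without explicit consent."""
    if not user_confirmed:                 # explicit confirmation step
        return "cancelled"
    token = issue_scoped_token(item["merchant_id"], item["price_cents"])
    # Only the token and minimal order details reach the merchant,
    # limiting data sharing as described above.
    order = {"item_id": item["id"], "payment_token": token.value}
    return f"order placed for {order['item_id']}"

item = {"id": "mug-42", "merchant_id": "etsy-shop-1", "price_cents": 1999}
print(instant_checkout(item, user_confirmed=True))
```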

Michelle Fradin, OpenAI’s product lead for ChatGPT commerce, said the goal is to move beyond information retrieval and support real-world actions. Stripe’s president for technology and business, Will Gaybrick, described the partnership as laying economic infrastructure for AI.

Merchants will pay a small fee on completed purchases, while users are not charged extra and product prices remain unchanged.

Reuters reported that Etsy and Shopify’s stocks rose significantly following the announcement, with Etsy closing up nearly 16 percent and Shopify more than 6 percent. OpenAI plans to extend the system to more merchants and payment types over time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI reports $4.3 billion revenue in first half of 2025

OpenAI posted approximately $4.3 billion in revenue in the first half of 2025, according to a report by The Information cited in Cyprus Mail. That figure is roughly 16 percent higher than the company’s reported revenue for the whole of 2024.

During the same period, OpenAI reportedly burned around $2.5 billion in cash, driven by heavy research and development investment and operational costs tied to ChatGPT. Total R&D spending for H1 2025 is reported to have reached $6.7 billion, and the company held about $17.5 billion in cash and securities at the period’s close.

OpenAI is targeting full-year revenue of $13 billion and aims to cap annual cash burn at $8.5 billion. Meanwhile, in August, the company was reportedly in early discussions about a potential stock sale to allow employee access to liquidity and possibly reach a valuation near $500 billion.
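
These figures imply a few numbers the report does not state outright. A quick back-of-the-envelope check, assuming the 16 percent comparison is against full-year 2024 revenue:

```python
h1_2025_revenue = 4.3e9                     # reported H1 2025 revenue
implied_2024 = h1_2025_revenue / 1.16       # if H1 2025 = 2024 * 1.16
print(f"implied full-year 2024 revenue: ${implied_2024 / 1e9:.1f}B")  # ~$3.7B

h1_burn = 2.5e9                             # reported H1 2025 cash burn
annual_cap = 8.5e9                          # stated full-year burn target
print(f"H1 burn as share of annual cap: {h1_burn / annual_cap:.0%}")  # ~29%
```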

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital on Day 6 of UNGA80: Global AI governance, technology equity, and closing the digital divide


Welcome to the latest daily report from the General Debate at the 80th session of the UN General Assembly (UNGA80). Our hybrid AI–human reports bring you a concise overview of how world leaders are framing the digital future.

Day 6 discussion centred on the transformative potential and urgent risks of AI. Speakers emphasised that while AI can boost development, health, education, and productivity, especially in least developed countries, it must be governed responsibly to prevent inequality, bias, and insecurity. Calls for a global AI framework were echoed in various statements, alongside broader appeals for inclusive digital cooperation, accelerated technology transfer, and investment in infrastructure, literacy, and talent development. Speakers warned that digital disruption is deepening geopolitical divides, with smaller and developing nations demanding a voice in shaping emerging governance regimes. Bridging the digital divide, advancing secure and rights-based technologies, and protecting against cybercrime were framed as essential.

To keep the highlights clear and accessible, we leave them in bullet points — capturing the key themes and voices as they emerge.


Artificial intelligence

Responsible AI governance

  • AI presents both unprecedented opportunities and profound challenges; if harnessed responsibly, it can accelerate development, improve health and education, and unlock economic growth. Without clear governance, AI risks deepening inequalities and undermining security. A global framework is called for to ensure AI is ethical, inclusive, and accessible to all nations, enabling it to serve as a force for development rather than division. (Malawi)
  • AI is a tool that must be harnessed for all humankind, equally and in a controlled manner, as the opportunities are vast, including for farmers, city planning, and disaster risk management. (President of the General Assembly)
  • The risks of AI are becoming more prevalent, and age-old biases are being perpetuated by algorithms, as seen in the targeting of women and girls by sexually related deepfakes. (President of the General Assembly)
  • Discussions on AI lend further prudence to the argument that ‘we are better together’, and few would be comfortable leaving the benefits or risks of this immense resource in the hands of a few. (President of the General Assembly)
  • International cooperation remains essential to establishing comprehensive regulations governing the use and development of AI. (Timor-Leste)

AI for development and growth

  • The transformative potential of science, technology, and AI should be harnessed for national and global development. Malawi is optimistic that AI will usher in a new era of enhanced productivity for its citizens, helping to propel the country’s development trajectory. (Malawi)
  • Advancing AI and digital capabilities in LDCs is imperative, requiring investment in digital infrastructure and enhancing digital literacy, implementing e-government initiatives, promoting AI research and innovation, cultivating talent and establishing a policy framework. (Timor-Leste)
  • Making AI a technology that benefits all is an important issue agreed upon in the Global Digital Compact, which also covers peace and security, sustainable development, climate change, and digital cooperation. (Djibouti) 
  • Canada emphasised national strength in AI, clean technologies, critical minerals and digital innovation. (Canada)

Global digital governance

  • Nepal advocates for a global digital cooperation framework that ensures access to infrastructures, digital literacy, and data protection for all. (Nepal)
  • Digital transformation and technological disruption are converging with other crises, such as climate catastrophe and widening inequality. (Malawi, Nepal, Holy See) Meeting this moment demands renewed collective action: a resolve to fortify the founding values of the UN and a revitalised, transformed UN. (Malawi, Nepal, Holy See)

Digital technologies and development

Addressing the digital divide and inequality

  • Rapid technological, geopolitical, and environmental shifts are ushering in a new, multipolar global order that offers both opportunities and risks; smaller states must not be sidelined but fully heard in shaping it. (Benin)
  • The development gap has expanded between the North and the South despite technological revolutions. (Algeria)
  • Digital transformations deserve urgent global attention, and technology must be inclusive, secure, and rights-based. (Nepal)
  • It is crucial to narrow the digital divide within and among countries to create a peaceful and equitable society. (Nepal)
  • Policies and programmes for technologies and progress should be within the reach of everyone for the good of everyone. (Nicaragua)

Technology transfer 

  • The gap between rich and poor nations continues to widen, and developing countries struggle with limited technology transfer and low productivity. (Malawi)
  • The full and effective implementation of the Paris Agreement should include ensuring equitable access to sustainable technologies. (Malawi)
  • The international community is called upon to foster an environment that supports inclusive growth and harnesses the transformative potential of science and technology, and AI. (Malawi)
  • A comprehensive and inclusive approach is needed to address the pressing challenges in the Mediterranean, making economic development on the Southern Front a shared priority through investment and technology transfer. (Algeria)
  • Technology transfer must be accelerated, with calls for scaled-up, predictable, and accessible technology transfer and capacity building for countries on the front line, particularly LDCs. (Nepal)

Cybersecurity

  • Safeguarding cybersecurity is imperative alongside the advancement of AI and digital capabilities in LDCs. (Timor-Leste)
  • Russia has sought to undermine Moldova’s sovereignty through illicit financing, disinformation, cyberattacks, and voter intimidation. (Moldova)

For other topics discussed, head over to our dedicated UNGA80 page, where you can explore more insights from the General Debate.

The General Debate at the 80th session of the UN General Assembly brings together high-level representatives from across the globe to discuss the most pressing issues of our time. The session took place against the backdrop of the UN’s 80th anniversary, serving as a moment for both reflection and a forward-looking assessment of the organisation’s role and relevance.

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour, categories not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets family safety update with parental controls

OpenAI has introduced new parental controls for ChatGPT, giving families greater oversight of how teens use the AI platform. The tools, which are live for all users, allow parents to link accounts with their children and manage settings through a simple control dashboard.

The system introduces stronger safeguards for teen accounts, including filters on graphic or harmful content and restrictions on roleplay involving sex, violence or extreme beauty ideals.

Parents can also fine-tune features such as voice mode, memory, and image generation, or set quiet hours during which ChatGPT cannot be accessed.

A notification mechanism has been added to alert parents if a teen shows signs of acute distress, escalating to emergency services in critical cases. OpenAI said the controls were shaped by consultation with experts, advocacy groups, and policymakers and will be expanded as research evolves.
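
OpenAI manages these controls through its dashboard and has not published a programmatic interface. The sketch below simply models the settings described above as a data structure, with every field name invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TeenAccountSettings:
    """Hypothetical model of the controls described above."""
    linked_parent_id: str
    content_filters: bool = True       # graphic/harmful content filtered
    restricted_roleplay: bool = True   # sex, violence, extreme beauty ideals
    voice_mode: bool = True
    memory: bool = True
    image_generation: bool = True
    quiet_hours: tuple | None = None   # e.g. (22, 7) for 10 pm to 7 am

def is_accessible(settings: TeenAccountSettings, hour: int) -> bool:
    """ChatGPT cannot be accessed during quiet hours."""
    if settings.quiet_hours is None:
        return True
    start, end = settings.quiet_hours
    if start > end:                    # window crosses midnight
        in_quiet = hour >= start or hour < end
    else:
        in_quiet = start <= hour < end
    return not in_quiet

settings = TeenAccountSettings(linked_parent_id="parent-1", quiet_hours=(22, 7))
print(is_accessible(settings, hour=23))  # False: inside quiet hours
print(is_accessible(settings, hour=9))   # True
```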

To complement the parental controls, a new online resource hub has been launched to help families learn how ChatGPT works and explore positive uses in study, creativity and daily life.

OpenAI also plans to roll out an age-prediction system that automatically applies teen-appropriate settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!