OpenAI outlines Japan’s AI Blueprint for inclusive economic growth

A new Japan Economic Blueprint released by OpenAI sets out how AI can power innovation, competitiveness, and long-term prosperity across the country. The plan estimates that AI could add more than ¥100 trillion to Japan’s economy and raise GDP by up to 16%.

Centred on inclusive access, infrastructure, and education, the Blueprint calls for equal AI opportunities for citizens and small businesses, national investment in semiconductors and renewable energy, and expanded lifelong learning to build an adaptive workforce.

AI is already reshaping Japanese industries from manufacturing and healthcare to education and public administration. Factories use AI to cut inspection costs, schools use ChatGPT Edu for personalised teaching, and cities from Saitama to Fukuoka employ AI to enhance local services.

OpenAI suggests that Japan’s focus on ethical, human-centred innovation could make it a model for responsible AI governance. By aligning digital and green priorities, the report envisions technology driving creativity, equality, and shared prosperity across generations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI leaders call for a global pause in superintelligence development

More than 850 public figures, including leading computer scientists Geoffrey Hinton and Yoshua Bengio, have signed a joint statement urging a pause in the development of artificial superintelligence.

The open letter warns that unchecked progress could lead to human economic displacement, loss of freedom, and even extinction.

The appeal follows growing anxiety that the rush toward machines surpassing human cognition could spiral beyond human control. Alan Turing predicted as early as the 1950s that machines might eventually dominate by default, a view that continues to resonate among AI researchers today.

Despite such fears, global powers still view the AI race as essential for national security and technological advancement.

Tech firms like Meta are also exploiting the superintelligence label to promote their most ambitious models, while leaders such as OpenAI’s Sam Altman and Microsoft’s Mustafa Suleyman have previously acknowledged the existential risks of developing systems beyond human understanding.

The statement calls for an international prohibition on superintelligence research until there is broad scientific consensus that it can be developed safely, along with public approval.

Its signatories include technologists, academics, religious figures, and cultural personalities, reflecting a rare cross-sector demand for restraint in an era defined by rapid automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT faces EU’s toughest platform rules after 120 million users

OpenAI’s ChatGPT could soon face the EU’s strictest platform regulations under the Digital Services Act (DSA), after surpassing 120 million monthly users in Europe.

The milestone places OpenAI’s chatbot well above the 45 million-user threshold that triggers heightened oversight.

The DSA imposes stricter obligations on major platforms such as Meta, TikTok, and Amazon, requiring greater transparency, risk assessments, and annual fees to fund EU supervision.

The European Commission confirmed it has begun assessing ChatGPT’s eligibility for ‘very large online platform’ status, a designation that would bring the total number of regulated platforms to 26.

OpenAI reported that its ChatGPT search function alone had 120.4 million monthly active users across the EU in the six months ending 30 September 2025. Globally, the chatbot now counts around 700 million weekly users.

If designated under the DSA, ChatGPT would be required to curb illegal and harmful content more rigorously and demonstrate how its algorithms handle information, marking the EU’s most direct regulatory test yet for generative AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

‘Wicked’ AI data scraping: Pullman calls for regulation to protect creative rights

Author Philip Pullman has publicly urged the UK government to intervene in what he describes as the ‘wicked’ practice of AI firms scraping authors’ works to train models. Pullman insists that writing is more than data; it is creative labour, and authors deserve protection for it.

Pullman’s intervention comes amid increasing concern in the literary community about how generative AI models are built using large volumes of existing texts, often without permission or clear compensation. He argues that unchecked scraping undermines the rights of creators and could hollow out the foundations of culture.

He has called on UK policymakers to establish clearer rules and safeguards over how AI systems access, store, and reuse writers’ content. Pullman warns that without intervention, authors may lose control over their work, and the public could be deprived of authentic, quality literature.

His statement adds to growing pressure from writers, unions and rights bodies calling for better transparency, consent mechanisms and a balance between innovation and creator rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Zuckerberg to testify in landmark trial over social media’s harm to youth

A US court has ruled that Mark Zuckerberg, CEO of Meta, must appear and testify in a high-stakes trial over social media’s effects on children and adolescents. The case, brought by parents and school districts, alleges that platforms contributed to mental health harms by deploying addictive algorithms and weak moderation in their efforts to retain user engagement.

The plaintiffs argue that platforms including Facebook, Instagram, TikTok and Snapchat failed to protect young users, particularly through weak parental controls and design choices that encourage harmful usage patterns. They contend that the executives and companies neglected risks in favour of growth and profits.

Meta had argued that such platforms are shielded from liability under US federal law (Section 230) and that high-level executives should not be dragged into testimony. But the judge rejected those defences, saying that hearing directly from executives is integral to assessing accountability and proving claims of negligence.

Legal experts say the decision marks an inflection point: social media’s architecture and leadership may now be put under the microscope in ways previously reserved for sectors like tobacco and pharmaceuticals. The trial could set a precedent for how tech chief executives are held personally responsible for harms tied to platform design.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netflix goes ‘all in’ on generative AI as entertainment industry remains divided

Netflix has declared itself ‘all in’ on generative artificial intelligence (GenAI), signalling a significant commitment to embedding AI across its business, from production and VFX to search, advertising and user experience, according to a recent investor letter and earnings call.

Co-CEO Ted Sarandos emphasised that while AI will be widely used, it is not a replacement for the creative talent behind Netflix’s original shows. ‘It takes a great artist to make something great,’ he remarked. ‘AI can give creatives better tools … but it doesn’t automatically make you a great storyteller if you’re not.’

Netflix has already applied GenAI in production. In The Eternaut, an Argentine series, a building-collapse scene was generated using AI tools, reportedly ten times faster than conventional VFX workflows would have allowed. The company says it plans to extend GenAI use to search experiences (natural language queries), advertising formats, localisation of titles, and creative pre-visualisation workflows.

However, the entertainment industry remains divided over generative AI’s role. While Netflix embraces the tools, many creators and unions continue to raise concerns about job displacement, copyright and the erosion of human-centred storytelling. Netflix is walking a line of deploying AI at scale while assuring audiences and creators that human artistry remains central.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare calls for UK action on Google’s AI crawlers

Cloudflare’s chief executive Matthew Prince has urged the UK regulator to curb Google’s AI practices. He met with the Competition and Markets Authority (CMA) in London to argue that Google’s bundled crawlers give it excessive power.

Prince said Google uses the same web crawler to gather data for both search and AI products. Blocking the crawler, he added, can also disrupt advertising systems, leaving websites financially exposed.

Cloudflare, which supplies network services to most major AI companies, has proposed separating Google’s AI and search crawlers. Prince believes the change would create fairer access to online content for smaller AI developers.
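
For context on what separating the crawlers would mean in practice: websites grant or deny crawler access through robots.txt, which works per user-agent token. The minimal Python sketch below, using the standard library’s urllib.robotparser and a hypothetical robots.txt, shows how distinct tokens for search and AI crawling would let a site stay in search results while refusing AI data collection; with a single bundled token, that choice cannot be expressed. (Google does publish a separate ‘Google-Extended’ token for some AI training opt-outs, but Cloudflare’s argument is that crawling for search and for AI products still shares one crawler; the tokens here are purely illustrative.)

```python
import urllib.robotparser

# Hypothetical robots.txt for a site that wants to remain indexed for
# search but opt out of AI data collection. The distinction is only
# expressible because two separate user-agent tokens exist; one bundled
# token would force an all-or-nothing choice.
ROBOTS_TXT = """\
User-agent: Googlebot
Allow: /

User-agent: Google-Extended
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("Googlebot", "Google-Extended"):
    allowed = parser.can_fetch(agent, "https://example.com/article")
    print(f"{agent} may fetch the article: {allowed}")

# Output:
# Googlebot may fetch the article: True
# Google-Extended may fetch the article: False
```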

He also provided the CMA with data showing why rivals cannot easily replicate Google’s infrastructure. Media groups have echoed his concerns, warning that Google’s dominance risks deepening inequalities across the AI ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta strengthens protection for older adults against online scams

US tech giant Meta has intensified its campaign against online scams targeting older adults, marking Cybersecurity Awareness Month with new safety tools and global partnerships.

The company said it had detected and disrupted nearly eight million fraudulent accounts on Facebook and Instagram since January, many linked to organised scam centres operating across Asia and the Middle East.

The social media giant is joining the National Elder Fraud Coordination Center in the US, alongside partners including Google, Microsoft and Walmart, to strengthen investigations into large-scale fraud operations.

It is also collaborating with law enforcement and research groups such as Graphika to identify scams involving fake customer service pages, fraudulent financial recovery services and deceptive home renovation schemes.

Meta continues to roll out product updates to improve online safety. WhatsApp now warns users when they share screens with unknown contacts, while Messenger is testing AI-powered scam detection that alerts users to suspicious messages.

Across Facebook, Instagram and WhatsApp, users can activate passkeys and complete a Security Checkup to reinforce account protection.

The company has also partnered with organisations worldwide to raise scam awareness among older adults, from digital literacy workshops in Bangkok to influencer-led safety campaigns across Europe and India.

These efforts form part of Meta’s ongoing drive to protect users through a mix of education, advanced technology and cross-industry cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dutch watchdog warns AI chatbots threaten election integrity

The Dutch data protection authority (Autoriteit Persoonsgegevens, AP) has warned that AI chatbots are biased and unreliable sources of voting advice ahead of the country’s national elections. An investigation by the authority found that chatbots often steered users towards the same two parties, regardless of their stated preferences.

In over half of the tests, the bots suggested either Geert Wilders’ far-right Freedom Party (PVV) or the left-wing GroenLinks-PvdA alliance led by Frans Timmermans. Other parties, such as the centre-right CDA, were rarely mentioned, even when users’ answers closely matched their platforms.

AP deputy head Monique Verdier said that voters were being steered towards parties that did not necessarily reflect their political views, warning that this undermines the integrity of free and fair elections.

The report comes ahead of the 29 October election, where the PVV currently leads the polls. However, the race remains tight, with GroenLinks-PvdA and CDA still in contention and many voters undecided.

Although the AP noted that the bias was not intentional, it attributed the problem to the way AI chatbots function, highlighting the risks of relying on opaque systems for democratic decisions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!