US pushes chip manufacturing to boost AI dominance

Donald Trump’s AI Action Plan, released in July 2025, places domestic semiconductor manufacturing at the heart of US efforts to dominate global AI. The plan backs deregulation, domestic production, and the export of the full AI technology stack, positioning chips as critical to national power.

Lawmakers and tech leaders have previously flagged post-sale chip tracking as viable, with companies like Google already using such methods. Trump’s plan suggests adopting location tracking and enhanced end-use monitoring to ensure chips do not end up in blacklisted destinations.

Trump has pressed for more private sector investment in US fabs, reportedly using tariff threats to extract pledges from chipmakers like TSMC. The cost of building and running chip plants in the US remains significantly higher than in Asia, raising questions about sustainability.

America’s success in AI and semiconductors will likely depend on how well it balances domestic goals with global collaboration. Overregulation risks slowing innovation, while unilateral restrictions may alienate allies and reduce long-term influence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNGA adopts terms of reference for AI Scientific Panel and Global Dialogue on AI governance

On 26 August 2025, following several months of negotiations in New York, the UN General Assembly (UNGA) adopted a resolution (A/RES/79/325) outlining the terms of reference and modalities for the establishment and functioning of two new AI governance mechanisms: an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance. The creation of these mechanisms was formally agreed by UN member states in September 2024, as part of the Global Digital Compact.

The 40-member Scientific Panel has the main task of ‘issuing evidence-based scientific assessments synthesising and analysing existing research related to the opportunities, risks and impacts of AI’, in the form of one annual ‘policy-relevant but non-prescriptive summary report’ to be presented to the Global Dialogue.

The Panel will also ‘provide updates on its work up to twice a year to hear views through an interactive dialogue of the plenary of the General Assembly with the Co-Chairs of the Panel’. The UN Secretary-General is expected to shortly launch an open call for nominations for Panel members; he will then recommend a list of 40 members to be appointed by the General Assembly. 

The Global Dialogue on AI Governance, to involve governments and all relevant stakeholders, will function as a platform ‘to discuss international cooperation, share best practices and lessons learned, and to facilitate open, transparent and inclusive discussions on AI governance with a view to enabling AI to contribute to the implementation of the Sustainable Development Goals and to closing the digital divides between and within countries’. It will be convened annually, for up to two days, in the margins of existing relevant UN conferences and meetings, alternating between Geneva and New York. Each meeting will consist of a multistakeholder plenary meeting with a high-level governmental segment, a presentation of the Panel’s annual report, and thematic discussions.

The Dialogue will be launched during a high-level multistakeholder informal meeting in the margins of the high-level week of UNGA’s 80th session (starting in September 2025). The Dialogue will then be held in the margins of the International Telecommunication Union AI for Good Global Summit in Geneva, in 2026, and of the multistakeholder forum on science, technology and innovation for the Sustainable Development Goals in New York, in 2027.

The General Assembly also decided that ‘the Co-Chairs of the second Dialogue will hold intergovernmental consultations to agree on common understandings on priority areas for international AI governance, taking into account the summaries of the previous Dialogues and contributions from other stakeholders, as an input to the high-level review of the Global Digital Compact and to further discussions’.

The provision represents the most significant change from the previous version of the draft resolution (rev4), which envisioned intergovernmental negotiations, led by the co-facilitators of the high-level review of the GDC, on a ‘declaration reflecting common understandings on priority areas for international AI governance’. An earlier draft (rev3) referred to a UNGA resolution on AI governance, which proved to be a contentious point during the negotiations.

To enable the functioning of these mechanisms, the Secretary-General is requested to ‘facilitate, within existing resources and mandates, appropriate Secretariat support for the Panel and the Dialogue by leveraging UN system-wide capacities, including those of the Inter-Agency Working Group on AI’.

States and other stakeholders are encouraged to ‘support the effective functioning of the Panel and Dialogue, including by facilitating the participation of representatives and stakeholders of developing countries by offering travel support, through voluntary contributions that are made public’. 

The continuation of the terms of reference of the Panel and the Dialogue may be considered and decided upon by UNGA during the high-level review of the GDC, at UNGA 82. 

***

The Digital Watch observatory has followed the negotiations on this resolution and published regular updates.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netflix limits AI use in productions with new rules

Netflix has issued detailed guidance for production companies on the approved use of generative AI. The guidelines allow AI tools for early ideation tasks such as moodboards or reference images, but stricter oversight applies beyond that stage.

The company outlined five guiding principles. These include ensuring generated content does not replicate copyrighted works, maintaining security of inputs, avoiding use of AI in final deliverables, and prohibiting storage or reuse of production data by AI tools.

Enterprises or vendors working on Netflix content must pass the platform’s AI compliance checks at every stage.

Netflix has already used AI to reduce VFX costs on projects like The Eternaut, but has moved to formalise boundaries around how and when the technology is applied.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI model forecasts Bitcoin to fall below $100,000

Bitcoin has slipped below $110,000 and, according to an analysis Finbold ran with ChatGPT-5, a further drop could occur in the coming weeks. The model pointed to technical resistance and seasonal factors suggesting September weakness.

Key levels around $112,000 and $106,000 are under pressure, with the AI projecting a sharp decline toward $98,000 if support breaks. Historically, September has been one of Bitcoin’s worst-performing months, adding to the bearish outlook.

Despite the short-term caution, demand from ETFs and long-term holders may offer support between $95,000 and $98,000. Longer-term technicals remain intact, with the 200-day moving average sitting near $95,000.
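For context on the indicator itself, a 200-day simple moving average is just the mean of the last 200 daily closes. The minimal Python sketch below shows how such levels can be computed; the CSV file and column names are hypothetical placeholders, and it illustrates the indicator in general, not Finbold’s or ChatGPT-5’s actual method.

```python
# Minimal sketch: compute a 200-day simple moving average (SMA) from daily
# closing prices. The file and column names ("date", "close") are
# hypothetical; this shows the indicator, not Finbold's actual analysis.
import pandas as pd

prices = pd.read_csv("btc_daily.csv", parse_dates=["date"])  # placeholder file
prices = prices.sort_values("date").set_index("date")

# The 200-day SMA is the rolling mean of the last 200 daily closes.
prices["sma_200"] = prices["close"].rolling(window=200).mean()

latest = prices.iloc[-1]
print(f"Close: {latest['close']:,.0f}  200-day SMA: {latest['sma_200']:,.0f}")

# Simple check of the price against the support levels cited above.
for level in (112_000, 106_000, 98_000):
    side = "below" if latest["close"] < level else "above"
    print(f"Price is {side} the ${level:,} level")
```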

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Saudi Arabia unveils Humain Chat to drive AI innovation

Saudi Arabia has taken a significant step in AI with the launch of Humain Chat, an app powered by one of the world’s largest Arabic-language training datasets.

Developed by state-backed venture Humain, the app is designed to strengthen the country’s role in AI while promoting sovereign technologies.

Built on the ALLAM large language model, Humain Chat supports real-time web search, speech input across Arabic dialects, bilingual switching between Arabic and English, and data handling compliant with Saudi privacy laws.

The app is already available on the web, iOS, and Android in Saudi Arabia, with plans for regional expansion across the Middle East before reaching global markets.

Humain was established in May 2025 under the leadership of Crown Prince Mohammed bin Salman and the Public Investment Fund. Its flagship model, ALLAM 34B, is described as the most advanced AI system created in the Arab world. The company said the app will evolve further as user adoption grows.

CEO Tareq Amin called the launch ‘a historic milestone’ for Saudi Arabia, stressing that Humain Chat shows how advanced AI can be developed in Arabic while staying culturally rooted and built by local expertise.

A team of 120 specialists based in the Kingdom created the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube under fire for AI video edits without creator consent

Anger is growing after it emerged that YouTube has been quietly altering some uploaded videos using machine learning. The company admitted it had been experimenting with automated edits, which sharpen images, smooth skin, and enhance clarity, without notifying creators.

Although the edits were not generated by tools like ChatGPT or Gemini, they still relied on AI.

The issue has sparked concern among creators, who argue that the lack of consent undermines trust.

YouTuber Rhett Shull publicly criticised the platform, prompting YouTube liaison Rene Ritchie to clarify that the edits were simply efforts to ‘unblur and denoise’ footage, similar to smartphone processing.

However, creators emphasise that the difference lies in transparency, since phone users know when enhancements are applied, whereas YouTube users were unaware.
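For illustration, the kind of ‘unblur and denoise’ pass Ritchie describes can be approximated with standard image processing: denoise first, then sharpen with an unsharp mask. The Python sketch below, using OpenCV with a placeholder file name, is a generic example of such enhancement, not YouTube’s actual pipeline.

```python
# Generic denoise-then-sharpen pass, similar in spirit to the enhancement
# described; this is NOT YouTube's actual pipeline.
import cv2

frame = cv2.imread("frame.png")  # placeholder input frame

# Non-local-means denoising removes sensor and compression noise.
denoised = cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21)

# Unsharp mask: subtract a blurred copy to exaggerate edges ("unblur").
blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=3)
sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

cv2.imwrite("frame_enhanced.png", sharpened)
```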

Consent remains central to debates around AI adoption, especially as regulation lags and governments push companies to expand their use of the technology.

Critics warn that even minor, automatic edits can treat user videos as training material without permission, raising broader concerns about control and ownership on digital platforms.

YouTube has not confirmed whether the experiment will expand or when it might end.

For now, viewers noticing oddly upscaled Shorts may be seeing the outcome of these hidden edits, which have only fuelled anger about how AI is being introduced into creative spaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI controversy surrounds Will Smith’s comeback shows

Footage from Will Smith’s comeback tour has sparked claims that AI was used to alter shots of the crowd. Viewers noticed faces appearing blurred or distorted, along with extra fingers and oddly shaped hands in several clips.

Some accused Smith of boosting audience shots with AI, while others pointed to YouTube, which has been reported to apply AI upscaling without creators’ knowledge.

Guitarist and YouTuber Rhett Shull recently suggested the platform had altered his videos, raising concerns that artists might be wrongly accused of using deepfakes.

The controversy comes as the boundary between reality and fabrication grows increasingly uncertain. AI has been reshaping how audiences perceive authenticity, from fake bands to fabricated images of music legends.

Singer SZA is among the artists criticising the technology, highlighting its heavy energy use and potential to undermine creativity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots found unreliable in suicide-related responses, according to a new study

A new study by the RAND Corporation has raised concerns about the ability of AI chatbots to safely answer questions related to suicide and self-harm.

Researchers tested ChatGPT, Claude and Gemini with 30 different suicide-related questions, repeating each one 100 times. Clinicians assessed the queries on a scale from low to high risk, ranging from general information-seeking to dangerous requests about methods of self-harm.
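A repeated-query design like this maps onto a small evaluation harness. The Python sketch below is a hypothetical reconstruction: query_model, the placeholder questions, and the risk labels are illustrative stand-ins, not RAND’s actual code or test items.

```python
# Hypothetical sketch of a repeated-query evaluation harness; the helper
# and data below are stand-ins, not RAND's actual code or questions.
from collections import defaultdict

REPEATS = 100  # each question was posed 100 times, per the study

# Placeholder items with clinician-style risk labels (illustrative only).
questions = {
    "low-risk information-seeking question": "low",
    "medium-risk question seeking support": "medium",
}

def query_model(model: str, prompt: str) -> str:
    """Hypothetical wrapper around a chatbot API; replace with real calls."""
    return f"[{model} reply to: {prompt}]"  # canned placeholder reply

replies = defaultdict(list)
for model in ("chatgpt", "claude", "gemini"):
    for question, risk in questions.items():
        for _ in range(REPEATS):
            # Record every reply so clinicians can later judge its
            # appropriateness, mirroring the study's repeated sampling.
            replies[(model, risk)].append(query_model(model, question))

print({key: len(vals) for key, vals in replies.items()})
```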

The study revealed that ChatGPT and Claude were more reliable at handling low-risk and high-risk questions, avoiding harmful instructions in dangerous scenarios. Gemini, however, produced more variable results.

While all three AI chatbots sometimes responded appropriately to medium-risk questions, such as offering supportive resources, they often failed to respond altogether, leaving potentially vulnerable users without guidance.

Experts warn that millions of people now use large language models as conversational partners instead of trained professionals, which raises serious risks when the subject matter involves mental health. Instances have already been reported where AI appeared to encourage self-harm or generate suicide notes.

The RAND team stressed that safeguards are urgently needed to prevent such tools from producing harmful content in response to sensitive queries.

The study also noted troubling inconsistencies. ChatGPT and Claude occasionally gave inappropriate details when asked about hazardous methods, while Gemini refused even basic factual queries about suicide statistics.

Researchers further observed that ChatGPT showed reluctance to recommend therapeutic resources, often avoiding direct mention of safe support channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Honor and Google deepen platform partnership with longer updates and AI integration

Honor has announced a joint commitment with Google to strengthen its Android platform support. The company now guarantees six years of Android OS and security updates for its upcoming Honor 400 series, matching similar commitments from Google’s Pixel line and Samsung.

This update period is part of Honor’s wider Alpha Plan, a strategic framework positioning the company as an AI device ecosystem player.

Honor will invest US$10 billion over five years to support this transformation through hardware innovation, software longevity and AI agent integration.

The partnership enables deeper cooperation with Google on Android updates and AI features. Honor already integrates tools like Circle to Search, AI photo expansion and the Gemini voice assistant on its Magic series. The extended software support promises longer device lifespans, reduced e-waste and an improved user experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Copilot policy flaw allows unauthorised access to AI agents

Administrators have found that Microsoft Copilot’s ‘NoUsersCanAccessAgent’ policy, which is designed to prevent user access to certain AI agents, is being ignored. Some agents, including ExpenseTrackerBot and HRQueryAgent, remain installable despite global restrictions.

Microsoft 365 tenants must now use per-agent PowerShell commands to disable access manually. This workaround is both time-consuming and error-prone, particularly in large organisations. The failure to enforce access policies raises concerns regarding operational overhead and audit risk.
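In practice, the workaround reduces to enumerating installed agents and disabling each one individually while logging the result. The Python sketch below models that audit-and-disable loop; the helper functions and the allowed agent name are hypothetical stand-ins (the real workaround uses per-agent PowerShell commands, whose exact cmdlets are not reproduced here).

```python
# Sketch of the per-agent audit-and-disable loop the workaround requires.
# The helpers are hypothetical placeholders; in practice, admins run
# per-agent PowerShell commands against their Microsoft 365 tenant.
import logging

logging.basicConfig(level=logging.INFO)

BLOCKED_AGENTS = {"ExpenseTrackerBot", "HRQueryAgent"}  # from the reports

def list_installed_agents() -> list[str]:
    """Hypothetical: enumerate agents installed in the tenant."""
    return ["ExpenseTrackerBot", "HRQueryAgent", "ApprovedHelper"]

def disable_agent(name: str) -> None:
    """Hypothetical stand-in for the per-agent disable command."""
    logging.info("Disabled agent: %s", name)

# Audit the inventory, disable anything that should be blocked, and log
# every decision so the change is visible in later reviews.
for agent in list_installed_agents():
    if agent in BLOCKED_AGENTS:
        disable_agent(agent)
    else:
        logging.info("Agent allowed: %s", agent)
```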

The security implications are significant. Unauthorised agents can export data from SharePoint or OneDrive, run RPA workflows without oversight, or process sensitive information without compliance controls. The flaw undermines the purpose of access control settings and exposes the system to misuse.

To mitigate the risk, administrators are urged to audit agent inventories, enforce Conditional Access policies (for example, requiring MFA or device compliance), and consistently monitor agent usage through logs and dashboards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!