US pushes chip manufacturing to boost AI dominance

Donald Trump’s AI Action Plan, released in July 2025, places domestic semiconductor manufacturing at the heart of US efforts to dominate global AI. The plan supports deregulation, domestic production and export of full-stack technology, positioning chips as critical to national power.

Lawmakers and tech leaders have previously flagged tracking chips post-sale as viable, with companies like Google already using such methods. Trump’s plan suggests adopting location tracking and enhanced end-use monitoring to ensure chips avoid blacklisted destinations.

Trump has pressed for more private sector investment in US fabs, reportedly using tariff threats to extract pledges from chipmakers like TSMC. The cost of building and running chip plants in the US remains significantly higher than in Asia, raising questions about sustainability.

America’s success in AI and semiconductors will likely depend on how well it balances domestic goals with global collaboration. Overregulation risks slowing innovation, while unilateral restrictions may alienate allies and reduce long-term influence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Publishers set to earn from Comet Plus, Perplexity’s new initiative

Perplexity has announced Comet Plus, a new service that will pay premium publishers to provide high-quality news content as an alternative to clickbait. The company has not disclosed its roster of partners or payment structure, though reports suggest a pool of $42.5 million.

Publishers have long criticised AI services for exploiting their work without compensation. Perplexity, backed by Amazon’s Jeff Bezos, said Comet Plus will create a fairer system and reward journalists for producing trusted content in the era of AI.

The platform introduces a revenue model based on three streams: human visits, search citations, and agent actions. Perplexity argues this approach better reflects how people consume information today, whether by browsing manually, seeking AI-generated answers, or using AI agents.

The company stated that the initiative aims to rebuild trust between readers and publishers, while ensuring that journalism thrives in a changing digital economy. The initial group of publishing partners will be revealed later.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNGA adopts terms of reference for AI Scientific Panel and Global Dialogue on AI governance

On 26 August 2025, following several months of negotiations in New York, the UN General Assembly (UNGA) adopted a resolution (A/RES/79/325) outlining the terms of reference and modalities for the establishment and functioning of two new AI governance mechanisms: an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance. The creation of these mechanisms was formally agreed by UN member states in September 2024, as part of the Global Digital Compact

The 40-member Scientific Panel has the main task of ‘issuing evidence-based scientific assessments synthesising and analysing existing research related to the opportunities, risks and impacts of AI’, in the form of one annual ‘policy-relevant but non-prescriptive summary report’ to be presented to the Global Dialogue.

The Panel will also ‘provide updates on its work up to twice a year to hear views through an interactive dialogue of the plenary of the General Assembly with the Co-Chairs of the Panel’. The UN Secretary-General is expected to shortly launch an open call for nominations for Panel members; he will then recommend a list of 40 members to be appointed by the General Assembly. 

The Global Dialogue on AI Governance, to involve governments and all relevant stakeholders, will function as a platform ‘to discuss international cooperation, share best practices and lessons learned, and to facilitate open, transparent and inclusive discussions on AI governance with a view to enabling AI to contribute to the implementation of the Sustainable Development Goals and to closing the digital divides between and within countries’. It will be convened annually, for up to two days, in the margins of existing relevant UN conferences and meetings, alternating between Geneva and New York. Each meeting will consist of a multistakeholder plenary meeting with a high-level governmental segment, a presentation of the panel’s annual report, and thematic discussions. 

The Dialogue will be launched during a high-level multistakeholder informal meeting in the margins of the high-level week of UNGA’s 80th session (starting in September 2025). The Dialogue will then be held in the margins of the International Telecommunication Union AI  for Good Global Summit in Geneva, in 2026, and of the multistakeholder forum on science, technology and innovation for the Sustainable Development Goals in New York, in 2027.

The General Assembly also decided that ‘the Co-Chairs of the second Dialogue will hold intergovernmental consultations to agree on common understandings on priority areas for international AI governance, taking into account the summaries of the previous Dialogues and contributions from other stakeholders, as an input to the high-level review of the Global Digital Compact and to further discussions’.

The provision represents the most significant change compared to the previous version of the draft resolution (rev4), which was envisioning intergovernmental negotiations, led by the co-facilitators of the high-level review of the GDC, on a ‘declaration reflecting common understandings on priority areas for international AI governance’. An earlier draft (rev3) was talking about a UNGA resolution on AI governance, which proved to be a contentious point during the negotiations.

To enable the functioning of these mechanisms, the Secretary-General is requested to ‘facilitate, within existing resources and mandates, appropriate Secretariat support for the Panel and the Dialogue by leveraging UN system-wide capacities, including those of the Inter-Agency Working Group on AI’.

States and other stakeholders are encouraged to ‘support the effective functioning of the Panel and Dialogue, including by facilitating the participation of representatives and stakeholders of developing countries by offering travel support, through voluntary contributions that are made public’. 

The continuation of the terms of reference of the Panel and the Dialogue may be considered and decided upon by UNGA during the high-level review of the GDC, at UNGA 82. 

***

The Digital Watch observatory has followed the negotiations on this resolution and published regular updates:

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Travellers claim ChatGPT helps cut flight costs by hundreds of pounds

ChatGPT is increasingly used as a travel assistant, with some travellers claiming it can save hundreds of pounds on flights. Finance influencer Casper Opala shares cost-saving tips online and said the AI tool helped him secure a flight for £70 that initially cost more than £700.

Opala shared a series of prompts that allow ChatGPT to identify hidden routes, budget airlines not listed on major platforms, and potential savings through alternative airports or separate bookings. He also suggested using the tool to monitor prices for several days or compare one-way fares with return tickets.

While many money-saving tricks have existed for years, ChatGPT condenses the process, collecting results in seconds. Opala says this efficiency is a strong starting point for cheaper travel deals.

Experts, however, warn that ChatGPT is not connected to live flight booking systems. TravelBook’s Laura Pomer noted that the AI can sometimes present inaccurate or outdated fares, meaning users should always verify results before booking.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Brave uncovers vulnerability in Perplexity’s Comet that risked sensitive user data

Perplexity’s AI-powered browser, Comet, was found to have a serious vulnerability that could have exposed sensitive user data through indirect prompt injection, according to researchers at Brave, a rival browser company.

The flaw stemmed from how Comet handled webpage-summarisation requests. By embedding hidden instructions on websites, attackers could trick the browser’s large language model into executing unintended actions, such as extracting personal emails or accessing saved passwords.

Brave researchers demonstrated how the exploit could bypass traditional protections, such as the same-origin policy, showing scenarios where attackers gained access to Gmail or banking data by manipulating Comet into following malicious cues.

Brave disclosed the vulnerability to Perplexity on 11 August, but stated that it remained unfixed when they published their findings on 20 August. Perplexity later confirmed to CNET that the flaw had been patched, and Brave was credited for working with them to resolve it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Vietnam accelerates modernization of foreign affairs through technology and AI

The Ministry of Foreign Affairs of Vietnam spearheads an extensive digital transformation initiative in line with the Politburo’s Resolution No. 57-NQ/TW issued in December 2024. This resolution highlights the necessity of advancements in science, technology, and national digital transformation.

Under the guidance of Deputy Prime Minister and Minister Bui Thanh Son, the Ministry is committed to modernising its operations and improving efficiency, reflecting Vietnam’s broader digital evolution strategy across all sectors.

Key implementations of this transformation include the creation of three major digital platforms: an electronic information portal providing access to foreign policies and online public services, an online document management system for internal digitalisation, and an integrated data-sharing platform for connectivity and multi-dimensional data exchange.

The Ministry has digitised 100% of its administrative procedures, linking them to a national-level system, showcasing a significant stride towards administrative reform and efficiency. Additionally, the Ministry has fully adopted social media channels, including Facebook and Twitter, indicating its efforts to enhance foreign information dissemination and public engagement.

A central component of this initiative is the ‘Digital Literacy for All’ movement, inspired by President Ho Chi Minh’s historic ‘Popular Education’ campaign. This movement focuses on equipping diplomatic personnel with essential digital skills, transforming them into proficient ‘digital civil servants’ and ‘digital ambassadors.’ The Ministry aims to enhance its diplomatic functions in today’s globally connected environment by advancing its ability to navigate and utilise modern technologies.

The Ministry plans to develop its digital infrastructure further, strengthen data management, and integrate AI for strategic planning and predictive analysis.

Establishing a digital data warehouse for foreign information and enhancing human resources by nurturing technology experts within the diplomatic sector are also on the agenda. These actions reflect a strong commitment to fostering a professional and globally adept diplomatic industry, poised to safeguard national interests and thrive in the digital age.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Humain Chat has been unveiled by Saudi Arabia to drive AI innovation

Saudi Arabia has taken a significant step in AI with the launch of Humain Chat, an app powered by one of the world’s most enormous Arabic-trained datasets.

Developed by state-backed venture Humain, the app is designed to strengthen the country’s role in AI while promoting sovereign technologies.

Built on the Allam large language model, Humain Chat allows real-time web search, speech input across Arabic dialects, bilingual switching between Arabic and English, and secure data compliance with Saudi privacy laws.

The app is already available on the web, iOS, and Android in Saudi Arabia, with plans for regional expansion across the Middle East before reaching global markets.

Humain was established in May under the leadership of Crown Prince Mohammed bin Salman and the Public Investment Fund. Its flagship model, ALLAM 34B, is described as the most advanced AI system created in the Arab world. The company said the app will evolve further as user adoption grows.

CEO Tareq Amin called the launch ‘a historic milestone’ for Saudi Arabia, stressing that Humain Chat shows how advanced AI can be developed in Arabic while staying culturally rooted and built by local expertise.

A team of 120 specialists based in the Kingdom created the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube under fire for AI video edits without creator consent

Anger grows as YouTube secretly alters some uploaded videos using machine learning. The company admitted that it had been experimenting with automated edits, which sharpen images, smooth skin, and enhance clarity, without notifying creators.

Although tools like ChatGPT or Gemini did not generate these changes, they still relied on AI.

The issue has sparked concern among creators, who argue that the lack of consent undermines trust.

YouTuber Rhett Shull publicly criticised the platform, prompting YouTube liaison Rene Ritchie to clarify that the edits were simply efforts to ‘unblur and denoise’ footage, similar to smartphone processing.

However, creators emphasise that the difference lies in transparency, since phone users know when enhancements are applied, whereas YouTube users were unaware.

Consent remains central to debates around AI adoption, especially as regulation lags and governments push companies to expand their use of the technology.

Critics warn that even minor, automatic edits can treat user videos as training material without permission, raising broader concerns about control and ownership on digital platforms.

YouTube has not confirmed whether the experiment will expand or when it might end.

For now, viewers noticing oddly upscaled Shorts may be seeing the outcome of these hidden edits, which have only fuelled anger about how AI is being introduced into creative spaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots found unreliable in suicide-related responses, according to a new study

A new study by the RAND Corporation has raised concerns about the ability of AI chatbots to answer questions related to suicide and self-harm safely.

Researchers tested ChatGPT, Claude and Gemini with 30 different suicide-related questions, repeating each one 100 times. Clinicians assessed the queries on a scale from low to high risk, ranging from general information-seeking to dangerous requests about methods of self-harm.

The study revealed that ChatGPT and Claude were more reliable at handling low-risk and high-risk questions, avoiding harmful instructions in dangerous scenarios. Gemini, however, produced more variable results.

While all three ΑΙ chatbots sometimes responded appropriately to medium-risk questions, such as offering supportive resources, they often failed to respond altogether, leaving potentially vulnerable users without guidance.

Experts warn that millions of people now use large language models as conversational partners instead of trained professionals, which raises serious risks when the subject matter involves mental health. Instances have already been reported where AI appeared to encourage self-harm or generate suicide notes.

The RAND team stressed that safeguards are urgently needed to prevent such tools from producing harmful content in response to sensitive queries.

The study also noted troubling inconsistencies. ChatGPT and Claude occasionally gave inappropriate details when asked about hazardous methods, while Gemini refused even basic factual queries about suicide statistics.

Researchers further observed that ChatGPT showed reluctance to recommend therapeutic resources, often avoiding direct mention of safe support channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI accuses Apple and OpenAI of blocking competition in AI

Elon Musk’s xAI has filed a lawsuit in Texas accusing Apple and OpenAI of colluding to stifle competition in the AI sector.

The case alleges that both companies locked up markets to maintain monopolies, making it harder for rivals like X and xAI to compete.

The dispute follows Apple’s 2024 deal with OpenAI to integrate ChatGPT into Siri and other apps on its devices. According to the lawsuit, Apple’s exclusive partnership with OpenAI has prevented fair treatment of Musk’s products within the App Store, including the X app and xAI’s Grok app.

Musk previously threatened legal action against Apple over antitrust concerns, citing the company’s alleged preference for ChatGPT.

Musk, who acquired his social media platform X in a $45 billion all-stock deal earlier in the year, is seeking billions of dollars in damages and a jury trial. The legal action highlights Musk’s ongoing feud with OpenAI’s CEO, Sam Altman.

Musk, a co-founder of OpenAI who left in 2018 after disagreements with Altman, has repeatedly criticised the company’s shift to a profit-driven model. He is also pursuing separate litigation against OpenAI and Altman over that transition in California.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!