Online health search grows, but scepticism about AI stays high

Trust in traditional healthcare providers remains high, but Americans are increasingly turning to AI for health information, according to new data from the Annenberg Public Policy Center (APPC).

While 90% of adults trust their personal health provider, nearly 8 in 10 say they are likely to look online for answers to health-related questions. The rise of the internet gave the public access to government health authorities such as the CDC, FDA, and NIH.

Although trust in these institutions dipped during the Covid-19 pandemic, confidence remains relatively high at 66%–68%. Generative AI tools are now becoming a third key source of health information.

AI-generated summaries, such as Google’s ‘AI Overviews’ or Bing’s ‘Copilot Answers’, appear prominently in search results.

Despite disclaimers that responses may contain mistakes, nearly two-thirds (63%) of online health searchers find these responses somewhat or very reliable. Around 31% report often or always finding the answers they need in the summaries.

Public attitudes towards AI in clinical settings remain more cautious. Nearly half (49%) of US adults say they are uncomfortable with providers relying on AI tools instead of their own experience. About 36% express some level of comfort, while 41% believe providers are already using AI at least occasionally.

AI use is growing, but most online health seekers continue exploring beyond the initial summary. Two-thirds follow links to websites such as Mayo Clinic, WebMD, or non-profit organisations like the American Heart Association. Federal resources such as the CDC and NIH are also consulted.

Younger users are more likely to recognise and interact with AI summaries. Among those aged 18 to 49, between 69% and 75% have seen AI-generated content in search results, compared to just 49% of users over 65.

Despite high smartphone ownership (93%), only 59% of users track their health with apps. Among these, 52% are likely to share data with a provider, although 36% say they would not. Most respondents (80%) welcome prescription alerts from pharmacies.

The survey, fielded in April 2025 among 1,653 US adults, highlights growing reliance on AI for health information but also reveals concerns about its use in professional medical decision-making. APPC experts urge greater transparency and caution, especially for vulnerable users who may not understand the limitations of AI-generated content.

Director Kathleen Hall Jamieson warns that confusing AI-generated summaries with professional guidance could cause harm. Analyst Laura A. Gibson adds that outdated information may persist in AI platforms, reinforcing the need for user scepticism.

As the public turns to digital health tools, researchers recommend clearer policies, increased transparency, and greater diversity in AI development to ensure safe and inclusive outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google expands NotebookLM with curated content and mobile access

While Gemini often dominates attention in Google’s AI portfolio, other innovative tools deserve the spotlight. One standout is NotebookLM, a virtual research assistant that helps users organise and interact with complex information across various subjects.

NotebookLM creates structured notebooks from curated materials, allowing meaningful engagement with the content. It supports dynamic features, including summaries and transformation options like Audio Overview, making research tasks more intuitive and efficient.

According to Google, featured notebooks are built using information from respected authors, academic institutions, and trusted nonprofits. Current topics include Shakespeare, Yellowstone National Park and more, offering a wide spectrum of well-sourced material.

Featured notebooks function just like regular ones, with added editorial quality. Users can navigate, explore, and repurpose content in ways that support individual learning and project needs. Google has confirmed the collection will grow over time.

NotebookLM remains in early development, yet the tool already shows potential for transforming everyday research tasks. Google also plans tighter integration with its other productivity tools, including Docs and Slides.

The tool significantly reduces the effort traditionally required for academic or creative research. Structured data presentation, combined with interactive features, makes information easier to consume and act upon.

NotebookLM was initially released on desktop but is now also available as a mobile app. Users can download it via the Google Play Store to create notebooks, add content, and stay productive from anywhere.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPAI Code of Practice creates legal uncertainty for non-signatories

Lawyers at William Fry say the EU’s final Code of Practice for general-purpose AI (GPAI) models leaves key questions unanswered. GPAI systems include models such as OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama, trained on vast datasets for broad applications.

The Code of Practice, released last week, addresses transparency, safety, security, and copyright, and is described by the European Commission as a voluntary tool. It was prepared by independent experts to help GPAI developers comply with upcoming legal obligations under the EU AI Act.

In a statement on the firm’s website, William Fry lawyers Barry Scannell and Leo Moore question how voluntary the code truly is. They note that signatories not in full compliance can still be seen as acting in good faith and will be supported rather than penalised.

A protected grace period runs until 2 August 2026, after which the AI Act could allow fines for non-compliance. The lawyers warn that this creates a two-tier system, shielding signatories while exposing non-signatories to immediate legal risk under the AI Act.

Developers who do not sign the code may face higher regulatory scrutiny, despite it being described as non-binding. William Fry also points out that detailed implementation guidelines and templates have not yet been published by the EU.

Additional guidance to clarify key GPAI concepts is expected later this month, but the current lack of detail creates uncertainty. The code’s copyright section, the lawyers argue, shows how the document has evolved into a quasi-regulatory framework.

An earlier draft required only reasonable efforts to avoid copyright-infringing sources. The final version demands the active exclusion of such sites. A proposed measure requiring developers to verify the source of copyrighted data acquired from third parties has been removed from the final draft.
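The code itself does not prescribe how that exclusion should be implemented. Purely as an illustrative sketch, assuming a hypothetical blocklist of flagged domains, source filtering in a training-data crawl pipeline might look something like this:

```python
# Illustrative only: the Code of Practice does not mandate any specific
# implementation, and the domain names below are hypothetical.
from urllib.parse import urlparse

FLAGGED_DOMAINS = {"piracy-mirror.example", "infringing-archive.example"}

def exclude_flagged_sources(urls):
    """Drop crawl URLs hosted on flagged domains or their subdomains."""
    kept = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in FLAGGED_DOMAINS):
            continue  # actively excluded, per the final code's stricter wording
        kept.append(url)
    return kept

sample = [
    "https://piracy-mirror.example/novel.txt",
    "https://licensed-publisher.example/article",
]
print(exclude_flagged_sources(sample))  # only the licensed source survives
```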

The lawyers argue that this creates a practical blind spot, allowing unlawful content to slip into training data undetected. Rights holders still retain the ability to pursue action if they believe their content was misused, even if providers are signatories.

Meanwhile, the transparency chapter now outlines specific standards, rather than general principles. The safety and security section also sets enforceable expectations, increasing the operational burden on model developers.

William Fry warns that gaps between the code’s obligations and the missing technical documentation could have costly consequences. They conclude that, without the final training data template or implementation details, both developers and rights holders face compliance risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How AI-generated video is reshaping the film industry

AI-generated video has evolved at breakneck speed, moving from distorted and unconvincing clips to hyper-realistic creations that rival traditional filmmaking. What was once a blurry, awkward depiction of Will Smith eating spaghetti in 2023 is now flawlessly rendered on platforms like Google’s Veo 3.

In just months, tools such as Luma Labs’ Dream Machine, OpenAI’s Sora, and Runway AI’s Gen-4 have redefined what’s possible, drawing the attention of Hollywood studios, advertisers, and artists eager to test the limits of this new creative frontier.

Major industry players are already experimenting with AI for previsualisation, visual effects, and even entire animated films. Lionsgate and AMC Networks have partnered with Runway AI, with executives exploring AI-generated family-friendly versions of blockbuster franchises like John Wick and The Hunger Games.

The technology drastically cuts costs for complex scenes, making it possible to create elaborate previews—like a snowstorm filled with thousands of soldiers—for a fraction of the traditional price. However, while some see AI as a tool to expand creative possibilities, resistance remains strong.

Critics argue that AI threatens traditional artistic processes, raises ethical concerns over energy use and data training, and risks undermining human creativity. The debate mirrors past technological shifts in entertainment—inevitable yet disruptive.

As Runway and other pioneers push toward immersive experiences in augmented and virtual reality, the future of filmmaking may no longer be defined solely by Hollywood, but by anyone with access to these powerful tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Military AI and the void of accountability

In her blog post ‘Military AI: Operational dangers and the regulatory void,’ Julia Williams warns that AI is reshaping the battlefield, shifting from human-controlled systems to highly autonomous technologies that make life-and-death decisions. From the United States’ Project Maven to Israel’s AI-powered targeting in Gaza and Ukraine’s semi-autonomous drones, military AI is no longer a futuristic concept but a present reality.

While designed to improve precision and reduce risks, these systems carry hidden dangers—opaque ‘black box’ decisions, biases rooted in flawed data, and unpredictable behaviour in high-pressure situations. Operators either distrust AI or over-rely on it, sometimes without understanding how conclusions are reached, creating a new layer of risk in modern warfare.

Bias remains a critical challenge. AI can inherit societal prejudices from the data it is trained on, misinterpret patterns through algorithmic flaws, or encourage automation bias, where humans trust AI outputs even when they shouldn’t.

These flaws can have devastating consequences in military contexts, leading to wrongful targeting or escalation. Despite attempts to ensure ‘meaningful human control’ over autonomous weapons, the concept lacks clarity, allowing states and manufacturers to apply oversight unevenly. Responsibility for mistakes remains murky—should it lie with the operator, the developer, or the machine itself?

That uncertainty feeds into a growing global security crisis. Regulation lags far behind technological progress, with international forums disagreeing on how to govern military AI.

Meanwhile, an AI arms race accelerates between the US and China, driven by private-sector innovation and strategic rivalry. Export controls on semiconductors and key materials only deepen mistrust, while less technologically advanced nations fear both being left behind and becoming targets of AI warfare. The risk extends beyond states, as rogue actors and non-state groups could gain access to advanced systems, making conflicts harder to contain.

As Williams highlights, the growing use of military AI threatens to speed up the tempo of conflict and blur accountability. Without strong governance and global cooperation, it could escalate wars faster than humans can de-escalate them, shifting the battlefield from soldiers to civilian infrastructure and leaving humanity vulnerable to errors we may not survive.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google strengthens position as Perplexity and OpenAI launch browsers

OpenAI is reportedly preparing to launch an AI-powered web browser in the coming weeks, aiming to compete with Alphabet’s dominant Chrome browser, according to sources cited by Reuters.

The forthcoming browser seeks to leverage AI to reshape how users interact with the internet, while potentially granting OpenAI deeper access to valuable user data—a key driver behind Google’s advertising empire.

If adopted by ChatGPT’s 500 million weekly active users, the browser could pose a significant challenge to Chrome, which currently underpins much of Alphabet’s ad targeting and search traffic infrastructure.

The browser is expected to feature a native chat interface, reducing the need for users to click through traditional websites. These features align with OpenAI’s broader strategy to embed its services more seamlessly into users’ daily routines.

While the company declined to comment on the development, anonymous sources noted that the browser is likely to support AI agent capabilities, such as booking reservations or completing web forms on behalf of users.
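OpenAI has not described how these agent features are built, but scripted form completion itself is well-trodden ground. A minimal sketch using the open-source Playwright library, with a hypothetical reservation page and field selectors, shows the kind of action such an agent would automate:

```python
# Generic browser-automation sketch, not OpenAI's implementation.
# The URL and CSS selectors below are hypothetical placeholders.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://restaurant.example/reserve")
    page.fill("#name", "Jane Doe")         # fill the guest-name field
    page.fill("#party-size", "2")          # fill the party-size field
    page.select_option("#time", "19:00")   # pick a time slot
    page.click("button[type=submit]")      # submit the reservation form
    browser.close()
```

An AI agent layers a language model on top of this kind of control surface, deciding which fields to fill and when, rather than following a fixed script.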

The move comes as OpenAI faces intensifying competition from Google, Anthropic, and Perplexity.

In May, OpenAI acquired the AI hardware start-up io for $6.5 billion, in a deal linked to Apple design veteran Jony Ive. The acquisition signals a strategic push beyond software and into integrated consumer tools.

Despite Chrome’s grip on over two-thirds of the global browser market, OpenAI appears undeterred. Its browser will be built on Chromium, the open-source project that also underpins Chrome, Microsoft Edge, and other major browsers. Notably, OpenAI last year hired two former Google executives who had previously worked on Chrome.

The decision to build a standalone browser—rather than rely on third-party plug-ins—was reportedly driven by OpenAI’s desire for greater control over both data collection and core functionality.

That control could prove vital as regulatory scrutiny of Google’s dominance in search and advertising intensifies. The United States Department of Justice is currently pushing for Chrome’s divestiture as part of its broader antitrust actions against Alphabet.

Other players are already exploring the AI browser space. Perplexity recently launched its own AI browser, Comet, while The Browser Company and Brave have introduced AI-enhanced browsing features.

As the AI race accelerates, OpenAI’s entry into the browser market could redefine how users navigate and engage with the web—potentially transforming search, advertising, and digital privacy in the process.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pentagon awards AI contracts to xAI and others after Grok controversy

The US Department of Defense has awarded contracts to four major AI firms, including Elon Musk’s xAI, as part of a strategy to boost military AI capabilities.

Each contract is valued at up to $200 million and involves developing advanced AI workflows for critical national security tasks.

Alongside xAI, Anthropic, Google, and OpenAI have also secured contracts. Pentagon officials said the deals aim to integrate commercial AI solutions into intelligence, business, and defence operations instead of relying solely on internal systems.

Chief Digital and AI Officer Doug Matty said the technologies will help maintain the United States’ strategic edge over rivals.

The decision comes as Musk’s AI company faces controversy after its Grok chatbot was reported to have published offensive content on social media. Critics, including Democratic lawmakers, have raised ethical concerns about awarding national security contracts to a company under public scrutiny.

xAI insists its Grok for Government platform will help speed up government services and scientific innovation.

Despite political tensions and Musk’s past financial support for Donald Trump’s campaign, the Pentagon has formalised its relationship with xAI and other AI leaders instead of excluding them due to reputational risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube tightens rules on AI-only videos

YouTube will begin curbing AI-generated content lacking human input to protect content quality and ad revenue. Since July 15, creators must disclose the use of AI and provide genuine creative value to qualify for monetisation.

The platform’s clampdown aims to prevent a flood of low-quality videos, known as ‘AI slop’, that risk overwhelming its algorithm and lowering ad returns. Analysts say Google’s new stance reflects the need to balance AI leadership with platform integrity.

YouTube will still allow AI-assisted content, but it insists creators must offer original contributions such as commentary, editing, or storytelling. Without this, AI-only videos will no longer earn advertising revenue.

The move also addresses rising concerns around copyright, ownership and algorithm overload, which could destabilise the platform’s delicate content ecosystem. Experts warn that unregulated AI use may harm creators who produce high-effort, original material.

Stakeholders say the changes will benefit creators focused on meaningful content while preserving advertiser trust and fair revenue sharing across millions of global partners. YouTube’s approach signals a shift towards responsible AI integration in media platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia to restart China AI chip sales after US talks

Nvidia has announced plans to resume sales of its H20 AI chip in China, following CEO Jensen Huang’s recent meetings with US President Donald Trump and with officials in Beijing.

The move comes after US export controls previously banned sales of the chip on national security grounds, costing Nvidia an estimated $15 billion in lost revenue.

The company confirmed it is filing for licences with the US government to restart deliveries of the H20 graphics processing unit, expecting approval shortly.

Nvidia also revealed a new RTX Pro GPU designed specifically for China, compliant with US export rules and positioned as a lower-cost alternative that avoids the risk of further restrictions.

Huang, attending a supply chain expo in Beijing, described China as essential to Nvidia’s growth, despite rising competition from local firms like Huawei.

Chinese companies remain highly dependent on Nvidia’s CUDA platform, while US lawmakers have raised concerns about Nvidia engaging with Chinese entities linked to military or intelligence services.

Nvidia’s return to the Chinese market comes as Washington and Beijing show signs of easing trade tensions, including relaxed rare earth export rules from China and restored chip design services from the US.

Analysts note, however, that Chinese firms are likely to keep diversifying suppliers instead of relying solely on US chips for supply chain security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU bets on quantum to regain global influence

European policymakers are turning to quantum technology as a strategic solution to the continent’s growing economic and security challenges.

With the US and China surging ahead in AI, Europe sees quantum innovation as a last-mover advantage it cannot afford to miss.

Quantum computers, sensors, and encryption are already transforming military, industrial and cybersecurity capabilities.

From stealth detection to next-generation batteries, Europe hopes quantum breakthroughs will bolster its defences and revitalise its energy, automotive and pharmaceutical sectors.

Although EU institutions have heavily invested in quantum programmes and Europe trains more engineers than anywhere else, funding gaps persist.

Private investment remains limited, pushing some of the continent’s most promising start-ups abroad in search of capital and scale.

The EU must pair its technical excellence with bold policy reforms to avoid falling behind. Strategic protections, high-risk R&D support and new alliances will be essential to turning scientific strength into global leadership.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!