San Francisco deploys AI assistant to 30,000 staff

San Francisco has equipped almost 30,000 city employees, from social workers and healthcare staff to administrators, with Microsoft 365 Copilot Chat. The large-scale rollout followed a six-month pilot in which workers saved up to five hours a week on routine tasks, particularly in 311 service lines.

Copilot Chat helps streamline bureaucratic work, such as drafting documents, translating across more than 40 languages, summarising lengthy reports, and analysing data. The goal is to free staff to focus more on serving residents directly.

A comprehensive five-week training scheme, supported by InnovateUS, ensures that employees learn to use AI securely and responsibly. This includes best practices for data protection, transparent disclosure of AI-generated content, and thorough fact-checking procedures.

City leadership emphasises that all AI tools run on a secure government cloud and adhere to robust guidelines. Employees must disclose when AI is used and remain accountable for its output. The city also plans future AI deployments in traffic management, permitting, and connecting homeless individuals with support services.

Women see AI as more harmful across life settings

Women are showing more scepticism than men towards AI, particularly regarding its ethics, fairness and transparency.

A national study from Georgetown University, Boston University and the University of Vermont found that women were more concerned about AI’s risks in decision-making. Concerns were especially prominent around AI tools used in the workplace, such as hiring platforms and performance review systems.

Bias may be introduced when such tools rely on historical data, which often underrepresents women and other marginalised groups. The study also found that gender influenced compliance with workplace rules surrounding AI use, especially in restrictive environments.

When AI use was banned, women were more likely than men to follow the rules. Usage jumped when tools were explicitly permitted: in those cases, over 80% of both women and men reported using them.

Women were generally more wary of AI’s impact across all areas of life — not just in the professional sphere. From personal settings to public life, survey respondents who identified as women consistently viewed AI as more harmful than beneficial.

The study, conducted via Qualtrics in August 2023, surveyed a representative US sample spanning a range of educational and professional backgrounds. Participants were 45 years old on average, and just over half identified as women.

The research comes amid wider concerns in the AI field about ethics and accountability, often led by women researchers. High-profile cases include Google’s dismissal of Timnit Gebru and later Margaret Mitchell, both of whom raised ethical concerns about large language models.

The study’s authors concluded that building public trust in AI may require clearer policies and greater transparency in how systems are designed. They also highlighted the importance of increasing diversity among those developing AI tools to ensure more inclusive outcomes.

Online health search grows, but scepticism about AI stays high

Trust in traditional healthcare providers remains high, but Americans are increasingly turning to AI for health information, according to new data from the Annenberg Public Policy Center (APPC).

While 90% of adults trust their personal health provider, nearly 8 in 10 say they are likely to look online for answers to health-related questions. The rise of the internet gave the public direct access to information from government health authorities such as the CDC, FDA, and NIH.

Although trust in these institutions dipped during the Covid-19 pandemic, confidence remains relatively high at 66%–68%. Generative AI tools are now becoming a third key source of health information.

AI-generated summaries — such as Google’s ‘AI Overviews’ or Bing’s ‘Copilot Answers’ — appear prominently in search results.

Despite disclaimers that responses may contain mistakes, nearly two-thirds (63%) of online health searchers find these responses somewhat or very reliable. Around 31% report often or always finding the answers they need in the summaries.

Public attitudes towards AI in clinical settings remain more cautious. Nearly half (49%) of US adults say they are uncomfortable with providers using AI tools rather than relying on their own experience. About 36% express some level of comfort, while 41% believe providers are already using AI at least occasionally.

AI use is growing, but most online health seekers continue exploring beyond the initial summary. Two-thirds follow links to websites such as Mayo Clinic, WebMD, or non-profit organisations like the American Heart Association. Federal resources such as the CDC and NIH are also consulted.

Younger users are more likely to recognise and interact with AI summaries. Among those aged 18 to 49, between 69% and 75% have seen AI-generated content in search results, compared to just 49% of users over 65.

Despite high smartphone ownership (93%), only 59% of users track their health with apps. Among these, 52% are likely to share data with a provider, although 36% say they would not. Most respondents (80%) welcome prescription alerts from pharmacies.

The survey, fielded in April 2025 among 1,653 US adults, highlights growing reliance on AI for health information but also reveals concerns about its use in professional medical decision-making. APPC experts urge greater transparency and caution, especially for vulnerable users who may not understand the limitations of AI-generated content.

Director Kathleen Hall Jamieson warns that confusing AI-generated summaries with professional guidance could cause harm. Analyst Laura A. Gibson adds that outdated information may persist in AI platforms, reinforcing the need for user scepticism.

As the public turns to digital health tools, researchers recommend clearer policies, increased transparency, and greater diversity in AI development to ensure safe and inclusive outcomes.

Google expands NotebookLM with curated content and mobile access

While Gemini often dominates attention in Google’s AI portfolio, other innovative tools deserve the spotlight. One standout is NotebookLM, a virtual research assistant that helps users organise and interact with complex information across various subjects.

NotebookLM creates structured notebooks from curated materials, allowing meaningful engagement with the content. It supports dynamic features, including summaries and transformation options like Audio Overview, making research tasks more intuitive and efficient.

According to Google, featured notebooks are built using information from respected authors, academic institutions, and trusted nonprofits. Current topics include Shakespeare, Yellowstone National Park and more, offering a wide spectrum of well-sourced material.

Featured notebooks function just like regular ones, with added editorial quality. Users can navigate, explore, and repurpose content in ways that support individual learning and project needs. Google has confirmed the collection will grow over time.

NotebookLM remains in early development, yet the tool already shows potential for transforming everyday research tasks. Google also plans tighter integration with its other productivity tools, including Docs and Slides.

The tool significantly reduces the effort traditionally required for academic or creative research. Structured data presentation, combined with interactive features, makes information easier to consume and act upon.

NotebookLM was initially released on desktop but is now also available as a mobile app. Users can download it via the Google Play Store to create notebooks, add content, and stay productive from anywhere.

Europe to launch Eurosky to regain digital control

Europe is taking steps to assert its digital independence by launching the Eurosky initiative, a government-backed project to reduce reliance on US tech giants.

Eurosky seeks to build European infrastructure for social media platforms and promote digital sovereignty. The goal is to ensure that the continent’s digital space is governed by European laws, values, and rules, rather than being subject to the influence of foreign companies or governments.

To support this goal, Eurosky plans to implement a decentralised content moderation system, modelled after the approach used by the Bluesky network.

Moderation, essential for removing harmful or illegal content like child exploitation or stolen data, remains a significant obstacle for new platforms. Eurosky offers a non-profit moderation service to help emerging social media providers handle this task, thus lowering the barriers to entering the market.

The project enjoys strong public and political backing. Polls show that majorities in France, Germany, and Spain prefer Europe-based platforms, with only 5% favouring US providers.

Eurosky also has support from four European governments, though their identities remain undisclosed. This momentum aligns with a broader shift in user behaviour, as Europeans increasingly turn to local tech services amid privacy and sovereignty concerns.

GPAI Code of Practice creates legal uncertainty for non-signatories

Lawyers at William Fry say the EU’s final Code of Practice for general-purpose AI (GPAI) models leaves key questions unanswered. GPAI systems include models such as OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama, trained on vast datasets for broad applications.

The Code of Practice, released last week, addresses transparency, safety, security, and copyright, and is described by the European Commission as a voluntary tool. It was prepared by independent experts to help GPAI developers comply with upcoming legal obligations under the EU AI Act.

In a statement on the firm’s website, William Fry lawyers Barry Scannell and Leo Moore question how voluntary the code truly is. They note that signatories not in full compliance can still be seen as acting in good faith and will be supported rather than penalised.

A protected grace period runs until 2 August 2026, after which the AI Act could allow fines for non-compliance. The lawyers warn that this creates a two-tier system, shielding signatories while exposing non-signatories to immediate legal risk under the AI Act.

Developers who do not sign the code may face greater regulatory scrutiny, even though it is described as non-binding. William Fry also points out that the EU has not yet published the detailed implementation guidelines and templates.

Additional guidance to clarify key GPAI concepts is expected later this month, but the current lack of detail creates uncertainty. The code’s copyright section, the lawyers argue, shows how the document has evolved into a quasi-regulatory framework.

An earlier draft required only reasonable efforts to avoid copyright-infringing sources. The final version demands the active exclusion of such sites. A proposed measure requiring developers to verify the source of copyrighted data acquired from third parties has been removed from the final draft.

The lawyers argue that this creates a practical blind spot, allowing unlawful content to slip into training data undetected. Rights holders still retain the ability to pursue action if they believe their content was misused, even if providers are signatories.

Meanwhile, the transparency chapter now outlines specific standards, rather than general principles. The safety and security section also sets enforceable expectations, increasing the operational burden on model developers.

William Fry warns that gaps between the code’s obligations and the missing technical documentation could have costly consequences. They conclude that, without the final training data template or implementation details, both developers and rights holders face compliance risks.

How AI-generated video is reshaping the film industry

AI-generated video has evolved at breakneck speed, moving from distorted and unconvincing clips to hyper-realistic creations that rival traditional filmmaking. What was once a blurry, awkward depiction of Will Smith eating spaghetti in 2023 is now flawlessly rendered on platforms like Google’s Veo 3.

In just months, tools such as Luma Labs’ Dream Machine, OpenAI’s Sora, and Runway AI’s Gen-4 have redefined what’s possible, drawing the attention of Hollywood studios, advertisers, and artists eager to test the limits of this new creative frontier.

Major industry players are already experimenting with AI for previsualisation, visual effects, and even entire animated films. Lionsgate and AMC Networks have partnered with Runway AI, with executives exploring AI-generated family-friendly versions of blockbuster franchises like John Wick and The Hunger Games.

The technology drastically cuts costs for complex scenes, making it possible to create elaborate previews—like a snowstorm filled with thousands of soldiers—for a fraction of the traditional price. However, while some see AI as a tool to expand creative possibilities, resistance remains strong.

Critics argue that AI threatens traditional artistic processes, raises ethical concerns over energy use and data training, and risks undermining human creativity. The debate mirrors past technological shifts in entertainment—inevitable yet disruptive.

As Runway and other pioneers push toward immersive experiences in augmented and virtual reality, the future of filmmaking may no longer be defined solely by Hollywood, but by anyone with access to these powerful tools.

Google strengthens position as Perplexity and OpenAI launch browsers

OpenAI is reportedly preparing to launch an AI-powered web browser in the coming weeks, aiming to compete with Alphabet’s dominant Chrome browser, according to sources cited by Reuters.

The forthcoming browser seeks to leverage AI to reshape how users interact with the internet, while potentially granting OpenAI deeper access to valuable user data—a key driver behind Google’s advertising empire.

If adopted by ChatGPT’s 500 million weekly active users, the browser could pose a significant challenge to Chrome, which currently underpins much of Alphabet’s ad targeting and search traffic infrastructure.

The browser is expected to feature a native chat interface, reducing the need for users to click through traditional websites. The features align with OpenAI’s broader strategy to embed its services more seamlessly into users’ daily routines.

While the company declined to comment on the development, anonymous sources noted that the browser is likely to support AI agent capabilities, such as booking reservations or completing web forms on behalf of users.

The move comes as OpenAI faces intensifying competition from Google, Anthropic, and Perplexity.

In May, OpenAI acquired the AI hardware start-up io for $6.5 billion, in a deal linked to Apple design veteran Jony Ive. The acquisition signals a strategic push beyond software and into integrated consumer tools.

Despite Chrome’s grip on over two-thirds of the global browser market, OpenAI appears undeterred. Its browser will be built on Chromium—the open-source framework powering Chrome, Microsoft Edge, and other major browsers. Notably, OpenAI hired two former Google executives last year who had previously worked on Chrome.

The decision to build a standalone browser—rather than rely on third-party plug-ins—was reportedly driven by OpenAI’s desire for greater control over both data collection and core functionality.

That control could prove vital as regulatory scrutiny of Google’s dominance in search and advertising intensifies. The United States Department of Justice is currently pushing for Chrome’s divestiture as part of its broader antitrust actions against Alphabet.

Other players are already exploring the AI browser space. Perplexity recently launched its own AI browser, Comet, while The Browser Company and Brave have introduced AI-enhanced browsing features.

As the AI race accelerates, OpenAI’s entry into the browser market could redefine how users navigate and engage with the web—potentially transforming search, advertising, and digital privacy in the process.

US House passes NTIA cyber leadership bill after Salt Typhoon hacks

The US House of Representatives has passed legislation that would officially designate the National Telecommunications and Information Administration (NTIA) as the federal lead for cybersecurity across communications networks.

The move follows last year’s Salt Typhoon hacking spree, described by some as the worst telecom breach in US history.

The National Telecommunications and Information Administration Organization Act, introduced by Representatives Jay Obernolte and Jennifer McClellan, cleared the House on Monday and now awaits Senate approval.

The bill would rebrand an NTIA office to focus on both policy and cybersecurity, while codifying the agency’s role in coordinating cybersecurity responses alongside other federal departments.

Lawmakers argue that recent telecom attacks exposed major gaps in coordination between government and industry.

The bill promotes public-private partnerships and stronger collaboration between agencies, software developers, telecom firms, and security researchers to improve resilience and speed up innovation across communications technologies.

With Americans’ daily lives increasingly dependent on digital services, supporters say the bill provides a crucial framework for protecting sensitive information from cybercriminals and foreign hacking groups instead of relying on fragmented and inconsistent measures.

Pentagon awards AI contracts to xAI and others after Grok controversy

The US Department of Defence has awarded contracts to four major AI firms, including Elon Musk’s xAI, as part of a strategy to boost military AI capabilities.

Each contract is valued at up to $200 million and involves developing advanced AI workflows for critical national security tasks.

Alongside xAI, Anthropic, Google, and OpenAI have also secured contracts. Pentagon officials said the deals aim to integrate commercial AI solutions into intelligence, business, and defence operations instead of relying solely on internal systems.

Chief Digital and AI Officer Doug Matty says these technologies will help maintain the US’s strategic edge over rivals.

The decision comes as Musk’s AI company faces controversy after its Grok chatbot was reported to have published offensive content on social media. Critics, including Democratic lawmakers, have raised ethical concerns about awarding national security contracts to a company under public scrutiny.

xAI insists its Grok for Government platform will help speed up government services and scientific innovation.

Despite political tensions and Musk’s past financial support for Donald Trump’s campaign, the Pentagon has formalised its relationship with xAI and other AI leaders instead of excluding them due to reputational risks.
