San Francisco deploys AI assistant to 30,000 staff

San Francisco has equipped almost 30,000 city employees, from social workers and healthcare staff to administrators, with Microsoft 365 Copilot Chat. The large-scale rollout followed a six-month pilot in which workers saved up to five hours a week on routine tasks, particularly in 311 service lines.

Copilot Chat helps streamline bureaucratic functions such as drafting documents, translating content across more than 40 languages, summarising lengthy reports, and analysing data. The goal is to free staff to focus more on serving residents directly.

A comprehensive five-week training scheme, supported by InnovateUS, ensures that employees learn to use AI securely and responsibly. This includes best practices for data protection, transparent disclosure of AI-generated content, and thorough fact-checking procedures.

City leadership emphasises that all AI tools run on a secure government cloud and adhere to robust guidelines. Employees must disclose when AI is used and remain accountable for its output. The city also plans future AI deployments in traffic management, permitting, and connecting homeless individuals with support services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta opens audio lab to improve AI smart glasses

Meta has unveiled a £12 million audio research lab in Cambridge’s Ox‑Cam corridor, aimed at enhancing immersive sound for its Ray‑Ban Meta and upcoming Oakley Meta glasses. The facility includes advanced acoustic testing environments, motion‑tracked living spaces, and one of the world’s largest configurable reverberation chambers, enabling engineers to fine‑tune spatial audio through real‑world scenarios.

The lab is developing adaptive audio that filters noise, focuses on speech, and responds to head movement, improving clarity in settings such as busy streets and public transport. Meta plans to integrate these features into its next generation of AR eyewear.

Officials say the lab represents a long-term investment in UK engineering talent and bolsters the Oxford-to-Cambridge tech corridor. Meta's global affairs lead and the Chancellor emphasised the significance of the investment, supported by a national £22 billion R&D strategy. The facility is Meta's largest overseas engineering base and reinforces the company's ambition to lead the global AI glasses market.

First single-photon universal quantum system due 2026

Dutch startup QuiX Quantum has raised €15 million in Series A funding to deliver the world's first single-photon-based universal photonic quantum computer by 2026. The round was backed by Invest-NL, the European Innovation Council, PhotonVentures, Oost NL and Forward One.

Since its 2019 founding, QuiX Quantum has set benchmarks with 8‑qubit and 64‑qubit photonic processors, including a notable delivery to the German Aerospace Center in 2022. Its next objective is a universal gate‑set system with fast feed‑forward electronics and single‑photon sources, essential components for fault‑tolerant, large‑scale quantum computing.

The investment will also bolster Europe’s quantum photonics supply chain. QuiX Quantum plans to deploy its systems in practical fields such as chemical simulation, pharmaceutical discovery, fraud detection and precision manufacturing, marking a key step toward commercialising quantum technology.

Defence AI Centre at heart of Korean strategy

South Korea has unveiled a strategy to share extensive military data with defence firms to accelerate AI-powered weapon systems, inspired by US military cloud initiatives. Plans include a national public–private fund to finance innovation and bolster the country’s defence tech prowess.

A specialised working group of around 30 experts, including participants from the Defence Acquisition Program Administration, is drafting standards for safety and reliability in AI weapon systems. Their work aims to lay the foundations for the responsible integration of AI into defence hardware.

Officials highlight the need to merge classified military databases into a consolidated defence cloud, moving away from siloed systems. This model follows the tiered cloud framework adopted by the US, enabling more agile collaboration between the military and industry.

South Korea is also fast-tracking development across core defence domains, such as autonomous drones, command-and-control systems, AI-enabled surveillance, and cyber operations. These efforts are underpinned by the recently established Defence AI Centre, positioning the country at the forefront of Asia’s military AI race.

Hungary enforces prison terms for unauthorised crypto trading

Hungary has introduced strict penalties for individuals and companies involved in unauthorised cryptocurrency trading or services. Under the updated Criminal Code, using unauthorised crypto exchanges can lead to up to two years in prison, with longer terms for larger trades.

Crypto service providers operating without authorisation face even harsher penalties. Sentences can reach up to eight years for transactions exceeding 500 million forints (around $1.46 million).

The updated law defines new offences such as ‘abuse of crypto-assets’, aiming to impose stricter control over the sector.

The implementation has caused confusion among crypto companies, with Hungary’s Supervisory Authority for Regulatory Affairs (SZTFH) yet to publish compliance guidelines. Businesses now face a 60-day regulatory vacuum with no clear direction.

UK fintech firm Revolut responded by briefly halting crypto services in Hungary, citing the new legislation. It has since reinstated crypto withdrawals, while its EU entity works towards securing a regional crypto licence.

Google pushes urgent Chrome update before 23 July

Google has confirmed that attackers have exploited a high-risk vulnerability in its Chrome browser. Users have been advised to update their browsers before 23 July, with cybersecurity agencies stressing the urgency.

The flaw, CVE-2025-6554, involves a type confusion issue in Chrome’s V8 JavaScript engine. The US Cybersecurity and Infrastructure Security Agency (CISA) has made the update mandatory for federal departments and recommends all users take immediate action.
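Type confusion, the bug class named in the advisory, occurs when code interprets the raw bytes of a value of one type as if they belonged to another. A minimal Python sketch of the bit-level idea (purely illustrative, unrelated to the actual V8 internals):

```python
import struct

# Illustrative only: the same 8 raw bytes mean entirely different
# things depending on which type the code *believes* it is reading.
raw = struct.pack("<d", 1.5)            # bytes of the double 1.5
as_float = struct.unpack("<d", raw)[0]  # correct interpretation
as_int = struct.unpack("<Q", raw)[0]    # "confused" interpretation

print(as_float)     # 1.5
print(hex(as_int))  # 0x3ff8000000000000
```

In a memory-unsafe JIT engine, a confusion of this kind can let attacker-controlled data be treated as a pointer or object header, which is why such bugs are routinely exploitable.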

Although Chrome updates are applied automatically, users must restart the browser to activate the security patches. Many fail to do so and remain exposed despite having downloaded the latest version.

CISA highlighted that timely updates are essential for reducing vulnerability to attacks, especially for organisations managing critical infrastructure. Enterprises are at risk if patching delays allow attackers to exploit known weaknesses.

GPAI Code of Practice creates legal uncertainty for non-signatories

Lawyers at William Fry say the EU’s final Code of Practice for general-purpose AI (GPAI) models leaves key questions unanswered. GPAI systems include models such as OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama, trained on vast datasets for broad applications.

The Code of Practice, released last week, addresses transparency, safety, security, and copyright, and is described by the European Commission as a voluntary tool. It was prepared by independent experts to help GPAI developers comply with upcoming legal obligations under the EU AI Act.

In a statement on the firm’s website, William Fry lawyers Barry Scannell and Leo Moore question how voluntary the code truly is. They note that signatories not in full compliance can still be seen as acting in good faith and will be supported rather than penalised.

A protected grace period runs until 2 August 2026, after which the AI Act could allow fines for non-compliance. The lawyers warn that this creates a two-tier system, shielding signatories while exposing non-signatories to immediate legal risk under the AI Act.

Developers who do not sign the code may face higher regulatory scrutiny, despite it being described as non-binding. William Fry also points out that detailed implementation guidelines and templates have not yet been published by the EU.

Additional guidance to clarify key GPAI concepts is expected later this month, but the current lack of detail creates uncertainty. The code’s copyright section, the lawyers argue, shows how the document has evolved into a quasi-regulatory framework.

An earlier draft required only reasonable efforts to avoid copyright-infringing sources. The final version demands the active exclusion of such sites. A proposed measure requiring developers to verify the source of copyrighted data acquired from third parties has been removed from the final draft.

The lawyers argue that this creates a practical blind spot, allowing unlawful content to slip into training data undetected. Rights holders still retain the ability to pursue action if they believe their content was misused, even if providers are signatories.

Meanwhile, the transparency chapter now outlines specific standards, rather than general principles. The safety and security section also sets enforceable expectations, increasing the operational burden on model developers.

William Fry warns that gaps between the code’s obligations and the missing technical documentation could have costly consequences. They conclude that, without the final training data template or implementation details, both developers and rights holders face compliance risks.

EXA to boost European connectivity with new fibre route and subsea cable

EXA Infrastructure has launched a strategic 1,200 km high-capacity fibre route connecting London, Frankfurt, Amsterdam, and Brussels (FLAP cities), featuring the first new subsea cable in the North Sea corridor in 25 years.

The new deployment includes 1,085 km of low-loss terrestrial fibre and a 115 km subsea segment using ultra-low-loss G.654C cable, running between Margate (UK) and Ostend (Belgium).

The project also introduces two new landing stations, EXA’s 21st and 22nd globally, enhancing its infrastructure across the UK, Belgium, and the Netherlands. These efforts complement EXA’s prior investments in the Channel Tunnel route, including upgrades to in-line amplifier (ILA) facilities and modern, high-fibre-count cables.

The new route is part of EXA’s broader push to improve Europe’s digital infrastructure with ultra-low latency, high-bandwidth, and scalable fibre paths between key hubs.

Over 65,000 km of its network is now 400G-enabled, supporting future scalability demands. EXA’s network spans 155,000 km across 37 countries, including six transatlantic cables. Among them is EXA Express, which offers the lowest latency link between Europe and North America.

The network serves a range of mission-critical functions, including hyperscale infrastructure for global enterprises, government networks, and specialised solutions for latency-sensitive industries like finance, gaming, and broadcasting.

Military AI and the void of accountability

In her blog post ‘Military AI: Operational dangers and the regulatory void,’ Julia Williams warns that AI is reshaping the battlefield, shifting from human-controlled systems to highly autonomous technologies that make life-and-death decisions. From the United States’ Project Maven to Israel’s AI-powered targeting in Gaza and Ukraine’s semi-autonomous drones, military AI is no longer a futuristic concept but a present reality.

While designed to improve precision and reduce risks, these systems carry hidden dangers—opaque ‘black box’ decisions, biases rooted in flawed data, and unpredictable behaviour in high-pressure situations. Operators either distrust AI or over-rely on it, sometimes without understanding how conclusions are reached, creating a new layer of risk in modern warfare.

Bias remains a critical challenge. AI can inherit societal prejudices from the data it is trained on, misinterpret patterns through algorithmic flaws, or encourage automation bias, where humans trust AI outputs even when they shouldn’t.

These flaws can have devastating consequences in military contexts, leading to wrongful targeting or escalation. Despite attempts to ensure ‘meaningful human control’ over autonomous weapons, the concept lacks clarity, allowing states and manufacturers to apply oversight unevenly. Responsibility for mistakes remains murky—should it lie with the operator, the developer, or the machine itself?

That uncertainty feeds into a growing global security crisis. Regulation lags far behind technological progress, with international forums disagreeing on how to govern military AI.

Meanwhile, an AI arms race accelerates between the US and China, driven by private-sector innovation and strategic rivalry. Export controls on semiconductors and key materials only deepen mistrust, while less technologically advanced nations fear both being left behind and becoming targets of AI warfare. The risk extends beyond states, as rogue actors and non-state groups could gain access to advanced systems, making conflicts harder to contain.

As Williams highlights, the growing use of military AI threatens to speed up the tempo of conflict and blur accountability. Without strong governance and global cooperation, it could escalate wars faster than humans can de-escalate them, shifting the battlefield from soldiers to civilian infrastructure and leaving humanity vulnerable to errors we may not survive.

Google strengthens position as Perplexity and OpenAI launch browsers

OpenAI is reportedly preparing to launch an AI-powered web browser in the coming weeks, aiming to compete with Alphabet’s dominant Chrome browser, according to sources cited by Reuters.

The forthcoming browser seeks to leverage AI to reshape how users interact with the internet, while potentially granting OpenAI deeper access to valuable user data—a key driver behind Google’s advertising empire.

If adopted by ChatGPT’s 500 million weekly active users, the browser could pose a significant challenge to Chrome, which currently underpins much of Alphabet’s ad targeting and search traffic infrastructure.

The browser is expected to feature a native chat interface, reducing the need for users to click through traditional websites. These features align with OpenAI's broader strategy to embed its services more seamlessly into users' daily routines.

While the company declined to comment on the development, anonymous sources noted that the browser is likely to support AI agent capabilities, such as booking reservations or completing web forms on behalf of users.

The move comes as OpenAI faces intensifying competition from Google, Anthropic, and Perplexity.

In May, OpenAI acquired the AI hardware start-up io for $6.5 billion, in a deal linked to Apple design veteran Jony Ive. The acquisition signals a strategic push beyond software and into integrated consumer tools.

Despite Chrome’s grip on over two-thirds of the global browser market, OpenAI appears undeterred. Its browser will be built on Chromium—the open-source framework powering Chrome, Microsoft Edge, and other major browsers. Notably, OpenAI hired two former Google executives last year who had previously worked on Chrome.

The decision to build a standalone browser—rather than rely on third-party plug-ins—was reportedly driven by OpenAI’s desire for greater control over both data collection and core functionality.

That control could prove vital as regulatory scrutiny of Google's dominance in search and advertising intensifies. The United States Department of Justice is currently pushing for Chrome's divestiture as part of its broader antitrust actions against Alphabet.

Other players are already exploring the AI browser space. Perplexity recently launched its own AI browser, Comet, while The Browser Company and Brave have introduced AI-enhanced browsing features.

As the AI race accelerates, OpenAI’s entry into the browser market could redefine how users navigate and engage with the web—potentially transforming search, advertising, and digital privacy in the process.
