Europe’s quantum ambitions meet US private power and China’s state drive

Quantum computing could fundamentally reshape technology by using quantum bits (qubits) instead of classical bits. Qubits enable calculations beyond the reach of classical computers, with the potential to transform sectors from pharmaceuticals to defence.

Europe is investing billions in quantum technology, emphasising technological sovereignty. Yet it competes fiercely with the United States, which enjoys substantial private investment, and China, powered by significant state-backed funding.

The UK began its quantum initiatives early, launching the National Quantum Programme in 2014. It recently pledged a further £2.5 billion, supporting start-ups such as Orca Computing and Universal Quantum and placing it alongside nations like Canada, Israel, and Japan.

Europe accounted for eight of the nineteen quantum start-ups established globally in 2024, including IQM Quantum Computers and Pasqal. Despite its scientific strengths, Europe captured only 5% of global quantum investment, versus 50% for the US.

The European Commission aims to strengthen quantum capabilities by funding six chip factories and a continent-wide Quantum Skills Academy. However, attracting sufficient private investment remains a significant challenge.

The US quantum industry thrives, driven by giants such as IBM, Google, Microsoft, IonQ, Rigetti, and D-Wave Quantum. Recent breakthroughs include Microsoft’s topological qubit and Google’s Willow quantum chip.

D-Wave Quantum has demonstrated real-world quantum advantages, solving complex optimisation problems in minutes. Its technology is now used commercially in logistics, traffic management, and supply chains.

China, meanwhile, leads in state-driven quantum funding, investing $15 billion directly and managing a $138 billion tech venture fund. By contrast, US federal investment totals about $6 billion, underscoring China’s aggressive approach.

Global investment in quantum start-ups reached $1.25 billion in Q1 2025 alone, reflecting a shift towards practical applications. By 2040, the quantum market is projected to reach $173 billion, influencing global economics and geopolitics.

Quantum computing raises geopolitical concerns, prompting democratic nations to coordinate through bodies like the OECD and G7. Interoperability, trust, and secure infrastructure have become essential strategic considerations.

Europe’s quantum ambitions require sustained investment, standard-setting leadership, and robust supply chains. Its long-term technological independence hinges on moving swiftly beyond initial funding towards genuine strategic autonomy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Co-op CEO apologises after cyberattack hits 6.5 million members

Co-op CEO Shirine Khoury-Haq has confirmed that all 6.5 million members had their data stolen during a cyberattack in April.

‘I’m devastated that information was taken,’ Khoury-Haq told BBC Breakfast. ‘It hurt my members; they took their data, and it hurt our customers, which I take personally.’

The stolen data included names, addresses, and contact details, but no financial or transaction information. Khoury-Haq said the incident felt ‘personal’ due to its impact on Co-op staff, adding that IT teams ‘fought off these criminals’ under immense pressure.

Although the hackers were removed from Co-op’s systems, the stolen information could not be recovered. The company monitored the breach and reported it to the authorities.

Co-op, which operates a membership profit-sharing model, is still working to restore its back-end systems. The financial impact has not been disclosed.

In response, Co-op is partnering with The Hacking Games — a cybersecurity recruitment initiative — to guide young talent towards legal tech careers. A pilot will launch in Co-op Academies Trust schools.

The breach was part of a wider wave of cyberattacks on UK retailers, including Marks & Spencer and Harrods. Four people aged 17 to 20 have been arrested in connection with the incidents.

In a related case, Australian airline Qantas also confirmed a recent breach involving its frequent flyer programme. As with Co-op, financial data was not affected, but personal contact information was accessed.

Experts warn of increasingly sophisticated attacks on public and private institutions, calling for stronger digital defences and proactive cybersecurity strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Air Serbia suffers deep network compromise in July cyberattack

Air Serbia delayed issuing June payslips after a cyberattack disrupted internal systems, according to internal memos obtained by The Register. A 10 July note told staff: ‘Given the ongoing cyberattacks, for security reasons, we will postpone the distribution of June 2025 payslips.’

The IT department is reportedly working to restore operations, and payslips will be emailed once systems are secure again. Although salaries were paid, staff could not access their payslip PDFs due to the disruption.

HR warned employees not to open suspicious emails, particularly those appearing to contain payslips or that seemed self-addressed. ‘We kindly ask that you act responsibly given the current situation,’ said one memo.

Air Serbia first informed staff about the cyberattack on 4 July, with IT teams warning of possible disruptions to operations. Managers were instructed to activate business continuity plans and adapt workflows accordingly.

By 7 July, all service accounts had been shut down, and staff were subjected to company-wide password resets. Security-scanning software was installed on endpoints, and internet access was restricted to selected airserbia.com pages.

A new VPN client was deployed due to security vulnerabilities, and data centres were shifted to a demilitarised zone. On 11 July, staff were told to leave their PCs locked but running over the weekend for further IT intervention.

An insider told The Register that the attack resulted in a deep compromise of Air Serbia’s Active Directory environment. The source claims the attackers may have gained access in early July, although exact dates remain unclear due to missing logs.

Staff reportedly fear that the breach could have involved personal data, and that the airline may not disclose the incident publicly. According to the insider, attackers had been probing Air Serbia’s exposed endpoints since early 2024.

The airline also faced several DDoS attacks earlier this year, although the latest intrusion appears far more severe. Malware, possibly an infostealer, is suspected in the breach, but no ransom demands had been made as of 15 July.

Infostealers are often used in precursor attacks before ransomware is deployed, security experts warn. Neither Air Serbia nor the government of Serbia responded to media queries by the time of publication.

Air Serbia had a record-breaking year in 2024, carrying 4.4 million passengers — a 6% increase over the previous year. Cybersecurity experts recently warned of broader attacks on the aviation industry, with groups such as Scattered Spider under scrutiny.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Women see AI as more harmful across life settings

Women are showing more scepticism than men when it comes to AI, particularly regarding its ethics, fairness, and transparency.

A national study from Georgetown University, Boston University and the University of Vermont found that women were more concerned about AI’s risks in decision-making. Concerns were especially prominent around AI tools used in the workplace, such as hiring platforms and performance review systems.

Bias may be introduced when such tools rely on historical data, which often underrepresents women and other marginalised groups. The study also found that gender influenced compliance with workplace rules surrounding AI use, especially in restrictive environments.

When AI use was banned, women were more likely than men to follow the rules. Usage jumped when tools were explicitly permitted: over 80% of both women and men reported using them.

Women were generally more wary of AI’s impact across all areas of life — not just in the professional sphere. From personal settings to public life, survey respondents who identified as women consistently viewed AI as more harmful than beneficial.

The study, conducted via Qualtrics in August 2023, surveyed a representative US sample spanning different educational and professional backgrounds. Participants were 45 years old on average, and just over half identified as women.

The research comes amid wider concerns in the AI field about ethics and accountability, often led by women researchers. High-profile cases include Google’s dismissal of Timnit Gebru and later Margaret Mitchell, both of whom raised ethical concerns about large language models.

The study’s authors concluded that building public trust in AI may require clearer policies and greater transparency in how systems are designed. They also highlighted the importance of increasing diversity among those developing AI tools to ensure more inclusive outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online health search grows, but scepticism about AI stays high

Trust in traditional healthcare providers remains high, but Americans are increasingly turning to AI for health information, according to new data from the Annenberg Public Policy Centre (APPC).

While 90% of adults trust their personal health provider, nearly 8 in 10 say they are likely to look online for answers to health-related questions. The rise of the internet gave the public direct access to information from government health authorities such as the CDC, FDA, and NIH.

Although trust in these institutions dipped during the Covid-19 pandemic, confidence remains relatively high at 66%–68%. Generative AI tools are now becoming a third key source of health information.

AI-generated summaries — such as Google’s ‘AI Overviews’ or Bing’s ‘Copilot Answers’ — appear prominently in search results.

Despite disclaimers that responses may contain mistakes, nearly two-thirds (63%) of online health searchers find these responses somewhat or very reliable. Around 31% report often or always finding the answers they need in the summaries.

Public attitudes towards AI in clinical settings remain more cautious. Nearly half (49%) of US adults say they are not comfortable with providers using AI tools instead of their own experience. About 36% express some level of comfort, while 41% believe providers are already using AI at least occasionally.

AI use is growing, but most online health seekers continue exploring beyond the initial summary. Two-thirds follow links to websites such as Mayo Clinic, WebMD, or non-profit organisations like the American Heart Association. Federal resources such as the CDC and NIH are also consulted.

Younger users are more likely to recognise and interact with AI summaries. Among those aged 18 to 49, between 69% and 75% have seen AI-generated content in search results, compared to just 49% of users over 65.

Despite high smartphone ownership (93%), only 59% of users track their health with apps. Among these, 52% are likely to share data with a provider, although 36% say they would not. Most respondents (80%) welcome prescription alerts from pharmacies.

The survey, fielded in April 2025 among 1,653 US adults, highlights growing reliance on AI for health information but also reveals concerns about its use in professional medical decision-making. APPC experts urge greater transparency and caution, especially for vulnerable users who may not understand the limitations of AI-generated content.

Director Kathleen Hall Jamieson warns that confusing AI-generated summaries with professional guidance could cause harm. Analyst Laura A. Gibson adds that outdated information may persist in AI platforms, reinforcing the need for user scepticism.

As the public turns to digital health tools, researchers recommend clearer policies, increased transparency, and greater diversity in AI development to ensure safe and inclusive outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google expands NotebookLM with curated content and mobile access

While Gemini often dominates attention in Google’s AI portfolio, other innovative tools deserve the spotlight. One standout is NotebookLM, a virtual research assistant that helps users organise and interact with complex information across various subjects.

NotebookLM creates structured notebooks from curated materials, allowing meaningful engagement with the content. It supports dynamic features, including summaries and transformation options like Audio Overview, making research tasks more intuitive and efficient.

According to Google, featured notebooks are built using information from respected authors, academic institutions, and trusted nonprofits. Current topics include Shakespeare, Yellowstone National Park and more, offering a wide spectrum of well-sourced material.

Featured notebooks function just like regular ones, with added editorial quality. Users can navigate, explore, and repurpose content in ways that support individual learning and project needs. Google has confirmed the collection will grow over time.

NotebookLM remains in early development, yet the tool already shows potential for transforming everyday research tasks. Google also plans tighter integration with its other productivity tools, including Docs and Slides.

The tool significantly reduces the effort traditionally required for academic or creative research. Structured data presentation, combined with interactive features, makes information easier to consume and act upon.

NotebookLM was initially released on desktop but is now also available as a mobile app. Users can download it via the Google Play Store to create notebooks, add content, and stay productive from anywhere.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPAI Code of Practice creates legal uncertainty for non-signatories

Lawyers at William Fry say the EU’s final Code of Practice for general-purpose AI (GPAI) models leaves key questions unanswered. GPAI systems include models such as OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama, trained on vast datasets for broad applications.

The Code of Practice, released last week, addresses transparency, safety, security, and copyright, and is described by the European Commission as a voluntary tool. It was prepared by independent experts to help GPAI developers comply with upcoming legal obligations under the EU AI Act.

In a statement on the firm’s website, William Fry lawyers Barry Scannell and Leo Moore question how voluntary the code truly is. They note that signatories not in full compliance can still be seen as acting in good faith and will be supported rather than penalised.

A protected grace period runs until 2 August 2026, after which the AI Act could allow fines for non-compliance. The lawyers warn that this creates a two-tier system, shielding signatories while exposing non-signatories to immediate legal risk under the AI Act.

Developers who do not sign the code may face higher regulatory scrutiny, despite it being described as non-binding. William Fry also points out that detailed implementation guidelines and templates have not yet been published by the EU.

Additional guidance to clarify key GPAI concepts is expected later this month, but the current lack of detail creates uncertainty. The code’s copyright section, the lawyers argue, shows how the document has evolved into a quasi-regulatory framework.

An earlier draft required only reasonable efforts to avoid copyright-infringing sources. The final version demands the active exclusion of such sites. A proposed measure requiring developers to verify the source of copyrighted data acquired from third parties has been removed from the final draft.

The lawyers argue that this creates a practical blind spot, allowing unlawful content to slip into training data undetected. Rights holders still retain the ability to pursue action if they believe their content was misused, even if providers are signatories.

Meanwhile, the transparency chapter now outlines specific standards, rather than general principles. The safety and security section also sets enforceable expectations, increasing the operational burden on model developers.

William Fry warns that gaps between the code’s obligations and the missing technical documentation could have costly consequences. They conclude that, without the final training data template or implementation details, both developers and rights holders face compliance risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Asia’s humanities under pressure from AI surge

Universities across Asia, notably in China, are slashing liberal arts enrolments to expand STEM and AI programmes. Institutions like Fudan and Tsinghua are reducing intake for humanities subjects, as policymakers push for a high-tech workforce.

Despite this shift, educators argue that sidelining subjects like history, philosophy, and ethics threatens the cultivation of critical thinking, moral insight, and cultural literacy, which are increasingly necessary in an AI-saturated world.

They contend that humanistic reasoning remains essential for navigating AI’s societal and ethical complexities.

Innovators are pushing for hybrid models of education. Humanities courses are evolving to incorporate AI-driven archival research, digital analysis, and data-informed argumentation, turning liberal arts into tools for interpreting technology, rather than resisting it.

Supporters emphasise that liberal arts students offer distinct advantages: they excel in communication, ethical judgement, storytelling and adaptability, capacities that machines lack. These soft skills are increasingly valued in workplaces that integrate AI.

Analysts predict that the future lies not in abandoning the humanities but in transforming them. When taught alongside technical disciplines, through STEAM initiatives and cross-disciplinary curricula, liberal arts can complement AI, ensuring that technology remains anchored in human values.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Stanford study flags dangers of using AI as mental health therapists

A new Stanford University study warns that therapy chatbots powered by large language models (LLMs) may pose serious risks to users, including reinforcing harmful stigmas and offering unsafe responses. Presented at the upcoming ACM Conference on Fairness, Accountability, and Transparency, the study analysed five popular AI chatbots marketed for therapeutic support, evaluating them against core guidelines for assessing human therapists.

The research team conducted two experiments, one to detect bias and stigma, and another to assess how chatbots respond to real-world mental health issues. Findings revealed that bots were more likely to stigmatise people with conditions like schizophrenia and alcohol dependence compared to those with depression.

Shockingly, newer and larger AI models showed no improvement in reducing this bias. In more serious cases, such as suicidal ideation or delusional thinking, some bots failed to react appropriately or even encouraged unsafe behaviour.

Lead author Jared Moore and senior researcher Nick Haber emphasised that simply adding more training data isn’t enough to solve these issues. In one example, a bot replied to a user hinting at suicidal thoughts by listing bridge heights, rather than recognising the red flag and providing support. The researchers argue that these shortcomings highlight the gap between AI’s current capabilities and the sensitive demands of mental health care.

Despite these dangers, the team doesn’t entirely dismiss the use of AI in therapy. If used thoughtfully, they suggest that LLMs could still be valuable tools for non-clinical tasks like journaling support, billing, or therapist training. As Haber put it, ‘LLMs potentially have a compelling future in therapy, but we need to think critically about precisely what this role should be.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Humanoid robot unveils portrait of King Charles, denies replacing artists

At the recent unveiling of a new oil painting titled Algorithm King, humanoid robot Ai-Da presented her interpretation of King Charles, emphasising the monarch’s commitment to environmentalism and interfaith dialogue. The portrait, showcased at the UK’s diplomatic mission in Geneva, was created using a blend of AI algorithms and traditional artistic inspiration.

Ai-Da, designed with a human-like face and robotic limbs, has captured public attention since becoming the first humanoid robot to sell artwork at auction, with a portrait of mathematician Alan Turing fetching over $1 million. Despite her growing profile in the art world, Ai-Da insists she poses no threat to human creativity, positioning her work as a platform to spark discussion on the ethical use of AI.

Speaking at the UN’s AI for Good summit, the robot artist stressed that her creations aim to inspire responsible innovation and critical reflection on the intersection of technology and culture.

‘The value of my art lies not in monetary worth,’ she said, ‘but in how it prompts people to think about the future of creativity.’

Ai-Da’s creator, art specialist Aidan Meller, reiterated that the project is an ethical experiment rather than an attempt to replace human artists. Echoing that sentiment, Ai-Da concluded, ‘I hope my work encourages a positive, thoughtful use of AI—always mindful of its limits and risks.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!