Google partners with UK government on AI training

The UK government has struck a major partnership with Google Cloud aimed at modernising public services by replacing ageing IT systems and equipping 100,000 civil servants with digital and AI skills by 2030.

Backed by DSIT, the initiative targets sectors like the NHS and local councils, seeking both operational efficiency and workforce transformation.

Replacing legacy contracts, some of which date back decades, could unlock as much as £45 billion in efficiency savings, say ministers. Google DeepMind will provide technical expertise to help departments adopt emerging AI solutions and accelerate public sector innovation.

Despite these promising aims, privacy campaigners warn that reliance on a US-based tech giant threatens national data sovereignty and may lead to long-term lock-in.

Foxglove’s Martha Dark described the deal as ‘dangerously naive’, with concerns around data access, accountability, public procurement processes and geopolitical risk.

As ministers pursue broader technological transformation, similar partnerships with Microsoft, OpenAI and Meta are underway, reflecting an industry-wide effort to bridge digital skills gaps and bring agile solutions into Whitehall.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI interviews leave job candidates in the dark

An increasing number of startups are now using AI to conduct video job interviews, often without making this clear to applicants. Senior software developers are finding themselves unknowingly engaging with automated systems instead of human recruiters.

Applicants are typically asked to submit videos responding to broad interview prompts, including examples and case studies, often without time constraints or human engagement.

These asynchronous interviews are then processed by AI, which evaluates responses using natural language processing, facial cues and tone to assign scores.

Critics argue that this approach shifts the burden of labour onto job seekers, while employers remain unaware of the hidden costs and flawed metrics. There is also concern about the erosion of dignity in hiring, with candidates treated as data points rather than individuals.

Although AI offers potential efficiencies, the current implementation risks deepening dysfunctions in recruitment by prioritising speed over fairness, transparency and candidate experience. Until the technology is used more thoughtfully, experts advise job seekers to avoid such processes altogether.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI industry warned of looming financial collapse

Despite widespread popularity and unprecedented investment, OpenAI may be facing a deepening financial crisis. Since launching ChatGPT, the company has lost billions yearly, including an estimated $5 billion in 2024 alone.

Tech critic Ed Zitron argues that the AI industry is heading towards a ‘subprime AI crisis’, comparing the sector’s inflated valuations and spiralling losses to the subprime mortgage collapse in 2007. Startups like OpenAI and Anthropic continue to operate at huge losses.

Companies relying on AI infrastructure are already feeling the squeeze. Anysphere, which uses Anthropic’s models, recently raised prices sharply, angering users and blaming costs passed down from its infrastructure provider.

To manage exploding demand, OpenAI has also introduced tiered pricing and restricted services for free users, raising concerns that access to AI tools will soon be locked behind expensive paywalls. With 800 million weekly users, any future revenue strategy could alienate a large part of its global base.

Zitron believes these conditions cannot sustain long-term growth and will ultimately damage revenues and public trust. The industry, he warns, may be building its future on unstable ground.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and big data to streamline South Korea’s drug evaluation processes

The Ministry of Food and Drug Safety (MFDS) of South Korea is modernising its drug review and evaluation processes by incorporating AI, big data, and other emerging technologies.

The efforts are being spearheaded by the ministry’s National Institute for Food and Drug Safety Evaluation (NIFDS).

Starting next year, NIFDS plans to apply AI to assist with routine tasks such as preparing review data.

The initial focus will be synthetic chemical drugs, gradually expanding to other product categories.

‘Initial AI applications will focus on streamlining repetitive tasks,’ said Jeong Ji-won, head of the Pharmaceutical and Medical Device Research Department at NIFDS.

‘The AI system is being developed internally, and we are evaluating its potential for real-world inspection scenarios. A phased approach is necessary due to the large volume of data required,’ Jeong added.

In parallel, NIFDS is exploring the use of big data in various regulatory activities.

One initiative involves applying big data analytics to enhance risk assessments during overseas GMP inspections. ‘Standardisation remains a challenge due to varying formats across facilities,’ said Sohn Kyung-hoon, head of the Drug Research Division.

‘Nonetheless, we’re working to develop a system that enhances the efficiency of inspections without relying on foreign collaborations.’ Efforts also include building domain-specific Korean-English translation models for safety documentation.

The institute also integrates AI into pharmaceutical manufacturing oversight and develops public data utilisation frameworks. The efforts include systems for analysing adverse drug reaction reports and standardising data inputs.

NIFDS is actively researching new analysis methods and safety protocols regarding impurity control.

‘We’re prioritising research on impurities such as NDMA,’ Sohn noted. Simultaneous detection methods are being tailored for smaller manufacturers.

New categorisation techniques are also being developed to monitor previously untracked substances.

On the biologics front, NIFDS aims to finalise its mRNA vaccine evaluation technology by year-end.

The five-year project supports the national strategy for improving infectious disease preparedness in South Korea, including work on delivery mechanisms and material composition.

‘This initiative is part of our broader strategy to improve preparedness for future infectious disease outbreaks,’ said Lee Chul-hyun, head of the Biologics Research Division.

Evaluation protocols for antibody drugs are still in progress. However, indirect support is being provided through guidelines and benchmarking against international cases. Separately, the Herbal Medicine Research Division is upgrading its standardised product distribution model.

The current use-based system will shift to a field-based one next year, extending to pharmaceuticals, functional foods, and cosmetics sectors.

‘We’re refining the system to improve access and quality control,’ said Hwang Jin-hee, head of the division. Collaboration with regional research institutions remains a key component of this work.

NIFDS currently offers 396 standardised herbal medicines. The institute continues to develop new reference materials annually as part of its evolving strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI scam targets donors with fake orphan images

Cambodian authorities have warned the public about increasing online scams using AI-generated images to deceive donors. The scams often show fabricated scenes of orphaned children or grieving families, with QR codes attached to collect money.

One Facebook account, ‘Khmer Khmer’, was named in an investigation by the Anti-Cyber Crime Department for spreading false stories and deepfake images to solicit charity donations. These included claims of a wife unable to afford a coffin and false fundraising efforts near the Thai border.

The department confirmed that AI-generated realistic visuals are designed to manipulate emotions and lure donations. Cambodian officials continue investigations and have promised legal action if evidence of criminal activity is confirmed.

Authorities reminded the public to remain cautious and to only contribute to verified and officially recognised campaigns. While AI’s ability to create realistic content has many uses, it also opens the door to dangerous forms of fraud and misinformation when abused.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy concerns rise over Gemini’s on‑device data access

From 7 July 2025, Google’s Gemini AI will default to accessing your WhatsApp, SMS and call apps, even without Gemini Apps Activity enabled, through an Android OS ‘System Intelligence’ integration.

Google insists the assistant cannot read or summarise your WhatsApp messages; it only performs actions like sending replies and accessing notifications.

Integration occurs at the operating‑system level, granting Gemini enhanced control over third‑party apps, including reading and responding to notifications or handling media.

However, this has prompted criticism from privacy‑minded users, who view it as intrusive data access, even though Google maintains no off‑device content sharing.

Alarmed users quickly turned off the feature via Gemini’s in‑app settings or resorted to more advanced measures, like removing Gemini with ADB or turning off the Google app entirely.

The controversy highlights growing concerns over how deeply OS‑level AI tools can access personal data, blurring the lines between convenience and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN reports surge in intangible investment driven by AI and data

Global investment is increasingly flowing into intangible assets such as software, data, and AI, marking what the UN has described as a ‘fundamental shift’ in how economies develop and compete.

According to a new report from the World Intellectual Property Organisation (WIPO), co-authored with the Luiss Business School based in Italy, investment in intellectual property-related assets grew three times faster in 2024 than spending on physical assets like buildings and machinery.

WIPO reported that total intangible investment reached $7.6 trillion across 27 high- and middle-income economies last year, up from $7.4 trillion in 2023—a real-term growth rate of 3 percent. In contrast, growth in physical asset investment has been more sluggish, hindered by high interest rates and a slow economic recovery.

‘We’re witnessing a fundamental shift in how economies grow and compete,’ said WIPO Director General Daren Tang. ‘While businesses have slowed down investing in factories and equipment during uncertain times, they’re doubling down on intangible assets.’

The report highlights software and databases as the fastest-growing categories, expanding by more than 7 percent annually between 2013 and 2022. It attributes much of this trend to the accelerating adoption of AI, which requires significant investment in data infrastructure and training datasets.

WIPO also noted that the United States remains the global leader in absolute intangible investment, spending nearly twice as much as France, Germany, Japan, and the United Kingdom. However, Sweden topped the list for investment intensity, with intangible assets representing 16 percent of its GDP.

The US, France, and Finland followed at 15 percent each, while India ranked ahead of several EU countries and Japan at an intensity of nearly 10 percent.

Despite economic disruptions over the past decade and a half, intangible investments have remained resilient, growing at a compound annual rate of 4 percent since 2008. By contrast, investment in tangible assets rose just 1 percent over the same period.

‘We are only at the beginning of the AI boom,’ said Sacha Wunsch-Vincent, head of WIPO’s economics and data analytics department.

He noted that in addition to driving demand for physical infrastructure like chips and servers, AI is now contributing to sustained investment growth in data and software, cornerstones of the intangible economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LG’s Exaone Path 2.0 uses AI to transform genetic testing

LG AI Research has introduced Exaone Path 2.0, an upgraded AI model designed to analyse pathology images for disease diagnosis, significantly reducing the time required for genetic testing.

The new model, unveiled Wednesday, can reportedly process pathology images in under a minute—a significant shift from conventional genetic testing methods that often take more than two weeks.

According to LG, the AI system offers enhanced accuracy in detecting genetic mutations and gene expression patterns by learning from detailed image patches and full-slide pathology data.

Developed by LG AI Research, a division of the LG Group, Exaone Path 2.0 is trained on over 10,000 whole-slide images (WSIs) and multiomics pairs, enabling it to integrate structural information with molecular biology insights. The company said it has achieved a 78.4 percent accuracy rate in predicting genetic mutations.

The model has also been tailored for specific applications in oncology, including lung and colorectal cancers, where it can help clinicians identify patient groups most likely to benefit from targeted therapies.

LG AI Research is collaborating with Professor Hwang Tae-hyun and his team at Vanderbilt University Medical Center in the US to further its application in real-world clinical settings.

Their shared goal is to develop a multimodal medical AI platform that can support precision medicine directly within clinical environments.

Hwang, a key contributor to the US government’s Cancer Moonshot program and founder of the Molecular AI Initiative at Vanderbilt, emphasised that the aim is to create AI tools usable by clinicians in active medical practice, rather than limiting innovation to the lab.

In addition to oncology, LG AI Research plans to extend its multimodal AI initiatives into transplant rejection, immunology, and diabetes.

It is also collaborating with the Jackson Laboratory to support Alzheimer’s research and working with Professor Baek Min-kyung’s team at Seoul National University on next-generation protein structure prediction.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kurbalija: Digital tools are reshaping diplomacy

Once the global stage for peace negotiations and humanitarian accords, Geneva finds itself at the heart of a new kind of diplomacy shaped by algorithms, data flows, and AI. Jovan Kurbalija, Executive Director of Diplo and Head of the Geneva Internet Platform, believes this transformation reflects Geneva’s long tradition of engaging with science, technology, and global governance. He explained this in an interview with Léman Bleu.

Diplo, a Swiss-Maltese foundation, supports diplomats and international professionals as they navigate the increasingly complex landscape of digital governance.

‘Where we once trained them to understand the internet,’ Kurbalija explains, ‘we now help them grasp and negotiate issues around AI and digital tools.’

The foundation not only aids diplomats in addressing cyber threats and data privacy but also equips them with AI-enhanced tools for negotiation, public communication, and consular protection.

According to Kurbalija, digital governance touches everyone. From how our phones are built to how data moves across borders, nearly 50 distinct issues—from cybersecurity and e-commerce to data protection and digital standards—are debated in the corridors of International Geneva. These debates are no longer reserved for specialists because they affect the everyday lives of billions.

Kurbalija draws a fascinating connection between Geneva’s philosophical heritage and today’s technological dilemmas. Writers like Mary Shelley, Voltaire, and Borges, each with ties to Geneva, grappled with themes eerily relevant today: unchecked scientific ambition, the tension between freedom and control, and the challenge of processing vast amounts of knowledge. He dubs this tradition ‘EspriTech de Genève,’ a spirit of intellectual inquiry that still echoes in debates over AI and its impact on society.

AI, Kurbalija warns, is both a marvel and a potential menace.

‘It’s not exactly Frankenstein,’ he says, ‘but without proper governance, it could become one.’

As technology evolves, international mechanisms must evolve with it to ensure it serves humanity rather than endangering it.

Diplomacy, meanwhile, is being reshaped not just in terms of content but in method. Digital tools allow diplomats to engage more directly with the public and make negotiations more transparent. Yet, the rise of social media has its downsides. Public broadcasting of diplomatic proceedings risks undermining the very privacy and trust needed to reach a compromise.

‘Diplomacy,’ Kurbalija notes, ‘needs space to breathe—to think, negotiate, resolve.’

He also cautions against the growing concentration of AI and data power in the hands of a few corporations.

‘We risk having our collective knowledge privatised, commodified, and sold back to us,’ he says.

The antidote? A push for more inclusive, bottom-up AI development that empowers individuals, communities, and nations.

As Geneva continues its historic role in shaping the future, Kurbalija’s message is clear: managing technology wisely is not just a diplomatic challenge—it’s a global necessity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered imposter poses as US Secretary of State Rubio

An imposter posing as US Secretary of State Marco Rubio used an AI-generated voice and text messages to contact high-ranking officials, including foreign ministers, a senator, and a state governor.

The messages, sent through SMS and the encrypted app Signal, triggered an internal warning across the US State Department, according to a classified cable dated 3 July.

The individual created a fake Signal account using the name ‘Marco.Rubio@state.gov’ and began contacting targets in mid-June.

At least two received AI-generated voicemails, while others were encouraged to continue the chat via Signal. US officials said the aim was likely to gain access to sensitive information or compromise official accounts.

The State Department confirmed it is investigating the breach and has urged all embassies and consulates to remain alert. While no direct cyber threat was found, the department warned that shared information could still be exposed if targets were deceived.

A spokesperson declined to provide further details for security reasons.

The incident appears linked to a broader wave of AI-driven disinformation. A second operation, possibly tied to Russian actors, reportedly targeted Gmail accounts of journalists and former officials.

The FBI has warned of rising cases of ‘smishing’ and ‘vishing’ involving AI-generated content.

Experts now warn that deepfakes are becoming harder to detect, as the technology advances faster than defences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!