Global tech competition intensifies as the UK outlines a £1 trillion digital blueprint

The United Kingdom has unveiled a strategy to grow its digital economy to £1 trillion by harnessing AI, quantum computing, and cybersecurity. The plan emphasises public-private partnerships, training, and international collaboration to tackle skills shortages and infrastructure gaps.

The initiative builds on the UK tech sector’s £1.2 trillion valuation, with regional hubs in cities such as Bristol and Manchester fuelling expansion in emerging technologies. Experts, however, warn that outdated systems and talent deficits could stall progress unless workforce development accelerates.

AI is central to the plan, with applications spanning healthcare and finance. Quantum computing also features, with investments in research and cybersecurity aimed at strengthening resilience against supply disruptions and future threats.

The government highlights sustainability as a priority, promoting renewable energy and circular economies to ensure digital growth aligns with environmental goals. Regional investment in blockchain, agri-tech, and micro-factories is expected to create jobs and diversify innovation-driven growth.

By pursuing these initiatives, the UK aims to establish itself as a leading global tech player alongside the US and China. Ethical frameworks and adaptive strategies will be key to maintaining public trust and competitiveness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia weighs cyber militia to counter rising digital threats

Cyberattacks are intensifying worldwide, with Australia now ranked fourth globally for threats against operational technology and industrial sectors. Rising AI-powered incursions have exposed serious vulnerabilities in the country’s national defence and critical infrastructure.

Australia’s 2023–2030 Cyber Security Strategy aims to strengthen resilience through six ‘cyber shields’, including legislation and intelligence sharing. But a skills shortage leaves organisations vulnerable as ransomware attacks on mining and manufacturing continue to rise.

One proposal gaining traction is the creation of a volunteer ‘cyber militia’. Inspired by Estonia’s volunteer cyber defence unit, the network would mobilise unconventional talent, including retirees, hobbyist hackers, and students, to bolster monitoring, threat hunting, and incident response.

Supporters argue that such a force could fill gaps left by formal recruitment, particularly in smaller firms and rural networks. Critics, however, warn of vetting risks, insider threats, and the need for new legal frameworks to govern liability and training.

Pilot schemes in high-risk sectors, such as energy and finance, have been proposed, with public-private funding viewed as crucial. Advocates argue that a cyber militia could democratise security and foster collective responsibility, aligning with the country’s long-term cybersecurity strategy.

AI-designed proteins could transform longevity and drug development

OpenAI has launched GPT-4b micro, an AI model developed with longevity startup Retro Biosciences to accelerate protein engineering. Unlike chatbots, it focuses on biological sequences and 3D structures.

The model redesigned two Yamanaka factors, proteins that convert adult cells into stem cells, achieving 50-fold higher efficiency in lab tests along with improved DNA repair. Older cells behaved more youthfully, a result that could shorten trial-and-error cycles in regenerative medicine.

AI-designed proteins could speed up drug development and allow longevity startups to rejuvenate cells safely and consistently. The work also opens new possibilities in synthetic biology beyond natural evolution.

OpenAI emphasised that the research is still early and lab-based, with clinical applications requiring caution. Transparency is key, as the technology’s power to design potent proteins quickly raises biosecurity considerations.

Mount Fuji eruption simulated in an AI video for Tokyo

Residents of Tokyo have been shown a stark warning of what could happen if Mount Fuji erupts.

The metropolitan government released a three-minute AI-generated video depicting the capital buried in volcanic ash to raise awareness and urge preparation.

The simulation shows thick clouds of ash descending on Shibuya and other districts about one to two hours after an eruption, with up to 10 centimetres expected to accumulate. Unlike snow, volcanic ash does not melt away; once wet, it hardens, damaging power lines and disrupting communications.

The video also highlights major risks to transport. Ash on train tracks, runways, and roads would halt trains, ground planes, and make driving perilous.

Two-wheeled vehicles could become unusable under even modest ashfall. Power outages and shortages of food and supplies are expected as shops run empty, echoing the disruption seen after the 2011 earthquake.

Officials advise people to prepare masks, goggles, and at least three days of emergency food. The narrator warns that because no one knows when Mount Fuji might erupt, daily preparedness in Japan is vital to protect health, infrastructure, and communities.

Senior OpenAI executive Julia Villagra departs amid talent war

OpenAI’s chief people officer, Julia Villagra, has left the company, marking the latest leadership change at the AI pioneer. Villagra, who joined the San Francisco firm in early 2024 and was promoted in March, previously led its human resources operations.

Her responsibilities will temporarily be overseen by chief strategy officer Jason Kwon, while chief applications officer Fidji Simo will lead the search for her successor.

OpenAI said Villagra is stepping away to pursue her personal interest in art, music, and storytelling as tools to help people understand the shift towards artificial general intelligence, the stage at which machines surpass human performance in most forms of work.

The departure comes as OpenAI navigates a period of intense competition for AI expertise. Microsoft-backed OpenAI is valued at about $300 billion, with a potential share sale set to raise that figure to $500 billion.

The company faces growing rivalry from Meta, where Mark Zuckerberg has reportedly offered $100 million signing bonuses to attract OpenAI talent.

While OpenAI expands, public concerns over the impact of AI on employment continue. A Reuters/Ipsos poll found 71% of Americans fear AI could permanently displace too many workers, despite the unemployment rate standing at 4.2% in July.

Google claims Gemini uses less water and energy per text prompt

Google has published new estimates on the environmental footprint of Gemini, claiming a single text prompt uses about five drops of water and 0.24 watt-hours of electricity. The company says this equates to 0.03 grams of carbon dioxide emissions.

According to Google, efficiency gains have reduced Gemini’s energy consumption and carbon footprint per text prompt by factors of 33 and 44, respectively, over the past year. Chief technologist Ben Gomes said the model now delivers higher-quality responses with a significantly lower footprint.

The company argued that these figures are significantly lower than those suggested in earlier research. However, Shaolei Ren, the author of one of the cited papers, said Google’s comparisons were misleading and incomplete.

Ren noted that Google compared its latest onsite-only water figures against his study’s highest total figures, creating the impression that Gemini was far more efficient. He also said Google omitted indirect water use, such as electricity-related consumption, from its estimates.

Hong Kong deepfake scandal exposes gaps in privacy law

The discovery of hundreds of non-consensual deepfake images on a student’s laptop at the University of Hong Kong has reignited debate about privacy, technology, and accountability. The scandal echoes the 2008 Edison Chen photo leak, which exposed gaps in law and gender double standards.

Unlike stolen private images, today’s fabrications are AI-generated composites that can tarnish reputations with a single photo scraped from social media. The dismissal that such content is ‘not real’ fails to address the damage caused by its existence.

Hong Kong’s legal system struggles to keep pace with this shift. Its privacy ordinance, drafted in the 1990s, was not designed for machine-learning fabrications, while traditional harassment and defamation laws predate the advent of AI. Victims risk harm before distribution can even be proven.

The city’s privacy watchdog has launched a criminal investigation, but questions remain over whether creation or possession of deepfakes is covered by existing statutes. Critics warn that overreach could suppress legitimate uses, yet inaction leaves space for abuse.

Observers argue that just as the snapshot camera spurred the development of modern privacy law, deepfakes must drive a new legal boundary to safeguard dignity. Without reform, victims may continue facing harm without recourse.

Contested quantum study updated but questions remain

A controversial study that once claimed evidence of elusive Majorana quasiparticles has received a 20-page correction in Science five years after its publication.

The paper, led by researchers in Copenhagen and affiliated with Microsoft, originally described signals from nanowires that were said to match those expected from Majoranas, exotic entities believed to be useful for quantum computing due to their resistance to noise.

Independent investigators concluded that, although the data selection was narrow, no misconduct occurred. The omitted data did not invalidate the main claims, but it raised concerns about transparency and selection bias in reporting.

The authors argue the correction merely clarifies their methods. Yet the wider research community remains divided, and no group has successfully replicated the findings. Some experts now see the approach as too fragile for practical use in quantum computing.

South Korea unveils five-year AI blueprint for ‘super-innovation economy’

South Korea’s new administration has unveiled a five-year economic plan to build what it calls a ‘super-innovation economy’ by integrating AI across all sectors of society.

The strategy, led by President Lee Jae-myung, commits 100 trillion won (approximately US$71.5 billion) to position the country among the world’s top three AI powerhouses. Private firms will drive development, with government support for nationwide adoption.

Plans include a sovereign Korean-language AI model, humanoid robots for logistics and industry, and commercialising autonomous vehicles by 2027. Unmanned ships are targeted for completion by 2030, alongside widespread use of drones in firefighting and aviation.

AI will also be introduced into drug approvals, smart factories, welfare services, and tax administration, with AI-based tax consultations expected by 2026. Education initiatives and a national AI training data cluster will nurture talent and accelerate innovation.

Five domestic firms, including Naver Cloud, SK Telecom, and LG AI Research, will receive state support to build homegrown AI foundation models. Industry reports currently rank South Korea between sixth and 10th in global AI competitiveness.

Google enhances AI Mode with personalised dining suggestions

Google has expanded its AI Mode in Search to 180 additional countries and territories, introducing new agentic features to help users make restaurant reservations. The service remains limited to English and is not yet available in the European Union.

The update enables users to specify their dining preferences and constraints, allowing the system to scan multiple platforms and present real-time availability. Once a choice is made, users are directed to the restaurant’s booking page.

Partners supporting the service include OpenTable, Resy, SeatGeek, StubHub, Booksy, Tock, and Ticketmaster. The feature is part of Google’s Search Labs experiment, available to subscribers of Google AI Ultra in the United States.

AI Mode also tailors suggestions based on previous searches and introduces a Share function, letting users share restaurant options or planning results with others, with the option to delete shared links.