UK researchers test robotic dogs and AI for early wildfire detection

Researchers at the University of Bradford are preparing to pilot an AI-enabled wildfire detection system that uses robotic dogs, drones, and emerging 6G networks to identify early signs of fire and alert emergency services.

The trial, set to take place in Greece in 2025, is part of the EU-funded 6G-VERSUS research project, which explores how next-generation connectivity can support crisis response.

According to project lead Dr Kamran Mahroof, wildfires have become a ‘pressing global challenge’ due to their rising frequency and severity. The team intends to combine sensor data collected by four-legged robotic platforms and aerial drones with AI models capable of analysing smoke, vegetation dryness, and early heat signatures. High-bandwidth 6G links are intended to enable near-instantaneous transmission of this data to emergency responders.

The research received funding earlier this year from the EU’s Horizon Innovation Action programme and was showcased in Birmingham during an event on AI solutions for global risks.

While the West Yorkshire Fire and Rescue Service stated that it does not currently employ AI for wildfire operations, it expressed interest in the project. It described its existing use of drones, mapping tools, and weather modelling for situational awareness.

The Bradford team emphasises that early detection remains the most effective tool for limiting wildfire spread. The upcoming pilot will evaluate whether integrated AI, robotics, and next-generation networks can help emergency services respond more quickly and predict where fires are likely to ignite.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Waterstones open to selling AI-generated books, but only with clear labelling

Waterstones CEO James Daunt has stated that the company is willing to stock books created using AI, provided the works are transparently labelled, and there is genuine customer demand.

In an interview on the BBC’s Big Boss podcast, Daunt stressed that Waterstones currently avoids placing AI-generated books on shelves and that his instinct as a bookseller is to ‘recoil’ from such titles. However, he emphasised that the decision ultimately rests with readers.

Daunt described the wider surge in AI-generated content as largely unsuitable for bookshops, saying most such works are not of a type Waterstones would typically sell. The publishing industry continues to debate the implications of generative AI, particularly around threats to authors’ livelihoods and the use of copyrighted works to train large language models.

A recent University of Cambridge survey found that more than half of published authors fear being replaced by AI, and two-thirds believe their writing has been used without permission to train models.

Despite these concerns, some writers are adopting AI tools for research or editing, while AI-generated novels and full-length works are beginning to emerge.

Daunt noted that Waterstones would consider carrying such titles if readers show interest, while making clear that the chain would always label AI-authored works to avoid misleading consumers. He added that readers tend to value the human connection with authors, suggesting that AI books are unlikely to be prominently featured in stores.

Daunt has led Waterstones since 2011, reshaping the chain by decentralising decision-making and removing the longstanding practice of publishers paying for prominent in-store placement. He also currently heads Barnes & Noble in the United States.

With both chains now profitable, Daunt acknowledged that a future share flotation is increasingly likely. However, no decision has been taken on whether London or New York would host any potential IPO.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE launches scholarship to develop future AI leaders

The UAE unveiled a scholarship programme to nurture future leaders in AI at the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). The initiative, guided by Sheikh Tahnoon bin Zayed, targets outstanding undergraduates beginning in the 2025 academic year.

Approximately 350 students will be supported over six years following a rigorous selection process. Applicants will be assessed for mathematical strength, leadership potential and entrepreneurial drive in line with national technological ambitions.

Scholars will gain financial backing alongside opportunities to represent the UAE internationally and develop innovative ventures. Senior officials said the programme strengthens the nation’s aim to build a world-class cohort of AI specialists.

MBZUAI highlighted its interdisciplinary approach that blends technical study with ethics, leadership and business education. Students will have access to advanced facilities, industry placements, and mentorships designed to prepare them for global technology roles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pope urges guidance for youth in an AI-shaped world

Pope Leo XIV urged global institutions to guide younger generations as they navigate the expanding influence of AI. He warned that rapid access to information cannot replace the deeper search for meaning and purpose.

Previously, the Pope had warned students not to rely solely on AI for educational support. He encouraged educators and leaders to help young people develop discernment and confidence when encountering digital systems.

Additionally, he called for coordinated action across politics, business, academia and faith communities to steer technological progress toward the common good. He argued that AI development should not be treated as an inevitable pathway shaped by narrow interests.

He noted that AI reshapes human relationships and cognition, raising concerns about its effects on freedom, creativity and contemplation. He insisted that safeguarding human dignity is essential to managing AI’s wide-ranging consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google drives health innovation through new EU AI initiative

At the European Health Summit in Brussels, Google presented new research suggesting that AI could help Europe overcome rising healthcare pressures.

The report, prepared by Implement Consulting Group for Google, argues that scientific productivity is improving again after a long period of stagnation. Early results already show shorter waiting times in emergency departments, giving clinicians more time to focus on patient needs.

Momentum at the Summit increased as Google announced new support for AI adoption in frontline care.

Five million dollars from Google.org will fund Bayes Impact to launch an EU-wide initiative known as ‘Impulse Healthcare’. The programme will allow nurses, doctors and administrators to design and test their own AI tools through an open-source platform.

By placing development in the hands of practitioners, the project aims to expand ideas that help staff reclaim valuable time during periods of growing demand.

Successful tools developed at a local level will be scaled across the EU, providing a path to more efficient workflows and enhanced patient care.

Google views these efforts as part of a broader push to rebuild capacity in Europe’s health systems.

AI-assisted solutions may reduce administrative burdens, support strained workforces and guide decisions through faster, data-driven insights, strengthening everyday clinical practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Will the AI boom hold or collapse?

Global investment in AI has soared to unprecedented heights, yet the technology’s real-world adoption lags far behind the market’s feverish expectations. Despite trillions of dollars in valuations and a global AI market projected to reach nearly $5 trillion by 2033, mounting evidence suggests that companies struggle to translate AI pilots into meaningful results.

As Jovan Kurbalija argues in his recent analysis, hype has outpaced both technological limits and society’s ability to absorb rapid change, raising the question of whether the AI bubble is nearing a breaking point.

Kurbalija identifies several forces inflating the bubble, such as relentless media enthusiasm that fuels fear of missing out, diminishing returns on ever-larger computing power, and the inherent logical constraints of today’s large language models, which cannot simply be ‘scaled’ into human-level intelligence.

At the same time, organisations are slow to reorganise workflows, regulations, and skills around AI, resulting in high failure rates for corporate initiatives. A new competitive landscape, driven by ultra-low-cost open-source models such as China’s DeepSeek, further exposes the fragility of current proprietary spending and the vast discrepancies in development costs.

Looking forward, Kurbalija outlines possible futures ranging from a rational shift toward smaller, knowledge-centric AI systems to a world in which major AI firms become ‘too big to fail’, protected by government backstops similar to those deployed during the 2008 financial crisis. Geopolitics may also justify massive public spending as the US and China frame AI leadership as a national security imperative.

Other scenarios include a consolidation of power among a handful of tech giants or a mild ‘AI winter’ in which investment cools and attention pivots to the next frontier technologies, such as quantum computing or immersive digital environments.

Regardless of which path emerges, the defining battle ahead will centre on the open-source versus proprietary AI debate. Both Washington and Beijing are increasingly embracing open models as strategic assets, potentially reshaping global standards and forcing big tech firms to rethink their closed ecosystems.

As Kurbalija concludes, the outcome will depend less on technical breakthroughs and more on societal choices, balancing openness, competition, and security in shaping whether AI becomes a sustainable foundation of economic life or the latest digital bubble to deflate under its own weight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Workspace Studio for AI-powered automation

Google has made Workspace Studio generally available, allowing employees to design, manage, and share AI agents directly within Workspace. Powered by Gemini 3, these agents automate tasks ranging from simple routines to complex business workflows, all without coding.

The platform aims to save time on repetitive work, freeing employees to focus on higher-value activities.

Agents can understand context, reason through problems, and integrate with core Workspace apps such as Gmail, Drive, and Chat, as well as enterprise platforms like Asana, Jira, Mailchimp, and Salesforce.

Early adopters, including cleaning solutions leader Kärcher, have used Workspace Studio to streamline workflows, reducing planning time by up to 90% and condensing multi-step tasks into as little as a minute.

Workspace Studio allows users to build agents using templates or natural language prompts, making automation accessible to non-specialists. Agents can manage status reports, reminders, email triage, and critical tasks, such as legal notices or travel requests.

Teams can also easily share agents, ensuring collaboration and consistency across workflows.

The rollout to business customers will continue over the coming weeks. Users can start creating agents immediately, explore templates, use prompts for automations, and join the Gemini Alpha program to test early features and controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Campaigning in the age of generative AI

Generative AI is rapidly altering the political campaign landscape, argues the ORF article, which outlines how election teams worldwide are adopting AI tools for persuasion, outreach and content creation.

Campaigns can now generate customised messages for different voter groups, produce multilingual content at scale, and automate much of the traditional grunt work of campaigning.

Proponents say the technology makes campaigning more efficient and accessible, particularly in multilingual or resource-constrained settings. But the ease and speed with which content can be generated also lower the barrier to misuse: AI-driven deepfakes, synthetic voices and disinformation campaigns can be deployed to mislead voters or distort public discourse.

Recent research supports these worries. For example, large-scale studies published in Science and Nature have demonstrated that AI chatbots can influence voter opinions, swaying a non-trivial share of undecided voters toward a target candidate simply by presenting persuasive content.

Meanwhile, independent analyses show that during the 2024 US election campaign, a noticeable fraction of content on social media was AI-generated, sometimes used to spread misleading narratives or exaggerate support for certain candidates.

For democracy and governance, the shift poses thorny challenges. AI-driven campaigns risk eroding public trust, exacerbating polarisation and undermining electoral legitimacy. Regulators and policymakers now face pressure to devise new safeguards, such as transparency requirements around AI usage in political advertising, stronger fact-checking, and clearer accountability for misuse.

The ORF article argues these debates should start now, before AI becomes so entrenched that rollback is impossible.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI stroke-imaging tool halves time to treatment

A new AI-powered tool rolled out across England is helping clinicians diagnose strokes much sooner, significantly speeding up treatment decisions and improving patient outcomes. According to a study published in The Lancet Digital Health, roughly 15,000 patients benefited directly from AI-assisted scan reviews.

The tool, deployed at over 70 hospitals, analyses brain scans in minutes to rapidly identify clots, supporting doctors in deciding whether a patient needs urgent procedures such as a thrombectomy. Sites using the AI saw thrombectomy rates double (from 2.3% to 4.6%), compared with more modest increases at hospitals not using the technology.

Time is critical in stroke treatment: each 20-minute delay in thrombectomy reduces a patient’s chance of full recovery by around 1%. The AI-driven system also helped cut the average ‘door-in to door-out’ time at primary stroke centres by 64 minutes, making it far more likely that patients reach a specialist centre in time for treatment.

Health-service leaders say the findings provide real-world evidence that AI imaging can save lives and reduce disability after stroke. As a result, the technology is now part of a wider national rollout across every regularly admitting stroke service in England.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japanese high-schooler suspected of hacking internet-café chain using AI

Authorities in Tokyo have issued an arrest warrant for a 17-year-old boy from Osaka on suspicion of orchestrating a large-scale cyberattack using artificial intelligence. The alleged target was the operator of the Kaikatsu Club internet-café chain and a related fitness-gym business; the attack may have exposed the personal data of about 7.3 million customers.

According to investigators, the suspect used a computer programme, reportedly built with help from an AI chatbot, to send unauthorised commands around 7.24 million times to the company’s servers in order to extract membership information. The teenager was previously arrested in November in connection with a separate fraud case involving credit-card misuse.

Police have charged him under Japan’s law against unauthorised computer access and with obstruction of business, though so far no evidence has emerged of misuse of the stolen data (for example, resale or public leaks).

In his statement to investigators, the suspect reportedly said he carried out the hack simply because he found it fun to probe system vulnerabilities.

This case is the latest in a growing pattern of so-called AI-enabled cyber crimes in Japan, from fraudulent subscription schemes to ransomware generation. Experts warn that generative AI is lowering the barrier to entry for complex attacks, enabling individuals with limited technical training to carry out large-scale hacking or fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!