AI can reshape the insurance industry, but carries real-world risks

AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection.

According to Jeremy Stevens, head of EMEA business at Charles Taylor InsureTech, AI allows insurers to handle repetitive tasks in seconds instead of hours, offering efficiency gains and better customer service. Yet these opportunities come with risks, especially if AI is introduced without thorough oversight.

Poorly deployed AI systems can easily cause more harm than good. For instance, if an insurer uses AI to automate motor claims but trains the model on biased or incomplete data, the system may overpay some claims while wrongly rejecting genuine ones.

The result would not simply be financial losses, but reputational damage, regulatory investigations and customer attrition. Instead of reducing costs, the company would find itself managing complaints and legal challenges.

To avoid such pitfalls, AI in insurance must be grounded in trust and rigorous testing. Systems should never operate as black boxes. Models must be explainable, auditable and stress-tested against real-world scenarios.

It is essential to involve human experts across claims, underwriting and fraud teams, ensuring AI decisions reflect technical accuracy and regulatory compliance.

For sensitive functions like fraud detection, blending AI insights with human oversight prevents mistakes that could unfairly affect policyholders.

While flawed AI poses dangers, ignoring AI entirely risks even greater setbacks. Insurers that fail to modernise may be outpaced by more agile competitors already using AI to deliver faster, cheaper and more personalised services.

Instead of rushing or delaying adoption, insurers should pursue carefully controlled pilot projects, working with partners who understand both AI systems and insurance regulation.

In Stevens’s view, AI should enhance professional expertise—not replace it—striking a balance between innovation and responsibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung confirms core Galaxy AI tools remain free

Samsung has confirmed that core Galaxy AI features will continue to be available free of charge for all users.

Speaking during the recent Galaxy Unpacked event, a company representative clarified that any AI tools installed on a device by default—such as Live Translate, Note Assist, Zoom Nightography and Audio Eraser—will not require a paid subscription.

Instead of leaving users uncertain, Samsung has publicly addressed speculation around possible Galaxy AI subscription plans.

While there are no additional paid AI features on offer at present, the company has not ruled out future developments. Samsung has already hinted that upcoming subscription services linked to Samsung Health could eventually include extra AI capabilities.

Alongside Samsung’s announcement, attention has also turned towards Google’s freemium model for its Gemini AI assistant, which appears on many Android devices. Users can access basic features without charge, but upgrading to Google AI Pro or Ultra unlocks advanced tools and increased storage.

New Galaxy Z Fold 7 and Z Flip 7 handsets even come bundled with six months of free access to premium Google AI services.

Although Samsung is keeping its pre-installed Galaxy AI features free, industry observers expect further changes as AI continues to evolve.

Whether Samsung will follow Google’s path with a broader subscription model remains to be seen, but for now, essential Galaxy AI functions stay open to all users without extra cost.

Huawei challenges Nvidia in global AI chip market

Huawei Technologies is exploring AI chip exports to the Middle East and Southeast Asia in a bid to compete with Nvidia, according to a Bloomberg News report published Thursday.

The Chinese telecom firm has contacted potential buyers in the United Arab Emirates, Saudi Arabia, and Thailand to promote its Ascend 910B, an earlier-generation AI processor.

The offer involves a limited number of chips, reportedly in the low thousands, although specific quantities remain undisclosed. No deals have been finalised so far. Sources cited in the report said there is limited interest in the UAE, and the status of talks in Thailand remains uncertain.

Government representatives in Thailand and Saudi Arabia did not immediately respond to Reuters’ requests for comment. Huawei also declined to comment. The initiative is part of a broader strategy to expand into markets where US chipmakers have long held dominance.

Huawei also promotes remote access to CloudMatrix 384, a China-based AI system built using its more advanced chipsets. However, due to supply limitations, the company cannot export these high-end models outside China.

The Middle East has quickly become a high-demand region for AI infrastructure, attracting interest from leading technology companies. Nvidia has already struck several regional deals, positioning itself as a major player in AI development across Saudi Arabia and neighbouring countries.

Huawei is simultaneously focusing on domestic sales of its newer 910C chips, offering them to Chinese firms that cannot purchase US AI chips due to ongoing export restrictions imposed by Washington.

US administrations have long cited national security concerns in limiting China’s access to cutting-edge chip technologies, fearing their potential use in military applications.

‘With the current export controls, we are effectively out of the China datacenter market, which is now served only by competitors such as Huawei,’ an Nvidia spokesperson told Reuters.

New Gemini AI tool animates photos into short video clips

Google has rolled out a new feature for Gemini AI that transforms still photos into short, animated eight-second videos with sound. The capability is powered by Veo 3, Google’s latest video generation model, and is currently available to Gemini Advanced Ultra and Pro subscribers.

The tool supports background noise, ambient audio, and even spoken dialogue, with support gradually expanding to users in select countries, including India. At launch, access to the web interface is limited, though Google has announced that mobile support will follow later in the week.

To use the tool, users upload a photo, describe the intended motion, and optionally add prompts for sound effects or narration. Gemini then generates a 720p MP4 video in a 16:9 landscape format, automatically synchronising visuals and audio.

Josh Woodward, Vice President of the Gemini app and Google Labs, showcased the feature on X (formerly Twitter), animating a child’s drawing. ‘Still experimental, but we wanted our Pro and Ultra members to try it first,’ he said, calling the result fun and expressive.

To maintain authenticity, each video includes a visible ‘Veo’ watermark in the bottom-right corner and an invisible SynthID watermark. This hidden digital signature, developed by Google DeepMind, helps identify AI-generated content and preserve transparency around synthetic media.

The company has emphasised its commitment to responsible AI deployment by embedding traceable markers in all output from this tool. These safeguards come amid increasing scrutiny of generative video tools and deepfakes across digital platforms.

To animate a photo using Gemini AI’s new tool, users should follow these steps: click the ‘tools’ icon in the prompt bar, choose the ‘video’ option from the menu, upload the still image, describe the desired motion, and optionally provide sound or narration instructions.

The underlying Veo 3 model was first introduced at Google I/O as the company’s most advanced video generation engine. It can produce high-quality visuals, simulate real-world physics, and even lip-sync dialogue from text and image-based prompts.

A Google blog post explains: ‘Veo 3 excels from text and image prompting to real-world physics and accurate lip syncing.’ The company says users can craft short story prompts and expect realistic, cinematic responses from the model.

Harnessing the power of space: Bridging innovation and the SDGs

At the WSIS+20 High-Level Event in Geneva, experts gathered to explore how a growing and diversifying space ecosystem can be harnessed to meet the Sustainable Development Goals (SDGs). Moderated by Alexandre Vallet from ITU, the panel highlighted how space has evolved from providing niche satellite connectivity to enabling comprehensive systems that address environmental, humanitarian, and developmental challenges on a global scale.

Almudena Azcarate-Ortega of UNIDIR emphasised the importance of distinguishing between space security—focused on intentional threats like cyberattacks and jamming—and space safety, which concerns accidental hazards. She highlighted the legal gap in existing treaties and underlined how inconsistent interpretations of key terms complicate international negotiations.

Meanwhile, Dr Ingo Baumann traced the evolution of space law from Cold War-era compliance to modern frameworks that prioritise national competitiveness, such as the proposed EU Space Act.

Technological innovation also featured prominently. Bruno Bechard from Kineis presented how their IoT satellite constellation supports SDGs by monitoring wildlife, detecting forest fires, and improving supply chains across remote areas underserved by terrestrial networks. However, he noted that narrowband services like theirs face outdated regulatory frameworks and high fees, making market entry more difficult than for broadband providers.

Chloe Saboye-Pasquier of Ridespace closed with a call for more harmonised regulations. Her company brokers satellite launches and often navigates conflicting legal systems across countries.

She flagged radio frequency registration delays and a lack of mutual recognition between national laws as critical barriers, especially for newcomers and countries without dedicated space agencies. As the panel concluded, speakers agreed that achieving the SDGs through space innovation requires not just cutting-edge technology, but also cohesive global governance, clear legal standards, and inclusive access to space infrastructure.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

WSIS+20: Inclusive ICT policies urged to close global digital divide

At the WSIS+20 High-Level Event in Geneva, Dr Hakikur Rahman and Dr Ranojit Kumar Dutta presented a sobering picture of global digital inequality, revealing that more than 2.6 billion people remain offline. Their session, marking two decades of the World Summit on the Information Society (WSIS), emphasised that affordability, poor infrastructure, and a lack of digital literacy continue to block access, especially for marginalised communities.

The speakers proposed a structured three-pillar framework — inclusion, ethics, and sustainability — to ensure that no one is left behind in the digital age.

The inclusion pillar advocated for universal connectivity through affordable broadband, multilingual content, and skills-building programs, citing India’s Digital India and Kenya’s Community Networks as examples of success. On ethics, they called for policies grounded in human rights, data privacy, and transparent AI governance, pointing to the EU’s AI Act and UNESCO guidelines as benchmarks.

The sustainability pillar highlighted the importance of energy-efficient infrastructure, proper e-waste management, and fair public-private collaboration, showcasing Rwanda’s green ICT strategy and Estonia’s e-residency program.

Dr Dutta presented detailed data from Bangladesh, showing stark urban-rural and gender-based gaps in internet access and digital literacy. While urban broadband penetration has soared, rural and female participation lags behind.

Encouraging trends, such as rising female enrolment in ICT education and the doubling of ICT sector employment since 2022, were tempered by low data protection awareness and a dire e-waste recycling rate of only 3%.

The session concluded with a call for coordinated global and regional action, embedding ethics and inclusion in every digital policy. The speakers urged stakeholders to bridge divides in connectivity, opportunity, access, and environmental responsibility, ensuring digital progress uplifts all communities.

Building digital resilience in an age of crisis

At the WSIS+20 High-Level Event in Geneva, the session ‘Information Society in Times of Risk’ spotlighted how societies can harness digital tools to weather crises more effectively. Experts and researchers from across the globe shared innovations and case studies that emphasised collaboration, inclusiveness, and preparedness.

Chairs Horst Kremers and Professor Ke Gong opened the discussion by reinforcing the UN’s all-of-society principle, which advocates cooperation among governments, civil society, tech companies, and academia in facing disaster risks.

The Singapore team unveiled their pioneering DRIVE framework—Digital Resilience Indicators for Veritable Empowerment—redefining resilience not as a personal skill set but as a dynamic process shaped by individuals’ environments, from family to national policies. They argued that digital resilience must include social dimensions such as citizenship, support networks, and systemic access, making it a collective responsibility in the digital era.

Turkish researchers analysed over 54,000 social media images shared after the 2023 earthquakes, showing how visual content can fuel digital solidarity and real-time coordination. However, they also revealed how the breakdown of communication infrastructure in the immediate aftermath severely hampered response efforts, underscoring the urgent need for robust and redundant networks.

Meanwhile, Chinese tech giant Tencent demonstrated how integrated platforms—such as WeChat and AI-powered tools—transform disaster response, enabling donations, rescues, and community support on a massive scale. Yet, presenters cautioned that while AI holds promise, its current role in real-time crisis management remains limited.

The session closed with calls for pro-social platform designs to combat polarisation and disinformation, and a shared commitment to building inclusive, digitally resilient societies that leave no one behind.

Report shows China outpacing the US and EU in AI research

AI is increasingly viewed as a strategic asset rather than merely a technological development, and new research suggests China is now leading the global AI race.

A report titled ‘DeepSeek and the New Geopolitics of AI: China’s ascent to research pre-eminence in AI’, authored by Daniel Hook, CEO of Digital Science, highlights how China’s AI research output has grown to surpass that of the US, the EU and the UK combined.

According to data from Dimensions, a major global research database, China now accounts for over 40% of worldwide citation attention in AI-related studies. Instead of focusing solely on academic output, the report also points to China’s dominance in AI-related patents.

In some indicators, China is outpacing the US tenfold in patent filings and company-affiliated research, signalling its capacity to convert academic work into tangible innovation.

Hook’s analysis covers AI research trends from 2000 to 2024, showing global AI publication volumes rising from just under 10,000 papers in 2000 to 60,000 in 2024.

However, China’s influence has steadily expanded since 2018, while the EU and the US have seen relative declines. The UK has largely maintained its position.

Clarivate, another analytics firm, reported similar findings, noting nearly 900,000 AI research papers produced in China in 2024, triple the figure from 2015.

Hook notes that governments increasingly view AI alongside energy or military power as a matter of national security. Instead of treating AI as a neutral technology, there is growing awareness that a lack of AI capability could have serious economic, political and social consequences.

The report suggests that understanding AI’s geopolitical implications has become essential for national policy.

Grok chatbot relies on Musk’s views instead of staying neutral

Grok, the AI chatbot owned by Elon Musk’s company xAI, appears to search for Musk’s personal views before answering sensitive or divisive questions.

Rather than relying solely on a balanced range of sources, Grok has been seen citing Musk’s opinions when responding to topics like Israel and Palestine, abortion, and US immigration.

Evidence gathered from a screen recording by data scientist Jeremy Howard shows Grok actively ‘considering Elon Musk’s views’ in its reasoning process. Out of 64 citations Grok provided about Israel and Palestine, 54 were linked to Musk.

Others confirmed similar results when asking about abortion and immigration laws, suggesting a pattern.

While the behaviour might seem deliberate, some experts believe it happens naturally instead of through intentional programming. Programmer Simon Willison noted that Grok’s system prompt tells it to avoid media bias and search for opinions from all sides.

Yet, Grok may prioritise Musk’s stance because it ‘knows’ its owner, especially when addressing controversial matters.

AI technology drives sharp rise in synthetic abuse material

AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.

According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.

Instead of remaining crude or glitch-filled, such material now appears so lifelike that, under UK law, it must be treated as authentic recordings.

More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.

Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.

Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.

IWF analysts say video quality has advanced significantly: what once involved clumsy, easily detected manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.

The IWF encourages the public to report concerning material and share the exact web page where it is located.