AMVER Awards honour UK maritime companies for search-and-rescue commitment

At an event hosted on 2 December by VIKAND in partnership with the United States Coast Guard (USCG), 30 UK maritime companies were recognised for their continued commitment to safety at sea through the AMVER system.

In total, the 255 vessels they operate represent 1,587 collective years of AMVER eligibility, reflecting decades of voluntary participation in a global ship-reporting network that helps coordinate rescue operations far from shore using real-time vessel-position data.

Speakers at the ceremony emphasised that AMVER remains essential for ‘mariners helping mariners’, enabling merchant vessels to respond swiftly to distress calls anywhere in the world, regardless of nationality.

Representatives from maritime insurers, navigational-services firms and classification societies underscored the continuing importance of collaboration, readiness and mutual support across the global shipping industry.

This recognition illustrates how safety and solidarity at sea continue to matter deeply in an industry facing mounting pressures, from regulatory change to environmental and geopolitical risks. The awards reaffirm the UK fleet’s active role in keeping maritime trade not only productive, but also ready to save lives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe builds a laser ground station in Greenland to protect satellite links

Europe is building a laser-based ground station in Greenland to secure satellite links as Russian jamming intensifies. ESA and Denmark chose Kangerlussuaq for its clear skies and direct access to polar-orbit traffic.

The optical system uses Astrolight’s technology to transmit data markedly faster than radio signals. Narrow laser beams resist interference, allowing vast imaging sets to reach analysts with far fewer disruptions.

Developers expect terabytes to be downloaded in under a minute, reducing reliance on vulnerable Arctic radio sites. European officials say the upgrade strengthens autonomy as undersea cables and navigation systems face repeated targeting from countries such as Russia.

The Danish station will support defence monitoring, climate science and search-and-rescue operations across high latitudes. Work is underway, with completion planned for 2026 and ambitions for a wider global laser network.

NSA warns AI poses new risks for operational technology

The US National Security Agency (NSA), together with international partners including Australia’s ACSC, has issued guidance on the secure integration of AI into operational technology (OT).

The Principles for the Secure Integration of AI in OT warn that while AI can optimise critical infrastructure, it also introduces new risks for safety-critical environments. Although aimed at OT administrators, the guidance also highlights issues relevant to IT networks.

AI is increasingly deployed in sectors such as energy, water treatment, healthcare, and manufacturing to automate processes and enhance efficiency.

The NSA’s guidance, however, flags several potential threats, including adversarial prompt injection, data poisoning, AI drift, and reduced explainability, all of which can compromise safety and compliance.

Over-reliance on AI may also lead to human de-skilling, cognitive overload, and distraction, while AI hallucinations raise concerns about reliability in safety-critical settings.

Experts emphasise that AI cannot currently be trusted to make independent safety decisions in OT networks, where the margin for error is far smaller than in standard IT systems.

Sam Maesschalck, an OT engineer, noted that introducing AI without first addressing pre-existing infrastructure issues, such as insufficient data feeds or incomplete asset inventories, could undermine both security and operational efficiency.

The guidance aims to help organisations evaluate AI risks, clarify accountability, and prepare for potential misbehaviour, underlining the importance of careful planning before deploying AI in operationally critical environments.

Google drives health innovation through new EU AI initiative

At the European Health Summit in Brussels, Google presented new research suggesting that AI could help Europe overcome rising healthcare pressures.

The report, prepared by Implement Consulting Group for Google, argues that scientific productivity is improving again after a long period of stagnation. Early results already show shorter waiting times in emergency departments, giving practitioners more time to focus on patient needs.

Momentum at the Summit increased as Google announced new support for AI adoption in frontline care.

Five million dollars from Google.org will fund Bayes Impact to launch an EU-wide initiative known as ‘Impulse Healthcare’. The programme will allow nurses, doctors and administrators to design and test their own AI tools through an open-source platform.

By placing development in the hands of practitioners, the project aims to expand ideas that help staff reclaim valuable time during periods of growing demand.

Successful tools developed at a local level will be scaled across the EU, providing a path to more efficient workflows and enhanced patient care.

Google views these efforts as part of a broader push to rebuild capacity in Europe’s health systems.

AI-assisted solutions may reduce administrative burdens, support strained workforces and guide decisions through faster, data-driven insights, strengthening everyday clinical practice.

Will the AI boom hold or collapse?

Global investment in AI has soared to unprecedented heights, yet the technology’s real-world adoption lags far behind the market’s feverish expectations. Despite trillions of dollars in valuations and a global AI market projected to reach nearly $5 trillion by 2033, mounting evidence suggests that companies struggle to translate AI pilots into meaningful results.

As Jovan Kurbalija argues in his recent analysis, hype has outpaced both technological limits and society’s ability to absorb rapid change, raising the question of whether the AI bubble is nearing a breaking point.

Kurbalija identifies several forces inflating the bubble, such as relentless media enthusiasm that fuels fear of missing out, diminishing returns on ever-larger computing power, and the inherent logical constraints of today’s large language models, which cannot simply be ‘scaled’ into human-level intelligence.

At the same time, organisations are slow to reorganise workflows, regulations, and skills around AI, resulting in high failure rates for corporate initiatives. A new competitive landscape, driven by ultra-low-cost open-source models such as China’s DeepSeek, further exposes the fragility of current proprietary spending and the vast discrepancies in development costs.

Looking forward, Kurbalija outlines possible futures ranging from a rational shift toward smaller, knowledge-centric AI systems to a world in which major AI firms become ‘too big to fail’, protected by government backstops similar to those deployed in the 2008 financial crisis. Geopolitics may also justify massive public spending as the US and China frame AI leadership as a national security imperative.

Other scenarios include a consolidation of power among a handful of tech giants or a mild ‘AI winter’ in which investment cools and attention pivots to the next frontier technologies, such as quantum computing or immersive digital environments.

Regardless of which path emerges, the defining battle ahead will centre on the open-source versus proprietary AI debate. Both Washington and Beijing are increasingly embracing open models as strategic assets, potentially reshaping global standards and forcing big tech firms to rethink their closed ecosystems.

As Kurbalija concludes, the outcome will depend less on technical breakthroughs and more on societal choices, balancing openness, competition, and security in shaping whether AI becomes a sustainable foundation of economic life or the latest digital bubble to deflate under its own weight.

€700 million crypto fraud network spanning Europe broken up

Authorities have broken up an extensive cryptocurrency fraud and money-laundering network that moved over EUR 700 million, following years of international investigation.

The operation began with an investigation into a single fraudulent cryptocurrency platform and eventually uncovered an extensive network of fake investment schemes targeting thousands of victims.

Victims were drawn in by fake ads promising high returns and pressured via criminal call centres to pay more. Transferred funds were stolen and laundered across blockchains and exchanges, exposing a highly organised operation across Europe and beyond.

Police raids across Cyprus, Germany, and Spain in late October 2025 resulted in nine arrests and the seizure of millions in assets, including bank deposits, cryptocurrencies, cash, digital devices, and luxury watches.

Europol and Eurojust coordinated the cross-border operation with national authorities from France, Belgium, Germany, Spain, Malta, Cyprus, and other nations.

The second phase, executed in November, targeted the affiliate marketing infrastructure behind fraudulent online advertising, including deepfake campaigns impersonating celebrities and media outlets.

Law enforcement teams in Belgium, Bulgaria, Germany, and Israel conducted searches, dismantling key elements of the scam ecosystem. Investigations continue to track down remaining assets and dismantle the broader network.

Russia blocks Snapchat and FaceTime access

Russia’s state communications watchdog has intensified its campaign against major foreign platforms by blocking Snapchat and restricting FaceTime calls.

The move follows earlier reports of disrupted Apple services inside the country, although users could still connect through VPNs rather than direct access. Roskomnadzor accused Snapchat of enabling criminal activity and repeated earlier claims targeting Apple’s service.

The decision marks the authorities’ first formal confirmation of limits on both platforms. It arrives as pressure increases on WhatsApp, which remains Russia’s most popular messenger, with officials warning that a full block is possible.

Meta is accused of failing to meet data-localisation rules and of what the authorities describe as repeated violations linked to terrorism and fraud.

Digital rights groups argue that technical restrictions are designed to push citizens toward Max, a government-backed messenger that activists say grants officials sweeping access to private conversations, rather than protecting user privacy.

These measures coincide with wider crackdowns, including the recent blocking of the Roblox gaming platform over allegations of extremist content and harmful influence on children.

The tightening of controls reflects a broader effort to regulate online communication as Russia seeks stronger oversight of digital platforms. The latest blocks add further uncertainty for millions of users who depend on familiar services instead of switching to state-supported alternatives.

Porn site fined £1m for ignoring UK child safety age checks

A UK pornographic website has been fined £1m by Ofcom for failing to comply with mandatory age verification under the Online Safety Act. The company, AVS Group Ltd, did not respond to repeated contact from the regulator, prompting an additional £50,000 penalty.

The Act requires websites hosting adult content to implement ‘highly effective age assurance’ to prevent children from accessing explicit material. Ofcom has ordered the company to comply within 72 hours or face further daily fines.

Other tech platforms are also under scrutiny, with one unnamed major social media company undergoing compliance checks. Regulators warn that non-compliance will result in formal action, highlighting the growing enforcement of child safety online.

Critics argue the law must be tougher to ensure real protection, particularly for minors and women online. While age checks have reduced UK traffic to some sites, loopholes like VPNs remain a concern, and regulators are pushing for stricter adherence.

Campaigning in the age of generative AI

Generative AI is rapidly altering the political campaign landscape, argues the ORF article, which outlines how election teams worldwide are adopting AI tools for persuasion, outreach and content creation.

Campaigns can now generate customised messages for different voter groups, produce multilingual content at scale, and automate much of the traditional grunt work of campaigning.

Proponents say the technology makes campaigning more efficient and accessible, particularly in multilingual or resource-constrained settings. But the ease and speed with which content can be generated also lower the barrier to misuse: AI-driven deepfakes, synthetic voices and disinformation campaigns can be deployed to mislead voters or distort public discourse.

Recent research supports these worries. For example, large-scale studies published in Science and Nature have demonstrated that AI chatbots can influence voter opinions, swaying a non-trivial share of undecided voters toward a target candidate simply by presenting persuasive content.

Meanwhile, independent analyses show that during the 2024 US election campaign, a noticeable fraction of content on social media was AI-generated, sometimes used to spread misleading narratives or exaggerate support for certain candidates.

For democracy and governance, the shift poses thorny challenges. AI-driven campaigns risk eroding public trust, exacerbating polarisation and undermining electoral legitimacy. Regulators and policymakers now face pressure to devise new safeguards, such as transparency requirements around AI usage in political advertising, stronger fact-checking, and clearer accountability for misuse.

The ORF article argues these debates should start now, before AI becomes so entrenched that rollback is impossible.

New AI stroke-imaging tool halves time to treatment

A new AI-powered tool rolled out across England is helping clinicians diagnose strokes much sooner, significantly speeding up treatment decisions and improving patient outcomes. According to a study published in The Lancet Digital Health, roughly 15,000 patients benefited directly from AI-assisted scan reviews.

The tool, deployed at over 70 hospitals, analyses brain scans in minutes to rapidly identify clots, supporting doctors in deciding whether a patient needs urgent procedures such as a thrombectomy. Sites using the AI saw thrombectomy rates double (from 2.3% to 4.6%), compared with more modest increases at hospitals not using the technology.

Time is critical in stroke treatment: each 20-minute delay in thrombectomy reduces a patient’s chance of full recovery by around 1 per cent. The AI-driven system also helped cut the average ‘door-in to door-out’ time at primary stroke centres by 64 minutes, making it far more likely that patients reach a specialist centre in time for treatment.

Health-service leaders say the findings provide real-world evidence that AI imaging can save lives and reduce disability after stroke. As a result, the technology is now part of a wider national rollout across every regularly admitting stroke service in England.
