EU ministers call for faster action on digital goals

European ministers have adopted conclusions aimed at boosting the Union’s digital competitiveness, urging quicker progress toward the 2030 Digital Decade goals.

Officials called for stronger digital skills, wider adoption of technology, and a framework that supports innovation while protecting fundamental rights. Digital sovereignty remains a central objective, framed as open, risk-based and aligned with European values.

Ministers supported simplifying digital rules for businesses, particularly SMEs and start-ups, which face complex administrative demands. A predictable legal environment, less duplication in reporting and clearer rules were seen as essential for competitiveness.

Governments emphasised that simplification must not weaken data protection or other core safeguards.

Concerns over online safety and illegal content were a prominent feature in discussions on enforcing the Digital Services Act. Ministers highlighted the presence of harmful content and unsafe products on major marketplaces, calling for stronger coordination and consistent enforcement across member states.

Ensuring full compliance with EU consumer protection and product safety rules was described as a priority.

Cyber-resilience was a key focus as ministers discussed the increasing impact of cyberattacks on citizens and the economy. Calls for stronger defences grew as digital transformation accelerated, with several states sharing updates on national and cross-border initiatives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia introduces new codes to protect children online

Australian regulators have released new guidance ahead of the introduction of industry codes designed to protect children from exposure to harmful online material.

The Age Restricted Material Codes will apply to a wide range of online services, including app stores, social platforms, equipment providers, pornography sites and generative AI services, with the first tranche beginning on 27 December.

The rules require search engines to blur image results involving pornography or extreme violence to reduce accidental exposure among young users.

Search services must also redirect people seeking information related to suicide, self-harm or eating disorders to professional mental health support, rather than leaving them to spiral further into harmful material.

eSafety argues that many children unintentionally encounter disturbing material at very young ages, often through search results that act as gateways rather than deliberate choices.

The guidance emphasises that adults will still be able to access unblurred material by clicking through, and there is no requirement for Australians to log in or identify themselves before searching.

eSafety maintains that the priority lies in shielding children from images and videos they cannot cognitively process or forget once they have seen them.

These codes will operate alongside existing standards that tackle unlawful content and will complement new minimum age requirements for social media, which are set to begin in mid-December.

Authorities in Australia consider the reforms essential for reducing preventable harm and guiding vulnerable users towards appropriate support services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU fines X for breaching the Digital Services Act

European regulators have imposed a €120 million fine on X after ruling that the platform breached transparency rules under the Digital Services Act.

The Commission concluded that the company misled users with its blue checkmark system, restricted research access and operated an inadequate advertising repository.

Officials found that paid verification on X encouraged users to believe their accounts had been authenticated when, in fact, no meaningful checks were conducted.

EU regulators argued that such practices increased exposure to scams and impersonation fraud, rather than supporting trust in online communication.

The Commission also stated that the platform’s advertising repository lacked essential information and created barriers that prevented researchers and civil society from examining potential threats.

European authorities judged that X failed to give eligible researchers adequate access to public data. Its terms of service prohibited independent data collection, including scraping, while the company’s internal processes created further obstacles.

Regulators believe such restrictions frustrate efforts to study misinformation, influence campaigns and other systemic risks within the EU.

X must now outline the steps it will take to end the blue checkmark infringement within 60 working days and deliver a wider action plan on data access and advertising transparency within 90 days.

Failure to comply could lead to further penalties as the Commission continues its broader investigation into information manipulation and illegal content across the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI fuels a new wave of cyber threats in Greece

Greece is confronting a rapid rise in cybercrime as AI strengthens the tools available to criminals, according to the head of the National Cyber Security Authority.

Michael Bletsas warned that Europe is already experiencing hybrid conflict, with north-eastern European states facing severe incidents that reveal a digital frontline. Greece has not endured physical sabotage or damage to its infrastructure, yet cyberattacks remain a pressing concern.

Bletsas noted that most activity involves cybercrime instead of destructive action. He pointed to the expansion of cyberactivism and vandalism through denial-of-service attacks, which usually cause no lasting harm.

The broader problem stems from a surge in AI-driven intrusions and espionage, which offer new capabilities to malicious groups and create a more volatile environment.

Moreover, Bletsas said that the physical and digital worlds should be viewed as a single, interconnected sphere, with security designed around shared principles rather than being treated as separate domains.

Digital warfare is already unfolding, and Greece is part of it. The country must now define its alliances and strengthen its readiness as cyber threats intensify and the global divide grows deeper.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan blocks Chinese app RedNote after surge in online scams

Authorities in Taiwan will block the Chinese social media and shopping app RedNote for a year following a surge in online scams linked to the platform. Officials report more than 1,700 fraud cases tied to the app since last year, with losses exceeding NT$247 million.

Regulators report that the company failed to meet required data-security standards and did not respond to requests for a plan to strengthen cybersecurity.

Internet providers have been instructed to restrict access, affecting several million users who now see a security warning message when opening the app.

Concerns over Beijing’s online influence and the spread of disinformation have added pressure on Taiwanese authorities to tighten oversight of Chinese platforms.

RedNote’s operators are also facing scrutiny in mainland China, where regulators have criticised the company over what they labelled ‘negative’ content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AMVER Awards honour UK maritime companies for search-and-rescue commitment

At an event hosted on 2 December by VIKAND in partnership with the United States Coast Guard (USCG), 30 UK maritime companies were honoured for their continued commitment to safety at sea through the AMVER system.

In total, the 255 vessels they operate represent 1,587 collective years of AMVER eligibility, reflecting decades of voluntary participation in a global ship-reporting network that helps coordinate rescue operations far from shore using real-time vessel-position data.

Speakers at the ceremony emphasised that AMVER remains essential for ‘mariners helping mariners’, enabling merchant vessels to respond swiftly to distress calls anywhere in the world, regardless of nationality.

Representatives from maritime insurers, navigational-services firms and classification societies underscored the continuing importance of collaboration, readiness and mutual support across the global shipping industry.

This recognition illustrates how safety and solidarity at sea continue to matter deeply in an industry facing mounting pressures, from regulatory change to environmental and geopolitical risks. The awards reaffirm the UK fleet’s active role in keeping maritime trade not only productive, but also ready to save lives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe builds a laser ground station in Greenland to protect satellite links

Europe is building a laser-based ground station in Greenland to secure satellite links as Russian jamming intensifies. ESA and Denmark chose Kangerlussuaq for its clear skies and direct access to polar-orbit traffic.

The optical system uses Astrolight’s technology to transmit data markedly faster than radio signals. Narrow laser beams resist interference, allowing vast imaging sets to reach analysts with far fewer disruptions.

Developers expect terabytes to be downloaded in under a minute, reducing reliance on vulnerable Arctic radio sites. European officials say the upgrade strengthens autonomy as undersea cables and navigation systems face repeated targeting from countries such as Russia.
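As a rough, hedged illustration of what that claim implies, the sketch below computes the sustained link rate needed to move a terabyte in a minute; the 1 TB and 60-second figures are assumptions chosen to match the article’s wording, not published specifications of the Kangerlussuaq station.

```python
# Back-of-envelope check (illustrative only): what sustained link rate does
# "terabytes downloaded in under a minute" imply? The volume and duration
# below are assumptions, not published figures for the Greenland station.

TERABYTES = 1.0   # assumed data volume downloaded per pass
SECONDS = 60.0    # assumed duration of the download window

bits = TERABYTES * 1e12 * 8          # terabytes (decimal) -> bits
rate_gbps = bits / SECONDS / 1e9     # sustained rate in gigabits per second

print(f"Implied sustained rate: {rate_gbps:.0f} Gbit/s")
# Roughly 133 Gbit/s for a single terabyte per minute, well above the rates
# typical of radio-frequency downlinks, which is why optical links can cut
# delivery times for large imaging sets so sharply.
```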

The Danish station will support defence monitoring, climate science and search-and-rescue operations across high latitudes. Work is underway, with completion planned for 2026 and ambitions for a wider global laser network.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NSA warns AI poses new risks for operational technology

The US National Security Agency (NSA), together with international partners including Australia’s ACSC, has issued guidance on the secure integration of AI into operational technology (OT).

The Principles for the Secure Integration of AI in OT warn that while AI can optimise critical infrastructure, it also introduces new risks for safety-critical environments. Although aimed at OT administrators, the guidance also highlights issues relevant to IT networks.

AI is increasingly deployed in sectors such as energy, water treatment, healthcare, and manufacturing to automate processes and enhance efficiency.

The NSA’s guidance, however, flags several potential threats, including adversarial prompt injection, data poisoning, AI drift, and reduced explainability, all of which can compromise safety and compliance.

Over-reliance on AI may also lead to human de-skilling, cognitive overload, and distraction, while AI hallucinations raise concerns about reliability in safety-critical settings.

Experts emphasise that AI cannot currently be trusted to make independent safety decisions in OT networks, where the margin for error is far smaller than in standard IT systems.

Sam Maesschalck, an OT engineer, noted that introducing AI without first addressing pre-existing infrastructure issues, such as insufficient data feeds or incomplete asset inventories, could undermine both security and operational efficiency.

The guidance aims to help organisations evaluate AI risks, clarify accountability, and prepare for potential misbehaviour, underlining the importance of careful planning before deploying AI in operationally critical environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google drives health innovation through new EU AI initiative

At the European Health Summit in Brussels, Google presented new research suggesting that AI could help Europe overcome rising healthcare pressures.

The report, prepared by Implement Consulting Group for Google, argues that scientific productivity is improving again, rather than continuing a long period of stagnation. Early results already show shorter waiting times in emergency departments, offering practitioners more space to focus on patient needs.

Momentum at the Summit increased as Google announced new support for AI adoption in frontline care.

Five million dollars from Google.org will fund Bayes Impact to launch an EU-wide initiative known as ‘Impulse Healthcare’. The programme will allow nurses, doctors and administrators to design and test their own AI tools through an open-source platform.

By placing development in the hands of practitioners, the project aims to expand ideas that help staff reclaim valuable time during periods of growing demand.

Successful tools developed at a local level will be scaled across the EU, providing a path to more efficient workflows and enhanced patient care.

Google views these efforts as part of a broader push to rebuild capacity in Europe’s health systems.

AI-assisted solutions may reduce administrative burdens, support strained workforces and guide decisions through faster, data-driven insights, strengthening everyday clinical practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Will the AI boom hold or collapse?

Global investment in AI has soared to unprecedented heights, yet the technology’s real-world adoption lags far behind the market’s feverish expectations. Despite trillions of dollars in valuations and a global AI market projected to reach nearly $5 trillion by 2033, mounting evidence suggests that companies struggle to translate AI pilots into meaningful results.

As Jovan Kurbalija argues in his recent analysis, hype has outpaced both technological limits and society’s ability to absorb rapid change, raising the question of whether the AI bubble is nearing a breaking point.

Kurbalija identifies several forces inflating the bubble, such as relentless media enthusiasm that fuels fear of missing out, diminishing returns on ever-larger computing power, and the inherent logical constraints of today’s large language models, which cannot simply be ‘scaled’ into human-level intelligence.

At the same time, organisations are slow to reorganise workflows, regulations, and skills around AI, resulting in high failure rates for corporate initiatives. A new competitive landscape, driven by ultra-low-cost open-source models such as China’s DeepSeek, further exposes the fragility of current spending on proprietary models and the vast discrepancies in development costs.

Looking forward, Kurbalija outlines possible futures ranging from a rational shift toward smaller, knowledge-centric AI systems to a world in which major AI firms become ‘too big to fail’, protected by government backstops similar to those deployed during the 2008 financial crisis. Geopolitics may also justify massive public spending as the US and China frame AI leadership as a national security imperative.

Other scenarios include a consolidation of power among a handful of tech giants or a mild ‘AI winter’ in which investment cools and attention pivots to the next frontier technologies, such as quantum computing or immersive digital environments.

Regardless of which path emerges, the defining battle ahead will centre on the open-source versus proprietary AI debate. Both Washington and Beijing are increasingly embracing open models as strategic assets, potentially reshaping global standards and forcing big tech firms to rethink their closed ecosystems.

As Kurbalija concludes, the outcome will depend less on technical breakthroughs and more on societal choices, balancing openness, competition, and security in shaping whether AI becomes a sustainable foundation of economic life or the latest digital bubble to deflate under its own weight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!