Australia introduces new codes to protect children online

Australian regulators have released new guidance ahead of the introduction of industry codes designed to protect children from exposure to harmful online material.

The Age Restricted Material Codes will apply to a wide range of online services, including app stores, social platforms, equipment providers, pornography sites and generative AI services, with the first tranche beginning on 27 December.

The rules require search engines to blur image results involving pornography or extreme violence to reduce accidental exposure among young users.

Search services must also redirect people seeking information related to suicide, self-harm or eating disorders to professional mental health support instead of allowing harmful spirals to unfold.

eSafety argues that many children unintentionally encounter disturbing material at very young ages, often through search results that act as gateways rather than deliberate choices.

The guidance emphasises that adults will still be able to access unblurred material by clicking through, and there is no requirement for Australians to log in or identify themselves before searching.

eSafety maintains that the priority lies in shielding children from images and videos they cannot cognitively process or forget once they have seen them.

These codes will operate alongside existing standards that tackle unlawful content and will complement new minimum age requirements for social media, which are set to begin in mid-December.

Authorities in Australia consider the reforms essential for reducing preventable harm and guiding vulnerable users towards appropriate support services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU targets X for breaking the Digital Services Act

European regulators have imposed a €120 million fine on X after ruling that the platform breached transparency rules under the Digital Services Act.

The Commission concluded that the company misled users with its blue checkmark system, restricted research access and operated an inadequate advertising repository.

Officials found that paid verification on X encouraged users to believe their accounts had been authenticated when, in fact, no meaningful checks were conducted.

EU regulators argued that such practices increased exposure to scams and impersonation fraud, rather than supporting trust in online communication.

The Commission also stated that the platform’s advertising repository lacked essential information and created barriers that prevented researchers and civil society from examining potential threats.

European authorities judged that X failed to offer legitimate access to public data for eligible researchers. Terms of service blocked independent data collection, including scraping, while the company’s internal processes created further obstacles.

Regulators believe such restrictions frustrate efforts to study misinformation, influence campaigns and other systemic risks within the EU.

X must now outline the steps it will take to end the blue checkmark infringement within 60 working days and deliver a wider action plan on data access and advertising transparency within 90 days.

Failure to comply could lead to further penalties as the Commission continues its broader investigation into information manipulation and illegal content across the platform.


AI fuels a new wave of cyber threats in Greece

Greece is confronting a rapid rise in cybercrime as AI strengthens the tools available to criminals, according to the head of the National Cyber Security Authority.

Michael Bletsas warned that Europe is already experiencing hybrid conflict, with northeastern member states facing severe incidents that reveal a digital frontline. Greece has not endured physical sabotage or damage to its infrastructure, yet cyberattacks remain a pressing concern.

Bletsas noted that most activity involves cybercrime instead of destructive action. He pointed to the expansion of cyberactivism and vandalism through denial-of-service attacks, which usually cause no lasting harm.

The broader problem stems from a surge in AI-driven intrusions and espionage, which offer new capabilities to malicious groups and create a more volatile environment.

Moreover, Bletsas said that the physical and digital worlds should be viewed as a single, interconnected sphere, with security designed around shared principles rather than being treated as separate domains.

Digital warfare is already unfolding, and Greece is part of it. The country must now define its alliances and strengthen its readiness as cyber threats intensify and the global divide grows deeper.


Taiwan blocks Chinese app RedNote after surge in online scams

Authorities in Taiwan will block the Chinese social media and shopping app RedNote for a year following a surge in online scams linked to the platform. Officials report more than 1,700 fraud cases tied to the app since last year, with losses exceeding NT$247 million.

Regulators report that the company failed to meet required data-security standards and did not respond to requests for a plan to strengthen cybersecurity.

Internet providers have been instructed to restrict access, affecting several million users who now see a security warning message when opening the app.

Concerns over Beijing’s online influence and the spread of disinformation have added pressure on Taiwanese authorities to tighten oversight of Chinese platforms.

RedNote’s operators are also facing scrutiny in mainland China, where regulators have criticised the company over what they labelled ‘negative’ content.


Europe builds a laser ground station in Greenland to protect satellite links

Europe is building a laser-based ground station in Greenland to secure satellite links as Russian jamming intensifies. ESA and Denmark chose Kangerlussuaq for its clear skies and direct access to polar-orbit traffic.

The optical system uses Astrolight’s technology to transmit data markedly faster than radio signals. Narrow laser beams resist interference, allowing vast imaging sets to reach analysts with far fewer disruptions.

Developers expect terabytes to be downloaded in under a minute, reducing reliance on vulnerable Arctic radio sites. European officials say the upgrade strengthens autonomy as undersea cables and navigation systems face repeated targeting from countries such as Russia.

The Danish station will support defence monitoring, climate science and search-and-rescue operations across high latitudes. Work is underway, with completion planned for 2026 and ambitions for a wider global laser network.


NSA warns AI poses new risks for operational technology

The US National Security Agency (NSA), together with international partners including Australia’s ACSC, has issued guidance on the secure integration of AI into operational technology (OT).

The Principles for the Secure Integration of AI in OT warn that while AI can optimise critical infrastructure, it also introduces new risks for safety-critical environments. Although aimed at OT administrators, the guidance also highlights issues relevant to IT networks.

AI is increasingly deployed in sectors such as energy, water treatment, healthcare, and manufacturing to automate processes and enhance efficiency.

The NSA’s guidance, however, flags several potential threats, including adversarial prompt injection, data poisoning, AI drift, and reduced explainability, all of which can compromise safety and compliance.

Over-reliance on AI may also lead to human de-skilling, cognitive overload, and distraction, while AI hallucinations raise concerns about reliability in safety-critical settings.

Experts emphasise that AI cannot currently be trusted to make independent safety decisions in OT networks, where the margin for error is far smaller than in standard IT systems.

Sam Maesschalck, an OT engineer, noted that introducing AI without first addressing pre-existing infrastructure issues, such as insufficient data feeds or incomplete asset inventories, could undermine both security and operational efficiency.

The guidance aims to help organisations evaluate AI risks, clarify accountability, and prepare for potential misbehaviour, underlining the importance of careful planning before deploying AI in operationally critical environments.


Google drives health innovation through new EU AI initiative

At the European Health Summit in Brussels, Google presented new research suggesting that AI could help Europe overcome rising healthcare pressures.

The report, prepared by Implement Consulting Group for Google, argues that scientific productivity is improving again, rather than continuing a long period of stagnation. Early results already show shorter waiting times in emergency departments, offering practitioners more space to focus on patient needs.

Momentum at the Summit increased as Google announced new support for AI adoption in frontline care.

A $5 million grant from Google.org will fund Bayes Impact to launch an EU-wide initiative known as ‘Impulse Healthcare’. The programme will allow nurses, doctors and administrators to design and test their own AI tools through an open-source platform.

By placing development in the hands of practitioners, the project aims to expand ideas that help staff reclaim valuable time during periods of growing demand.

Successful tools developed at a local level will be scaled across the EU, providing a path to more efficient workflows and enhanced patient care.

Google views these efforts as part of a broader push to rebuild capacity in Europe’s health systems.

AI-assisted solutions may reduce administrative burdens, support strained workforces and guide decisions through faster, data-driven insights, strengthening everyday clinical practice.


€700 million crypto fraud network spanning Europe broken up

Authorities have dismantled an extensive cryptocurrency fraud and money laundering network that moved over €700 million, following years of international investigation.

The operation began with an investigation into a single fraudulent cryptocurrency platform and eventually uncovered an extensive network of fake investment schemes targeting thousands of victims.

Victims were drawn in by fake ads promising high returns and pressured via criminal call centres to pay more. Transferred funds were stolen and laundered across blockchains and exchanges, exposing a highly organised operation across Europe and beyond.

Police raids across Cyprus, Germany, and Spain in late October 2025 resulted in nine arrests and the seizure of millions in assets, including bank deposits, cryptocurrencies, cash, digital devices, and luxury watches.

Europol and Eurojust coordinated the cross-border operation with national authorities from France, Belgium, Germany, Spain, Malta, Cyprus, and other nations.

The second phase, executed in November, targeted the affiliate marketing infrastructure behind fraudulent online advertising, including deepfake campaigns impersonating celebrities and media outlets.

Law enforcement teams in Belgium, Bulgaria, Germany, and Israel conducted searches, dismantling key elements of the scam ecosystem. Investigations continue to track down remaining assets and dismantle the broader network.


Russia blocks Snapchat and FaceTime access

Russia’s state communications watchdog has intensified its campaign against major foreign platforms by blocking Snapchat and restricting FaceTime calls.

The move follows earlier reports of disrupted Apple services inside the country, although users could still connect through VPNs rather than direct access. Roskomnadzor accused Snapchat of enabling criminal activity and repeated earlier claims targeting Apple’s service.

The decision marks the authorities’ first formal confirmation of limits on both platforms. It arrives as pressure increases on WhatsApp, which remains Russia’s most popular messenger, with officials warning that a full block is possible.

Meta is accused of failing to meet data-localisation rules and of what the authorities describe as repeated violations linked to terrorism and fraud.

Digital rights groups argue that technical restrictions are designed to push citizens toward Max, a government-backed messenger that activists say grants officials sweeping access to private conversations, rather than protecting user privacy.

These measures coincide with wider crackdowns, including the recent blocking of the Roblox gaming platform over allegations of extremist content and harmful influence on children.

The tightening of controls reflects a broader effort to regulate online communication as Russia seeks stronger oversight of digital platforms. The latest blocks add further uncertainty for millions of users who depend on familiar services instead of switching to state-supported alternatives.


Japanese high-schooler suspected of hacking net-cafe chain using AI

Authorities in Tokyo have issued an arrest warrant for a 17-year-old boy from Osaka on suspicion of orchestrating a large-scale cyberattack using artificial intelligence. The alleged target was the operator of the Kaikatsu Club internet-café chain (along with a related fitness-gym business), in an attack that may have exposed the personal data of about 7.3 million customers.

According to investigators, the suspect used a computer programme, reportedly built with help from an AI chatbot, to send unauthorised commands around 7.24 million times to the company’s servers in order to extract membership information. The teenager was previously arrested in November in connection with a separate fraud case involving credit-card misuse.

Police have charged him under Japan’s law against unauthorised computer access and for obstructing business, though so far no evidence has emerged of misuse (for example, resale or public leaks) of the stolen data.

In his statement to investigators, the suspect reportedly said he carried out the hack simply because he found it fun to probe system vulnerabilities.

This case is the latest in a growing pattern of so-called AI-enabled cybercrime in Japan, from fraudulent subscription schemes to ransomware generation. Experts warn that generative AI is lowering the barrier to entry for complex attacks, enabling individuals with limited technical training to carry out large-scale hacking or fraud.
