Ireland and Australia deepen cooperation on online safety

Ireland’s online safety regulator has agreed a new partnership with Australia’s eSafety Commissioner to strengthen global approaches to digital harm. The Memorandum of Understanding (MoU) reinforces shared ambitions to improve online protection for children and adults.

The Irish and Australian regulators plan to exchange data, expertise and methodological insights to advance safer digital platforms. Officials describe the arrangement as a way to enhance oversight of systems used to minimise harmful content and promote responsible design.

Leaders from both organisations emphasised the need for accountability across the tech sector. Their comments highlighted efforts to ensure that platforms embed user protection into their product architecture, rather than relying solely on reactive enforcement.

The MoU also opens avenues for collaborative policy development and joint work on education programs. Officials expect a deeper alignment around age assurance technologies and emerging regulatory challenges as online risks continue to evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google drives health innovation through new EU AI initiative

At the European Health Summit in Brussels, Google presented new research suggesting that AI could help Europe overcome rising healthcare pressures.

The report, prepared by Implement Consulting Group for Google, argues that scientific productivity is improving again, rather than continuing a long period of stagnation. Early results already show shorter waiting times in emergency departments, offering practitioners more space to focus on patient needs.

Momentum at the Summit increased as Google announced new support for AI adoption in frontline care.

Five million dollars from Google.org will fund Bayes Impact to launch an EU-wide initiative known as ‘Impulse Healthcare’. The programme will allow nurses, doctors and administrators to design and test their own AI tools through an open-source platform.

By placing development in the hands of practitioners, the project aims to expand ideas that help staff reclaim valuable time during periods of growing demand.

Successful tools developed at a local level will be scaled across the EU, providing a path to more efficient workflows and enhanced patient care.

Google views these efforts as part of a broader push to rebuild capacity in Europe’s health systems.

AI-assisted solutions may reduce administrative burdens, support strained workforces and guide decisions through faster, data-driven insights, strengthening everyday clinical practice.

€700 million crypto fraud network spanning Europe broken up

Authorities have broken up an extensive cryptocurrency fraud and money laundering network that moved more than EUR 700 million, following years of international investigation.

The operation began with an investigation into a single fraudulent cryptocurrency platform and eventually uncovered an extensive network of fake investment schemes targeting thousands of victims.

Victims were drawn in by fake ads promising high returns and pressured via criminal call centres to pay more. Transferred funds were stolen and laundered across blockchains and exchanges, exposing a highly organised operation across Europe and beyond.

Police raids across Cyprus, Germany, and Spain in late October 2025 resulted in nine arrests and the seizure of millions in assets, including bank deposits, cryptocurrencies, cash, digital devices, and luxury watches.

Europol and Eurojust coordinated the cross-border operation with national authorities from France, Belgium, Germany, Spain, Malta, Cyprus, and other nations.

The second phase, executed in November, targeted the affiliate marketing infrastructure behind fraudulent online advertising, including deepfake campaigns impersonating celebrities and media outlets.

Law enforcement teams in Belgium, Bulgaria, Germany, and Israel conducted searches, dismantling key elements of the scam ecosystem. Investigations continue to track down remaining assets and dismantle the broader network.

Russia blocks Snapchat and FaceTime access

Russia’s state communications watchdog has intensified its campaign against major foreign platforms by blocking Snapchat and restricting FaceTime calls.

The move follows earlier reports of disrupted Apple services inside the country, although users can still connect through VPNs rather than direct access. Roskomnadzor accused Snapchat of enabling criminal activity and repeated earlier claims targeting Apple’s service.

The decision marks the authorities’ first formal confirmation of limits on both platforms. It arrives as pressure increases on WhatsApp, which remains Russia’s most popular messenger, with officials warning that a full block is possible.

Meta is accused of failing to meet data-localisation rules and of what the authorities describe as repeated violations linked to terrorism and fraud.

Digital rights groups argue that the technical restrictions are designed not to protect user privacy but to push citizens toward Max, a government-backed messenger that activists say grants officials sweeping access to private conversations.

These measures coincide with wider crackdowns, including the recent blocking of the Roblox gaming platform over allegations of extremist content and harmful influence on children.

The tightening of controls reflects a broader effort to regulate online communication as Russia seeks stronger oversight of digital platforms. The latest blocks add further uncertainty for millions of users who depend on familiar services rather than state-supported alternatives.

Porn site fined £1m for ignoring UK child safety age checks

A pornographic website has been fined £1m by the UK regulator Ofcom for failing to comply with mandatory age verification under the Online Safety Act. The company, AVS Group Ltd, did not respond to repeated contact from the regulator, prompting an additional £50,000 penalty.

The Act requires websites hosting adult content to implement ‘highly effective age assurance’ to prevent children from accessing explicit material. Ofcom has ordered the company to comply within 72 hours or face further daily fines.

Other tech platforms are also under scrutiny, with one unnamed major social media company undergoing compliance checks. Regulators warn that non-compliance will result in formal action, highlighting the growing enforcement of child safety online.

Critics argue the law must be tougher to ensure real protection, particularly for minors and women online. While age checks have reduced UK traffic to some sites, loopholes like VPNs remain a concern, and regulators are pushing for stricter adherence.

Campaigning in the age of generative AI

Generative AI is rapidly altering the political campaign landscape, argues an ORF article, which outlines how election teams worldwide are adopting AI tools for persuasion, outreach and content creation.

Campaigns can now generate customised messages for different voter groups, produce multilingual content at scale, and automate much of the traditional grunt work of campaigning.

On one hand, proponents say the technology makes campaigning more efficient and accessible, particularly in multilingual or resource-constrained settings. But the ease and speed with which content can be generated also lowers the barrier for misuse: AI-driven deepfakes, synthetic voices and disinformation campaigns can be deployed to mislead voters or distort public discourse.

Recent research supports these worries. For example, large-scale studies published in Science and Nature demonstrated that AI chatbots can influence voter opinions, swaying a non-trivial share of undecided voters toward a target candidate simply by presenting persuasive content.

Meanwhile, independent analyses show that during the 2024 US election campaign, a noticeable fraction of content on social media was AI-generated, sometimes used to spread misleading narratives or exaggerate support for certain candidates.

For democracy and governance, the shift poses thorny challenges. AI-driven campaigns risk eroding public trust, exacerbating polarisation and undermining electoral legitimacy. Regulators and policymakers now face pressure to devise new safeguards, such as transparency requirements around AI usage in political advertising, stronger fact-checking, and clearer accountability for misuse.

The ORF article argues these debates should start now, before AI becomes so entrenched that rollback is impossible.

Japanese high-schooler suspected of hacking net-cafe chain using AI

Authorities in Tokyo have issued an arrest warrant for a 17-year-old boy from Osaka on suspicion of orchestrating a large-scale cyberattack using artificial intelligence. The alleged target was the operator of the Kaikatsu Club internet-café chain and a related fitness-gym business; the attack may have exposed the personal data of about 7.3 million customers.

According to investigators, the suspect used a computer programme, reportedly built with help from an AI chatbot, to send unauthorised commands around 7.24 million times to the company’s servers in order to extract membership information. The teenager was previously arrested in November in connection with a separate fraud case involving credit-card misuse.

Police have pursued charges under Japan’s law against unauthorised computer access and for obstruction of business, though so far no evidence has emerged that the stolen data was misused (for example, resold or leaked publicly).

In his statement to investigators, the suspect reportedly said he carried out the hack simply because he found it fun to probe system vulnerabilities.

This case is the latest in a growing pattern of so-called AI-enabled cyber crimes in Japan, from fraudulent subscription schemes to ransomware generation. Experts warn that generative AI is lowering the barrier to entry for complex attacks, enabling individuals with limited technical training to carry out large-scale hacking or fraud.

Google boosts Nigeria’s AI development

US tech giant Google has announced a $2.1 million Google.org commitment to support Nigeria’s AI-powered future, aiming to strengthen local talent and improve digital safety nationwide.

The initiative supports Nigeria’s National AI Strategy and its ambition to create one million digital jobs, recognising the economic potential of AI, which could add $15 billion to the country’s economy by 2030.

The investment focuses on developing advanced AI skills among students and developers instead of limiting progress to short-term training schemes.

Google will fund programmes led by expert partners such as FATE Foundation, the African Institute for Mathematical Sciences, and the African Technology Forum.

Their work will introduce advanced AI curricula into universities and provide developers with structured, practical routes from training to building real-world products.

The commitment also expands digital safety initiatives so communities can participate securely in the digital economy.

Junior Achievement Africa will scale Google’s ‘Be Internet Awesome’ curriculum to help families understand safe online behaviour, while the CyberSafe Foundation will deliver cybersecurity training and technical assistance to public institutions, strengthening national digital resilience.

Google aims to replicate the success of Nigerian learners who have used digital skills to secure full-time careers rather than remaining excluded from the digital economy.

By combining advanced AI training with improved digital safety, the company intends to support inclusive growth and build long-term capacity across Nigeria.

SAP elevates customer support with proactive AI systems

AI has pushed customer support into a new era, where anticipation replaces reaction. SAP has built a proactive model that predicts issues, prevents failures and keeps critical systems running smoothly instead of relying on queues and manual intervention.

Major sales events, such as Cyber Week and Singles Day, demonstrated the impact of this shift, with uninterrupted service and significant growth in transaction volumes and order numbers.

Self-service now resolves most issues before they reach an engineer, as structured knowledge supports AI agents that respond instantly with a confidence level that matches human performance.

Tools such as the Auto Response Agent and Incident Solution Matching enable customers to retrieve solutions without having to search through lengthy documentation.

SAP has also prepared organisations scaling AI for the transition by offering support systems tailored to early deployment.

Engineers have benefited from AI as much as customers. Routine tasks are handled automatically, allowing experts to focus on problems that demand insight instead of administration.

Language optimisation, routing suggestions, and automatic error categorisation support faster and more accurate resolutions. SAP validates every AI tool internally before release, which it views as a safeguard for responsible adoption.

The company maintains that AI will augment staff rather than replace them. Creative and analytical work becomes increasingly important as automation handles repetitive tasks, and new roles emerge in areas such as AI training and data stewardship.

SAP argues that progress relies on a balanced relationship between human judgement and machine intelligence, strengthened by partnerships that turn enterprise data into measurable outcomes.

Privacy concerns lead India to withdraw cyber safety app mandate

India has scrapped its order requiring smartphone manufacturers to pre-install the state-run Sanchar Saathi cyber safety app. The directive, which faced widespread criticism, had raised concerns over privacy and potential government surveillance.

Smartphone makers, including Apple and Samsung, reportedly resisted the order, highlighting that it was issued without prior consultation and challenged user privacy norms. The government argued the app was necessary to verify handset authenticity.

So far, the Sanchar Saathi app has attracted 14 million users, reporting around 2,000 frauds daily, with a sharp spike of 600,000 new registrations in a single day. Despite these figures, the mandatory pre-installation rule provoked intense backlash from cybersecurity experts and digital rights advocates.

India’s Minister of Communications, Jyotiraditya Scindia, dismissed concerns about surveillance, insisting that the app does not enable snooping. Digital advocacy groups welcomed the withdrawal but called for complete legal clarity on the revised Cyber Security Rules, 2024.
