Australia steps up platform scrutiny after mass Snapchat removals

Snapchat has blocked more than 415,000 Australian accounts after the national ban on under-16s began, marking a rapid escalation in the country’s effort to restrict children’s access to major platforms.

The company relied on a mix of self-reported ages and age-detection technologies to identify users who appeared to be under 16.

The platform warned that age-verification technology still has serious shortcomings, leaving room for teenagers to bypass safeguards rather than ensuring reliable compliance.

Facial age-estimation tools are accurate only within a narrow age range, meaning some under-16s may slip through while older users risk losing access. Snapchat also noted that teenagers are likely to shift towards less regulated messaging apps.

The eSafety Commissioner has focused regulatory pressure on the 10 largest platforms, although all services with Australian users are expected to assess whether they fall under the new requirements.

Officials have acknowledged that the technology needs improvement and that reliability issues, such as the absence of a liveness check, contributed to false results.

More than 4.7 million accounts have been deactivated across the major platforms since the ban began, although the figure includes inactive and duplicate accounts.

Authorities in Australia expect further enforcement, with notices set to be issued to companies that fail to meet the new standards.


France challenges EU privacy overhaul

The EU’s attempt to revise core privacy rules has faced resistance from France, which argues that the Commission’s proposals would weaken rather than strengthen long-standing protections.

Paris objects strongly to proposed changes to the definition of personal data within the General Data Protection Regulation, which remains the foundation of European privacy law. Officials have also raised concerns about several smaller adjustments included in the broader effort to modernise digital legislation.

These proposals form part of the Digital Omnibus package, a set of updates intended to streamline EU data rules. France argues that altering the GDPR’s definitions could change the balance between data controllers, regulators and citizens, creating uncertainty for national enforcement bodies.

The national government maintains that the existing framework already includes the flexibility needed to interpret sensitive information.

The disagreement highlights renewed tension inside the Union as institutions examine the future direction of privacy governance.

Several member states want greater clarity in an era shaped by AI and cross-border data flows. In contrast, others fear that opening the GDPR could lead to inconsistent application across Europe.

Talks are expected to continue in the coming months as EU negotiators weigh the political risks of narrowing or widening the scope of personal data.

France’s firm stance suggests that consensus may prove difficult, particularly as governments seek to balance economic goals with unwavering commitments to user protection.


EU plans a secure military data space by 2030

Institutions in the EU have begun designing a new framework to help European armies share defence information securely, rather than relying on US technology.

The plan centres on a military-grade data platform, the European Defence Artificial Intelligence Data Space, intended to support sensitive exchanges among defence authorities.

Ultimately, the approach aims to replace the current patchwork of foreign infrastructure that many member states rely on to store and transfer national security data.

The European Defence Agency is leading the effort and expects the platform to be fully operational by 2030. The concept includes two complementary elements: a sovereign military cloud for data storage and a federated system that allows countries to exchange information on a trusted basis.

Officials argue that this will improve interoperability, speed up joint decision-making, and enhance operational readiness across the bloc.

The project aligns with broader concerns about strategic autonomy, as EU leaders increasingly question long-standing dependencies on American providers.

Several European companies have been contracted to develop the early technical foundations. The next step is persuading governments to coordinate future purchases so their systems remain compatible with the emerging framework.

Planning documents suggest that by 2029, member states should begin integrating the data space into routine military operations, including training missions and coordinated exercises. EU authorities maintain that stronger control of defence data will be essential as military AI expands across European forces.


Submarine cables keep the global internet running

The smooth functioning of the global internet depends on a largely unseen but critical system: the undersea fibre-optic cables that carry nearly all international data traffic. These cables, laid across the ocean floor, support everything from everyday online communication to global financial transactions.

Ahead of the Second International Submarine Cable Resilience Summit in Porto, Portugal, the International Telecommunication Union (ITU) has drawn attention to the growing importance of protecting this infrastructure.

Tomas Lamanauskas, Deputy Secretary-General of ITU, has stressed that submarine cables are the backbone of global connectivity and that their resilience must be strengthened as societies become ever more dependent on digital networks. From their origins as 19th-century telegraph lines, undersea cables have evolved into high-capacity systems capable of transmitting hundreds of terabits of data per second, forming a dense web that connects continents, economies, and communities.

Today, more than 500 commercial submarine cables stretch for roughly 1.7 million kilometres beneath the seas. Although these cables are relatively thin, their installation is complex, requiring detailed seabed surveys, environmental assessments, and specialised cable-laying vessels to ensure safe deployment and protection.

Despite their robust design, undersea cables remain vulnerable. Natural hazards such as earthquakes and underwater landslides pose risks, but around 80% of cable faults are caused by human activities, including ship anchors and fishing trawlers.

When cables are damaged, the effects can be immediate, disrupting internet access, emergency communications, financial services, and digital healthcare and education, particularly in remote or island regions.

Repairing or replacing damaged cables is often slow and costly. While faults can usually be located quickly, repairs may be delayed by complex permitting procedures and coordination across multiple jurisdictions.

With some cables installed during the dot-com boom now approaching the end of their lifespan, ITU is increasingly focused on fostering international cooperation, setting standards, and promoting best practices to ensure that these hidden networks can continue to support global connectivity in the years ahead.


UNESCO and HBKU advance research on digital behaviour

Hamad Bin Khalifa University has unveiled the UNESCO Chair on Digital Technologies and Human Behaviour to strengthen global understanding of how emerging tools shape society.

The initiative, based in the College of Science and Engineering in Qatar, will examine the relationship between digital adoption and human behaviour, focusing on digital well-being, ethical design and healthier online environments.

The Chair is set to address issues such as internet addiction, cyberbullying and misinformation through research and policy-oriented work.

By promoting dialogue among international organisations, governments and academic institutions, the programme aims to support more responsible development of digital technologies and to counter approaches that overlook societal impact.

HBKU’s long-standing emphasis on ethical innovation formed the foundation for the new initiative. The launch event brought together experts from several disciplines to discuss behavioural change driven by AI, mobile computing and social media.

An expert panel considered how GenAI can improve daily life while also increasing dependency, and encouraged users to build a more intentional and balanced relationship with AI systems.

UNESCO underlined the importance of linking scientific research with practical policymaking to guide institutions and communities.

The Chair is expected to strengthen cooperation across sectors and support progress on global development goals by ensuring digital transformation remains aligned with human dignity, social cohesion and inclusive growth.


AI news needs ‘nutrition labels’, UK think tank says amid concerns over gatekeepers

A leading British think tank has urged the government to introduce ‘nutrition labels’ for AI-generated news, arguing that clearer rules are needed as AI becomes a dominant source of information.

The Institute for Public Policy Research said AI firms are increasingly acting as new gatekeepers of the internet and must pay publishers for the journalism that shapes their output.

The group recommended standardised labels showing which sources underpin AI-generated answers, instead of leaving users unsure about the origin or reliability of the material they read.
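
The report does not prescribe a format, but to make the idea concrete, a machine-readable label might look something like the following Python sketch. Every field name here is a hypothetical assumption for illustration, not part of the IPPR proposal.

```python
# Purely hypothetical sketch of a machine-readable "nutrition label" for
# an AI-generated news answer. Every field name is an illustrative
# assumption; the IPPR report does not prescribe a format.
from dataclasses import dataclass, field

@dataclass
class SourceAttribution:
    publisher: str          # name of the outlet the material came from
    url: str                # link back to the original article
    licensed: bool          # whether a licensing deal covers the use
    share_of_answer: float  # rough fraction of the answer drawn from it

@dataclass
class NewsNutritionLabel:
    model: str              # which AI system produced the answer
    generated_at: str       # ISO 8601 timestamp
    sources: list[SourceAttribution] = field(default_factory=list)

label = NewsNutritionLabel(
    model="example-model",
    generated_at="2026-02-10T09:00:00Z",
    sources=[
        SourceAttribution(
            publisher="Example Publisher",
            url="https://example.org/article",
            licensed=True,
            share_of_answer=0.4,
        )
    ],
)
print(label)
```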

It also called for a formal licensing system in the UK that would allow publishers to negotiate directly with technology companies over the use of their content. The move comes as a growing share of the public turns to AI for news, while Google’s AI summaries reach billions each month.

IPPR’s study found that some AI platforms rely heavily on content from outlets with licensing agreements, such as the Guardian and the Financial Times, while others, like the BBC, appear far less often due to restrictions on scraping.

The think tank warned that such patterns could weaken media plurality by sidelining local and smaller publishers instead of supporting a balanced ecosystem. It added that Google’s search summaries have already reduced traffic to news websites by providing answers before users click through.

The report said public funding should help sustain investigative and local journalism as AI tools expand. OpenAI responded that its products highlight sources and provide links to publishers, arguing that careful design can strengthen trust in the information people see online.


AI-driven scams dominate malicious email campaigns

The Catalan Cybersecurity Agency has warned that generative AI is now being used in the vast majority of email scams containing malicious links. Its Cybersecurity Outlook Report for 2026 found that more than 80% of such messages rely on AI-generated content.

The report shows that 82.6% of emails carrying malicious links include text, video, or voice produced using AI tools, making fraudulent messages increasingly difficult to identify. Scammers use AI to create near-flawless messages that closely mimic legitimate communications.

Agency director Laura Caballero said the sophistication of AI-generated scams means users face greater risks, while businesses and platforms are turning to AI-based defences to counter the threat.

She urged a ‘technology against technology’ approach, combined with stronger public awareness and basic security practices such as two-factor authentication.
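
For readers unfamiliar with the mechanics, the common TOTP form of two-factor authentication is simple enough to sketch in a few lines. A minimal example following RFC 6238, using only the Python standard library (the secret shown is a demo value, not tied to any real account):

```python
# Minimal RFC 6238 TOTP sketch using only the standard library.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # 6-digit code for the current time step
```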

Cyber incidents are also rising. The agency handled 3,372 cases in 2024, a 26% increase year on year, mostly involving credential leaks and unauthorised email access.

In response, the Catalan government has launched a new cybersecurity strategy backed by an €18.6 million investment to protect critical public services.


Moltbook AI vulnerability exposes user data and API keys

A critical security flaw has emerged in Moltbook, a new AI agent social network launched by Octane AI.

The vulnerability allowed unauthenticated access to user profiles, exposing email addresses, login tokens, and API keys for registered agents. The platform’s rapid growth, with a claimed 1.5 million users, was largely artificial, as a single agent reportedly created hundreds of thousands of fake accounts.

Moltbook enables AI agents to post, comment, and form sub-communities, fostering interactions that range from AI debates to token-related activities.

Analysts warned that prompt injections and unregulated agent interactions could lead to credential theft or destructive actions, including data exfiltration or account hijacking. Experts described the platform as both a milestone in scale and a serious security concern.

Developers have not confirmed any patches, leaving users and enterprises exposed. Security specialists advised revoking API keys, sandboxing AI agents, and auditing potential exposures.
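
To make the auditing step concrete, here is a minimal sketch of one way to scan for exposed keys. The patterns are purely illustrative assumptions, not Moltbook-specific; dedicated scanners cover far more cases.

```python
# Illustrative sketch of the "audit potential exposures" step: scan a
# directory tree for strings that look like hard-coded API keys. The
# patterns are simplified assumptions; dedicated scanners such as
# gitleaks or trufflehog ship far more comprehensive rule sets.
import re
from pathlib import Path

KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),    # OpenAI-style secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key IDs
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def scan_tree(root: str) -> None:
    """Print files that appear to contain hard-coded credentials."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue  # skip directories and very large files
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        if any(p.search(text) for p in KEY_PATTERNS):
            print(f"possible credential in {path}")

scan_tree(".")
```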

The lack of safeguards on the platform highlights the risks of unchecked AI agent networks, particularly for organisations that may rely on them without proper oversight.

The incident underscores the growing need for stronger governance in AI-powered social networks. Experts stress that without enforced security protocols, such platforms could be exploited at scale, affecting both individual users and corporate systems.

The Moltbook case serves as a warning about prioritising hype over security in emerging AI applications.


Grok returns to Indonesia as X agrees to tightened oversight

Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.

Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.

The ministry emphasised that the document serves as a starting point for evaluation instead of signalling the end of supervision.

However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.

The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.


Roblox faces new Dutch scrutiny under EU digital rules

Regulators in the Netherlands have opened a formal investigation into Roblox over concerns about inadequate protections for children using the popular gaming platform.

The national authority responsible for enforcing digital rules is examining whether the company has implemented the safeguards required under the Digital Services Act rather than relying solely on voluntary measures.

Officials say children may have been exposed to harmful environments, including violent or sexualised material, as well as manipulative interfaces that encourage extended play.

The concerns intensify pressure on EU authorities to monitor social platforms that attract younger users, even when they do not meet the threshold for very large online platforms.

Roblox says it has worked with Dutch regulators for months and recently introduced age checks for users who want to use chat. The company argues that it has invested in systems designed to reinforce privacy, security and safety features for minors.

The Dutch authority plans to conclude the investigation within a year. The outcome could include fines or broader compliance requirements and is likely to influence upcoming European rules on gaming and consumer protection, due later in the decade.
