International Criminal Court (ICC) issues policy on cyber-enabled crimes

The Office of the Prosecutor (OTP) of the International Criminal Court (ICC) has issued a Policy on Cyber-Enabled Crimes under the Rome Statute. The Policy sets out how the OTP interprets and applies the existing ICC legal framework to conduct that is committed or facilitated through digital and cyber means.

The Policy clarifies that the ICC’s jurisdiction remains limited to crimes defined in the Rome Statute: genocide, crimes against humanity, war crimes, the crime of aggression, and offences against the administration of justice. It does not extend to ordinary cybercrimes under domestic law, such as hacking, fraud, or identity theft, unless such conduct forms part of or facilitates one of the crimes within the Court’s jurisdiction.

According to the Policy, the Rome Statute is technology-neutral. This means that the legal assessment of conduct depends on whether the elements of a crime are met, rather than on the specific tools or technologies used.

As a result, cyber means may be relevant both to the commission of Rome Statute crimes and to the collection and assessment of evidence related to them.

The Policy outlines how cyber-enabled conduct may relate to each category of crimes under the Rome Statute. Examples include cyber operations affecting essential civilian services, the use of digital platforms to incite or coordinate violence, cyber activities causing indiscriminate effects in armed conflict, cyber operations linked to inter-State uses of force, and digital interference with evidence, witnesses, or judicial proceedings before the ICC.

The Policy was developed through consultations with internal and external legal and technical experts, including the OTP’s Special Adviser on Cyber-Enabled Crimes, Professor Marko Milanović. It does not modify or expand the ICC’s jurisdiction, which remains governed exclusively by the Rome Statute.

Currently, there are no publicly known ICC cases focused specifically on cyber-enabled crimes. However, the issuance of the Policy reflects the OTP’s assessment that digital conduct may increasingly be relevant to the commission, facilitation, and proof of crimes within the Court’s mandate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump signs order blocking individual US states from enforcing AI rules

US President Donald Trump has signed an executive order aimed at preventing individual US states from enforcing their own AI regulations, arguing that AI oversight should be handled at the federal level. Speaking at the White House, Trump said a single national framework would avoid fragmented rules, while his AI adviser, David Sacks, added that the administration would push back against what it views as overly burdensome state laws, except for measures focused on child safety.

The move is welcomed by major technology companies, which have long warned that a patchwork of state-level regulations could slow innovation and weaken the US position in the global AI race, particularly in comparison to China. Industry groups say a unified national approach would provide clarity for companies investing billions of dollars in AI development and help maintain US leadership in the sector.

However, the executive order has sparked strong backlash from several states, most notably California. Governor Gavin Newsom criticised the decision as an attempt to undermine state protections, pointing to California’s own AI law that requires large developers to address potential risks posed by their models.

Other states, including New York and Colorado, have also enacted AI regulations, arguing that state action is necessary in the absence of comprehensive federal safeguards.

Critics warn that blocking state laws could leave consumers exposed if federal rules are weak or slow to emerge, while some legal experts caution that a national framework will only be effective if it offers meaningful protections. Despite these concerns, tech lobby groups have praised the order and expressed readiness to work with the White House and Congress to establish nationwide AI standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU supports Germany’s semiconductor expansion

The European Commission has approved €623 million in German support for two first-of-a-kind semiconductor factories in Dresden and Erfurt.

The funding will help GlobalFoundries expand its site to create new wafer capacity and will assist X-FAB in building an open foundry designed for advanced micro-electromechanical systems.

Both projects aim to increase Europe’s strategic autonomy in chip production, rather than allowing dependence on non-European suppliers to deepen.

The facility planned by GlobalFoundries will adapt technologies developed under the IPCEI Microelectronics and Communication Technologies framework for dual-use needs in aerospace, defence and critical infrastructure.

The manufacturing process will take place entirely within the EU to meet strict security and reliability demands. X-FAB’s project will offer services that European firms, including start-ups and small companies, currently source from abroad.

The new plant is expected to begin commercial operation by 2029 and will introduce manufacturing capabilities not yet available in Europe.

In return for public support, both companies will pursue innovation programmes, strengthen cross-border cooperation, and apply priority-rated orders during supply shortages, in line with the European Chips Act.

They will also develop training schemes to expand the pool of skilled workers, rather than relying on the limited existing capacity. Each company has committed to seeking recognition for its facilities as Open EU Foundries.

The Commission concluded that the aid packages comply with EU State aid rules because they encourage essential economic activity, have a clear incentive effect and remain proportionate to the funding gaps identified during assessment.

These measures form part of Europe’s broader shift toward a more resilient semiconductor ecosystem and follow earlier decisions supporting similar investments across member states.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit challenges Australia’s teen social media ban

The US social media company Reddit has launched legal action in Australia as the country enforces the world’s first mandatory minimum age for social media access.

Reddit argues that banning users under 16 prevents younger Australians from taking part in political debate, instead of empowering them to learn how to navigate public discussion.

Lawyers representing the company argue that the rule undermines the implied freedom of political communication and could restrict future voters from understanding the issues that will shape national elections.

Australia’s ban took effect on December 10 and requires major platforms to block underage users or face penalties that can reach nearly 50 million Australian dollars.

Companies are relying on age inference and age estimation technologies to meet the obligation, although many have warned that the policy raises privacy concerns in addition to limiting online expression.

The government maintains that the law is designed to reduce harm for younger users and has confirmed that the list of prohibited platforms may expand as new safety issues emerge.

Reddit’s filing names the Commonwealth of Australia and Communications Minister Anika Wells. The minister’s office says the government intends to defend the law and will prioritise the protection of young Australians, rather than allowing open access to high-risk platforms.

The platform’s challenge follows another case brought by an internet rights group that claims the legislation represents an unfair restriction on free speech.

A separate list identifies services that remain open for younger users, such as Roblox, Pinterest and YouTube Kids. At the same time, platforms including Instagram, TikTok, Snapchat, Reddit and X are blocked for those under sixteen.

The case is expected to shape future digital access rights in Australia, as online communities become increasingly central to political education and civic engagement among emerging voters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India expands job access with AI-powered worker platforms

India is reshaping support for its vast informal workforce through e-Shram, a national database built to connect millions of people to social security and better job prospects.

The database works together with the National Career Service portal, and both systems run on Microsoft Azure.

AI tools are now improving access to stable employment by offering skills analysis, resume generation and personalised career pathways.

The original aim of e-Shram was to create a reliable record of informal workers after the pandemic exposed major gaps in welfare coverage. Engineers had to build a platform capable of registering hundreds of millions of people while safeguarding sensitive data.

Azure’s scalable infrastructure allowed the system to process high transaction volumes and maintain strong security protocols. Support reached remote areas through a network of service centres, helped further by Bhashini, an AI language service offering real-time translation in 22 Indian languages.

More than 310 million workers are now registered and linked to programmes providing accident insurance, medical subsidies and housing assistance. The integration with NCS has opened paths to regulated work, often with health insurance or retirement savings.

Workers receive guidance on improving employability, while new features such as AI chatbots and location-focused job searches aim to help those in smaller cities gain equal access to opportunities.

India is using the combined platforms to plan future labour policies, manage skill development and support international mobility for trained workers.

Officials also hope the digital systems will reduce reliance on job brokers and strengthen safe recruitment, including abroad through links with the eMigrate portal.

The government has already presented the platforms to international partners and is preparing to offer them as digital public infrastructure for other countries seeking similar reforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian families receive eSafety support as the social media age limit takes effect

Australia has introduced a minimum age requirement of 16 for social media accounts this week, marking a significant shift in its online safety framework.

The eSafety Commissioner has begun monitoring compliance, offering a protective buffer for young people as they develop digital skills and resilience. Platforms now face stricter oversight, with potential penalties for systemic breaches, and age assurance requirements for both new and current users.

Authorities stress that the new age rule forms part of a broader effort aimed at promoting safer online environments, rather than relying on isolated interventions. Australia’s online safety programmes continue to combine regulation, education and industry engagement.

Families and educators are encouraged to use the resources on the eSafety website, which now features information hubs explaining the changes, how age assurance works, and what young people can expect during the transition.

Regional and rural communities in Australia are receiving targeted support, acknowledging that the change may affect them more sharply due to limited local services and higher reliance on online platforms.

Tailored guidance, conversation prompts, and step-by-step materials have been produced in partnership with national mental health organisations.

Young people are reminded that they retain access to group messaging tools, gaming services and video conferencing apps while they await eligibility for full social media accounts.

eSafety officials underline that the new limit introduces a delay rather than a ban. The aim is to reduce exposure to persuasive design and potential harm while encouraging stronger digital literacy, emotional resilience and critical thinking.

Ongoing webinars and on-demand sessions provide additional support as the enforcement phase progresses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vietnam passes first AI law with strict safeguards

Vietnam’s National Assembly has passed its first AI Law, advancing the regulation and development of AI nationwide. The legislation was approved with overwhelming support, alongside amendments to the Intellectual Property Law and a revised High Technology Law.

The AI Law will take effect on March 1, 2026.

The law establishes core principles, prohibits certain acts, and outlines a risk management framework for AI systems. It combines safeguards for high-risk AI with incentives for innovation, including sandbox testing, a National AI Development Fund, and startup vouchers.

AI oversight will be centralised under the Government and led by the Ministry of Science and Technology, with assessments required only for high-risk systems on a list approved by the Prime Minister. The law allows real-time updates to this list to keep pace with technological advances.

Flexible provisions prevent obsolescence by avoiding fixed technology lists or rigid risk classifications. Lawmakers emphasised the balance between regulation and innovation, aiming to create a safe yet supportive environment for AI growth in Vietnam.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global network strengthens AI measurement and evaluation

Leaders around the world have committed to strengthening the scientific measurement and evaluation of AI following a recent meeting in San Diego.

Representatives from major economies agreed to intensify collaboration under the newly renamed International Network for Advanced AI Measurement, Evaluation and Science.

The UK has assumed the role of Network Coordinator, guiding efforts to create rigorous, globally recognised methods for assessing advanced AI systems.

The network includes Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the US, promoting shared understanding and consistent evaluation practices.

Since its formation in November 2024, the Network has fostered knowledge exchange to align countries on best practices in AI measurement and evaluation. Boosting public trust in AI remains central to the effort, helping unlock innovation, new jobs, and opportunities for businesses and innovators to expand.

The recent San Diego discussions coincided with NeurIPS, allowing government, academic and industry stakeholders to collaborate more deeply.

AI Minister Kanishka Narayan highlighted the importance of trust as a foundation for progress, while Adam Beaumont, Interim Director of the AI Security Institute, stressed the need for global approaches to testing advanced AI.

The Network aims to provide practical and rigorous evaluation tools to ensure the safe development and deployment of AI worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google faces scrutiny over AI use of online content

The European Commission has opened an antitrust probe into Google over concerns it used publisher and YouTube content to develop its AI services on unfair terms.

Regulators are assessing whether Google used its dominant position to gain unfair access to content powering features like AI Overviews and AI Mode. They are examining whether publishers were disadvantaged by being unable to refuse use of their content without losing visibility on Google Search.

The probe also covers concerns that YouTube creators may have been required to allow the use of their videos for AI training without compensation, while rival AI developers remain barred from using YouTube content.

The investigation will determine whether these practices breached EU rules on abuse of dominance under Article 102 TFEU. Authorities intend to prioritise the case, though no deadline applies.

Google and national competition authorities have been formally notified as the inquiry proceeds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia enforces under-16 social media ban as new rules take effect

Australia has finally introduced the world’s first nationwide prohibition on social media use for under-16s, forcing platforms to delete millions of accounts and prevent new registrations.

Instagram, TikTok, Facebook, YouTube, Snapchat, Reddit, Twitch, Kick and Threads are removing accounts held by younger users. Bluesky has agreed to apply the same standard despite not being compelled to do so. The only major platform yet to confirm compliance is X.

The measure follows weeks of age-assurance checks, which have not been flawless, with cases of younger teenagers passing facial-verification tests designed to keep them offline.

Families are facing sharply different realities. Some teenagers feel cut off from friends who managed to bypass age checks, while others suddenly gain a structure that helps reduce unhealthy screen habits.

A small but vocal group of parents admit they are teaching their children how to use VPNs and alternative methods instead of accepting the ban, arguing that teenagers risk social isolation when friends remain active.

Supporters of the legislation counter that Australia imposes clear age limits in other areas of public life for reasons of well-being and community standards, and the same logic should shape online environments.

Regulators are preparing to monitor the transition closely.

The eSafety Commissioner will demand detailed reports from every platform covered by the law, including the volume of accounts removed, evidence of efforts to stop circumvention and assessments of whether reporting and appeals systems are functioning as intended.

Companies that fail to take reasonable steps may face significant fines. A government-backed academic advisory group will study impacts on behaviour, well-being, learning and unintended shifts towards more dangerous corners of the internet.

Global attention is growing as several countries weigh similar approaches. Denmark, Norway and Malaysia have already indicated they may replicate Australia’s framework, and the EU has endorsed the principle in a recent resolution.

Interest from abroad signals a broader debate about how societies should balance safety and autonomy for young people in digital spaces, instead of relying solely on platforms to set their own rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!