Meta to block under-16 Australians from Facebook and Instagram early

Meta will begin blocking users in Australia it believes are under 16 from Instagram, Facebook, and Threads on 4 December, a week ahead of the government-mandated social media ban.

Last week, Meta sent in-app messages, emails and texts urging affected users to download their data before their accounts are removed. From 4 December, the company will deactivate existing accounts and block new sign-ups for users under 16.

To appeal the deactivation, targeted users can undergo age verification by providing a ‘video selfie’ to prove they are 16 or older, or by presenting a government-issued ID. Meta says it will ‘review and improve’ its systems, deploying AI-based age-assurance methods to reduce errors.

Observers highlight the risks of false positives in Meta’s age checks. Facial age estimation, conducted through partner company Yoti, has known margins of error.
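To make the false-positive risk concrete, the sketch below shows how a hard age cut-off interacts with an estimate that carries a known error band. It is a purely illustrative Python sketch, not Meta’s or Yoti’s actual pipeline; the function names, the escalation rule, and the two-year error band used in the example are assumptions.

```python
# Illustrative only: not Meta's or Yoti's actual system. Shows why an
# estimator with a known error band produces borderline cases around a
# hard cut-off such as Australia's under-16 rule.

def should_block(estimated_age: float, margin_of_error: float, cutoff: int = 16) -> bool:
    """Block only if the estimate is below the cut-off even at the top of its error band."""
    return estimated_age + margin_of_error < cutoff

def needs_secondary_check(estimated_age: float, margin_of_error: float, cutoff: int = 16) -> bool:
    """Estimates within the error band of the cut-off are ambiguous and would be
    escalated to a stronger check, such as a government-issued ID."""
    return abs(estimated_age - cutoff) <= margin_of_error

# A 17-year-old estimated at 15.2 with a +/-2-year error band is not
# auto-blocked, but falls in the ambiguous zone and gets escalated.
print(should_block(15.2, 2.0))           # False
print(needs_secondary_check(15.2, 2.0))  # True
```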

The enforcement comes amid Australia’s world-first law that bars under-16s from several major social media platforms, including Instagram, Snapchat, TikTok, YouTube and X.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech groups welcome EU reforms as privacy advocates warn of retreat

The EU has unveiled plans to scale back certain aspects of its AI and data privacy rules to revive innovation and alleviate regulatory pressure on businesses. The Digital Omnibus package delays stricter oversight for high-risk AI until 2027 and permits the use of anonymised personal data for model training.

The reforms amend the AI Act and several digital laws, cutting cookie pop-ups and simplifying documentation requirements for smaller firms. EU tech chief Henna Virkkunen says the aim is to boost competitiveness by removing layers of rigid regulation that have hindered start-ups and SMEs.

US tech lobby groups welcomed the overall direction. Still, they criticised the package for not going far enough, particularly on compute thresholds for systemic-risk AI and copyright provisions with cross-border effects. They argue the reforms only partially address industry concerns.

Privacy and digital rights advocates sharply opposed the changes, warning they represent a significant retreat from Europe’s rights-centric regulatory model. Groups including NOYB accused Brussels of undermining hard-won protections in favour of Big Tech interests.

Legal scholars say the proposals could shift Europe closer to a more permissive, industry-driven approach to AI and data use. They warn that the reforms may dilute the EU’s global reputation as a standard-setter for digital rights, just as the world seeks alternatives to US-style regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU eases AI and data rules to boost tech growth

The European Commission has proposed easing AI and data privacy rules to cut red tape and help European tech firms compete internationally. Companies could access datasets more freely for AI training and have 16 months to comply with ‘high-risk’ AI rules.

Brussels also aims to cut the number of cookie pop-ups, allowing users to manage consent more efficiently while protecting privacy. The move has sparked concern among rights groups and campaigners who fear the EU may be softening its stance on Big Tech.

Critics argue that loosening regulations could undermine citizen protections, while European companies welcome the changes as a way to foster innovation and reduce regulatory burdens that have slowed start-ups and smaller businesses.

EU officials emphasise that the reforms balance competitiveness with the protection of fundamental rights, arguing that the measures will help European firms compete with US and Chinese rivals while safeguarding citizens’ privacy.

Simplifying consent mechanisms and giving companies more operational flexibility are central to the plan’s goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

KT launches secure public cloud with Microsoft for South Korean enterprises

South Korean telecoms firm KT Corp has introduced a Secure Public Cloud service in partnership with Microsoft, designed for organisations that must meet the country’s stringent data sovereignty requirements rather than rely solely on global cloud platforms.

Built on Microsoft Azure, the platform targets sectors such as finance and manufacturing, offering high-performance computing while ensuring all data remains stored and processed domestically.

The service rests on three pillars: end-to-end data protection, enhanced enterprise control over cloud resources, and strict compliance with South Korea’s data residency requirements.

Confidential computing encrypts data even during in-memory execution, while a managed hardware security module allows customers to fully own and manage encryption keys, enabling true end-to-end protection.
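The key-ownership claim can be illustrated with envelope encryption, the standard pattern behind customer-managed keys. The Python sketch below is a conceptual illustration under stated assumptions, not KT’s implementation or the Azure Managed HSM API: in the real service the key-encryption key would be generated and held inside the HSM and never exported, whereas here it is an ordinary in-memory key for demonstration (requires the `cryptography` package).

```python
# Conceptual sketch of envelope encryption with a customer-managed key.
# Not KT's implementation or the Azure Managed HSM API: in production the
# key-encryption key (KEK) would live inside the HSM and never leave it.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(kek: bytes, plaintext: bytes) -> dict:
    """Encrypt data with a fresh data key, then wrap that key with the customer's KEK."""
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)

    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(kek).encrypt(wrap_nonce, data_key, None)
    return {"nonce": nonce, "ciphertext": ciphertext,
            "wrap_nonce": wrap_nonce, "wrapped_key": wrapped_key}

def decrypt_record(kek: bytes, record: dict) -> bytes:
    """Only a holder of the KEK can unwrap the data key and read the data."""
    data_key = AESGCM(kek).decrypt(record["wrap_nonce"], record["wrapped_key"], None)
    return AESGCM(data_key).decrypt(record["nonce"], record["ciphertext"], None)

kek = AESGCM.generate_key(bit_length=256)   # stands in for the customer-owned key
stored = encrypt_record(kek, b"sensitive transaction data")
print(decrypt_record(kek, stored))          # b'sensitive transaction data'
```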

KT said the platform is particularly suitable for AI training, transaction-heavy applications, and operational workloads where data exposure could pose major risks.

By combining domestic governance with the flexibility and scalability of Azure, the company aims to give enterprises a reliable cloud solution without compromising performance or compliance.

The launch also strengthens KT’s broader cloud ecosystem, which includes KT Cloud and managed global cloud services like AWS.

KT plans to expand the Secure Public Cloud gradually across industries, responding to rising demand from organisations that need robust domestic data controls and want to avoid the risks of cross-border data exposure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in healthcare gains regulatory compass from UK experts

Professor Alastair Denniston has outlined the core principles for regulating AI in healthcare, describing AI as the ‘X-ray moment’ of our time.

Like previous innovations such as MRI scanners and antibiotics, AI has the potential to dramatically improve diagnosis, treatment and personalised care, but it also requires careful oversight to ensure patient safety.

The MHRA’s National Commission on the Regulation of AI in Healthcare is developing a framework based on three key principles. The framework must be safe, ensuring proportionate regulation that protects patients without stifling innovation.

It must be fast, reducing delays in bringing beneficial technologies to patients and supporting small innovators who cannot endure long regulatory timelines. Ultimately, it must be trusted, with transparent processes that foster confidence in AI technologies today and in the future.

Professor Denniston emphasises that AI is not a single technology but a rapidly evolving ecosystem. The regulatory system must keep pace with advances while allowing the NHS to harness AI safely and efficiently.

Just as with earlier medical breakthroughs, failure to innovate can carry risks equal to the dangers of new technologies themselves.

The National Commission will soon invite the public to contribute their views through a call for evidence.

Patients, healthcare professionals, and members of the public are encouraged to share what matters to them, helping to shape a framework that balances safety, speed, and trust while unlocking the full potential of AI in the NHS.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trilateral sanctions target Media Land for supporting ransomware groups

The United States, the UK, and Australia have imposed coordinated sanctions on Media Land, a Russian bulletproof hosting provider accused of aiding ransomware groups and broader cybercrime. The measures target senior operators and sister companies linked to attacks on businesses and critical infrastructure.

Authorities in the UK and Australia say Media Land infrastructure aided ransomware groups, including LockBit, BlackSuit, and Play, and was linked to denial-of-service attacks on US organisations. OFAC also named operators and firms that maintained systems designed to evade law enforcement.

The action also expands earlier sanctions against Aeza Group, with entities accused of rebranding and shifting infrastructure through front companies such as Hypercore to avoid restrictions introduced this year. Officials say these efforts were designed to obscure operational continuity.

According to investigators, the network relied on overseas firms in Serbia and Uzbekistan to conceal its activity and establish technical infrastructure that was detached from the Aeza brand. These entities, along with the new Aeza leadership, were designated for supporting sanctions evasion and cyber operations.

The sanctions block assets under US jurisdiction and bar US persons from dealing with listed individuals or companies. Regulators warn that financial institutions interacting with sanctioned entities may face penalties, stating that the aim is to disrupt ransomware infrastructure and encourage operators to comply.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

DPDP law takes effect as India tightens AI-era data protections

India has activated new Digital Personal Data Protection rules that sharply restrict how technology firms collect and use personal information. The framework limits data gathering to what is necessary for a declared purpose and requires clear explanations, opt-outs, and breach notifications for Indian users.

The rules apply across digital platforms, from social media and e-commerce to banks and public services. Companies must obtain parental consent for individuals under 18 and are prohibited from using children’s data for targeted advertising. Firms have 18 months to comply with the new safeguards.

Users can request access to their data, ask why it was collected, and demand corrections or updates. They may withdraw consent at any time and, in some cases, request deletion. Companies must respond within 90 days, and individuals can appoint someone to exercise these rights.

Civil society groups welcomed stronger user rights but warned that the rules may also expand state access to personal data. The Internet Freedom Foundation criticised the limited oversight and said the provisions risk entrenching government control and reducing transparency for citizens.

India is preparing further digital regulations, including new requirements for AI and social media firms. With nearly a billion online users, the government has urged platforms to label AI-generated content amid rising concerns about deepfakes, online misinformation, and election integrity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU simplifies digital rules to save billions for companies

The European Commission has unveiled a digital package designed to simplify rules and reduce administrative burdens, allowing businesses to focus on innovation rather than compliance.

The initiative combines the Digital Omnibus, the Data Union Strategy, and the European Business Wallet to strengthen competitiveness across the EU while maintaining high standards of fundamental rights, data protection, and safety.

The Digital Omnibus streamlines rules on AI, cybersecurity, and data. Amendments will create innovation-friendly AI regulations, simplify reporting for cybersecurity incidents, harmonise aspects of the GDPR, and modernise cookie rules.

Improved access to data and regulatory guidance will support businesses, particularly SMEs, allowing them to develop AI solutions and scale operations across member states more efficiently.

The Data Union Strategy aims to unlock high-quality data for AI, strengthen Europe’s data sovereignty, and support businesses with legal guidance and strategic measures to ensure fair treatment of EU data abroad.

Meanwhile, the European Business Wallet will provide a unified digital identity for companies, enabling secure signing, storage, and exchange of documents and communication with public authorities across 27 member states.

By easing administrative procedures, the package could save up to €5 billion by 2029, with the Business Wallet alone offering up to €150 billion in annual savings.

The Commission has launched a public consultation, the Digital Fitness Check, to assess the impact of these rules and guide future steps, ensuring that businesses can grow and innovate instead of being held back by complex regulations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU introduces plan to strengthen consumer protection

The European Commission has unveiled the 2030 Consumer Agenda, a strategic plan to reinforce protection, trust, and competitiveness across the EU.

With 450 million consumers contributing over half of the Union’s GDP, the agenda aims to simplify administrative processes for businesses, rather than adding new burdens, while ensuring fair treatment for shoppers.

The agenda sets four priorities to adapt to rising living costs, evolving online markets, and the surge in e-commerce. Completing the Single Market will remove cross-border barriers and enhance travel and financial services, while the effectiveness of the Geo-Blocking Regulation will be evaluated.

A planned Digital Fairness Act will address harmful online practices, focusing on protecting children and strengthening consumer rights.

Sustainable consumption takes a central focus, with efforts to combat greenwashing, expand access to sustainable goods, and support circular initiatives such as second-hand markets and repairable products.

The Commission will also enhance enforcement to tackle unsafe or non-compliant products, particularly from third countries, ensuring that compliant businesses are shielded from unfair competition.

Implementation will be overseen through the Annual Consumer Summit and regular Ministerial Forums, which will provide political guidance and monitor progress.

The 2030 Consumer Agenda builds on prior achievements and EU consultations, aiming to modernise consumer protection instead of leaving gaps in a rapidly changing market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Northamptonshire Police launches live facial recognition trial

Northamptonshire Police will roll out live facial recognition cameras in three town centres. Deployments are scheduled in Northampton on 28 November and 5 December, in Kettering on 29 November, and in Wellingborough on 6 December.

The initiative uses a van on loan from Bedfordshire Police, and the watch-lists include high-risk sex offenders and people wanted for arrest. Facial and biometric data for non-alerts are deleted immediately, while alert data are held for no more than 24 hours.
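For illustration, the stated retention rule can be expressed as a simple check. The sketch below is hypothetical Python, not the force’s actual software; the record structure and field names are assumptions.

```python
# Hypothetical sketch of the stated retention rule: non-alert biometric data
# is discarded immediately, alert records are purged after at most 24 hours.
from datetime import datetime, timedelta, timezone

ALERT_RETENTION = timedelta(hours=24)

def retain(record: dict, now=None) -> bool:
    """Return True only while a record may still be held under the stated rule."""
    now = now or datetime.now(timezone.utc)
    if not record["matched_watchlist"]:
        return False                                   # non-alert: delete immediately
    return now - record["captured_at"] <= ALERT_RETENTION

capture = {"matched_watchlist": True,
           "captured_at": datetime.now(timezone.utc) - timedelta(hours=30)}
print(retain(capture))   # False: an alert older than 24 hours must be purged
```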

Police emphasise that the AI-based technology is ‘very much in its infancy’ but expect to acquire dedicated equipment in future. A coordinator post is being created to manage the LFR programme in-house.

British campaigners express concern that the biometric tool may erode privacy or amount to mass surveillance. Police assert that appropriate signage and open policy documents will be in place to maintain public confidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!