Meta to block under-16 Australians from Facebook and Instagram early

Meta will begin blocking users in Australia who it believes are under 16 from Instagram, Facebook, and Threads on 4 December, a week ahead of the government-mandated social media ban.

Last week, Meta sent in-app messages, emails and texts warning affected users to download their data before their accounts are deactivated. From 4 December, the company will deactivate existing accounts and block new sign-ups for users under 16.

To appeal the deactivation, targeted users can undergo age verification by providing a ‘video selfie’ to prove they are 16 or older, or by presenting a government-issued ID. Meta says it will ‘review and improve’ its systems, deploying AI-based age-assurance methods to reduce errors.

Observers highlight the risk of false positives in Meta’s age checks, since facial age estimation, conducted through partner company Yoti, carries known margins of error that could see users aged 16 or over wrongly flagged as under age.

The enforcement comes amid Australia’s world-first law barring under-16s from several major social media platforms, including Instagram, Snapchat, TikTok, YouTube and X.

Tech groups welcome EU reforms as privacy advocates warn of retreat

The EU has unveiled plans to scale back certain aspects of its AI and data privacy rules to revive innovation and alleviate regulatory pressure on businesses. The Digital Omnibus package delays stricter oversight for high-risk AI until 2027 and permits the use of anonymised personal data for model training.

The reforms amend the AI Act and several digital laws, cutting cookie pop-ups and simplifying documentation requirements for smaller firms. EU tech chief Henna Virkkunen says the aim is to boost competitiveness by removing layers of rigid regulation that have hindered start-ups and SMEs.

US tech lobby groups welcomed the overall direction but criticised the package for not going far enough, particularly on compute thresholds for systemic-risk AI and copyright provisions with cross-border effects. They argue the reforms only partially address industry concerns.

Privacy and digital rights advocates sharply opposed the changes, warning they represent a significant retreat from Europe’s rights-centric regulatory model. Groups including NOYB accused Brussels of undermining hard-won protections in favour of Big Tech interests.

Legal scholars say the proposals could shift Europe closer to a more permissive, industry-driven approach to AI and data use. They warn that the reforms may dilute the EU’s global reputation as a standard-setter for digital rights, just as the world seeks alternatives to US-style regulation.

EU eases AI and data rules to boost tech growth

The European Commission has proposed easing AI and data privacy rules to cut red tape and help European tech firms compete internationally. Companies could access datasets more freely for AI training and have 16 months to comply with ‘high-risk’ AI rules.

Brussels also aims to cut the number of cookie pop-ups, allowing users to manage consent more efficiently while protecting privacy. The move has sparked concern among rights groups and campaigners who fear the EU may be softening its stance on Big Tech.

Critics argue that loosening regulations could undermine citizen protections, while European companies welcome the changes as a way to foster innovation and reduce regulatory burdens that have slowed start-ups and smaller businesses.

EU officials emphasise that the reforms seek a balance between competitiveness and fundamental rights, arguing the measures will help European firms compete with US and Chinese rivals while continuing to safeguard citizens’ privacy.

Simplifying consent mechanisms and giving companies greater operational flexibility are central to the plan.

US administration pushes back on proposal to restrict Nvidia sales to China

The White House is urging Congress to reject a bipartisan proposal that would restrict Nvidia from selling advanced AI chips to China and other countries subject to an embargo. The GAIN AI Act would require chipmakers to prioritise US buyers before exporting high-performance hardware.

Lawmakers are debating whether to attach the provision to the annual defence spending bill, a move that could accelerate approval. The White House intervention represents a significant win for Nvidia, which has lobbied to maintain export flexibility amid shifting trade policies.

China was previously a significant market for Nvidia, but the firm has pared back expectations due to rising geopolitical risks. Beijing has also increased scrutiny of US-made chips as it pushes for self-reliance in AI and semiconductor technology.

The policy discussions come shortly after Nvidia posted stronger-than-expected third-quarter earnings and issued an upbeat outlook. CEO Jensen Huang has pushed back against concerns of an AI-driven valuation bubble, arguing demand remains robust.

Nvidia’s shares rose 5 percent in after-hours trading following the earnings report, reflecting investor confidence as Washington continues to debate the future of AI chip export controls.

KT launches secure public cloud with Microsoft for South Korean enterprises

South Korean telecoms firm KT Corp has introduced a Secure Public Cloud service in partnership with Microsoft, designed to meet the country’s stringent data sovereignty requirements rather than leaving enterprises to rely solely on global cloud platforms.

Built on Microsoft Azure, the platform targets sectors such as finance and manufacturing, offering high-performance computing while ensuring all data remains stored and processed domestically.

The service rests on three pillars: end-to-end data protection, enhanced enterprise control over cloud resources, and strict compliance with South Korea’s data residency requirements.

Confidential computing encrypts data even during in-memory execution, while a managed hardware security module allows customers to fully own and manage encryption keys, enabling true end-to-end protection.

KT said the platform is particularly suitable for AI training, transaction-heavy applications, and operational workloads where data exposure could pose major risks.

By combining domestic governance with the flexibility and scalability of Azure, the company aims to give enterprises a reliable cloud solution without compromising performance or compliance.

The launch also strengthens KT’s broader cloud ecosystem, which includes KT Cloud and managed global cloud services like AWS.

KT plans to expand the Secure Public Cloud gradually across industries, responding to rising demand from organisations that need robust domestic data controls and want to avoid the risks of cross-border data exposure.

Foxconn and OpenAI strengthen US AI manufacturing

OpenAI has formed a new partnership with Foxconn to prepare US manufacturing for a fresh generation of AI infrastructure hardware.

The agreement centres on design support and early evaluation rather than immediate purchase commitments, giving OpenAI a path to influence development while Foxconn builds manufacturing readiness in its American facilities.

Both companies expect rapid advances in AI capability to demand a new class of physical infrastructure. They plan to co-design several generations of data centre racks that can keep pace with model development instead of relying on slower single-cycle upgrades.

OpenAI will share insight into future hardware needs while Foxconn provides engineering knowledge and large-scale manufacturing capacity across the US.

A key aim is to strengthen domestic supply chains by improving rack architecture, widening access to domestic chip suppliers and expanding local testing and assembly. Foxconn intends to produce essential data centre components in the US, including cabling, networking, cooling and power systems.

The companies present the effort as a way to support faster deployment, create more resilient infrastructure and bring economic benefits to American workers.

OpenAI frames the partnership as part of a broader push to ensure that critical AI infrastructure is built within the US instead of abroad. Company leaders argue that a robust domestic supply chain will support American leadership in AI and keep the benefits widely shared across the economy.

AI in healthcare gains regulatory compass from UK experts

Professor Alastair Denniston has outlined the core principles for regulating AI in healthcare, describing AI as the ‘X-ray moment’ of our time.

Like previous innovations such as MRI scanners and antibiotics, AI has the potential to dramatically improve diagnosis, treatment and personalised care, but it also requires careful oversight to ensure patient safety.

The MHRA’s National Commission on the Regulation of AI in Healthcare is developing a framework based on three key principles. The framework must be safe, ensuring proportionate regulation that protects patients without stifling innovation.

It must be fast, reducing delays in bringing beneficial technologies to patients and supporting small innovators who cannot endure long regulatory timelines. Finally, it must be trusted, with transparent processes that foster confidence in AI technologies today and in the future.

Professor Denniston emphasises that AI is not a single technology but a rapidly evolving ecosystem. The regulatory system must keep pace with advances while allowing the NHS to harness AI safely and efficiently.

Just as with earlier medical breakthroughs, failure to innovate can carry risks equal to the dangers of new technologies themselves.

The National Commission will soon invite the public to contribute their views through a call for evidence.

Patients, healthcare professionals, and members of the public are encouraged to share what matters to them, helping to shape a framework that balances safety, speed, and trust while unlocking the full potential of AI in the NHS.

TikTok rolls out mindfulness and screen-time reset features

TikTok has announced a set of new well-being features designed to help users build more balanced digital habits. The rollout includes an in-app experience with breathing exercises, calming audio tracks and short ‘Well-being Missions’ that reward mindful behaviour.

The missions are interactive tasks, such as quizzes and flashcards, that encourage users to explore TikTok’s existing digital-wellness tools (like Sleep Hours and Screen Time Management). Completing these missions earns users badges, reinforcing positive habits. In early tests, approximately 40 percent of people who saw the missions chose to try them.

TikTok is also experimenting with a dedicated ‘pause and recharge’ space within the app, offering safe, calming activities that help users disconnect, for instance before bedtime or after long scrolling sessions.

The broader effort reflects TikTok’s growing emphasis on digital wellness, part of a wider industry trend towards more responsible and healthy use of social platforms.

DPDP law takes effect as India tightens AI-era data protections

India has activated new Digital Personal Data Protection rules that sharply restrict how technology firms collect and use personal information. The framework limits data gathering to what is necessary for a declared purpose and requires clear explanations, opt-outs, and breach notifications for Indian users.

The rules apply across digital platforms, from social media and e-commerce to banks and public services. Companies must obtain parental consent for individuals under 18 and are prohibited from using children’s data for targeted advertising. Firms have 18 months to comply with the new safeguards.

Users can request access to their data, ask why it was collected, and demand corrections or updates. They may withdraw consent at any time and, in some cases, request deletion. Companies must respond within 90 days, and individuals can appoint someone to exercise these rights.

Civil society groups welcomed the stronger user rights but warned that the rules may also expand state access to personal data. The Internet Freedom Foundation criticised the limited oversight and said the provisions risk entrenching government control while reducing transparency for citizens.

India is preparing further digital regulations, including new requirements for AI and social media firms. With nearly a billion online users, the government has urged platforms to label AI-generated content amid rising concerns about deepfakes, online misinformation, and election integrity.

Target expands OpenAI partnership with new ChatGPT shopping app

Target is expanding its partnership with OpenAI by launching a new shopping app directly inside ChatGPT. The app offers customers personalised recommendations, multi-item baskets and streamlined checkout across Drive Up, Order Pickup and shipping.

The retailer will continue using OpenAI’s models and ChatGPT Enterprise to enhance employee productivity and strengthen digital experiences across its business.

AI is central to Target’s operations, supporting supply-chain forecasts, store processes, and personalised digital tools. Over 18,000 employees use ChatGPT Enterprise to streamline routine tasks, enhance creativity, and get faster support for guest requests and returns through internal AI assistants.

Customer-facing tools such as Shopping Assistant, Gift Finder, Guest Assist and JOY reinforce this strategy by offering curated suggestions and instant answers.

The new Target app inside ChatGPT extends this AI-driven approach to customers. Shoppers will be able to ask for ideas, browse curated suggestions, build baskets and check out through their Target accounts.

The beta version launches next week, and upcoming features include Target Circle linking and same-day delivery. Target views the partnership as part of a retail shift, embedding AI across products, operations and guest interactions to drive the next wave of innovation.
