India considers social media bans for children under 16

India is emerging as a potential test case for age-based social media restrictions as several states examine Australia-style bans on children’s access to platforms.

Goa and Andhra Pradesh are studying whether to prohibit social media use for those under 16, citing growing concerns over online safety and youth well-being. The debate has also reached the judiciary, with the Madras High Court urging the federal government to consider similar measures.

The proposals carry major implications for global technology companies, given that India’s internet population exceeds one billion users and continues to skew young.

Platforms such as Meta, Google and X rely heavily on India for long-term growth, advertising revenue and user expansion. Industry voices argue parental oversight is more effective than government bans, warning that restrictions could push minors towards unregulated digital spaces.

Australia’s under-16 ban, which came into force in late 2025, has already exposed enforcement difficulties, particularly around age verification and privacy risks. Determining users’ ages accurately remains challenging, while digital identity systems raise concerns about data security and surveillance.

Legal experts note that internet governance falls under India’s federal authority, limiting what individual states can enforce without central approval.

Although India’s data protection law includes safeguards for children, its full implementation will extend through 2027, leaving policymakers to balance child protection, platform accountability and the risk of unintended consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Austrian watchdog rules against Microsoft education tracking

Austria’s data protection authority has ruled that Microsoft unlawfully placed tracking cookies on a child’s device without valid consent.

The case stems from a complaint filed by a privacy group, noyb, concerning Microsoft 365 Education, a platform used by millions of pupils and teachers across Europe.

According to the decision, Microsoft deployed cookies that analysed user behaviour, collected browser data and served advertising purposes, despite being used in an educational context involving minors. The Austrian authority ordered the company to cease the unlawful tracking within four weeks.

Noyb warned the ruling could have broader implications for organisations relying on Microsoft software, particularly schools and public bodies. A data protection lawyer at the group criticised Microsoft’s approach to privacy, arguing that protections appear secondary to marketing considerations.

The ruling follows earlier GDPR findings against Microsoft, including violations of access rights and concerns raised over the European Commission’s own use of Microsoft 365.

Although previous enforcement actions were closed after contractual changes, regulatory scrutiny of Microsoft’s education and public sector products continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI model detects wide range of health risks via sleep analysis

Recent research indicates that AI applied to sleep pattern analysis can identify, from a single night’s sleep record, signals linked to more than 130 health conditions, including heart disease, metabolic dysfunction and respiratory issues.

By using machine learning to analyse detailed physiological data collected during sleep, AI models may reveal subtle patterns that correlate with existing or future health risks.
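
As an illustration only, the sketch below shows how a multi-label classifier of the kind described might be trained on tabular per-night features. The features, condition labels and synthetic data are all hypothetical assumptions, not the researchers’ actual model or dataset.

```python
# Illustrative sketch only: hypothetical features and labels,
# not the published study's model or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-night physiological features: sleep-stage fractions,
# mean heart rate, heart-rate variability, oxygen-saturation dips, etc.
X = rng.normal(size=(1000, 12))

# Hypothetical binary labels for a handful of conditions (the study
# reports signals for 130+; three are shown here for brevity).
y = rng.integers(0, 2, size=(1000, 3))  # heart, metabolic, respiratory

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One random forest per condition, wrapped as a multi-label classifier.
model = MultiOutputClassifier(
    RandomForestClassifier(n_estimators=200, random_state=0)
)
model.fit(X_train, y_train)

# Per-condition risk probabilities for a new night of sleep data.
probs = [p[:, 1] for p in model.predict_proba(X_test[:1])]
print(probs)  # one probability per condition for the screened night
```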

Proponents suggest that this technology could support early detection and preventative healthcare by offering a non-invasive way to screen for multiple conditions simultaneously, potentially guiding timely medical intervention.

However, clinicians stress that such AI tools should complement, not replace, formal medical evaluation and diagnosis.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta tests paid features on Facebook, Instagram and WhatsApp

Meta is preparing to test subscriptions for Facebook, Instagram and WhatsApp as it explores new revenue streams while keeping core access free. Paid tiers would place selected features and advanced sharing controls behind a subscription.

Early signals indicate the subscriptions could launch within months, with each platform offering its own set of premium tools. Meta has confirmed it will trial multiple formats rather than rely on a single bundled model.

AI plays a central role in the plan, with subscribers gaining access to AI-powered features, including video generation. The recently acquired Manus AI agent will be integrated across Meta services and offered separately to business users.

User reaction is expected to influence how far the company pushes the model, including potential bundles or platform-specific pricing. Wider acceptance could encourage other social networks to adopt similar subscription strategies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Data privacy shifts from breaches to authorised surveillance

Data Privacy Week has returned at a time when personal information is increasingly collected by default rather than through breaches. Campaigns urge awareness, yet privacy is being reshaped by lawful, large-scale data gathering driven by corporate and government systems.

In the US, companies now collect, retain and combine data with AI tools under legal authority, often without meaningful consent. Platforms such as TikTok illustrate how vast datasets are harvested regardless of ownership, shifting debates towards who controls data rather than how much is taken.

US policy responses have focused on national security rather than limiting surveillance itself. Pressure on TikTok to separate from Chinese ownership left data collection intact, while border authorities in the US are seeking broader access to travellers’ digital and biometric information.

Across the US technology sector, privacy increasingly centres on agency rather than secrecy. Data Privacy Week highlights growing concern that once information is gathered, control is lost, leaving accountability lagging behind capability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Facial recognition expansion anchors UK policing reforms driven by AI

UK authorities have unveiled a major policing reform programme that places AI and facial recognition at the centre of future law enforcement strategy. The plans include expanding the use of Live Facial Recognition and creating a national hub to scale AI tools across police forces.

The Home Office will fund 40 new facial recognition vans for town centres across England and Wales, significantly increasing real-time biometric surveillance capacity. Officials say the rollout responds to crime that increasingly involves digital activity.

The UK government will also invest £115 million over three years in a National Centre for AI in Policing, known as Police.AI. The centre will focus on speeding up investigations, reducing paperwork and improving crime detection.

New governance measures will regulate police use of facial recognition and introduce a public register of deployed AI systems. National data standards aim to strengthen accountability and coordination across forces.

Structural reforms include creating a National Police Service to tackle serious crime and terrorism. Predictive analytics, deepfake detection and digital forensics will play a larger operational role.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU Commission opens DMA proceedings on Google interoperability and search data

The European Commission has opened two specification proceedings to spell out how Google should meet key obligations under the EU’s Digital Markets Act (DMA), focusing on Android’s AI-related features and access to Google Search data for competitors.

The first proceeding targets the DMA’s interoperability requirement for Android. In practical terms, Brussels wants to clarify how third-party AI services can gain free and effective access to the same Android hardware and software functionalities that power Google’s own AI offerings, including Gemini, so that rivals can compete on a more equal footing on mobile devices.

The second proceeding addresses Google’s obligation to provide rival search engines with access to anonymised search data (such as ranking, query, click, and view data) on fair, reasonable, and non-discriminatory terms. The Commission is also considering whether AI chatbot providers should qualify for that access, an essential question as ‘search’ increasingly blurs with conversational AI.
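
For illustration, one widely used technique for releasing click-and-query logs while limiting re-identification risk is a k-anonymity style filter: user identifiers are dropped, and only queries issued by at least k distinct users are shared. The sketch below, including the record format and threshold, is an assumption about how such a filter could look, not the scheme the Commission or Google will actually adopt.

```python
# Illustrative k-anonymity style filter for search-log sharing.
# Record format and threshold are assumptions, not the DMA-mandated scheme.
from collections import defaultdict

K = 50  # hypothetical: release a query only if >= K distinct users issued it

records = [
    # (user_id, query, clicked_result, rank)
    ("u1", "weather berlin", "example.com/wx", 1),
    ("u2", "weather berlin", "example.com/wx", 1),
    # ... millions more in a real log
]

# Count how many distinct users issued each query.
users_per_query = defaultdict(set)
for user_id, query, _, _ in records:
    users_per_query[query].add(user_id)

# Drop user IDs entirely and keep only sufficiently common queries.
shareable = [
    (query, clicked, rank)
    for user_id, query, clicked, rank in records
    if len(users_per_query[query]) >= K
]
print(shareable)  # empty here: the sample query has only 2 users, below K=50
```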

These proceedings are designed to define how compliance should work rather than immediately sanction Google. The Commission is expected to wrap them up within six months, with draft measures and preliminary findings shared earlier in the process, and with scope for third-party feedback. A separate non-compliance track could still follow later, and DMA penalties for breaches can reach up to 10% of global turnover.

Google, for its part, says Android is ‘open by design’ and argues it is already licensing Search data, while warning that additional requirements, especially those it views as competitor-driven, could undermine user privacy, security, and innovation.

Why does it matter?

The EU is trying to prevent dominant platforms from turning control over operating systems and data into an ‘unfair advantage’ in the next wave of consumer tech, particularly as AI assistants become built into phones and as search data becomes fuel for competing discovery tools. The move also sits within a broader DMA enforcement push: the Commission has already opened DMA-related proceedings into Alphabet in other areas, signalling that Brussels sees gatekeeper compliance as an ongoing, hands-on exercise rather than a one-off checkbox.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Survey finds Gen Z turns to AI for sexual health questions despite misdiagnoses

According to a January 2026 survey of 2,520 US adults aged 18 to 29, roughly 20 percent of Gen Z respondents have queried AI chatbots about STIs/STDs, and 1 in 10 specifically sought help diagnosing a suspected infection.

Among those who later sought formal medical testing, about 31 percent said the chatbot’s assessment was incorrect, highlighting risks of relying on AI for health diagnostics.

Respondents often shared symptom details and even photos with the bots, and many said they were more comfortable discussing sensitive topics with an AI than with a clinician, despite potential privacy and accuracy limitations.

Medical experts emphasise that while AI can support general health education, these tools are not replacements for clinical diagnosis or professional medical testing, which remain necessary for accurate STI/STD identification and treatment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nova ransomware claims breach of KPMG Netherlands

KPMG Netherlands has allegedly become the latest target of the Nova ransomware group, following claims that sensitive data was accessed and exfiltrated.

The incident was reported by ransomware monitoring services on 23 January 2026, with attackers claiming the breach occurred on the same day.

Nova has reportedly issued a ten-day deadline for contact and ransom negotiations, a tactic commonly used by ransomware groups to pressure large organisations.

The group has established a reputation for targeting professional services firms and financial sector entities that manage high-value and confidential client information.

Threat intelligence sources indicate that Nova operates a distributed command and control infrastructure across the Tor network, alongside multiple leak platforms used to publish stolen data. Analysis suggests a standardised backend deployment, pointing to a mature and organised ransomware operation.

KPMG has not publicly confirmed the alleged breach at the time of writing. Clients and stakeholders are advised to follow official communications for clarity on potential exposure, response measures and remediation steps as investigations continue.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU classifies WhatsApp as Very Large Online Platform

WhatsApp has been formally designated a Very Large Online Platform under the EU Digital Services Act, triggering the bloc’s most stringent digital oversight regime.

The classification follows confirmation that the messaging service has exceeded 51 million monthly users in the EU, comfortably above the DSA’s 45 million-user threshold for designation.

As a VLOP, WhatsApp must take active steps to limit the spread of disinformation and reduce risks linked to the manipulation of public debate. The platform is also expected to strengthen safeguards for users’ mental health, with particular attention placed on the protection of minors and younger audiences.

The European Commission will oversee compliance directly and may impose financial penalties of up to 6 percent of WhatsApp’s global annual turnover if violations are identified. The company has until mid-May to align its systems, policies and risk assessments with the DSA’s requirements.

WhatsApp joins a growing list of major platforms already subject to similar obligations, including Facebook, Instagram, YouTube and X. The move reflects the Commission’s broader effort to apply the Digital Services Act across social media, messaging services and content platforms linked to systemic online risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!