Nova ransomware claims breach of KPMG Netherlands

KPMG Netherlands has allegedly become the latest target of the Nova ransomware group, following claims that sensitive data was accessed and exfiltrated.

The incident was reported by ransomware monitoring services on 23 January 2026, with attackers claiming the breach occurred on the same day.

Nova has reportedly issued a ten-day deadline for contact and ransom negotiations, a tactic commonly used by ransomware groups to pressure large organisations.

The group has established a reputation for targeting professional services firms and financial sector entities that manage high-value and confidential client information.

Threat intelligence sources indicate that Nova operates a distributed command-and-control infrastructure across the Tor network, alongside multiple leak platforms used to publish stolen data. Analysis suggests a standardised backend deployment, pointing to a mature and organised ransomware operation.

KPMG has not publicly confirmed the alleged breach at the time of writing. Clients and stakeholders are advised to follow official communications for clarity on potential exposure, response measures and remediation steps as investigations continue.

EU classifies WhatsApp as Very Large Online Platform

WhatsApp has been formally designated a Very Large Online Platform under the EU Digital Services Act, triggering the bloc’s most stringent digital oversight regime.

The classification follows confirmation that the messaging service has exceeded 51 million monthly users in the EU, well above the 45 million threshold at which the DSA’s strictest obligations apply.

As a VLOP, WhatsApp must take active steps to limit the spread of disinformation and reduce risks linked to the manipulation of public debate. The platform is also expected to strengthen safeguards for users’ mental health, with particular attention placed on the protection of minors and younger audiences.

The European Commission will oversee compliance directly and may impose financial penalties of up to 6 percent of WhatsApp’s global annual turnover if violations are identified. The company has until mid-May to align its systems, policies and risk assessments with the DSA’s requirements.

WhatsApp joins a growing list of major platforms already subject to similar obligations, including Facebook, Instagram, YouTube and X. The move reflects the Commission’s broader effort to apply the Digital Services Act across social media, messaging services and content platforms linked to systemic online risks.

TikTok outages spark fears over data control and censorship in the US

Widespread TikTok disruptions affected users across the US as snowstorms triggered power outages and technical failures, with reports of malfunctioning algorithms and missing content features.

Problems persisted for some users beyond the initial incident, adding to uncertainty surrounding the platform’s stability.

The outage coincided with the creation of a new US-based TikTok joint venture following government concerns over potential Chinese access to user data. TikTok stated that a power failure at a domestic data centre caused the disruption, rather than ownership restructuring or policy changes.

Suspicion grew among users because the outage coincided with political events, including large-scale protests in Minneapolis, and with reports of difficulties searching for related content. Fears of censorship spread online, although TikTok attributed all disruptions to infrastructure failure.

The incident also resurfaced concerns over TikTok’s privacy policy, which outlines the collection of sensitive personal data. While some disclosures predated the ownership deal, the timing reinforced broader anxieties over social media surveillance during periods of political tension.

Musk’s X under EU Commission scrutiny over Grok sexualised images

The European Commission has opened a new investigation into Elon Musk’s X over Grok, the platform’s AI chatbot, after reports that the tool was used to generate and circulate non-consensual sexualised images, including content that may involve minors. EU officials say they will examine whether X properly assessed and mitigated the risks linked to Grok’s features before rolling them out in the EU.

The case is being pursued under the EU’s Digital Services Act (DSA), which requires very large online platforms to identify and mitigate systemic risks, including the spread of illegal content and harms to fundamental rights. If breaches are confirmed, the Commission can impose fines of up to 6% of a provider’s global annual turnover and, in some cases, require interim measures.

X and xAI have said they introduced restrictions after the backlash, including limiting some image-editing functions and blocking certain image generation in jurisdictions where it is illegal. EU officials have welcomed the steps to tighten safeguards but argue they may not address deeper, systemic risks, particularly if risk assessments and mitigations were not in place before deployment.

The Grok probe lands on top of a broader set of legal pressures already facing X. In the UK, Ofcom has opened a formal investigation under the Online Safety Act into whether X met its duties to protect users from illegal content linked to Grok’s sexualised imagery. Beyond Europe, Malaysia and Indonesia temporarily blocked Grok amid safety concerns, and access was later restored after authorities said additional safeguards had been put in place.

In parallel, EU regulators have widened scrutiny of X’s recommender systems, an area already under DSA proceedings, because the platform has moved toward using a Grok-linked system to rank and recommend content. The Commission has argued that recommendation design can amplify harmful material at scale, making it central to whether a platform effectively manages systemic risks.

The investigation also builds on earlier DSA enforcement. The Commission recently fined X €120 million for transparency-related breaches, underscoring that EU action is not limited to content moderation but extends to how platforms disclose and enable scrutiny of their systems.

AI Overviews leans heavily on YouTube for health information

Google’s health-related search results increasingly draw on YouTube rather than hospitals, government agencies, or academic institutions, according to new research into how AI Overviews select the sources they cite.

An analysis by SEO platform SE Ranking reviewed more than 50,000 German-language health queries and found AI Overviews appeared on over 82% of searches, making healthcare one of the most AI-influenced information categories on Google.

Across all cited sources, YouTube ranked first by a wide margin, accounting for more than 20,000 references and surpassing medical publishers, hospital websites, and public health authorities.

Academic journals and research institutions accounted for less than 1% of citations, while national and international government health bodies accounted for under 0.5%, highlighting a sharp imbalance in source authority.

Researchers warn that when platform-scale content outweighs evidence-based medical sources, the risk extends beyond misinformation to long-term erosion of trust in AI-powered search systems.

Indeed expands AI tools to reshape hiring

Indeed is expanding its use of AI to improve hiring efficiency, enhance candidate matching, and support recruiters, while keeping humans in control of final decisions.

The platform offers over 100 AI-powered features across job search, recruitment, and internal operations, supported by a long-term partnership with OpenAI.

Recent launches include Career Scout for job seekers and Talent Scout for employers, streamlining career guidance, sourcing, screening, and engagement.

Additional AI-powered tools introduced through Indeed Connect aim to improve candidate discovery and screening, helping companies move faster while broadening access to opportunities through skills-based matching.

AI adoption has accelerated internally, with over 80% of engineers using AI tools and two-thirds of staff saving up to 2 hours per week. Marketing, sales, and research teams are building custom AI agents to support creativity, personalised outreach, and strategic decision-making.

Responsible AI principles remain central to Indeed’s strategy, prioritising fairness, transparency, and human control in hiring. Early results show faster hiring, stronger candidate engagement, and improved outcomes in hard-to-fill roles, reinforcing confidence in AI-driven recruitment.

France’s National Assembly backs under-15 social media ban

France’s National Assembly has backed a bill that would bar children under 15 from accessing social media, citing rising concern over cyberbullying and mental-health harms. MPs approved the text late Monday by 116 votes to 23, sending it next to the Senate before it returns to the lower house for a final vote.

As drafted, the proposal would cover both standalone social networks and ‘social networking’ features embedded inside wider platforms, and it would rely on age checks that comply with EU rules. The same package also extends France’s existing smartphone restrictions in schools to high schools, and lawmakers have discussed additional guardrails, such as limits on practices deemed harmful to minors (including advertising and recommendation systems).

President Emmanuel Macron has urged lawmakers to move quickly, arguing that platforms are not neutral spaces for adolescents and linking social media to broader concerns about youth violence and well-being. Support for stricter limits is broad across parties, and polling has pointed in the same direction, but the bill still faces the practical question of how reliably platforms can keep underage users out.

Australia set the pace in December 2025, when its world-first ban on under-16s holding accounts on major platforms came into force, an approach now closely watched abroad. Early experience there has highlighted the same tension France faces, between political clarity (‘no accounts under the age line’) and the messy reality of age assurance and workarounds.

France’s debate is also unfolding in a broader European push to tighten child online safety rules. The European Parliament has called for an EU-wide ‘digital minimum age’ of 16 (with parental consent options for 13–16), while the European Commission has issued guidance for platforms and developed a prototype age-verification tool designed to preserve privacy, signalling that Brussels is trying to square protection with data-minimisation.

Why does it matter?

Beyond the child-safety rationale, the move reflects a broader push to curb platform power, with youth protection framed as a test case for stronger state oversight of Big Tech. At the same time, critics warn that strict age-verification regimes can expand online identification and surveillance, raising privacy and rights concerns, and may push teens toward smaller or less regulated spaces rather than offline life.

Artists and writers say no to generative AI

Creative communities are pushing back against generative AI in literature and art. The Science Fiction and Fantasy Writers Association now bars works created wholly or partly with large language models after criticism of earlier, more permissive rules.

San Diego Comic-Con faced controversy when it initially allowed AI-generated art in its exhibition, but not for sale. Artists argued that the rules threatened originality, prompting organisers to ban all AI-created material.

Authors warn that generative AI undermines the creative process. Some point out that large language model tools are already embedded in research and writing software, raising concerns about accidental disqualification from awards.

Fans and members welcomed SFWA’s decision, but questions remain about how broadly AI usage will be defined. Many creators insist that machines cannot replicate storytelling and artistic skill.

Industry observers expect other cultural organisations to follow similar policies this year. The debate continues over ethics, fairness, and technology’s role in arts and literature.

Monnett highlights EU digital sovereignty in social media

Monnett is a European-built social media platform designed to give people control over their online feeds. Users can choose exactly what they see, prioritise friends’ posts, and opt out of surveillance-style recommendation systems that dominate other networks.

Unlike mainstream platforms, Monnett places privacy first, with no profiling or sale of user data, and private chats protected without being mined for advertising. The platform also avoids “AI slop” or generative AI content shaping people’s feeds, emphasising human-centred interaction.

Built in Luxembourg, at the heart of Europe, Monnett reflects a growing push for digital sovereignty in the European Union, where citizens, regulators and developers want more control over how their digital spaces are governed and how personal data is treated.

Core features include full user control over the feed algorithm, no shadowbans, strong privacy safeguards, and a focus on genuine social connection. Monnett aims to win over users who prefer meaningful online interaction to addictive feeds and opaque data practices.

Meta pauses teen access to AI characters

Meta Platforms has announced a temporary pause on teenagers’ access to AI characters across its platforms, including Instagram and WhatsApp. The company said the pause will allow it to review and rebuild the feature for younger users.

Meta said the restriction will apply to users identified as minors based on declared ages or internal age-prediction systems. Teenagers will still be able to use Meta’s core AI assistant, though interactive AI characters will be unavailable.

The move comes ahead of a major child safety trial in Los Angeles involving Meta, TikTok and YouTube. The case centres on allegations that social media platforms harm children through addictive and unsafe digital features.

Concerns about AI chatbots and minors have grown across the US, prompting similar action by other companies as regulators and courts increasingly scrutinise how AI interactions affect young users.
