Greece nears plan to restrict social media for under-15s

Greece is preparing to restrict social media access for children under 15 and plans to use the Kids Wallet app as its enforcement tool, amid rising European concern over youth safety.

A senior official indicated that an announcement is close, reflecting growing political concern about digital safety and youth protection.

The Ministry of Digital Governance intends to rely on the Kids Wallet application, introduced last year, as a mechanism for enforcing the measure instead of developing a new control framework.

Government planning is advanced, yet the precise timing of the announcement by Prime Minister Kyriakos Mitsotakis has not been finalised.

In addition to the legislative initiative in Greece, the European debate on children’s online safety is intensifying.

Spain recently revealed plans to prohibit social media access for those under 16 and to create legislation that would hold platform executives personally accountable for hate speech.

Such moves illustrate how governments are seeking to shape the digital environment for younger users rather than leaving regulation solely in private hands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India pushes Meta to justify WhatsApp’s data-sharing

The Supreme Court of India has delivered a forceful warning to Meta after judges said the company could not play with the right to privacy.

The court questioned how WhatsApp monetises personal data in a country where the app has become the de facto communications tool for hundreds of millions of people. Judges added that meaningful consent is difficult when users have little practical choice.

Meta was told not to share any user information while the appeal over WhatsApp’s 2021 privacy policy continues. Judges pressed the company to explain the value of behavioural data instead of relying solely on claims about encrypted messages.

Government lawyers argued that personal data was collected and commercially exploited in ways most users would struggle to understand.

The case stems from a major update to WhatsApp’s data-sharing rules that India’s competition regulator said abused the platform’s dominant position.

A significant penalty was issued before Meta and WhatsApp challenged the ruling at the Supreme Court. The court has now widened the proceedings by adding the IT ministry and has asked Meta to provide detailed answers before the next hearing on 9 February.

WhatsApp is also under heightened scrutiny worldwide as regulators examine how encrypted platforms analyse metadata and other signals.

In India, broader regulatory changes, such as new SIM-binding rules, could restrict how small businesses use the service rather than broadening its commercial reach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom expands scrutiny of X over Grok deepfake concerns

The British regulator, Ofcom, has released an update on its investigation into X after reports that the Grok chatbot had generated sexual deepfakes of real people, including minors.

In response, the regulator opened a formal inquiry to assess whether X took adequate steps to manage the spread of such material and to remove it swiftly.

X has since introduced measures to limit the distribution of manipulated images, while the UK Information Commissioner’s Office (ICO) and regulators abroad have opened parallel investigations.

The Online Safety Act does not cover all chatbot services, as regulation depends on whether a system enables user interactions, provides search functionality, or produces pornographic material.

Many AI chatbots fall partly or entirely outside the Act’s scope, limiting regulators’ ability to act when harmful content is created during one-to-one interactions.

Ofcom cannot currently investigate the standalone Grok service for producing illegal images because the Act does not cover that form of generation.

Evidence-gathering from X continues, with legally binding information requests issued to the company. Ofcom will offer X a full opportunity to present representations before any provisional findings are published.

Enforcement actions take several months, since regulators must follow strict procedural safeguards to ensure decisions are robust and defensible.

Ofcom added that people who encounter harmful or illegal content online are encouraged to report it directly to the relevant platforms. Incidents involving intimate images can be reported to dedicated services for adults or support schemes for minors.

Material that may constitute child sexual abuse should be reported to the Internet Watch Foundation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU moves closer to decision on ChatGPT oversight

The European Commission plans to decide by early 2026 whether OpenAI’s ChatGPT should be designated a very large online platform under the Digital Services Act.

OpenAI’s tool reported 120.4 million average monthly users in the EU in October, far above the 45-million threshold that triggers the DSA’s most onerous obligations.

Officials said the designation procedure depends on both quantitative and qualitative assessments of how a service operates, together with input from national authorities.

The Commission is examining whether a standalone AI chatbot can fall within the scope of rules usually applied to platforms such as social networks, online marketplaces and major search engines.

ChatGPT’s user data largely stems from its integrated online search feature, which prompts users to allow the chatbot to search the web. The Commission noted that OpenAI could voluntarily meet the DSA’s risk-reduction obligations while the formal assessment continues.

The EU’s latest wave of designations included Meta’s WhatsApp, though the rules applied only to public channels, not private messaging.

A decision on ChatGPT will clarify how far the bloc intends to extend its most stringent online governance framework to emerging AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France targets X over algorithm abuse allegations

The cybercrime unit of the Paris prosecutor has raided the French office of X as part of an expanding investigation into alleged algorithm manipulation and illicit data extraction.

Authorities said the probe began in 2025 after a lawmaker warned that biased algorithms on the platform might have interfered with automated data systems. Europol supported the operation together with national cybercrime officers.

Prosecutors confirmed that the investigation now includes allegations of complicity in circulating child sexual abuse material, sexually explicit deepfakes and denial of crimes against humanity.

Elon Musk and former chief executive Linda Yaccarino have been summoned for questioning in April in their roles as senior figures of the company at the time.

The prosecutor’s office also announced its departure from X in favour of LinkedIn and Instagram, rather than continuing to use the platform under scrutiny.

X strongly rejected the accusations and described the raid as politically motivated. Musk claimed authorities should focus on pursuing sex offenders instead of targeting the company.

The platform’s government affairs team said the investigation amounted to law enforcement theatre rather than a legitimate examination of serious offences.

Regulatory pressure increased further as the UK data watchdog opened inquiries into both X and xAI over concerns about Grok producing sexualised deepfakes. Ofcom is already conducting a separate investigation that is expected to take months.

The widening scrutiny reflects growing unease around alleged harmful content, political interference and the broader risks linked to large-scale AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alternative social platform UpScrolled passes 2.5 million users

UpScrolled has surpassed 2.5 million users globally, gaining rapid momentum following TikTok’s restructuring of its US ownership earlier this year, according to founder Issam Hijazi.

The social network grew to around 150,000 users in its first six months before accelerating sharply in January, crossing one million users within weeks and reaching more than 2.5 million shortly afterwards.

Positioned as a hybrid of Instagram and X, UpScrolled promotes itself as an open platform free of shadowbanning and selective content suppression, while criticising major technology firms for data monetisation and algorithm-driven engagement practices.

Hijazi said the company would avoid amplification algorithms but acknowledged the need for community guidelines, particularly amid concerns about explicit content appearing on the platform.

Interest in alternative social networks has increased since TikTok’s shift to US ownership, though analysts note that long-term growth will depend on moderation frameworks, feature development, and sustained community trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WordPress introduces rules for responsible AI use

WordPress has released new guidelines to shape how AI is used across plugins, themes, documentation and media assets. The framework focuses on transparency, accountability and maintaining the project’s open source foundations.

Contributors remain fully responsible for AI-assisted work and are expected to disclose meaningful AI use during submissions. Reviewers are encouraged to assess such contributions with awareness of how automated tools influenced the output.

Strong emphasis is placed on licensing, with all AI-generated material required to remain compatible with GPLv2 or later. Tools that restrict redistribution or reproduce incompatible code are explicitly ruled out.

The guidance also targets so-called AI slop, including untested code, fabricated references and unnecessarily complex solutions. Maintainers are authorised to reject low-quality submissions that lack apparent human oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia steps up platform scrutiny after mass Snapchat removals

Snapchat has blocked more than 415,000 Australian accounts after the national ban on under-16s began, marking a rapid escalation in the country’s effort to restrict children’s access to major platforms.

The company relied on a mix of self-reported ages and age-detection technologies to identify users who appeared to be under 16.

The platform warned that age verification still faces serious shortcomings, leaving room for teenagers to bypass safeguards.

Facial estimation tools remain accurate only within a narrow range, meaning some young people may slip through while older users risk losing access. Snapchat also noted the likelihood that teenagers will shift towards less regulated messaging apps.

The eSafety commissioner has focused regulatory pressure on the 10 largest platforms, although all services with Australian users are expected to assess whether they fall under the new requirements.

Officials have acknowledged that the technology needs improvement and that reliability issues, such as the absence of a liveness check, contributed to false results.

More than 4.7 million accounts have been deactivated across the major platforms since the ban began, although the figure includes inactive and duplicate accounts.

Authorities in Australia expect further enforcement, with notices set to be issued to companies that fail to meet the new standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France challenges EU privacy overhaul

The EU’s attempt to revise core privacy rules has faced resistance from France, which argues that the Commission’s proposals would weaken rather than strengthen long-standing protections.

Paris objects strongly to proposed changes to the definition of personal data within the General Data Protection Regulation, which remains the foundation of European privacy law. Officials have also raised concerns about several smaller adjustments included in the broader effort to modernise digital legislation.

These proposals form part of the Digital Omnibus package, a set of updates intended to streamline the EU’s data rules. France argues that altering the GDPR’s definitions could change the balance between data controllers, regulators and citizens, creating uncertainty for national enforcement bodies.

The national government maintains that the existing framework already includes the flexibility needed to interpret sensitive information.

The disagreement highlights renewed tension inside the Union as institutions examine the future direction of privacy governance.

Several member states want greater clarity in an era shaped by AI and cross-border data flows, while others fear that opening the GDPR could lead to inconsistent application across Europe.

Talks are expected to continue in the coming months as EU negotiators weigh the political risks of narrowing or widening the scope of personal data.

France’s firm stance suggests that consensus may prove difficult, particularly as governments seek to balance economic goals with unwavering commitments to user protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO and HBKU advance research on digital behaviour

Hamad Bin Khalifa University has unveiled the UNESCO Chair on Digital Technologies and Human Behaviour to strengthen global understanding of how emerging tools shape society.

The initiative, based in the College of Science and Engineering in Qatar, will examine the relationship between digital adoption and human behaviour, focusing on digital well-being, ethical design and healthier online environments.

The Chair is set to address issues such as internet addiction, cyberbullying and misinformation through research and policy-oriented work.

By promoting dialogue among international organisations, governments and academic institutions, the programme aims to support the more responsible development of digital technologies rather than approaches that overlook societal impact.

HBKU’s long-standing emphasis on ethical innovation formed the foundation for the new initiative. The launch event brought together experts from several disciplines to discuss behavioural change driven by AI, mobile computing and social media.

An expert panel considered how GenAI can improve daily life while also increasing dependency, encouraging users to shift towards a more intentional and balanced relationship with AI systems.

UNESCO underlined the importance of linking scientific research with practical policymaking to guide institutions and communities.

The Chair is expected to strengthen cooperation across sectors and support progress on global development goals by ensuring digital transformation remains aligned with human dignity, social cohesion and inclusive growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!