Greece nears plan to restrict social media for under-15s

As it prepares to restrict social media access for children under 15, Greece plans to use the Kids Wallet app as its enforcement tool amid rising European concern over youth safety.

A senior official indicated that an announcement is close, reflecting growing political concern about digital safety and youth protection.

The Ministry of Digital Governance intends to rely on the Kids Wallet application, introduced last year, as a mechanism for enforcing the measure instead of developing a new control framework.

Government planning is advanced, yet the precise timing of the announcement by Prime Minister Kyriakos Mitsotakis has not been finalised.

In addition to the legislative initiative in Greece, the European debate on children’s online safety is intensifying.

Spain recently revealed plans to prohibit social media access for those under sixteen and to create legislation that would hold platform executives personally accountable for hate speech.

Such moves illustrate how governments are seeking to shape the digital environment for younger users rather than leaving regulation solely in private hands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in practice across the UN system: UN 2.0 AI Expo

The UN 2.0 Data & Digital Community AI Expo examined how AI is currently embedded within the operational, analytical and institutional work of the United Nations system. The session brought together a range of AI applications already in use across UN entities, offering a consolidated view of how data-driven tools are supporting mandates related to development, humanitarian action, human rights and internal organisational capacity.

Designed as a fast‑paced showcase, the event presented eight AI projects from various UN organisations within a one-hour window. The featured programmes were selected by the UN AI Resource Hub, a collaborative initiative involving over 50 UN entities that works to strengthen coordination and coherence on AI technologies across the entire UN system.

The Expo highlighted how AI interacts with data availability, governance frameworks, and legal obligations. The session therefore functioned as an overview of current practice, revealing both the scope of AI use and the constraints shaping its deployment within a multilateral institution.

UN 2.0, data and digital capacity


UN 2.0 frames data and digital capability as core institutional functions necessary for addressing complex global challenges. Increasing volumes of information, rapidly evolving risks and interconnected crises require tools that support analysis, coordination and timely decision-making.

Within this framework, AI is treated as one component of a broader digital ecosystem. Its effectiveness depends on data quality, governance structures, organisational readiness and ethical oversight. The AI Expo reflected this approach by consistently situating the use of AI within existing mandates and institutional responsibilities, rather than presenting technology as a standalone solution.

UNICEF: Guidance on AI and children


UNICEF addressed how AI systems affect children across education, health, protection, and social services. The guidance focuses on governance frameworks that protect children’s rights in digital environments where automated systems increasingly shape access and outcomes.

Key risks highlighted include profiling, algorithmic bias, data misuse, and exclusion from digital benefits. Safeguards such as transparency, accountability, accessibility, and human oversight are emphasised as essential conditions for any AI system involving children.

The guidance, now in its third edition, published in December 2025, draws on the Convention on the Rights of the Child and sets out 10 requirements for child-centred AI, including safety, data privacy, non-discrimination, transparency, inclusion, and support for children’s well-being and development.

By anchoring AI governance within established child rights frameworks, the guidance positions technological development as subject to existing international obligations rather than discretionary policy choices. It highlights both the risks of AI, such as harmful content, CSAM, and algorithmic bias, and the opportunities, including enhanced learning, accessibility for children with disabilities, and improved child well-being.

UN-Habitat: BEAM AI (Building & Establishment Automated Mapper)


UN-Habitat presented BEAM, a machine-learning system designed to analyse satellite and aerial imagery to identify buildings and settlement patterns. Rapid urbanisation and the growth of informal settlements often outpace traditional data collection methods, leaving governments without accurate information for planning and service delivery.

AI-supported mapping addresses these gaps by generating up-to-date spatial data at scale. Outputs support decisions related to housing, water, sanitation, infrastructure investment, and risk reduction. BEAM identifies and geo-references rooftops, generating shapefiles for urban planning processes.

Applied in South Africa and Central America, the system has mapped millions of previously unrecorded buildings, providing comprehensive spatial data where none existed before and supporting evidence-based decision-making in rapidly evolving urban areas.

UNFPA: AI platform for adolescents and youth


UNFPA focused on AI-supported platforms designed to improve access to information for adolescents and youth, particularly in areas related to sexual and reproductive health and mental well-being. Many young people face barriers linked to stigma, lack of confidentiality and uneven access to services.

UNFPA India’s JustAsk! AI chatbot provides guidance that is age-appropriate, culturally sensitive, and aligned with ethical and rights-based standards. The system helps users navigate health information, counter misinformation, and connect with relevant services when needed, including mental health support and sexual health facilities.

The design of these platforms emphasises privacy, safety, and responsible AI use, ensuring that interactions remain trustworthy and secure for young people. By leveraging AI, UNFPA supports youth-facing services, reaching populations that may otherwise have limited access to accurate and confidential information, particularly in regions where traditional in-person services are scarce or difficult to access.

IOM: Donor intelligence


IOM showcased an emerging AI project designed to strengthen donor intelligence and improve funding strategies. Following significant funding cuts and increasing competition for resources, the organisation explored new ways to diversify funding, identify opportunities and better align proposals after repeated rejections.

To ensure the solution addressed real operational needs, the team organised discovery workshops to identify pain points and opportunities for technological support. Using a rapid‑iteration approach known as ‘vibe coding’, developers built and tested prototypes quickly, incorporating continuous user feedback and daily improvements.

A multi-agent AI system integrates internal and external data to generate comprehensive, up-to-date donor profiles. Specialised agents research, synthesise, and refine information, enabling the organisation to monitor donor priorities and shifts in real-time.

Better alignment of project designs with donor interests has successfully reversed the trend of frequent rejections. Securing new funding has allowed the organisation to resume previously suspended activities and restore essential support to migrant and displaced communities.

UNDP: AI Sprint


UNDP launched the AI Sprint as a strategic initiative to accelerate the adoption of AI across the organisation and to build internal capacity for the responsible and effective use of AI. The AI Sprint is designed to equip UNDP staff with the tools, knowledge and governance frameworks needed to harness AI in support of sustainable development and organisational transformation.

The AI Sprint is structured around multiple components, including building foundational AI awareness and skills, establishing ethical principles and frameworks for AI use, and supporting the deployment of high-impact AI initiatives that address key development challenges. It also contributes to country-level enablement by helping partner countries develop AI strategies, strengthen public sector AI capacity and scale AI-related programmes.

The initiative reflects UNDP’s effort to position the organisation as a leader in responsible AI for development, with the dedicated AI Working Group established to oversee responsible use, legal compliance, risk management and transparency in AI adoption.

The UNDP AI Sprint Initiative forms part of broader efforts to build AI capability and accelerate digital transformation across regions, offering training, strategy support and practical tools in countries worldwide.

OHCHR: Human Rights Data Exchange (HRDx)


The Office of the High Commissioner for Human Rights (OHCHR) has introduced the Human Rights Data Exchange (HRDx), developed by the Innovation & Analytics Hub, as a global platform designed to enhance the collection, governance and analysis of human rights information. 

Described as a dedicated data service, HRDx aims to consolidate data that is currently fragmented, siloed, unverified and often collected manually into a single, more reliable resource. This will allow for earlier detection and monitoring of patterns, thereby supporting human rights initiatives in the digital era.

Given that human rights are currently at a crossroads and increasingly at risk, with only 15% of the Sustainable Development Goals (SDGs) on track for 2030, the design prioritises data protection, security and accountability. This approach reflects the sensitive nature of such information, particularly as technology can also accelerate inequality, disinformation and digital surveillance.

HRDx forms part of a broader OHCHR strategy to utilise technology and data to identify trends rapidly and facilitate coordinated action. The initiative seeks to establish human rights data as a global public good, ensuring that ethical data governance and the protection of personal data remain fundamental requirements for its operation.

UN Global Pulse: DISHA (Data Insights for Social & Humanitarian Action)


UN Global Pulse has established a collaborative coalition known as DISHA, or Data Insights for Social and Humanitarian Action, to bridge the gap between experimental technology and its practical application.

This partnership focuses on refining and deploying AI-enabled analytics to support critical humanitarian decision-making, ensuring that the most effective tools transition from mere pilots to routine operational use. By fostering cross-sector partnerships and securing authorised access to dynamic data, the project aims to equip humanitarian organisations with the high-level insights necessary to respond to crises with greater speed and precision.

The practical utility of this effort is demonstrated through several key analytical applications designed to address immediate needs on the ground. One such tool significantly accelerates disaster damage assessment, reducing the time required for analysis from days or weeks to just a few hours. In the Philippines, the initiative uses an evergreen data partnership with Globe Telecom to monitor population mobility and dynamically track displacement trends following a disaster.

Furthermore, a shelter-mapping pilot project uses satellite imagery to automatically identify refugee shelters at scale, providing a clearer picture of humanitarian requirements in real time.

A central focus of the DISHA initiative is to overcome the persistent barriers that prevent the humanitarian sector from adopting these advanced solutions. By addressing these governance considerations and focusing on the productisation of AI approaches, the initiative ensures that analytical outputs are not only technically sound but also directly aligned with the live operational requirements of responders during a crisis.

WIPO: Breaking language barriers with AI


The World Intellectual Property Organization (WIPO) has implemented an AI system to automate the transcription and translation of international meetings. Developed by the Advanced Technology Applications Center (ATAC), the WIPO Speech-to-Text tool produces automated transcripts in minutes. These custom models are specifically trained on UN terminology and are designed to function despite background noise or non-native language accents.

The system captures spoken language directly from interpretation channels and publishes the results to the WIPO webcast platform, providing searchable access with timestamps for every word. When used alongside the WIPO Translate engine, the tool can generate machine translations in multiple additional languages.

Since its adoption for most public WIPO meetings in 2022, the initiative has delivered savings of several million Swiss francs. The infrastructure supports highly confidential content and allows for installation within an organisation’s secure framework. WIPO is currently sharing this technology with other organisations and developing a software-as-a-service (SaaS) API to expand its availability.

#AIforGood


Across the UN system, initiatives demonstrate a shift toward a more capable, data‑driven, and ethically grounded approach to global operations, highlighting the use of technological tools to strengthen human rights, accountability and multilateral cooperation.

When applied responsibly, AI enhances human expertise, enabling more precise monitoring, planning and decision-making across development, humanitarian action, human rights and internal organisational functions. Ethical safeguards, governance frameworks and oversight mechanisms are embedded from the outset to ensure that innovations operate within established norms.

Overall, these developments reflect a broader institutional transformation, with the UN increasingly equipped to manage complexity, respond to crises with precision, and uphold its mandates with agility in the digital era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

India pushes Meta to justify WhatsApp’s data-sharing

The Supreme Court of India has delivered a forceful warning to Meta after judges said the company could not play with the right to privacy.

The court questioned how WhatsApp monetises personal data in a country where the app has become the de facto communications tool for hundreds of millions of people. Judges added that meaningful consent is difficult when users have little practical choice.

Meta was told not to share any user information while the appeal over WhatsApp’s 2021 privacy policy continues. Judges pressed the company to explain the value of behavioural data instead of relying solely on claims about encrypted messages.

Government lawyers argued that personal data was collected and commercially exploited in ways most users would struggle to understand.

The case stems from a major update to WhatsApp’s data-sharing rules that India’s competition regulator said abused the platform’s dominant position.

A significant penalty was issued before Meta and WhatsApp challenged the ruling at the Supreme Court. The court has now widened the proceedings by adding the IT ministry and has asked Meta to provide detailed answers before the next hearing on 9 February.

WhatsApp is also under heightened scrutiny worldwide as regulators examine how encrypted platforms analyse metadata and other signals.

In India, broader regulatory changes, such as new SIM-binding rules, could restrict how small businesses use the service rather than broadening its commercial reach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom expands scrutiny of X over Grok deepfake concerns

The British regulator, Ofcom, has released an update on its investigation into X after reports that the Grok chatbot had generated sexual deepfakes of real people, including minors.

In response, the regulator initiated a formal inquiry to assess whether X took adequate steps to manage the spread of such material and to remove it swiftly.

X has since introduced measures to limit the distribution of manipulated images, while the ICO and regulators abroad have opened parallel investigations.

The Online Safety Act does not cover all chatbot services, as regulation depends on whether a system enables user interactions, provides search functionality, or produces pornographic material.

Many AI chatbots fall partly or entirely outside the Act’s scope, limiting regulators’ ability to act when harmful content is created during one-to-one interactions.

Ofcom cannot currently investigate the standalone Grok service for producing illegal images because the Act does not cover that form of generation.

Evidence-gathering from X continues, with legally binding information requests issued to the company. Ofcom will offer X a full opportunity to present representations before any provisional findings are published.

Enforcement actions take several months, since regulators must follow strict procedural safeguards to ensure decisions are robust and defensible.

Ofcom added that people who encounter harmful or illegal content online are encouraged to report it directly to the relevant platforms. Incidents involving intimate images can be reported to dedicated services for adults or support schemes for minors.

Material that may constitute child sexual abuse should be reported to the Internet Watch Foundation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI becomes optional in Firefox 148 as Mozilla launches new control system

Mozilla has confirmed that Firefox will include a built-in ‘AI kill switch’ from version 148, allowing users to disable all AI features across the browser. The update follows earlier commitments that AI tools would remain optional as Firefox evolves into what the company describes as an AI-enabled browser.

The new controls will appear in the desktop release scheduled to begin rolling out on 24 February. A dedicated AI Controls section will allow users to turn off every AI feature at once or manage each tool individually, reflecting Mozilla’s aim to balance innovation with user choice.

At launch, Firefox 148 will introduce AI-powered translations, automatic alt text for images in PDFs, tab grouping suggestions, link previews, and an optional sidebar chatbot supporting services such as ChatGPT, Claude, Copilot, Gemini, and Le Chat Mistral.

All of these tools can be disabled through a single ‘Block AI enhancements’ toggle, which removes prompts and prevents new AI features from appearing. Mozilla has said preferences will remain in place across updates, with users able to adjust settings at any time.

The organisation said the approach is intended to give people full control over how AI appears in their browsing experience, while continuing development for those who choose to use it. Early access to the controls will also be available through Firefox Nightly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU moves closer to decision on ChatGPT oversight

The European Commission plans to decide by early 2026 whether OpenAI’s ChatGPT should be classified as a very large online platform under the Digital Services Act.

OpenAI’s tool reported 120.4 million average monthly users in the EU back in October, a figure far above the 45-million threshold that triggers the DSA’s more onerous obligations.

Officials said the designation procedure depends on both quantitative and qualitative assessments of how a service operates, together with input from national authorities.

The Commission is examining whether a standalone AI chatbot can fall within the scope of rules usually applied to platforms such as social networks, online marketplaces and major search engines.

ChatGPT’s user data largely stems from its integrated online search feature, which prompts users to allow the chatbot to search the web. The Commission noted that OpenAI could voluntarily meet the DSA’s risk-reduction obligations while the formal assessment continues.

The EU’s latest wave of designations included Meta’s WhatsApp, though the rules applied only to public channels, not private messaging.

A decision on ChatGPT will clarify how far the bloc intends to extend its most stringent online governance framework to emerging AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France targets X over algorithm abuse allegations

The cybercrime unit of the Paris prosecutor has raided the French office of X as part of an expanding investigation into alleged algorithm manipulation and illicit data extraction.

Authorities said the probe began in 2025 after a lawmaker warned that biased algorithms on the platform might have interfered with automated data systems. Europol supported the operation together with national cybercrime officers.

Prosecutors confirmed that the investigation now includes allegations of complicity in circulating child sex abuse material, sexually explicit deepfakes and denial of crimes against humanity.

Elon Musk and former chief executive Linda Yaccarino have been summoned for questioning in April in their roles as senior figures of the company at the time.

The prosecutor’s office also announced its departure from X in favour of LinkedIn and Instagram, rather than continuing to use the platform under scrutiny.

X strongly rejected the accusations and described the raid as politically motivated. Musk claimed authorities should focus on pursuing sex offenders instead of targeting the company.

The platform’s government affairs team said the investigation amounted to law enforcement theatre rather than a legitimate examination of serious offences.

Regulatory pressure increased further as the UK data watchdog opened inquiries into both X and xAI over concerns about Grok producing sexualised deepfakes. Ofcom is already conducting a separate investigation that is expected to take months.

The widening scrutiny reflects growing unease around alleged harmful content, political interference and the broader risks linked to large-scale AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT restored after global outage disrupts users worldwide

OpenAI faced a wave of global complaints after many users struggled to access ChatGPT.

Reports began circulating in the US during the afternoon, with outage cases climbing to more than 12,000 in less than half an hour. Social media quickly filled with questions from people trying to determine whether the disruption was widespread or a local glitch.

Users in the UK also reported a complete failure to generate responses, yet access returned when they switched to a US-based VPN.

Other regions saw mixed results: VPNs in Ireland, Canada, India and Poland allowed ChatGPT to function, although replies were noticeably slower than usual.

OpenAI later confirmed that several services were experiencing elevated errors. Engineers identified the source of the disruption, introduced mitigations and continued monitoring the recovery.

The company stressed that users in many regions might still experience intermittent problems while the system stabilised.

In the following update, OpenAI announced that its systems were fully operational again.

The status page indicated that the affected services had recovered and that engineers had identified no remaining active issues. The company added that the underlying fault was addressed, with further safeguards being developed to prevent similar incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Major Chinese data leak exposes billions of records

Cybersecurity researchers uncovered an unsecured database exposing 8.7 billion records linked to individuals and businesses in China. The data was found in early January 2026 and remained accessible online for more than three weeks.

The China-focused dataset included national ID numbers, home addresses, email accounts, social media identifiers and passwords. Researchers warned that the scale of exposure in China creates serious risks of identity theft and account takeovers.

The records were stored in a large Elasticsearch cluster hosted on so-called ‘bulletproof’ infrastructure. Analysts believe the structure suggests deliberate aggregation in China rather than an accidental misconfiguration.

Although the database is now closed, experts say actors targeting China may have already copied the data. China has experienced several major leaks in recent years, highlighting persistent weaknesses in large-scale data handling.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Austria and Poland eye social media limits for minors

Austria is advancing plans to bar children under 14 from social media when the new school year begins in September 2026, according to comments from a senior Austrian official. Poland’s government is drafting a law to restrict access for under-15s, using digital ID tools to confirm age.

Austria’s governing parties support protecting young people online but differ on how to verify ages securely without undermining privacy. In Poland, supporters of the draft argue that early exposure to screens is a matter for parents and platform enforcement.

Austria and Poland form part of a broader European trend as France moves to ban under-15s and the UK is debating similar measures. Wider debates tie these proposals to concerns about children’s mental health and online safety.

Proponents in both Austria and Poland aim to finalise legal frameworks by 2026, with implementation potentially rolling out in the following year if national parliaments approve the age restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!