Tinder tests AI Chemistry feature to cut swipe fatigue and revive engagement

The dating platform is expanding its reliance on AI, with Tinder experimenting with a feature designed to ease swipe fatigue among users.

The tool, known as Chemistry, builds a picture of each person through optional questions and, with permission, by reviewing their Camera Roll, offering a more personalised route to potential matches instead of repetitive browsing.

Match is currently testing the feature only in Australia. Executives say the system allows people to receive a small set of tailored profiles rather than navigating large volumes of candidates.

Tinder hopes the approach will strengthen engagement during a period when registrations and monthly activity remain lower than last year, despite minor improvements driven by AI-based recommendations.

Developers are also refocusing the broader discovery experience to reflect concerns raised by Gen Z around authenticity, trust and relevance.

The platform now relies on verification tools such as Face Check, which Match says has cut harmful interactions by more than half rather than leaving users exposed to impersonators.

These moves indicate a shift away from the swipe mechanic that once defined the app, offering more direct suggestions that may improve outcomes.

Marketing investment is set to rise as part of the strategy. Match plans to allocate $50 million to new campaigns aimed at making Tinder appealing again, using creators on TikTok and Instagram to reframe the brand.

Strong quarterly revenue failed to offset weaker guidance, yet the company argues that AI features will help shape a more reliable and engaging service for users seeking consistent matches.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google issues warning on malware affecting over 40% of Android devices

US tech giant Google has alerted users that more than 40% of Android phones are vulnerable to new malware and spyware because of outdated software. Phones running versions older than Android 13 no longer receive security updates, leaving over a billion users worldwide at risk.

Data shows Android 16 is present on only 7.5% of devices, while versions 15, 14, and 13 still dominate the market.

Slow adoption of updates means many devices remain exposed, even when security patches are available. Google emphasised that outdated phones are particularly unsafe and cannot protect against emerging threats.

Users are advised to upgrade to Android 13 or newer, or purchase a mid-range device that receives regular updates, instead of keeping an old high-end phone without support. Unlike iPhones, most of which receive timely updates, older Android devices may never get the necessary security fixes.

The warning highlights the urgent need for users to act immediately to avoid potential data breaches and spyware attacks. Google’s message is clear: using unsupported Android devices is a growing global security concern.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU tests Matrix protocol as sovereign alternative for internal communication

The European Commission is testing a European open source system for its internal communications as worries grow in Brussels over deep dependence on US software.

A spokesperson said the administration is preparing a solution built on the Matrix protocol instead of relying solely on Microsoft Teams.

Matrix is already used by several European institutions, including the French government, German healthcare bodies and armed forces across the continent.

The Commission aims to deploy it as a complement and backup to Teams rather than a full replacement. Officials noted that Signal currently fills that role but lacks the flexibility needed for an organisation of the Commission’s size.
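
For readers unfamiliar with Matrix, it is an open, federated messaging standard: any compliant client can talk to any self-hosted homeserver. The short Python sketch below, using the open-source matrix-nio client library, shows what posting a message to a Matrix room looks like in practice; the homeserver address, account and room ID are illustrative assumptions, not details of the Commission's deployment.

```python
# Minimal sketch: sending a message over the Matrix protocol with matrix-nio.
# The homeserver, user and room below are placeholders, not the Commission's setup.
import asyncio
from nio import AsyncClient

async def main():
    # Connect to a (hypothetical) self-hosted Matrix homeserver.
    client = AsyncClient("https://matrix.example.eu", "@alice:example.eu")
    await client.login("correct-horse-battery-staple")  # password auth

    # Post a plain-text message to a room the account has joined.
    await client.room_send(
        room_id="!internalchat:example.eu",
        message_type="m.room.message",
        content={"msgtype": "m.text", "body": "Hello from a Matrix client"},
    )
    await client.close()

asyncio.run(main())
```

Because the homeserver can run on an organisation's own infrastructure, the same pattern supports the kind of self-hosted, sovereign deployment the Commission is exploring.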

The initiative forms part of a wider push for digital sovereignty within the EU. A Matrix-based tool could eventually link the Commission with other Union bodies that currently lack a unified secure communication platform.

Officials said there is already an operational connection with the European Parliament.

The trial reflects growing sensitivity about Europe’s strategic dependence on non-European digital services.

By developing home-grown communication infrastructure instead of leaning on a single foreign supplier, the Commission hopes to build a more resilient and sovereign technological foundation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Under 16 social media ban proposed in Spain

Spain is preparing legislation to ban social media access for users under 16, with the proposal expected to be introduced within days. Prime Minister Pedro Sánchez framed the move as a child-protection measure aimed at reducing exposure to harmful online environments.

Government plans include mandatory age-verification systems for platforms, designed to serve as practical barriers rather than symbolic safeguards. Officials argue that minors face escalating risks online, including addiction, exploitation, violent content, and manipulation.

Additional provisions could hold technology executives legally accountable for unlawful or hateful content that remains online. The proposal reflects a broader regulatory shift toward platform responsibility and stricter enforcement standards.

Momentum for youth restrictions is building across Europe. France and Denmark are pursuing similar controls, while the EU Digital Services Act guidelines allow member states to define a national ‘digital majority age’.

The European Commission is also testing an age verification app, with wider deployment expected next year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malaysia enforces a total ban on e-waste imports after corruption probe

Malaysian authorities have imposed a full and immediate ban on the import of electronic waste, aiming to end the long-standing practice of foreign dumping.

The Anti-Corruption Commission reclassified all e-waste imports under an absolute prohibition, removing the earlier discretion that allowed limited exemptions. Officials argue that the country should protect its environment rather than accept hazardous materials from other nations.

Authorities have spent years intercepting containers loaded with discarded electronics suspected to contain toxic metals that contaminate soil and water when mishandled.

Environmental groups have repeatedly urged stronger controls, noting that waste from computers, mobile phones and household appliances poses severe risks to human health. The government now insists that firm enforcement must accompany the new restrictions to prevent continued smuggling.

The decision comes amid a widening corruption inquiry into oversight of e-waste. The director-general of the environment department and his deputy have been detained on suspicion of abuse of power. At the same time, investigators have frozen bank accounts and seized cash linked to the case.

The Home Ministry has pledged increased surveillance and warned that Malaysia will safeguard its national security by stopping illegal e-waste at its borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Harvard researchers highlight contextual risks in medical AI systems

Medical AI promises faster analysis, more accurate pattern detection, and continuous availability, yet most systems still struggle to perform reliably in real clinical environments beyond laboratory testing.

Researchers led by Marinka Zitnik at Harvard Medical School identify contextual errors as a key reason why medical AI often fails when deployed in hospitals and clinics.

Models frequently generate technically sound responses that overlook crucial factors, such as medical speciality, geographic conditions, and patients’ socioeconomic circumstances, thereby limiting their real-world usefulness.

The study argues that training datasets, model architecture, and performance benchmarks must integrate contextual information to prevent misleading or impractical recommendations.

Improving transparency, trust, and human-AI collaboration could allow context-aware systems to support clinicians more effectively while reducing harm and inequality in care delivery.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Facial recognition AI supports passenger security in India

Indian Railways has deployed an AI-powered 'Rail Robocop' at Visakhapatnam Railway Station to strengthen passenger security. The system is designed to patrol platforms and monitor crowds.

The robot, named ASC Arjun, uses facial recognition to compare live images with a database of known criminals. Officials said the system recently identified a suspect during routine surveillance.

Once a match was detected, the AI system sent an instant alert to the Railway Protection Force CCTV control room, allowing officers to respond quickly.
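
The exact software behind ASC Arjun has not been published, but the basic match-and-alert flow can be illustrated with a hedged Python sketch using the open-source face_recognition library; the image files, tolerance value and alert step below are assumptions for illustration only.

```python
# Illustrative sketch only: matching a camera frame against known face encodings
# with the open-source face_recognition library, then raising an alert.
# File names, the tolerance value and the alert hook are assumptions.
import face_recognition

# Encode reference images of persons of interest (one face per image assumed).
known_images = ["suspect_01.jpg", "suspect_02.jpg"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in known_images
]

# Encode faces detected in a frame captured from a platform camera.
frame = face_recognition.load_image_file("platform_frame.jpg")

for encoding in face_recognition.face_encodings(frame):
    # Lower tolerance means stricter matching; 0.6 is the library default.
    matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
    if any(matches):
        # In a real deployment this step would notify the CCTV control room.
        print("Possible match found - forwarding alert to control room")
```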

Authorities say the Rail Robocop will support human staff rather than replace them. Similar AI deployments are expected at other major railway stations in India following the Visakhapatnam trial.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US agencies linked to expanded biometric data sharing with Australia

Reports suggest Australia may expand biometric and identity data sharing with US authorities through border security and visa negotiations, granting enforcement agencies broader access to sensitive personal information.

Information reportedly covered includes passport numbers, dates of birth, facial images, fingerprints, and criminal or immigration records. Such access could allow US authorities to query Australian-held databases directly, bypassing traditional legal cooperation procedures.

No official treaty text or confirmation has been released by either government, and responses have remained general, avoiding details about the Enhanced Border Security Partnership negotiations. The absence of transparency has raised concerns among privacy advocates and legal commentators.

Australia and the United States already cooperate through established frameworks such as the Visa Waiver Program, Migration 5 agreements, and the CLOUD Act. Existing mechanisms involve structured, case-by-case data sharing with legal oversight rather than unrestricted database access.

Analysts note that confirmed arrangements differ significantly from claims of open biometric access, though expanding security vetting requirements continue to increase cross-border data flows. Debate is growing over privacy, sovereignty, and the long-term implications of deeper information sharing.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Greece nears plan to restrict social media for under-15s

Greece is preparing to restrict social media access for children under 15, planning to use the Kids Wallet app as its enforcement tool amid rising European concern over youth safety.

A senior official indicated that an announcement is close, reflecting growing political concern about digital safety and youth protection.

The Ministry of Digital Governance intends to rely on the Kids Wallet application, introduced last year, as a mechanism for enforcing the measure instead of developing a new control framework.

Government planning is advanced, yet the precise timing of the announcement by Prime Minister Kyriakos Mitsotakis has not been finalised.

In addition to the legislative initiative in Greece, the European debate on children’s online safety is intensifying.

Spain recently revealed plans to prohibit social media access for those under sixteen and to create legislation that would hold platform executives personally accountable for hate speech.

Such moves illustrate how governments are seeking to shape the digital environment for younger users rather than leaving regulation solely in private hands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in practice across the UN system: UN 2.0 AI Expo

The UN 2.0 Data & Digital Community AI Expo examined how AI is currently embedded within the operational, analytical and institutional work of the United Nations system. The session brought together a range of AI applications already in use across UN entities, offering a consolidated view of how data-driven tools are supporting mandates related to development, humanitarian action, human rights and internal organisational capacity.

Designed as a fast‑paced showcase, the event presented eight specific AI projects from various UN organisations within a one-hour window. These featured programmes were selected by the UN AI Resource Hub, which is a significant collaborative initiative involving over 50 UN entities. The hub serves to strengthen coordination and coherence regarding AI technologies across the entire UN system.

The Expo highlighted how AI interacts with data availability, governance frameworks, and legal obligations. The session therefore functioned as an overview of current practice, revealing both the scope of AI use and the constraints shaping its deployment within a multilateral institution.

UN 2.0, data and digital capacity

UN 2.0 frames data and digital capability as core institutional functions necessary for addressing complex global challenges. Increasing volumes of information, rapidly evolving risks and interconnected crises require tools that support analysis, coordination and timely decision-making.

Within this framework, AI is treated as one component of a broader digital ecosystem. Its effectiveness depends on data quality, governance structures, organisational readiness and ethical oversight. The AI Expo reflected this approach by consistently situating the use of AI within existing mandates and institutional responsibilities, rather than presenting technology as a standalone solution.

UNICEF: Guidance on AI and children

UNICEF addressed how AI systems affect children across education, health, protection, and social services. The guidance focuses on governance frameworks that protect children’s rights in digital environments where automated systems increasingly shape access and outcomes.

Key risks highlighted include profiling, algorithmic bias, data misuse, and exclusion from digital benefits. Safeguards such as transparency, accountability, accessibility, and human oversight are emphasised as essential conditions for any AI system involving children.

The guidance, now in its third edition from December 2025, draws on the Convention on the Rights of the Child and sets out 10 requirements for child-centred AI, including safety, data privacy, non-discrimination, transparency, inclusion, and support for children’s well-being and development.

By anchoring AI governance within established child rights frameworks, the guidance positions technological development as subject to existing international obligations rather than discretionary policy choices. It highlights both the risks of AI, such as harmful content, CSAM, and algorithmic bias, and the opportunities, including enhanced learning, accessibility for children with disabilities, and improved child well-being.

UN-Habitat: BEAM AI (Building & Establishment Automated Mapper)

UN-Habitat presented BEAM, a machine-learning system designed to analyse satellite and aerial imagery to identify buildings and settlement patterns. Rapid urbanisation and the growth of informal settlements often outpace traditional data collection methods, leaving governments without accurate information for planning and service delivery.

AI-supported mapping addresses these gaps by generating up-to-date spatial data at scale. Outputs support decisions related to housing, water, sanitation, infrastructure investment, and risk reduction. The system identifies and geo-references rooftops, generating shapefiles for urban planning processes.
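
As a rough illustration of that final step, turning detected footprints into a shapefile, the Python sketch below uses the open-source geopandas and shapely libraries; the coordinates, attribute fields and file name are assumptions, not BEAM's actual output format.

```python
# Illustrative sketch: turning detected rooftop footprints into a shapefile
# with geopandas/shapely. Coordinates, CRS and file name are assumptions,
# not BEAM's actual pipeline.
import geopandas as gpd
from shapely.geometry import Polygon

# Footprints as longitude/latitude polygons, e.g. produced by a detection model.
footprints = [
    Polygon([(28.042, -26.204), (28.043, -26.204), (28.043, -26.203), (28.042, -26.203)]),
    Polygon([(28.045, -26.206), (28.046, -26.206), (28.046, -26.205), (28.045, -26.205)]),
]

gdf = gpd.GeoDataFrame({"building_id": [1, 2]}, geometry=footprints, crs="EPSG:4326")
gdf.to_file("detected_buildings.shp")  # shapefile ready for GIS and planning tools
```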

Applied in South Africa and Central America, the system has mapped millions of previously unrecorded buildings, providing comprehensive spatial data where none existed before and supporting evidence-based decision-making in rapidly evolving urban areas.

UNFPA: AI platform for adolescents and youth

UNFPA focused on AI-supported platforms designed to improve access to information for adolescents and youth, particularly in areas related to sexual and reproductive health and mental well-being. Many young people face barriers linked to stigma, lack of confidentiality and uneven access to services.

UNFPA India’s JustAsk! AI chatbot provides guidance that is age-appropriate, culturally sensitive, and aligned with ethical and rights-based standards. The system helps users navigate health information, counter misinformation, and connect with relevant services when needed, including mental health support and sexual health facilities.

The design of these platforms emphasises privacy, safety, and responsible AI use, ensuring that interactions remain trustworthy and secure for young people. By leveraging AI, UNFPA supports youth-facing services, reaching populations that may otherwise have limited access to accurate and confidential information, particularly in regions where traditional in-person services are scarce or difficult to access.

IOM: Donor intelligence

IOM showcased an emerging AI project designed to strengthen donor intelligence and improve funding strategies. Following significant funding cuts, increasing competition for resources and years of frequent proposal rejections, the organisation explored new ways to diversify funding, identify opportunities and better align its proposals with donor priorities.

To ensure the solution addressed real operational needs, the team organised discovery workshops to identify pain points and opportunities for technological support. Using a rapid‑iteration approach known as ‘vibe coding’, developers built and tested prototypes quickly, incorporating continuous user feedback and daily improvements.

A multi-agent AI system integrates internal and external data to generate comprehensive, up-to-date donor profiles. Specialised agents research, synthesise, and refine information, enabling the organisation to monitor donor priorities and shifts in real time.
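
IOM's system itself is not public, but the multi-agent pattern it describes can be sketched in a few lines of Python: specialised agents take turns enriching a shared donor profile. The function names, fields and placeholder logic below are purely illustrative assumptions.

```python
# Generic sketch of the multi-agent pattern described above: specialised "agents"
# take turns enriching a shared donor profile. Function names, fields and the
# placeholder logic are illustrative; IOM's actual system is not public.
from typing import Callable

def research_agent(profile: dict) -> dict:
    # Would pull internal records and external sources (news, strategy papers).
    profile["raw_findings"] = ["2025 strategy emphasises climate mobility"]
    return profile

def synthesis_agent(profile: dict) -> dict:
    # Would condense raw findings into stated priorities.
    profile["priorities"] = ["climate mobility", "protection"]
    return profile

def refinement_agent(profile: dict) -> dict:
    # Would score how well current proposals align with those priorities.
    profile["alignment_note"] = "Proposal X matches 2 of 2 stated priorities"
    return profile

PIPELINE: list[Callable[[dict], dict]] = [research_agent, synthesis_agent, refinement_agent]

def build_donor_profile(donor: str) -> dict:
    profile = {"donor": donor}
    for agent in PIPELINE:
        profile = agent(profile)  # each agent enriches the shared state
    return profile

print(build_donor_profile("Example Donor Agency"))
```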

Better alignment of project designs with donor interests has successfully reversed the trend of frequent rejections. Securing new funding has allowed the organisation to resume previously suspended activities and restore essential support to migrant and displaced communities.

UNDP: AI Sprint

UNDP launched the AI Sprint as a strategic initiative to accelerate the adoption of AI across the organisation and to build internal capacity for the responsible and effective use of AI. The AI Sprint is designed to equip UNDP staff with the tools, knowledge and governance frameworks needed to harness AI in support of sustainable development and organisational transformation.

The AI Sprint is structured around multiple components, including building foundational AI awareness and skills, establishing ethical principles and frameworks for AI use, and supporting the deployment of high-impact AI initiatives that address key development challenges. It also contributes to country-level enablement by helping partner countries develop AI strategies, strengthen public sector AI capacity and scale AI-related programmes.

The initiative reflects UNDP’s effort to position the organisation as a leader in responsible AI for development, with the dedicated AI Working Group established to oversee responsible use, legal compliance, risk management and transparency in AI adoption.

The UNDP AI Sprint Initiative forms part of broader efforts to build AI capability and accelerate digital transformation across regions, offering training, strategy support and practical tools in countries worldwide.

OHCHR: Human Rights Data Exchange (HRDx)

The Office of the High Commissioner for Human Rights (OHCHR) has introduced the Human Rights Data Exchange (HRDx), developed by the Innovation & Analytics Hub, as a global platform designed to enhance the collection, governance and analysis of human rights information. 

Described as a dedicated data service, HRDx aims to consolidate data that is currently fragmented, siloed, unverified and often collected manually into a single, more reliable resource. This will allow for earlier detection and monitoring of patterns, thereby supporting human rights initiatives in the digital era.

Given that human rights are currently at a crossroads and increasingly at risk, with only 15% of the Sustainable Development Goals (SDGs) on track for 2030, the design prioritises data protection, security and accountability. This approach reflects the sensitive nature of such information, particularly as technology can also accelerate inequality, disinformation and digital surveillance.

HRDx forms part of a broader OHCHR strategy to utilise technology and data to identify trends rapidly and facilitate coordinated action. The initiative seeks to establish human rights data as a global public good, ensuring that ethical data governance and the protection of personal data remain fundamental requirements for its operation.

UN Global Pulse: DISHA (Data Insights for Social & Humanitarian Action)

UN Global Pulse has established a collaborative coalition known as DISHA, or Data Insights for Social and Humanitarian Action, to bridge the gap between experimental technology and its practical application.

This partnership focuses on refining and deploying AI-enabled analytics to support critical humanitarian decision-making, ensuring that the most effective tools transition from mere pilots to routine operational use. By fostering cross-sector partnerships and securing authorised access to dynamic data, the project aims to equip humanitarian organisations with the high-level insights necessary to respond to crises with greater speed and precision.

The practical utility of this effort is demonstrated through several key analytical applications designed to address immediate needs on the ground. One such tool significantly accelerates disaster damage assessment, reducing the time required for analysis from days or weeks to just a few hours. In the Philippines, the initiative uses an evergreen data partnership with Globe Telecom to monitor population mobility and dynamically track displacement trends following a disaster.

Furthermore, a shelter-mapping pilot project uses satellite imagery to automatically identify refugee shelters at scale, providing a clearer picture of humanitarian requirements in real time.

A central focus of the DISHA initiative is to overcome the persistent barriers that prevent the humanitarian sector from adopting these advanced solutions. By addressing these governance considerations and focusing on the productisation of AI approaches, the initiative ensures that analytical outputs are not only technically sound but also directly aligned with the live operational requirements of responders during a crisis.

WIPO: Breaking language barriers with AI

The World Intellectual Property Organization (WIPO) has implemented an AI system to automate the transcription and translation of international meetings. Developed by the Advanced Technology Applications Center (ATAC), the WIPO Speech-to-Text tool produces automated transcripts in minutes. These custom models are specifically trained on UN terminology and are designed to function despite background noise or non-native language accents.

The system captures spoken language directly from interpretation channels and publishes the results to the WIPO webcast platform, providing searchable access with timestamps for every word. When used alongside the WIPO Translate engine, the tool can generate machine translations in multiple additional languages.
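
WIPO's custom models are not publicly available, but the underlying transcribe-with-timestamps pattern can be illustrated with the open-source Whisper model; the model size, audio file and output handling in the Python sketch below are assumptions rather than WIPO's actual pipeline.

```python
# Illustrative sketch of transcription with per-segment timestamps using the
# open-source Whisper model (pip install openai-whisper). This is not WIPO's
# custom system; model choice and file name are assumptions.
import whisper

model = whisper.load_model("base")              # small general-purpose model
result = model.transcribe("session_audio.wav")  # returns text plus timed segments

# Per-segment start/end times are what make a published transcript searchable.
for segment in result["segments"]:
    print(f"[{segment['start']:7.2f}s -> {segment['end']:7.2f}s] {segment['text'].strip()}")
```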

Since its adoption for most public WIPO meetings in 2022, the initiative has delivered savings of several million Swiss francs. The infrastructure supports highly confidential content and allows for installation within an organisation’s secure framework. WIPO is currently sharing this technology with other organisations and developing a software-as-a-service (SaaS) API to expand its availability.

#AIforGood

Across the UN system, initiatives demonstrate a shift toward a more capable, data‑driven, and ethically grounded approach to global operations, highlighting the use of technological tools to strengthen human rights, accountability and multilateral cooperation.

When applied responsibly, AI enhances human expertise, enabling more precise monitoring, planning and decision-making across development, humanitarian action, human rights and internal organisational functions. Ethical safeguards, governance frameworks and oversight mechanisms are embedded from the outset to ensure that innovations operate within established norms.

Overall, these developments reflect a broader institutional transformation, with the UN increasingly equipped to manage complexity, respond to crises with precision, and uphold its mandates with agility in the digital era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!