Tinder tests AI Chemistry feature to cut swipe fatigue and revive engagement

The dating platform is expanding its reliance on AI, with Tinder experimenting with a feature designed to ease swipe fatigue among users.

A tool, known as Chemistry, that builds a picture of each person through optional questions and by reviewing their Camera Roll with permission, offering a more personalised route toward potential matches instead of repetitive browsing.

Match is currently testing the feature only in Australia. Executives say the system allows people to receive a small set of tailored profiles rather than navigating large volumes of candidates.

Tinder hopes the approach will strengthen engagement during a period when registrations and monthly activity remain lower than last year, despite minor improvements driven by AI-based recommendations.

Developers are also refocusing the broader discovery experience to reflect concerns raised by Gen Z around authenticity, trust and relevance.

The platform now relies on verification tools such as Face Check, which Match says cut harmful interactions by more than half instead of leaving users exposed to impersonators.

These moves indicate a shift away from the swipe mechanic that once defined the app, offering more direct suggestions that may improve outcomes.

Marketing investment is set to rise as part of the strategy. Match plans to allocate $50 million to new campaigns that will position Tinder as appealing again, using creators on TikTok and Instagram to reframe the brand.

Strong quarterly revenue failed to offset weaker guidance, yet the company argues that AI features will help shape a more reliable and engaging service for users seeking consistent matches.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Google issues warning on malware affecting over 40% of Android devices

The US tech giant, Google, has alerted users that more than 40% of Android phones are vulnerable to new malware and spyware due to outdated software. Phones running older versions than Android 13 no longer receive security updates, leaving over a billion users worldwide at risk.

Data shows Android 16 is present on only 7.5% of devices, while versions 15, 14, and 13 still dominate the market.

Slow adoption of updates means many devices remain exposed, even when security patches are available. Google emphasised that outdated phones are particularly unsafe and cannot protect against emerging threats.

Users are advised to upgrade to Android 13 or newer, or purchase a mid-range device that receives regular updates, instead of keeping an old high-end phone without support. Unlike Apple, where most iPhones receive timely updates, older Android devices may never get the necessary security fixes.

The warning highlights the urgent need for users to act immediately to avoid potential data breaches and spyware attacks. Google’s message is clear: using unsupported Android devices is a growing global security concern.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

EU tests Matrix protocol as sovereign alternative for internal communication

The European Commission is testing a European open source system for its internal communications as worries grow in Brussels over deep dependence on US software.

A spokesperson said the administration is preparing a solution built on the Matrix protocol instead of relying solely on Microsoft Teams.

Matrix is already used by several European institutions, including the French government, German healthcare bodies and armed forces across the continent.

The Commission aims to deploy it as a complement and backup to Teams rather than a full replacement. Officials noted that Signal currently fills that role but lacks the flexibility needed for an organisation of the Commission’s size.

The initiative forms part of a wider push for digital sovereignty within the EU. A Matrix-based tool could eventually link the Commission with other Union bodies that currently lack a unified secure communication platform.

Officials said there is already an operational connection with the European Parliament.

The trial reflects growing sensitivity about Europe’s strategic dependence on non-European digital services.

By developing home-grown communication infrastructure instead of leaning on a single foreign supplier, the Commission hopes to build a more resilient and sovereign technological foundation.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Under 16 social media ban proposed in Spain

Spain is preparing legislation to ban social media access for users under 16, with the proposal expected to be introduced within days. Prime Minister Pedro Sánchez framed the move as a child-protection measure aimed at reducing exposure to harmful online environments.

Government plans include mandatory age-verification systems for platforms, designed to serve as practical barriers rather than symbolic safeguards. Officials argue that minors face escalating risks online, including addiction, exploitation, violent content, and manipulation.

Additional provisions could hold technology executives legally accountable for unlawful or hateful content that remains online. The proposal reflects a broader regulatory shift toward platform responsibility and stricter enforcement standards.

Momentum for youth restrictions is building across Europe. France and Denmark are pursuing similar controls, while the EU Digital Services Act guidelines allow member states to define a national ‘digital majority age’.

The European Commission is also testing an age verification app, with wider deployment expected next year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Experts call for better protection of submarine internet cables

A high-level panel at the International Submarine Cable Resilience Summit 2026 in Porto focused on a growing paradox in global connectivity. While submarine cable damage incidents have remained relatively stable for over a decade, the time needed to repair them has increased sharply.

Moderated by Nadia Krivetz, member of the International Advisory Body for Submarine Cable Resilience, the discussion brought together government officials and industry experts who warned that longer repair times are creating new vulnerabilities for the global internet, even as undersea cable networks continue to expand rapidly.

Andy Palmer-Felgate of the International Cable Protection Committee highlighted that more than 80% of cable damage is caused by fishing and anchoring, mostly on continental shelves where maritime activity is densest. She noted that a small number of high-risk ‘problem cables’ consume around half of the world’s annual repair capacity, suggesting that targeted prevention in specific locations could significantly reduce global disruption.

Palmer-Felgate also pointed to a shift in fault patterns away from Europe and the Atlantic toward Asia, exposing weaknesses in a repair model that depends on shared, slow-to-move vessels.

New monitoring technologies were presented as part of the solution, though not without limitations. Sigurd Zhang described how distributed acoustic sensing can detect vessel activity in real time, even when ships switch off tracking systems, citing cases in which fishing fleets were invisible to conventional monitoring systems.

International Submarine Cable Resilience Summit 2026

Eduardo Mateo added that newer optical monitoring tools can identify long-term stress and seabed instability affecting cables. Still, both speakers stressed that the cost, data complexity, and reliability requirements remain major barriers, especially for shorter cable systems.

Beyond monitoring, the panel explored improvements in cable design and installation, including stronger armouring, deeper burial, and more resilient network topologies. Mateo cautioned that technology alone cannot eliminate risk, as submarine cables must coexist with other seabed users.

Zhang noted that fully integrated ‘smart cables’ combining telecoms and scientific monitoring may still be a decade away, given the strict reliability standards operators demand.

Government coordination emerged as a decisive factor in reducing damage and speeding up repairs. South Africa’s Nonkqubela Thathakahle Jordan-Dyani described how fragmented regulations across African countries slow emergency responses and raise costs.

Speakers pointed to examples of more effective governance, including Australia’s notification-based repair system and successful legal cases described by Peter Jamieson, which have increased accountability among vessel operators and begun changing behaviour at sea.

Industry practices and skills were also under scrutiny. Jamieson argued that careful route planning and proper burial can prevent most cable faults. Still, Simon Hibbert warned that these standards depend on experienced workers whose skills are hard to replace. With an ageing maritime workforce and fewer recruits entering sea-based professions, the panel cautioned that declining expertise could undermine future cable resilience if training and knowledge transfer are not prioritised.

The discussion concluded by situating cable protection within broader economic and geopolitical pressures. Mateo pointed to supply chain risks for key materials driven by AI-related demand, while Jamieson cited regions like the Red Sea, where geopolitical instability forces cables into crowded corridors.

Despite these challenges, speakers agreed that prevention, cooperation, and shared responsibility offer a realistic path forward, stressing that submarine cable resilience can only be strengthened through sustained collaboration between governments, industry, and international organisations.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Greece nears plan to restrict social media for under-15s

Preparing to restrict social media access for children under 15s, Greece plans to use the Kids Wallet app as its enforcement tool amid rising European concern over youth safety.

A senior official indicated that an announcement is close, reflecting growing political concern about digital safety and youth protection.

The Ministry of Digital Governance intends to rely on the Kids Wallet application, introduced last year, as a mechanism for enforcing the measure instead of developing a new control framework.

Government planning is advanced, yet the precise timing of the announcement by Prime Minister Kyriakos Mitsotakis has not been finalised.

In addition to the legislative initiative in Greece, the European debate on children’s online safety is intensifying.

Spain recently revealed plans to prohibit social media access for those under sixteen and to create legislation that would hold platform executives personally accountable for hate speech.

Such moves illustrate how governments are seeking to shape the digital environment for younger users rather than leaving regulation solely in private hands.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

India pushes Meta to justify WhatsApp’s data-sharing

The Supreme Court of India has delivered a forceful warning to Meta after judges said the company could not play with the right to privacy.

The court questioned how WhatsApp monetises personal data in a country where the app has become the de facto communications tool for hundreds of millions of people. Judges added that meaningful consent is difficult when users have little practical choice.

Meta was told not to share any user information while the appeal over WhatsApp’s 2021 privacy policy continues. Judges pressed the company to explain the value of behavioural data instead of relying solely on claims about encrypted messages.

Government lawyers argued that personal data was collected and commercially exploited in ways most users would struggle to understand.

The case stems from a major update to WhatsApp’s data-sharing rules that India’s competition regulator said abused the platform’s dominant position.

A significant penalty was issued before Meta and WhatsApp challenged the ruling at the Supreme Court. The court has now widened the proceedings by adding the IT ministry and has asked Meta to provide detailed answers before the next hearing on 9 February.

WhatsApp is also under heightened scrutiny worldwide as regulators examine how encrypted platforms analyse metadata and other signals.

In India, broader regulatory changes, such as new SIM-binding rules, could restrict how small businesses use the service rather than broadening its commercial reach.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Microsoft expands software security lifecycle for AI-driven platforms

AI is widening the cyber risk landscape and forcing security teams to rethink established safeguards. Microsoft has updated its Secure Development Lifecycle to address AI-specific threats across design, deployment and monitoring.

The updated approach reflects how AI can blur trust boundaries by combining data, tools, APIs and agents in one workflow. New attack paths include prompts, plugins, retrieved content and model updates, raising risks such as prompt injection and data poisoning.

Microsoft says policy alone cannot manage non-deterministic systems and fast iteration cycles. Guidance now centres on practical engineering patterns, tight feedback loops and cross-team collaboration between research, governance and development.

Its SDL for AI is organised around six pillars: threat research, adaptive policy, shared standards, workforce enablement, cross-functional collaboration and continuous improvement. Microsoft says the aim is to embed security into every stage of AI development.

The company also highlights new safeguards, including AI-specific threat modelling, observability, memory protections and stronger identity controls for agent workflows. Microsoft says more detailed guidance will follow in the coming months.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom expands scrutiny of X over Grok deepfake concerns

The British regulator, Ofcom, has released an update on its investigation into X after reports that the Grok chatbot had generated sexual deepfakes of real people, including minors.

As such, the regulator initiated a formal inquiry to assess whether X took adequate steps to manage the spread of such material and to remove it swiftly.

X has since introduced measures to limit the distribution of manipulated images, while the ICO and regulators abroad have opened parallel investigations.

The Online Safety Act does not cover all chatbot services, as regulation depends on whether a system enables user interactions, provides search functionality, or produces pornographic material.

Many AI chatbots fall partly or entirely outside the Act’s scope, limiting regulators’ ability to act when harmful content is created during one-to-one interactions.

Ofcom cannot currently investigate the standalone Grok service for producing illegal images because the Act does not cover that form of generation.

Evidence-gathering from X continues, with legally binding information requests issued to the company. Ofcom will offer X a full opportunity to present representations before any provisional findings are published.

Enforcement actions take several months, since regulators must follow strict procedural safeguards to ensure decisions are robust and defensible.

Ofcom added that people who encounter harmful or illegal content online are encouraged to report it directly to the relevant platforms. Incidents involving intimate images can be reported to dedicated services for adults or support schemes for minors.

Material that may constitute child sexual abuse should be reported to the Internet Watch Foundation.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

EU moves closer to decision on ChatGPT oversight

The European Commission plans to decide by early 2026 whether OpenAI’s ChatGPT should be classified as a vast online platform under the Digital Services Act.

OpenAI’s tool reported 120.4 million average monthly users in the EU back in October, a figure far above the 45-million threshold that triggers more onerous obligations instead of lighter oversight.

Officials said the designation procedure depends on both quantitative and qualitative assessments of how a service operates, together with input from national authorities.

The Commission is examining whether a standalone AI chatbot can fall within the scope of rules usually applied to platforms such as social networks, online marketplaces and significant search engines.

ChatGPT’s user data largely stems from its integrated online search feature, which prompts users to allow the chatbot to search the web. The Commission noted that OpenAI could voluntarily meet the DSA’s risk-reduction obligations while the formal assessment continues.

The EU’s latest wave of designations included Meta’s WhatsApp, though the rules applied only to public channels, not private messaging.

A decision on ChatGPT that will clarify how far the bloc intends to extend its most stringent online governance framework to emerging AI systems.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!