Doctolib fined €4.67 million for abusing market dominance

France’s competition authority has fined Doctolib €4.67 million for abusing its dominant position in online medical appointment booking and teleconsultation services. The regulator found that Doctolib used exclusivity clauses and tied selling to restrict competition and strengthen its market control.

Doctolib required healthcare professionals to subscribe to its appointment booking service to use its teleconsultation platform, effectively preventing them from using rival providers. Contracts also included clauses discouraging professionals from signing with competing services.

The French authority also sanctioned Doctolib for its 2018 acquisition of MonDocteur, describing it as a strategy to eliminate its main competitor. Internal documents revealed that the merger aimed to remove MonDocteur’s product from the market and reduce pricing pressure.

The decision marks the first application of the EU Court of Justice’s Towercast precedent to penalise a below-threshold merger as an abuse of dominance. Doctolib has been ordered to publish a summary of the ruling in Le Quotidien du Médecin and online.

Coca-Cola enhances its AI-powered Christmas ad to fix last year’s visual flaws

Coca-Cola has released an improved AI-generated Christmas commercial after last year’s debut campaign drew criticism for its unsettling visuals.

The latest ‘Holidays Are Coming’ ads, developed in part by San Francisco-based Silverside, showcase more natural animation and a wider range of festive creatures, replacing the eerily lifelike characters that unsettled audiences last year.

The new version avoids the ‘uncanny valley’ effect that plagued the 2024 ads. Coca-Cola’s use of generative AI reflects a wider advertising trend focused on speed and cost efficiency, even as creative professionals warn about its potential impact on traditional jobs.

Despite the efficiency gains, AI-assisted advertising remains labour-intensive. Teams of digital artists refine the content frame by frame to ensure realistic and emotionally engaging visuals.

Industry data show that 30% of commercials and online videos in 2025 were created or enhanced using generative AI, compared with 22% in 2023.

Coca-Cola’s move follows similar initiatives by major firms, including Google’s first fully AI-generated ad spot launched last month, signalling that generative AI is now becoming a mainstream creative tool across global marketing.

OpenAI outlines roadmap for AI safety, accountability and global cooperation

OpenAI has published new recommendations for managing rapid advances in AI, stressing the need for shared safety standards, public accountability, and resilience frameworks.

The company warned that while AI systems are increasingly capable of solving complex problems and accelerating discovery, they also pose significant risks that must be addressed collaboratively.

According to OpenAI, the next few years could bring systems capable of discoveries once thought centuries away.

The firm expects AI to transform health, materials science, drug development and education, while acknowledging that economic transitions may be disruptive and could require a rethinking of social contracts.

To ensure safe development, OpenAI proposed shared safety principles among frontier labs, new public oversight mechanisms proportional to AI capabilities, and the creation of a resilience ecosystem similar to the one that has grown up around cybersecurity.

It also called for regular reporting on AI’s societal impact to guide evidence-based policymaking.

OpenAI reiterated that the goal should be to empower individuals by making advanced AI broadly accessible, within limits defined by society, and to treat access to AI as a foundational public utility in the years ahead.

UK mobile networks and the Government launch a fierce crackdown on scam calls

Britain’s largest mobile networks have joined forces with the Government to tackle scam calls and texts. Through the second Telecommunications Fraud Charter, they aim to make the UK a harder target for fraudsters.

Under the charter, networks will upgrade their systems within a year to prevent foreign call centres from spoofing UK numbers, while advanced call tracing and AI technology will detect and block suspicious calls and texts before they reach users.

Clear commitments are also in place to support fraud victims, cutting the time they wait for help from their network to two weeks, so that victims receive prompt, specialist assistance and can recover with confidence.

Improved data sharing with law enforcement will make it easier to track down scammers and dismantle their operations, and cross-sector collaboration is intended to disrupt organised criminal networks before they can target the public.

Fraud is the UK’s most reported crime, causing financial losses and emotional distress. Scam calls also erode public trust in essential services and cost the telecoms industry millions each year.

The charter sets out measurable goals, ongoing monitoring, and best-practice guidance for networks, which aim to stay ahead of evolving scam tactics through AI tools, staff training, and public messaging.

International collaboration, such as joint UK-US action against Southeast Asian fraud centres, complements these efforts.

The initiative forms part of the Government’s wider Fraud Strategy and its plan to safeguard citizens.

Tinder tests AI feature that analyses photos for better matches

Tinder is introducing an AI feature called Chemistry, designed to better understand users through interactive questions and optional access to their Camera Roll. The system analyses personal photos and responses to infer hobbies and preferences, offering more compatible match suggestions.

The feature is being tested in New Zealand and Australia ahead of a broader rollout as part of Tinder’s 2026 product revamp. Match Group CEO Spencer Rascoff said Chemistry will become a central pillar in the app’s evolving AI-driven experience.

Privacy concerns have surfaced as the feature requests permission to scan private photos, similar to Meta’s recent approach to AI-based photo analysis. Critics argue that such expanded access offers limited benefits to users compared to potential privacy risks.

Match Group expects a short-term financial impact, projecting a $14 million revenue decline due to Tinder’s testing phase. The company continues to face user losses despite integrating AI tools for safer messaging, better profile curation and more interactive dating experiences.

EU Advocate General backs limited seizure of work emails in competition probes

An Advocate General of the Court of Justice of the European Union has said national competition authorities may lawfully seize employee emails during investigations without prior judicial approval. The opinion applies only when a strict legal framework and effective safeguards against abuse are in place.

The case arose after Portuguese medical companies challenged the competition authority’s seizure of staff emails, arguing it breached the right to privacy and correspondence under the EU Charter of Fundamental Rights. The authority acted under authorisation from the Public Prosecutor’s Office.

According to the Advocate General, such seizures may limit privacy and data protection rights under Articles 7 and 8 of the Charter, but remain lawful if proportionate and justified. The processing of personal data is permitted under the GDPR where it serves the public interest in enforcing competition law.

The opinion emphasised that access to business emails did not undermine the essence of data protection rights, as the investigation focused on professional communications. The final judgment from the CJEU is expected to clarify how privacy principles apply in competition law enforcement across the EU.

New law aims to make the internet safer in Singapore

Singapore’s Parliament has passed the Online Safety (Relief and Accountability) Bill, a landmark law designed to provide faster protection and redress for victims of online harm. After more than eight hours of debate, MPs approved the Bill, which will establish the Online Safety Commission (OSC) by June 2026. The OSC will be a one-stop agency empowered to direct online platforms, group administrators, and internet service providers to remove harmful content or restrict the accounts of perpetrators.

The move follows findings that social media platforms often take five days or more to act on harmful content reports, leaving victims exposed to harassment and abuse.

The new law introduces civil remedies and enforcement powers for a wide range of online harms, including harassment, doxing, stalking, intimate image abuse, and child exploitation. Victims can seek compensation for lost income or force perpetrators to surrender profits gained from harmful acts.

In severe cases, individuals or entities that ignore OSC orders may face fines of up to S$500,000, and daily penalties may be applied until compliance is achieved. The OSC can also order access blocks or app removals for persistent offenders.

Ministers Josephine Teo, Rahayu Mahzam, and Edwin Tong emphasised that the Bill aims to empower victims rather than punish expression, while ensuring privacy safeguards. Victims will be able to request the disclosure of a perpetrator’s identity to pursue civil claims, though misuse of such data, such as doxing in retaliation, will be an offence. The law also introduces a ‘no wrong door’ approach, ensuring that victims will not have to navigate multiple agencies to seek help.

Singapore joins a small group of nations, such as Australia, that have created specialised agencies for digital safety. The government hopes the OSC will help rebuild trust in online spaces and establish new norms for digital behaviour.

As Minister Teo noted, ‘Our collective well-being is compromised when those who are harmed are denied restitution. By fostering trust in online spaces, Singaporeans can participate safely and confidently in our digital society.’

Social media platforms ordered to enforce minimum age rules in Australia

Australia’s eSafety Commissioner has formally notified major social media platforms, including Facebook, Instagram, TikTok, Snapchat, and YouTube, that they must comply with new minimum age restrictions from 10 December.

The rule will require these services to prevent users under 16 from creating accounts.

eSafety determined that nine popular services currently meet the definition of age-restricted platforms because their main purpose is to enable online social interaction. Platforms that fail to take reasonable steps to block underage users may face enforcement measures, including fines of up to A$49.5 million.

The agency clarified that the list of age-restricted platforms will not remain static, as new services will be reviewed and reassessed over time. Others, such as Discord, Google Classroom, and WhatsApp, are excluded for now as they do not meet the same criteria.

Commissioner Julie Inman Grant said the new framework aims to delay children’s exposure to social media and limit harmful design features such as infinite scroll and opaque algorithms.

She emphasised that age limits are only part of a broader effort to build safer, more age-appropriate online environments supported by education, prevention, and digital resilience.

EU conference highlights the need for collaboration in digital safety and growth

European politicians and experts gathered in Billund for the conference ‘Towards a Safer and More Innovative Digital Europe’, hosted by the Danish Parliament.

The discussions centred on how to protect citizens online while strengthening Europe’s technological competitiveness.

Lisbeth Bech-Nielsen, Chair of the Danish Parliament’s Digitalisation and IT Committee, stated that the event demonstrated the need for the EU to act more swiftly to harness its collective digital potential.

She emphasised that only through cooperation and shared responsibility can the EU match the pace of global digital transformation and fully benefit from its combined strengths.

The first theme addressed online safety and responsibility, focusing on the enforcement of the Digital Services Act, child protection, and the accountability of e-commerce platforms importing products from outside the EU.

Participants highlighted the importance of listening to young people and improving cross-border collaboration between regulators and industry.

The second theme examined Europe’s competitiveness in emerging technologies such as AI and quantum computing. Speakers called for more substantial investment, harmonised digital skills strategies, and better support for businesses seeking to expand within the single market.

The Billund conference emphasised that Europe’s digital future depends on striking a balance between safety, innovation, and competitiveness, which can only be achieved through joint action and long-term commitment.

Kraken Pro unlocks crypto-collateralised futures for EU traders

Kraken Pro has expanded its offerings in the EU by allowing clients to use crypto, including BTC, ETH, and certain stablecoins, as collateral for more than 150 perpetual futures markets.

The move positions the platform among the first regulated venues in Europe to provide crypto-collateralised, USD-margined futures contracts. It combines flexibility, speed, and capital efficiency with compliance under MiFID II.

Using crypto as collateral enables traders to maintain exposure to their digital assets while accessing leveraged positions. Clients can post BTC, ETH, or stablecoins without converting to fiat, avoiding fees and delays.

The system also supports cross-asset hedging and stablecoin-backed trades, allowing users to manage risk and diversify strategies more efficiently.

Kraken Pro’s regulated futures comply with EU rules, offering up to 10x leverage and multi-asset collateral under MiCA and MiFID II supervision. The platform provides deep liquidity, tight spreads, and reliable execution for both individual and institutional traders, even in volatile market conditions.

To begin trading, clients must enable futures on Kraken EU, fund their accounts with crypto assets, select their preferred collateral, and then open or manage leveraged perpetual positions. The update enhances strategic options for both hedging and directional trades.
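
For a rough sense of the margin arithmetic behind this kind of offering, the short Python sketch below works through a hypothetical trade; the BTC price, position size, and simple linear margin model are assumptions for illustration only and do not reflect Kraken’s actual margining rules, collateral haircuts, or fees.

    # Illustrative sketch of crypto-collateralised futures margin arithmetic.
    # All figures and the linear margin model are assumptions for illustration;
    # Kraken's actual margin requirements, haircuts and fees will differ.

    btc_collateral = 0.5        # BTC posted as collateral (hypothetical)
    btc_price_usd = 100_000     # assumed BTC/USD price
    max_leverage = 10           # leverage cap cited for the EU offering

    collateral_value = btc_collateral * btc_price_usd    # 50,000 USD of collateral
    max_notional = collateral_value * max_leverage       # up to 500,000 USD notional

    position_notional = 200_000                          # hypothetical USD-margined position
    initial_margin = position_notional / max_leverage    # 20,000 USD of collateral tied up
    free_collateral = collateral_value - initial_margin  # 30,000 USD still available

    print(f"Collateral value:   ${collateral_value:,.0f}")
    print(f"Max notional (10x): ${max_notional:,.0f}")
    print(f"Initial margin:     ${initial_margin:,.0f}")
    print(f"Free collateral:    ${free_collateral:,.0f}")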
