AliExpress agrees to binding EU rules on data and transparency

AliExpress has agreed to legally binding commitments with the European Commission to comply with the Digital Services Act (DSA). These cover six key areas, including recommender systems, advertising transparency, and researcher data access.

The announcement on 18 June marks only the second time, after TikTok, that a major platform has formally committed to specific changes under the DSA.

The platform promised greater transparency in its recommendation algorithms, user opt-out from personalisation, and clearer information on product rankings. It also committed to allowing researchers access to publicly available platform data through APIs and customised requests.

However, the lack of clear definitions around terms such as ‘systemic risk’ and ‘public data’ may limit practical oversight.

AliExpress has also established an internal monitoring team to ensure implementation of these commitments. Yet experts argue that without measurable benchmarks and external verification, internal monitoring may not be enough to guarantee meaningful compliance or accountability.

The Commission, meanwhile, is continuing its investigation into the platform’s role in the distribution of illegal products.

These commitments reflect the EU’s broader enforcement strategy under the DSA, aiming to establish transparency and accountability across digital platforms. The agreement is a positive start but highlights the need for stronger oversight and clearer definitions for lasting impact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to MediaMatters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed their origin from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before MediaMatters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s advanced realism and the difficulty of detecting coded prompts make it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to filter out offensive material before it is created.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.


EU races to catch up in quantum tech amid cybersecurity fears

The European Union is ramping up efforts to lead in quantum computing, but cybersecurity experts warn that the technology could upend digital security as we know it.

In a new strategy published Wednesday, the European Commission admitted that Europe trails the United States and China in commercialising quantum technology, despite its strong academic presence. The bloc is now calling for more private investment to close the gap.

Quantum computing offers revolutionary potential, from drug discovery to defence applications. But its power poses a serious risk: it could break today’s internet encryption.

Current digital security relies on public key cryptography, built on mathematical problems that conventional computers cannot solve in any practical time. But quantum machines could one day break these codes with ease, making sensitive data readable to malicious actors.
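As a toy illustration of that dependence (the parameters below are deliberately tiny and purely illustrative, nothing like real key sizes): RSA security rests on the hardness of factoring the public modulus, which is exactly what Shor’s algorithm on a large quantum computer would make easy.

```python
# Toy RSA sketch. Real keys use primes hundreds of digits long; these tiny
# values only demonstrate the structure, not real security.
p, q = 61, 53            # secret primes
n = p * q                # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse; Python 3.8+)

msg = 42
cipher = pow(msg, e, n)          # anyone can encrypt with the public key
assert pow(cipher, d, n) == msg  # only the private key holder can decrypt

# An attacker who can factor n (trivial here by brute force, infeasible for
# classical computers at real key sizes, but feasible for Shor's algorithm)
# rebuilds the private key and reads the message:
p_found = next(c for c in range(2, n) if n % c == 0)
q_found = n // p_found
d_attacker = pow(e, -1, (p_found - 1) * (q_found - 1))
recovered = pow(cipher, d_attacker, n)
```

The ‘store now, decrypt later’ worry is visible here: `cipher` can be harvested today and decrypted whenever factoring `n` becomes feasible.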

Experts fear a ‘store now, decrypt later’ scenario, where adversaries collect encrypted data now and crack it once quantum capabilities mature. That could expose government secrets and critical infrastructure.

The EU is also concerned about losing control over homegrown tech companies to foreign investors. While Europe leads in quantum research output, it attracts only 5% of global private funding; the US and China together attract over 90%.

European cybersecurity agencies published a roadmap for transitioning to post-quantum cryptography to address the threat. The aim is to secure critical infrastructure by 2030 — a deadline shared by the US, UK, and Australia.

IBM recently said it could release a workable quantum computer by 2029, highlighting the urgency of the challenge. Experts stress that replacing encryption is only part of the task. The broader transition will affect billions of systems, requiring enormous technical and logistical effort.

Governments are already reacting. Some EU states have imposed export restrictions on quantum tech, fearing their communications could be exposed. Despite the risks, European officials say the worst-case scenarios are not inevitable, but doing nothing is not an option.


M&S eyes full online recovery by August after cyberattack

Marks & Spencer (M&S) expects its full online operations to be restored within four weeks, following a cyberattack that struck in April. Speaking at the retailer’s annual general meeting, CEO Stuart Machin said the company aims to resolve the majority of the incident’s impact by August.

The cyberattack, attributed to human error, forced M&S to suspend online sales and disrupted supply chain operations, including its Castle Donington distribution centre. The breach also compromised customer personal data and is expected to result in a £300 million hit to the company’s profit.

April marked the beginning of a multi-month recovery process, with M&S confirming by May that the breach involved a supply chain partner. By June, the financial and operational damage became clear, with limited online services restored and key features like click-and-collect still unavailable.

M&S’s e-commerce platform in Great Britain is now partially operational, but services such as next-day delivery remain offline. Machin stated that recovery is progressing steadily, with the goal of full functionality within weeks.

Julius Cerniauskas, CEO of web intelligence firm Oxylabs, highlighted the growing risks of social engineering in cyber incidents. He noted that while technical defences are improving, attackers continue to exploit human vulnerabilities to gain access.

Cerniauskas described the planned recovery timeline as a ‘solid achievement’ but warned that long-term reputational effects could persist. ‘It’s not a question of if you’ll be targeted – but when,’ he said, urging firms to bolster both human and technical resilience.

Executive pay may also be impacted by the incident. According to the Evening Standard, chairman Archie Norman said incentive compensation would reflect any related performance shortfalls. Norman added that systems are gradually returning online and progress is being made each week.


Qantas cyber attack sparks customer alert

Qantas is investigating a major data breach that may have exposed the personal details of up to six million customers.

The breach affected a third-party platform used by the airline’s contact centre to store sensitive data, including names, phone numbers, email addresses, dates of birth and frequent flyer numbers.

The airline discovered unusual activity on 30 June and responded by immediately isolating the affected system. While the full scope of the breach is still being assessed, Qantas expects the volume of stolen data to be significant.

However, it confirmed that no passwords, PINs, credit card details or passport numbers were stored on the compromised platform.

Qantas has informed the Australian Federal Police, the Australian Cyber Security Centre and the Office of the Australian Information Commissioner. CEO Vanessa Hudson apologised to customers and urged anyone concerned to call a dedicated support line. She added that airline operations and safety remain unaffected.

The incident follows recent cyber attacks on Hawaiian Airlines, WestJet and major UK retailers, reportedly linked to a group known as Scattered Spider. The breach adds to a growing list of Australian organisations targeted in 2025, in what privacy authorities describe as a worsening trend.


Tinder trials face scans to verify profiles

Tinder is trialling a facial recognition feature to boost user security and crack down on fraudulent profiles. The pilot is currently underway in the US, after initial launches in Colombia and Canada.

New users are now required to take a short video selfie during sign-up, which will be matched against profile photos to confirm authenticity. The app also compares the scan with other accounts to catch duplicates and impersonations.

Verified users receive a profile badge, and Tinder stores a non-reversible encrypted face map to aid in detection. The company claims all facial data is deleted when accounts are removed.
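Tinder has not published how its matching works, but duplicate detection of this kind is typically done by comparing numerical face templates (embeddings) rather than raw images. A minimal sketch under that assumption, with made-up vectors and an invented threshold:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_duplicate(new_template, stored_templates, threshold=0.9):
    """Flag the new face template if it closely matches any stored one."""
    return any(cosine_similarity(new_template, t) >= threshold
               for t in stored_templates)

# Illustrative 3-dimensional templates; real face embeddings have hundreds
# of dimensions produced by a trained recognition model.
stored = [[0.1, 0.9, 0.3], [0.8, 0.2, 0.5]]
print(is_duplicate([0.11, 0.88, 0.31], stored))  # near-match of the first template
```

Because only the template is stored and the embedding function is not invertible, the original photo cannot be reconstructed from it, which is the sense in which such a face map is ‘non-reversible’.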

The update follows a sharp rise in catfishing and romance scams, with over 64,000 cases reported in the US last year alone. Other measures introduced in recent years include photo verification, ID checks and location-sharing tools.


Coinbase privacy appeal rejected by US Supreme Court

The US Supreme Court has declined to hear an appeal from a Coinbase user, effectively allowing the Internal Revenue Service (IRS) to access user data without new restrictions.

The decision ends James Harper’s legal battle over the IRS’s broad request for user data, which he claimed violated constitutional privacy rights.

Harper’s challenge stemmed from a 2016 IRS summons demanding data from over 14,000 Coinbase users suspected of underreporting crypto income. Lower courts rejected his claims, citing the third-party doctrine, under which data voluntarily shared with an external platform loses constitutional privacy protection.

By refusing to take up the case, the Supreme Court leaves intact the precedent set by lower courts. The ruling confirms that centralised exchange users like those on Coinbase lack Fourth Amendment protection over government access to their financial data.

Experts warn the ruling could have broader implications beyond crypto. The outcome may reinforce the government’s ability to obtain user data from financial and technology platforms, potentially expanding surveillance powers across the digital economy.


Meta’s Facebook uses phone photos for AI if users allow it

Meta has introduced a new feature that allows Facebook to access and analyse users’ photos stored on their phones, provided they give explicit permission.

The move is part of a broader push to improve the company’s AI tools, especially after the underwhelming reception of its Llama 4 model. Users who opt in agree to Meta’s AI Terms of Service, which grant the platform the right to retain and use personal media for content suggestions.

The new feature, currently being tested in the US and Canada, is designed to offer Facebook users creative ideas for Stories by processing their photos and videos through cloud infrastructure.

When enabled, users may receive suggestions such as collages or travel highlights based on when and where images were captured, as well as who or what appears in them. However, participation is strictly optional and can be turned off at any time.

Facebook clarifies that the media analysed under the feature is not used to train AI models in the current test. Still, the system does upload selected media to Meta’s servers on an ongoing basis, raising privacy concerns.

The option to activate these suggestions can be found in the Facebook app’s settings, where users are asked whether they want camera roll data to inform sharing ideas.

Meta has been actively promoting its AI ambitions, with CEO Mark Zuckerberg pushing for the development of ‘superintelligence’. The company recently launched Meta Superintelligence Labs to lead these efforts.

Despite facing stiff competition from OpenAI, DeepSeek and Google, Meta appears determined to deepen its use of personal data to boost its AI capabilities.


Lung cancer caught early thanks to AI

A 69-year-old woman from Surrey has credited AI with saving her life after it detected lung cancer that human radiologists initially missed.

The software flagged a concerning anomaly in a chest X-ray that had been given the all-clear, prompting urgent follow-up and surgery.

NHS hospitals increasingly use AI tools like Annalise.ai, which analyses scans and prioritises urgent cases for radiologists.

Dianne Covey, whose tumour was caught at stage one, avoided chemotherapy or radiotherapy and has since made a full recovery.

With investments exceeding £36 million, the UK government and NHS are rapidly deploying AI to improve cancer diagnosis rates and reduce waiting times. AI has now been trialled or implemented across more than 45 NHS trusts and is also used for skin and prostate cancer detection.

Doctors and technologists say AI is not replacing medical professionals but enhancing their capabilities by highlighting critical cases and improving speed.

Experts warn that outdated machines, biased training data and over-reliance on consumer AI tools remain risks to patient outcomes.


Balancing security and usability in digital authentication

A report by the FIDO Alliance revealed that 53% of consumers observed an increase in suspicious messages in 2024, with SMS, emails, and phone calls being the primary vectors.

As digital scams and AI-driven fraud rise, businesses face growing pressure to strengthen authentication methods without compromising user experience.

No clear standard has emerged despite the range of available authentication options—including passkeys, one-time passwords (OTP), multi-factor authentication (MFA), and biometric systems.

Industry experts warn that focusing solely on advanced tools can lead to overlooking basic user needs. Minor authentication hurdles such as CAPTCHA errors have led to customer drop-offs and failed transactions.

Organisations are exploring risk-based, adaptive authentication models that adjust security levels based on user behaviour and context. The systems could eventually replace static logins with continuous, behind-the-scenes verification.
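A minimal sketch of what such a risk-based model might look like. Every signal name, weight, and threshold here is invented for illustration; real systems draw on far richer behavioural and contextual data.

```python
# Hypothetical risk-based adaptive authentication: score contextual signals,
# then decide how much friction to apply to the sign-in.

def risk_score(signin: dict) -> int:
    """Combine contextual signals into a single risk score (higher = riskier)."""
    score = 0
    if signin.get("new_device"):        score += 40
    if signin.get("unusual_location"):  score += 30
    if signin.get("impossible_travel"): score += 50
    if signin.get("off_hours"):         score += 10
    return score

def required_step(signin: dict) -> str:
    """Map the risk score to an authentication requirement."""
    score = risk_score(signin)
    if score >= 70:
        return "block"          # deny the attempt and alert the user
    if score >= 30:
        return "mfa_challenge"  # step up: OTP, passkey, or biometric check
    return "allow"              # low risk: silent, behind-the-scenes pass

print(required_step({}))                     # familiar context, no friction
print(required_step({"new_device": True}))   # unfamiliar device triggers step-up
```

The point of the design is that most sign-ins fall through to `"allow"` with no visible check, so the static login prompt disappears for legitimate users while high-risk attempts still hit a hard challenge.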

AI complicates the landscape further. As autonomous assistants handle tasks like booking tickets or making purchases, distinguishing legitimate user activity from malicious bots becomes increasingly tricky.

With no universal solution, experts say businesses must offer a flexible range of secure options tailored to user preferences. The challenge remains to find the right balance between security and usability in an evolving threat environment.
