UK enforces mandatory ETA as digital border era begins

Non-visa nationals can no longer enter the UK without digital permission, as the country has begun enforcing the mandatory Electronic Travel Authorisation (ETA).

Travellers from 85 nations, including the US, Canada and France, must obtain an ETA before departure; otherwise, airlines will prevent them from boarding rather than allow last-minute checks at the border. The authorisation costs £16 and remains valid for two years or until a passport expires.

British and Irish citizens remain exempt but must present valid proof of status when travelling. Authorities say the scheme brings the UK into line with similar systems used by the US and the EU.

The Home Office emphasises that the measure strengthens border security and supports a modern, efficient entry process designed to benefit both visitors and the wider public.

The requirement also applies to travellers passing through the UK to take connecting flights, reinforcing the shift toward a fully digital immigration system.

Over 19 million people have already used the ETA since its launch in 2023, generating significant revenue that is being reinvested in broader border improvements. Officials argue that the momentum paves the way for a future contactless border, supported by the steady transition from physical documents to eVisas.

From 26 February, Certificates of Entitlement will also be issued digitally, creating a single record that no longer expires with a passport.

Most ETA applications are processed automatically within minutes, so short-notice trips remain possible. However, authorities still recommend applying at least three working days before travel, as a small number of cases require additional review.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Colorado targets AI chatbot safety

AI chatbots operating in Colorado would face new child safety and suicide prevention requirements under a bipartisan bill introduced in the Colorado legislature. Lawmakers say the measure responds to parents' concerns about harmful chatbot interactions.

House Bill 1263 would require companies to clearly inform children in Colorado that they are interacting with AI rather than a real person. Platforms would also be barred from offering engagement rewards to child users.

The proposal mandates reasonable safeguards to prevent sexually explicit content and to stop chatbots from encouraging emotional dependence, including romantic role-playing. Parental control options would also be required where services are accessible to children in Colorado.

Companies would need to provide suicide prevention resources when users express thoughts of self-harm and report such incidents to the Colorado attorney general. Violations would be treated as consumer protection infractions, carrying fines of up to $1,000 per occurrence in Colorado.

Conduent breach exposes data of 25 million people across US

More than 25 million people across the United States have had personal information exposed following a ransomware attack on government contractor Conduent. Updated state breach notifications indicate the incident is larger than initially understood.

Conduent provides printing, payment processing, and benefit administration services for state agencies and large corporations. Its systems support food assistance, unemployment benefits, and workplace programmes, reaching more than 100 million individuals, according to the company.

State disclosures show Oregon and Texas account for most of the affected records, with additional cases reported in Massachusetts, New Hampshire, and Washington. Compromised data includes names, dates of birth, addresses, Social Security numbers, health insurance information, and medical details.

Public information from Conduent has been limited since the January 2025 attack. An incident notice published in October carried a ‘noindex’ tag in its source code, preventing search engines from listing the page, which critics say reduced visibility for affected individuals.
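The article does not reproduce Conduent's actual page markup, but a robots 'noindex' directive is typically expressed as an HTML meta tag. A check for such a tag can be sketched with Python's standard-library parser (the sample page below is hypothetical, not Conduent's notice):

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Flags pages carrying a robots 'noindex' directive in a <meta> tag."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        # A meta tag like <meta name="robots" content="noindex"> asks
        # search engines not to list the page in results.
        if attrs.get("name", "").lower() == "robots" and \
           "noindex" in attrs.get("content", "").lower():
            self.noindex = True

# Hypothetical incident-notice page for illustration only
page = '<html><head><meta name="robots" content="noindex"></head>' \
       '<body>Incident notice</body></html>'
detector = NoindexDetector()
detector.feed(page)
print(detector.noindex)  # True
```

A page marked this way stays reachable by direct link but is effectively invisible to anyone searching for it, which is why critics saw the tag as reducing visibility for affected individuals.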

The breach ranks among the largest recent ransomware incidents, though it is smaller than the 2024 Change Healthcare attack that affected 190 million people. Regulators and affected users continue seeking clarity on the Conduent case and its security failures.

Anthropic faces data theft claims from Musk

Elon Musk, CEO of Tesla and xAI, has publicly accused Anthropic of stealing large volumes of data to train its AI models. The allegation was made on X in response to posts referencing Community Notes attached to Anthropic-related content.

Musk claimed the company had engaged in large-scale data theft and suggested that it had paid multi-billion-dollar settlements. Those financial claims remain contested, and no official confirmation has been provided to substantiate the figures.

Anthropic, known for developing the Claude AI model, was founded by former OpenAI employees and promotes an approach centred on AI safety and responsible development. The company has not publicly responded to Musk’s latest accusations.

The dispute reflects a broader conflict across the AI industry over how companies collect the text, images and other materials required to train large language models. Much of this data is scraped from the internet, often without explicit permission from rights holders.

Multiple lawsuits filed by authors, media organisations and software developers are testing whether large-scale scraping qualifies as fair use under copyright law. Court rulings in these cases could reshape licensing practices, impose financial penalties, and alter the economics of AI development.

CarGurus data leak surfaces as ShinyHunters publishes archive

The ShinyHunters extortion group has published a 6.1GB archive, which it claims contains more than 12 million records stolen from CarGurus, a US-based automotive platform. Have I Been Pwned listed the dataset, reporting that roughly 3.7 million records appear to be new.

The exposed information includes email addresses, IP addresses, full names, phone numbers, physical addresses, user account IDs, and finance-related application data belonging to CarGurus users. Dealer account details and subscription information were also reportedly included in the archive.

CarGurus has not issued a public statement confirming a breach. However, Have I Been Pwned said it attempts to verify the authenticity of datasets before adding them to its database, suggesting a level of validation of the leaked material.

Security experts warn that the availability of the data could increase the risk of phishing. Users are advised to remain cautious of unsolicited communications and potential scams that may leverage the exposed personal information.

ShinyHunters has recently claimed attacks against multiple large organisations across telecoms, fintech, retail, and media. The group is known for using social engineering tactics, including voice phishing and malicious OAuth applications, to gain access to SaaS platforms and extract customer data.

National security concerns reshape US data policy

US policymakers are increasingly treating personal data as a dual-use asset that carries both economic value and national security risks. Regulators have raised concerns about sensitive information, including geolocation data linked to military personnel.

Measures such as the Protecting Americans' Data from Foreign Adversaries Act of 2024 and the Department of Justice Data Security Program aim to curb misuse by designated foreign adversaries. Both frameworks impose broad restrictions on cross-border data transfers.

Experts warn that compliance remains complex and uncertain, with companies adapting in what one adviser described as a fog. Enforcement signals have already emerged, including a draft noncompliance letter from the Federal Trade Commission and litigation.

Organizations are being urged to integrate national security expertise into privacy and cybersecurity teams. Observers say early preparation is essential as selective enforcement risks increase under strict but evolving US data protection regimes.

Western Balkans closer to the EU roaming free zone

The European Commission has proposed opening negotiations to bring Albania, Bosnia and Herzegovina, Kosovo, Montenegro, North Macedonia, and Serbia into the EU’s ‘Roam Like at Home’ regime. The move would allow citizens and businesses to use their mobile phones across borders without incurring additional roaming charges, once the necessary agreements are finalised and the rules are aligned.

If implemented, the scheme would let travellers between the EU and the Western Balkans make calls, send text messages, and use mobile data at domestic rates. This would apply both to Western Balkan visitors in the EU and to EU citizens travelling in the region, ensuring seamless connectivity without unexpected costs.

The change would make travel for study, work, and tourism more affordable and practical. By removing roaming surcharges, the initiative aims to simplify cross-border communication and strengthen economic and social ties between the two regions.

To move forward, the European Commission has adopted proposals for negotiating mandates and is now seeking authorisation from the European Council to begin formal talks. Once approved, the Commission will negotiate bilateral agreements with each Western Balkan partner. After successful alignment with the EU roaming rules, the countries would join the EU’s roaming area.

The proposal builds on existing voluntary arrangements between some EU and Western Balkan mobile operators, which already offer reduced roaming charges. It also complements the regional roaming agreement within the Western Balkans, where lower tariffs are already in place.

More broadly, the initiative reflects the EU’s gradual integration strategy outlined in the 2023 Growth Plan for the Western Balkans. By progressively extending elements of the EU Single Market to candidate countries, the plan aims to deliver practical benefits to citizens and businesses before full EU membership, while keeping the enlargement process on track.

EDPS and regulators unite to address misuse of AI imagery across jurisdictions

The European Data Protection Supervisor (EDPS) and authorities from 61 jurisdictions issued a joint statement on AI-generated imagery, warning about tools that create realistic depictions of identifiable individuals without consent. The move underscores concerns over privacy, dignity and child safety.

Authorities said advances in AI image and video tools, especially when integrated into social media platforms, have enabled non-consensual intimate imagery, defamatory depictions, and other harmful content. Children and vulnerable groups are seen as particularly at risk.

The EDPS and the other signatories reminded organisations that AI content-generation systems must comply with applicable data protection and privacy laws. They stressed that creating non-consensual intimate imagery may constitute a criminal offence in many jurisdictions.

Organisations are urged to implement safeguards against misuse of personal data, ensure transparency about system capabilities and uses, and provide accessible mechanisms for swift content removal. Stronger protections and age-appropriate information are expected where children are involved.

Authorities signalled plans for coordinated responses, including enforcement, policy development and education initiatives. The EDPS and fellow signatories urged organisations to engage proactively with regulators and ensure innovation does not undermine fundamental rights.

EU AI Act enforcement begins, reshaping startup compliance landscape

The first enforcement provisions of the EU AI Act entered into force on 2 February 2025, marking a turning point for Europe’s AI startup ecosystem. The initial phase targets ‘unacceptable risk’ systems, including social scoring, real-time biometric surveillance in public spaces, and manipulative AI practices.

Under the regulation, penalties can reach €35 million or 7% of global annual turnover, whichever is higher. Although the current enforcement covers only prohibited practices, the move signals that Europe’s AI rulebook is now operational rather than theoretical.
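The 'whichever is higher' rule means the penalty ceiling scales with company size rather than stopping at a flat amount. A minimal illustration (turnover figures hypothetical):

```python
def max_ai_act_fine(global_turnover_eur: float) -> float:
    """Ceiling for prohibited-practice fines under the EU AI Act:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# For a startup with EUR 10 million turnover, the EUR 35m floor applies;
# for a company with EUR 1 billion turnover, 7% (EUR 70m) exceeds it.
print(max_ai_act_fine(10_000_000))     # 35000000.0
print(max_ai_act_fine(1_000_000_000))  # 70000000.0
```

The crossover sits at EUR 500 million in turnover, above which the percentage-based figure dominates.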

Broader obligations for high-risk AI systems, such as hiring tools, credit scoring, and medical diagnostics, will apply from August 2026. Separate rules for general-purpose AI models are scheduled to take effect in August 2025.

Surveys from European SME groups indicate that many smaller technology companies feel unprepared. A significant share of respondents have not conducted formal risk classification of their AI systems, despite this being a foundational requirement under the EU AI Act's tiered framework.

While some founders warn that compliance costs could slow innovation, others point to long-term benefits from clearer governance standards. For startups, the coming months will focus on aligning products with AI Act risk tiers and strengthening documentation and oversight before stricter rules apply.

Reddit hit with a major ICO penalty over children’s privacy failures

The UK’s Information Commissioner’s Office has fined Reddit £14.47 million after finding that the platform unlawfully used children’s personal information and failed to put in place adequate age checks.

The regulator concluded that Reddit allowed children under 13 to access the platform without robust age-verification measures, leaving them exposed to content they were not able to understand or control.

Although Reddit updated its processes in July 2025, self-declaration remained easy to bypass, offering only a veneer of protection. Investigators also found that the company had not completed a data protection impact assessment until 2025, despite a large number of teenagers using the service.

Concerns were heightened by the volume of children affected and the risks created by relying on inadequate age checks.

The regulator noted that unlawful data processing occurred over a prolonged period, and that children were at risk of viewing harmful material while their information was processed without a lawful basis.

UK Information Commissioner John Edwards said companies must prioritise meaningful age assurance and understand the responsibilities set out in the Children’s Code.

The ICO said it will continue monitoring Reddit’s current controls and expects online platforms to align with robust age-assurance standards rather than rely on weak verification.

It will coordinate its oversight with Ofcom as part of broader efforts to strengthen online safety and ensure under-18s benefit from high privacy protections by default.
