Italy orders Amazon to stop processing sensitive employee data after privacy ruling

The Italian data protection authority has ordered Amazon Italia Logistics to halt processing of sensitive employee data after investigators found that the company gathered details ranging from health conditions to union involvement.

Information about workers’ private lives and family members had also been collected, often retained for a decade through internal tracking systems rather than being limited to what labour rules in Italy allow.

Regulators discovered that some data originated from cameras positioned near restrooms and staff break areas, a practice that breached EU privacy standards.

The watchdog concluded that the company’s monitoring went far beyond what employers are permitted to compile when assessing staff performance or workplace needs.

Amazon responded by stressing that protecting employee information remains a priority and said that internal rules and training programmes are designed to ensure compliance. The company added that any findings from the Italian authority would prompt a review of its procedures instead of being dismissed.

The order arrives as Amazon attempts to regain its lobbying badges at the European Parliament.

Access was suspended in 2024 after senior representatives declined to attend hearings on warehouse working conditions, and opposition from MEPs continues to place pressure on Parliament President Roberta Metsola to reject reinstatement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU moves to enforce digital fairness rules with stronger consumer oversight

Regulatory scrutiny of the EU’s digital fairness framework is set to begin on 1 July as the European Commission moves to tighten its supervision of online platforms.

The initiative forms part of a broader effort to strengthen consumer protection across digital markets, with officials signalling stricter oversight of commercial practices that disadvantage users.

The Commission is preparing a major upgrade of its consumer protection framework, expected by December 2026.

The reforms aim to reinforce enforcement tools under the Unfair Commercial Practices Directive and the Consumer Protection Cooperation Regulation, allowing regulators to intervene more effectively when platforms breach fairness standards.

Michael McGrath, Commissioner for Democracy, Justice and Rule of Law, has highlighted the need for greater transparency and accountability as digital markets expand rapidly.

The forthcoming scrutiny focuses on ensuring that platforms respect transparency obligations, avoid manipulating users and provide fair conditions in online transactions.

Regulators seek to replace fragmented enforcement with a more coordinated model that reflects the increasingly cross-border nature of digital commerce.

Stronger consumer safeguards are becoming central to the digital agenda of the EU.

The next phase of reforms is expected to streamline investigations across member states and deliver more predictable outcomes for affected consumers, offering steadier enforcement instead of reactive measures taken after violations escalate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ESMA sets guidance for crypto perpetuals and CFDs

The European Securities and Markets Authority (ESMA) has clarified that many crypto-perpetual contracts, including those for Bitcoin and Ether, are likely to be classified as contracts for difference (CFDs).

Due to their leverage, complexity, and risk, these products should target a narrow audience, with distribution strategies aligned accordingly.

The announcement came as Kraken launched perpetual futures for ten tokenised assets, including major indices, gold, and top tech and crypto stocks. ESMA warned that mass marketing or promotions targeting inexperienced investors are inappropriate under its guidance.

Firms must ensure that derivatives falling within the CFD category comply with product intervention requirements, including leverage limits, risk warnings, margin close-outs, negative balance protection, and a ban on incentives or benefits.

Non-advised services must include an appropriateness assessment to protect investors from unsuitable offerings.

ESMA also emphasised the importance of identifying and managing conflicts of interest arising from these products. The statement seeks to ensure firms market and distribute leveraged crypto products responsibly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta AI flood of unusable abuse tips overwhelms US investigators

Investigators in the US say that AI used by Meta is flooding child protection units with large volumes of unhelpful reports, thereby draining resources rather than assisting ongoing cases.

Officers in the Internet Crimes Against Children network told a New Mexico court that most alerts generated by the company’s platforms lack essential evidence or contain material that is not criminal, leaving teams unable to progress investigations.

Meta rejects the claim that it prioritises profit, stressing its cooperation with law enforcement and highlighting rapid response times to emergency requests.

Its position is challenged by officers who say the volume of AI-generated alerts has doubled since 2024, particularly after the Report Act broadened reporting obligations.

They argue that adolescent conversations and incomplete data now form a sizeable portion of the alerts, while genuine cases of child sexual abuse material are becoming harder to detect.

Internal company documents disclosed at trial show Meta executives raising concerns as early as 2019 about the impact of end-to-end encryption on the firm’s ability to identify child exploitation and support investigators.

Child safety groups have long warned that encryption could limit early detection, even though Meta says it has introduced new tools designed to operate safely within encrypted environments.

The growing influx of unusable tips is taking a heavy toll on investigative teams. Officers in the US say each report must still be reviewed manually, despite the low likelihood of actionable evidence, and this backlog is diminishing morale at a time when they say resources have not kept pace with demand.

They warn that meaningful cases risk being delayed as units struggle with a workload swollen by AI systems tuned to avoid regulatory penalties rather than investigative value.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Colorado targets AI chatbot safety

AI chatbots operating in Colorado would face new child safety and suicide prevention requirements under a bipartisan bill introduced in the Colorado legislature. Lawmakers say the measure responds to parents' concerns about harmful chatbot interactions.

House Bill 1263 would require companies to clearly inform children in Colorado that they are interacting with AI rather than a real person. Platforms would also be barred from offering engagement rewards to child users.

The proposal mandates reasonable safeguards to prevent sexually explicit content and to stop chatbots from encouraging emotional dependence, including romantic role-playing. Parental control options would also be required where services are accessible to children in Colorado.

Companies would need to provide suicide prevention resources when users express self-harm thoughts and report such incidents to the Colorado attorney general. Violations would be treated as consumer protection infractions, carrying fines of up to $1,000 per occurrence in Colorado.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Conduent breach exposes data of 25 million people across US

More than 25 million people across the United States have had personal information exposed following a ransomware attack on government contractor Conduent. Updated state breach notifications indicate the incident is larger than initially understood.

Conduent provides printing, payment processing, and benefit administration services for state agencies and large corporations. Its systems support food assistance, unemployment benefits, and workplace programmes, reaching more than 100 million individuals, according to the company.

State disclosures show Oregon and Texas account for most of the affected records, with additional cases reported in Massachusetts, New Hampshire, and Washington. Compromised data includes names, dates of birth, addresses, Social Security numbers, health insurance information, and medical details.

Public information from Conduent has been limited since the January 2025 attack. An incident notice published in October carried a ‘noindex’ tag in its source code, preventing search engines from listing the page, which critics say reduced visibility for affected individuals.

The breach ranks among the largest recent ransomware incidents, though it is smaller than the 2024 Change Healthcare attack that affected 190 million people. Regulators and affected users continue seeking clarity on the Conduent case and its security failures.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

CarGurus data leak surfaces as ShinyHunters publishes archive

The ShinyHunters extortion group has published a 6.1GB archive, which it claims contains more than 12 million records stolen from CarGurus, a US-based automotive platform. Have I Been Pwned listed the dataset, reporting that roughly 3.7 million records appear to be new.

The exposed information includes email addresses, IP addresses, full names, phone numbers, physical addresses, user account IDs, and finance-related application data belonging to CarGurus users. Dealer account details and subscription information were also reportedly included in the archive.

CarGurus has not issued a public statement confirming a breach. However, Have I Been Pwned said it attempts to verify the authenticity of datasets before adding them to its database, suggesting the leaked material has undergone some validation.

Security experts warn that the availability of the data could increase the risk of phishing. Users are advised to remain cautious of unsolicited communications and potential scams that may leverage the exposed personal information.

ShinyHunters has recently claimed attacks against multiple large organisations across telecoms, fintech, retail, and media. The group is known for using social engineering tactics, including voice phishing and malicious OAuth applications, to gain access to SaaS platforms and extract customer data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI automation quietly reshapes core insurance operations

A Business Reporter analysis notes that AI in the insurance sector has progressed from pilots and back-office experiments to core operational automation, spanning underwriting, claims processing, customer servicing, document interpretation and financial workflows.

This shift is driven by the need to reduce high operating costs, estimated at roughly 22% of global premiums, which have long limited the industry’s growth and agility.

Modern AI systems are increasingly deployed as intelligent processing layers that interpret applications, policy documents and financial records, route work, reconcile data and assist human judgement without requiring wholesale replacement of legacy systems.

Insurers see potential for real-time underwriting support, dramatically faster claims intake and near-instant reconciliation of finance tasks, enabling staff to shift focus from repetitive administration to higher-value activities such as risk assessment, customer relationships and portfolio insights.

The commentary suggests that resistance to broader AI adoption in insurance is cultural rather than technical, as the industry’s traditionally cautious stance can slow integration even when automation delivers measurable value.

The core message is that AI’s role in insurance is not to replace humans but to remove friction and elevate human work by automating routine functions efficiently and at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Western Balkans closer to the EU roaming free zone

The European Commission has proposed opening negotiations to bring Albania, Bosnia and Herzegovina, Kosovo, Montenegro, North Macedonia, and Serbia into the EU’s ‘Roam Like at Home’ regime. The move would allow citizens and businesses to use their mobile phones across borders without incurring additional roaming charges, once the necessary agreements are finalised and the rules are aligned.

If implemented, travellers between the EU and the Western Balkans would be able to make calls, send text messages, and use mobile data at domestic rates. This would apply both to Western Balkan visitors in the EU and to EU citizens travelling in the region, ensuring seamless connectivity without unexpected costs.

The change would make travel for study, work, and tourism more affordable and practical. By removing roaming surcharges, the initiative aims to simplify cross-border communication and strengthen economic and social ties between the two regions.

To move forward, the European Commission has adopted proposals for negotiating mandates and is now seeking authorisation from the Council of the EU to begin formal talks. Once approved, the Commission will negotiate bilateral agreements with each Western Balkan partner. After successful alignment with EU roaming rules, the countries would join the EU's roaming area.

The proposal builds on existing voluntary arrangements between some EU and Western Balkan mobile operators, which already offer reduced roaming charges. It also complements the regional roaming agreement within the Western Balkans, where lower tariffs are already in place.

More broadly, the initiative reflects the EU’s gradual integration strategy outlined in the 2023 Growth Plan for the Western Balkans. By progressively extending elements of the EU Single Market to candidate countries, the plan aims to deliver practical benefits to citizens and businesses before full EU membership, while keeping the enlargement process on track.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPS and regulators unite to address misuse of AI imagery across jurisdictions

The European Data Protection Supervisor (EDPS) and authorities from 61 jurisdictions issued a joint statement on AI-generated imagery, warning about tools that create realistic depictions of identifiable individuals without consent. The move underscores concerns over privacy, dignity and child safety.

Authorities said advances in AI image and video tools, especially when integrated into social media platforms, have enabled non-consensual intimate imagery, defamatory depictions, and other harmful content. Children and vulnerable groups are seen as particularly at risk.

The EDPS and the other signatories reminded organisations that AI content-generation systems must comply with applicable data protection and privacy laws. They stressed that creating non-consensual intimate imagery may constitute a criminal offence in many jurisdictions.

Organisations are urged to implement safeguards against misuse of personal data, ensure transparency about system capabilities and uses, and provide accessible mechanisms for swift content removal. Stronger protections and age-appropriate information are expected where children are involved.

Authorities signalled plans for coordinated responses, including enforcement, policy development and education initiatives. The EDPS and fellow signatories urged organisations to engage proactively with regulators and ensure innovation does not undermine fundamental rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!