GDPR changes debated as EU seeks balance on data protection rules

Debate over potential updates to the GDPR is intensifying, as MEP Marina Kaljurand advocates a focused ‘fitness check’ rather than sweeping changes through an omnibus package.

Concerns raised in the European Parliament highlight the risks of altering foundational elements of the regulation, particularly its definition of personal data. Preserving these core principles is seen as essential to maintaining the integrity of the EU’s data protection framework.

Ongoing discussions reflect broader policy tensions within the EU, where efforts to reduce regulatory complexity must be balanced against the need to uphold strong privacy safeguards. Proposals for simplification are therefore facing scrutiny from lawmakers prioritising stability and legal clarity.

Future developments are likely to shape how the EU adapts its data protection rules to evolving digital markets, while ensuring that existing protections remain effective in a rapidly changing technological environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US senators question Meta facial recognition in smart glasses

Three Democratic senators have raised concerns about Meta’s reported exploration of facial recognition in its smart glasses, warning that it could normalise public surveillance. In a letter to CEO Mark Zuckerberg, Senators Edward Markey, Ron Wyden, and Jeff Merkley asked about consent, biometric data, and the risks of misuse.

The lawmakers said the proposed feature ‘risks normalising mass surveillance at a moment when the federal government is using similar tools to intimidate protesters and chill speech. Although facial recognition may offer real benefits for blind and visually impaired users, Meta’s history of failing to protect user privacy raises serious questions about its plan to deploy this technology in its smart glasses.’

‘Americans do not consent to biometric data collection simply by walking down a public street, entering a café, or standing in a crowd,’ the senators added. ‘Yet, the deployment of this technology would appear to do exactly that – subjecting countless individuals to covert identification without notice, without consent, and without any meaningful opportunity to opt out.’ They warned that such practices would erode longstanding expectations of privacy in public spaces, effectively eliminating public anonymity.

Concerns grew after reports of US Border Patrol and ICE agents using Meta smart glasses. While there is no evidence of facial recognition use, senators argue that adding identification tools to eyewear could expand undetectable surveillance. The letter asks whether Meta might link facial data with information from its platforms, enabling real-time identification tied to profiles. Lawmakers warn that this could increase the risks of harassment and targeting.

Meta discontinued facial recognition on Facebook in 2021, citing societal concerns. The senators argue that reintroducing similar technology in wearable devices suggests a shift rather than a retreat. ‘Five years later, Meta appears less worried about those societal concerns and is reportedly planning to deploy facial recognition technology in one of the most dangerous possible settings,’ they wrote.

‘Moreover,’ they continued, ‘Meta is apparently aware of the risks with this technology,’ noting that ‘an internal memo recommended launching the product “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns”.’

‘In other words,’ the senators added, ‘Meta appears to recognise the serious privacy and civil liberties risks of facial recognition but thinks it can avoid attention by slipping the once-abandoned, ethically fraught product back onto the market while the world is distracted by the Trump administration’s daily chaos.’

The senators have asked Meta to clarify how it would obtain consent from both users and bystanders, how long it would retain biometric data, whether it would use it to train AI models, and whether it could share it with law enforcement, including the Department of Homeland Security. The company has been given until 6 April to respond.

NSA warns of AI supply chain risks in new cybersecurity guidance

The National Security Agency has released new guidance on managing risks across the AI supply chain, highlighting growing cybersecurity concerns tied to AI and machine learning systems. The joint information sheet outlines how organisations can better assess vulnerabilities when deploying or sourcing AI technologies.

The document defines the AI and machine learning supply chain as a combination of key components, including training data, models, software, infrastructure, hardware, and third-party services. Each element can introduce risks affecting confidentiality, integrity, or availability, particularly as advanced tools such as large language models and AI agents become more widely adopted.

Security risks associated with data include bias, poisoning attacks, and exposure via techniques such as model inversion and data extraction. For models, the guidance warns of hidden backdoors, malware, evasion attacks, and model manipulation. Organisations are advised to use trusted sources, perform integrity checks, and maintain verified model registries to mitigate such threats.

The paper also highlights software and infrastructure vulnerabilities, noting that AI systems often rely on complex dependencies that expand the attack surface. Recommended measures include malware scanning, testing, patching, and maintaining software bills of materials. Additional risks arise from third-party services, which may introduce weaknesses through their own supply chains or shared environments.

To manage these risks, organisations are urged to improve visibility across their AI ecosystems, identify suppliers and subcontractors, and require documentation such as AI and software bills of materials. The guidance aligns with frameworks from the National Institute of Standards and Technology and MITRE, reinforcing the need for coordinated approaches to AI supply chain security.
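The bills of materials mentioned above are typically exchanged in a standard format such as CycloneDX. A sketch of what a minimal, CycloneDX-style component inventory might look like, generated in Python (the component names and versions are illustrative, and only a small subset of the schema’s fields is shown):

```python
import json


def minimal_sbom(components):
    """Build a minimal CycloneDX-style bill of materials as a Python dict.

    `components` is a list of (name, version, type) tuples; the field names
    follow the CycloneDX JSON schema, but this sketch covers only a tiny
    subset of it.
    """
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {"type": ctype, "name": name, "version": version}
            for name, version, ctype in components
        ],
    }


# Example: declare a model artefact and a library dependency it relies on.
sbom = minimal_sbom([
    ("sentiment-v2.onnx", "2.0", "machine-learning-model"),
    ("onnxruntime", "1.17.0", "library"),
])
print(json.dumps(sbom, indent=2))
```

Listing models alongside software dependencies in one inventory is what lets an organisation answer the guidance’s core question: which suppliers and subcontractors actually sit behind a deployed AI system.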

AI makes strides in mathematical reasoning

AI systems are increasingly being tested on advanced mathematical problems as researchers assess their reasoning abilities. Competitions such as the Putnam exam have become benchmarks for evaluating performance.

Recent results suggest some AI models can achieve scores comparable to top human participants, although the benchmarks themselves face scrutiny. Experts caution that such tests may not reflect real-world mathematical research or practical problem-solving.

Researchers have also explored AI-generated proofs for longstanding mathematical questions. Verification tools are being used to confirm results and reduce errors often produced by AI systems.

Mathematicians say AI can support brainstorming and research, but still requires human oversight. Analysts describe performance as uneven, with strong results in some areas and clear limitations in others.

AI reshapes India IT services outlook

India’s $300bn outsourcing industry is facing mounting pressure as AI tools threaten to disrupt traditional business models. A recent sell-off in technology stocks reflects investor concern over automation replacing labour-intensive services.

Fears intensified after new AI tools demonstrated the ability to automate legal, compliance and data processes. Analysts warn such advances could reduce demand for routine IT services and reshape client engagements.

Industry leaders in India argue AI will also create opportunities, particularly in consulting and system modernisation. Firms expect partnerships with AI developers to drive new areas of growth despite near-term disruption.

Revenue growth may slow, and hiring could remain subdued as the sector adapts. Analysts in India expect a gradual shift towards outcome-based services while companies invest in new AI capabilities.

EU delays tech sovereignty package with AI and Chips Act 2

The European Commission has delayed a flagship tech sovereignty package for the second time, according to its latest College agenda. The measures are now scheduled for adoption on 27 May, after previously being postponed from March to April.

The package includes several major initiatives aimed at strengthening EU tech sovereignty, such as the Cloud and AI Development Act, the Chips Act 2, an open-source strategy, and a roadmap for digitalisation and AI in energy. European Commission officials have not provided a reason for the latest delay.

The Cloud and AI Development Act is expected to define what constitutes a ‘sovereign’ cloud and simplify rules for building data centres. The proposal is designed to accelerate infrastructure development as Europe seeks to compete in the global AI race.

Chips Act 2 will follow up on the EU’s earlier semiconductor strategy, which struggled to boost domestic chip production significantly. The new proposal is expected to refine industrial policy efforts to reduce reliance on foreign suppliers.

Meanwhile, the planned open-source strategy aims to support European software ecosystems and reduce dependence on large US technology firms. By encouraging commercially viable open-source projects, the EU hopes to strengthen its long-term digital autonomy.

UN calls for global action against online scam networks

Online scam networks operating across Southeast Asia are defrauding victims worldwide, using AI, impersonation techniques, and complex cyber tools to steal billions of dollars.

At the Global Fraud Summit in Vienna, the UN Office on Drugs and Crime (UNODC) and INTERPOL brought together governments, law enforcement, and private-sector actors to strengthen international cooperation against these crimes.

Victims include individuals from diverse backgrounds, often highly educated and financially experienced. One Australian couple, Kim and Allan Sawyer, lost more than $2.5 million after engaging with what appeared to be a legitimate investment opportunity. ‘The scammer was extraordinarily believable,’ Kim Sawyer said. ‘He had a British accent, used all the right financial market terms and knew how to induce us by appearing credible every time.’

UNODC officials warn that these operations extend beyond fraud, forming part of a broader criminal ecosystem that also involves human trafficking, corruption, and money laundering.

‘We need to be looking into prosecuting high-level criminals, following the money through financial investigations and identifying the giant networks that operate behind these operations,’ said Delphine Schantz, UNODC’s regional representative for Southeast Asia and the Pacific.

Authorities say the scale and complexity of these crimes require a coordinated global response to dismantle scam networks effectively. ‘The complexity of these crimes requires an equally complex, whole-of-government approach and enhanced coordination among governments, financial intelligence units and digital banks,’ Schantz added.

Investigations in countries such as the Philippines and Cambodia have revealed how scam networks operate on the ground. In Manila, a raid on a former scam compound uncovered facilities used to control trafficked workers, along with evidence of corruption linked to local officials. ‘How do you prove a cybercrime in 36 hours? It is not possible,’ said the operations director of the Philippines’ Presidential Anti-Organised Crime Commission (PAOCC), recalling the challenges investigators faced during early raids.

In Cambodia, international prosecutors and investigators have focused on improving cooperation mechanisms, including extradition, asset recovery, and the handling of digital evidence. These efforts are seen as critical in addressing the cross-border nature of scam networks.

Despite increased enforcement efforts, these networks continue to adapt and relocate, maintaining a global reach. At recent international meetings, including a summit in Bangkok involving nearly 60 countries and major technology firms, officials agreed on the need for shared intelligence, joint investigations and coordinated prosecutions.

Victims continue to call for stronger responses. ‘The scammer works twice: they take your money, and they take your soul. They really do. They take your self-worth. And then, you feel like you’re being scammed again, by the authorities’ lack of response,’ Sawyer said.

AI agents test limits of EU rules

AI agents are rapidly gaining traction, raising questions about whether existing EU rules can keep pace. Unlike chatbots, these systems can act autonomously and interact with digital tools on behalf of users.

Experts warn that AI agents require deeper access to personal data and online services to function effectively. Regulators in Europe are monitoring potential risks as the technology becomes more integrated into daily life.

Lawmakers are examining whether current legislation, such as the AI Act and GDPR, adequately covers agent-based systems. Legal experts highlight challenges around contracts, liability and accountability when AI acts independently.

Despite concerns, many governments remain reluctant to introduce new rules, citing regulatory fatigue. Policymakers may rely on existing frameworks unless major incidents force a reassessment of AI oversight.

Publishers challenge OpenAI over alleged copyright infringement

Legal pressure is increasing on OpenAI as Encyclopaedia Britannica and Merriam-Webster file a lawsuit accusing the company of large-scale copyright violations.

According to the complaint, nearly 100,000 copyrighted articles were allegedly used without authorisation to train large language models. Publishers also argue that AI-generated outputs can reproduce parts of their content, raising concerns about unauthorised distribution.

Additional claims focus on how AI systems retrieve and present information. The lawsuit argues that retrieval-augmented generation tools may rely on proprietary databases, potentially undermining publishers’ business models by reducing traffic to original sources.

Concerns are also raised about inaccurate outputs attributed to publishers, which could affect trust in established information providers. The case highlights ongoing tensions between AI development and intellectual property protections.

Growing legal disputes involving media organisations, including The New York Times, suggest that courts will play a key role in defining how copyrighted material can be used in AI training.

ECA Digital law raises pressure on Big Tech in Brazil

Brazil is set to enforce a new law aimed at strengthening protections for children online, marking a significant shift in how digital platforms are regulated in the country. The legislation, known as ECA Digital, introduces stricter rules for technology companies and will test whether stronger oversight can translate into real-world impact.

The law, which takes effect this week, allows authorities to impose warnings and fines of up to $10 million for violations. In severe cases, courts may order the suspension or banning of platforms operating in Brazil. The measure was passed rapidly following public outrage over online content involving the sexualisation of minors.

ECA Digital builds on Brazil’s existing child protection framework and adapts it to the digital environment. It introduces obligations such as age verification, stricter content moderation, and mechanisms to remove harmful material involving minors without requiring a court order.

The law also targets platform design, requiring companies to limit features that may encourage compulsive use among children. This includes restrictions on excessive notifications, profiling for targeted advertising, and design elements that prolong user engagement.

Enforcement of ECA Digital will be led by Brazil’s data protection authority, ANPD, alongside a new screening centre within the Federal Police. However, implementation challenges remain, including limited regulatory capacity and the short timeline between the law’s approval and enforcement.

Experts say the law reflects a broader global trend, with dozens of countries considering similar measures. While technology companies have introduced tools such as age verification and parental controls, critics argue that bigger changes to platform design and content moderation are still needed.

Brazil’s experience may serve as a test case for how governments balance child protection, platform responsibility, and enforcement capacity. The effectiveness of ECA Digital will depend not only on its legal framework but also on how rigorously it is applied in practice.
