UK debates digital ID vs national ID cards

Tony Blair, former UK Prime Minister, is advocating for digital identity as a solution to manage irregular migration, a pressing issue in the recent UK elections. In a piece for The Times addressed to Prime Minister Keir Starmer, Blair proposes leveraging AI and digital ID systems to enhance border controls and immigration management.

Blair emphasises the need for a robust digital identity framework, suggesting it could replace traditional national ID cards. This approach, he argues, could ensure accurate identification without the centralised databases or government-issued cards that sparked controversy in the past.

Despite Blair’s advocacy, UK government officials, including Business Secretary Jonathan Reynolds, have been reluctant to reintroduce national ID cards. Instead, the government plans to establish a new enforcement and return unit to tackle illegal migration and smuggling rings.

The debate over digital ID versus national ID cards has historical roots, dating back to Blair’s earlier proposals in the 2000s. The issue resurfaced recently amidst concerns over illegal migration and the small boat crisis in the English Channel, prompting renewed discussions about the role of ID documents in modern immigration policies.

Why does this matter?

Advocates like the Open Identity Exchange stress that if implemented adequately through frameworks like the Digital Verification Service, digital ID systems could drive economic growth and improve service delivery in sectors beyond immigration, such as healthcare and education. Despite challenges, proponents argue that a secure, decentralised digital ID system could substantially benefit the UK’s digital economy and public services.

Google Wallet to support US biometric passports

Google Wallet is set to introduce support for American biometric passports, a move highlighted by code leaker Assemble Debug. The feature will initially launch in the US, where users will scan their passports into Google Wallet to create a digital ID pass. The pass enables identity verification via NFC scanners or QR codes, although carrying a physical passport is still recommended. The feature, expected in an upcoming update, aligns with Google’s expansion into digital IDs, a space where Apple Wallet is also advancing with similar capabilities.

While not yet live, the integration of passport support was included in Google Wallet’s beta version, signalling imminent implementation. The development follows Google Wallet’s recent additions, including support for 29 more banks this June, focusing heavily on payment functionalities. In the US, digital driver’s licenses from states like Georgia and Arizona are already part of Google Wallet, facilitating airport use, among other services. Additionally, Google Wallet has integrated access control credentials from HID Global, broadening its utility beyond traditional payments.

Why does this matter?

In Europe, the trend towards interoperable digital wallets for national IDs within the EU indicates a broader application beyond travel. The initiative reflects a global shift towards mobile-based digital IDs for online financial services and government interactions, underscoring Google Wallet’s and similar platforms’ potential in shaping digital identity solutions in the Americas and beyond.

WhatsApp introduces Meta AI for avatar creation

WhatsApp is developing a new AI feature to create user avatars, following in the footsteps of Meta AI. According to WABetaInfo, this ‘Imagine Me’ feature will allow users to generate AI-based avatars by typing prompts in their chats. The feature was discovered in the latest WhatsApp beta for Android 2.24.14.13, available through the Google Play Beta programme.

Users can generate avatars by typing ‘Imagine me’ in the Meta AI chat or ‘@Meta AI imagine me’ in other chats. A screenshot from WABetaInfo shows how this feature might look. Once enabled, users must take setup photos, and the AI will create images based on the provided prompts. The resulting images are automatically shared in the conversation, with user privacy preserved.

The feature is optional and requires users to opt in through their settings. While currently available only in limited countries, it is still under development and cannot yet be tested by all users. WhatsApp aims to make Meta AI a more integral part of daily user interactions with this innovative avatar creation tool.

Meta responds to photo tagging issues with new AI labels

Meta has announced a significant update to its use of AI labels across its platforms, replacing the ‘Made with AI’ tag with ‘AI info’. This change comes after widespread complaints about the incorrect tagging of photos. For instance, a historical photograph captured on film four decades ago was mistakenly labelled AI-generated when uploaded with basic editing tools like Adobe’s cropping feature.

Kate McLaughlin, a spokesperson for Meta, emphasised that the company is continuously refining its AI products and collaborating closely with industry partners on AI labelling standards. The new ‘AI info’ label aims to clarify that content may have been modified with AI tools rather than solely created by AI.

The issue primarily stems from how editing tools like Adobe Photoshop write metadata into images, which platforms then interpret as a signal of AI use. Following the expansion of its AI content labelling policies, everyday photos shared on Meta’s platforms, such as Instagram and Facebook, were erroneously tagged as ‘Made with AI’.

Initially, the updated labelling will roll out on mobile apps before extending to web platforms. Clicking on the ‘AI info’ tag will display a message similar to the previous label, explaining why it was applied and acknowledging the use of AI-powered editing tools like Generative Fill. Despite advancements in metadata tagging technology like C2PA, distinguishing between AI-generated and authentic images remains a work in progress.

Detroit adopts new rules for the use of facial recognition after settlement

The Detroit Police Department has agreed to new rules limiting how it can use facial recognition technology after a legal settlement was reached with Robert Williams, who was wrongfully arrested based on the technology in 2020. Williams was detained for over 30 hours after the software matched him to surveillance video of another Black man stealing watches. With the support of the American Civil Liberties Union of Michigan, he submitted a complaint in 2020 and then sued in 2021.

So far, Detroit police are responsible for three of the seven reported instances in which the use of facial recognition has led to a wrongful arrest. Detroit’s police chief, James White, has blamed ‘human error’ rather than the software, saying his officers relied too much on the technology.

What does this change concretely?

To combat human error, Detroit police officers will now be trained in the risks of using facial recognition in policing. Another change requires that suspects identified by the technology be linked to the crime by independent evidence before their photos can be used in lineups. Along with other policy changes, the police department will have to launch an audit of facial recognition searches conducted since 2017, when it first started using the technology.

In spite of these incidents, police say facial recognition technology is too useful a tool to be abandoned entirely. According to the head of informatics with Detroit’s crime intelligence unit, Stephen Lamoreaux, the Police Department remains ‘very keen to use technology in a meaningful way for public safety.’
However, some cities, like San Francisco, have banned its use because of concerns about privacy and racial bias. Microsoft has also said it will not provide its facial recognition software to US police until a national, human-rights-based framework for the use of facial recognition is put in place.

EU advances with digital euro project focusing on privacy

The Eurosystem, comprising the European Central Bank (ECB) and the national central banks of the euro area, is advancing the digital euro project, which aims to modernise central bank money. Following an initial investigation phase launched in 2021, the ECB’s Governing Council approved a two-year preparation phase starting 18 October 2023 and concluding by 31 October 2025. This phase will finalise the digital euro rulebook, select potential platform and infrastructure providers, and conduct further testing, particularly of offline functionality.

A cornerstone of the digital euro project is its ‘privacy by design’ approach. Technological measures like pseudonymisation, hashing, and encryption will ensure that online transactions remain unlinked to specific individuals. Payment service providers will access only the transaction data necessary for compliance with EU law, with user consent required for any additional commercial uses. The digital euro is also designed for offline use, allowing payments without an internet connection, akin to cash transactions. This offline functionality will enhance privacy and usability in areas with limited network coverage or during power outages.
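The pseudonymisation mentioned above can be sketched in a few lines of Python. This is a simplified illustration with a hypothetical account identifier, not the ECB’s actual scheme: a direct identifier is replaced with a salted hash, so an intermediary can link one user’s records for compliance checks without learning who the user is.

```python
import hashlib
import secrets

def pseudonymise(identifier: str, salt: bytes) -> str:
    """Replace a direct identifier with a salted SHA-256 digest, so
    records can be linked for compliance checks without exposing the
    underlying identity to whoever holds them."""
    return hashlib.sha256(salt + identifier.encode()).hexdigest()

# Hypothetical account identifier; the salt stays with the issuer.
salt = secrets.token_bytes(16)
alias = pseudonymise("DE89370400440532013000", salt)

# The same user maps to the same alias under one salt (linkable)...
print(alias == pseudonymise("DE89370400440532013000", salt))  # True
# ...while a fresh salt yields an unlinkable alias.
print(alias == pseudonymise("DE89370400440532013000", secrets.token_bytes(16)))  # False
```

Because the salt never leaves the issuing institution, other parties see only the alias; rotating or discarding the salt breaks linkability entirely.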

Legislative and stakeholder engagement continues in parallel, with the European Parliament and Council of the European Union working on the legislative framework proposed in 2023. Stakeholder involvement ensures the digital euro meets high standards of quality, security, and usability. Fraud prevention remains a priority, with ongoing assessments indicating that current technologies can effectively detect and prevent fraud using pseudonymised information.

By the end of 2025, the ECB will decide whether to proceed further with the digital euro, contingent on completion of the legislative process.

Illinois judge dismisses lawsuit against X over social media photo scanning

A federal judge in Illinois dismissed a class action lawsuit against the social network X, ruling that the photos it collected did not constitute biometric data under the state’s Biometric Information Privacy Act (BIPA). The lawsuit alleged that X violated BIPA by using Microsoft’s PhotoDNA software to scan for offensive images without proper disclosure and consent.

The judge concluded that the plaintiff failed to prove that the PhotoDNA tool involved facial geometry scanning or could identify specific individuals. Instead, the software analysed uploaded photos to detect nudity or pornographic content, which did not qualify as a scan of facial geometry under BIPA.

The ruling mirrors a recent case involving Facebook, where allegations of illegally collecting biometric data were dismissed. Both cases clarified that a digital signature generated from a photograph, known as a ‘hash’ or face signature, did not violate BIPA’s definition of biometric identifiers.
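The distinction the courts drew can be made concrete with a short sketch. PhotoDNA itself uses a proprietary robust hash that survives resizing and re-compression; the plain cryptographic digest below is a simplification, but it illustrates the legal point: the value is derived from the file’s bytes as a whole and encodes no facial geometry, so it can flag a known image without identifying anyone pictured in it.

```python
import hashlib

def content_hash(image_bytes: bytes) -> str:
    """Digest of the raw file: matches known images, identifies no one."""
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical blocklist of digests of previously flagged images.
known_bad = {content_hash(b"<bytes of a previously flagged image>")}

upload = b"<bytes of a previously flagged image>"
print(content_hash(upload) in known_bad)   # True: exact re-upload is matched

edited = upload + b"\x00"                  # any byte change alters the digest
print(content_hash(edited) in known_bad)   # False: SHA-256 misses near-duplicates
```

This brittleness against small edits is precisely why systems like PhotoDNA use robust hashes instead of cryptographic ones, but in either case the output is a fingerprint of the image file, not a biometric identifier.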

The judge emphasised that BIPA aims to regulate specific biometric identifiers like retina scans or fingerprints, excluding photographs to avoid an overly broad scope. Applying BIPA to any face geometry scan that cannot identify individuals would contradict the law’s purpose of ensuring notice and consent.

BIPA’s private right of action has been a significant deterrent for biometrics companies, allowing users to sue for damages in cases of non-compliance.

EU faces controversy over proposed AI scanning law

The EU is facing significant controversy over a proposed law that would require AI scanning of users’ photos and videos on messaging apps to detect child sexual abuse material (CSAM). Critics, including major tech companies like WhatsApp and Signal, argue that this law threatens privacy and encryption, undermining fundamental rights. They also warn that the AI detection systems could produce numerous false positives, overwhelming law enforcement.

A recent meeting among the EU member states’ representatives failed to reach a consensus on the proposal, leading to further delays. The Belgian presidency had hoped to finalise a negotiating mandate, but disagreements among member states prevented progress. The ongoing division means that discussions on the proposal will likely continue under Hungary’s upcoming EU Council presidency.

Opponents of the proposal, including Signal President Meredith Whittaker and Proton founder Andy Yen, emphasise the dangers of mass surveillance and the need for more targeted approaches to child protection. Despite the current setback, there’s concern that efforts to push the law forward will persist, necessitating continued vigilance from privacy advocates.

Ukrainian student’s identity misused by AI on Chinese social media platforms

Olga Loiek, a 21-year-old University of Pennsylvania student from Ukraine, experienced a disturbing twist after launching her YouTube channel last November. Her image was hijacked and manipulated through AI to create digital alter egos on Chinese social media platforms. These AI-generated avatars, such as ‘Natasha,’ posed as Russian women fluent in Chinese, promoting pro-Russian sentiments and selling products like Russian candies. These fake accounts amassed hundreds of thousands of followers in China, far surpassing Loiek’s own online presence.

Loiek’s experience highlights a broader trend of AI-generated personas on Chinese social media, presenting themselves as supportive of Russia and fluent in Chinese while selling various products. Experts reveal that these avatars often use clips of real women without their knowledge, aiming to appeal to single Chinese men. Some posts include disclaimers about AI involvement, but the followers and sales figures remain significant.

Why does it matter?

These events underscore the ethical and legal concerns surrounding AI’s misuse. As generative AI systems like ChatGPT become more widespread, issues related to misinformation, fake news, and copyright violations are growing.

In response, governments are starting to regulate the industry. China proposed guidelines to standardise AI by 2026, while the EU’s new AI Act imposes strict transparency requirements. However, experts like Xin Dai from Peking University warn that regulations struggle to keep pace with rapid AI advancements, raising concerns about the unchecked proliferation of AI-generated content worldwide.

Worldcoin allowed to resume operations in Kenya after year-long probe

Worldcoin, a cryptocurrency startup co-founded by OpenAI’s Sam Altman, has been permitted to resume its iris-scanning operations in Kenya after a year-long investigation into privacy and regulatory concerns was concluded. The Kenyan Directorate of Criminal Investigations (DCI) officially closed its probe, citing no further police action as necessary. However, Worldcoin must now register its business in Kenya, secure requisite licences, and vet its vendors to maintain operations.

Worldcoin’s activities had been suspended nearly a year ago due to compliance issues with Kenyan security, financial services, and data protection laws. A parliamentary committee recommended shutting down the company altogether, citing violations of the Computer Misuse and Cybercrimes Act, and labelling its activities as potential espionage. It was also found that Worldcoin and its parent entity, Tools for Humanity, were unregistered in Kenya, and had not received approval to use the Orbs, considered telecommunications equipment.

Thomas Scott, Chief Legal Officer of Tools for Humanity, expressed gratitude for the fair investigation and said this is merely a new beginning. He highlighted the company’s commitment to working with Kenyan authorities to advance Worldcoin’s mission and create economic opportunities. While Worldcoin has resolved its immediate regulatory hurdles in Kenya, it continues to face significant scrutiny in other countries, including ongoing investigations in Germany, Spain, Portugal, and Italy.

The situation has highlighted challenges in regulating new technologies, particularly around privacy and compliance. In response, Kenya is developing a regulatory framework for virtual assets, aiming to provide clearer guidelines for crypto startups like Worldcoin. The outcome could pave the way for more structured compliance pathways amid the rapid advancements in digital finance and identity systems.