EU Parliament approves controversial Asylum and Migration Pact amidst criticism

The European Parliament has approved the Asylum and Migration Pact, a controversial measure that includes reforms to the EURODAC biometric database and expanded biometric data collection from minors. Three and a half years in the making, the pact aims to bolster border security and streamline asylum processes.

However, critics fear it may usher in repressive policies and expand biometric surveillance, particularly regarding minors, as it provides for the collection of biometric data from children as young as seven. Despite these concerns, proponents argue it aids family reunification efforts and combats document fraud.

The pact’s complexity has sparked debate over its effectiveness and ethics. While some view it as progress, others see it as a missed opportunity for a more compassionate system. Biometrics and facial recognition technology are central to the discourse, with critics warning that such tools could grant authorities excessive control over migrants’ movements.

Why does it matter? 

The vote comes after years of intense debate between conservative and liberal lawmakers and between northern and southern EU member states, with mutual accusations over loyalty to Europe further complicating the process. As political tensions escalate amid ongoing migrant detentions and deaths, exacerbated by global conflicts driving displacement, discussions on technological deployments at EU borders in light of the pact’s implementation will persist.

UK invests £55.5 million in facial recognition to combat retail crime

UK Prime Minister Rishi Sunak has announced a substantial investment of £55.5 million over four years in facial recognition technology, aimed at combating retail crime by identifying repeat shoplifters.

The initiative, part of a broader crackdown on theft, includes deploying bespoke mobile units equipped with live facial recognition capabilities across high streets nationwide. While controversial, its deployment has resulted in numerous arrests, primarily for offences ranging from theft to assault. However, concerns persist regarding privacy and false positives.

Despite criticism from privacy advocates like Big Brother Watch, Home Secretary James Cleverly emphasises the technology’s preventative nature, while the Metropolitan Police views it as a transformative tool in law enforcement. The Office of the Scottish Biometrics Commissioner noted that careful deployment is needed to maintain public confidence.

Why does it matter?

The development comes months after Scotland’s biometrics commissioner, Brian Plastow, raised concerns about a trajectory towards autocracy driven by inappropriate use of biometric surveillance in the UK. While supporting specific applications of biometric surveillance, such as live facial recognition, he critiques government overreach and highlights risks such as database misuse and privacy erosion. Plastow’s concerns are exemplified by incidents like the arrest of a woman, eight months pregnant, for failing to report for community service. While Scotland may resist England’s path towards a surveillance state, the stance of Wales remains uncertain.

US law proposes integration of AI into border control

A bipartisan proposal in the US aims to bolster border control by integrating cutting-edge technologies such as AI, machine learning, biometrics, and nanotechnology. Spearheaded by the Department of Homeland Security (DHS), the legislation mandates developing a comprehensive plan within 180 days to incorporate these technologies into border security operations. The move follows the release of an AI Roadmap for DHS and an executive order emphasising trustworthy AI for American benefit.

Representative Lou Correa highlighted the importance of investing in security-enhancing technologies to aid Customs and Border Protection (CBP) officers in swiftly responding to threats like human trafficking and hazardous migrant crossings. The proposed plan includes metrics, performance indicators, and privacy/security assessments to ensure effective implementation.

As cartels and foreign adversaries grow more sophisticated amid the ongoing border crisis, proponents argue that deploying advanced technologies is increasingly necessary. The legislation seeks to leverage commercially available technologies while empowering CBP Innovation Teams to adapt and integrate them into border security operations efficiently.

The bill also mandates that the CBP clarify operational procedures and roles regarding new technologies. Research areas outlined in the legislation encompass mobile surveillance vehicles, lighter-than-air ground surveillance equipment, tunnel detection, and other pertinent areas determined by the Secretary of Homeland Security. Through bipartisan efforts, the proposal aims to equip officers with the tools necessary to safeguard the border effectively.

California enacts Senate bill to safeguard elections against disinformation and deepfakes

California has passed Senate Bill 1228, requiring large online platforms to implement digital identity verification and labelling for influential users and those sharing significant amounts of AI-generated content. The law mandates semiannual reporting to the Attorney General regarding user authentication methods and public disclosure of authenticated accounts.

The bill’s sponsor, Senator Steve Padilla, highlights the need to combat foreign interference and disinformation campaigns targeting US elections. By verifying the identities of accounts with substantial followings, the law seeks to mitigate the spread of false information and malicious content. Additionally, the legislative package includes measures like Assembly Bill (AB) 2839, which restricts deepfakes in campaign ads, and AB 2655, which addresses the labelling and regulation of generative AI deepfakes.

The laws were developed in collaboration with the California Initiative for Technology and Democracy (CITED) to address concerns about online misinformation and its potential impact on democratic processes. A survey reveals strong public support for measures promoting user authentication and legal accountability for online posts, reflecting growing concerns about spreading false information.

However, critics raise constitutional concerns and question the effectiveness of SB 1228’s criteria for identifying influential accounts. Experts point to potential flaws in the law, such as defining influential users by view counts and the volume of AI-generated content, criteria that may sweep in both genuine influencers and spam accounts. Despite these challenges, California’s legislative efforts signal a proactive approach to combating online misinformation and protecting electoral integrity.

Leaked documents reveal Kremlin’s facial recognition surveillance plans

Documents obtained by Estonian news agency Delfi Estonia have unveiled the Kremlin’s ambitious plans to bolster its surveillance capabilities through facial recognition technology. The documents shed light on the Russian government’s 12-billion-rouble initiative to establish a nationwide surveillance network by 2030.

Spearheaded by the Russian Presidential Affairs Department’s Scientific Research Computing Center (GlavNIVTs), the project aims to integrate biometrics firms like NtechLab into Russia’s surveillance infrastructure. Despite facing EU sanctions, NtechLab and others are positioned to play a crucial role in supplying software and licenses for this initiative.

The surveillance system, which includes projects like the Video Stream Processing Service and Center, is designed to swiftly identify perceived threats and dissenting behaviour through AI-powered analysis of video feeds. However, experts caution about potential budget constraints, casting doubt on the sustainability of the Ministry of Digital Development’s centralised surveillance effort.

Why does it matter?

The revelation comes at a critical juncture, as the right to protest and express political opinions in Russia appears to have eroded amid the conflict with Ukraine. According to a new report by Reuters, authorities are using facial recognition to identify, for preemptive detention, individuals not accused of any crime. Human rights groups have documented a significant increase in such detentions, with hundreds of cases informed by the integration of facial recognition with Moscow’s extensive surveillance camera network.

US Department of Justice reveals facial recognition policy details

Despite not making the full policy public, the US Department of Justice (DOJ) has revealed insights into its interim policy concerning facial recognition technology (FRT). The testimony submitted to the US Commission on Civil Rights highlights key aspects of the policy announced in December, emphasising its adherence to protecting First Amendment activities. The policy aims to prevent unlawful use of FRT, establish guidelines for compliant use, and address various aspects, including privacy protection, civil rights, and accuracy.

Ethical considerations are integral to the interim policy, with measures in place to prevent discriminatory use of facial recognition and ensure accountability for its deployment. However, complexities arise due to evolving AI regulations and the proliferation of biometric algorithms, leading to stipulations that FRT systems must comply with DOJ policies on AI and that FRT results alone cannot serve as sole proof of identity.

The testimony acknowledged civil rights concerns, recognising the potential for bias in algorithms and the misuse of FRT, including unlawful surveillance. Nonetheless, the DOJ emphasises the benefits of FRT in enhancing public safety, citing its role in identifying missing persons, combating human trafficking, and aiding in criminal investigations. According to the DOJ, the key lies in harnessing FRT’s potential while implementing effective safeguards to mitigate potential harm.

Why does it matter?

In a related development, the US government has recently published new guidelines that require all federal agencies to appoint senior leaders as chief AI officers to oversee the use of AI systems. According to the guidelines, agencies must establish AI governance boards to coordinate usage and submit annual reports detailing AI systems, associated risks, and mitigation strategies. As a result, the US Department of Justice appointed Jonathan Mayer, an assistant professor specialising in national security, consumer privacy, and criminal procedure at Princeton University, as its first chief AI officer.

Israel deploys facial recognition program in Gaza

Israel has deployed a sophisticated facial recognition program in the Gaza Strip, according to reports. The program, initiated after the 7 October attacks, employs technology from Google Photos and a proprietary tool from Corsight AI, an Israeli facial recognition firm, to identify individuals linked to Hamas without their consent.

The facial recognition system, crafted in parallel with Israel’s military operations in Gaza, operates by collecting data from diverse sources, including social media platforms, surveillance footage, and inputs from Palestinian detainees. Israeli Unit 8200, the primary intelligence unit, played a pivotal role in identifying potential targets through these means.

Corsight’s technology, which the company claims can accurately identify individuals even when less than 50% of the face is visible, was used to build the facial recognition tool. By establishing checkpoints equipped with facial recognition cameras along critical routes used by Palestinians fleeing southwards, the Israeli military aims to expand the database and pinpoint potential targets, compiling a ‘hit list’ of individuals associated with the 7 October attack.

Although soldiers acknowledge the limitations of Corsight’s technology, particularly with grainy images or obscured faces, concerns persist over misidentifications. One such incident involved the mistaken apprehension of Palestinian poet Mosab Abu Toha, who faced interrogation and detention after being flagged by the system.

South Korea launches investigation into Worldcoin’s personal data collection

South Korea’s Personal Information Protection Commission (PIPC) has launched an investigation into cryptocurrency project Worldcoin following numerous complaints about its collection of personal information. Of particular concern is the project’s use of iris scanning in exchange for cryptocurrency. The PIPC announced on Monday that it will examine the company’s collection, processing, and potential overseas transfer of sensitive personal information, and will take action if any violations of local privacy rules are found.

It is worth noting that OpenAI, which co-founded Worldcoin, was fined last year by the privacy watchdog for leaking personal information of South Korean citizens through its ChatGPT application. This connection with OpenAI adds weight to the concerns surrounding the handling of personal data by Worldcoin.

Worldcoin is an identity-focused cryptocurrency project in which participants receive WLD tokens in return for signing up. The project’s unconventional sign-up process has also raised concerns in other jurisdictions. As of now, the company has not responded to the investigation or the accusations.

Fake documents and synthetic identities led to higher fraud in financial services in 2022

A recent report on the impact of investing in identity verification (IDV), released by Regula, a global developer of forensic devices and IDV solutions, reveals disparities in identity fraud between banking and fintech organisations worldwide. The research, conducted between December 2022 and January 2023, found that about 26% of banks, roughly one in four, reported more than 100 identity fraud incidents in 2022, compared with 17% of fintech organisations. The most common form of identity fraud was the use of fake or modified physical documents.

The report further details the economic and collateral effects of identity fraud. The median cost of identity fraud was nearly half a million dollars for banks (about $479,000) and $120,000 for fintech organisations. The main collateral effects were business disruption (44%) and legal expenses (36%).

Preparations for Digital ID awareness campaign start in Jamaica

The government of Jamaica has announced that a sensitisation campaign on implementing the National Identification System (NIDS) will begin as the procurement process, which failed in 2022, is being finalised. This is part of the country’s efforts to roll out its new-generation biometric passport from 31 March 2023.

During the digital ID sensitisation campaign, free birth certificates will be issued, enabling recipients to apply for national IDs.