French police allegedly used illegal facial recognition software

Disclose uncovered the French police's clandestine use of Briefcam, sparking debate over privacy breaches. The looming 2024 Paris Olympics offer some legal flexibility for experimental surveillance, but the alleged breaches of France's Informatique et Libertés (data protection) law and the EU's GDPR raise serious concerns.


The French national police have allegedly used Briefcam, Israeli-made facial recognition software, since 2015, contravening French regulations that prohibit such technology. Disclose, an investigative media outlet, exposed the practice through internal documents revealing the police's covert deployment of the software, which enables facial recognition.

Despite the prohibition, some leniency appears to have been granted ahead of the 2024 Paris Olympic Games, allowing experimental use. Such use potentially violates France's Informatique et Libertés (data protection) law and the EU General Data Protection Regulation, both of which explicitly prohibit this kind of biometric data processing and facial recognition. Authorities have remained silent on the claims, although leaked correspondence suggests the Ministry of the Interior was aware of the software's use.

French MP Philippe Latombe outlined the range of legal implications of the police's possible use of Briefcam, from legitimate use under judicial supervision to serious breaches of mass-surveillance statutes. Based on Latombe's current information, the police appear to have employed Briefcam for targeted searches under judicial oversight, potentially involving facial recognition but not broad scanning. These specifics have yet to be verified.

Why does this matter?

Facial recognition technology has sparked heated debate about law enforcement's adherence to regulations and oversight. Concerns are growing over the lack of adequate supervision and accountability, notably the absence of an independent advisory body to monitor AI use in the public domain. Experts continue to stress the urgency of acting to prevent the unauthorised use of complex automated systems in critical decisions, underlining the need for transparency and regulation.