Pakistan implements AI-powered criminal identification system

Pakistan implemented an AI-powered Face Trace System to increase the effectiveness of biometric criminal identification and detention. However, its implementation raises many ethical concerns.


Pakistan has implemented an AI-powered Face Trace System (FTS) to make biometric criminal identification and detention more effective. The Punjab Safe City project, a joint initiative of the Punjab Police and the Punjab Information Technology Board (PITB), is one of several such programmes in the country. Other regions also run advanced AI-enabled security control systems, with IP cameras deployed across 255 sites.

Punjab province’s commitment to modernising law enforcement led to the introduction of these new digital technologies. The FTS’s extensive database and user-friendly online platform accelerate investigations, replacing laborious manual processes with smooth identification and verification. AI-driven facial and vehicle number plate recognition streamlines identification procedures, reducing the time and resources investigations require. Sindh province has deployed surveillance systems of carefully placed cameras with cutting-edge capabilities, including facial recognition, night vision, and vehicle plate recording, allowing law enforcement officials to quickly identify and monitor suspects.

Why does it matter?

Although new security and surveillance technologies can make more efficient use of resources, the use of AI also raises ethical concerns. One of the most substantial is the potential for bias: because AI models are trained on historical data, they risk perpetuating existing biases and discriminatory practices. Another is the lack of transparency in AI-based decision-making. It can be difficult for humans to understand how a machine learning algorithm arrived at a particular decision, which makes errors and biases hard to identify and correct. Finally, there are privacy worries: facial recognition and other AI-powered surveillance technologies raise questions about the right to privacy and the potential for abuse by law enforcement agencies.

Moreover, with a general election approaching in Pakistan, the Digital Rights Foundation has urged political parties to include six key digital rights issues in their manifestos. These range from funding AI research initiatives and establishing a robust data protection regime, including enacting a Data Protection Law, to amendments to PECA and law enforcement capacity development.
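The bias concern described above is typically measured by comparing error rates across demographic groups. The following is a minimal illustrative sketch, not code from the FTS or any real deployment: the data, group labels, and threshold are entirely synthetic, and a real audit would use large evaluation sets and standard biometric metrics.

```python
def false_match_rate(records, threshold):
    """Fraction of non-matching pairs the system would wrongly accept."""
    non_matches = [r for r in records if not r["same_person"]]
    if not non_matches:
        return 0.0
    wrong = sum(1 for r in non_matches if r["score"] >= threshold)
    return wrong / len(non_matches)

def audit_by_group(records, threshold):
    """Compute the false match rate separately for each demographic group."""
    groups = {}
    for r in records:
        groups.setdefault(r["group"], []).append(r)
    return {g: false_match_rate(rs, threshold) for g, rs in groups.items()}

# Synthetic comparison scores: this toy model is systematically more
# confident on non-matching pairs from group "B", so group "B" suffers
# a higher false match rate at the same decision threshold.
records = [
    {"group": "A", "same_person": False, "score": 0.30},
    {"group": "A", "same_person": False, "score": 0.45},
    {"group": "A", "same_person": True,  "score": 0.95},
    {"group": "B", "same_person": False, "score": 0.70},
    {"group": "B", "same_person": False, "score": 0.40},
    {"group": "B", "same_person": True,  "score": 0.90},
]

print(audit_by_group(records, threshold=0.6))
# Unequal rates across groups are the signature of the bias problem.
```

If such an audit shows unequal false match rates, the group with the higher rate is more likely to be wrongly flagged as a suspect, which is exactly why transparency and independent evaluation matter for systems like the FTS.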