Digital borders can pose human rights issues for migrants

A new study shows that using biometrics at border locations is likely to have adverse effects on the lives of migrants.


Emerging technologies such as biometrics at borders are likely to adversely affect migrants, according to a study by the UNHCR and the University of Essex.

The resulting publication, ‘Digital border governance: a human rights-based approach’, argues that despite the gains these technologies offer, if left unchecked, they could lead to human rights violations, exploitation or other forms of abuse. The report notes that such infringements could also affect migrants’ rights to freedom of expression, association and religion, their right to education, and even their rights to housing and health.

The report’s practical recommendations include regularly conducting algorithmic risk assessments and applying privacy and security controls when collecting biographic and biometric data; adopting and enforcing proper regulatory frameworks at the national, regional and international levels for the use of new and emerging technologies; establishing more robust complaints-handling bodies and processes; and reviewing technologies used at borders, such as remote biometrics and polygraphs, for their compatibility with human rights.

Why does it matter?

The study comes at a time when many governments and international organisations are replacing paper-based forms and border management processes with biometric technologies such as fingerprint, iris and facial recognition, with some reportedly testing newer systems such as lie detectors, robot dogs, and GPS tagging.