NYC’s anti-bias law holds algorithms accountable in hiring decisions
Employers that use algorithms for recruitment must have those tools independently audited and publicly disclose the results, a move aimed at addressing bias in AI decision-making. The law covers job applicants and workers in NYC and carries penalties for non-compliance.
New York City has begun enforcing a new law, Local Law 144, which mandates that employers using algorithms for hiring, recruitment, or advancement have those tools independently audited and publicly disclose the findings. The law is widely regarded as a crucial measure against the risk that AI recruitment algorithms reinforce existing biases, a concern underscored by past instances of bias in hiring algorithms such as Amazon's discriminatory recruiting engine.
The law requires companies to disclose details of the algorithms they use, including the average scores that candidates of different races, ethnicities, and genders are likely to receive. It also requires disclosure of 'impact ratios,' which compare scores across demographic categories. Failure to comply can result in penalties ranging from $375 to $1,500 per violation. Notably, the law applies to all job applicants and workers in NYC, regardless of where the employer is based.
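For illustration, a minimal Python sketch of how such an impact ratio might be computed is shown below, assuming each category's rate is compared against the most favoured category. The data and group names are hypothetical and are not drawn from any published audit.

```python
# Sketch of an impact-ratio calculation of the kind bias audits report.
# Hypothetical data; group names and rates are illustrative only.

def impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Divide each category's selection rate by the highest observed rate."""
    best = max(selection_rates.values())
    return {group: rate / best for group, rate in selection_rates.items()}

# Hypothetical selection rates: share of candidates in each category
# that an automated tool advanced to the next hiring stage.
rates = {"Group A": 0.42, "Group B": 0.35, "Group C": 0.28}

for group, ratio in impact_ratios(rates).items():
    print(f"{group}: impact ratio {ratio:.2f}")
```

A ratio well below 1.0 for a given group would flag that the tool favours other groups, which is the kind of disparity the published audit results are meant to surface.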
Although it is too early to assess the long-term impact of Local Law 144, its success or failure could influence the introduction of similar legislation in other cities and states: Washington, D.C., California, and New Jersey are among the jurisdictions contemplating regulations to minimize discrimination in AI-driven hiring. Critics, however, argue that the law does not go far enough to safeguard candidates and workers. Efforts are also under way within the industry to self-regulate, as demonstrated by initiatives like the Data & Trust Alliance.