NIST releases new digital identity and AI guidelines for contractors

The US National Institute of Standards and Technology has proposed new cybersecurity and AI guidance for government contractors, aimed at making digital identity systems safer, more secure, and more trustworthy.


The US National Institute of Standards and Technology (NIST) has released a new draft of its Digital Identity Guidelines, introducing updates for government contractors in cybersecurity, identity verification, and AI use. The guidelines propose expanded identity proofing methods, including remote and onsite verification options. These enhancements aim to improve the reliability of the identity systems government contractors use to access federally controlled facilities and information. By defining different assurance levels for identity verification, NIST lets contractors implement measures that are secure and appropriate to the context and location of the verification process.

A significant focus of the guidelines is on continuous evaluation and monitoring. Organisations are now required to implement ongoing programs that track the performance of identity management systems and evaluate their effectiveness against emerging threats. The guidelines also emphasise the importance of proactive fraud detection. Contractors and credential service providers (CSPs) must continuously assess and update their fraud detection methods to align with the evolving threat landscape.
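To illustrate what such an ongoing evaluation programme might track, here is a minimal Python sketch; the metric names and thresholds are hypothetical examples, not figures taken from the draft guidelines.

```python
from dataclasses import dataclass

@dataclass
class IdentitySystemMetrics:
    """Point-in-time performance figures a monitoring programme might track."""
    proofing_pass_rate: float      # fraction of legitimate applicants successfully verified
    fraud_catch_rate: float        # fraction of known-fraudulent attempts detected
    false_rejection_rate: float    # fraction of legitimate applicants wrongly rejected

# Hypothetical thresholds; real targets would come from the CSP's own risk assessment.
THRESHOLDS = {
    "proofing_pass_rate": 0.95,
    "fraud_catch_rate": 0.90,
    "false_rejection_rate": 0.05,
}

def evaluate(metrics: IdentitySystemMetrics) -> list[str]:
    """Return findings wherever current performance breaches a threshold."""
    findings = []
    if metrics.proofing_pass_rate < THRESHOLDS["proofing_pass_rate"]:
        findings.append("Proofing pass rate below target; review verification workflow.")
    if metrics.fraud_catch_rate < THRESHOLDS["fraud_catch_rate"]:
        findings.append("Fraud catch rate below target; update detection methods.")
    if metrics.false_rejection_rate > THRESHOLDS["false_rejection_rate"]:
        findings.append("False rejections above target; check for usability or equity issues.")
    return findings

if __name__ == "__main__":
    print(evaluate(IdentitySystemMetrics(0.93, 0.92, 0.04)))
```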

One of the notable updates in the guidelines is the introduction of syncable authenticators and digital wallets, which let contractors store their digital credentials securely and manage them more efficiently. These wallets also give contractors flexibility in how they present their identity attributes when accessing different federal systems.
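As a rough illustration of presenting identity attributes from a wallet, the sketch below models selective attribute release in Python. The issuer name and attribute fields are invented for the example; the draft defines wallets at the policy and protocol level, not as code.

```python
from dataclasses import dataclass, field

@dataclass
class Credential:
    issuer: str
    attributes: dict[str, str]

@dataclass
class DigitalWallet:
    """Holds a contractor's credentials and releases only the attributes a relying system asks for."""
    credentials: list[Credential] = field(default_factory=list)

    def present(self, issuer: str, requested: list[str]) -> dict[str, str]:
        """Return just the requested attributes from the matching credential."""
        for cred in self.credentials:
            if cred.issuer == issuer:
                return {k: v for k, v in cred.attributes.items() if k in requested}
        raise LookupError(f"No credential from {issuer}")

wallet = DigitalWallet([Credential(
    issuer="ExampleAgency",  # hypothetical issuer
    attributes={"name": "A. Contractor", "clearance": "public-trust", "employee_id": "12345"},
)])

# A federal system requests only the attributes it needs for access.
print(wallet.present("ExampleAgency", ["name", "clearance"]))
```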

The guidelines also introduce a risk-based approach to authentication, where authentication levels are tailored to the sensitivity of the system or information being accessed. That gives government agencies the flexibility to assign different authentication methods depending on the security needs of the transaction. For example, accessing highly sensitive systems would require stronger multi-factor authentication (MFA) measures, including biometrics, while less critical systems may have less stringent requirements.
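A minimal sketch of how such a risk-based policy could be expressed, assuming a simple three-tier sensitivity model and invented authenticator labels (the actual assurance levels and acceptable authenticators are defined in the guidelines themselves):

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

# Hypothetical mapping from system sensitivity to acceptable authentication methods;
# an agency's real policy would follow its own risk assessment under the guidelines.
ACCEPTABLE_AUTHENTICATORS = {
    Sensitivity.LOW: {"password"},
    Sensitivity.MODERATE: {"password+otp", "security_key"},
    Sensitivity.HIGH: {"security_key+biometric"},
}

def is_authentication_sufficient(sensitivity: Sensitivity, method: str) -> bool:
    """True if the presented method meets the requirement for this sensitivity level."""
    return method in ACCEPTABLE_AUTHENTICATORS[sensitivity]

print(is_authentication_sufficient(Sensitivity.HIGH, "password+otp"))            # False
print(is_authentication_sufficient(Sensitivity.HIGH, "security_key+biometric"))  # True
```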

Why does this matter?

The use of AI and ML in identity systems is another key aspect of the draft guidelines. NIST emphasises transparency and accountability in integrating AI and ML into these systems. Organisations must document how AI is used, disclose the datasets used to train models, and ensure that AI systems are evaluated for risks such as bias and inequitable outcomes. The guidelines address the concern that AI technologies could exacerbate existing inequities or produce biased results in identity verification processes. Organisations are encouraged to adopt NIST's AI Risk Management Framework to mitigate these risks and to consult its guidance on managing bias in AI.
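One simple way to check for the inequitable outcomes the guidelines warn about is to compare verification rates across demographic groups. The sketch below is an illustrative Python example with made-up data, not a method prescribed by NIST; real evaluations would use an organisation's own test data and fairness criteria.

```python
from collections import defaultdict

# Hypothetical verification outcomes: (demographic_group, verified) pairs.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def verification_rates(records):
    """Per-group rate at which applicants are successfully verified."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, verified in records:
        totals[group] += 1
        passes[group] += int(verified)
    return {g: passes[g] / totals[g] for g in totals}

rates = verification_rates(outcomes)
disparity = max(rates.values()) - min(rates.values())
# A large gap between groups would prompt a review for inequitable outcomes.
print(rates, f"disparity={disparity:.2f}")
```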

Lastly, the guidelines highlight the importance of privacy, equity, and usability in digital identity systems. Ensuring broad participation and access to digital services, especially for individuals with disabilities, is a core requirement. NIST stresses that digital identity systems must be designed to be inclusive and accessible to all contractors, addressing any potential usability challenges while maintaining security.