Mitre launches AI assurance and testing lab for US federal agencies

On Monday, March 25, Mitre, a government-backed non-profit corporation, opened a lab to test AI systems used by federal agencies with the goal of identifying and fixing security threats and other issues.

A person is writing prompt for ChatGPT

Sen. Mark Warner, Rep. Donald Beyer, and Rep. Gerry Connolly joined Mitre, a public-interest, non-profit organisation, for the launch of its AI Assurance and Discovery Lab. Located at Mitre’s headquarters in McLean, Virginia, the mission of the new lab is to identify and mitigate critical risks in AI-powered systems that operate in complex, unpredictable, and high-stakes contexts.

The lab is hosted at Mitre’s headquarters, where they manage research on national security, aviation, health, and cybersecurity, among other areas. It can accommodate 50 individuals in person and 4,000 participants remotely.

Why does it matter?


AI systems are often presented as a black box, and experts are cautioning that they are adopted without a complete understanding of the various ways they can go wrong or be manipulated. According to Miles Thompson, a robotics expert who will oversee the lab, the AI systems will be assessed for various risks, ranging from data breaches to explainability—the ability to understand why an AI model makes a certain decision or produces a given outcome.


Mitre’s scientists and engineers are experts in computational sciences, robotics, languages, cognitive science, neuroscience, and other government domains. First, federal agencies and soon commercial firms can bring AI-powered systems into the lab to evaluate potential issues such as effectiveness, consistency, and safety in simulated real-world environments. Mitre will also use the lab to investigate AI system security, potentially harmful bias, and the level of user control over how their information is used.
The new initiative is part of Mitre’s broader efforts to address emerging, complex AI and autonomy challenges for the benefit of US national security. It is also aligned with an important feature of President Joe Biden’s October 2023 landmark executive order on AI. Specifically, the National Institute of Standards and Technology is required to build a generative AI risk management framework coupled with standards for AI ‘red teaming,’ a process to simulate attacks on the model to identify vulnerabilities and weaknesses.