
MITRE launches lab to test federal government AI risks

The new AI Assurance and Discovery Lab in McLean, Virginia, is aimed at helping federal agencies test and evaluate systems that use AI.
Sen. Mark Warner and Reps. Don Beyer and Gerry Connolly join MITRE to open a new facility for discovering and managing risks in AI-enabled systems. (MITRE photo)

Public interest nonprofit corporation MITRE on Monday opened a new facility dedicated to testing government uses of artificial intelligence for potential risks.

MITRE’s new AI Assurance and Discovery Lab is designed to assess the risks of AI-enabled systems through simulated environments, red-teaming, and “human-in-the-loop experimentation,” among other methods. The lab will also test systems for bias, and users will be able to control how their information is used, according to the announcement.

In remarks at the Monday launch, Keoki Jackson, senior vice president of MITRE National Security Sector, pointed to the corporation’s own polling, which found that fewer than half of American respondents believed AI would have the trust needed for its applications.

“We have some work to do as a nation, and that’s where this new AI lab comes in,” Jackson said.


Mitigating the risks of AI in government has been a topic of interest for lawmakers and was a key component of President Joe Biden’s October executive order on the technology. The order, for example, directed the National Institute of Standards and Technology to develop a companion to its AI Risk Management Framework for generative AI and create standards for AI red-teaming. MITRE’s new lab bills itself as a testbed for that type of risk assessment.

“The vision for this lab really is to be a place where we can pilot … and develop these concepts of AI assurance — where we have the tools and capabilities that can be adopted and applied to the specialized needs of different sectors,” Charles Clancy, MITRE senior vice president and chief technology officer, said at the event.

Clancy also noted that both the “assurance” and “discovery” aspects of the new lab are important. Focusing too much on assurance and getting “tangled up in security” could prevent weighing those risks “against the opportunity,” he said.

Members of the Virginia congressional delegation were also on hand to express their support at the event, which was held at MITRE’s McLean, Virginia, headquarters, where the new lab is located. The three lawmakers, all Democrats, were Reps. Gerry Connolly and Don Beyer and Sen. Mark Warner.

Warner, in remarks at the event, said he worries that the race among companies like Anthropic, OpenAI, Microsoft, and Google to build the best large language model might be so intense that those entities aren’t building in assurance.


“Getting it right is as critical as any mission I can imagine, and I think, unfortunately, that we’re going to have to make sure that we come up with the standards,” Warner said. He added that policymakers are still trying to figure out whether the federal government should house AI expertise in one location, such as NIST or the Office of Science and Technology Policy, or spread it out across the government.

For MITRE, working on AI projects isn’t new. The corporation has been doing work in that space for roughly 10 years, Miles Thompson, MITRE’s AI assurance solutions lead, told FedScoop in an interview at the event. “Today really codifies that we’re going to provide this as a service now,” Thompson said of the new lab.

As part of its approach to evaluation, MITRE created its own process for AI risk assessment, called the AI Assurance Process, which is consistent with existing standards for things like machinery and medical devices. Thompson described the process as “a stake in the ground for what we think is the best practice today,” noting that it could change with the evolving landscape.

Thompson also said the level of assurance required under that process changes depending on the system and how it’s being used. The consequences of failure for something like Netflix’s recommendation system are low, whereas those for AI in self-driving cars or air traffic control are dire, he said.

An example of how MITRE has applied that process to work with an agency is its recent work with the Federal Aviation Administration, Thompson said. 


The FAA and its industry partners came to MITRE to talk through potential tweaks to an agency standard for software in airborne systems (DO-178C) that doesn’t currently address AI or machine learning, he said. Those conversations addressed how that standard might change to be able to say “this use of AI is still safe.”
