DHS’s initial AI inventory included a cybersecurity use case that wasn’t AI, GAO says

A new watchdog report finds that the Department of Homeland Security wasn’t verifying whether use case submissions were actual examples of AI, raising “questions about the overall reliability” of the inventory.

The Department of Homeland Security didn’t verify whether the artificial intelligence use cases for cybersecurity listed in its AI inventory were actual examples of the technology, according to a new Government Accountability Office report that calls into question the veracity of the agency’s full catalog.

DHS’s AI inventory, launched in 2022 to meet requirements set out in the Trump administration’s 2020 executive order on AI in the federal government, included 21 use cases across agency components, two of them focused specifically on cybersecurity.

DHS officials told GAO that one of the two cyber use cases — Automated Scoring and Feedback, a predictive model intended to share cyber threat information — “was incorrectly characterized as AI.” The inclusion of AS&F “raises questions about the overall reliability of DHS’s AI Use Case Inventory,” the GAO stated.

“Although DHS has a process to review use cases before they are added to the AI inventory, the agency acknowledges that it does not confirm whether uses are correctly characterized as AI,” the report noted. “Until it expands its process to include such determinations, DHS will be unable to ensure accurate use case reporting.”

The GAO faulted DHS for its failure to fully implement the watchdog’s 2021 AI Accountability Framework, noting that the agency only “incorporated selected practices” to “manage and oversee its use of AI for cybersecurity.”

The framework lays out 11 key practices, covering everything from governance and data to performance and monitoring, that the GAO used to assess DHS’s management, operation and oversight of AI for cybersecurity. The agency’s Chief Technology Officer Directorate reviewed all 21 use cases in the initial inventory, but did not take additional steps to determine whether each one “was characteristic of AI,” the report said.

“CTOD officials said they did not independently verify systems because they rely on components and existing IT governance and oversight efforts to ensure accuracy,” the GAO said. “According to experts who participated in the Comptroller General’s Forum on Artificial Intelligence, existing frameworks and standards may not provide sufficient detail on assessing social and ethical issues which may arise from the use of AI systems.”

The GAO made eight recommendations to DHS, including expanding the agency’s AI review process, adding steps to ensure the accuracy of inventory submissions, and fully implementing the watchdog’s AI framework practices. DHS agreed with all eight, the report noted.

“Ensuring responsible and accountable use of AI will be critical as DHS builds its capabilities to use AI for its operations,” the GAO stated. “By fully implementing accountability practices, DHS can promote public trust and confidence that AI can be a highly effective tool for helping attain strategic outcomes.”

The DHS report follows earlier GAO findings of “incomplete and inaccurate data” in agencies’ AI use case inventories. A December 2023 report from the watchdog characterized most inventories as “not fully comprehensive and accurate,” a conclusion that matched previous FedScoop reporting.
