DHS needs to beef up government’s AI risk assessments, watchdog says
The Department of Homeland Security should improve guidance for risk assessments focused on artificial intelligence and the threats the technology could pose to critical infrastructure sectors, the Government Accountability Office said in a report released Wednesday.
The watchdog report, which recommends that DHS “quickly” update its “guidance and template for AI risk assessments” to address various gaps, comes amid ongoing concern that AI could be used to undermine critical infrastructure sectors, which range from dams and IT systems to emergency services.
The GAO focused its report on how various sector risk management agencies (SRMAs) have addressed six activities tied to AI assessments: documented assessment methodology, identified AI use cases, identified potential risks, evaluated level of risk, identified mitigation strategies, and mapped mitigation strategies to risks.
While the SRMAs — agencies charged with protecting U.S. critical infrastructure sectors — completed requirements to identify AI use cases, they didn’t fully consider the risks involved, according to the GAO. Sixteen of the 17 risk assessments analyzed by the watchdog considered potential risks, but none included “a measurement of both the magnitude of harm (level of impact) and the probability of an event occurring (likelihood of occurrence).”
And while most agencies identified risk mitigation strategies, they largely failed to map those strategies to the potential risks: seven agencies partially addressed that mapping, while 10 didn’t do so at all.
DHS concurred with the GAO’s recommendation that it “expeditiously” update its guidance “to address the gaps identified in this report, including activities such as identifying potential risks and evaluating the level of risk.”