GAO encourages agencies to improve forensic algorithm standards in new report

The watchdog agency says new rules could help govern the use of algorithms in criminal investigations.
The entrance to the Government Accountability Office building on H Street NW in Washington, D.C. (Joe Warminsky / Scoop News Group)

Agencies and Congress should consider improving standards, training and transparency around forensic algorithms to help analysts better use them in criminal investigations, according to a Government Accountability Office technology assessment released Tuesday.

GAO found that such algorithms strengthen forensic analysis by improving the speed and objectivity of investigations, but that their usefulness is limited by the human error and cognitive bias analysts can introduce.

The tech assessment comes two months after GAO released a report describing how the forensic algorithms used by federal law enforcement work, and one week after the watchdog warned that more than a dozen agencies using facial recognition, one of the three primary types of forensic algorithms, couldn’t account for which systems they use, increasing public distrust of the technology.

“Policymakers could support the development and implementation of standards and policies related to law enforcement testing, procurement, and use to improve consistency and reduce the risk of misuse,” reads the assessment. “This could help address the challenges we identified related to human involvement, public confidence, and interpreting and communicating results.”

While both the National Institute of Standards and Technology and the Organization of Scientific Area Committees for Forensic Science (OSAC) are already developing standards for forensic algorithms, a new federal forensic oversight body may be in order. The alternative is assigning a greater role to NIST and other agencies, according to GAO.

The three primary types of forensic algorithms are latent print, facial recognition and probabilistic genotyping.

Both latent print and facial recognition algorithms search larger databases faster and more consistently than analysts can, but poor-quality prints reduce the accuracy of the former, and human involvement introduces errors into the latter. Agencies also struggle to test and procure the most accurate facial recognition algorithms and to find ones with minimal performance differences across demographic groups.

Meanwhile, probabilistic genotyping helps analysts evaluate a wider variety of DNA evidence, including samples that have multiple contributors or are partially degraded, and compare it with samples from persons of interest. But evaluating such algorithms’ performance is complex, and there are no standards for interpreting or communicating the results.

Developing standards for the appropriate use of algorithms would reduce improper use, provided data quality is also addressed, and would improve confidence in the technology by enforcing consistency across law enforcement agencies, as well as streamlining testing and performance evaluation in the case of facial recognition, according to GAO.

The challenge will be implementing standards across all levels of government, because agencies and localities may not want to conform. The cost of procuring and maintaining algorithms could also rise, and researching and testing standards is already resource-intensive, according to GAO.

GAO also suggested agencies and Congress consider increasing algorithm training for analysts and investigators, which would reduce human error and mitigate cognitive bias. A certification process could reduce improper use at federal and non-federal labs.

The challenge would be developing and distributing training materials and determining which agencies would be in charge of a certification process, according to GAO.

Lastly, GAO suggested increasing transparency to improve trust by providing more information on algorithm testing results, data sources, use and investigations. Better comparative results could also help other agencies select more accurate algorithms.

GAO foresees that developers may resist releasing proprietary algorithm information, and that sharing data sources could create privacy risks.

The watchdog agency made clear that nothing suggested in its assessment constitutes a formal recommendation, and no legal changes were proposed in the report, which was delivered to House Science Committee leadership and Rep. Mark Takano, D-Calif.
