The FBI is using AI to mine threat tips, but isn’t sharing much detail

Documents obtained by FedScoop show that the algorithms were “up and running” in 2019.

The Federal Bureau of Investigation is using artificial intelligence to mine tips about potential threats but is revealing little about how the system actually works.

Specifically, the bureau is using a system it calls the “Complaint Lead Value Probability” to prioritize tips by algorithmically scoring and triaging them, according to two versions of an agency AI disclosure. The technology, which is meant to help sort through the tips the FBI receives, is one of several AI tools employed by the bureau, which also uses Amazon’s Rekognition software and drug signature algorithms.

Still, the FBI provides limited insight into how this system — which could theoretically determine which threats get addressed — actually works. Much of the material released in response to a recent public records request filed by FedScoop was redacted, including a section on “scores” that may speak to the efficacy of the algorithms.

Although bureau officials have publicly commented on the tool in the past, the FBI declined to answer a series of questions from FedScoop, saying it does not comment on documents obtained through public records requests. After FedScoop pointed out that the use case has been discussed publicly and in other public documents, the agency again declined to comment.

The Department of Justice did not comment on a FedScoop request about when aspects of its AI programs are considered techniques and procedures for law enforcement investigations, an exemption category under FOIA.

The documents obtained by FedScoop also show the involvement of MITRE, a nonprofit organization that develops technologies and operates R&D centers for the government, and reference mistakes the system can make. They also appear to show that the FBI was looking at the tool from 2019 to 2020 and then picked up interest in the technology again relatively recently.

The “Threat Intake Processing System (TIPS) database uses artificial intelligence (AI) algorithms to accurately identify, prioritize, and process actionable tips in a timely manner,” one undated version of the Justice Department’s AI inventory states. “The AI used in this case helps to triage immediate threats in order to help FBI field offices and law enforcement respond to the most serious threats first. Based on the algorithm score, highest priority tips are first in the queue for human review.”

At the time that disclosure was published, the AI tool, whose training data is described as “agency generated,” had been in production for more than a year. According to the documents obtained by FedScoop, MITRE, which did not respond to multiple requests for comment, appears to have been involved in reviewing and analyzing the system.

Another version of the DOJ’s AI inventory, dated 2023, describes the tool with a different level of detail: “The Threat Intake Processing System (TIPS) uses artificial intelligence (AI) to calculate scores for calls and electronic tips based on call synopses and electronic tip text. The score predicts the probability that a tip has lead value (e.g., referrals to partner agencies, drafting of a Guardian, or if it contains a Threat to Life (TTL). The scores are also used to screen social media posts directed to the FBI. Due to the significant volume of social media posts, only posts that score above a designated threshold are forwarded to the system for review.”
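Neither inventory explains how those scores are computed or what threshold is applied. As a rough illustration only, the sketch below shows what score-based triage with a screening threshold and a ranked review queue might look like; the score_tip function, its keyword weights, and the SCREENING_THRESHOLD value are invented for this example and do not describe the FBI’s actual system.

```python
"""Hypothetical sketch of score-based tip triage: score each tip, screen out
low-scoring items, and queue the rest for human review in descending score
order. All names, weights, and the threshold are illustrative assumptions."""
from dataclasses import dataclass

SCREENING_THRESHOLD = 0.35  # assumed cutoff; the real value is not public


@dataclass
class Tip:
    text: str
    score: float  # predicted probability that the tip has lead value


def score_tip(text: str) -> float:
    """Stand-in for a trained text model estimating P(tip has lead value)."""
    # Toy keyword heuristic purely for illustration; a real system would score
    # call synopses and tip text with a model trained on historical outcomes.
    weights = {"threat": 0.4, "weapon": 0.3, "immediate": 0.2}
    return min(1.0, sum(w for kw, w in weights.items() if kw in text.lower()))


def triage(raw_tips: list[str]) -> list[Tip]:
    """Score tips, drop those below the threshold, and order the remainder so
    the highest-scoring tips reach human reviewers first."""
    scored = [Tip(text, score_tip(text)) for text in raw_tips]
    kept = [tip for tip in scored if tip.score >= SCREENING_THRESHOLD]
    return sorted(kept, key=lambda tip: tip.score, reverse=True)


if __name__ == "__main__":
    for tip in triage([
        "Immediate threat reported near a school",
        "General question about office hours",
        "Caller mentioned a weapon",
    ]):
        print(f"{tip.score:.2f}  {tip.text}")
```

In any design like this, the choice of threshold trades missed leads against reviewer workload, which is the kind of accuracy risk the FBI specialist’s email, discussed below, warns about.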

The tool appears to be the AI tips-sorting use case that FBI officials, including Jacqueline Maguire, executive assistant director of the FBI’s Science and Technology Branch, have publicly discussed. 

“We’ve been able to use AI to analyze phone calls to help the individual assessing the tip quickly triage it, making us more efficient and flagging indicators — something that AI is uniquely suited for because of its ability to digest large volumes of data,” Maguire said during a Center for Strategic and International Studies event this month, according to MeriTalk. “While we continue to have a human assess every tip, AI can help our teams be better at their jobs and keep Americans safe by prioritizing our workload.”

Cynthia Kaiser, deputy assistant director of the FBI’s Cyber Division, also seems to have discussed the tool, per previous FedScoop reporting. Kaiser said the system uses natural language processing and helps “fill in the cracks” — even though humans will always review tips. 

A privacy impact assessment for the TIPS database, which was approved in 2020, does not mention any use of artificial intelligence. But a September 2019 email obtained by FedScoop indicates that MITRE’s algorithms were “up and running” on FBI servers. That communication, from a supervisory IT specialist at the FBI, shows that the government would have had to decide what threshold values to deploy at the bureau. The specialist also warned that the algorithm could have accuracy issues.

“This is a very small sample size and you have to understand that there are risks with what we are doing. These two tips were marked [redacted] and would have been closed in the new workflow,” the IT specialist wrote. “I’m sure they will improve workloads by not having to do multiple reviews on junk tips, but I don’t want you to expect the algorithm will catch everything. There will undoubtedly be cases like this once this goes live.” 

At one point, at least one person from “OGA,” which possibly refers to “other government agency,” was also involved.

Faiza Patel, senior director of the Brennan Center’s liberty and national security program, said the software — at least as described in the redacted emails — appears to raise the “classic problem” of false positives and negatives. Another concern is that the system could be prioritizing, or deprioritizing, certain tips based on bias.  

“Obviously you can see the efficiency rationale for them using a tool like this,” Patel said. “I think it’s important for the public to understand what the tool is and how it works, and in particular, to ensure that the tool is not consistently elevating tips relating to certain types of people to the front of the line.”
