IARPA to develop novel AI that automatically generates tips to improve intel reports

The Intelligence Advanced Research Projects Activity (IARPA) has a new REASON program.

The intelligence community’s key research hub is launching a new program to build tools that exploit recent advances in artificial intelligence to ultimately help intel analysts write their reports with stronger evidence and better reasoning.

To answer complex and constantly evolving intelligence questions, military and government analysts frequently have to comb through large amounts of uncertain and sometimes conflicting information. Through its new Rapid Explanation, Analysis and Sourcing Online (REASON) program, the Intelligence Advanced Research Projects Activity (IARPA) will tap teams to develop AI-driven software that can automatically generate recommendations or comments on any draft analytic report that an analyst is working on, with the simple push of a button.

“The suggestions that are automatically produced will do two things: They’ll suggest additional evidence bearing on the topic, and secondly, the suggestions and the comments will identify, automatically, strengths as well as weaknesses in the reasoning of the draft report,” IARPA Program Manager Dr. Steven Rieber told DefenseScoop in an interview on Thursday.

Rieber provided an early look at what this new project will involve — and how the AI it enables might benefit the U.S. military and intel analysts in the not-so-distant future.


The goal

“We’re developing new tools and technology for the intelligence community, so we’re working on problems that have not yet been solved,” Rieber said.

Rieber has never been an intelligence analyst, and he “doesn’t ever pretend to have been” one — but he has worked closely with many over the course of his career. 

“I got my [doctorate degree] in analytic philosophy, and I worked as a professor at a university for a number of years. After 9/11 I decided to change careers and come to the intelligence community to help it improve its analytic methods — the methods that intelligence analysts use,” he told DefenseScoop.

When he first joined the Office of the Director of National Intelligence, Rieber worked in its integrity and standards unit. While there, he introduced sophisticated analytic methods to intel analysts, crafting thousands of training courses for government experts.


“When it comes to developing training courses and training analysts, and facilitating structured analytic techniques, one thing I noticed — and that the analysts pointed out to me — is that the techniques and the training tend to require a lot of time and effort on the part of the analyst. But intelligence analysts, like most professionals, are busy people doing important tasks, and often don’t have sufficient time to take away from their work to use a structured method. So, my goal in coming to IARPA was to work with scientists to develop new methods for intelligence analysts that require much less time and effort on the part of any analyst,” Rieber explained.

The new REASON program he’s leading aims to accomplish exactly that.

The effort’s Proposers’ Day will be held Jan. 11, and the program will unfold in two phases — the first lasting 24 months and the second lasting 18 months.

Several scientific research and interdisciplinary teams — or “performer teams” in IARPA jargon — will likely be tapped to collaborate with the agency in this work. But they won’t be competing against one another.

“What IARPA looks for when evaluating proposals for a program like REASON is a diversity of approaches. So, we’re happy to fund — and we often do — several different ways of solving the technological problem that we put out in the [broad agency announcement],” Rieber said.


The agency’s technical description calls for proposals to involve a “mix of skills and staffing,” highlighting expertise in more than a dozen topics, including: applied epistemology; argumentation; cognitive psychology; experimental design; informal logic; judgment and decision making; linguistics; natural language processing; philosophy of language; psychometrics; rationality; software engineering; systems engineering; and systems integration.

The agency makes it a point never to prescribe in detail the exact technology performer teams should generate in efforts like this one, and its approach broadly seeks to promote creativity. Still, Rieber provided DefenseScoop with a hypothetical about the sort of innovation that IARPA envisions inspiring through REASON.

Imagine “you’re an analyst working on a problem and you’ve written a draft report. You think you’ve covered the things that you need to. So what you do with REASON is you press a button and request that REASON produce automatic instantaneous comments on your draft,” he said. “And among the comments, you find that REASON has pointed out that there’s a piece of evidence from a report that you hadn’t noticed is relevant to the topic you’re working on. And that piece of evidence, let’s say, is contrary evidence. It’s some evidence against the claim that you’re making in the report.” 

He continued: “You see that piece of evidence, and as a result you reduce the level of confidence that you’ve assigned to your judgment, because there’s some contrary evidence that REASON pointed out that you weren’t aware of.”
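The workflow Rieber describes can be pictured as a simple comment generator. The sketch below is purely illustrative — the article notes such technology does not yet exist, and every name here (`Evidence`, `comment_on_draft`, the pre-labeled "supports"/"contradicts" stances) is an assumption for demonstration, not IARPA's design. In practice, determining a document's stance toward a draft judgment is itself the hard, unsolved research problem.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """A hypothetical evidence snippet with a pre-labeled stance toward the draft's claim."""
    source: str
    text: str
    stance: str  # assumed already labeled: "supports" or "contradicts"

def comment_on_draft(draft_claim: str, corpus: list[Evidence]) -> list[str]:
    """Return analyst-facing comments, surfacing contrary evidence first."""
    contrary = [ev for ev in corpus if ev.stance == "contradicts"]
    supporting = [ev for ev in corpus if ev.stance == "supports"]
    comments = [
        f"Contrary evidence ({ev.source}): {ev.text} "
        "Consider whether your confidence level should be reduced."
        for ev in contrary
    ]
    comments += [
        f"Supporting evidence ({ev.source}): {ev.text}" for ev in supporting
    ]
    return comments

corpus = [
    Evidence("Report A", "Observed activity consistent with the claim.", "supports"),
    Evidence("Report B", "Field reporting that cuts against the claim.", "contradicts"),
]
for comment in comment_on_draft("Draft judgment text.", corpus):
    print(comment)
```

As in Rieber's example, the contrary item is listed first so the analyst sees overlooked counter-evidence before deciding whether to lower the confidence assigned to the judgment; the human, not the tool, makes that call.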

While that, Rieber noted, was just one “sort of stylized, dramatic example” of what the imagined software might do, he said the novel AI-enabled tools might also point out supporting evidence for the judgment that an intel analyst is making. It could also reveal, he said, some weakness in the logic of the user’s analytic report.


“We all know how helpful it can be to receive comments from our peers and supervisors, and from friends on our drafts. So, you can think of REASON as producing these comments of a similar type that we get from our human peers — but doing so instantaneously and on demand — whenever the analyst wants the comments,” Rieber said.

Aiding — but not replacing — humans

IARPA intends for this program to pave the way for novel AI systems that can assist intel analysts as they hustle to solve complex national security puzzles — pinpointing available information that pertains to their work and may have been overlooked.

The hope is that those tools might one day improve the accuracy and speed of the reports those top thinkers deliver to policymakers and the administration. Right now, though, “such technology does not exist,” Rieber told DefenseScoop.

Once IARPA and its partners develop it, REASON may be somewhat analogous to one existing capability: the digital technology that automatically produces suggestions on grammar and style in written products, like essays. The envisioned AI tools would be similar in that they generate comments on experts’ work; however, the comments would address the logic and evidence of analytic reports rather than the writing and style.


“Automatic grammar-check technologies are pretty sophisticated. But the REASON problem is harder because there aren’t formal rules that guide the evidence and logic in real-life reasoning on complex issues the way there are formal rules for, say, grammar and spelling,” Rieber said.

He noted that the REASON program does not seek to replace human analysts with fully automated production of analytic reports. Instead, those involved will produce and refine software that helps the analysts write better reports, faster, and with stronger reasoning than today’s technologies allow. 

“The fact that REASON doesn’t aim to do that is a good thing — because if we had automated production of analytic reports, that technology would have to be just about perfect, because we can’t risk making a mistake in intelligence analysis that informs our decision-makers in national security,” Rieber said.

Human experts can use their own judgment to act on the valuable suggestions the AI software generates and discard the rest.

When asked whether REASON could be portrayed as a potential stepping stone toward AI systems so sophisticated that they could independently write full analytic national security reports more accurately — and more quickly — than humans, Rieber said he could think of several reasons why “that is still a long way off.”


For one, even the most advanced technologies that exist today for generating essays or other written products still often produce glaring errors, according to Rieber.

“Another reason to doubt whether technology will soon be able to produce analytic reports of high quality is that intelligence analysts are experts on their topics. And the technology would have to have an extraordinary level of … artificial intelligence to be able to compete with the human analysts who’ve been working on that topic for years,” he said.


Written by Brandi Vincent

Brandi Vincent reports on emerging and disruptive technologies, and associated policies, impacting the Defense landscape. Prior to joining DefenseScoop, she produced a long-form documentary and worked as a journalist at Nextgov, Snapchat and NBC Network. Brandi was named a 2021 Paul Miller Washington Fellow by the National Press Foundation and was awarded SIIA’s 2020 Jesse H. Neal Award for Best News Coverage. She grew up in Louisiana and received a master’s in journalism from the University of Maryland.
