2019 in review: Agencies embrace RPA — AI less so

Trust in data remains a hurdle for agencies implementing AI, but they overcame obstacles in credentialing bots.

Plenty of agencies undertook robotic process automation pilots in 2019, but developing the confidence in their data to pursue full-on artificial intelligence continues to be an uphill battle.

In 2019, agencies saw real wins on the robotic process automation (RPA) front.

The General Services Administration launched an RPA community of practice in April allowing for agency collaboration on the development of software that mimics the keystrokes and mouse actions of employees to automate repetitive, manual tasks — saving time and money.
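At its core, this kind of bot is just a scripted replay of the clicks and keystrokes a clerk would perform. The sketch below is illustrative only: the form fields, values, and `RecordingUI` driver are hypothetical stand-ins for a real desktop-automation tool, not any agency's actual system.

```python
# Illustrative sketch: a rules-based RPA bot reduced to its essence.
# All field names and values here are hypothetical.

FORM_STEPS = [
    ("click", "invoice_number_field"),
    ("type", "INV-2019-0042"),
    ("click", "amount_field"),
    ("type", "1250.00"),
    ("click", "submit_button"),
]

def run_bot(steps, ui):
    """Replay a fixed script of UI actions, as a human clerk would."""
    for action, target in steps:
        if action == "click":
            ui.click(target)
        elif action == "type":
            ui.type_text(target)

class RecordingUI:
    """Stand-in for a real desktop-automation driver; records what the bot does."""
    def __init__(self):
        self.log = []
        self.focused = None

    def click(self, element):
        self.focused = element
        self.log.append(f"click:{element}")

    def type_text(self, text):
        self.log.append(f"type into {self.focused}: {text}")

ui = RecordingUI()
run_bot(FORM_STEPS, ui)
```

In a production bot, `RecordingUI` would be replaced by a commercial RPA platform's driver for the target application; an unattended bot like DLA's simply runs such a script on a schedule with its own credentials instead of a logged-in employee's.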

A month later, the Defense Logistics Agency announced it had finished a first-of-its-kind proof of concept in government allowing unattended bots to work around the clock. Previously, bots had to be attended: they received credentials from an employee's laptop and could operate only while that person was on the clock.


Credentialing bots like humans doesn’t sit well with every official. But the Federal Emergency Management Agency ran a pilot earlier this year to determine how to properly provision identity to bots by giving their development pipeline an authority to operate.

The Center for Drug Evaluation and Research has seven RPA projects in development — like one automating drug intake forms — to free up pharmaceutical and medical staff for the agency’s core science mission.

In May, the Office of Personnel Management released a toolset for handling RPA and AI’s effect on the federal workforce, focusing on how employees might be redeployed or reskilled to meet mission-critical needs.

As of July, Deloitte estimated there were more than 1,000 bots across the government but found agencies weren’t rushing to scale them vertically because that requires putting reskilling and performance management controls in place. Instead, agencies were more likely to scale RPA horizontally into additional, controlled use cases.

“I’ve seen RPA solicitations on the street. So we’ve matured as a business from risk and compliance, security credentialing, innovation to having more places horizontally scaled,” Marc Mancher, a principal at Deloitte, told FedScoop. “And I’m hoping in the next 12 months — as I’m now seeing more in the marketplace — for us to get vertical scale in the federal government.”


While RPA is a step in the right direction to automate rote, repetitive tasks common to the government space, many are critical of referring to it as AI because it is completely rules-based and doesn't actually generate intelligence.

The slower move to adopt full-on AI

Federal agencies were much slower this past year to implement true AI applications that replicate or mimic human judgment or behavior.

Agencies must determine how much data they need, what they're trying to understand with their algorithms, and whether they trust their findings, Donna Dodson, chief cybersecurity advisor at the National Institute of Standards and Technology, said at a conference in October.

“What’s really fascinating to me is, when the outcomes come out, people are not thinking about the kinds of security capabilities you want to protect those outputs so that you can go back and show during the life cycle what you did to protect and provide that confidence in the end result,” Dodson said.


NIST published a draft plan in July for federal engagement in AI standards that pointed to "trustworthiness," established through metrics, as the central element.

While President Trump issued an executive order on a national AI strategy in February, lawmakers insist a national framework is still needed to address challenges developing the technology and maintaining U.S. leadership on that front — ahead of rivals like China.

Agencies lag behind financial services and tech companies in developing organizational AI strategies to account for talent and investment, according to a Deloitte report from November.

That hasn’t stopped the Pentagon from beginning to develop an AI algorithm for streamlining the issuance and review of federal security clearances or the Department of Homeland Security from exploring prototypes that could predict a vendor’s ability to deliver on a contract.

The National Science Foundation plans to invest $200 million in large-scale, long-term AI projects over the next six years, the U.S. Postal Service plans to adopt an AI system for reading address labels more accurately, and the Department of Energy intends to incorporate more geospatial data into AI development.


On the defense side, the Air Force led other branches in developing an AI strategy while the Army remains focused on building out its cloud architecture first. An independent audit conducted by RAND Corp. found the Department of Defense “lacks baselines and metrics in conjunction with its AI vision” and its Joint AI Center needs better “visibility and authorities to carry out its present role.”

And then there's the fact that agency CIOs remain skeptical of the value of current commercial AI applications for services like cybersecurity.

“I think we’re in the early, early stages of applying real AI to cyber,” said Ryan Cote, CIO at the Department of Transportation, at the Security Transformation Summit in December. “We’re still trying to figure out the definition of AI in some circles and what is AI and what isn’t AI.”
