Feds need to ‘raise’ AI with better data, report says
As the government increases its adoption of artificial intelligence, a new report says federal officials will have to take great care in “teaching” the technology if agencies are to maximize its potential.
With AI’s potential applications ranging from clearing administrative backlogs for the workforce to providing cybersecurity protections, the Accenture Federal Technology Vision 2018 report says federal personnel will increasingly have to teach the technology how to perceive the data it interprets and how to justify its decisions.
“Deploying AI is no longer just about training it to perform a given task. Just as parents guide their children, AI must be ‘raised’ to act as a responsible representative of the agency, and a contributing member of society,” the report said. “No one would expect a tool to ‘act’ responsibly, explain its decisions, or work well with others. But with AI systems making decisions that affect people, we must teach AI to do these things, and more.”
That education will come from the data presented to an AI solution and, more importantly, from how that data is structured, the report said. By establishing a taxonomy that organizes the data into a baseline for the AI application to learn from, agencies can improve its decision making and provide officials with better outputs.
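As a rough illustration of what such a taxonomy-driven baseline might look like, the sketch below tags raw agency records against a small category scheme before they are used as training data. The category names, record fields, and `label_record` helper are all hypothetical, not drawn from the report.

```python
# Hypothetical sketch: a minimal taxonomy that structures agency records
# into a labeled baseline an AI application could learn from.

# A simple two-level taxonomy of record types an agency might define
TAXONOMY = {
    "benefits": ["application", "appeal", "renewal"],
    "security": ["checkpoint_scan", "access_log"],
}

def label_record(record: dict) -> dict:
    """Attach a taxonomy label to a raw record, or flag it as unclassified."""
    kind = record.get("kind")
    for category, kinds in TAXONOMY.items():
        if kind in kinds:
            return {**record, "category": category, "label": kind}
    return {**record, "category": "unclassified", "label": None}

raw = [
    {"id": 1, "kind": "application", "text": "benefits form"},
    {"id": 2, "kind": "checkpoint_scan", "text": "terminal sensor reading"},
    {"id": 3, "kind": "memo", "text": "internal note"},
]

baseline = [label_record(r) for r in raw]
print([r["category"] for r in baseline])  # ['benefits', 'security', 'unclassified']
```

Records that fall outside the taxonomy are flagged rather than silently dropped, so analysts can review them before the model ever sees them.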
“The organizations with the best data available to train an AI how to do its job will create the most capable AI systems,” the report said. “However, data scientists must use care when determining constraints and training data. It’s not just about scale but about actively minimizing bias in the data. Building provenance (verifying the history of data from its origin throughout its lifecycle) into a library of models preserves a link between models and the data used to train the model.”
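The provenance idea the report describes, preserving a link between a model and the data used to train it, could be sketched as a model library that stores a content hash of each training set. The `register_model` function, field names, and model names here are illustrative assumptions, not anything Accenture specifies.

```python
import hashlib
import json
from datetime import datetime, timezone

def data_fingerprint(records: list) -> str:
    """Content hash of a training set, so the exact data can be verified later."""
    canonical = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def register_model(library: dict, model_name: str, records: list, source: str) -> dict:
    """Add a library entry that preserves the link between model and training data."""
    entry = {
        "model": model_name,
        "data_sha256": data_fingerprint(records),
        "data_source": source,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    library[model_name] = entry
    return entry

library = {}
train = [{"id": 1, "label": "benefits"}, {"id": 2, "label": "security"}]
register_model(library, "triage-v1", train, "agency-intake-2018")

# Later, confirm a dataset matches what the model was actually trained on
assert library["triage-v1"]["data_sha256"] == data_fingerprint(train)
```

Because the fingerprint is computed over a canonical serialization, any change to the training data, however small, breaks the link and is immediately detectable.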
Because of data’s importance to informing the AI systems agencies will be relying on, the Accenture report also calls on federal leaders to establish policies that ensure sound data integrity and security.
To do this, Accenture advises federal leaders to craft frameworks focused on verifying the provenance, context and integrity of the data that informs their systems and workforce.
The report cites the Department of Homeland Security’s process for its Privacy Impact Assessments, which outline to the public what personally identifiable information the department collects, why it is collected and how it is stored and protected.
Agencies should develop frameworks that can help determine the integrity of the data they collect, helping their systems better interpret information to prevent incorrect decisions in both citizen-facing and cyberdefense systems.
“Whether it’s a citizen creating a data trail by applying for benefits online or a sensor network reporting security checkpoints for a transportation system, there’s an associated behavior around all data origination,” the report said. “Federal agencies must build the capability to track this behavior as data is recorded, used and maintained.”
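Tracking that behavior "as data is recorded, used and maintained" amounts to keeping an append-only lifecycle log for each data item. The sketch below is one hedged way an agency might structure such a trail; the `DataAuditTrail` class, action names, and actor identifiers are invented for illustration.

```python
from datetime import datetime, timezone

class DataAuditTrail:
    """Append-only log of events across a data item's lifecycle."""

    def __init__(self):
        self.events = []

    def record(self, item_id: str, action: str, actor: str) -> None:
        """Log one lifecycle event (e.g. 'recorded', 'used', 'maintained')."""
        self.events.append({
            "item": item_id,
            "action": action,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def history(self, item_id: str) -> list:
        """Return every logged event for one data item, in order."""
        return [e for e in self.events if e["item"] == item_id]

trail = DataAuditTrail()
trail.record("benefits-app-001", "recorded", "citizen-portal")
trail.record("benefits-app-001", "used", "eligibility-model")
trail.record("benefits-app-001", "maintained", "records-office")

print([e["action"] for e in trail.history("benefits-app-001")])
# ['recorded', 'used', 'maintained']
```

Because events are only ever appended, the trail answers both questions the report raises: where a piece of data originated and what has touched it since.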
Besides AI adoption, the Accenture report examines four other innovation trends for the federal government, including extended reality and “frictionless business,” as well as how federal leaders can navigate the challenges of incorporating new tech into agency operations.