Advertisement

NIST AI center looks for input on agentic AI security, best practices

A request for information by the Center for AI Standards and Innovation comes as commercial buzz around autonomous AI agents grows.
Listen to this article
0:00
Learn more. This feature uses an automated voice, which may result in occasional errors in pronunciation, tone, or sentiment.
(Getty Images)

The federal government’s Center for AI Standards and Innovation is looking to the public for input on artificial intelligence agents to support its work evaluating and establishing guidance for the technology. 

A request for information scheduled to go live Thursday on the Federal Register will specifically seek information from stakeholders — such as developers, deployers, and researchers focused on computer security — on practices and methods for developing and adopting AI agent systems. Comments will be due 60 days after the request is officially published. 

Agentic AI, which has become a buzzy term for the tech industry, generally refers to systems that can autonomously complete specific tasks, as opposed to something like an AI chatbot that is designed to work by interacting with a user. 

For proponents of the tech, AI agents present opportunities for fully automating certain work and creating efficiency, but there are also risks with actions that require no human intervention. That’s where the request for information comes in.

Advertisement

The request is being published by CAISI, which was formerly known as the AI Safety Institute and is housed within the Department of Commerce’s National Institute of Standards and Technology. 

According to a copy of that RFI published for public inspection ahead of its official release, the center is looking for examples of agent system deployments and how risks were managed and anticipated.

“AI agent systems are capable of taking autonomous actions that impact real-world systems or environments, and may be susceptible to hijacking, backdoor attacks, and other exploits,” the RFI states. “If left unchecked, these security risks may impact public safety, undermine consumer confidence, and curb adoption of the latest AI innovations.”

CAISI has already conducted an initial evaluation of AI agent “hijacking,” which is a term that refers to a type of attack in which the agent ingests data with malicious instructions intended to make the system take potentially harmful actions. 

Through those experiments, the center said it found continuously improving and expanding shared frameworks for evaluation is important, and that evaluations need to be adapted to anticipate weaknesses that might not be known, among other things.

Advertisement

Topics CAISI is hoping to get information on in the comments include what security threats, risks, and vulnerabilities exist for AI agents; security best practices for the systems; how to assess the security of agents; and whether the environment the tools are deployed in can be monitored or constrained to mitigate risks.

Madison Alder

Written by Madison Alder

Madison Alder is a reporter for FedScoop in Washington, D.C., covering government technology. Her reporting has included tracking government uses of artificial intelligence and monitoring changes in federal contracting. She’s broadly interested in issues involving health, law, and data. Before joining FedScoop, Madison was a reporter at Bloomberg Law where she covered several beats, including the federal judiciary, health policy, and employee benefits. A west-coaster at heart, Madison is originally from Seattle and is a graduate of the Walter Cronkite School of Journalism and Mass Communication at Arizona State University.

Latest Podcasts