
Pipeline safety agency’s proposed pilot for ChatGPT in rulemaking raises questions

The Pipeline and Hazardous Materials Safety Administration is considering using OpenAI’s ChatGPT in the rulemaking process, according to a Transportation Department AI inventory.
(Photo illustration: the ChatGPT logo at an office in Washington, D.C., March 15, 2023. Stefani Reynolds/AFP via Getty Images)

The Pipeline and Hazardous Materials Safety Administration is exploring using ChatGPT in the rulemaking process, according to a disclosure by its parent agency, the Department of Transportation.

According to a posting on the department’s public AI inventory, PHMSA is weighing an “artificial intelligence support for rulemaking use case.” The project involves using ChatGPT to support rulemaking “processes to provide significant efficiencies, reduction of effort, or the ability to scale efforts for unusual levels of public scrutiny or interest.” The agency told FedScoop that, right now, it has no official plans to implement such technology.

Interest from PHMSA, which creates regulations for the movement of potentially dangerous materials, comes as other agencies, including NASA and the Defense Department, begin considering the role of generative AI tools in their work.

Still, PHMSA’s concept for a pilot that would use ChatGPT to analyze public comments on regulations it’s considering raises concerns about what role, if any, the technology should play in the regulatory process, according to an expert on AI and civil liberties.


“The idea that agencies will use a tool notorious for factual inaccuracies for development of rules that forbid arbitrary and capricious rule-making processes is concerning,” Ben Winters, an attorney and the leader of the AI and Human Rights project at the Electronic Privacy Information Center, said in an email to FedScoop. “Especially, the PHMSA, whose rules often concern potentially life-altering exposure to hazardous materials.”

The Transportation Department’s AI inventory states that the OpenAI chatbot would be used to conduct sentiment analysis on comments sent to the agency about proposed rules. The tool could be used for analyzing the “relevance” of the comments, providing a “synopsis” for comments, “cataloging of comments,” and identifying duplicates.
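The inventory entry does not say how such a pilot would actually be built, and the agency has described no design. Purely as an illustration of the kind of comment-triage workflow the inventory gestures at, a minimal sketch using OpenAI’s chat completions API (Python SDK, openai>=1.0) might look like the following; the model choice, prompt wording, and function name are assumptions made for this example, not details PHMSA or DOT has published.

# Illustrative sketch only: minimal comment triage with the OpenAI Python SDK
# (openai>=1.0). Nothing here reflects PHMSA's actual design; the model and
# prompt are assumptions for this example.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are reviewing a public comment on a proposed pipeline-safety rule. "
    "Reply with three lines: SENTIMENT (support/oppose/mixed/neutral), "
    "RELEVANCE (high/medium/low), and a one-sentence SYNOPSIS."
)

def triage_comment(comment_text: str) -> str:
    """Return the model's sentiment, relevance, and synopsis for one comment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; the inventory names no model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": comment_text},
        ],
        temperature=0,  # reduces run-to-run variability, not factual errors
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage_comment(
        "The proposed shutoff-valve spacing is too lax for lines near schools."
    ))

Even in a sketch this small, the labels and synopses are generated text with no accuracy guarantee, which is precisely the reliability concern critics of the proposal raise.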

When asked about the use case, PHMSA emphasized that the project is still in a very early stage.

“PHMSA, like many other federal agencies, is exploring the responsible and ethical use of AI through limited pilots and demonstration projects,” the agency told FedScoop in a statement. “These pilots and projects are designed to ensure alignment with recent guidance from the Administration on the appropriate use of AI in the federal government.”

The agency continued: “At this time, PHMSA is not using, and does not plan on using any generative AI tools or commercial software for generative AI like OpenAI to influence the rulemaking process. PHMSA is working with our stakeholders to assess both short term and long term risks from generative AI.”


In the agency’s AI inventory, which was last updated in July, the project is described as “a pilot initiative” that’s “planned” and “not in production.”

Winters, from EPIC, questioned whether ChatGPT is an appropriate technology for the rulemaking process. He argued that relevance analysis could ultimately result in an agency missing a novel point it hadn’t considered before, and added that sentiment analysis isn’t a “relevant consideration” under the Administrative Procedure Act’s rulemaking process.

“[S]ummaries by ChatGPT are prone to factual inaccuracies and a limited and outdated corpus of information,” he said. “Most of these functions could not be reliably achieved by ChatGPT.”

OpenAI did not respond to a request for comment by publication time.

There are other instances where DOT’s AI activities, at least as described on the department’s official AI inventory, have raised questions. Earlier this year, the department removed a reference to the Federal Aviation Administration’s Air Traffic Organization using ChatGPT for code-writing assistance in response to FedScoop questions.


Stanford researchers highlighted major issues in the AI inventory compliance process at the end of last year. FedScoop has reported on ongoing issues related to these inventories and the AI use cases they’ve revealed.

Madison Alder contributed reporting. 
