Lawmakers want FTC to take on ‘black box’ AI foundation models
One of the House’s top voices on artificial intelligence wants to put an independent federal agency in charge of ensuring the data and algorithms behind foundation models are made public.
Rep. Don Beyer, D-Va., co-chair of the Congressional AI Caucus, is part of a bipartisan trio behind a bill introduced last week that would require the Federal Trade Commission to establish requirements for foundation model transparency.
The bill, co-sponsored by Reps. Mike Lawler, R-N.Y., and Sara Jacobs, D-Calif., calls on the FTC to work with the Commerce secretary, the Office of Science and Technology Policy director and the head of the National Institute of Standards and Technology on those requirements. The federal leaders would also seek input from standards bodies, academics, tech experts, civil rights advocates and consumers.
Beyer, who has pursued graduate work in machine learning, said in a press release that consumers deserve more information about AI foundation models that are “commonly described as a ‘black box’” — meaning users aren’t privy to why a model may provide a particular response.
Giving users more information, such as what the model bases its results on and how it was built, would go a long way toward changing that element of the unknown, the Virginia Democrat said.
“This bill would help users determine if they should trust the model they are using for certain applications, and help identify limitations on data, potential biases, or misleading results,” Beyer continued. “When a model’s bias could lead to harmful results like rejections for housing or loan applications, or faulty medical decisions, the importance of this reform becomes clear and very significant.”
The bill aims to spur documentation on testing and details on data collection prior to commercial deployment of an AI model. Ongoing transparency throughout the “lifecycle of the system” would also be required, per the bill text. The legislation would exempt “fully open-source” models from the FTC’s regulations.
The lawmakers view a handful of measures as paramount to providing the public with total transparency, including: detailed summaries of training data sources, broad descriptions of that training data, descriptions of data governance procedures, descriptions of both purposes and unintended consequences of the model, details on the computational power used to train the foundation model, and more.
The House members are also interested in how a model responds to questions about potentially sensitive topics. The bill would require AI model developers to share the “precautions” they take when answering or responding to “situations with higher levels of risk of providing inaccurate or harmful information.” Those topics include national security, elections, law enforcement, health care, hiring decisions, financial decisions, and biological, chemical, radiological or nuclear weapons, among others.
“This is about accountability and getting ahead of a rapidly evolving technology before it outpaces common-sense guardrails,” Lawler said in a statement. “Transparency is the foundation for trust, and if we’re going to lead on innovation here in the United States, we also have to lead on protecting consumers, safeguarding our national security, and making sure this technology is used responsibly.”
The commission would also be charged with creating a plan to help small and new businesses comply with the requirements. The FTC and NIST would jointly publish guidance for compliance, in addition to providing a three-month grace period for small and new businesses and making available a “qualified, technically proficient representative” for meetings during that grace period.
The FTC’s enforcement powers, meanwhile, would fall under the agency’s “unfair or deceptive acts or practices” regulations. Covered entities found in violation of the rules would be notified at least two weeks before any enforcement actions are taken.
“Trust will decide the global AI race — separating the countries and developers that earn it from those that don’t,” Jacobs, a member of the House’s bipartisan task force on AI, said in a statement. “Transparency is the first step toward detecting and addressing potential harms, assigning responsibility, and building confidence in systems that are rapidly shaping our lives as well as our economy and national security.”