Four major federal agencies announced Tuesday that they are teaming up to crack down on the use of artificial intelligence tools that perpetuate bias and discrimination.
The Biden administration will use existing civil rights and consumer rights laws to take enforcement action against AI systems and automated systems that allow discrimination, top leaders within the Justice Department, the Federal Trade Commission, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission pledged on Tuesday.
With AI tools increasingly central to private industry, and potentially to government decisions about hiring, credit, housing and other services, top leaders from the four federal agencies warned about the risk of “digital redlining.”
The officials said they were worried that inaccurate data sets and faulty design choices could perpetuate racial disparities, and they pledged to use existing law to combat such risks.
“We’re going to hold companies responsible for deploying these technologies, and making sure that it is all in compliance with existing law. I think we are starting the process of figuring out where we’re identifying potentially illegal activity,” said Rohit Chopra, director of the Consumer Financial Protection Bureau.
“And we’ve already started some work to continue to muscle up internally, when it comes to bringing on board data scientists, technologists and others, to make sure we can confront these challenges,” Chopra added.
The four federal agencies are taking the lead in holding AI companies and vendors responsible for harmful behavior because they are the key agencies charged with enforcing civil rights, nondiscrimination, fair competition, consumer protection, and other legal protections for citizens.
Each agency has previously expressed concern about potentially harmful uses of automated systems.
“There is no AI exemption to the laws on the books,” said FTC Chair Lina Khan, one of several regulators who spoke during a news conference to signal a “whole of government” approach to enforcement efforts against discrimination and bias in automated systems.
Khan said the FTC recently launched a new Office of Technology, which is focused on hiring more technologists with the expertise to fully grasp how AI technologies function and can cause harm, giving the agency in-house capacity to deal with such issues.
AI and automated system companies that are government vendors or contractors could also be targeted by the federal government’s enforcement crackdown.
“So with respect to vendors and employers, obviously we have very clear enforcement with respect to employers, depending on the facts, and this is true of pretty much every issue that we might look at; it is very fact intensive.
“I want to emphasize that there may be liability for vendors as well. And it really depends on how they’re constructed,” said Charlotte Burrows, chair of the Equal Employment Opportunity Commission (EEOC).
“There are various legal authorities with respect to vendors and other actors that may be involved in the employment process and in developing these tools. So it really just depends on what that relationship is and what role the AI developer or the vendor may have with respect to the employee and the processes, both for our authority with respect to interference under, for instance, Title VII of the Civil Rights Act, or the ADA, which actually has quite a broad interference provision,” Burrows added.