Chevron’s downfall highlights need for clear artificial intelligence laws

Much of Biden’s AI executive order isn't likely to be affected, but legislation coming from Congress will need to account for the Supreme Court’s ruling, experts say.
Sen. Martin Heinrich, D-N.M.; Sen. Todd Young, R-Ind.; Senate Majority Leader Charles Schumer, D-N.Y.; and Sen. Mike Rounds, R-S.D., prepare to talk to reporters after meeting with President Joe Biden at the White House in October 2023. (Chip Somodevilla/Getty Images)

The demise of a legal doctrine that required courts to defer to a federal agency’s interpretation of ambiguous statutes creates hurdles as the U.S. races to build a regulatory framework for artificial intelligence, several legal and policy experts told FedScoop. 

The Supreme Court’s 6-3 decision to overturn what’s known as Chevron deference complicates federal efforts to regulate, opening rules up to legal challenges on the basis that they run afoul of what Congress intended. So as lawmakers and the Biden administration make a push to rein in AI, they’re under pressure to be more specific — and clairvoyant — about the emerging technology.  

“It makes it less likely that agencies can sort of muddle along successfully because they’ll be more open to challenges even if the challenges aren’t meritorious,” said Ellen Goodman, a professor at Rutgers Law School who specializes in law related to information policy. The solution was always getting clear legislation from Congress, she said, but “that’s even more true now.”

The stakes are high as the Biden administration has directed federal agencies to take action on AI, particularly in the absence of a major legislative package from Congress. While most of the major portions of the October executive order appear unlikely to be affected, the decision threatens to complicate an already slow-moving legislative and rule-making process. Ultimately, the ruling gives companies another avenue to challenge AI-related regulation, and it gives the judiciary more influence over an emerging technology it may not have the expertise to evaluate.

The issues for AI regulation were foreshadowed during the oral arguments for the consolidated cases before the high court: Relentless v. Department of Commerce and Loper Bright v. Raimondo.

Justice Elena Kagan used AI as an example of something that could be “the next big piece of legislation on the horizon” when discussing whether Congress wants the court to be the arbiter of “policy-laden questions” around the technology. “What Congress wants, we presume, is for people who actually know about AI to decide those questions,” Kagan said. 

In her dissent, which argued that the court’s opinion disregarded the judgment that agencies are experts in their respective fields, Kagan again pointed to AI as an example of an area of regulation that could wind up in the courts because of inevitable ambiguity in statutes.

‘Tough task’ for Congress

For Congress, the ruling means that lawmakers writing new legislative proposals on AI may need to include language that gives agencies Chevron-like deference to reasonably interpret the law, experts told FedScoop.

Lawmakers will likely want to stipulate that decisions about interpreting the definitional aspects of AI systems as the technology evolves still be left up to the agencies, Goodman said. “To the extent that they don’t have those explicit delegations, the court’s going to interpret it,” she said.

The overturning of Chevron underscores arguments for a new AI-focused agency, Goodman said. Even without this ruling, regulatory work required by statute needs to be carried out by an agency. Now, with the need to incorporate language that tasks an agency with an updating and interpretive function, Congress must specify which agency, but there isn’t currently a single agency tasked with that responsibility, she said.

Rep. Don Beyer, D-Va., who serves on the bipartisan AI task force, said in a statement to FedScoop that the ruling “makes it much more important that Congress respond to AI with legislation, and that it do so at a more granular level.

“The alternative, expecting judges and law clerks with no relevant expertise to make sweeping decisions and fill in gaps in existing regulation of an enormously important policy area, is a recipe for disaster,” Beyer added.

Anticipating changes in the rapidly evolving technology and writing statutes that address that with hyper-specificity is a tall order for lawmakers, said Divyansh Kaushik, a non-resident senior fellow at American Policy Ventures who specializes in AI and national security. 

“That’s a tough task, especially right now,” Kaushik said, pointing to the lack of technical knowledge in the legislative branch.

Congress “should be very proactive now” in building up technical capacity, shoring up agencies like the Government Accountability Office and Congressional Research Service, and potentially bringing back the defunct Office of Technology Assessment, Kaushik said. The OTA was dedicated to providing Congress with information about the benefits and risks of technology applications but lost its funding in 1995. There’s been recent bipartisan interest in reinstating the body, though.

“If Congress misses the moment, it will essentially be now up to the judiciary,” Kaushik said. That could lead to “slow-rolling” regulatory activity in the courts, he added.

In the meantime, state laws and regulations — which aren’t impacted by the decision — will likely continue to outpace federal efforts, said Stacey Gray, senior director for U.S. policy at the Future of Privacy Forum.

Colorado, for instance, passed legislation to protect consumer rights with respect to AI systems, and California’s 2020 Privacy Rights Act managed to influence how some websites and services operate throughout the country. 

While the “big risk” of regulating technology at the state level is fragmentation, Gray said, not having Chevron could create the same effect for federal rules intended to be a national standard. “If you have less deference to the federal agency that’s creating the rules, then you have potentially different federal courts reaching different decisions about what the law means,” Gray said.

Executive order priorities

Much of the Biden administration’s executive order may remain unaffected, said Kenneth Bamberger, a UC Berkeley Law professor who has focused on AI regulatory issues. That’s because most of the order’s actions consist of non-binding efforts, like directing federal agencies to draft reports or establish guidelines, that wouldn’t have fallen under Chevron.

For example, the order’s pilot for the National AI Research Resource at the National Science Foundation and the establishment of the AI Safety Institute at the Department of Commerce’s National Institute of Standards and Technology appear to have textual support in statute, said Matt Mittlesteadt, a research fellow at the Mercatus Center at George Mason University. 

While neither action is named directly in law, the activities of the NAIRR, which is aimed at providing resources such as cloud computing and data for AI research, and of the AI Safety Institute are, for the most part, spelled out in the National AI Initiative Act of 2020, Mittlesteadt said.

Beyond some of the major actions, however, it’s difficult to determine whether others in the lengthy order are impacted.

“Certain actions might change out of view, and we’ll never really know what those things are, but perhaps there are certain ambitions that might be scaled back,” Mittlesteadt said.

One area from the executive order that has potential for challenge is the administration’s use of the Defense Production Act to compel companies to disclose information about foundation models that pose serious risks to national security, Kaushik said. 

He pointed to existing opposition by Sen. Ted Cruz, R-Texas, who in a Wall Street Journal op-ed with former Sen. Phil Gramm, R-Texas, argued that the Biden administration’s use of the statute in that manner “begs for legislative and judicial correction.”

Mittlesteadt, however, said the law’s industry assessment provisions are “crystal clear” that the president has the authority to subpoena companies for information related to the defense industrial base. In this instance, “one could easily make that case,” he said.

Suzette Kent, CEO of Kent Advisory Services and former federal chief information officer under the Trump administration, said the ruling makes efforts to hire and retain a federal workforce with AI expertise even “more important,” given that the decision could expand the areas where deep expertise is needed.

“Whether it’s law or regulation, we need experts, and we need experts that understand both the technology and business process,” Kent said.

AI in the courts

Still, the ruling raises key challenges for federal agencies dealing with AI-focused issues. Officials may find themselves proposing less ambitious rules out of concern that their proposals could ultimately be struck down by a court, warned Cary Coglianese, a professor at the University of Pennsylvania Law School who focuses on administrative law.

“Agencies go to their general counsels and their general counsels weigh whether there’s some litigation risk and what the likelihood of prevailing in the face of a challenge to the agency’s authority,” Coglianese said. “Now those lawyers in those general counsels’ offices are, I think, going to advise taking a more cautious approach. And that’s where this will really play out.”

Generally, courts with limited AI-related experience could end up making determinations on topics they aren’t trained to handle, or topics that could evolve, such as the definition of a frontier model that might be discussed in a statute, Bamberger said.

John Davisson, the director of litigation and senior counsel at the Electronic Privacy Information Center, called the overturning of Chevron “a calculated blow to the power of federal agencies to protect the public from harms posed by emerging technologies, including AI.” He suggested that “courts will be freer to insert their own views of whether a regulation is the ‘best’ way to apply a statute, even when they lack the technical expertise and democratic legitimacy to make that call.” 

To help inform their judgments, Gray noted, judges still have what’s known as Skidmore deference, which lets courts give weight to an agency’s interpretation based on its persuasiveness. That will likely prompt agencies to spend more time briefing judges on the reasoning and validity of their decisions, she said.

“It’s no longer going to be enough for agencies simply to say that the interpretation of the law is reasonable,” Gray said. “They have to also convince the courts that it’s the right decision, and it’s the correct interpretation.”

But Goodman said a lot of how those regulations move forward depends on which judges get the challenges and who is in charge at the executive agencies. Right now, there’s a general reaction “that assumes conservative judges and a kind of more pro-regulatory executive, but one could easily imagine, certainly, the executive flipping.” 

Should that happen, the fight over AI regulations could look a little different. Regardless, courts are likely to be significantly more involved in making decisions about the technology than they are now. 

This story was updated July 9, 2024 with quotes from Rep. Don Beyer.
