Tech groups push back on Biden AI executive order, raising concerns that it could crush innovation

NetChoice, the Chamber of Commerce and SIIA expressed significant concerns about the 111-page EO, which marks the most aggressive step by the government to rein in the technology to date.
President Joe Biden hands Vice President Kamala Harris the pen he used to sign a new executive order regarding artificial intelligence during a White House event on Oct. 30, 2023, in Washington, D.C. (Photo by Chip Somodevilla/Getty Images)

President Joe Biden’s executive order on artificial intelligence has largely received a warm welcome from AI experts and government leaders, but it is facing pushback from multiple tech industry associations that say the EO is too confusing, too broad and could potentially stifle innovation. 

NetChoice, the U.S. Chamber of Commerce and the Software & Information Industry Association — which represent some of the largest AI and tech companies in the world — expressed several concerns about the long-awaited 111-page executive order, which marks the most aggressive step by the government to rein in the technology to date.

“Broad regulatory measures in Biden’s AI red tape wishlist will result in stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation,” Carl Szabo, vice president and general counsel at NetChoice, an advocacy group that represents major AI companies such as Amazon, Google and Meta, said in a statement.

“This order puts any investment in AI at risk of being shut down at the whims of government bureaucrats,” he continued. “That is dangerous for our global standing as the leading technological innovators, and this is the wrong approach to govern AI.”

Szabo added that there are many federal government regulations that already govern AI that can be used to rein in the technology, but the Biden administration “has chosen to further increase the complexity and burden of the federal code.”

The Chamber of Commerce said the EO shows promise and addresses important AI priorities, but also raises concerns and needs more work.

“Substantive and process problems still exist,” Tom Quaadman, executive vice president of the Chamber’s Technology Engagement Center, said in a statement. “Short, overlapping timelines for agency-required action endanger necessary stakeholder input, thereby creating conditions for ill-informed rulemaking and degrading intra-government cooperation.” 

Federal agencies such as the Federal Trade Commission and the Consumer Financial Protection Bureau “should not see this as an opportunity to do as they please,” Quaadman added. “All agencies must continue to act within congressional limits and abide by Supreme Court rulings.”

The wide-ranging executive order, which aims to tackle everything from AI privacy risks to federal procurement, calls on several agencies to take on new responsibilities related to artificial intelligence. The order also addresses new strategies for federal agency use of the technology, including issuing guidance for agency deployment, helping agencies access AI systems through more efficient and less expensive contracting, and hiring more AI professionals within the government. 

Some tech executives also criticized the exclusion of key industry perspectives from the creation of the White House’s EO.

“It’s a good first step, but what I think is lacking here is we need to get more people at the table in the room than just the big three or big five technology companies. Those that work on AI risk and security should be included, too. I think that’s a missing element,” Ian Swanson, CEO and co-founder of Protect AI, which helps businesses secure their AI and machine learning systems, told FedScoop. Swanson previously led Amazon Web Services’ worldwide AI and machine learning business.

A third tech industry association, SIIA, said in a statement that it had “top-level support” for the EO but also “concerns about some of the directions taken,” including with regard to the document’s effect on innovation and American tech leadership. 

“While we are pleased the foundation model review process is focused on high-risk use cases — those that involve national security, national economic security, and national public health and safety — we are concerned that the EO imposes requirements on the private sector that are not well calibrated to those risks and will impede innovation that is critical to realize the potential of AI to address societal challenges,” said Paul Lekas, senior vice president for global public policy & government affairs at SIIA, which represents major tech players including Adobe, Apple and Google.

“While we support the measures to democratize research and access to AI resources and reform immigration policy, we believe the directive to the FTC to focus on competition in the AI markets will ultimately undermine the administration’s objectives to maintain U.S. technological leadership.” 

The White House didn’t respond to a request for comment by publication time. 

AI scholars say pushback from tech industry associations against regulatory steps like the EO is entirely expected; such groups have long argued that these actions threaten existing business models and slow the pace of technological progress. 

“The fact that we’re starting to see pushback on the EO is not surprising from trade groups saying it’s too broad and could impede innovation,” Tom Romanoff, director of the Technology Policy Project at the Bipartisan Policy Center, told FedScoop. “But everyone sees there’s a need for regulation to happen and both parties on [Capitol] Hill have been supportive, received it well.”

Nevertheless, Romanoff did critique federal agencies for their history of not meeting deadlines for the implementation of time-sensitive tech regulations.

“I can see the government very much slipping on deadlines,” he said. “They need to be realistic with their timelines and resources and we need to hold them accountable when agencies miss their deadlines.”

The EO shows significant progress in developing a U.S. strategy for AI regulation, tackling everything from privacy risks to large models. At nearly 20,000 words, the order is remarkably detailed and includes references to everything from AI models “trained using a quantity of computing power greater than 10²⁶ integer” and obligations of new chief artificial intelligence officers to incentive pay programs for AI-focused roles in government. 

Alondra Nelson, the former White House Office of Science and Technology Policy chief, was highly supportive and complimentary of the Biden administration’s EO, but also highlighted major challenges the government is likely to face in implementing the order.

“I think implementation is going to be hard and I think it’s encouraging that part of the executive order includes the standing up of a White House Council on AI,” Nelson said in an interview with FedScoop. “It’s going to take that level of senior coordination to do that. 

“I think we’re going to need scientists, technologists, funding and infrastructure to implement it at the scale in the government that we’ll need to preserve privacy from a technical perspective while we wait on the Congress to act and get us closer to something like federal privacy law.” 

Nelson added that the executive order hit the “sweet spot” of containing parts that everyone likes and doesn’t like, so “no one’s too unhappy.”

This story was updated to include additional comments and context from SIIA.
