
AI won’t pause for the election, and AI regulation shouldn’t either

With election season in full bloom, and politics, rather than policy, in the spotlight, U.S. agencies and regulators should take steps to continue to promote the responsible development and deployment of artificial intelligence.

Election season is here, and politics, rather than policy, is in the spotlight.
With lawmakers preoccupied with the upcoming elections in November, the prospect of major new legislation working its way through Capitol Hill is slim to none.

However, innovation in potentially disruptive technologies like artificial intelligence won’t pause for the election. Leading AI developers continue to pump out powerful new models that leaders in government and industry are eager to adopt. As such, it is crucial that federal agencies quickly and effectively respond to new developments in AI, allowing for responsible private-sector innovation while mitigating risks.


To promote the responsible development and deployment of AI during this busy political period, would-be regulators should therefore aim to use their existing powers to govern AI. Rather than waiting for a hyper-partisan Congress to create new regulations from scratch, agencies can deploy their current authorities and tools to govern AI applications under their regulatory purview.

Using existing regulations equips policymakers to respond more quickly to emerging AI capabilities. It allows agencies to apply their expertise most effectively in recognizing where AI presents opportunities and risks. And, in light of the Supreme Court’s recent decision in Loper Bright Enterprises v. Raimondo, which overturned the Chevron doctrine requiring courts to defer to agencies’ interpretations of ambiguous statutes, regulators relying on existing rules may be less likely to face legal challenges to their governance efforts from AI companies.



For instance, our recent research at the Center for Security and Emerging Technology maps the authorities of the Federal Aviation Administration and illustrates how the agency could use its existing legal powers under the U.S. Code to prescribe minimum safety standards for AI models integrated into aircraft and air traffic control systems. Other agencies could readily replicate this approach.

Our research also suggests that federal agencies should follow three broad principles to map and use their existing authorities to regulate AI.


First, agencies must begin to understand the landscape of AI risks and harm in their regulatory jurisdictions. Collecting data on AI incidents — where AI has unintentionally or maliciously harmed individuals, property, critical infrastructure, or other entities — would be a good starting point.

Incident data can help agencies pinpoint the most pressing areas requiring intervention, and data could even be shared across agencies to identify broader problematic trends for executive or congressional attention.

As a hypothetical example, the FAA could collect and analyze data on incidents in which a commercial airline used AI to monitor an aircraft’s in-flight systems, as a human flight engineer might do. Using its authorities to “promote safe flight of civil aircraft in air commerce,” the FAA could draw on the incident data to inform safety regulations for airlines deploying such an AI system.


Second, agencies must prepare their workforces to capitalize on AI and to recognize its strengths and weaknesses. Developing AI literacy among senior leaders and staff can improve understanding and support more measured assessments of where AI can appropriately serve as a useful tool. Greater knowledge can combat unbridled optimism or pessimism around AI and inform smart deployment of existing regulations.

Providing agency experts with a better understanding of AI can also help with regulatory mapping. To return to the FAA, familiarizing aviation experts with known strengths and limitations of AI applications relevant to their line of work may be easier than teaching AI experts the intricacies of aviation systems. AI-literate regulators can better recognize where regulatory action may be most urgently needed and can identify expertise gaps, like in testing and evaluating AI systems, where they may need to hire new talent with dedicated AI expertise.

Third and finally, agencies must develop smart, agile approaches to public-private cooperation. Private companies are valuable sources of knowledge and expertise in AI, and can help agencies understand the latest, cutting-edge advancements. Corporate expertise may help regulators overcome knowledge deficiencies in the short term and develop regulations that allow the private sector to innovate quickly within safe bounds.

At the same time, though, regulators need to ensure that incumbent firms do not use cooperation with the government to advocate for rules that unfairly crowd out competitors or limit the growth of a vibrant, diverse AI ecosystem. They must be alert to the risk of regulatory capture. Tomorrow’s AI progress may not look like today’s compute-intensive innovation, and governance efforts shaped by advice from current market leaders should not hinder the rise of creative new approaches or models. Regulators are ultimately protecting civic — not corporate — interests.

This fall, the race for the White House will rightfully overshadow the private sector’s race to innovate in AI. Nonetheless, regulators should ready themselves to ensure that new AI advancements don’t catch them flat-footed.


Owen J. Daniels is the Andrew W. Marshall fellow at Georgetown University’s Center for Security and Emerging Technology (CSET).

Jack Corrigan is a senior research analyst at Georgetown University’s Center for Security and Emerging Technology (CSET).
