In deploying AI, the Federal Aviation Administration faces unique challenges

As federal agencies ramp up their AI work, observers say the FAA is taking a “cautious” approach as it wrestles with safety questions.
A United Airlines plane departs Newark Liberty International Airport in Newark, New Jersey, on January 11, 2023. (Photo by Kena Betancur / AFP via Getty Images)

The Biden administration has made the deployment of artificial intelligence a priority, directing federal agencies to look for ways to integrate the technology into their operations. But the Federal Aviation Administration faces unique challenges with that goal.

Through partners, its own internal research staff, and work with NASA, the country’s aviation safety regulator is looking at a range of AI applications. The FAA has a chief scientific and technical advisor for artificial intelligence and machine learning, who is charged with expanding the agency’s understanding of how AI might be deployed in aviation contexts. And the agency is working with NASA on a plan for certifying AI technologies for use in the national airspace system.

“We are harnessing predictive analytics, machine learning, and artificial intelligence to develop streams of data,” Polly Trottenberg, the FAA’s acting administrator, said in a note within one of the agency’s recent four-year research plans. “These capabilities allow us to create new tools and techniques and adopt new technologies.”

But hurdles remain for actually deploying AI. While the FAA has implemented risk management standards for the safety of the national airspace, the agency told FedScoop it still needs to “adapt AI risk management methodologies and best practices from the National Institute of Standards and Technology,” along with other institutions. The FAA has released several use cases in its AI inventory, but many of them are still somewhat modest, experts told FedScoop. Other uses are still in the research phase.

There are further constraints, too. While the FAA is investing in research and development related to artificial intelligence, the aviation industry more broadly is facing ongoing safety issues with Boeing aircraft and an overworked air traffic controller workforce. And then there’s the matter of ensuring that flying stays safe, whatever the excitement about artificial intelligence.

“It’s still very early days,” noted Anand Rao, a Carnegie Mellon data science and AI professor. “They’re taking a conservative, cautious approach.” 

The FAA declined to make Dr. Trung T. Pham, the agency’s chief AI advisor, available for comment, and it did not answer FedScoop’s questions about staff within the agency focused specifically on artificial intelligence. The FAA and the Department of Transportation have also declined to provide further detail about a mention of ChatGPT for software coding that agency staff removed from the AI inventory last year. Still, documents about several of the agency’s AI use cases, along with interviews with experts, provide insight into the FAA’s approach to the technology.

FAA pursues no-frills approach to AI

When asked about the most promising use cases for AI, a spokesperson for the FAA pointed to several: predictive analytics that could help mitigate safety risks, decision support, automation of certain processes, and improved engagement through virtual assistants. Some of those use cases have already been disclosed in the Department of Transportation’s executive order-required AI inventory, while others are discussed in the agency’s four-year research plan. The DOT recently edited its inventory, and some of the use cases appear to have been redacted, though the agency did not respond to a request for comment.

Some of these AI applications are related to the weather, including a convective weather avoidance model meant to analyze how pilots navigate thunderstorms. The agency is also looking at an effort to use AI to support air traffic controllers, per the four-year research plan, as well as using artificial intelligence to address aviation cybersecurity. And the FAA is studying the use of AI and voice recognition technology to improve flight simulations used in pilot training. Still, many of the AI use cases identified by FedScoop are rudimentary or relatively early in their deployment, while others remain in the research phase.

Several that are in use are relatively modest and reflect the agency’s circumspect approach. The FAA’s Office of Safety and Technical Training, which conducts data analysis and investigations, has already deployed a model for use by the runway safety team. The internal tool helps the team automatically classify runway incursions as part of its analysis. FedScoop obtained documents describing how the system works, but the technology discussed in them, Rao said, represents well-tested algorithms that have been around since the 1990s and early 2000s, not the newer technology behind systems like ChatGPT.
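
The documents don’t name the specific technique, but a minimal sketch of the kind of well-established, pre-ChatGPT classification Rao describes might pair TF-IDF text features with logistic regression, both staples of that era. The incursion narratives and category labels below are hypothetical placeholders, not the FAA’s actual data.

```python
# Minimal sketch of a classic (pre-LLM) incident classifier using
# scikit-learn; the narratives and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical incident narratives paired with incursion categories.
narratives = [
    "aircraft crossed hold short line without clearance",
    "vehicle entered runway during active departure",
    "pilot taxied onto closed taxiway after readback error",
    "tower issued landing clearance while runway was occupied",
]
labels = ["pilot_deviation", "vehicle_deviation",
          "pilot_deviation", "operational_incident"]

# TF-IDF plus logistic regression: well-tested techniques from the
# 1990s and 2000s, not a generative model like ChatGPT.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(narratives, labels)

print(model.predict(["truck crossed active runway without authorization"]))
```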

Another is the “regulatory compliance mapping tool,” essentially an internal search engine for regulatory concepts. The tool is built on a database of documents provided by the FAA, other federal agencies, and the International Civil Aviation Organization, a United Nations agency focused on aviation. The idea for the tool, which leverages natural language processing, is to reduce “research time from days or weeks to hours,” according to a presentation by the Aeronautical Information Standards Branch dated Sept. 20.
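
The presentation doesn’t detail the tool’s internals, but a minimal sketch of the kind of natural-language-processing search it describes could rank documents by similarity to a query; the corpus and query below are hypothetical stand-ins for the FAA, federal, and ICAO material.

```python
# Minimal sketch of NLP-driven regulatory search using scikit-learn;
# the corpus and query below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus standing in for FAA, federal, and ICAO documents.
documents = {
    "14 CFR 91.113": "right-of-way rules for converging aircraft",
    "ICAO Annex 2": "rules of the air, including right-of-way provisions",
    "AC 90-48D": "pilot vigilance and midair collision avoidance practices",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents.values())

def search(query: str, top_k: int = 2) -> list[str]:
    """Rank document IDs by cosine similarity to the query text."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    ranked = sorted(zip(documents, scores), key=lambda pair: pair[1],
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

print(search("which rules govern aircraft right-of-way"))
```

Returning ranked matches over an indexed corpus in seconds, rather than through manual review, is the mechanism behind the claimed reduction in research time.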

Still, the tool is “essentially just a database,” said Syed A.M. Shihab, an assistant professor of aeronautics and engineering at Kent State University, and not particularly advanced. While around 175 FAA employees can access the tool, the agency told FedScoop, the platform is used fewer than 20 times a week, according to that same presentation. The FAA, which said the “internal FAA tool” is in the “development phase,” appears to have spent more than $1 million with a company called iCatalyst, which did not respond to a request for comment, to build it, according to a federal government contracts database.

“The FAA is continually working to make our processes more efficient. The Regulatory Compliance Mapping Tool (RCMT) is an initiative that can significantly speed up safety research,” the agency said in a statement. In March, the agency said security authorization would kick off later that month and that it had completed a Section 508 self-assessment process. 

Other systems disclosed in the AI inventory either don’t use the technology yet or haven’t been deployed. These include a tool to help transcribe pilot conversations and another, called ROMIO, meant to help pilots understand cloud structures, according to FAA documents.

FAA’s AI work goes beyond disclosed use cases

Other AI work is ongoing, but it’s not clear if or how it’s been deployed. The FAA has worked with researchers at Georgia Tech and the University of Maryland to use AI for measuring collision risk, according to federal contract records. It also appears to have procured the development and implementation of a machine learning model from a company called Deep AI Solutions for its safety information sharing system. 

The FAA’s work with NASA, meanwhile, includes looking at AI for “runway configuration management, digitization of standard operating practices and letters of agreements, and natural language processing,” per a spokesperson. That work also includes NASA’s machine learning airport surface model, which was meant to help the FAA capture the location of routes, taxiways, and runways using a real-time machine learning system. NASA said this work has helped contribute to a framework it’s developing with the aviation agency.

And at the MIT-based Lincoln Laboratory, which is funded by the Defense Department and the FAA, researchers aren’t focusing on AI for safety-critical applications, according to Tom Reynolds, who leads the lab’s air traffic control systems group. For example, the lab is researching a technology called “the offshore precipitation capability” to assist with weather radar coverage gaps. “Things that are more advisory and not directly in the loop of deciding where individual aircraft fly, but rather helping air traffic controllers with situational awareness and strategic decision making,” Reynolds said. 

Technically, the FAA has been looking at AI for decades — and lots of preliminary work with the technology does seem to be underway. For example, in March, the FAA announced a data challenge meant to help use artificial intelligence to address problems concerning the national airspace, and it’s recently hosted workshops on machine learning, too. Email records show that the FAA is invited to monthly meetings of the Department of Transportation’s AI task force. 

The FAA is working with industry and international counterparts on an AI roadmap, and developing a certification research framework for artificial intelligence applications with NASA. The framework is focused on developing a way to certify AI applications so they can be deployed safely in the national airspace. It’s expected to launch later this year, the space agency said.

Still, most of the AI work at the FAA isn’t for direct use in aviation. That reality reflects the broader challenge of using the technology in a safety-critical context. In meetings with industry, the agency’s chief advisor for aircraft computer software has highlighted the challenge of approving AI software, while Pham, the agency’s AI chief, has detailed concerns about traceability, per a blog post on the website of RTCA, a nonprofit aviation modernization group.

Similarly, a roadmap the FAA is working on with other aviation agencies around the world has encountered several challenges, including issues with predictability and explainability, the tracking of datasets that might feed AI models, training humans to work alongside AI, model bias, and safety.

“Because aviation is a safety critical industry and domain, in general, stakeholders involved in this industry are slower to adapt AI models and tools for decision-making and prediction tasks,” said Shihab, the Kent State professor. “It’s all good when the AI model is performing well, but all it takes is one missed prediction or one inaccurate classification, concerning the use cases, to compromise safety of flight operations.”
