Export-Import Bank taking open-minded approach on the use of generative AI tools

Addressing employee generative AI use is largely an evolution of the agency’s existing policies for general internet searches, said Ex-Im's Howard Spira.

The Export-Import Bank of the United States is among the agencies opting for a more permissive approach to generative AI tools, giving employees the same kind of access to those tools that the independent agency provides to the internet generally, according to its top IT official.

“We do not block AI any more than we block general internet access,” Howard Spira, chief information officer of Ex-Im, said during a Thursday panel discussion hosted by the Advanced Technology Academic Research Center (ATARC).

Spira said the agency is approaching generative tools with discussions about accountability and best practices, such as not inputting private information into tools like ChatGPT or other public large language models. “But frankly, that is just an evolution of policies that we’ve had with respect to just even search queries on the general internet,” Spira said.

He emphasized the importance of context in AI usage, noting that the agency — whose mission is facilitating U.S. exports — deals with the kinds of decisions it believes sit in “a relatively low-risk environment” for AI. Most of the agency’s AI work involves “embedded AI” within its existing environments, such as those for cyber and infrastructure monitoring.
“We’re also actually encouraging our staff to play with this,” Spira said.

His comments come as agencies across the federal government have grappled with how to address the use of generative AI tools by employees and contractors. Those policies have so far varied by agency depending on their individual needs and mission, according to FedScoop reporting.

While some agencies have taken a permissive approach like Ex-Im, others are approaching the tools with more caution.

Jennifer Diamantis, special counsel to the chief artificial intelligence officer in the Securities and Exchange Commission’s Office of Information Technology Strategy and Innovation, said during the panel that the SEC isn’t jumping into third-party generative AI tools yet, citing unknowns and risks. 

There is, however, a lot of exploration, learning, safe testing and ensuring guardrails are followed, Diamantis said. She added that while the agency is exploring the technical side, there is also an opportunity right now to explore the process, policy and compliance side to make sure it’s ready to manage risks if and when it moves forward with the technology.


Diamantis, who noted she wasn’t speaking for the commission or commissioners, encouraged people to use this time to focus not just on the technology, “but also, what do you need in terms of governance? What do you need in terms of updating your lifecycle process? What do you need in terms of upskilling, training for staff?”

In addition to exploration, the SEC is also educating its staff on AI. Diamantis said those efforts have included trainings — such as a recent one on responsible AI — and having outside speakers, as well as establishing an AI community of practice and a user group.

Spira similarly noted that Ex-Im has working groups addressing AI and is including discussions about the technology in its continuous strategy process. This year, that process for its IT portfolio included having “the portfolio owners identify potential use cases that they were interested in exploring” and the identification of embedded use cases, he said.

Tony Holmes, another panelist and Pluralsight’s director of public sector presales solution consulting for North America, underscored the importance of broad training on AI to build a workforce that isn’t afraid of the technology.

“I know when I talk to people in my organization, when I talk to people at agencies, there are a lot of people that just haven’t touched it because they’re like, ‘we’re not sure about it and we’re a little bit scared of it,’” Holmes said. Exposure, he added, can help those people “understand it’s not scary” and “can be very productive.”

Written by Madison Alder

Madison Alder is a reporter for FedScoop in Washington, D.C., covering government technology. Her reporting has included tracking government uses of artificial intelligence and monitoring changes in federal contracting. She’s broadly interested in issues involving health, law, and data. Before joining FedScoop, Madison was a reporter at Bloomberg Law where she covered several beats, including the federal judiciary, health policy, and employee benefits. A west-coaster at heart, Madison is originally from Seattle and is a graduate of the Walter Cronkite School of Journalism and Mass Communication at Arizona State University.