
Treasury AI report seeks to mitigate risks in financial services sector

The findings from the agency’s June RFI on the technology reflect an industry fed up with patchwork regulations and searching for more clarity.

Six months after putting out a request for information on artificial intelligence in the financial sector, the Treasury Department on Thursday released a report filled with findings from the feedback and recommendations for how the public and private sectors should approach the technology going forward.

The department said that the comments — from more than 100 letters submitted by financial firms, fintech companies, trade associations, consumer advocacy organizations, technology providers and others — paint a picture of proliferating AI use cases throughout the financial sector, as well as growing concerns about the risks that generative AI may pose.

“Through this AI RFI, Treasury continues to engage with stakeholders to deepen its understanding of current uses, opportunities, and associated risks of AI in the financial sector,” Nellie Liang, Treasury’s under secretary for domestic finance, said in a statement. “The Biden Administration believes that continued stakeholder engagement like this is important to fostering innovation in financial services while mitigating potential risks.”

Common threads in the feedback included stakeholder desires for standard AI definitions that align with financial services, better clarity around data privacy and security requirements, expanded consumer protections, and more uniform compliance.


Risks, meanwhile, were grouped by Treasury into six categories: data privacy, security, and quality standards; bias, explainability, and hallucinations; impacts on consumers, fair lending, and financial inclusion; concentration-related risks; third-party risks; and illicit finance risks. 

Respondents had a variety of suggestions aimed at mitigating those risks, from instituting more stringent federal requirements for data collection and analysis to enhancing risk management frameworks and more closely monitoring the concentration of AI providers.

The possibility of strengthened regulatory frameworks came up frequently in the RFI responses, with stakeholders advocating for Treasury “to prioritize intergovernmental coordination to provide cohesive regulatory guidance as appropriate, facilitate information sharing, and aligning governance approaches for the same activities,” the report noted. “Respondents also broadly agreed on the benefit of public-private partnerships to share trends, risks, and best practices.”

Coordination on AI-related matters in the financial sector is key, Treasury added, highlighting respondents’ aversion to patchwork and often contradictory state laws that result in “uneven requirements on AI developers, users, and financial firms of different sizes, as well as varied product functionalities for consumers.” Last year, lawmakers in 31 states introduced nearly 200 AI-related bills, underscoring the increasingly fragmented regulatory landscape that financial companies must navigate when deploying AI.

Different international standards are another issue that firms are forced to consider, the report said, with respondents flagging the “regulatory fragmentation” that makes it exceedingly difficult to “manage risks consistently on an enterprise-wide basis.” That fragmentation also results in “dramatically different levels of consumer protection,” the department stated.


Treasury concluded its report with five “potential next steps” that it, other financial regulators and the sector as a whole could take with regard to AI. The department would like to see increased collaboration among foreign and domestic stakeholders in financial services, as well as additional analysis and stakeholder engagement to flag gaps in regulatory frameworks. The agency also recommends continued coordination among financial regulators on “enhancements” to those frameworks.

On the industry side, Treasury suggested that financial firms “prioritize their review of AI use cases for compliance with existing laws and regulations before deployment and that they periodically reevaluate compliance as needed.” Additional AI information sharing between the financial services sector and federal agencies should be considered, the report added, along with the development of data standards that align with Treasury’s AI cybersecurity report.

Written by Matt Bracken

Matt Bracken is the managing editor of FedScoop and CyberScoop, overseeing coverage of federal government technology policy and cybersecurity. Before joining Scoop News Group in 2023, Matt was a senior editor at Morning Consult, leading data-driven coverage of tech, finance, health and energy. He previously worked in various editorial roles at The Baltimore Sun and the Arizona Daily Star. You can reach him at matt.bracken@scoopnewsgroup.com.