Why agencies need to focus on modernizing public interaction over infrastructure
Federal agencies have made great strides in modernizing their digital infrastructure, tapping the power of the cloud and switching to software-as-a-service applications. Yet by many measures, those efforts have fallen short of their aim: improving the digital experience for the public and for employees.
Why is that, and what can be done to help agencies achieve the service outcomes the public and employees expect?

There is a combination of reasons, from funding gridlock on Capitol Hill to internal cultural inertia. However, perhaps the biggest culprit is how agencies think about and manage modernization. High-level decisions to bolster IT modernization often revolve more around agency organization charts than what needs to happen to modernize the agency-wide digital experience for the public and employees.
In a nutshell, agencies still modernize primarily within silos — upgrading security operations, expanding IT infrastructure to the cloud, redesigning web applications to be more user-friendly — but collectively fail to address what’s missing to leverage those efforts fully. That missing element is a different kind of infrastructure: Think of it as a uniform, interoperable layer of connectivity between agency IT systems and the internet. We call it the “connectivity cloud” at Cloudflare.
Prioritizing interoperability promotes uniformity
At its core, the idea behind a connectivity cloud is a commitment to making the internet better for everyone, regardless of the silo they operate in. That means establishing a highly interoperable layer connecting enterprise and security infrastructure and applications, whether those applications run on legacy systems or across multiple clouds, and whether they are accessed by employees or by the public.
The internet has become the primary medium for delivering digital government services. The challenge is how to provide uniform, reliable connection and protection across the various platforms and applications that depend on it.
The answer starts with shifting how we think about public and employee engagement and recognizing how a uniform connectivity layer — designed for today’s highly distributed end users, workforce, infrastructure, applications and security — can optimize IT interoperability to better support agencies and their missions.
Consider this example: You are a mission owner or a program manager who has grown accustomed to operating within your enterprise’s digital enclave, which operates partly on-premises and increasingly in the cloud. Suddenly, a DDoS attack or a potential ransomware attack is throttling your operations. Your IT team scrambles to figure out, “Where is the attack coming from? What is it impacting? Which vendor’s remediation tools must be activated? How can we scale our defenses to absorb the attack without incurring downtime?”
A connectivity platform like Cloudflare operates as a single, centrally managed layer that sits between your IT operations and Amazon, Azure, and Google cloud environments, as well as your cloud-enabled software service applications. It also sits in front of and manages connections to your legacy infrastructure and security tools. It does that by propagating a service that runs on every server in every data center.
By sitting in front of and connecting all of these environments, Cloudflare provides a uniquely consistent level of connectivity, visibility, security, performance and reliability everywhere. That allows agencies to identify and stop malicious traffic before it gets to any of those environments.
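Stripped to its essence, the pattern is a single policy-enforcement point sitting in front of every environment. The sketch below is a toy illustration of that idea in Python, not Cloudflare’s implementation; the origin addresses and blocklist are hypothetical.

```python
# Toy illustration of a uniform connectivity layer: one policy enforcement
# point in front of every environment, whether cloud-hosted or legacy.
# All names and addresses here are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical origins living in different clouds and on-premises.
ORIGINS = {
    "/benefits": "https://benefits.example-aws.gov",
    "/permits": "https://permits.example-azure.gov",
    "/records": "https://records.legacy.example.gov",
}
BLOCKED_IPS = {"203.0.113.7"}  # fed by threat intelligence in a real system

class ConnectivityLayer(BaseHTTPRequestHandler):
    def do_GET(self):
        # The same security policy applies no matter which origin
        # ultimately serves the request -- that is the uniformity argument.
        if self.client_address[0] in BLOCKED_IPS:
            self.send_error(403, "Blocked at the edge")
            return
        prefix = "/" + self.path.lstrip("/").split("/", 1)[0]
        origin = ORIGINS.get(prefix)
        if origin is None:
            self.send_error(404)
            return
        rest = self.path[len(prefix):] or "/"
        with urlopen(origin + rest) as resp:  # forward to the real backend
            body = resp.read()
        self.send_response(resp.status)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ConnectivityLayer).serve_forever()
```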
That form of cross-domain connectivity is one reason a growing number of federal agencies, including the Departments of Commerce, Homeland Security, Interior, Justice, State, and Treasury, now rely on Cloudflare to, among other initiatives:
- Develop tools and techniques to defend against distributed denial of service (DDoS) attacks, which aim to disrupt critical government services
- Significantly reduce the use of bots by entities seeking to seize and resell valuable appointment slots for government services
- Protect agency staff and devices against web threats
- Protect, enhance and monitor agency web apps
- Support a government-wide network platform, allowing agencies to leverage Cloudflare’s security services for user authentication
Another reason agencies are turning to Cloudflare is to take advantage of its Protective Domain Name System (DNS) Resolver service, which was first launched for federal agencies by the Cybersecurity and Infrastructure Security Agency (CISA) in 2022. The service, which is mandated for all Federal Civilian Executive Branch agencies and aligns with the White House Zero Trust memo, provides safeguards against malicious domains.
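Conceptually, a protective resolver inserts a threat-intelligence check ahead of ordinary name resolution. Here is a minimal sketch of that idea with a hypothetical blocklist; the actual service relies on curated threat feeds rather than the hard-coded set shown here.

```python
# Minimal sketch of what a protective DNS resolver does conceptually:
# consult a blocklist of known-malicious domains before resolving.
import socket

MALICIOUS_DOMAINS = {"phish.example.net", "c2.example.org"}  # illustrative only
SINKHOLE = "0.0.0.0"  # blocked queries get a non-routable answer

def protective_resolve(hostname: str) -> str:
    """Return an IP address, or a sinkhole address for known-bad domains."""
    if hostname.lower().rstrip(".") in MALICIOUS_DOMAINS:
        return SINKHOLE  # the query never reaches malicious infrastructure
    return socket.gethostbyname(hostname)

print(protective_resolve("example.com"))        # resolves normally
print(protective_resolve("phish.example.net"))  # returns 0.0.0.0
```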
Cloudflare also provides authoritative DNS services for the government’s .gov top-level domain and a suite of FedRAMP-authorized application, network, developer and Zero Trust services, allowing easy access to enterprise-grade security, network and performance services.
A final factor behind CISA’s decision to select Cloudflare’s DNS services is the company’s unique view of the internet: On any given day, roughly 20% of the world’s internet traffic across more than 24 million active websites is proxied on Cloudflare’s network. We serve data from 330 cities in over 120 countries around the world.
That gives Cloudflare’s engineers and security specialists a powerful and unparalleled view of malicious threats that can compromise commercial and government services that rely on the internet. Not even the major cloud providers see anywhere near that level of traffic.
Given the long-standing structural and cultural enclaves that exist across civilian and defense agencies, IT silos are unlikely to go away anytime soon. However, by instituting a proven connectivity layer that bridges those silos without forcing a change of control within them, Cloudflare brings uniformity and greater protection across the enterprise, giving agencies a much better chance of operating more securely and efficiently — and moving closer to the goal of safely modernizing their digital systems for employees and the public.
SSA database to flag synthetic identity fraud has cost issues, GAO finds
The Social Security Administration is off track to recoup the costs of creating and maintaining a database aimed at helping banks catch a form of identity fraud, a new watchdog report found.
Launched in June 2020 to comply with the Economic Growth, Regulatory Relief, and Consumer Protection Act of 2018, the SSA’s electronic Consent Based Social Security Number Verification (eCBSV) service was pitched to financial institutions as a way to combat synthetic identity fraud, a practice in which bad actors combine fake and real information to generate a fraudulent identity.
According to the Government Accountability Office, the Social Security Administration has spent roughly $62 million on the service but has taken in just $25 million in user fees, putting it off pace to meet the law’s requirement to fully recover the service’s total cost by the end of fiscal year 2027.
To reverse course and meet that target, the SSA would need to bring in about $14 million yearly, per the GAO’s analysis, though that would likely require the agency to bump up fees or grow its user base. But “SSA officials told GAO they did not plan to take significant steps to increase use of the service,” the report noted.
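For a rough sense of that math: recovering the remaining shortfall, plus ongoing costs, over the roughly three fiscal years left implies an annual take in the neighborhood of GAO’s figure. The numbers below are illustrative, and the ongoing-cost assumption is ours, not GAO’s.

```python
# Back-of-the-envelope reconstruction of the cost-recovery math (illustrative).
costs_to_date = 62_000_000    # roughly what SSA has spent on eCBSV so far
fees_collected = 25_000_000   # user fees taken in so far
years_left = 3                # approximately FY2025 through FY2027

shortfall = costs_to_date - fees_collected   # about $37M still unrecovered
annual_operating = 2_000_000  # assumed ongoing yearly cost (hypothetical)
needed = (shortfall + annual_operating * years_left) / years_left
print(f"~${needed / 1e6:.0f}M per year")     # on the order of GAO's $14M figure
```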
The eCBSV database is available to certified financial institutions. For an annual fee to cover development and operating costs, the service verifies for banks whether an individual’s Social Security number, name and date-of-birth combination matches SSA records, rooting out fraudsters in the process. The financial entity must obtain an individual’s written consent to use the database.
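The exchange itself amounts to a consent-backed match check that returns only a yes or no. The sketch below shows the general shape of such a call; the endpoint, field names and response format are hypothetical, as SSA’s actual interface is restricted to certified users.

```python
# Hypothetical sketch of the eCBSV-style verification flow described above.
# Endpoint, field names and response shape are illustrative, not SSA's API.
import json
from urllib import request

def verify_identity(ssn: str, name: str, dob: str, consent_id: str) -> bool:
    payload = json.dumps({
        "ssn": ssn, "name": name, "dateOfBirth": dob,
        "consentId": consent_id,  # proof of the individual's written consent
    }).encode()
    req = request.Request(
        "https://api.example-ecbsv.ssa.gov/verify",  # hypothetical URL
        data=payload, headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # The service answers only match / no match -- it never returns the
        # underlying record, which limits what a breach could expose.
        return bool(json.load(resp)["verified"])  # hypothetical field name
```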
The service was developed in response to the increasing frequency and impact of synthetic identity fraud — 3,000 activity reports for suspicious, potentially synthetic identities were made in 2021 alone, amounting to about $182 million in possible fraud, per the Financial Crimes Enforcement Network. Those figures likely underestimate the problem, the GAO noted, given the fact that “many cases may go undetected and be reported as a credit loss rather than fraud.”
In launching eCBSV, SSA developed three new IT systems and contracted external customer support, totaling $30.8 million in costs — about half of the service’s total costs through FY2023, according to the GAO. Costs ran higher than initial projections due to unforeseen delays and complications, an expansion of use and expenses that were not initially accounted for by the agency.
The GAO said SSA “did not follow key aspects of IT investment guidance when estimating costs for eCBSV,” nor did the agency “develop an IT investment proposal specifically for eCBSV.”
The Social Security Administration, which has shifted its cost-recovery timeline three times, did increase user fees in July 2023, but “fee collections have not met SSA’s projections because the number of direct users has been lower than expected,” the GAO reported. And “based on SSA subscription data we reviewed through December 2023, the number of eCBSV direct users has not increased significantly since the service opened enrollment in FY 2022.”
The watchdog provided SSA with seven recommendations, pushing the agency to provide updated cost estimates based on GAO guidance, solicit feedback from stakeholders on changes to user fees, and develop performance metrics tied to use of the eCBSV, among other goals. The GAO report did not include SSA’s responses to its recommendations.
Government websites aren’t created equal. GSA’s 10x program aims to change that
Over the course of his 14-plus years at the National Renewable Energy Laboratory, Nick Langle has learned a thing or two about the chasm that exists between the internet’s haves and have-nots.
As a principal investigator for the Department of Energy-funded Community Power Accelerator, Langle spends his days researching web sustainability and human-computer interaction in the pursuit of expanding access to affordable solar energy. In that role, he’s confronted on an all-too-regular basis with what user experience professionals call the performance inequality gap.
“A lot of internet users are on older hardware, or maybe it’s newer but it’s a budget device,” Langle said. “And so those devices don’t have the same hardware capabilities as this super fancy laptop that maybe I’m programming on.”
The result of that technological imbalance is a public unable to consume information from the federal government in the same way — an imbalance that a team of Technology Transformation Services experts is working to address through the General Services Administration’s 10x program. 10x considers tech ideas from civil servants throughout the federal government and invests in a handful that it believes could be transformative for the delivery of public services.
Langle’s project idea — Accelerating Inclusive CX and Web Access — was one of 16 selections the 10x team announced in March that it would fund, beating out nearly 200 other technology proposals. The submission pointed specifically to “ongoing user performance issues” with “numerous government websites” that “disproportionately affect vulnerable users, especially those dependent on mobile data plans instead of WiFi.”
For those vulnerable users, accessing critical information on a government website that isn’t designed with inclusivity in mind can lead to “substantial consumption of their data plans,” the 10x site notes. “This results in tangible costs that underscore a lack of equal accessibility to government content for these users.”
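One rough way to quantify that cost is to total the bytes a page forces a device to download, the HTML plus every JavaScript bundle it references. The sketch below illustrates the idea; it is not the tool the 10x team is building, and the URL is a placeholder.

```python
# Rough sketch of quantifying the "performance inequality gap": sum the bytes
# a page makes a client download, including each linked JavaScript bundle.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class ScriptFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.scripts = []
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.scripts.append(src)

def page_weight(url: str) -> int:
    html = urlopen(url).read()
    finder = ScriptFinder()
    finder.feed(html.decode("utf-8", errors="replace"))
    total = len(html)
    for src in finder.scripts:
        total += len(urlopen(urljoin(url, src)).read())  # each JS bundle
    return total

# A multi-megabyte payload is a real cost on a metered mobile data plan.
print(f"{page_weight('https://example.gov') / 1e6:.1f} MB")
```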
Will Cahoe, communications and outreach lead for 10x, said Langle’s problem felt “solvable,” and the pitch’s emphasis on inclusivity immediately resonated with the TTS crew.
“It’s really at that intersection of digital and inclusivity, and that’s where we have a lot of expertise,” Cahoe said. With 10x, “we have the freedom, and really the mandate, as we see it, to look for broad solutions, things that can help a lot of people. And so the idea that there are certain geographies, there are certain income levels across the country that people are in that could affect their access to our websites, that feels really important.”
What began as Langle running his own audits and doing a brief analysis of various government sites in his spare time has since advanced to phase 2 of 10x, meaning it is one of only a few projects per year that the government’s in-house “venture studio” believes is potentially feasible to execute. Moving beyond phase 1 also means the project has cleared potential regulatory hurdles and does not have any obvious red flags.
The team is now exploring “an actionable and reusable design and development strategy” that matches W3C Web Sustainability guidelines — while also “exploring a performance tool tailored for users with poor or restricted internet access.” Phase 2 gives the developers, researchers and product managers time to zero in on the market landscape and identify any barriers that might prevent the project from succeeding.
There’s also the possibility of a pivot from the specific project to related work across the federal web space, Cahoe said, depending on what’s revealed in phase 2. “You tell people that, ‘hey, if we built a thing or a service, is this something that folks would use?’” he said.
That question and others could be answered during the technical prototyping 10x is doing now, in addition to automated testing and field work that includes a closer examination of federal websites.
From Cahoe’s standpoint, there are plenty of reasons for optimism about Langle’s project moving forward, starting with the fact that it falls “under an umbrella of improving mission delivery at agencies through better digital customer experiences.” 10x has documented successes that fit that bill, with the U.S. Web Design System standing out as one of its most notable wins.
“It’s a suite of online, reusable tools for federal web managers that came out of 10x,” Cahoe said of USWDS. “It’s not terribly difficult to see how something like web performance tools could be incorporated into that.”
Whether it’s rural populations with spotty access to broadband or lower-income urban areas that don’t have 5G yet, plenty of Americans stand to benefit from the project should it make it across the finish line. Beyond the obvious geographic user-experience issues at play, Langle said it would help people with older devices, which aren’t equipped to handle the industry trend toward heavier JavaScript use in web design.
In cases where users “run into some of these very JavaScript-intensive things” on certain websites, Langle said, “your fans start running on your computer because it’s consuming [a lot of energy]. Those budget devices or older hardware, they do not handle those types of applications the same as newer hardware does. … It really does throttle or bottleneck their experience.”
A byproduct of those performance gains, Langle said, will be a reduction in the energy demand required of older devices trying to keep up. On an individual level, those gains would be minimal. But in the aggregate, Langle said energy efficiency gains could be notable and the federal government’s carbon footprint could actually be reduced.
At this point in the project’s lifecycle, the end result is theoretical, given that two phases remain if it moves past the second stage. But while neither Cahoe nor Langle is in the business of making predictions, they’re both hopeful about the project’s prospects and its ultimate ability to incorporate inclusive design in all phases of government web design.
“There is no technical impossibility here,” Cahoe said. “So we know that it’s achievable. … It’s the impact we’re looking for. It fits with it and everything we want to do.”
House Republicans probe NIST on facial recognition for federal digital identity verification
Three House Republicans are asking the National Institute of Standards and Technology to provide information about how it’s addressing concerns with facial recognition as a method of identity verification for accessing online federal services.
In a letter to NIST Director Laurie Locascio this week, the leaders of the House Committee on Science, Space, and Technology asked the agency to share findings from its digital identity and facial recognition work related to its Digital Identity Guidelines. Those guidelines serve as a best practices reference for U.S. agencies on identity verification methods for federal services.
According to the letter — which was signed by committee Chair Frank Lucas, R-Okla., Research and Technology subcommittee Chair Mike Collins, R-Ga., and Investigations and Oversight subcommittee Chair Jay Obernolte, R-Calif. — that guidance permits agencies to use face recognition technology as a method of identity verification.
“Despite the many advantages of face recognition technology, its trustworthiness has long been questioned, particularly as it relates to personal privacy issues. There have also been concerns raised about the accuracy of face recognition technology and the use of biometrics to authenticate a user,” the lawmakers said.
Specifically, they asked for details on how NIST, a component of the Department of Commerce, participates in the development of facial recognition technology and what measures it’s implemented to ensure the technology is accurate and reliable, “particularly in terms of identifying users across diverse demographic groups.”
They also asked for information on safeguards in place for storage of sensitive personally identifiable information collected through facial recognition.
A spokesman for NIST told FedScoop that the agency had received the letter and plans to respond.
The House lawmakers aren’t alone in their focus on facial recognition and civil rights implications of artificial intelligence technologies, which pose potential discrimination issues. Senate Majority Leader Chuck Schumer, D-N.Y., and Sen. Ed Markey, D-Mass., recently asked the White House to consider requiring all federal agencies using AI to have a civil rights office to help protect against algorithmic discrimination.
A recent report by the U.S. Commission on Civil Rights that focused more broadly on facial recognition use by the government found a “concerning lack of federal oversight” with those technologies. That report asked agency chief AI officers to work with NIST on the development and implementation of programs to test the systems.
The lawmakers requested a response from NIST by Oct. 22.
MITRE’s Federal AI Sandbox will focus on critical infrastructure, weather modeling, social services
MITRE, which operates federally funded research and development centers on behalf of government agencies, unveiled plans this week to train three new artificial intelligence foundation models focused on critical infrastructure, weather modeling and sustainable social services.
The Federal AI Sandbox, a supercomputer, is expected to be capable of training large-scale foundation models and supporting generative AI, multimodal perception systems and reinforcement learning decision aids, according to a Wednesday press release.
MITRE is specifically focused on training models to help cybersecurity experts prevent and mitigate threats through analyzing complex data. Those efforts are aimed at helping with identification and response, enhancing weather modeling with improved precision and transforming millions of pages of information into tools that streamline government workflows.
Charles Clancy, MITRE’s senior vice president and chief technology officer, said in a statement that AI has the potential to transform government services and “address important challenges ranging from improving resilience of critical infrastructure to making Medicare sustainable.”
Agencies are able to access the sandbox through existing contracts with any of the six federally funded R&D centers that MITRE operates on behalf of sponsors including the Department of Homeland Security, the National Institute of Standards and Technology, the Department of Defense, the Federal Aviation Administration and others.
Wednesday’s release comes after MITRE’s original announcement that it would provide an AI sandbox — which is powered by AI data center infrastructure from NVIDIA — and a subsequent statement that the sandbox would be available by the end of 2024. In the original announcement, Clancy said agencies “often lack the computing environment necessary for implementation and prototyping.”
Arati Prabhakar, director of the Office of Science and Technology Policy, spoke during an American Enterprise Institute event Tuesday about how budget pitfalls and aging facilities could threaten national goals for technology R&D.
Prabhakar’s comments followed a report from the White House that noted the administration must continue “advocating for robust” levels of funding for federally funded R&D.
The report said that R&D throughout agencies can “make it possible to achieve better health outcomes, create new products and services, generate new industries and good jobs, improve policies and regulations and develop new standards and practices, all to address the greatest challenges of our time.”
White House’s final ‘Trust Regulation’ aims to bolster confidence in federal statistics
A final rule announced by the White House on Thursday will further codify and clarify responsibilities for U.S. agencies when it comes to accurate and trusted federal statistics.
Specifically, the regulation will outline how federal statistical agencies should carry out responsibilities to produce information that’s relevant and timely, credible and accurate, and objective, and to protect the trust of respondents and others providing information by ensuring the confidentiality of responses.
That final rule, also known as the “Trust Regulation,” was posted for public inspection Thursday and will officially be published in the Federal Register on Friday.
“Federal statistics are produced as a public good, whose value is rooted in public trust,” Chief Statistician of the U.S. Karin Orvis said in a statement shared by the White House. “Maintaining and bolstering public trust in our Nation’s statistics is absolutely critical.”
The regulation stems from the Foundations for Evidence-Based Policymaking Act of 2018, known as the Evidence Act, which first outlined the four responsibilities in the new rule. In her statement, Orvis called the rule “a major milestone” for implementation of that law.
“The responsibilities described in this regulation are not new and are consistent with longstanding OMB, Federal government, and international policy. Yet effectively implementing them in the form of standards and practices requires clear rules,” Orvis said.
OMB first issued a proposed version of the rule in August 2023 for public comment. According to the public inspection document, the final rule is mostly unchanged from that proposal and makes slight changes for clarity.
The rule mainly outlines requirements for the 16 recognized statistical agencies and units, or RSAUs, within the federal government, but it also provides direction to other federal agencies to help support them. RSAUs include the Census Bureau, National Center for Health Statistics, Bureau of Labor Statistics, and National Animal Health Monitoring System.
While OMB noted in the public inspection document that it expects most of the RSAUs are already implementing the requirements, it also acknowledged wide variation in the current landscape.
Orvis said that although the regulation marks a “significant day for the Federal Statistical System” and U.S. statistics broadly, the work isn’t over.
“We must continue to make sure our Nation’s Federal Statistical System produces accurate, objective, high-quality, and trustworthy information and that our Federal statistics remain relevant in meeting the information needs of the American people, data users, and policymakers,” Orvis said.
The final rule goes into effect 60 days after the official publication in the Federal Register. Then, by Dec. 10, 2026, each RSAU must also revise its own rules or policies that pose barriers to the responsibilities, and the parent agency inspector general must conduct a compliance review, among other things.
Login.gov announces availability of facial recognition technology
Login.gov, the single sign-on platform provided by the General Services Administration, will begin offering a new identity verification option to its partners.
GSA’s new option will verify identity with facial recognition technology and is independently certified to NIST’s 800-63 Identity Assurance Level 2 (IAL2), a standard that introduces the need for either remote or physically present identity proofing, according to a Wednesday press release. The agency said this implementation will allow federal agencies to verify users at a higher assurance level.
The IAL2-compliant option offers one-to-one facial matching technology, and has users confirm their identity with a live selfie matched with a photo ID, such as a driver’s license. The release emphasized to users that “Login.gov does not use one-to-many facial identification and does not use these images for any purpose other than verifying a user’s identity.”
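The technical distinction is that one-to-one matching compares exactly two images, the live selfie and the ID photo, rather than searching a database of faces. The sketch below illustrates the concept; the embedding function is a stand-in, as Login.gov’s actual vendor pipeline is not public at this level of detail.

```python
# Conceptual sketch of one-to-one face matching: compare an embedding of the
# live selfie against an embedding of the ID photo -- never against a gallery.
import numpy as np

def embed(image_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model (e.g., a CNN); hypothetical."""
    rng = np.random.default_rng(int(image_pixels.sum()) % 2**32)
    return rng.standard_normal(128)

def is_same_person(selfie: np.ndarray, id_photo: np.ndarray,
                   threshold: float = 0.8) -> bool:
    a, b = embed(selfie), embed(id_photo)
    cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # One-to-one: a single comparison against the user's own document,
    # as opposed to one-to-many search across a database of faces.
    return cosine >= threshold
```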
Login.gov Director Hanna Kim said in a statement that her team “heard from our agency partners with higher-risk use cases that it was important that we offer a version of our strong identity verification service that is IAL2 certified. Looking ahead, we will continue to uphold our values of equity, privacy and transparency by incorporating best-in-class technology and learning from our academic and user research.”
The move follows GSA’s announcement in April that it was piloting biometric verification. The agency later told FedScoop that the pilot’s goal was to evaluate overall user experience throughout the new workflow and to find where individuals become stuck or confused so the “team can iteratively make improvements.”
Ann Lewis, director of the GSA’s Technology Transformation Services, previously told FedScoop that the agency was “getting lots of really interesting, useful data” during the pilot. Lewis also shared that the program recently added Chinese to the website’s language offerings.
“All of our programs at TTS are trying to prioritize accessibility, user research and language access,” Lewis said. “Since Login.gov is a front door for many other government agencies and programs, the user demographics and the breadth of who uses the system will vary from agency to agency and program to program. But sometimes we’ll identify a need … because there’s a whole sector of users who will benefit from that.”
This story was updated Oct. 11, 2024, to clarify that Login.gov does not use one-to-many facial identification and does not use these images for any purpose other than verifying a user’s identity.
CISA official: AI tools ‘need to have a human in the loop’
An abbreviated rundown of the Cybersecurity and Infrastructure Security Agency’s artificial intelligence work goes something like this: a dozen use cases, a pair of completed AI security tabletop exercises and a robust roadmap for how the technology should be used.
Lisa Einstein, who took over as CISA’s first chief AI officer in August and has played a critical role in each of those efforts, considers herself an optimist when it comes to the technology’s potential, particularly as it relates to cyber defenses. But speaking Wednesday at two separate events in Washington, D.C., Einstein mixed that optimism with a few doses of caution.
“These tools are not magic, they are still imperfect, and they still need to have a human in the loop and need to be used in the context of mature cybersecurity processes,” Einstein said during a panel discussion at NVIDIA’s AI Summit. “And in some ways, this is actually good news for all of us cybersecurity practitioners, because it means that doubling down on the basics and making sure we have strong human processes in place remains super critical, even as we use these new tools for automation.”
At Recorded Future’s Predict 2024 event later in the day, Einstein doubled down on those comments, noting that the “AI gold rush” happening across the tech sector now has people perhaps overly excited about AI-generated code. In reality, there’s plenty to be concerned about with AI as it’s observed “echoing previous generations of software security issues.”
“AI learns from data, and humans historically are really bad at building security into their code,” she said. “The human processes for all of these security inputs are going to be the most important thing. Your software assurance processes, it’s not going to be just fixed with some magical, mystical AI tool.”
Assessments of that kind from Einstein are possible thanks in part to CISA’s decades-long experience with commercial AI products, as well as the agency’s more recent work with a handful of bespoke tools. She specifically cited a reverse malware engineering system that leverages machine learning to aid analysts in diagnosing malicious code.
For that AI tool and others like it, Einstein said, human review is still absolutely critical.
“We don’t yet have a situation where there’s some AI agent doing all of our cyber defense for us,” she said. “And I think we have to be realistic about how important it is to still keep humans in the loop across all of our cybersecurity use cases.”
CISA has been able in recent months to drive home that human-centered case through two tabletop exercises led by the Joint Cyber Defense Collaborative. Einstein spoke at both Wednesday events about JCDC’s AI efforts, highlighting the agency’s decision to enlist new industry partners specializing in the emerging technology.
“AI companies are part of the IT sector, that’s part of critical infrastructure, and they need to understand how they can share information with CISA and with each other in the wake of possible AI incidents or threats,” she said.
The JCDC’s first AI security tabletop exercise was held in June and the second was completed “just a couple weeks ago,” Einstein said. Next up for the group will be the publication this fall of an AI security incident collaboration playbook, which she hopes will be “useful … in the context of future threats and incidents.”
“What we hope is that that community will be able to keep building this muscle memory of collaboration,” she said, “because it’s a terrible time to make new collaboration during a crisis. We need to have these strong relationships increase trust ahead of whatever crisis might happen.”
Part of CISA’s crisis planning in the months ahead will come in the form of its second set of risk assessments required by the White House’s AI executive order. Einstein said the agency is already “deep” into that second round of assessments, on track for a January delivery date. In the meantime, Einstein has a few words of advice for public or private-sector cyber officials as they consider using the technology.
“Don’t be a solution looking for a problem; become obsessed with the problem you’re trying to solve, and then use the best available automation or human to fix that problem,” she said. “Just because you have an AI hammer doesn’t mean that everything’s a nail, right?”
Data, talent, funding among top barriers for federal agency AI implementation
Federal officials are citing several common barriers to carrying out the Biden administration’s directives on artificial intelligence, including preparedness and resource issues, according to recent compliance plans shared by agencies with the White House.
A FedScoop analysis of 29 of those documents found that data readiness and access to quality data, a dearth of knowledge about AI and talent with specific expertise, and finite funding levels were among the most common challenges that agencies reported. Agencies also disclosed obstacles when it comes to their IT infrastructure, limitations in government-ready tools, and testing and evaluation challenges, among other issues.
The compliance plans, which were required to be completed and posted publicly in late September, are one of the first windows into how the executive branch is developing methods to ensure responsible AI use in alignment with President Joe Biden’s executive order on the technology and the Office of Management and Budget’s corresponding guidance.
Among the questions officials were asked to address in those plans was whether there have been barriers to the responsible use of AI and what steps the agency has taken — or intends to take — to address them. The responses reveal how legacy government modernization issues are now posing challenges for AI efforts.
About 75% of plans reviewed by FedScoop listed or described examples of specific barriers the agency is facing, though the documents varied widely in terms of detail. Of those, roughly a dozen mentioned data hurdles, six mentioned talent or knowledge gap problems and six underscored funding limitations.
In response to a FedScoop inquiry to OMB about the themes, a spokeswoman said the office “continues to work with agencies as they implement AI risk management practices to ensure the responsible and ethical use of AI in their operations.”
Alexander Howard, a digital government expert who currently blogs about emerging tech and public policy issues through his independent publication, Civic Texts, said that the barriers reflect longer-term issues for the federal government.
When the federal government was working to implement new technologies in 2009, for example, Howard said the two biggest challenges were around procurement and talent. While there’s been progress since then, he said “it’s interesting to see that 15 years later, these barriers are still there.”
Similarly, improving enterprise data management is an ongoing issue for the government, and that issue is particularly salient now with AI. Making government AI-ready “comes down to data,” Howard said. “It can’t do something if the data isn’t there for it to work on.”
Reliable, AI-ready data
Agencies that noted data barriers specifically cited issues like outdated storage methods, AI-readiness, and lack of trusted training data.
The Department of Energy said legacy methods of storing data, such as warehouses and databases, are outdated and weren’t designed to account for AI. “As a result, DOE faces obstacles in ensuring that data used for AI training and use in AI models is high quality, well-curated, and easily accessible,” the agency said.
That has manifested in “fragmented data sources, inconsistent data quality, and inconsistent and insufficient data interoperability standards,” DOE’s compliance plan said.
The Department of Veterans Affairs, meanwhile, said barriers for its efforts were “access to authoritative data sources for training, testing and validation of AI models and ensuring that these data sources have documentation describing how they are cleaned and refined to support model audits.”
VA pointed to its enterprise data platforms and creation of an enterprise data catalog as examples of work to address that barrier, saying those platforms are “crucial” to being able to access personally identifiable information and protected health information securely.
Some agencies noted the status of existing data improvement efforts. The U.S. Department of Agriculture said it’s still working to implement a portion of the Foundations for Evidence-Based Policymaking Act — known as the OPEN Government Data Act — which requires public government data to be machine-readable. The agency plans to chart the next steps for data readiness in its upcoming AI strategy and said that making more progress on that front “would contribute significantly to USDA’s readiness for AI.”
NASA, which also cited data readiness as an issue, said it was conducting workshops within the agency to address the problem. Those workshops, in part, will attempt to “identify data enhancements required to fuel transformation with data and AI.”
‘Ripe for improvement’
That data issues were a top barrier is notable given how much AI tools hinge on reliable data sources.
Valerie Wirtschafter, a fellow in the Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative, said that while the commonalities in agencies’ barriers weren’t necessarily surprising, the most acute of the issues seemed to be the data challenges.
“This strikes me as an area that is ripe for improvement and also one that can be addressed through more detailed guidance,” Wirtschafter said in an emailed response.
Howard said that a potential answer might be having a team within government dedicated to structuring “data that’s currently trapped in PDFs or elsewhere” and implementing the OPEN Government Data Act.
“But that’s not where the ethos is right now,” he said. Howard added that it seems like “the current leadership is people who understand how to use tools to publish, but it’s not people who use tools to make.”
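As one concrete, hypothetical example of what that kind of work can look like, a few lines using the pdfplumber library can lift a table out of a PDF and republish it in machine-readable form; the file names are placeholders.

```python
# Illustrative sketch: extract a table "trapped" in a PDF and write it as CSV.
import csv
import pdfplumber

with pdfplumber.open("agency_report.pdf") as pdf:      # placeholder file name
    table = pdf.pages[0].extract_table()  # first table on the first page

if table:
    with open("agency_report.csv", "w", newline="") as f:
        csv.writer(f).writerows(table)    # now machine-readable, per the Act
```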
Ensuring that federal data is AI-ready is already an issue that’s attracted attention from officials and lawmakers as well.
A working group within the Department of Commerce’s Data Governance Board has taken on the issue and is looking to develop standards for AI-ready government data. While the standard of machine readability is necessary, the department’s top data official, Oliver Wise, previously said that it’s “not sufficient” to meet the expectations of users in the AI age.
And in Congress, a bipartisan pair of Senate lawmakers seeking to extend the life of the Chief Data Officers Council included language in their bill that would task the group with specific work on AI data management practices.
Talent, funding wanted
On talent, agencies said there was a need to improve understanding of AI tools within their current workforce and hire more people who are in roles dedicated to advancing the technology.
“Like most federal agencies, USDA does not have sufficient AI literacy and AI talent today,” the agency said in its plan. “Without a significant investment to increase workforce literacy in AI and attract AI talent to USDA, our ability to execute the upcoming AI Strategy will be limited.”
In the Nuclear Regulatory Commission’s case, one barrier is employees’ aversion to the technology itself. NRC said its “workforce has expressed trepidation as well as a general lack of knowledge of AI capabilities.” As a result, the agency said it would continue to enable “effective change management” so workers can take advantage of those capabilities.
Hiring new AI talent and upskilling the workforce have been important parts of the Biden administration’s goals when it comes to federal actions on the technology. The administration currently has aspirations to hire 500 AI and AI-enabling workers by 2025.
Agencies, which were also required to report information about AI talent, said that hiring efforts have already included using the Office of Personnel Management’s direct hire authority for AI and AI-enabling workers — a mechanism aimed at making the hiring process easier — and training staff.
USDA said it posted several positions using that authority, which has reduced time to hire and made the agency’s AI positions “more competitive than before.” The agency further noted that it’s expanded its number of U.S. Digital Corps fellows for AI efforts, is working to mature its training program for data science and has launched a generative AI training course through a partnership with Skillsoft.
Agencies also said they’re training up their existing workforce. The Department of the Treasury said it’s working to acquire training developed by the General Services Administration. It expects to embed that training into its employee learning platform by the end of the year and is encouraging employees to educate themselves on AI with other training provided by GSA and free open-source videos.
On funding, agencies said limited financial resources increase risk and make it difficult to test and review uses. USDA said the lack of dedicated funding increases “risk of improper deployments.”
The Department of Commerce, whose National Institute of Standards and Technology experienced a cut in the last fiscal year’s appropriations, said “AI governance remains a broadly unfunded requirement, severely impacting Commerce’s ability to thoroughly analyze and track the responsible use of AI.”
Similarly, NRC said it’s “only able to assess, test, implement, and maintain new capabilities where resources have been made available to do so.” The agency said its IT and AI leaders will continue to express resource needs during budget formulation and execution.
In the case of the U.S. Trade and Development Agency, having limited staffing and funding as a small agency means it “leans upon the lessons learned and best practices of the interagency.”
The Export-Import Bank of the U.S., another small agency, said “AI use cases compete for funding and staffing with other important priorities at the Bank including non-IT investments in core EXIM capabilities, cyber security, and other use cases in our modernization agenda.”
While funding and talent pose barriers, they’re among the issues that Wirtschafter said federal agencies are more equipped to deal with than others.
The federal government has made strides to bring talent in through the Intergovernmental Personnel Act, direct-hire authority and the U.S. Digital Corps, she said. “Funding is also always a challenge, but I do think there are specific funding pools available for modernization efforts as well, and pay for a lot of these roles is typically quite reasonable,” Wirtschafter said.
Ultimately, the information on challenges provided in the compliance plans serves as something of a preview of forthcoming strategies to identify and address barriers to responsible AI that are required under OMB’s governance memo. Those plans must be published publicly by March 2025 and will include information about the status of the agency’s AI maturity and a plan to make sure AI innovation is supported.
While most agencies provided some list of challenges they’re facing, others noted there’s still work to be done. The Securities and Exchange Commission said it “plans to establish a working group that will be responsible for identifying any barriers to the responsible use of AI, including with respect to IT infrastructure, data practices, and cybersecurity processes.”
Senate bill to create NSF-awarded AI challenges gets House companion
A Senate bill that would task the National Science Foundation with overseeing a multimillion-dollar competition on artificial intelligence innovations now has a House companion from the leaders of the chamber’s AI task force.
The AI Grand Challenges Act from Reps. Jay Obernolte, R-Calif., and Ted Lieu, D-Calif., announced in a press release Wednesday, pairs with the May bill from Sens. Cory Booker, D-N.J., Martin Heinrich, D-N.M., and Mike Rounds, R-S.D., in calling on the NSF director to create and administer contests that incentivize researchers and entrepreneurs in AI research and innovations.
“Artificial intelligence has the power to change our world,” Lieu said in a statement. “We must maintain American leadership in AI research, innovation and implementation while minimizing potential risks associated with the technology. The AI Grand Challenges Act would encourage the next generation of AI researchers and developers through prize competitions to incentivize ambitious, cutting-edge AI development.”
Said Obernolte: “The AI Grand Challenges Act will ensure the U.S. will continue to lead in AI research and development across critical sectors such as health, energy, and cybersecurity. By incentivizing breakthroughs, we are paving the way for transformative advancements that will harness the incredible potential of artificial intelligence to solve some of our nation’s most pressing challenges.”
The legislation calls for $1 million grand challenges that leverage AI technologies to solve problems in more than a dozen categories: national security, cybersecurity, health, energy, environment, transportation, agriculture and rural development, education and workforce training, manufacturing, space and aerospace, quantum computing, materials science, supply chain resilience, disaster preparedness, and natural resources management. There would also be a category for cross-cutting AI, covering “robustness, interpretability, explainability, transparency, safety, privacy, content provenance, and bias mitigation.”
The legislation also calls on the NSF director, in concert with the directors of the White House’s Office of Science and Technology Policy and the National Institutes of Health, to oversee $10 million grand challenges for AI-enabled cancer breakthroughs.
Those competitions are aimed at using AI to target breakthroughs in the “most lethal forms of cancer and related comorbidities,” with an emphasis on “detection, diagnostics, treatments, therapeutics” or other AI innovations to increase “the total quality-adjusted life years of those affected or likely to be affected by cancer,” the bill states.