DHS’s initial AI inventory included a cybersecurity use case that wasn’t AI, GAO says

The Department of Homeland Security didn’t properly certify whether the artificial intelligence use cases for cybersecurity listed in its AI inventory were actual examples of the technology, according to a new Government Accountability Office report that calls into question the veracity of the agency’s full catalog.

DHS’s AI inventory, launched in 2022 to meet requirements called out in the Trump administration’s 2020 executive order on AI in the federal government, included 21 use cases across agency components, with two focused specifically on cybersecurity.

DHS officials told GAO that one of the two cyber use cases — Automated Scoring and Feedback, a predictive model intended to share cyber threat information — “was incorrectly characterized as AI.” The inclusion of AS&F “raises questions about the overall reliability of DHS’s AI Use Case Inventory,” the GAO stated.

“Although DHS has a process to review use cases before they are added to the AI inventory, the agency acknowledges that it does not confirm whether uses are correctly characterized as AI,” the report noted. “Until it expands its process to include such determinations, DHS will be unable to ensure accurate use case reporting.”

The GAO faulted DHS for its failure to fully implement the watchdog’s 2021 AI Accountability Framework, noting that the agency only “incorporated selected practices” to “manage and oversee its use of AI for cybersecurity.”

That framework features 11 key practices, covering everything from governance and data to performance and monitoring, against which GAO assessed DHS’s management, operation and oversight of AI for cybersecurity. The agency’s Chief Technology Officer Directorate reviewed all 21 use cases listed in the launch of DHS’s use case inventory, but additional steps to determine whether a use case “was characteristic of AI” did not occur, the report said.

“CTOD officials said they did not independently verify systems because they rely on components and existing IT governance and oversight efforts to ensure accuracy,” the GAO said. “According to experts who participated in the Comptroller General’s Forum on Artificial Intelligence, existing frameworks and standards may not provide sufficient detail on assessing social and ethical issues which may arise from the use of AI systems.”

The GAO offered eight recommendations to DHS, including expanding the agency’s AI review process, adding steps to ensure the accuracy of inventory submissions, and fully implementing the watchdog’s AI framework practices. DHS agreed with all eight recommendations, the report noted.

“Ensuring responsible and accountable use of AI will be critical as DHS builds its capabilities to use AI for its operations,” the GAO stated. “By fully implementing accountability practices, DHS can promote public trust and confidence that AI can be a highly effective tool for helping attain strategic outcomes.”

The DHS report follows earlier GAO findings of “incomplete and inaccurate data” in agencies’ AI use case inventories. A December 2023 report from the watchdog characterized most inventories as “not fully comprehensive and accurate,” a conclusion that matched previous FedScoop reporting.

Raimondo announces picks for U.S. AI Safety Institute’s director, CTO

The U.S. AI Safety Institute will be led by a key White House National Economic Council adviser, and an artificial intelligence official at the National Institute of Standards and Technology will also join the new group’s executive leadership team, Commerce Secretary Gina Raimondo announced Wednesday.

Elizabeth Kelly, special assistant to the president for economic policy at the NEC, will serve as the inaugural director of the USAISI, established under the NIST umbrella by President Joe Biden’s AI executive order. 

Kelly, who at the NEC helps guide the Biden administration’s financial regulation and technology policy, including AI, will be charged with “providing executive leadership, management, and oversight of the AI Safety Institute and coordinating with other AI policy and technical initiatives throughout the Department, NIST, and across the government,” per a Commerce Department press release.

Kelly was described in the release as a “driving force” behind Biden’s AI EO, taking the lead on domestic efforts to spur competition, protect privacy and back workers and consumers. 

The AI Safety Institute’s “ambitious mandate to develop guidelines, evaluate models, and pursue fundamental research will be vital to addressing the risks and seizing the opportunities of AI,” Kelly said in a statement. “I am thrilled to work with the talented NIST team and the broader AI community to advance our scientific understanding and foster AI safety. While our first priority will be executing the tasks assigned to NIST in President Biden’s executive order, I look forward to building the Institute as a long-term asset for the country and the world.”

The USAISI’s chief technology officer will be Elham Tabassi, NIST’s chief AI adviser. Tabassi led the development of NIST’s AI Risk Management Framework and also served as the associate director for emerging technologies in the agency’s Information Technology Laboratory. 

In her new role as CTO, Tabassi will oversee critical technical programs and “be responsible for shaping efforts at NIST and with the broader AI community to conduct research, develop guidance, and conduct evaluations of AI models including advanced large language models in order to identify and mitigate AI safety risks,” the release stated.

“The USAISI will advance American leadership globally in responsible AI innovations that will make our lives better,” Tabassi said in a statement. “We must have a firm understanding of the technology, its current and emerging capabilities, and limitations. NIST is taking the lead to create the science, practice, and policy of AI safety and trustworthiness. I am thrilled to be part of this remarkable team, leading the effort to develop science-based, and empirically backed guidelines and standards for AI measurement and policy.”

House lawmakers optimistic about NAIRR legislation prospects as pilot moves forward

Legislative efforts to codify the National AI Research Resource, or NAIRR, which would provide researchers with the computational tools needed to study the technology, may have a path forward in 2024, House lawmakers forecast.

“I think the prospects for the legislation are, I would say, very good to excellent,” Rep. Anna Eshoo, D-Calif., lead sponsor of the legislation, told FedScoop on Tuesday outside a hearing exploring federal science agencies’ use of AI for research. “We want to get this done this year.”

Eshoo isn’t currently a member of the House Committee on Science, Space and Technology, but was able to participate Tuesday in a joint hearing of its energy and research and technology subcommittees. She said she anticipates a markup of the legislation “hopefully in March.”

Similarly, Rep. Jay Obernolte, R-Calif., who is a co-sponsor of the House legislation, told FedScoop outside the same hearing that the legislation is a priority. 

“When I look at the landscape of potential AI legislation that should pass this year, I think the CREATE AI Act is right at the top of that list, and so I’m cautiously optimistic that we’ll see some traction,” Obernolte said.

A spokesperson for the full committee didn’t immediately respond to a request for comment on timing for a markup.

While the National Science Foundation recently launched a pilot for the NAIRR to inform the creation of the full-scale resource, the bipartisan and bicameral bill — called the CREATE AI Act — would enshrine it in federal statute. 

Eshoo said she welcomed the pilot launch and was “eager to see what comes out of it,” but also noted that “the full force of it is through the legislation.” 

The idea behind the NAIRR is to provide researchers with the resources needed to carry out their work on AI, including advanced computing, data, software, and AI models. The pilot, which was a requirement in President Joe Biden’s AI executive order, is supported with contributions from 11 federal agencies and 25 private sector partners. 

Evolution, metrics for success

The NAIRR was a central topic of discussion at the Tuesday hearing, which featured witnesses from NSF, Oak Ridge National Laboratory, Georgia Tech, Oakland University, and Anthropic. Lawmakers’ questions indicated interest in the pilot and capabilities of the full-scale resource. 

Rep. Frank Lucas, R-Okla., chairman of the full committee, for example, probed panelists about how the resource could keep up “with the rapidly evolving industry standards for advanced computational power.”

Jack Clark, co-founder and head of policy at Anthropic, which has been supportive of the legislation, said making sure researchers can do ambitious research will be key. 

“They should not be able to run into a situation where they’re unable to do their research due to running into computational limits,” Clark said. “And how you achieve that in a fiscally responsible way is to make sure that the NAIRR is allocating a portion of its resources for a small number of big-ticket projects each year and adopt a consortium approach for picking what those are.”

Meanwhile, Rep. Scott Franklin, R-Fla., asked what metrics Congress should watch to evaluate the pilot’s success as lawmakers weigh the estimated $2.6 billion the full resource would require.

In response, Tess deBlanc Knowles, NSF’s special assistant to the director for artificial intelligence, pointed to the number of users the pilot will serve, whether it reaches communities that don’t typically have access to the resources, how many students it can train, and the impact of the resources on projects “in terms of access to computational data resources that they are able to access through the pilot.”

DeBlanc Knowles also noted that experimenting with types of resources and modes of accessing them in the pilot will help the agency “design and scope the plan for the full-scale NAIRR.” 

Modernization efforts will bring billions in new revenue to IRS, analysis finds

The IRS’s revenue is estimated to jump by as much as $561 billion over the next decade thanks to IT and customer service modernization and other Inflation Reduction Act funding measures for the tax agency, a new Treasury Department analysis found.

Previous projections for how IRA funding would impact the IRS’s revenue only took into account revenues that were directly connected to increased enforcement staffing. The new analysis added modernization investments — as well as information reporting for digital assets, enhanced services to boost voluntary compliance, advances in analytics to improve productivity and other activities — to the equation.

With the “diversified revenue strategies” assessed in the analysis, Treasury projects the IRS to take in $851 billion from fiscal year 2024 to fiscal year 2034. Accounting for IT modernization, the report noted, reveals “a wide array of potential revenue benefits.” 

“Expanded data intake capacity and productivity will help increase compliance; improved audit selection and collection planning can increase the productivity of enforcement activities,” the report stated. “IT investments can also increase the productivity of auditors by providing them with better access to data during the audit process and allow for quicker and more efficient communication between auditors and taxpayers. 

“IT and customer experience investments can also facilitate voluntary compliance by making it easier for taxpayers to communicate with us, enabling taxpayers to complete more tasks online, reducing the demand for direct contact with customer service representatives and allowing us to process returns more quickly and efficiently,” it said.

The analysis noted specifically how investments in IT infrastructure will better position the IRS to handle IT-related outages. During the 2018 tax season, for example, a massive outage took the system offline for 11 hours, leading to processing delays for millions of returns. 

The Treasury highlighted California’s Enterprise Data to Revenue initiative as a case study in how the IRS’s modernization funding infusion could play out. The Golden State saw an approximate 1% increase in collections during the initial phase of its project, which a senior state official attributed mostly to “technical and process improvements that allowed for increased taxpayer self-service.”

If the 1% efficiency increase held for the IRS, the agency would be looking at an extra $43 billion annually, the report said.
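The math behind that projection is straightforward. As a rough illustration, here is a minimal sketch in Python; the roughly $4.3 trillion collections base is an assumption inferred from the report’s own figures, not a number stated in the analysis:

```python
# Back-of-the-envelope check of the report's $43 billion figure.
# The ~$4.3 trillion annual collections base is an assumption inferred
# from the report's numbers, not a figure stated in the analysis.
collections_base = 4.3e12   # assumed annual collections, in dollars
efficiency_gain = 0.01      # the 1% increase seen in California's project

extra_revenue = collections_base * efficiency_gain
print(f"Extra annual revenue: ${extra_revenue / 1e9:.0f} billion")  # ~= $43 billion
```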

While the potential is there for a significant revenue boon for the IRS, the analysis notes that the agency must be prepared for the downside to “the integration of technology in administration.”

“While it significantly enhances government capability to reduce tax evasion through enriched data access, it concurrently presents sophisticated taxpayers, particularly those with high incomes, with novel opportunities to evade taxes,” the report said. “Therefore, as we modernize our IT infrastructure, we must also devise strategies to close these new loopholes, ensuring that the digital transformation leads to a more equitable tax system.”

How Azure Orbital and the cloud are expanding our worldview

The rapid expansion of low Earth orbit satellite constellations, combined with a growing network of ground-based cloud computing centers, has brought space industrialization to a historic inflection point, according to a new report.

A record 2,897 satellites were launched into orbit around the Earth by more than 50 countries last year, according to Jonathan McDowell, an astronomer and astrophysicist known for documenting space activity. An even greater number are expected to be launched in 2024.

All of that contributes to a supernova of new space-based communications and Earth-observation sensor capabilities, says Stephen Kitay, a former Pentagon deputy assistant secretary for space policy, now senior director of Azure Space at Microsoft.

“A huge transformation is happening in space — and the technology that was never there before — effectively extending the internet and edge computing into space,” Kitay said in the report, produced by Scoop News Group and underwritten by Microsoft.

What’s been missing until recently, he says, is a reliable and secure way to manage and transmit the explosive growth of satellite data being collected in space and the means to automate and manage satellite activities more efficiently.

That’s changing as a new era of secure, scalable cloud computing centers strategically located around the globe is developing to stay connected to all those satellites — along with a new generation of software platforms to manage the devices, applications, and data on board all of them, according to the report.

How federal agencies stand to benefit

The report highlights the rise of hybrid space architecture, which Microsoft helped pioneer under the Azure Space banner launched in 2020. The concept involves “bringing cloud and space technologies together to foster a partner ecosystem,” explained Kitay, and that effort has spawned a variety of components.

At the same time, Microsoft is “bringing our code and our software into space by empowering developers to build applications on the ground in the cloud and then seamlessly deploy them on board spacecraft,” Kitay said.

The report also highlights examples of how federal agencies, including the U.S. Forest Service, the Environmental Protection Agency, the Department of Agriculture and the Defense Department, stand to gain powerful new insights from Earth observation data to better support their missions.

“Removing the barriers to seamless and secure connectivity from ground to orbit creates entirely new opportunities for federal government customers, including those operating in classified environments,” said Zach Kramer, vice president of the Mission Engineering unit at Microsoft.

“Defense and civilian agencies can leverage this ubiquitous connectivity to develop and deploy new applications, gather and transmit data at the speed of relevance, and gain an information advantage to serve the American people.”

Download the full report. And look for additional reports in our series “The Future of Cloud” underwritten by Microsoft.

This article was produced by Scoop News Group for FedScoop and underwritten by Microsoft.


Microsoft makes Azure OpenAI service available in government cloud platform

Federal agencies that use Microsoft’s Azure Government service now have access to its Azure OpenAI Service through the cloud platform, permitting use of the tech giant’s AI tools in a more regulated environment.

Candice Ling, senior vice president of Microsoft’s federal government business, announced the launch in a Tuesday blog post, highlighting the data safety measures of the service and its potential uses for productivity and innovation. 

“Azure OpenAI in Azure Government enables agencies with stringent security and compliance requirements to utilize this industry-leading generative AI service at the unclassified level,” Ling’s post said.

The announcement comes as the federal government is increasingly experimenting with and adopting AI technologies. Agencies have reported hundreds of use cases for the technology while also crafting their own internal policies and guidance for use of generative AI tools.

Ling also announced that the company is submitting Azure OpenAI for federal cloud services authorizations that, if approved, would allow higher-impact data to be used with the system. 

Microsoft is submitting the service for authorization for FedRAMP’s “high” baseline, which is reserved for cloud systems using high-impact, sensitive, unclassified data like health care, financial or law enforcement information. It will also submit the system for authorization for the Department of Defense’s Impact Levels 4 and 5, Ling said. Those data classification levels for DOD include controlled unclassified information, non-controlled unclassified information and non-public, unclassified national security system data.

In an interview with FedScoop, a Microsoft executive said the availability of the technology in Azure Government is going to bring government customers capabilities expected from GPT-4 — the fourth version of OpenAI’s large language models — in “a more highly regulated environment.”

The executive said the company received feedback from government customers who were experimenting with smaller models and open source models but wanted to be able to use the technology on more sensitive workloads.

Over 100 agencies have already deployed the technology in the commercial environment, the executive said, “and the majority of those customers are asking for the same capability in Azure Government.” 

Ling underscored data security measures for Azure OpenAI in the blog, calling it “a fundamental aspect” of the service. 

“This includes ensuring that prompts and proprietary data aren’t used to further train the model,” Ling wrote. “While Azure OpenAI Service can use in-house data as allowed by the agency, inputs and outcomes are not made available to Microsoft or others using the service.”

That means embeddings and training data aren’t available to other customers, nor are they used to train other models or used to improve the company’s or third-party services. 
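For illustration only, here is a minimal sketch of what calling the service from Python might look like; the resource name, deployment name and .azure.us endpoint are assumptions about an Azure Government setup, not details from Microsoft’s announcement:

```python
import os

from openai import AzureOpenAI  # openai>=1.0

# Hypothetical Azure Government resource; real endpoint names will differ.
client = AzureOpenAI(
    azure_endpoint="https://my-agency-resource.openai.azure.us",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# "gpt-4" is an assumed deployment name chosen by the agency.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize this unclassified memo: ..."}],
)
print(response.choices[0].message.content)
```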

According to Ling’s blog, the technology is already being used for a tool being developed by the National Institutes of Health’s National Library of Medicine. In collaboration with the National Cancer Institute, the agency is working on a large language model-based tool, called TrialGPT, that will match patients with clinical trials.

How risky is ChatGPT? Depends which federal agency you ask

From exploratory pilots to temporary bans on the technology, most major federal agencies have now taken some kind of action on the use of tools like ChatGPT. 

While many of these actions are still preliminary, growing focus on the technology signals that federal officials expect to not only govern but eventually use generative AI. 

A majority of the civilian federal agencies that fall under the Chief Financial Officers Act have either created guidance, implemented a policy, or temporarily blocked the technology, according to a FedScoop analysis based on public records requests and inquiries to officials. The approaches vary, highlighting that different sectors of the federal government face unique risks — and unique opportunities — when it comes to generative AI. 

As of now, several agencies, including the Social Security Administration, the Department of Energy, and Veterans Affairs, have taken steps to block the technology on their systems. Some, including NASA, have or are working on establishing secure testing environments to evaluate generative AI systems. The Agriculture Department has even set up a board to review potential generative AI use cases within the agency. 

Some agencies, including the U.S. Agency for International Development, have discouraged employees from inputting private information into generative AI systems. Meanwhile, several agencies, including Energy and the Department of Homeland Security, are working on generative AI projects. 

The Departments of Commerce, Housing and Urban Development, Transportation, and Treasury did not respond to requests for comment, so their approach to the technology remains unclear. Other agencies, including the Small Business Administration, referenced their work on AI but did not specifically address FedScoop’s questions about guidance, while the Office of Personnel Management said it was still working on guidance. The Department of Labor didn’t respond to FedScoop’s questions about generative AI. FedScoop obtained details about the policies of Agriculture, USAID, and Interior through public records requests. 

The Biden administration’s recent executive order on artificial intelligence discourages agencies from outright banning the technology. Instead, agencies are encouraged to limit access to the tools as necessary and create guidelines for various use cases. Federal agencies are also supposed to focus on developing “appropriate terms of service with vendors,” protecting data, and “deploying other measures to prevent misuse of Federal Government information in generative AI.”

Agency policies on generative AI differ
USAID
Policy or guidance: Neither banned nor approved, but employees discouraged from using private data in a memo sent in April.
Notes: Didn’t respond to a request for comment. Document was obtained via FOIA.

Agriculture
Policy or guidance: Interim guidance distributed in October 2023 prohibits employee or contractor use in an official capacity and on government equipment. Established a review board for approving generative AI use cases.
Risk assessment: A March risk determination by the agency rated ChatGPT’s risk as “high.”
Notes: OpenAI disputed the relevance of a vulnerability cited in USDA’s risk assessment, as FedScoop first reported.

Education
Policy or guidance: Distributed initial guidance to employees and contractors in October 2023. Developing comprehensive guidance and policy. Conditionally approved use of public generative AI tools.
Sandbox: Is working with vendors to establish an enterprise platform for generative AI.
Relationship with generative AI provider: Not at the time of inquiry.
Notes: Agency isn’t aware of generative AI uses in the department and is establishing a review mechanism for future proposed uses.

Energy
Policy or guidance: Issued a temporary block of ChatGPT but said it’s making exceptions based on needs.
Sandbox: Sandbox enabled.
Relationship with generative AI provider: Microsoft Azure and Google Cloud.

Health and Human Services
Policy or guidance: No specific vendor or technology is excluded, though subagencies, like the National Institutes of Health, prevent use of generative AI in certain circumstances.
Notes: “The Department is continually working on developing and testing a variety of secure technologies and methods, such as advanced algorithmic approaches, to carry out federal missions,” Chief AI Officer Greg Singleton told FedScoop.

Homeland Security
Policy or guidance: For public, commercial tools, employees must seek approval and attend training. Four systems, ChatGPT, Bing Chat, Claude 2 and DALL-E 2, are conditionally approved.
Sandbox: Only for use with public information.
Relationship with generative AI provider: In conversations.
Notes: DHS is taking a separate approach to generative AI systems integrated directly into its IT assets, CIO and CAIO Eric Hysen told FedScoop.

Interior
Policy or guidance: Employees “may not disclose non-public data” in a generative AI system “unless or until” the system is authorized by the agency. Generative AI systems “are subject to the Department’s prohibition on installing unauthorized software on agency devices.”
Notes: Didn’t respond to a request for comment. Document was obtained via FOIA.

Justice
Policy or guidance: The DOJ’s existing IT policies cover artificial intelligence, but there is no separate guidance for AI. No use cases have been ruled out.
Sandbox: No plans to develop an environment for testing currently.
Relationship with generative AI provider: No formal agreements beyond existing contracts with companies that now offer generative AI.
Notes: DOJ spokesperson Wyn Hornbuckle said the department’s recently established Emerging Technologies Board will ensure that DOJ “remains alert to the opportunities and the attendant risks posed by artificial intelligence (AI) and other emerging technologies.”

State
Policy or guidance: Initial guidance doesn’t automatically exclude use cases. No software type is outright forbidden, and generative AI tools can be used with unclassified information.
Sandbox: Currently developing a tailored sandbox.
Relationship with generative AI provider: Currently modifying terms of service with AI service providers to support State’s mission and security standards.
Notes: A chapter in the Foreign Affairs Manual, as well as State’s Enterprise AI strategy, apply to generative AI, according to the department.

Veterans Affairs
Policy or guidance: Developed internal guidance in July 2023 based on the agency’s existing ban on using sensitive data on unapproved systems. ChatGPT and similar software are not available on the VA network.
Risk assessment: Didn’t directly address, but said the agency is pursuing low-risk pilots.
Relationship with generative AI provider: VA has contracts with cloud companies offering generative AI services.

Environmental Protection Agency
Policy or guidance: Released a memo in May 2023 saying personnel were prohibited from using generative AI tools while the agency reviewed “legal, information security and privacy concerns.” Employees with “compelling” uses are directed to work with the information security officer on an exception.
Risk assessment: Conducting a risk assessment.
Sandbox: No testbed currently.
Relationship with generative AI provider: EPA is “considering several vendors and options in accordance with government acquisition policy” and is “also considering open-source options,” a spokesperson said.
Notes: The agency intends to create a more formal policy in line with Biden’s AI order.

General Services Administration
Policy or guidance: Publicly released a policy in June 2023 saying it blocked third-party generative AI tools on government devices. According to a spokesperson, employees and contractors can only use public large language models for “research or experimental purposes and non-sensitive uses involving data inputs already in the public domain or generalized queries. LLM responses may not be used in production workflows.”
Sandbox: Agency has “developed a secured virtualized data analysis solution that can be used for generative AI systems,” a spokesperson said.

NASA
Policy or guidance: May 2023 policy says public generative AI tools are not cleared for widespread use on sensitive data. Large language models can’t be used in production workflows.
Risk assessment: Cited security challenges and limited accuracy as risks.
Sandbox: Currently testing the technology in a secure environment.

National Science Foundation
Policy or guidance: Guidance for generative AI use in proposal reviews expected soon; also released guidance for the technology’s use in merit review. A set of acceptable use cases is being developed.
Sandbox: “NSF is exploring options for safely implementing GAI technologies within NSF’s data ecosystem,” a spokesperson said.
Relationship with generative AI provider: No formal relationships.

Nuclear Regulatory Commission
Policy or guidance: In July 2023, the agency issued an internal policy statement to all employees on generative AI use.
Risk assessment: Conducted “some limited risk assessments of publicly available gen-AI tools” to develop the policy statement, a spokesperson said. NRC plans to continue working with government partners on risk management and will work on security and risk mitigation for internal implementation.
Sandbox: NRC is “talking about starting with testing use cases without enabling for the entire agency, and we would leverage our development and test environments as we develop solutions,” a spokesperson said.
Relationship with generative AI provider: Has a Microsoft license for Azure AI. NRC is also exploring the implementation of Microsoft Copilot when it’s added to the Government Community Cloud.
Notes: “The NRC is in the early stages with generative AI. We see potential for these tools to be powerful time savers to help make our regulatory reviews more efficient,” said Basia Sall, deputy director of the NRC’s IT Services Development & Operations Division.

Office of Personnel Management
Policy or guidance: The agency is currently working on generative AI guidance.
Sandbox: “OPM will also conduct a review process with our team for testing, piloting, and adopting generative AI in our operations,” a spokesperson said.

Small Business Administration
Policy or guidance: SBA didn’t address whether it had a specific generative AI policy.
Notes: A spokesperson said the agency “follows strict internal and external communication practices to safeguard the privacy and personal data of small businesses.”

Social Security Administration
Policy or guidance: Issued a temporary block on the technology on agency devices, according to a 2023 agency report.
Notes: Didn’t respond to a request for comment.
Sources: U.S. agency responses to FedScoop inquiries and public records.
Note: Chart displays information obtained through records requests and responses from agencies. The Departments of Commerce, Housing and Urban Development, Transportation, and Treasury didn’t respond to requests for comment. The Department of Labor didn’t respond to FedScoop’s questions about generative AI.

FITARA scorecard adds cloud metric, prompts expected grade declines

A new version of an agency scorecard tracking IT modernization progress, unveiled Thursday, featured tweaked and new metrics, including one for cloud computing that caused an anticipated dip in agency grades.

The latest round of grading awarded one A, 10 Bs, 10 Cs, and three Ds to federal agencies, Rep. Gerry Connolly, D-Va., announced at a roundtable discussion on Capitol Hill. While the grades were generally a decline from the last iteration of the scorecard, Connolly said that starting at a “lower base” was expected with the addition of a new category. “The object here is to move up.”

Carol Harris, director of the Government Accountability Office’s IT and Cybersecurity team, who was also at the roundtable, similarly attributed the decline to the cloud category.

“A large part of this decrease in the grades was driven by the cloud computing category, because it is brand new, and it’s something that we’ve not had a focus on relative to the scorecard,” Harris said.

The FITARA scorecard measures agency progress in meeting requirements of the 2014 Federal Information Technology Acquisition Reform Act and has over time added other technology priorities for agencies. In addition to cloud, the new scorecard also changed existing metrics related to a 2017 law, added a new category grading IT risk assessment progress, and installed a progress tracker.

“I think it’s important the scorecard be a dynamic scorecard,” Connolly said in an interview with FedScoop after the roundtable. He added: “The goal isn’t, let’s have brand new, shiny IT. It’s to make sure that our functions and operations are better serving the American people and that they’re protected.”

Harris also underscored the accomplishments of the scorecard, citing $4.7 billion in savings as a result of closing roughly 4,000 data centers and $27.2 billion in savings as the result of eliminating duplicative systems across government.

“So, tremendous accomplishments all coming out of FITARA and the implementation of FITARA,” she said.

The Thursday roundtable featured agency representatives from the Office of Personnel Management, the Nuclear Regulatory Commission, the Department of Housing and Urban Development, and the U.S. Agency for International Development. USAID was the only agency to get an A.

Updated scorecard

Among the changes, the new scorecard updated the existing category for Modernizing Government Technology to reflect whether agencies have an account dedicated to IT that “satisfies the spirit of” the Modernizing Government Technology Act, which became law in 2017.

Under that metric, each agency must have a dedicated funding stream for government IT that’s controlled by the CIO and provides at least three years of flexible spending, Connolly said at the roundtable.

The transparency and risk management category has also evolved into a new CIO investment evaluation category, Connolly said in written remarks ahead of the roundtable. That category will grade how recently each agency’s IT Dashboard “CIO Evaluation History” data feed reflects new risk assessments for major IT investments, he said.

The 17th scorecard also added a progress tracker, which Connolly said Democrats on the House Subcommittee on Cybersecurity, Information Technology, and Government Innovation worked on with the GAO to create. Connolly is the ranking member of that subcommittee.

“This section will provide transparency into metrics that aren’t being regularly updated or do not lend themselves to grading across agencies,” Connolly said, adding the data “still merits congressional attention, and we want to capture it with this tool.”

The progress tracker also allows stakeholders to keep tabs on categories the subcommittee has retired for the scorecard.

The release of a new scorecard has in the past been marked by a hearing, but Connolly indicated the Republican majority declined to take the issue up.

At the start of the meeting, Connolly said he was “disappointed” that “some of the Republican majority had turned their backs on FITARA.” He later noted that by “the difference of two votes, this would be called a hearing instead of a meeting.”

FITARA scorecard grades in September were also announced with a roundtable and not a hearing.

“FITARA is a law concerning federal IT management and acquisition,” a House Committee on Oversight and Accountability spokesperson said in a statement to FedScoop. South Carolina Republican Rep. Nancy Mace’s “subcommittee has held a dozen hearings in the past year concerning not only federal information technology management and acquisition, but also pressing issues surrounding artificial intelligence, and cybersecurity. These hearings have been a critical vehicle for substantive oversight and the development of significant legislation.”

This story was updated Feb. 2, 2024, with comments from a House Committee on Oversight and Accountability spokesperson.

NIST previews open competition for semiconductor research and development

The National Institute of Standards and Technology released a notice of intent Thursday for an open competition on digital twins for semiconductor manufacturing, packaging and assembly. 

According to the Federal Register posting, the CHIPS Research and Development Office seeks to establish “one (1) Manufacturing USA Institute focused on” digital twins — digital representations of a physical object or process that can assist in simulating potential situations and their outcomes — for semiconductor operations. Additionally, the competition includes “the validation of such digital twins in a physical prototyping facility.”
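As a toy illustration of the concept (not anything drawn from the NIST notice), a digital twin is a software model that mirrors a physical process closely enough to test “what if” scenarios before touching real equipment. The sketch below, with an invented wafer-annealing furnace and made-up thermal constants, is purely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class FurnaceTwin:
    """Toy digital twin of a wafer-annealing furnace (illustrative only)."""
    temperature_c: float = 25.0

    def step(self, heater_power_kw: float, minutes: float) -> None:
        # Extremely simplified thermal model: heating minus ambient losses.
        # The coefficients here are invented for illustration.
        heating = 2.0 * heater_power_kw
        losses = 0.05 * (self.temperature_c - 25.0)
        self.temperature_c += (heating - losses) * minutes

# Simulate a proposed recipe on the twin before running the real furnace.
twin = FurnaceTwin()
for _ in range(10):
    twin.step(heater_power_kw=5.0, minutes=1.0)
print(f"Predicted temperature: {twin.temperature_c:.1f} C")
```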

“After receiving extensive public input, CHIPS R&D determined that a single institute with both regionally-focused programs and meaningful cross-region participation will best meet the CHIPS R&D program goals of strengthening U.S. technology leadership, accelerating ideas to market and realizing a robust semiconductor workforce,” the notice states. “Despite substantial existing investment in proprietary digital twin technology, the United States lacks a comprehensive environment for collaborative development and validation of semiconductor industry digital twins.”

The minimum NIST commitment is listed at $200 million across five years. CHIPS R&D expects to announce the competition officially in the second quarter of 2024, according to the notice, and to post the official notice of funding opportunity on Grants.gov.

Manufacturing USA Institutes, one of which NIST is looking to establish, are cross-sector partnerships that attempt to bring together industry organizations of various forms and sizes and government entities. The institutes aim to foster a community that lends itself to collaboration, support the delivery of tangible benefits to all manufacturers, enhance research institutions and “ensure a national reach in workforce development.”

The notice also states that a planned CHIPS Manufacturing USA Institute Industry Day will happen this month, providing an opportunity for the government to “solicit feedback on the NIST plans and timelines for the Institute.”