Federal judge enjoins use of presumed racial disadvantage in SBA contracting program

A federal judge in Tennessee this week struck down the Small Business Administration’s use of presumed racial and ethnic disadvantage as a qualification for a keystone program intended to broaden the government contracting landscape, throwing it into uncertainty.

The Wednesday ruling from Judge Clifton L. Corker of the U.S. District Court for the Eastern District of Tennessee enjoined the small business agency and the Department of Agriculture — both defendants in the case — from using a “rebuttable presumption of social disadvantage” in the SBA’s business development program known as “8(a).”

The opinion relies in part on the Supreme Court’s recent decision striking down the use of race in college admissions through affirmative action. It could have broad impact as the SBA manages the program across the U.S. government, federal contracting experts told FedScoop, though they noted its full scope isn’t clear and it will likely be appealed.

“It’s a significant blow to what really is the SBA’s crown jewel socioeconomic program,” said Matthew Moriarty, a founding partner at Schoonover & Moriarty in Kansas who focuses on federal contracting.

The Small Business Administration, the Department of Agriculture, and the Justice Department, which represents the agencies, didn’t respond to requests for comment on whether they planned to appeal the decision.

In his opinion, Corker said use of a “rebuttable presumption” violated the Fifth Amendment rights of the company that brought the lawsuit, Ultima Services Corporation, to equal protection of the law. 

A “rebuttable presumption” is a legal term for something presumed true absent other evidence. The statute that established the 8(a) program, the Small Business Act, uses that presumption in the section that defines certain racial and ethnic groups as socially disadvantaged. The 8(a) program is aimed at helping small businesses that are at least 51% owned and operated by a U.S. citizen who is socially and economically disadvantaged.

Corker said the agencies didn’t identify whether racial groups are underrepresented in specific industries that are relevant to contracts in the 8(a) program and didn’t outline the goals of the program.

“Without stated goals for the 8(a) program or an understanding of whether certain minorities are underrepresented in a particular industry, Defendants cannot measure the utility of the rebuttable presumption in remedying the effects of past racial discrimination,” Corker said.

The lawsuit began in March 2020 when Ultima, a small business government contractor, filed a complaint alleging that the rebuttable presumption in the program was racially discriminatory. The business is owned by Celeste Bennett, a white woman, and isn’t eligible for the presumption. In court documents, Ultima claimed it lost out on opportunities to businesses in the program.

“We’re pleased with the decision,” said Michael E. Rosman, general counsel for the Center for Individual Rights, who represents Ultima. “Defendants’ reservation of contracts for the Section 8(a) program was decimating to Ultima, shrinking its revenues by large proportions.  The order enjoins the use of a presumption that favored certain small businesses based solely on the race of their owners.”

Rosman said they believe the decision would preclude USDA from “reserving virtually every contract in Ultima’s industry for the program” and “will have significant positive effects outside of the USDA” as the Small Business Administration’s approval is needed to reserve contracts for the 8(a) program.

Whether the ruling impacts those currently in the program isn’t clear.

Emily W. Murphy, a senior fellow at the Greg and Camille Baroni Center for Government Contracting at George Mason University, said if the ruling stands, current participants in the program will likely have to prove their disadvantage.

Because the injunction is meant to remedy harm to Ultima, which is challenging how the program is administered in favor of current 8(a) participants, “it would suggest it has to apply to current participants,” said Murphy, former administrator of the General Services Administration and a former SBA contracting official.

While the opinion could be appealed, she said the administration will likely have to weigh the risks of such an action in the current legal environment. The SBA and DOJ will have to figure out how to maneuver through that because the ruling comes as the federal government enters its fourth quarter, when a majority of spending takes place, Murphy said.

Others said an impact on current program participants is unlikely.

Moriarty said the decision is unlikely to impact current set-asides — contracts specifically designated for types of small businesses — and people currently in the program. But moving forward, the SBA can’t rely on the presumption that certain individuals are socially disadvantaged. Those businesses could still demonstrate disadvantage, but that process would likely be longer and more difficult, he said.

“So now, because of this injunction, all applicants, regardless of the type of person that they are, are going to have to demonstrate specific instances of social disadvantage in order to be granted entry into the program,” Moriarty said.

Antonio R. Franco, a managing partner at Piliero Mazza in Washington who focuses on government contracting, also said he doesn’t believe the ruling would apply to people currently in the program, but that it could have a chilling effect on working with businesses in the program.

“The problem is going to be can those people still benefit from the program if agencies are reluctant to award contracts to these companies because they believe that they’re going to be challenged using this case,” Franco said.

The parties in the case will meet next on Aug. 31 to discuss other potential remedies.

House Dems call on White House to make agencies adopt NIST AI framework

House Democrats on Thursday pushed the White House’s Office of Management and Budget to mandate federal agencies adopt the National Institute of Standards and Technology’s AI Risk Management Framework, which could significantly affect how the government designs and develops AI systems.

House Science, Space and Technology Committee Ranking Member Zoe Lofgren, D-Calif., along with Reps. Ted Lieu, D-Calif., and Haley Stevens, D-Mich., sent a letter to OMB urging that federal agencies and vendors be required to follow the currently voluntary NIST AI guidance to analyze and mitigate the risks associated with the technology.

“We ask that you also consider utilizing the NIST AI RMF and subsequent risk management guidance specifically tailored for the federal government, to ensure agencies and vendors meet baseline standards in mitigating risk,” the three Democratic members said in their letter to OMB.

The Democrats said that the federal government must take a coordinated approach to ensure cutting-edge technologies like AI are used responsibly and that the NIST AI framework served as a “great starting point for agencies and vendors to analyze the risks associated with AI and how their systems can be designed and developed with these risks in mind.”

The Biden administration in recent months has worked to hold organizations accountable for addressing bias that may be embedded within AI systems while also promoting innovation. In October, it published an AI ‘Bill of Rights’ blueprint document, which was followed by NIST’s voluntary risk management framework in January.

The NIST AI framework document sets out four functions it says are key to building responsible AI systems: govern, map, measure and manage.

The document is a “rules of the road” that senior technical advisers at NIST hope will provide a starting point for government departments and private sector companies big and small in deciding how to regulate their use of the technology. Organizations can currently adopt the framework on a voluntary basis.

Commerce Secretary Gina Raimondo said in April that NIST’s AI framework represents the “gold standard” for the regulatory guidance of AI technology and has so far received a warm reception from industry.

Republicans also support federal agencies adopting the NIST AI framework when creating and designing AI going forward.

A House Republican Science, Space and Technology Committee aide told FedScoop that committee Chairman Frank Lucas first raised the issue of federal agency adoption of the NIST AI framework in May, and that Republicans are now drafting legislation on the issue.

OpenAI, Meta and other tech firms sign onto White House AI commitments

Seven major companies building powerful artificial intelligence software have signed onto a new set of voluntary commitments to oversee how the technology is used. These commitments are focused on AI safety, cybersecurity, and public trust, and come as the White House develops an upcoming executive order and bipartisan legislation focused on AI.

The seven companies participating in the effort are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, according to a White House official who spoke with reporters on Thursday. Myriad countries, including Brazil, the UAE, India, and Israel, have been consulted on the voluntary commitments, too.

These latest updates reflect the Biden administration’s growing focus on artificial intelligence, and come just weeks after Senate Majority Leader Chuck Schumer introduced his own SAFE Innovation Framework, which focuses on both regulating and incubating the technology.

The commitments include pre-release internal and external security testing for AI models, as well as insider-threat safeguards and cybersecurity investments focused on unreleased and proprietary model weights. Weights serve a critical role in “training” AI neural networks.

Along with commitments to research bias and privacy risks associated with the technology, the companies have also pledged to support the development of new tools that could automatically label AI-created content, including through the use of “watermarking.”

The commitments follow concern from the Biden administration over the use of AI. The leaders of several major AI firms, including OpenAI, Microsoft, and Anthropic, visited the White House in May to meet with Vice President Harris.

The White House’s chief cyber advisor Anne Neuberger met with executives from several tech companies, including OpenAI and Microsoft, in April to discuss cybersecurity risks created by these tools. At the same time, Neuberger urged the companies to consider ways to deploy AI watermarking, FedScoop reported in May.

Notably, there’s growing skepticism toward using voluntary measures and commitments to rein in Big Tech companies. 

On a call with reporters, a White House official said that in some cases, these commitments would be a change in the status quo for these companies. The official said the White House is already in conversation with members of both parties on AI issues, and emphasized the upcoming executive order, too.

Microsoft President Brad Smith, Google President Kent Walker, Anthropic CEO Dario Amodei and Inflection AI CEO Mustafa Suleyman will today participate in a meeting with the White House to discuss the new commitments.

They will be joined by Meta President Nick Clegg, OpenAI President Greg Brockman and Amazon Web Services CEO Adam Selipsky.

In a statement shared with FedScoop, president of global affairs at Google and Alphabet Kent Walker said: “Today is a milestone in bringing the industry together to ensure that AI helps everyone. These commitments will support efforts by the G7, the OECD, and national governments to maximize AI’s benefits and minimize its risks.”

Meta President Nick Clegg said: “Meta welcomes this White House-led process, and we are pleased to make these voluntary commitments alongside others in the sector. They are an important first step in ensuring responsible guardrails are established for AI and they create a model for other governments to follow.”

He added: “AI should benefit the whole of society. For that to happen, these powerful new technologies need to be built and deployed responsibly. As we develop new AI models, tech companies should be transparent about how their systems work and collaborate closely across industry, government, academia and civil society.”

In a blog post commenting on the commitments, Microsoft President Brad Smith wrote: “The commitments build upon strong pre-existing work by the U.S. Government (such as the NIST AI Risk Management Framework and the Blueprint for an AI Bill of Rights) and are a natural complement to the measures that have been developed for high-risk applications in Europe and elsewhere.”

He added: “We look forward to their broad adoption by industry and inclusion in the ongoing global discussions about what an effective international code of conduct might look like.”

Editor’s note, 7/21/23 at 11:30 a.m. ET: This story was updated to include comment from Google, Meta and Microsoft.

Watchdog calls on HUD to improve data collection for assisted household accommodation requests 

The Government Accountability Office has recommended that the Department of Housing and Urban Development improve how it collects and analyzes data about requests for reasonable accommodations from assisted households.

In a report issued Thursday, the congressional watchdog said that monitoring such requests — which range from service animal permit applications to requests for bathroom grab bars — would make the agency more aware of whether it is fulfilling the needs of assisted households.

In total, HUD’s assisted housing program supports nearly 5 million units, which house a total of 11 million people. According to the agency’s analysis of the 2019 American Housing Survey, about 42% of renting households assisted through the program — or 1.8 million households — reported having a disability.

“Although HUD collects information on a household’s disability status, the agency does not systematically collect data on requests for reasonable accommodations. Doing so would make HUD more aware of whether the needs of assisted households were met,” the report said.

The watchdog added: “HUD also does not have a comprehensive, documented strategy for its oversight of compliance with reasonable accommodation requirements. HUD prioritizes its oversight on investigating complaints, which it is legally required to do.”

Responding to GAO’s report, the agency neither agreed nor disagreed with the watchdog’s recommendation. It noted the challenges in addressing the recommendations, including resource constraints. 

The congressional watchdog reviewed issues related to the rental assistance HUD provides to households with disabilities in response to a request from lawmakers.

Congressional watchdog agrees to take on generative AI assessment

The Government Accountability Office will conduct a review of the potential harms caused by generative AI tools like ChatGPT. The watchdog’s plans to assess the technology follow a request sent by Sens. Ed Markey, D-Mass., and Gary Peters, D-Mich., to the GAO comptroller general last month.

“[I]t has already become apparent that generative AI is a double-edged sword, carrying with it a broad range of serious harms. Scammers have begun using generative AI for manipulative voice, text, and image synthesis,” wrote the senators in a June 22 letter. “The output from generative AI can replicate damaging racist and sexist stereotypes. Large language models can also ‘hallucinate,’ generating false content, including potentially defamatory statements.”

FedScoop learned about GAO’s plans after obtaining a response letter that the agency sent the senators at the end of last month. That letter, which was written by GAO congressional relations managing director A. Nicole Clowers, confirmed that GAO accepted the work as within the scope of its authority and noted that “staff with the required skills would be available shortly.”

“We have accepted the request and plan to meet with Congressional staffers soon to discuss our approach to the work,” said Charles Young, managing director of public affairs at GAO, in an email to FedScoop on Thursday. “That approach and time frames for issuance will be determined as we get started on the effort.”

GAO is not currently using generative AI in its auditing work, he added.

Generative AI isn’t quite cleared for takeoff at NASA

Back in May, NASA’s chief information officer Jeff Seaton emailed the space agency’s staff to make clear that tools like ChatGPT, Google Bard, and Meta Llama had not been cleared for any widespread use with “sensitive NASA data.” The email, which does not seem to have been publicized until now, also noted that a community of “potential early adopters” across the agency was working to investigate “certain” AI technologies.

The notice, which FedScoop obtained after it was included in a solicitation that NASA posted online, comes as federal agencies begin to outline policies related to the use of new AI tools, and particularly, text-generating software like ChatGPT.

The email also serves as a reminder that large federal agencies are wrestling with both the risks — and opportunities — that might come with generative AI tools. Privacy researchers have warned that sensitive information entered into AI models such as ChatGPT may end up in the public domain. In the private sector, these concerns have led companies like JPMorgan to clamp down on the use of the technology by staff.

“OCIO is coordinating closely with leading industry partners, fellow government organizations including the Federal CIO, Chief Data Officer, and Chief Information Security Officer Councils to understand the significant amount of policy guidance emerging around Generative AI as well as how other organizations are adopting generative AI capabilities,” said Seaton in the May 22 email. “OCIO is also connecting with commercial providers to understand how Generative AI will be integrated into widely available tools, such as the Microsoft 365 suite, visual tools like Adobe Illustrator, and the often-used Google search.”

Seaton also pointed to a range of issues raised by popular generative AI tools. Some of these programs are hosted in the cloud and use systems that store information outside the United States, which means that NASA data could be exposed to unauthorized and non-US individuals. He warned that these tools aren’t necessarily accurate — and raise ethical and intellectual property questions, too.  

In a statement, a NASA spokesperson said: “NASA provided written guidance to employees on generative artificial intelligence technologies in May 2023. While use of AI technologies on NASA systems is not authorized at this time, the agency’s Office of the Chief Information Officer is still evaluating use of some technologies in a secure online environment.”

They added: “We also are evaluating AI tech in collaboration with others within the agency. This investigation is ongoing, and NASA will provide employees an update later, and codify any guidance in a future NASA Policy Directive. Finally, NASA also is closely working with other federal agencies and staying informed on evolving federal guidance and all policy related to AI.”

Notably, the space agency is still developing a unified approach to artificial intelligence. A report from NASA’s inspector general published in early May noted that the agency is struggling to track its own usage of the technology. NASA does not operate with a single standard definition of AI, and, the report outlined, “does not have a singular designation or classification mechanism to accurately classify and track AI or to identify AI expenditures within the [a]gency’s financial system.” 

NASA hasn’t completely eschewed the idea of working with this kind of AI, though. The Guardian reported in June that NASA engineers were developing a tool akin to ChatGPT that would facilitate information-sharing between astronauts and spacecraft. The space agency has also worked with a type of digital engineering called “evolved structures,” which uses design software that incorporates AI-generated designs.

Nor is NASA the only agency attempting to rein in the use of generative AI tools. Across the federal government, agencies are grappling with how employees and contractors should and shouldn’t use those programs and developing preliminary policies.

An instructional letter the General Services Administration distributed to staff in June, for example, outlined “controlled access” to generative AI large language model tools on the agency network and equipment. 

Under the “interim policy,” GSA said it would block access to third-party generative AI large language model endpoints from the GSA network and government equipment but would make exceptions for research. The policy provided guidance on “responsible use” of those tools, such as not inputting non-public information.

At the Environmental Protection Agency, technology leaders took a similarly cautious approach, blocking use of the tools on an “interim basis” in a May internal memo. The EPA said it may reconsider that decision, however, and “allow use of such tools at a future time after further analysis and the implementation of appropriate guidance,” a spokesperson said in an email.

The Administration for Children and Families took a more permissive approach in its “interim policy” for staff and contractors in May. That memo didn’t include blocking of the tools, but similarly advised employees to not input things like non-public, personally identifiable, or protected health information.

In a LinkedIn post about the memo, Kevin Duvall, chief technology officer and acting chief information officer at ACF, described the agency’s approach as “balancing risk, while still exploring this technology and its potential to empower federal government employees to serve citizens even better.”

The Department of Health and Human Services, of which ACF is a sub-agency, is taking a similar approach to the tools.

“HHS is reminding its employees that they should always follow HHS existing policies regarding personal identifiable information and data protection, data storage, transmission, and sharing, and that these tools fall under existing policies and guidance of the HHS IT Rules of Behavior,” the department’s chief information officer, Karl S. Mathias, told FedScoop in a June written statement.

Mathias said at the time that HHS operating division chief information security officers were advised not to put sensitive information into tools like ChatGPT.

Relatedly, in June, the National Institutes of Health published a notice clarifying that NIH peer reviewers were prohibited from working with generative AI tools for the purpose of developing critiques on grant applications and contract proposals. 

Editor’s note, 7/19/23 at 3:33 p.m. ET: This story was updated to include comment from NASA. The piece has also been updated to note that NASA confirmed that guidance was sent in May, and to include details of the EPA’s current approach to using AI tools.


Technology Modernization Fund rescission: A chance to change course or the end of the road?

Last week, the Senate Financial Services and General Government Appropriations Subcommittee took an axe to GSA’s Technology Modernization Fund, proposing to rescind $290 million in previously appropriated funding. This move follows related action by the House Appropriations Committee earlier this year to zero out funding for TMF in their fiscal year 2024 bill. Clearly, TMF has a problem, at least in the eyes of congressional appropriators. Sadly, Capitol Hill’s concern is neither new nor unwarranted as there has been a growing chorus of policymakers in recent years concerned about the transparency and direction of this once-heralded program.

As someone who was involved in the very early stages of the discussion on the concept that became TMF, and a former congressional committee staffer myself, I share many of the concerns of my former colleagues in Congress. The TMF must find a way going forward, if it is to go forward, to be more open and transparent with Congress and the American public about the projects TMF has funded. Even as a close observer of the program, it’s often hard to tell exactly what’s being funded, who’s involved, and what we expect to achieve. If I were wearing my old congressional staffer hat, I’d be frustrated too.  

That said, when I heard the news last week, I was among the first to point out the important role that TMF has played in funding critical zero-trust cybersecurity and customer experience initiatives, helping in many ways to implement the requirements of the cyber and CX executive orders, as well as laws like 21st Century IDEA that are, for all intents and purposes, unfunded mandates. I said then, and I’ll say again: I think the proposed congressional action to rescind $290 million from TMF is short-sighted, particularly at this moment in our history. With our federal networks facing near-daily cyber incursions from rogue nation-states like Russia and China, we should be investing more, not less, in cybersecurity and IT modernization — and TMF is one tool in that toolbox.  

So, what do we do?  

First and foremost, let’s not give up on TMF. If Congress doesn’t think all of the projects that TMF has funded are worthy, sunlight is the best disinfectant. I encourage rigorous oversight, as the House Oversight and Accountability Committee is doing on the $187 million login.gov award, to determine where improvements can be made. If there are projects that Congress determines should not be funded, tackle those on a case-by-case basis.

Second, the TMF program management office needs to commit to being more open and transparent, as I noted above. Where’s the annual report to Congress that walks through what was funded, why it matters, who’s involved, when it will be completed, and expected outcomes? Something as simple as this would go a long way. The TMF PMO also needs to learn to promote their successes, acknowledge their failures and make the structural changes that may be necessary to get the program back on more solid ground. 

And about that requirement to pay back the “loans.”

The Senate Financial Services and General Government report accompanying the fiscal 2024 appropriations bill highlighted the lack of reimbursement by agencies that have received TMF funds as one of the main reasons for the proposed rescission. After the TMF received its $1 billion infusion from the American Rescue Plan, OMB and GSA — having listened to agency concerns — issued new guidance for the TMF. That guidance, in addition to encouraging the prioritization of both CX and cyber-related submissions, offered three new reimbursement paths: full, partial and zero.

The reason for this, as I understand it, was that agencies had expressed concerns that the requirement for full reimbursement made participation in the TMF a bridge too far for many. Why? At the end of the day, true savings are hard to identify and even harder to realize (and cost avoidance isn’t real money). Often, even after an IT modernization project is complete, there is a period of transition when the old and new systems may have to run side by side. The financial result is that whatever savings we hoped to find are likely to take a while to be realized, or may never be, and you can’t pay back what you don’t have. The reality is that some reimbursement flexibility is necessary; otherwise, as we saw in the early days of TMF, no one will want to participate. Congress needs to recognize and accept the reality of the reimbursement requirements.

I’ll close with this, paraphrasing what I told FedScoop last week: The TMF is not perfect, but it has provided a key source of funding for a variety of projects that may not have been funded otherwise. If Congress is serious about IT modernization, improving customer experience and protecting critical federal networks, TMF must be part of the equation going forward.

Mike Hettinger is a former House Oversight Government Operations Subcommittee staff director and founder of Hettinger Strategy Group.

ODNI awards Leidos $375M technology and analytical services contract

The Office of the Director of National Intelligence has awarded federal contractor Leidos a $375 million prime contract to provide the agency with intelligence, technical, financial and management services.

As part of the contract, Leidos will provide ODNI with a wide range of technology services including systems integration, cybersecurity, science and technology, IT project management, security and risk management.

The company will provide further management services including facilities support, assets, logistics and information.

The cost-plus-award-fee contract has a one-year base performance period and six one-year options. 

It is the latest major contract win for Leidos, which provides a range of technology and R&D services to government agencies including the Pentagon and the intelligence community.

Last year, the Defense Information Systems Agency issued an $11 billion contract to the company to consolidate the networks of non-warfighting defense support agencies.

Prior contract awards include a $390 million contract from the Department of Homeland Security for low-energy portal systems, which are used to conduct non-intrusive inspections of passenger vehicles.

The contractor also holds key military research and development contracts with the Pentagon. In December, the Air Force Research Lab picked Leidos to work on a new hypersonic platform that could both gather intelligence on adversaries and attack them.

Small business government contracting hits record high of $163B, SBA says

The federal government awarded $162.9 billion in contracts with small businesses in fiscal year 2022, exceeding its goal for working with small vendors and setting a record high, the Small Business Administration said.

The figure represents a 5.6% year-over-year increase in the volume of small business contract awards made by the U.S. government, up from a total of $154.2 billion during fiscal year 2021.

Over a quarter of federal government contract dollars — 26.5% — were awarded to small businesses in the last fiscal year, SBA Administrator Isabella Casillas Guzman said Tuesday at an event hosted at NASA’s Washington headquarters. The Biden administration’s goal was 23%.

Guzman said the federal government overall received an “A” on the SBA’s scorecard for work with small businesses for the last fiscal year. A total of ten federal agencies received an “A+” for their work with small businesses.

Guzman highlighted NASA at the Tuesday event, praising the agency for working “hand-in-hand” with small businesses to ensure access to contract vehicles. The agency received an “A” for fiscal year 2022.

Pam Melroy, NASA’s deputy administrator, said the agency invested $3.6 billion in 1,700 small businesses, exceeding its own goal. Small businesses made up 18.4% of NASA’s contracts in fiscal year 2022, Melroy said. The agency’s goal was 15.75%.

“That’s real money. That is real contribution to our mission, and our mission is pretty exciting because we’re working to go back to the moon,” Melroy said. “Not this time just to visit, but to live and to create the ability for humans to have a sustained presence throughout the solar system for science and exploration.”

Twenty-five small businesses contributed to the work that went into the Mars Perseverance rover, Melroy said. Small businesses worked on the rover’s robotic arm and the blades of the Ingenuity helicopter that works alongside Perseverance, she said. 

NASA also partnered with 491 small businesses in creating the James Webb Space Telescope, Melroy said. That work included solar cells, batteries, thruster valves, and the sun shield that allows the telescope to look into deep space, which Melroy called “probably one of the most significant technical achievements of the James Webb Space Telescope.”

While government-wide contracting for socioeconomic categories like small disadvantaged business and service-disabled veteran-owned small business exceeded goals, historically underutilized business zone (HUBZone) small business and women-owned small business (WOSB) were below targets, according to an SBA release Tuesday.

HUBZone small businesses were awarded a record $16.3 billion, despite not meeting the goal of making up 3% of total eligible dollars, SBA said.

Meanwhile, women-owned small businesses made up 4.6% of the fiscal year 2022 eligible dollars, which was below the administration’s goal of 5%. The total amount awarded to those contracts did increase by about $1.9 billion from the previous year, however.

SBA said it “remains dedicated to collaborating with contracting agencies, actively pursuing future changes to achieve the 5% WOSB goal.”

Only two of the 24 CFO Act agencies received a score below “B.” The Department of Health and Human Services received a “C” and the Department of Veterans Affairs received a “D.”

Sam Le, director for policy planning and liaison at the SBA, said in an interview with FedScoop that NASA does particularly well with its small business subcontracting.

“NASA buys things that are sometimes difficult for a small business to provide on its own,” Le said. “They’re buying rockets and complex research and development, but despite some of those requirements, NASA’s still able to meet its small business goals and particularly push on the prime contractor to use small businesses at the subcontracting level.”

Editor’s note, 7/18/23: This story was updated to include further details of other agencies’ scorecard performance and comment from Sam Le.