GSA seeks help to ‘get across the finish line’ modernizing cybersecurity, adopting zero trust

The General Services Administration has made important strides adopting a zero-trust cybersecurity model and “raising the bar” modernizing its security, CIO David Shive recently told FedScoop. But now the agency needs help from industry to “help us get across the finish line,” he said.

GSA recently issued a solicitation for cybersecurity support services that is meant to help the agency take those final steps in modernizing the way it delivers cyber services internally, Shive told FedScoop on a recent episode of the Daily Scoop Podcast.

“We’ve developed some maturity with cyber here at GSA, and we’re looking for partners that can demonstrate mature cyber operations in their past and help us lean pretty far forward with the use of cyber and protecting the business interests of GSA,” Shive said. While the solicitation wasn’t publicly available, a GSA spokesperson pointed FedScoop to a listing on the agency’s Acquisition Hallway forecasting the opportunity.

Explaining the scope of the contract solicitation, Shive said it’s quite broad and that GSA “looks to deliver a unified, defensible cybersecurity boundary with a focus on operational excellence.” However, because the solicitation is still open for bidding, Shive said he had to refrain from commenting on it too extensively to provide a “fair and equitable acquisition experience for anybody who might like to do work with us.”

“They have to be able to demonstrate that they can drive down risks, strengthen resilience within the enterprise, and maintain effective and compliant programs to facilitate innovation,” he said, highlighting that innovation is “kind of one of the hallmarks here at GSA. And so they need to be able to deploy and defend in that attitude of innovation that’s present here at GSA.”

Shive continued listing out what types of services the contract seeks: “Zero trust architectures, security delivery via product versus services orientation, infrastructure and security as code, security operations … true enterprise security visibility, security automation and augmentation — we’ve been doing that for a long time here at GSA. They need to be able to help us run our security operations center and incident response centers, be able to do cyber threat intelligence … be able to do cyber threat hunting. And then because we’ve been doing DevSecOps here at GSA for a long time, using agile for a long time, they need to fit seamlessly into that because they’re the ‘Sec’ in DevSecOps.”

And, finally, as GSA continues its journey to zero trust, it’s placing more emphasis on “the application security layer,” Shive said, and it will need a partner who can support that.

That shift to zero trust has presented GSA with an opportunity to pivot in the way it thinks about cybersecurity, the CIO said.

“That pivot is we’ve evolved from that traditional perimeter-based, compliance-oriented model to a zero-trust architecture that considers resources and accesses as fundamentally untrusted,” Shive said. “Instead of verifying devices at the perimeter, we verify everything and anything attempting to access anything within GSA. And we do that continually. This represents one of the key changes from the traditional model that we’ve been operating against. We’re pretty far along and are seeing the results that we hoped for.”
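In concrete terms, that continuous-verification model means every request is evaluated at the moment of access against identity, device posture, and resource-level policy, with no implicit trust granted by network location. Here is a minimal sketch of the pattern in Python; the names, roles, and policy checks are entirely hypothetical and illustrate the general idea rather than GSA's actual implementation:

```python
# Minimal zero-trust-style, per-request authorization check.
# Illustrative only: identities, policies, and checks are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_compliant: bool  # e.g., managed, patched endpoint
    mfa_verified: bool
    resource: str
    action: str

# Policy table: (role, resource, action) tuples that are explicitly allowed.
POLICY = {
    ("analyst", "finance-db", "read"),
    ("admin", "finance-db", "write"),
}

def role_of(user_id: str) -> str:
    # Placeholder lookup; a real system would query an identity provider.
    return {"alice": "analyst", "bob": "admin"}.get(user_id, "unknown")

def authorize(req: AccessRequest) -> bool:
    """Verify every request on every access; nothing is trusted by default."""
    if not (req.device_compliant and req.mfa_verified):
        return False
    return (role_of(req.user_id), req.resource, req.action) in POLICY

# Each access is checked independently and continually:
print(authorize(AccessRequest("alice", True, True, "finance-db", "read")))   # True
print(authorize(AccessRequest("alice", True, False, "finance-db", "read")))  # False: no MFA
```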

Now, Shive said, the agency just needs a good partner from industry to help finish that journey.

ICE lacks automated technology to identify imports at high risk for money laundering, watchdog says

U.S. Immigration and Customs Enforcement doesn’t have automated technology to identify imports at high risk for trade-based money laundering and lacks staffing needed to identify those schemes, an agency watchdog found. 

Those technology and staffing issues mean imports related to trade-based money laundering schemes “will continue to go undetected, thus allowing transnational criminal organizations to finance activities, threatening U.S. national security,” the U.S. Department of Homeland Security’s inspector general found in a recent report.

In lieu of automated technologies, the report found ICE currently identifies trade-based money laundering risks through “time-consuming” manual searches of import records. The inspector general pointed to funding constraints and competing priorities as reasons the agency hasn’t developed a technology to aid those efforts.

The report included two recommendations: that the executive associate director for homeland security investigations develop and implement a plan, including estimated funding and a timeline, for upgrading ICE’s Data Analysis & Research for Trade Transparency System, which the agency uses to analyze U.S. and foreign trade data for anomalous patterns; and that the agency analyze the Trade Transparency Unit’s workforce to determine what additional staff is needed.
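To illustrate the kind of automated screening the watchdog says is missing, one classic trade-based money laundering signal is a unit price that deviates sharply from the going rate for a commodity, since over- and under-invoicing are common ways to move value across borders. Below is a rough Python sketch of such a check, using made-up data and a hypothetical helper rather than anything ICE actually runs:

```python
# Hypothetical unit-price outlier check for import records; toy data.
from statistics import median

def flag_price_anomalies(records, threshold=3.0):
    """Flag records whose unit price is far from the commodity's median."""
    unit_prices = {}
    for r in records:
        unit_prices.setdefault(r["commodity"], []).append(r["value"] / r["quantity"])
    flagged = []
    for r in records:
        typical = median(unit_prices[r["commodity"]])
        unit = r["value"] / r["quantity"]
        if typical and max(unit / typical, typical / unit) >= threshold:
            flagged.append(r)  # candidate over- or under-invoicing
    return flagged

imports = [
    {"commodity": "copper wire", "value": 50_000, "quantity": 1_000},
    {"commodity": "copper wire", "value": 52_000, "quantity": 1_050},
    {"commodity": "copper wire", "value": 400_000, "quantity": 1_000},  # outlier
]
print(flag_price_anomalies(imports))  # flags the $400-per-unit shipment
```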

In a response included in the report, ICE agreed with the recommendations and described its current work to achieve them. ICE said it has established work plans and estimates that the advanced analytics features, including trade-based money laundering identification, will be added by March 31, 2025.

ICE also said it conducted a workforce survey, which it will submit to its chief financial officer for approval, and has requested staffing increases through the annual process. It estimated that recommendation will be completed by March 29, 2024. 

Veterans Affairs CIO ‘cautiously optimistic’ Oracle Cerner can turn around EHR modernization under new contract

Following the recent renegotiation of the Department of Veterans Affairs’ contract with Oracle Cerner to modernize its electronic health record system after a slew of issues forced the VA to pause its rollout, Kurt DelBene, the department’s chief information officer, sounded a note of optimism that the program is now headed in a positive direction.

“I guess I’d say I’m cautiously optimistic,” DelBene told FedScoop on an episode of the Daily Scoop Podcast this week.

DelBene reasoned that the schedule the VA adopted in its initial attempt at rolling out the $16 billion EHR modernization program was “pretty aggressive” but that the department “learned a ton from the five sites that we’ve deployed to” in terms of the functional usability of the system as well as its resilience and reliability.

The VA in April suspended the rollout of the EHR as part of a major reset, saying it wouldn’t be brought back online until it is “highly functioning.” Then in May, the department announced it had reached a new agreement with Oracle Cerner — the contractor that is developing and delivering the EHR platform — that it said “dramatically increases” the government’s ability to hold the contractor accountable for reliability, responsiveness and interoperability.

“I feel good about the fact that we have taken a pause, that we have very concrete requirements, a number of them are actually put into the update to the contract. So we now have lots and lots of SLA, or service level agreement, dimensions that are actually spelled out with penalties,” DelBene said.

DelBene, a former senior executive with Microsoft, continued: “Coming from the commercial world, I looked at it from the vantage point, if we were the recipient of this contract, how would I feel about … the difficulty of meeting these expectations and how much I’m being held accountable? And I feel very good. We took all dimensions of the performance of the system.”

So why the caution? DelBene understands the reality that this is an extremely complex system the VA and its partner Oracle Cerner are attempting to deliver to clinicians.

“It’s really complex to change your health record system. I think we’ve done the right thing in getting to the pause, getting to criteria to launch again, and I am optimistic,” he said.

The Pentagon’s success in rolling out its modernized EHR, which is based around the same Oracle Cerner platform, gives DelBene hope as well — though it didn’t face quite the same scale and complexity as VA’s system.

“I think that’s a signal that we can make it work. But there’s been a certain uniqueness and variability of the way healthcare is delivered in the VA. And we’ve had to do more customizations to actually just facilitate how physicians and caregivers work in the VA,” he said.

With those challenges in mind, things are getting better and VA is getting “further along” on the journey to be at a point where it will decide to resume the rollout of the EHR, DelBene said. However, “We’re not going to start our resume from the pause until we feel good about where we’re going next,” he added.

During the wide-ranging conversation with FedScoop, DelBene also discussed how he’s approaching digital transformation within the VA, the changes he’s ushered in to secure better pay for IT and cyber professionals, and how he’s thinking about artificial intelligence.

National Archives discloses planned AI uses for record management

In an inventory published earlier this month, the National Archives and Records Administration revealed that it plans to use several forms of AI to help manage its massive trove of records.

In its 2023 AI use case inventory, the agency charged with managing U.S. government documents disclosed it wants to use an AI-based system to autofill metadata for its archival documents. Similar to some other agencies, the National Archives also disclosed its interest in using the technology to help respond to FOIA requests.
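As a rough illustration of what metadata autofill can mean in practice, even simple text processing can propose candidate catalog fields. The toy sketch below, which is not NARA's system, pulls a likely year and frequent subject words out of a document's text as suggested metadata:

```python
# Toy metadata-autofill sketch (illustrative only, not NARA's planned system).
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "to", "a", "in", "for", "on", "this", "that"}

def suggest_metadata(text: str) -> dict:
    years = re.findall(r"\b(?:17|18|19|20)\d{2}\b", text)  # candidate years
    words = re.findall(r"[a-z]{4,}", text.lower())
    keywords = [w for w, _ in Counter(
        w for w in words if w not in STOPWORDS).most_common(5)]
    return {"suggested_year": years[0] if years else None,
            "suggested_keywords": keywords}

doc = "Report of the Bureau of Land Management, 1947. Surveys of public lands in Oregon."
print(suggest_metadata(doc))
# {'suggested_year': '1947', 'suggested_keywords': ['report', 'bureau', ...]}
```

A production system would use trained language models rather than regular expressions, but the interface, document text in and suggested fields out, is the same.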

While NARA shared these planned applications, it did not include any current, operational use cases of AI.

The list of AI use cases is required of most federal agencies under a 2020 executive order (EO 13960). Those inventories must be posted publicly and annually.

The agency’s public release of its AI inventory comes after FedScoop reported that the National Archives had published its list only on MAX.gov, a platform for sharing information within the government. The Office of Management and Budget later re-emphasized that agencies are required to release a list on their agency website, in addition to the MAX portal.

“The National Archives is excited about the use of AI/ML/RPA and how we can utilize these technologies to help with natural language processing, search, and process automation,” said NARA Chief Information Officer Sheena Burrell in a previous statement to FedScoop. 

Burrell also said the agency was in the process of developing a governance life cycle for AI “along with the evaluation criterias to assess our AI solutions for compliances in accordance with the Executive Order.”

While the agency provided most details required under the Federal CIO Council’s 2023 guidance for the inventories — and additional optional information — it appears to follow a format consistent with the guidance for the previous year’s inventories. As a result, it doesn’t include whether the use is contracted or consistent with the executive order. It also doesn’t include columns for dates that note when stages in a use case’s life cycle take place.

Researchers at Stanford’s RegLab reported widespread lagging compliance in the first year of agencies’ use case inventories in a December 2022 report about the country’s AI strategy. Recent FedScoop reviews of agency use case inventories found inconsistencies in reporting have persisted.

A court ruling is forcing small businesses to detail bias to keep special contracting status

Small business owners who qualified for a government contracting program because of their presumed disadvantage as members of certain racial or ethnic groups are now required to describe precisely how they’ve been discriminated against in order to continue receiving contract awards under the program.

The new “narrative” requirement for the Small Business Administration’s 8(a) Business Development program, which is aimed at opening the federal contracting world to disadvantaged businesses, comes in light of a recent court ruling enjoining its use of presumed racial and ethnic disadvantage as a qualification. 

While the narratives will allow the SBA to keep things moving, they also stand to jeopardize both participants’ membership in the program and their future 8(a) contract opportunities, lawyers and experts said in interviews with FedScoop. They stressed the importance of providing detailed narratives.

“Take it seriously. Take it very seriously. Because not only do you have to have the social narrative, it has to be approved by SBA,” said Robb Wong, a former associate administrator of the SBA’s Office of Government Contracting & Business Development who, in an earlier role at the agency, helped write the 8(a) program eligibility rules.

Without a narrative, Wong said businesses in the program won’t be able to get new contracts, and even with a narrative, there’s still a possibility that SBA could disapprove it and businesses could lose their 8(a) certification. 

The July ruling by the U.S. District Court for the Eastern District of Tennessee struck down the program’s use of what’s known as a “rebuttable presumption” that certain racial and ethnic groups — including Black, Hispanic, and Asian Pacific Americans — have been subject to prejudice and are therefore socially disadvantaged. 

That presumption made it easier for businesses owned by people belonging to one of those groups to qualify for the program’s social disadvantage requirement. The court, however, said the presumption ran afoul of the constitutional right to equal protection. The opinion cited the Supreme Court’s ruling, just three weeks prior, that colleges can’t use race as a factor in admissions through affirmative action.

“The thing about these narratives is that they require a person to go into extensive detail about something that happened to them that they very well may want to forget,” said Matthew Moriarty, a federal contracting attorney and founding member of Schoonover & Moriarty who has helped clients with narratives.

Narratives have previously been a part of the application process for business owners outside the presumed groups wanting to establish social disadvantage for things like disability, religion, sexual orientation, and gender. The new guidance expands the pool of businesses that must complete them.

“I think people really need to consider these to be significant legal documents that have the chance to, unfortunately, at this point, make or break a business,” Moriarty said.

The 8(a) program is aimed at helping socially and economically disadvantaged businesses contract with the federal government over a maximum of nine years. To qualify, businesses must be at least 51% owned and operated by one or more U.S. citizens who also meet social and economic disadvantage requirements. As participants in the program, businesses are able to get contract opportunities specifically set aside for the 8(a) program.

The Biden administration has sought to expand opportunities for the program, which is an important piece in its goal to achieve 15% of prime contracting awards going to small disadvantaged businesses by 2025. Awards for 8(a) businesses made up about 5.4% of all federal contracts awarded in fiscal year 2021, or roughly $34.4 billion, according to a Congressional Research Service report last year.

“As we work with the Department of Justice to continue reviewing the District Court’s ruling and evaluating the next steps, the SBA and Biden-Harris Administration remain committed to supporting this crucial program and the small business owners who have helped drive America’s strong economic growth,” SBA Administrator Isabella Casillas Guzman said in a statement announcing the new guidance Friday.

The SBA had previously announced a temporary suspension of new applications to comply with the court ruling; that suspension is still in effect.

Under the new guidance, 8(a) participants were to receive letters Monday either detailing the process for creating a narrative or telling them they’ve already established disadvantage. SBA’s guidance also clarified that the new requirement doesn’t apply to businesses that previously established social disadvantage in a narrative or entity-owned firms, which refers to businesses owned by groups such as Indian tribes and Alaska Native Corporations. 

“The hardest part about the narrative — and frankly, this is the reason that even some attorneys struggle with the narrative — is it is a very unique meshing of emotional and detailed writing from kind of a personal perspective and one that is very hard for some people to relate to,” said Nicole Pottroff, an equity partner at Koprince McCall Pottroff LLC who specializes in 8(a) narratives.

The window to complete those narratives is tight when a contract offer is on the table. Generally, when companies get a contract offer, the SBA has about five days to accept it on that firm’s behalf, Pottroff said.

But narratives, at least in the past, have taken time. Pottroff said the SBA typically comes back with questions asking for more information. She said clients have frequently come to her firm because everything else about their 8(a) applications is fine except for the narrative. 

“Even when we write fantastic narratives, the questions can be as simple as we’d like some more details on this event, or can you tell us a little more specifically how … you felt this event was based on bias,” Pottroff said. It’s not clear whether that will continue to be the norm, she said.

Narratives require business owners to outline exactly how they’ve experienced bias and discrimination based on their identity. That needs to be supported by detailed descriptions of incidents that show “chronic and substantial social disadvantage,” according to SBA guidance. Those descriptions should include “who, what, where, why, when, and how discrimination or bias occurred,” SBA says. 

While businesses can complete the narratives themselves, Pottroff said that the SBA’s requirements are very specific and recommended that companies that have the ability and resources seek assistance. However, Pottroff also said she hopes the SBA process generally gets easier so that more people can successfully complete them on their own.

Wong said whether a business owner should seek counsel depends on the person. He recommends business owners write their narratives in three sections for education, employment, and business with three examples each. “And try to be as specific as possible,” Wong said. 

Moriarty said those examples can include details as specific as what someone was wearing at the time or what car they were driving. 

While some delays or inconsistencies are possible with the change, Wong said he expects the end of the fiscal year to be “fairly unremarkable” for SBA. He voiced support for SBA’s longtime associate general counsel for procurement law, John Klein, who has shared guidance about the change with those in the 8(a) community. Wong said he believes Klein will come up with a solution that’s “efficient and effective for government.”

In the meantime, Moriarty emphasized “the clock is ticking” for businesses that want to be eligible for upcoming contract awards.

“If you’re an 8(a) and there’s a contract that you have an eye on that you want to be insured that you’re eligible for, there is no time like the present to get moving on this thing,” Moriarty said.

NIST announces progress on quantum attack-resistant algorithms

The National Institute of Standards and Technology has taken a significant step forward in its plans to release several algorithms designed to defend against quantum computer-based attacks, according to an agency brief published Thursday.

The agency, which began ramping up its post-quantum cryptography efforts back in 2016, has released draft standards for three algorithms designed to protect systems and data from attacks facilitated by quantum computers. By creating these standards, NIST is moving closer to its goal of eventually sharing these algorithms publicly, which would allow organizations to incorporate them into their systems.

These three algorithms were originally selected in 2022. Standards for a fourth algorithm, also selected last year, are expected to be published in about a year, NIST added. At the same time, the agency hinted that another crop of post-quantum encryption standards could soon become available.

“In addition to the four algorithms NIST selected last year, the project team also selected a second set of algorithms for ongoing evaluation, intended to augment the first set,” noted the brief. “NIST will publish draft standards next year for any of these algorithms selected for standardization.”

These new encryption standards are supposed to take the place of cryptographic standards deemed the most vulnerable. The agency is accepting feedback on the three algorithms’ draft standards until late November.
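For a sense of how these algorithms get used: one of the three drafts standardizes ML-KEM (derived from the CRYSTALS-Kyber scheme), a key-encapsulation mechanism that lets two parties derive a shared secret, while the other two drafts cover digital signatures. The sketch below shows the basic encapsulation flow, assuming the open-source liboqs-python bindings from the Open Quantum Safe project; exact method names and algorithm identifiers can differ across library versions:

```python
# Key-encapsulation flow, assuming liboqs-python (Open Quantum Safe project);
# method names and algorithm identifiers may vary by library version.
import oqs

ALG = "Kyber768"  # the CRYSTALS-Kyber scheme behind the draft ML-KEM standard

with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()       # receiver publishes this

    with oqs.KeyEncapsulation(ALG) as sender:
        # Sender derives a shared secret plus a ciphertext that only the
        # receiver's secret key can open, then transmits the ciphertext.
        ciphertext, secret_sender = sender.encap_secret(public_key)

    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver  # both sides now share a key
# The shared secret would then key a symmetric cipher such as AES-GCM.
```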

Rep. Mace proposes new vulnerability disclosure rules for contractors

Rep. Nancy Mace, R-S.C., has proposed new legislation that would extend vulnerability disclosure policies, a formalized way for people to share observed or potential cybersecurity flaws with an organization, to federal contractors.

While the Office of Management and Budget instructed federal agencies to implement VDPs back in 2020, this latest proposal, the Federal Cybersecurity Vulnerability Reduction Act, focuses on pushing federal contractors to do the same. The bill comes as there’s a growing focus being placed on securing sensitive federal information housed on contractor-owned systems through initiatives like the Pentagon’s Cybersecurity Maturity Model Certification.

The legislation orders OMB, along with the directors of the National Institute of Standards and Technology and the Cybersecurity and Infrastructure Security Agency and the National Cyber Director, to recommend new requirements to the Federal Acquisition Regulation Council, which helps coordinate the government’s approach to procurement. Those updates, the legislation proposes, should include VDPs consistent with NIST guidelines.

The legislation also stipulates that chief information officers may waive VDP requirements if doing so is necessary in the interest of national security or research. The bill also outlines specific responsibilities for the Department of Defense.

In its 2020 memo, OMB said that VDPs “are among the most effective methods for obtaining new insights regarding security vulnerability information and provide high return on investment.” In particular, the agency noted that this approach provides protection to those who report vulnerabilities — and helps differentiate between “good faith” researchers and those using “unacceptable” methods.
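In practice, a common first step in implementing a VDP is publishing a machine-readable security.txt file (RFC 9116) that tells researchers where to send reports. A minimal, hypothetical example, with placeholder contact and policy URLs, written out here in Python:

```python
# Hypothetical security.txt per RFC 9116; the URLs below are placeholders.
SECURITY_TXT = """\
Contact: mailto:security@example.gov
Expires: 2024-12-31T23:59:59Z
Policy: https://example.gov/vulnerability-disclosure-policy
Preferred-Languages: en
"""

# Conventionally served at https://<domain>/.well-known/security.txt
with open("security.txt", "w") as f:
    f.write(SECURITY_TXT)
```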

Organizations often use VDPs as a starting point to launch bounty programs, in which they pay cybersecurity researchers to report vulnerabilities found in their systems. The Pentagon has employed a VDP since 2016 and hosted numerous bug bounty efforts.

“When federal contractors can effectively address security vulnerabilities, every U.S. citizen will be better protected against cyberattacks,” said Marten Mickos, the CEO of HackerOne, a cybersecurity firm supporting the legislation, in a statement shared with FedScoop.

Experts warn of ‘contradictions’ in Biden administration’s top AI policy documents

The Biden administration’s cornerstone artificial intelligence policy documents, released in the past year, are inherently contradictory and provide confusing guidance for tech companies working to develop innovative products and the necessary safeguards around them, leading AI experts have warned.

Speaking with FedScoop, five AI policy experts said adhering to both the White House’s Blueprint for an AI ‘Bill of Rights’ and the AI Risk Management Framework (RMF), published by the National Institute of Standards and Technology, presents an obstacle for companies working to develop responsible AI products.

However, the White House and civil rights groups have pushed back on claims that the two voluntary AI safety frameworks send conflicting messages and have highlighted that they are a productive “starting point” in the absence of congressional action on AI. 

The two policy documents form the foundation of the Biden administration’s approach to regulating artificial intelligence. But for many months, there has been an active debate among AI experts regarding how helpful — or in some cases hindering — the Biden administration’s dual approach to AI policymaking has been.

The White House’s Blueprint for an AI ‘Bill of Rights’ was published last October. It takes a rights-based approach to AI, using broad fundamental human rights as a starting point for the regulation of the technology. That was followed in January by the risk-based AI RMF, which assesses the scale and scope of risks tied to concrete use cases and recognized threats as a way to build trustworthiness into the technology.

Speaking with FedScoop, Daniel Castro, a technology policy scholar and vice president at the Information Technology and Innovation Foundation (ITIF), noted that there are “big, major philosophical differences in the approach taken by the two Biden AI policy documents,” which are creating “different [and] at times adverse” outcomes for the industry.

“A lot of companies that want to move forward with AI guidelines and frameworks want to be doing the right thing but they really need more clarity. They will not invest in AI safety if it’s confusing or going to be a wasted effort or if instead of the NIST AI framework they’re pushed towards the AI blueprint,” Castro said.

Castro’s thoughts were echoed by Adam Thierer of the libertarian nonprofit R Street Institute who said that despite a sincere attempt to emphasize democratic values within AI tools, there are “serious issues” with the Biden administration’s handling of AI policy driven by tensions between the two key AI frameworks.

“The Biden administration is trying to see how far it can get away with using their bully pulpit and jawboning tactics to get companies and agencies to follow their AI policies, particularly with the blueprint,” Thierer, senior fellow on the Technology and Innovation team at R Street, told FedScoop.

Two industry sources who spoke with FedScoop but wished to remain anonymous said they felt pushed toward the White House’s AI blueprint over the NIST AI framework in certain instances during meetings regarding AI policymaking with the White House’s Office of Science and Technology Policy (OSTP).

Rep. Frank Lucas, R-Okla., chair of the House Science, Space and Technology Committee, and House Oversight Chairman Rep. James Comer, R-Ky., have been highly critical of the White House blueprint as it compares to the NIST AI Risk Management Framework, expressing concern earlier this year that the blueprint sends “conflicting messages about U.S. federal AI policy.”

In a letter obtained exclusively by FedScoop, OSTP Director Arati Prabhakar responded to those concerns, arguing that “these documents are not contradictory” and highlighting how closely the White House and NIST are working together on future regulation of the technology.

At the same time, some industry AI experts say the ways in which the two documents define AI clash with one another.

Nicole Foster, who leads global AI and machine learning policy at Amazon Web Services, said chief among the concerns with the documents are diverging definitions of the technology itself. She told FedScoop earlier this year that “there are some inconsistencies between the two documents for sure. I think just at a basic level they don’t even define things like AI in the same way.”

Foster’s thoughts were echoed by Raj Iyer, global head of public sector at cloud software provider ServiceNow and former CIO of the U.S. Army, who believes the two frameworks are a good starting point to get industry engaged in AI policymaking but that they lack clarity.

“I feel like the two frameworks are complementary. But there’s clearly some ambiguity and vagueness in terms of definition,” said Iyer.

“So what does the White House mean by automated systems? Is it autonomous systems? Is it automated decision-making? What is it? I think it’s very clear that they did that to kind of steer away from wanting to have a direct conversation on AI,” Iyer added.

Hodan Omaar, an AI and quantum research scholar working with Castro at ITIF, said the two documents appear to members of the tech industry as if they are on different tracks. According to Omaar, the divergence creates a risk that organizations will simply defer to either the “Bill of Rights” or the NIST RMF and ignore the other.

“There are two things the White House should be doing. First, it should better elucidate the ways the Blueprint should be used in conjunction with the RMF. And second, it should better engage with stakeholders to gather input on how the Blueprint can be improved and better implemented by organizations,” Omaar told FedScoop.

In addition to compatibility concerns about the two documents, experts have also raised concerns about the process followed by the White House to take industry feedback in creating the documents.

Speaking anonymously in order to talk freely, one industry association AI official told FedScoop that listening sessions held by the Office of Science and Technology Policy were not productive.

“The Bill of Rights and the development of that, we have quite a bit of concern because businesses were not properly consulted throughout that process,” the association official said. 

The official added: “OSTP’s listening sessions were just not productive or helpful. We tried to actually provide input in ways in which businesses could help them through this process. Sadly, that’s just not what they wanted.”

The AI experts’ comments come as the Biden administration works to establish a regulatory framework that mitigates potential threats posed by the technology while supporting American AI innovation. Last month, the White House secured voluntary commitments from seven leading AI companies about how AI is used, and it is expected to issue a new executive order on AI safety in the coming weeks.

One of the contributors to the White House’s AI Blueprint sympathizes with concerns from industry leaders and AI experts regarding the confusion and complexity of the administration’s approach to AI policymaking. But it’s also an opportunity for companies seeking voluntary AI policymaking guidance to put more effort into asking themselves hard questions, he said.

“So I understand the concerns very much. And I feel the frustration. And I understand people just want clarity. But clarity will only come once you understand the implications, the broader values, discussion and the issues in the context of your own AI creations,” said Suresh Venkatasubramanian, a Brown University professor and former top official within the White House’s OSTP, where he helped co-author its Blueprint for an ‘AI Bill of Rights.’ 

“The goal is not to say: Do every single thing in these frameworks. It’s like, understand the issues, understand the values at play here. Understand the questions you need to be asking from the RMF and the Blueprint, and then make your own decisions,” said Venkatasubramanian.

On top of that, the White House Blueprint co-author wants those who criticize the documents’ perceived contradictions to be more specific in their complaints.

“Tell me a question in the NIST RMF that contradicts a broader goal in the White House blueprint — find one for me, or two or three. I’m not saying this because I think they don’t exist. I’m saying this because if you could come up with these examples, then we could think through what can we do about it?” he said.

Venkatasubramanian added that he feels the White House AI blueprint in particular has faced resistance from industry because “for the first time someone in a position of power came out and said: What about the people?” when it comes to tech innovation and regulations. 

Civil rights groups like the Electronic Privacy Information Center have also joined the greater discussion about AI regulations, pushing back on the notion that industry groups should play any significant role in the policymaking of a rights-based document created by the White House.

“I’m sorry that industry is upset that a policy document is not reflective of their incentives, which is just to make money and take people’s data and make whatever decisions they want to make more contracts. It’s a policy document, they don’t get to write it,” said Ben Winters, the senior counsel at EPIC, where he leads their work on AI and human rights.

Groups like EPIC and a number of others have called upon the Biden administration to take more aggressive steps to protect the public from the potential harms of AI.

“I actually don’t think that the Biden administration has taken a super aggressive role when trying to implement these two frameworks and policies that the administration has set forth. When it comes to using the frameworks for any use of AI within the government or federal contractors or recipients of federal funds, they’re not doing enough in terms of using their bully pulpit and applying pressure. I really don’t think they’re doing too much yet,” said Winters.

Meanwhile, the White House has maintained that the two AI documents were created for different purposes but designed to be used side-by-side as initial voluntary guidance, noting that both OSTP and NIST were involved in the creation of both frameworks.

OSTP spokesperson Subhan Cheema said: “President Biden has been clear that companies have a fundamental responsibility to ensure their products are safe before they are released to the public, and that innovation must not come at the expense of people’s rights and safety. That’s why the administration has moved with urgency to advance responsible innovation that manages the risks posed by AI and seizes its promise — including by securing voluntary commitments from seven leading AI companies that will help move us toward AI development that is more safe, secure, and trustworthy.”

“These commitments are a critical step forward and build on the administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. The administration is also currently developing an executive order that will ensure the federal government is doing everything in its power to support responsible innovation and protect people’s rights and safety, and will also pursue bipartisan legislation to help America lead the way in responsible innovation,” Cheema added.

NIST did not respond to requests for comment.

How federal CDOs can unlock the power of artificial intelligence

There’s palpable buzz around artificial intelligence. It promises to transform the way that people and organizations operate. For government agencies, AI models can offer a wide range of solutions – from improved public safety and better citizen services to enhanced decision-making and more robust fraud detection.

But AI also carries a whole host of inherent risks and poses a serious set of challenges.

The success of any government effort to safely and effectively leverage AI will hinge on each agency’s data culture, along with its ability to execute robust, transparent and trustworthy operations. Data will serve as an agency’s most important asset and set the foundation for efficacious AI models. And that’s where the Chief Data Officer steps in.

As of last year, more than 75 federal agencies and sub-agencies have appointed CDOs. Major policies — including the Foundations of Evidence-Based Policymaking Act and the Federal Data Strategy — have bolstered the CDO’s authority to improve the accessibility and serviceability of federal data. However, although a significant amount of responsibility rides on the shoulders of government CDOs, they still face a number of ambiguities in their roles. A 2022 Data Foundation survey found that only 52 percent of federal CDOs reported that their individual responsibilities are “very” or “completely” clear, and only 17 percent of federal CDOs believe they have all the resources needed to succeed.

For government agencies to safely and successfully realize the transformational potential of AI, CDOs must guide the way. And to get there, CDOs can deploy four key strategies and rely on this playbook to drive change across their respective agencies.

It’s true: AI has the potential to revolutionize how government operates. But government CDOs must play a critical role in preparing their agencies for the technology and advancing AI adoption safely and effectively. By continuing to drive transformational change within agencies and maximizing the use of secure, trustworthy and transparent data, CDOs can help unlock impactful technologies across the federal government.

Adita Karkera is the chief data officer for Deloitte’s Government and Public Services practice and a fellow at the Deloitte AI Institute for Government. She spent nearly 20 years with the Arkansas Department of Information Systems and served as Arkansas’ Deputy State Chief Data Officer from 2017 through 2021.

Foreign intelligence warning to space industry highlights risks

The Office of the Director of National Intelligence last week released a brief highlighting potential foreign intelligence risks to the U.S. commercial space industry.

The warning served as a reminder that as the space startup scene continues to develop, “foreign intelligence entities” might attempt to steal technology assets and intellectual property.

The brief highlighted that this threat could manifest in myriad ways, including requests for on-site visits to companies, targeted questions about proprietary information, and the recruitment of technical experts for consultancy work. These entities might also try to access information by forming subsidiaries in third countries “designed to obscure the parent company’s connections.”

The space startup industry is particularly vulnerable, said Scott Pace, the director of the Space Policy Institute at George Washington University. While long-time defense companies have extensive experience with preparing for potential intelligence threats, smaller companies don’t have the same resources, particularly around cybersecurity and personnel training.

“The growth of innovative commercial space companies is recognized by Russia and China as important to U.S. strategic advantage,” Pace added. “It’s no surprise that they would be seeking to gain information on those companies, steal intellectual property, and possibly undercut their capabilities if possible.”

The problem is significant enough to threaten national security, DNI warned, since adversaries could try to interrupt satellite services, collect sensitive data, and target commercial space infrastructure during conflict.

Accessing intellectual property and other proprietary data might also undermine the U.S. space industry’s economic security and influence in the global market.

“[I]f a U.S. space company has a core technology that provides advantage to the U.S., be it software or hardware, if China’s espionage gets that technology, the U.S. will lose that advantage,” explained Namrata Goswami, a space policy analyst, in an email to FedScoop. “If adversarial foreign entities get access to space systems and create cyber vulnerabilities, the US cannot use that system reliably because there is the danger of malfunction.”

For this reason, Goswami said that DNI’s warning is “very timely.” She added that the stakes are high, in part, because of ongoing limitations on exporting critical technologies to China. U.S. officials have consistently emphasized the importance of securing the supply chain for space technology.