GOP lawmakers criticize federal agencies for failing to provide telework policy docs
House Republican lawmakers on Monday blasted Biden administration agencies for allegedly failing to turn over materials on telework and remote work policies that the House Oversight and Accountability Committee requested months ago as part of an investigation into those policies and their effect on agency performance.
House Oversight Chairman James Comer, R-Ky., Rep. Pete Sessions, R-Texas, chairman of the Subcommittee on Government Operations and the Federal Workforce, and Rep. Lauren Boebert, R-Colo., renewed their initial May request to Biden administration agencies regarding telework and remote work.
“One of two options is currently playing out: either federal agencies are withholding information from Congress or federal agencies are not tracking telework and remote work policies as required by the law,” said Comer, Sessions, and Boebert in letters to dozens of federal agencies.
“Both possibilities are deeply concerning. The American people show up to work every day and federal agencies should follow their example. Committee Republicans remain steadfast in our pursuit of answers and if federal agencies continue to withhold this information, we will resort to compulsory measures,” the lawmakers said.
The Republican lawmakers, in the latest missive, said the Biden administration has not provided them current data on how much telework is occurring within individual federal agencies or across the entire federal workforce and has provided “no evidence concerning the impact of elevated telework on agency performance.”
GOP lawmakers have sought to investigate agencies’ varying approaches to telework, and in January introduced the SHOW UP Act, which was intended to compel departments to return to their pre-pandemic telework policies. That legislation was introduced by Comer along with Reps. Andy Biggs, R-Ariz., Byron Donalds, R-Fla., and Michael Cloud, R-Texas.
Furthermore, they cite a recent Government Accountability Office (GAO) study on federal building occupancy, which suggests that in some components of federal agencies the vast majority of employees are not coming into the office on a regular basis, with some agencies reporting occupancy rates as low as 9%.
Last week, President Biden called for his Cabinet to “aggressively execute” plans for federal employees to carry out more in-office work this fall after years of working remotely.
Congressional inquiries to OPM have surged
Amid scrutiny of the retirement services division within the Office of Personnel Management, congressional inquiries to the agency have grown drastically, according to a February letter sent by Retirement Services Associate Director Margaret Pearson.
According to the missive, which was sent in response to questions from House lawmakers, OPM’s Congressional, Legislative, and Intergovernmental Affairs branch received more than 9,000 congressional inquiries in 2022, compared with more than 3,000 in 2020. In other words, the number of inquiries from Congress to the agency roughly tripled in two years.
FedScoop obtained the letter from Pearson through a Freedom of Information Act request.
Retirement services managers assigned to a retirement case are notified when a congressional inquiry arrives about that applicant, Pearson wrote. In the letter, she added that the agency’s CLIA “is working to improve its operations regarding congressional inquiries by focusing on customer service, improving processing times and educating congressional offices about best practices.”
“Seems like average response time of ~4 months to congressional inquiry,” observed Jason Briefel, the policy and outreach director at the Senior Executives Association and a partner at the government-focused law firm Shaw Bransford & Roth, in an email. “OPM’s congressional relations office seems overwhelmed with requests for information.”
FedScoop contacted OPM for comment.
OPM planning four-month trial for online retirement system later this year
The Office of Personnel Management is expecting to conduct a four-month trial of a new online retirement application platform for federal employees later this year, FedScoop has learned.
In a letter to lawmakers, which was obtained by this publication through a Freedom of Information Act request, OPM Director Kiran Ahuja said the agency will conduct an approximately 120-day pilot in coordination with the National Finance Center, a division of the U.S. Department of Agriculture.
Responding to questions from lawmakers including Sen. Dick Durbin, D-Ill., Ahuja wrote: “Between the electronic employee data received from the payroll center and the online retirement application, RS will receive all the information necessary to process a retirement application electronically.”
She added: “The pilot will likely last 120 days, at which point RS will evaluate the results and determine the appropriate next steps to expand the program.”
Details of the anticipated pilot come as escalating concerns about delays and retirement application backlogs attract increasing attention from Congress. Last month, FedScoop revealed that the Government Operations and Border Management Subcommittee, part of the Senate Homeland Security and Governmental Affairs Committee, is considering a new hearing focused on the retirement application backlog at OPM.
Other letters obtained through Freedom of Information Act requests illustrate a range of challenges facing the agency in relation to the backlog. These include an increase in errors in retirement packages, difficulties filling vacancies, new in-person work limitations created by the pandemic, and a decline in legal administrative specialists at the agency.
In at least two of the letters to members of Congress, OPM said the online retirement application platform — considered a first step in reforming a still largely paper-based system — is expected at the end of this year.
“OPM has made investments to drive OPM’s retirement claims inventory to [a] six-year low in June 2023. OPM remains committed to helping federal employees transition from serving the American public to enjoying their hard-earned retirement,” a spokesperson for the agency told FedScoop.
In an effort to boost online access to its retirement services tools, the agency has released content urging retirees to use Login.gov and created a survivor benefits-focused chatbot, among other initiatives.
While the pilot is set to launch in the coming months, other efforts appear farther out. In a March letter, Ahuja told Rep. Jamie Raskin, D-Md., that the agency would not open the retirement services online portal to representative payees, citing authentication challenges. In the letter, Ahuja noted that new modernization and oversight of the payee program would “enable us to authenticate the payee so we can consider opportunities to provide payees with access to additional online tools.”
In an email to FedScoop, John Hatton, staff vice president of policy and programs at the National Active and Retired Federal Employees Association, emphasized the importance of ensuring the online system pilot is successful and that it is implemented governmentwide.
He said: “For a federal retiree community that has faced multi-month delays for decades, and failed modernization attempts in the past, history cautions skepticism. But recent signs provide at least a sparkle of hope.”
Jason Briefel, the policy and outreach director at the Senior Executives Association and a partner at the government-focused law firm Shaw Bransford & Roth, reviewed these letters before publication. In his view, the letters showed “[n]o clear plan for modernization” and no clear timeline for when the online application would actually become usable.
“OPM’s answer is simply more money and hiring more people,” Briefel said in an email to FedScoop, adding: “not addressing root causes of issues, reliance on paper-based systems.”
The National Finance Center at the USDA provides human resources, financial and administrative services for U.S. government agencies.
NASA cautiously tests OpenAI software for summarization and code writing
NASA is cautiously testing OpenAI software with a range of applications in mind, including code-writing assistance and research summarization. Dozens of employees are participating in the effort, which also involves using Microsoft’s Azure cloud system to study the technology in a secure environment, FedScoop has learned.
The space agency says it’s taking precautions as it looks to examine possible uses for generative artificial intelligence. Employees looking to evaluate the technology are only invited to join NASA’s generative AI trial if their tests involve “public, non-sensitive data,” Edward McLarney, digital transformation lead for Artificial Intelligence and Machine Learning at the agency, told FedScoop.
In June, Microsoft announced a new Azure OpenAI tool designed for the government, which according to the company is more secure than the commercial version of the software. Last week, FedScoop reported that Microsoft’s Azure OpenAI service was approved for use on sensitive government systems. A representative for Microsoft Azure referred questions to NASA in response to a request for comment. OpenAI did not respond to a request for comment by the time of publication.
Experimentation with the technology has just begun, McLarney noted, and “many iterations” of testing, verification, validation, bias mitigation, and safety reviews, among other types of evaluations, are still ahead.
“NASA workers are assessing usability of the tools, accuracy of the results, completeness of AI-generated outputs, security behavior of the overall cloud services, speed of the models, costs, supportability and more,” McLarney said. “NASA is excited about the potential of generative AI and is also being clear-eyed about its risks and shortcomings.”
He added: “NASA also uses cloud services from other companies and is interested in testing generative AI capabilities from them. NASA may conduct additional generative AI testing with Google Cloud Platform, Amazon Web Services, or other companies in the future.”
Right now, the space agency plans to study OpenAI’s chat, code assistance, and image generation capabilities. AI-generated art could help provide “inspiration” for NASA artists, McLarney explained, while the system’s text-generating software could help with writing documents. He pointed to other use cases, too.
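To make the testing concrete, below is a minimal sketch of what a research-summarization request against an Azure OpenAI deployment could look like. It is an illustration only: the endpoint, deployment name, environment variables and input file are hypothetical assumptions, since NASA has not published its configuration.

```python
# A minimal sketch of a research-summarization call against an Azure OpenAI
# deployment. The endpoint, deployment name, env vars, and input file are
# hypothetical; NASA has not disclosed its actual setup.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# Per the agency's guidance, testers would submit only public, non-sensitive text.
with open("public_research_abstract.txt") as f:
    abstract = f.read()

response = client.chat.completions.create(
    model="gpt-35-turbo",  # the name of an Azure *deployment*, not the base model
    messages=[
        {"role": "system", "content": "Summarize the following research text in three sentences."},
        {"role": "user", "content": abstract},
    ],
)
print(response.choices[0].message.content)
```

Routing requests through an agency-controlled Azure endpoint, rather than OpenAI’s public API, is what allows testers to study the technology inside a secure cloud environment, as the agency describes.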
FedScoop learned about NASA’s generative AI tests after receiving a list of employees titled “Initial NASA OpenAI on Azure Testers” in response to a public records request. Last month, FedScoop obtained an email — which was sent by NASA’s chief information officer to employees in May — focused on preliminary guidance for using AI tools like ChatGPT. That email noted that some “early adopters” within the agency were preliminarily working with the technology.
Notably, the space agency’s generative AI testing had not begun when NASA began collecting its fiscal year 2023 AI use case inventory, which was required by a 2020 Trump administration executive order. Still, McLarney noted that as “NASA generative AI testing and nascent use begins, it will be included as appropriate in future AI inventory reporting cycles.”
NASA’s experimentation with OpenAI software is just one part of the agency’s growing focus on AI. Officials also released a Responsible AI plan last September — and artificial intelligence and machine learning remain an element of the agency’s digital transformation efforts.
The agency is one of the first government departments to disclose details of its approach to experimentation with OpenAI. The use of generative AI tools can raise privacy, trust and oversight, and national security concerns, as a Government Accountability Office brief from June highlighted. Relatedly, the Department of Transportation recently deleted a reference to using ChatGPT from its AI use inventory, in response to FedScoop’s reporting.
IRS launches digitization effort to go paperless by 2025
The IRS on Wednesday announced an ambitious digitization effort that will give taxpayers the option to go paperless for all IRS correspondence by the 2024 filing season and provide the added benefit of reducing tax evasion by wealthy individuals and large corporations.
According to the agency, the effort will eliminate up to 200 million pieces of paper annually, cut processing times in half, and expedite refunds by several weeks.
The digitization initiative is being financed through an $80 billion infusion of cash for the IRS over 10 years under the Inflation Reduction Act (IRA), which President Joe Biden signed into law last August.
“Thanks to the IRA, we are in the process of transforming the IRS into a digital-first agency,” Treasury Secretary Janet Yellen said during a visit to an IRS paper processing facility in McLean, Virginia, with IRS Commissioner Daniel Werfel on Wednesday.
“By the next filing season,” Yellen added, “taxpayers will be able to digitally submit all correspondence, non-tax forms, and notice responses to the IRS.”
The IRS also said in its announcement of the digitization program that it would help enable agency data scientists to implement “advanced analytics and pattern recognition methods to pursue cases that can help address the tax gap, including wealthy individuals and large corporations using complex structures to evade taxes they owe.”
Using IRA resources, taxpayers are now able to respond to more notices online, and the IRS says it has made significant progress adopting new technology that automates the scanning of millions of paper returns.
In the coming two years, taxpayers will be able to digitally submit all correspondence, non-tax forms, and responses to notices, which will allow more than 94% of individual taxpayers to avoid ever needing to send mail to the IRS, according to the agency.
It added that taxpayers who still want to submit physical paper returns and correspondence will be able to do so.
Taxpayers use non-tax forms to request or submit information on a range of topics, including identity theft and proof that they are eligible for key credits and deductions to help low-income households.
SBA pauses applications for 8(a) business program
The Small Business Administration has “temporarily suspended” applications from federal contractors to the 8(a) Business Development Program, which is intended to help small businesses whose owners are socially and economically disadvantaged.
In a note on its certify.sba.gov portal, which was shared with FedScoop, SBA said it had stopped accepting new applications following an injunction issued last month by a federal court in Tennessee that enjoined the use of presumed racial and ethnic disadvantage as a qualification for the contracting program.
The agency wrote: “SBA has temporarily suspended new 8(a) application submissions while it revises the application questionnaire to comply with the Court’s decision. Thank you for your patience and interest in the 8(a) Business Development Program.”
Last month’s ruling by a Tennessee federal judge barred the Department of Agriculture and the SBA from considering these factors, which are cornerstones of the 8(a) program, when making contracting decisions.
The ruling relies in part on the Supreme Court’s recent decision striking down race-conscious affirmative action in college admissions.
Contracting experts previously told FedScoop that the ruling could have a broad impact on the governmentwide program, although they noted that its full scope isn’t yet clear and that the decision will likely be appealed.
Commenting on the 8(a) program suspension in a blog post, federal contracting attorney Nicole Pottroff wrote: “SBA appears to be taking the ‘wait until we tell you otherwise’ approach to new 8(a) Program applications … [w]e are all anxiously awaiting more information on the potential implications the decision could have on both applicant and current 8(a) Program participants.”
The SBA had not responded to a request for comment at the time of publication.
NIST appoints G. Nagesh Rao as deputy director of Manufacturing Extension Partnership
G. Nagesh Rao has joined the National Institute of Standards and Technology as deputy director of the agency’s Manufacturing Extension Partnership.
Rao moves to NIST from another Department of Commerce division, the Bureau of Industry and Security (BIS), where he was chief information officer.
The Manufacturing Extension Partnership is a public-private partnership that acts as an intermediary between the standards bureau and small and medium-sized manufacturers. It has centers in all 50 states and Puerto Rico.
Over the course of his career, Rao has held a variety of public and private sector roles. Prior to serving as BIS chief information officer, he was director of business technology solutions in the U.S. Small Business Administration’s Office of the Chief Information Officer. Before that, he was chief technologist and entrepreneur in residence within SBA’s Office of Investment and Innovation.
Commenting on Rao’s appointment, NIST Manufacturing Extension Partnership Program Director Pravina Raghavan said: “We are excited to have Nagesh join NIST and the MEP National Network. His prior experience at the Department of Commerce and Small Business Administration and his record of implementing innovative tech solutions will help MEP strengthen domestic supply chains and support small and medium-sized manufacturers across the U.S.”
Editor’s note, 8/3/23: This story was updated to correct references to NIST and the Manufacturing Extension Partnership.
The government is struggling to track its AI. And that’s a problem.
Efforts to inventory artificial intelligence uses within major federal agencies have so far been inconsistent, creating a patchwork understanding of the government’s use of the budding technology.
Regulating AI is a cornerstone of the current administration’s agenda, but the push to figure out where the federal government was using the technology began before President Joe Biden took office. In the final weeks of the Trump administration, the White House published an executive order calling on federal agencies to report all current and planned uses of AI and publish those results. The goal, according to Executive Order 13960, was to document how the U.S. government is using AI and establish principles for the technology.
More than two years later, the process of actually developing these inventories hasn’t gone smoothly. Unlike other government AI initiatives such as the Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework, the 2020 executive order carries the force of law and has terms that require compliance, argues Christie Lawrence, an affiliate at Stanford’s RegLab.
The way agencies are complying with the executive order points to potential lessons for federal agency implementation of future executive orders and statutes related to AI regulation, she told FedScoop.
In the absence of a U.S. national AI strategy, said Lawrence, “compliance with Executive Order 13960 is really important because it kind of functions — along with some other documents — as the sort of American government strategy towards AI.”
The issues raised by the executive order highlight some of the broader hurdles that could face the big push to regulate AI, including defining the technology and identifying where the technology is actually being deployed. Notably, the Biden administration expects to issue a new AI-focused executive order soon.
Both the Electronic Privacy Information Center (EPIC) and researchers at Stanford Law School, including Lawrence, who examined implementation challenges for America’s AI strategy, have previously raised concerns about widespread lagging compliance with the Trump administration executive order among agencies. The White House did not respond to a request for comment.

FedScoop reviewed how the more than 20 large Chief Financial Officer Act agencies covered by the executive order inventory their AI technology. The findings showed a lack of standardization across the government. While some agencies offer detailed inventories, others provide little information, and some appear to omit use cases disclosed publicly elsewhere.
There also isn’t a public deadline for agencies to update their inventories for the current fiscal year, making it difficult to track progress.
Among the findings: Several agencies — including the Transportation Security Administration and the Small Business Administration — didn’t include apparent use cases publicly disclosed elsewhere. Meanwhile, the Department of Transportation said it disclosed a ChatGPT use in “error,” as FedScoop previously reported.
“We need to know the full universe of AI use cases that are effective today. And if we don’t have that, we’re not getting the full picture and we can’t really rest easy knowing that,” argues John Davisson, an attorney at EPIC. “The federal government’s having to play catch up with its own agencies by, now, asking them to disclose what AI systems they’re using. But things being as they are, step one is: Disclose what you’re using right now.”
The December 2020 executive order required agencies — except the Defense Department and those in the intelligence community — to inventory their current and planned AI uses, ensure uses were consistent with the order, share inventories with each other, and make non-classified and non-sensitive uses public on an annual basis.
The order also directed the Federal Chief Information Officers Council (CIO Council) to create guidance for the inventories. The initial deadline the council set for agencies to share their first inventories with each other on the MAX Federal Community, a federal information-sharing website, was March 22, 2022. Agencies began publishing inventories online in June 2022, according to the National Artificial Intelligence Initiative’s webpage for the order.
The public guidance from the CIO Council for 2023, however, doesn’t include a date by which inventories should be submitted to the MAX system. In response to detailed questions about a deadline, expectations for public inventories, and compliance for the current year, the council sent a brief summary of its responsibilities under the order.
Key Documents
Executive Order 13960 | AI.gov Published Inventories List | 2021 CIO Council Guidance | 2023 CIO Council Guidance
Other requirements established by the order to streamline the government-wide AI strategy appear to be running behind, too.
The Office of Personnel Management was supposed to create an inventory of rotational programs focused on increasing the number of employees with AI experience at federal agencies, and to issue a report likewise focused on boosting AI expertise — both within a year of the 2020 EO. In response to a request for comment, the agency directed FedScoop to a memo on AI competencies meant to comply with the AI in Government Act, and said that once a data call it is conducting with the Chief Human Capital Officers Council is complete, it can begin compiling a report.
Perhaps most notable is that several agencies seemed to exclude prominent examples of AI use cases — including those that do or could impact the public — from their inventories.
The inventory created for the Transportation Security Administration, for example, includes a single example of AI — a COVID-19 risk assessment algorithm program called Airport Hotspot Throughput — but does not mention the agency’s facial recognition program, perhaps one of its most controversial deployments of machine learning-based technology. The Department of Homeland Security did not respond to a FedScoop request for comment.
HUD, meanwhile, maintains that it has no AI use cases — despite a report submitted to the Administrative Conference of the United States in February 2020 that identified a prototype chatbot at the agency. HUD similarly publicly identified the use of AI in a December 2020 report on its progress in implementing the 21st Century Integrated Digital Experience Act. In that report, HUD said the Federal Housing Administration would “expand communication channel offerings to include live chat, SMS/MMS, AI chatbot, and Intelligent IVR.” HUD didn’t respond to FedScoop requests for comment.
It is also unclear how agencies should distinguish between “planned” use cases, which agencies are supposed to include, and AI projects that are in the process of research and development, which are not supposed to be included. For example, several AI uses discussed in a July 2022 presentation for EPA’s homeland security office are not included in the EPA’s inventory because, a spokesperson explained, the “activities described in the presentation are still in development.”
The Small Business Administration’s inventory, which is dated May 2023, states that after investigating its Federal Information Security Modernization Act (FISMA) systems, it did not discover any use cases. Still, the inventory does not include an AI use case for vetting loan applications, which was discussed in an SBA announcement on the agency’s website and in an Inc. magazine article, both published in May.
“During phase one, our focus was on how SBA Program Offices were using AI (including ML + RPA) to support their own internal operational efficiencies,” an SBA spokesperson told FedScoop.
SBA’s response reflects a larger trend: agencies used different methodologies to develop their inventories. Of the agencies that responded to FedScoop’s request for comment, some determined their use cases by organizing a call-out within their agencies, asking various departments to share the ways they’re using AI.
The Department of Labor found that most of its AI use cases have been managed by its AI Center of Excellence, and the agency found other examples by reaching out to business units. Other agencies, including the EPA, the General Services Administration and Education Department, conducted “data calls” to collect information about AI uses.
Information that officials included about each disclosed use case across the agencies varies widely. Some agencies list specific contact information for different AI use cases, like the Department of Commerce, or include information like when the use began and whether it was contracted work, as USAID did. Others simply list the name of each use case, a summary, and the entity responsible for it — that approach was taken by both the Department of State and Social Security Administration.
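As an illustration of how much the reporting varies, the sketch below models a single inventory record that combines the fields the agencies above reported separately. The field names are hypothetical; they are not drawn from CIO Council guidance or from any agency’s actual inventory.

```python
# A hypothetical, standardized inventory record combining fields that
# different agencies reported separately. Field names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIUseCaseRecord:
    name: str                               # every agency lists at least a name
    summary: str                            # plain-language description (State, SSA style)
    responsible_entity: str                 # office accountable for the system
    contact: Optional[str] = None           # per-use-case point of contact (Commerce style)
    started: Optional[str] = None           # when the use began (USAID style)
    contracted_work: Optional[bool] = None  # developed under contract? (USAID style)

# Example record modeled on a publicly disclosed use case:
example = AIUseCaseRecord(
    name="Airport Hotspot Throughput",
    summary="COVID-19 risk assessment algorithm",
    responsible_entity="Transportation Security Administration",
)
```

A shared schema along these lines would make the inventories comparable across agencies, which the current patchwork is not.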
Relatedly, there doesn’t appear to be a standardized procedure for removing mentions. The Department of Transportation deleted a reference to the Federal Aviation Administration’s Air Traffic Office using ChatGPT for code-writing assistance after FedScoop inquired about the technology, saying the example was included in “error.”
Agencies have also published their inventories on different timelines. Though the first inventories were expected to be shared with other agencies in March 2022 — per the initial CIO Council guidance — some agencies appear to have completed theirs later. For example, NASA’s fiscal year 2022 inventory is dated October 2022, the Department of Education said it completed its initial inventory in February 2023, and OPM appears to have only a 2023 inventory.
At the same time, while a deadline for the current year isn’t clear, some agencies, such as the General Services Administration and Social Security Administration, said they already completed updates to their inventories for 2023.
Several agencies, including the Department of Housing and Urban Development, the Justice Department, and the Department of the Interior, did not provide responses to FedScoop inquiries about updating their inventories and their overall process. While NASA has a public inventory from 2022 and 2023, the agency’s inventory is not included on an AI.gov list of inventories-to-date.
Finally, it’s difficult to tell whether the executive order actually helped agencies sort through whether their AI use cases lined up with established principles — which was a critical goal of the executive order.
Many agencies did not respond to a request for comment, but the Department of Labor, USAID, and USDA all said none of their use cases were inconsistent with the order. A State Department spokesperson said it was “employing a rigorous review process and making necessary adjustments or retirements as needed.” But it didn’t elaborate on what uses might need that adjustment or retirement.
Ultimately, the patchwork approach to Executive Order 13960 is a reminder that senior leadership within both the White House and the federal agencies need the right staff, resources, and authority to implement AI-related legal requirements, argued Lawrence, from Stanford.
For Davisson, the attorney from EPIC, it’s critical for agencies to have clarity about their obligations.
“Follow-through is really important. That applies both to the White House and to the agencies that are trying to execute on an executive order,” he added. “You can’t just put it on paper and assume that the job is done.”
Editor’s note, 8/4/23 at 3:00 p.m.: This piece was updated to note NASA’s 2023 AI use case inventory, which a NASA employee referenced in response to a request for comment for a subsequent FedScoop piece on a related topic.
White House science adviser defends ‘conflicting’ AI frameworks released by Biden admin
The Biden administration’s AI ‘Bill of Rights’ Blueprint and the NIST AI Risk Management Framework do not send conflicting messages to federal agencies and private sector companies attempting to implement the two AI safety frameworks within their internal systems, according to the director of the White House Office of Science and Technology Policy.
In a letter obtained exclusively by FedScoop, Arati Prabhakar responded to concerns raised by senior House lawmakers on the House Science, Space and Technology Committee and the House Oversight Committee over apparent contradictions in definitions of AI used in the documents.
“These documents are not contradictory. For example, in terms of the definition of AI, the Blueprint does not adopt a definition of AI, but instead focuses on the broader set of ‘automated systems,’” Prabhakar wrote in a letter sent to House Science Chairman Frank Lucas, R-Okla., and Oversight Chairman James Comer, R-Ky., a few months ago.
“Furthermore, both the AI RMF and the Blueprint propose that meaningful access to an AI system for evaluation should incorporate measures to protect intellectual property law,” Prabhakar added.
In the letter, Prabhakar also described the “critical roles” both documents play in managing risks from AI and automated systems, and said they illustrate how closely the White House and NIST are working together on future regulation of the technology.
The two Republican leaders sent a letter in January to the OSTP director voicing concern that the White House’s AI ‘Bill of Rights’ blueprint document is sending “conflicting messages about U.S. federal AI policy.”
Chairman Lucas and Chairman Comer were highly critical of the White House blueprint as it compares with the NIST AI risk management framework.
Prabhakar in her letter also noted the close partnership between NIST and OSTP regarding AI policymaking and the high engagement both entities have had with relevant stakeholders within industry and civil society in crafting AI policy.
She also highlighted that the AI ‘Bill of Rights’ document recognizes the need to protect technology companies’ intellectual property. Although it calls for the use of confidentiality waivers for designers, developers and deployers of automated systems, it says that such waivers should incorporate “measures to protect intellectual property and trade secrets from unwarranted disclosure as appropriate.”
Commerce Secretary Gina Raimondo said in April that NIST’s AI framework represents the “gold standard” for the regulatory guidance of AI technology and the framework has also been popular with the tech industry.
This came after the Biden administration in October 2022 published its AI ‘Bill of Rights’ Blueprint, which consists of five key principles for regulating the technology: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation and human alternatives, consideration and fallback.
Chairman Lucas and Chairman Comer’s engagement with OSTP earlier this year regarding conflicting messages being sent by the Biden administration on AI policy followed concerns expressed by industry and academia about varying definitions within the two documents and how they relate to the definitions used by other federal government agencies.
While they are both non-binding, AI experts and lawmakers have warned about the chilling effect that lack of specificity within framework documents could have on innovation both inside government and across the private sector.
“We’re at a critical juncture with the development of AI and it’s crucial we get this right. We need to give companies useful tools so that AI is developed in a trustworthy fashion, and we need to make sure we’re empowering American businesses to stay at the cutting edge of this competitive industry,” Chairman Lucas said in a statement to FedScoop.
“That’s why our National AI Initiative called for a NIST Risk Management Framework. Any discrepancies between that guidance and other White House documents can create confusion for industry. We can’t afford that because it will reduce our ability to develop and deploy safe, trustworthy, and reliable AI technologies,” he added.
Meanwhile, the White House has repeatedly said the two AI documents were created for different purposes but designed to be used side-by-side and noted that both the executive branch and the Department of Commerce had been involved in the creation of both frameworks.
OSTP spokesperson Subhan Cheema said: “President Biden has been clear that companies have a fundamental responsibility to ensure their products are safe before they are released to the public, and that innovation must not come at the expense of people’s rights and safety. That’s why the Administration has moved with urgency to advance responsible innovation that manage the risks posed by AI and seize its promise—including by securing voluntary commitments from seven leading AI companies that will help move us toward AI development that is more safe, secure, and trustworthy.”
“These commitments are a critical step forward, and build on the Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. The Administration is also currently developing an executive order that will ensure the federal government is doing everything in its power to support responsible innovation and protect people’s rights and safety, and will also pursue bipartisan legislation to help America lead the way in responsible innovation,” Cheema added.
Editor’s note, 8/2/23: This story was updated to add further context about NIST’s AI Risk Management Framework and prior concerns raised by AI experts.
How ‘observability’ empowers the public sector to advance digital strategies
As data volumes, cyber threats and technological complexities increase, achieving comprehensive visibility across the IT landscape is imperative. And it requires more than monitoring and alerts. According to a recent report from Elastic, public sector organizations need to achieve end-to-end observability to proactively predict patterns, detect and respond to anomalies, and strengthen their risk management posture.