NTIA calls for independent audits of AI systems in new accountability report

The National Telecommunications and Information Administration on Wednesday called for independent audits of high-risk artificial intelligence systems as part of a new report from the Commerce Department bureau that also included eight recommendations for federal agency use of AI. 

The NTIA’s AI Accountability Policy Report recommends that the federal government take action to establish guidance, support and regulations for AI systems. Within those three categories, NTIA calls for agencies to increase transparency through disclosures, such as AI nutrition labels, encourage research and evaluations on AI tools, require contractors and suppliers to “adopt sound AI governance and assurance practices” and more. 

In addition to its focus on federal involvement in guidelines for AI audits and auditors, NTIA recommends that the government strengthen its capacity to “address risks and practices related to AI across sectors of the economy,” which includes maintaining a registry of “high-risk AI deployments, AI adverse incidents and AI system audits.”

“NTIA’s AI Accountability Policy recommendations will empower businesses, regulators and the public to hold AI developers and deployers accountable for AI risks, while allowing society to harness the benefits that AI tools offer,” NTIA Administrator Alan Davidson said in a statement.

Significantly, the NTIA called for the creation of AI disclosure cards that mimic “nutrition labels” detailing a product’s name, whether or not there is a human in the loop, the model type, the data retention frequency, base model and more. NTIA stressed in the report that the standardization of accessible and plain language labeling could “enhance the comparability and legibility of disclosures.”
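As a rough illustration only — the field names below are assumptions for the sketch, not a schema defined in the NTIA report — such a disclosure card might be modeled as a simple structured record rendered into a plain-language label:

```python
# Hypothetical "AI nutrition label" sketch. The fields mirror the items the
# report mentions (product name, human in the loop, model type, data
# retention, base model); the exact names and values are illustrative.
disclosure_card = {
    "product_name": "ExampleBot",
    "human_in_the_loop": True,           # whether a human reviews outputs
    "model_type": "large language model",
    "base_model": "example-base-v1",
    "data_retention": "30 days",         # how long user data is kept
}

def render_label(card: dict) -> str:
    """Render the card as a plain-language label, one field per line."""
    return "\n".join(
        f"{key.replace('_', ' ').title()}: {value}"
        for key, value in card.items()
    )

print(render_label(disclosure_card))
```

Standardizing a format like this is what would make labels comparable across products, as the report suggests.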

The agency noted that the report is just “one element” of its work to meet the Biden administration’s commitment to establishing guardrails and promoting innovation regarding AI. The report follows a request for comment submitted by the agency last year. 

The request sought feedback about policy development for AI mechanisms (such as audits and assessments) meant to encourage trustworthiness. In particular, the NTIA inquired about what data would be necessary to conduct audits and what approaches might be needed in various industry environments. 

Hodan Omaar, senior policy analyst at the Center for Data Innovation, said in a statement that the focus on regulatory frameworks throughout the report “will not help the United States become a leading global adopter of AI.”

“The United States should pursue policies that encourage U.S. businesses to hire more AI developers, integrators and engineers, not divert those resources to hiring more auditors and lawyers,” Omaar added. “Policymakers should instead rely on voluntary frameworks because they are more adaptable, dynamic, and effective at addressing risks in a rapidly evolving AI landscape.”

When asked for comment in response to Omaar’s statement, NTIA directed FedScoop to its press release and fact sheet.


Peaceful protests, lawful assembly can’t be sole reason for DOJ facial recognition use under interim policy

Activities protected under the First Amendment, such as peaceful protests and lawful assembly, “may not be the sole basis for the use of” facial recognition technology under the Justice Department’s interim policy governing its deployment of the technology, the agency told a civil rights panel. 

In written testimony submitted to the U.S. Commission on Civil Rights last week, the DOJ shared details of its approach to using facial recognition technology, or FRT, including its interim policy, which it issued in December but hasn’t shared publicly. The testimony came a couple of weeks after the civil rights panel held a briefing on federal use of facial recognition technology at which the DOJ neither testified in person nor submitted advance testimony.

“Notably, the Interim FRT Policy mandates that activity protected by the First Amendment may not be the sole basis for the use of FRT,” the DOJ said in its testimony. “This would include peaceful protests and lawful assemblies, or the lawful exercise of other rights secured by the Constitution and laws of the United States.”

Additionally, the interim policy states that “FRT results alone may not be relied upon as the sole proof of identity,” the DOJ said. It also requires that facial recognition technology complies with the department’s AI policies and that employees never use the technology to “engage in or facilitate unlawful discriminatory conduct,” in addition to requiring risk assessments for the accuracy of facial recognition systems used by the department.

The interim policy could also lead to public disclosures of certain information about use of the technology at the department. Components using facial recognition systems are required to “develop a process to account for and track system use” under the interim policy and report on that use annually to the DOJ’s Emerging Technology Board, which was established to oversee the department’s use of AI and emerging technology, and its Data Governance Board. 

“Without compromising law-enforcement sensitive or national security information, each of these annual reports will be consolidated into a publicly released summary on the Department’s FRT use,” the testimony said.

The commission’s March 8 briefing explored federal use of facial recognition technology at DOJ, the Department of Homeland Security, and the Department of Housing and Urban Development as it prepares a report. Adoption of the technology in the federal government has prompted concerns about privacy and civil liberties, including from lawmakers and academics.

Neither the DOJ nor HUD participated in the hearing, and DOJ’s lack of participation, in particular, prompted two commissioners to indicate they were willing to use subpoena power to compel the production of information. At the time of the briefing, a DOJ spokesperson told FedScoop it was communicating with the commission about a response. 

A Government Accountability Office review of facial recognition systems in the government found that agencies, including the DOJ, didn’t have policies specific to the use of the technology and initially didn’t require training. That report found that the DOJ had “taken steps to issue a department-wide policy” but “faced delays.” The GAO ultimately recommended, among other things, that the attorney general develop a plan for issuing a policy that addresses civil rights and civil liberties.

In testimony to the commission at its briefing, GAO’s Gretta Goodwin said the department informed the government watchdog that it had issued an interim policy but the GAO hadn’t yet seen that policy. Goodwin, who directs the watchdog’s Homeland Security and Justice team, said the GAO plans to review the interim policy as part of its follow-up process on the recommendation.

The description of the interim policy in the department’s testimony appears to address some of GAO’s findings. For example, the DOJ said that the policy mandates that employees using those systems receive training that includes information about privacy, civil rights and civil liberties laws relevant to the use of facial recognition technology. 

While the department acknowledged potential equity and fairness implications of the technology, it also underscored the potential benefits. According to the testimony, facial recognition technology was used by the FBI over the last year to combat crime, find missing children, and address threats on the border. The U.S. Marshals Service also uses the technology for investigations and protective security missions, DOJ said. 

“When employed correctly, FRT affirmatively strengthens our public safety system,” the DOJ said. 

The interim policy was created by a working group within the department that met throughout 2022 and 2023. That group included legal experts and subject matter experts throughout the DOJ. The interim policy will be updated after the department completes an interagency report on best practices required under President Joe Biden’s executive order on policing, the DOJ said.

MITRE launches lab to test federal government AI risks

Public interest nonprofit corporation MITRE on Monday opened a new facility dedicated to testing government uses of artificial intelligence for potential risks.

MITRE’s new AI Assurance and Discovery Lab is designed to assess the risk of systems using AI in simulated environments, red-teaming, and “human-in-the-loop experimentation,” among other things. The lab will also test systems for bias and users will be able to control how their information is used, according to the announcement.

In remarks at the Monday launch, Keoki Jackson, senior vice president of MITRE National Security Sector, pointed to a poll by the corporation that found fewer than half of American public respondents thought AI would have the trust needed for its applications. 

“We have some work to do as a nation, and that’s where this new AI lab comes in,” Jackson said.

Mitigating the risks of AI in government has been a topic of interest for lawmakers and was a key component of President Joe Biden’s October executive order on the technology. The order, for example, directed the National Institute of Standards and Technology to develop a companion to its AI Risk Management Framework for generative AI and create standards for AI red-teaming. MITRE’s new lab bills itself as a testbed for that type of risk assessment.

“The vision for this lab really is to be a place where we can pilot … and develop these concepts of AI assurance — where we have the tools and capabilities that can be adopted and applied to the specialized needs of different sectors,” Charles Clancy, MITRE senior vice president and chief technology officer, said at the event. 

Clancy also noted that both the “assurance” and “discovery” aspects of the new lab are important. Focusing too much on assurance and getting “tangled up in security” could prevent balancing risks “against the opportunity,” he said. 

Members of the Virginia congressional delegation were also present to express their support at the event, which was held at MITRE’s McLean, Virginia, headquarters where the new lab is located. The three lawmakers were Reps. Gerry Connolly and Don Beyer, and Sen. Mark Warner. All are Democrats. 

Warner, in remarks at the event, said he worries that the race for the best large language model by companies like Anthropic, OpenAI, Microsoft, and Google might be so intense that those entities aren’t building in assurance. 

“Getting it right is critical as any mission I can imagine, and I think, unfortunately, that we’re going to have to make sure that we come up with the standards,” Warner said. He added that policymakers are still trying to figure out whether the federal government houses AI expertise in one location, such as NIST or the Office of Science and Technology Policy, or spreads it out across the government. 

For MITRE, working on AI projects isn’t new. The corporation has been doing work in that space for roughly 10 years, Miles Thompson, MITRE’s AI assurance solutions lead, told FedScoop in an interview at the event. “Today really codifies that we’re going to provide this as a service now,” Thompson said of the new lab.

As part of its approach to evaluation, MITRE created its own process for AI risk assessment it calls the AI Assurance Process, which is consistent with existing standards for things like machinery and medical devices. Thompson described the process as “a stake in the ground for what we think is the best practice today,” noting that it could change with the evolving landscape. 

Thompson also said the level of assurance for that process changes depending on the system and how it’s being used. The consequences for something like Netflix’s recommendations system are low whereas those for AI for self-driving cars or air traffic control are dire, he said.

An example of how MITRE has applied that process to work with an agency is its recent work with the Federal Aviation Administration, Thompson said. 

The FAA and its industry partners came to MITRE to talk through potential tweaks to a standard inside the agency pertaining to software in airborne systems (DO-178C) that doesn’t currently address AI or machine learning, he said. Those conversations addressed the question of how that standard might change to be able to say “this use of AI is still safe,” he said. 

Eight trends that are redefining government at ‘warp speed’

Government leaders today find themselves grappling with an epochal technological upheaval. As artificial intelligence unfurls its wings, a fervent dialogue ensues on how government agencies might wield this technological juggernaut to streamline operations and confront the thorniest challenges of our era.

Surveying the global landscape of governmental evolution, we see reason for optimism. We’ve identified more than 200 cases worldwide that offer proof of radical transformation, where government agencies have achieved quantum leaps, delivering upwards of 10X improvements across areas ranging from operational efficiency to customer experience to mission outcomes.

Here are eight seismic trends redefining governance in 2024 and beyond:

As we navigate the complexities of our time, embracing these trends will be paramount in building a government that is not only responsive, but also proactive in addressing the needs of the individuals and families it serves. By harnessing the power of technology, prioritizing collaboration, and striving for innovation, agencies can overcome adversity and thrive in 2024.

To hear more about these trends, listen to William Eggers on the Daily Scoop Podcast discuss Deloitte’s Top Trends in Government 2024 report.

Congressional offices experimenting with generative AI, though widespread adoption appears limited

As generative artificial intelligence tools have made their way into public use, a few offices on Capitol Hill have also begun to experiment with them. Widespread use, however, appears to be limited. 

FedScoop inquiries to every member of the House and Senate AI caucuses yielded over a dozen responses from lawmakers’ offices about whether they are using generative AI tools, as well as if they have their own AI policies. Seven offices indicated or had previously stated that staff were using generative AI tools, five said they were not currently using the technology, and three provided a response but didn’t address whether their offices were currently using it. 

The varied responses from lawmakers and evolving policies for use in each chamber paint a picture of a legislative body exploring how to potentially use the technology while remaining cautious about its outputs. The exploration of generative AI by lawmakers and staff also comes as Congress attempts to create guardrails for the rapidly growing technology.

“I have recommended to my staff that you have to think about how you use ChatGPT and other tools to enhance productivity,” Rep. Ami Bera, D-Calif., told FedScoop in an interview, pointing to responding to constituent letters as an example of an area where the process could be streamlined.

But Bera also noted that while he has accessed ChatGPT, he doesn’t often use it. “I’d rather do the human interaction,” he said.

Meanwhile, Sen. Gary Peters, D-Mich., has policies for generative AI use in both his office and the majority office of the Homeland Security and Governmental Affairs Committee, which he chairs. 

“The policy permits the use of generative AI, and provides strong parameters to ensure the accuracy of any information compiled using generative AI, protect the privacy and confidentiality of constituents, ensure sensitive information is not shared outside of secure Senate channels, and guarantee that human judgment is not supplanted,” a Peters aide told FedScoop.

And some lawmakers noted they’ve explored the technology themselves.

Rep. Scott Franklin, R-Fla., told FedScoop that when ChatGPT first became public, he asked the service to write a floor speech on the topic of the day as a Republican member of Congress from Florida. Once the machine responded, Franklin said he joked with his communications staff that “y’all are in big trouble.”

While Franklin did not directly comment on AI use within his office during an interview with FedScoop, he did say that he’ll play with ChatGPT and doesn’t want to be “left behind” where the technology is concerned. 

House and Senate policies

As interest in the technology has grown, both House and Senate administrative arms have developed policies for generative tools. And while generative AI use is permitted in both chambers, each has its own restrictions.

The House Chief Administrative Officer’s House Digital Services purchased 40 ChatGPT Plus licenses last April to begin experimenting with the technology, and in June the CAO restricted ChatGPT use in the House to the ChatGPT Plus version only, while outlining guardrails. Axios first reported that restriction, and FedScoop independently confirmed it with a House aide. 

There is also indication that work is continuing on that policy. At a January hearing, House Deputy Chief Administrative Officer John Clocker shared that the office is developing a new policy for AI with the Committee on House Administration and said the CAO plans on creating guidance and training for House staff.

In a statement to FedScoop, the Committee on House Administration acknowledged that offices are experimenting with AI tools — ChatGPT Plus, specifically — for research and evaluation, and noted some offices are developing “tip sheets to help guide their use.”

“This is a practice we encourage. CAO is able to work with interested offices to craft tip sheets using lessons learned from earlier pilots,” the committee said in a statement. 

The committee has also continued to focus on institutional policies for AI governance, the statement said. “Towards that end, last month we updated our 2024 User’s Guide to include mention of data governance and this month we held internal discussions on AI guardrails which included national AI experts and House Officials.”

On the Senate side, the Sergeant at Arms’ Chief Information Officer issued a notice to offices last year allowing the use of ChatGPT, Microsoft Bing Chat, and Google Bard and outlining guidance for their use. POPVOX Foundation was the first to share that document in a blog, and FedScoop independently confirmed with a Senate aide that the policy was received in September. The document also indicated that the Sergeant at Arms CIO determined that those three tools had a “moderate level of risk if controls are followed.”

Congressional support agencies, including the Library of Congress, the Government Accountability Office and the Government Publishing Office, have also recently shared how they’re exploring AI to improve their work and services in testimony before lawmakers. Those uses could eventually include tools that support the work of congressional staff as well.

Aubrey Wilson, director of government innovation at the nonprofit POPVOX Foundation, who has written about AI use in the legislative branch, said the exploration of the technology is “really innovative for Congress.”

“Even though it might seem small, for these institutions that traditionally move slowly, the fact that you’re even seeing certain offices that have productively and proactively set these internal policies and are exploring these use cases,” Wilson said. “That is something to celebrate.”

Individual approaches

Of the offices that told FedScoop they do use the technology, most indicated that generative tools were used to assist with things like research and workflow, and a few, including Peters’ office, noted that they had their own policies to ensure the technology was being used appropriately. 

Clocker, of the CAO, had recommended offices adopt their own internal policies adjusted to their preferences and risk tolerance at the January Committee on House Administration hearing. POPVOX has also published a guide for congressional offices establishing their own policies for generative AI tools.

The office of Rep. Glenn Ivey, D-Md., for example, received approval from the House for its AI use and encouraged staff to use the account to assist in drafting materials. But the office has also stressed that staff should use the account for work only, fact-check the outputs, and be transparent about their use of AI with supervisors, according to information provided by Ivey’s office. 

“Overall, it is a tool we have used to improve workflow and efficiencies, but it is not a prominent and redefining aspect of our operations,” said Ramón Korionoff, Ivey’s communications director.

Senate AI Caucus co-chair Martin Heinrich, D-N.M., also has a policy that provides guidance for responsible use of AI in his office. According to a Heinrich spokesperson, those policies “uphold a high standard of integrity rooted in the fundamental principle that his constituents ultimately benefit from the work of people.”

Even if they don’t have their own policies yet, other offices are looking into guidelines. Staff for one House Republican, for example, noted they were exploring best practices for AI for their office.

Two House lawmakers indicated they were keeping in line with CAO guidance when asked about a policy. Rep. Ro Khanna, D-Calif., said in a statement that his “office follows the guidance of the CAO and uses ChatGPT Plus for basic research and evaluation tasks.” 

Rep. Kevin Mullin, D-Calif., on the other hand, isn’t using generative AI tools in his office but said it “will continue to follow the CAO’s guidance.”

“While Rep Mullin is interested in continuing to learn about the various applications of AI and find bipartisan policy solutions to issues that may arise from this technology, our staff is not using or experimenting with generative AI tools at this time,” his office shared with FedScoop in a written statement.

That guidance has been met with some criticism, however. Rep. Ted Lieu, D-Calif., initially pushed back on those guardrails after they were announced, arguing the decision about what to use should be left up to individual offices. He also noted, at the time, that his staff were free to use the tools without restrictions. 

Sen. Todd Young, R-Ind., has also previously indicated he and his staff use the technology. A spokesperson for Young pointed FedScoop to a statement the senator made last year noting that he regularly uses AI and encourages his staff to use it as well, though he said staff are ultimately responsible for the end product.

Parodies and potential uses

Some uses of generative tools have made their way into hearings and remarks, albeit the uses are generally more tongue-in-cheek or meant to underscore the capabilities of the technology.

Sen. Chris Coons, D-Del., for example, began his remarks at a July hearing with an AI-generated parody of “New York, New York”; Sen. Richard Blumenthal, D-Conn., played an AI-generated audio clip at a May hearing that mimicked the sound of his own voice; Rep. Nancy Mace, R-S.C., delivered remarks at a March 2023 hearing written by ChatGPT; and Rep. Jake Auchincloss, D-Mass., delivered a speech on the House floor in January 2023 written by ChatGPT.

Rep. Don Beyer, D-Va., said anecdotally in an interview that he’s heard of others using it to draft press releases or speeches, though it’s not something his office uses. “This is no criticism of GPT4, but when you are looking at an enormous amount of written material, and you’re averaging it all out, you’re going to get something pretty average,” Beyer said.

Other lawmakers seemed interested in the uses of technology but haven’t yet experimented with it in their offices. 

Rep. Adriano Espaillat, D-N.Y., for example, said in an interview that while his office isn’t using AI right now, he and his staff are exploring how it could be used.

“We are looking at potential use of AI for fact-finding, for the verification of any data that we may have available to us, fact-checking matters that are important for us in terms of background information for debate,” Espaillat said, adding “but we’re not there yet.”

POPVOX Foundation’s Wilson, a former congressional staffer, said one of her takeaways from her time working in Congress was “how absolutely underwater” staff is with keeping up with information, from corresponding with federal agencies to letters from constituents. She said that generative AI could help congressional staff sort through information and data faster, which could inform data-driven policymaking.

“In a situation where Congress is not willing to give itself more people to help with the increased workflow, the idea that it’s innovatively allowing the people who are in Congress to explore use of better tools is one way that I think congressional capacity can really be aided,” Wilson said. 

Rebecca Heilweil contributed to this story.

GSA announces new Presidential Innovation Fellows focused broadly on tech, with a second AI cohort coming later in 2024

The General Services Administration announced Monday that for the first time, the Presidential Innovation Fellows program will feature two cadres in 2024 — with one exclusively focused on AI coming later this year.

The first PIF cohort of 21 fellows, introduced Monday, will work with “a broader technology focus” under their respective assignments at 14 agencies with “high-impact priorities.” Meanwhile, the second group of fellows — to be announced this summer — will focus solely on artificial intelligence, according to the GSA, which houses the program under its Technology Transformation Services branch.

“More than ever, federal agencies are looking for top talent to help them improve the digital experience of their customers, better leverage data and enhance cybersecurity,” Robin Carnahan, GSA administrator, said in a release. “We’re excited to see how these innovators put their skills to work for the public good and collaborate alongside agency leaders to better deliver services for the American people in their moments of need.”

The agency shared in the release that the first cohort will be working “alongside partners to create innovative solutions that advance national priorities.” The AI-focused PIFs coming later in 2024 will aim to deliver on the AI executive order that President Joe Biden issued last year, which named the PIF program as one of the existing federal technology pipelines for recruiting AI talent into government. 

Previously, PIFs have worked on a variety of efforts, such as projects to improve data sharing throughout the Department of Veterans Affairs and ensure data-driven decision-making through modernization within the Department of Justice, among many others. The PIF program was launched in 2012 by the White House’s Office of Science and Technology Policy before it was transferred to GSA in 2013. Since its launch, the program has hosted more than 250 fellows who have worked at more than 50 agencies. Many of those fellows have continued on in other innovative and often tech-focused roles within government.

In light of the October AI executive order, the Biden administration has continued working to recruit and retain AI talent in the federal workforce to keep up with the competition and challenges posed by the technology. 

Recently, the administration has established funding to recruit AI research and development talent, alongside efforts to fill other AI talent gaps within the federal government.

Watchdog report ties veteran death to scheduling error in VA’s new electronic health record system

A scheduling error within the Department of Veterans Affairs modernized electronic health record system played a role in the 2022 death of a veteran in Ohio, the VA’s Office of the Inspector General charges in a new report.

A patient of the VA Central Ohio Healthcare System in Columbus with a history of behavioral health and substance abuse issues didn’t receive adequate outreach from the hospital system to reschedule a missed appointment due to “a system error in the functioning of the new EHR,” the report details. Just over 40 days later, the patient died an “accidental death” caused by “acute cardiac arrhythmia ‘due to (or as a consequence of) acute toxic effect of inhalant.'”

The rollout of the VA’s EHR Modernization program has experienced a litany of issues since its launch in 2020, which led the department in April 2023 to pause the implementation of the new system at additional VA hospitals until it is deemed “highly functioning” and issues at first-adopter locations — like the hospital in Columbus — are resolved. That review is still ongoing.

The new inspector general’s report, released March 21, points to an error in the scheduling function that “resulted in staff’s failure to complete required minimum scheduling efforts following the patient’s missed mental health appointment,” the report says.

On the day of the missed appointment, staff at the facility followed standard operating procedures to call the patient and send a letter to reschedule. However, “staff did not complete the required three telephone calls on separate days,” as is VA policy for patients with mental health concerns, because in the new EHR system, the notification of the missed appointment “routed to a request queue and, as a result, schedulers were not prompted to conduct required rescheduling efforts.”

“The OIG concluded that the lack of contact efforts may have contributed to the patient’s disengagement from mental health treatment and ultimately the patient’s substance use relapse and death,” the watchdog wrote in the report.

However, the inspector general also attributes the death to other instances of mismanagement by VA staff, such as the failure to effectively evaluate and address the patient’s treatment needs and a lack of “caring communications” as required by the department’s Caring Communication Program. It also concluded that the hospital’s leadership failed to properly share lessons learned about what went wrong in the case.

This isn’t the only incident in which the VA’s new EHR Modernization program — developed by Oracle Cerner — has been tied to veteran deaths. In March 2023, Sen. Richard Blumenthal, D-Conn., disclosed during a Senate Committee on Veterans’ Affairs hearing six incidents of “catastrophic harm” to veterans, four of which resulted in deaths — one in Spokane, Washington, and the others in Columbus, Ohio. It’s unclear if the patient at the focus of the new OIG report is one of the three from Ohio that Blumenthal referenced.

Meanwhile, the VA inspector general released a pair of similar reports last week involving the Oracle Cerner EHR: one that calls out how “scheduling system limitations have caused additional work and redundancies” that could lead to an increased risk of scheduling errors, and another that highlighted “pharmacy-related patient safety issues nationally” due to a software coding error, as well as data transmission issues “that have affected approximately 250,000 new EHR site patients who received care at a legacy EHR site.”

In the latter case, the OIG concluded: “Affected patients have not been notified of their risk of harm and the OIG remains concerned for their safety.”

Rep. Scott Franklin eyeing public-private partnerships as his House AI task force work kicks into gear

Proposed applications of artificial intelligence considered by Congress have included everything from cybersecurity uses to streamlining processes. For Rep. Scott Franklin, R-Fla., the technology also presents a more hyperlocal function: enhancing weather forecasting for agriculture purposes.

“I represent an area that’s a long way from Silicon Valley. I’ve got the largest agricultural district east of the Mississippi [River], and we have a lot of challenges that are facing farmers today. Technology can be at least a partial solution,” Franklin said in an interview with FedScoop. “If AI can help us do better weather prediction, that’s going to have massive implications for agriculture. Even better, hurricane forecasts that allow farmers and growers to respond in a more timely manner with [an] impending storm.”

A member of the new House AI task force, Franklin completed 26 years of military service as a Naval aviator before joining Congress three years ago. Now, the Florida Republican serves on the House Science, Space and Technology Committee and the joint Research and Technology Subcommittee. 

Franklin has introduced legislation to support American businesses’ participation in the establishment of global standards for AI, as well as the Land Grant Research Prioritization Act, which would provide universities with dedicated access to the Department of Agriculture’s grant funding to enhance AI research and mechanization. He spoke with FedScoop recently about his plans for the AI task force, AI-related cybersecurity concerns, the role of the private sector in Congress’s AI work and more.

Editor’s note: The transcript has been edited for clarity and length.

FedScoop: AI has remained largely bipartisan. What are you anticipating as some differences between Democrats and Republicans as the task force members work to produce a report?

Rep. Scott Franklin: It’s going to be interesting to see — I can’t anticipate all of that yet. I know we all have First Amendment concerns, and there’s concerns about either purposeful or unintentional bias built into algorithms that produce outputs that can cut both ways, and I think both sides of the aisle will be concerned about that. … I’ve been hearing … there was a lot of concern about the algorithms are only going to spit out results that are as good as the data that’s fed into them, and is there purposeful or unintentional bias in some of that information?

… We’re all obviously concerned about election integrity and the implications for AI in nefarious hands to influence the elections one way or another. I think we’re all rightfully worried about that. We’ve got a tremendous advantage, I think, now over maybe the rest of the world; we want to make sure we preserve that. That’s obviously, it’s in everyone’s best interest there, so that’s not a political thing either. … We’ll all come at it from different perspectives, which I’m actually interested to hear how my other counterparts think differently on it. 

FS: How can the U.S. lead on that global stage for AI governance?

SF: I think establishing those guardrails is gonna be important, but where those are, and that’s something I still haven’t landed on. I’m interested to hear the people that we’re going to have come in to speak to us and go through our deliberations. We want to be careful not to squelch innovation and be … overly burdensome with regulation. So where’s that fine, Goldilocks spot on this. I think it’s not one of those things that we’re going to just nail from the beginning. I think we’re going to need to probably look at it as a work in progress and try things as the technology evolves, [and] we may realize that we need to make tweaks along the way.

… A concern that I have is that if we don’t lead … and if there’s a void, states are gonna create their own regulations. I think if we’re not careful, we’ll end up with a patchwork of state regulations that are just going to muddy the water and make things a lot more confusing. … The time is of the essence to get busy on this. 

FS: Are you worried about AI uses in cybersecurity threats? Can you share anything that worries you regarding defense and AI?

SF: I think AI is an area that’s going to help us with cybersecurity. I think when there’s so much out there that we need to protect and so many vulnerabilities, I think AI is going to help. But there’s also the other side of that coin: AI is going to enable our adversaries to be much more pervasive in their efforts to try to hack into our systems. … Defense-wise, a question I get a lot is, “do we envision a future where we’re going to have these autonomous machines where a human’s not in the loop making decisions?” I don’t see us getting to that level of AI, at least in a generation. I don’t see where it’s ever going to be turned over to just kill boxes where the machines are on autopilot and the human’s no longer in the loop. 

FS: Do you think that the private sector and industry leaders should be informing Congress, especially as these task force meetings happen?

SF: Yeah, and I think that was something that [former House Speaker] Kevin McCarthy had recognized early on, that whether we want this role or not, that’s coming our way as Congress. … So he was trying to bring people like [OpenAI CEO Sam Altman] and others in to speak to us and just start bringing up a base level of knowledge. But I think the old days, like when you go back to the moon program, the 60s and the Apollo program, so much of that cutting-edge research came from within the government and then went outside in the private sector. It’s completely reversed now. I think if we try to be the guardian of it all — ‘We’ve got all the answers as a government, we’re going to tell you how to do things without a collaborative partnership with private enterprise.’ — it’s going to be a mistake. Because they’re the ones that have invested massively more money into this than the federal government has, and they’re the ones that are innovating. I think there needs to be a voice represented there from the private side, too. 

FS: Is there anything within the new task force or artificial intelligence that you want to share?

SF: Labor is difficult, and that’s one of our biggest challenges. It’s hard work, we don’t have enough labor to pick our crops and do things like that. So we’re trying to automate that. But, if you’re talking about machines that can go in and pick strawberries out of a field and decide, ‘is this one ready to be picked?’ … There’s a lot of artificial intelligence that will need to be applied to that, and we may ultimately be able to fix our labor issues through the use of AI. [There’s] a lot of concern about AI’s gonna put people out of work, and it is going to cause some disruptions and shifts, depending on the area, but I think there are going to be plenty of areas where we need labor, where we have shortages of labor. AI is going to be able to help fix that and we can retrain. 

There’ll be other big initiatives that are going to be necessary to retrain folks and changes in the workforce to re-deploy people into different areas. But I see far more upside and the potential for AI than the downside.

NTIA’s spectrum IT modernization plans have several gaps, GAO reports

As the National Telecommunications and Information Administration embarks on an ambitious project to modernize the systems it uses to manage spectrum, the agency’s efforts are compromised by cost estimates that lack detail, incomplete project schedules, lackluster communication with stakeholders and unestablished performance measures, according to a new congressional watchdog report.

The Government Accountability Office found that although the NTIA’s modernization planning is “aligned with several leading practices,” its shortcomings on the aforementioned points undermine the project’s potential for success at an especially critical time for federal radio-frequency spectrum — a scarce natural resource that is used for everything from satellite communications to navigation systems.

Tasked with managing federal use of spectrum, the NTIA has already identified the IT systems that it aims to modernize and submitted a contract order for acquisition planning support, the GAO notes. Agencies that use spectrum largely rely on NTIA-provided IT, though a handful — such as the Department of Defense and the Federal Aviation Administration — supplement their own IT with the NTIA’s systems. 

The NTIA has checked several boxes in its modernization efforts, including creating an implementation team with designated leadership, securing a funding source, setting outcome-oriented goals, and establishing processes for reporting progress to management and adapting plans. Before the NTIA can progress to the design phase of its modernization project, however, it “has to complete a number of activities including finalizing a concept of operations, assessing alternatives, and developing project management plans,” the GAO reported. 

NTIA officials told the GAO that they’re working on cost estimates ahead of an upcoming milestone review with two IT investment review boards, and MITRE has been enlisted as part of the estimation process. The GAO report pushes NTIA to follow the watchdog’s cost estimating and assessment guide and pursue independent cost estimates as it moves forward. 

Meanwhile, the agency has not completed a schedule for the modernization project “because other activities were in progress” and “according to the Schedule Guide, the master schedule should include the entire required scope of effort,” NTIA officials told the GAO.

From a communications standpoint, NTIA officials said they’re working on a stakeholder management plan that would formalize their coordinating processes with other agencies. At the time of the GAO’s review, however, “NTIA had not finalized its stakeholder management plan or any documentation that demonstrates its policies and procedures for how it will facilitate coordination, including how frequently NTIA would hold meetings,” the watchdog said.

Finally, NTIA told the GAO that performance measures hadn’t yet been established because the agency was waiting for MITRE to finish its analysis of alternative project options; officials added that there is no timeline for completing that assessment. 

“We acknowledge that pending NTIA’s selection and approval of a preferred alternative, the project’s acquisition strategy and implementation could vary,” the GAO stated. “However, knowing the project’s goals and what needs to be measured to achieve those goals should drive how NTIA implements the project.” 

The GAO delivered four recommendations to the NTIA, calling on the Office of Spectrum Management to address the deficiencies in cost estimates, scheduling, stakeholder management and performance measures for IT modernization. 

“Fully incorporating these practices — such as establishing performance measures to demonstrate progress and developing a schedule that includes the entire project — into NTIA’s activities could benefit the modernization effort as NTIA enters the next phases of this high-profile IT investment,” the GAO concluded.