Biden administration working on ‘enhancing’ AI use case reporting, Martorana says

The Biden administration’s work to improve agencies’ artificial intelligence use case inventories includes making them more searchable, the government’s top IT official said Tuesday.

“We’re working really hard to make sure that we’re enhancing those use cases … with metadata so that we can search them and really interrogate them, rather than just collect them and broadcast them — really to get key learnings from those,” Federal CIO Clare Martorana told reporters at a Federal CIO Council symposium Tuesday. 
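Martorana did not describe a specific schema, but a minimal, hypothetical sketch helps picture what searching metadata-tagged inventory entries could look like. The record fields and tags below are illustrative assumptions and are not drawn from OMB or CIO Council guidance.

```python
# Illustrative only: hypothetical metadata-tagged use case records and a
# simple tag filter. Field names and tags are assumptions, not taken from
# any federal guidance.
from dataclasses import dataclass, field


@dataclass
class AIUseCase:
    agency: str
    name: str
    summary: str
    tags: set = field(default_factory=set)  # e.g. {"safety-impacting", "forecasting"}


INVENTORY = [
    AIUseCase("Agency A", "Document triage", "Routes incoming records for review",
              {"nlp", "back-office"}),
    AIUseCase("Agency B", "Wildfire forecasting", "Predicts extreme climate-driven events",
              {"forecasting", "safety-impacting"}),
]


def search(inventory, required_tags):
    """Return use cases whose metadata includes every requested tag."""
    return [uc for uc in inventory if set(required_tags) <= uc.tags]


for uc in search(INVENTORY, {"safety-impacting"}):
    print(f"{uc.agency}: {uc.name}")
```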

The White House has previously indicated the inventories will be more central to understanding how agencies are using the technology going forward. In fact, draft Office of Management and Budget guidance that corresponded to President Joe Biden’s AI executive order proposed expanding the inventories with information about safety- and rights-impacting AI, the risks of uses, and how those risks are being managed.

“Federal agencies have special responsibility to get AI governance right, and we believe this policy will continue our global leadership,” Martorana said of that guidance in a keynote address earlier in the day.

The draft guidance was released in November shortly after Biden’s AI order and would establish a framework for agencies to carry out the administration’s policies for the budding technology. It included, among other things, requirements for agencies to designate chief AI officers — a step agencies have already begun taking — and to expand existing reporting on agency AI uses.

Martorana, while talking to reporters, said public comment was “critical” to the development process for the guidance, and noted that equity and transparency were common themes in the comments received from interested parties.

With respect to transparency, Martorana pointed to the administration’s desire to improve agencies’ AI use case inventories, which were required initially under a Trump-era executive order and later enshrined into statute.

As of September, agencies had reported more than 700 public uses of AI, demonstrating broad interest in the technology across the federal government. Those inventories, which are required annually, have so far been inconsistent in format and in the information they include.

Election integrity and ‘digital liberty’ are top of mind for House AI task force member Kat Cammack

As one of the younger members of this Congress, Rep. Kat Cammack grew up in both the analog and digital eras, a fact that has led her to jokingly refer to her office as “effectively House IT,” where other lawmakers come for tech-related help, such as resetting their iPhones.

The Florida Republican will have a chance to burnish her tech-focused reputation as one of 12 members of her party appointed to the House AI task force. Cammack, who serves on the House Energy and Commerce Committee’s subcommittees on Communications and Technology and on Innovation, Data and Commerce, said in an interview with FedScoop that the House AI task force will have work to do this election season.

The 2024 cycle “is going to be America’s first real up-close encounter with AI in a bad way,” Cammack said, calling on Congress to first approach AI as a “philosophical product” and engage with private sector leaders. Cammack, a member of the House Rural Broadband and Blockchain Caucuses, added that she “would love to see” the Federal Election Commission put together “top concerns” and work to establish guardrails around AI where it has the authority to do so, with Congress asked to fill in the remaining gaps. 

“These administration officials have tremendous latitude in how they can react in real time, and I feel like sometimes you have agencies that overreact and you have some that stand down,” Cammack said. “This is not a situation where we want them to stand down. We want folks to go into the polling booth and feel like they’re very confident that there’s not going to be interference, that they haven’t been lied to and that everything is as it seems.”

Cammack pointed to past elections in which voters received text messages claiming that a candidate was dropping out of the race. She also acknowledged the ease with which bad actors can remove watermarks from AI content, pointed to a proposed solution for AI-generated content that has the potential to confuse voters, and spoke about the rise in the use of deepfake technology.

The FEC has indicated that it is reviewing public comments on the use of AI in campaign ads and that the agency plans to “resolve the AI rulemaking by early summer,” according to the Washington Post.

Cammack noted her concerns about the federal government coming in with “a heavy hand” on AI matters and stifling “innovation and development.” She’d like to see private sector providers share with Congress what they are developing and what they envision for AI’s future.

“I don’t want us to overregulate because I’m fearful that that will stamp out innovation. I’m fearful that if you don’t address the philosophical issue in these language models, that we’re gonna see real implications immediately and long term,” Cammack said. “AI is not going anywhere; it’s going to be a very big part of every aspect of our lives for the foreseeable future. So we have to make sure that we’re doing everything right on the front end. … For once, we need to actually force the government to look at private sector and say, ‘tell us what you know so we can be better.’”

Cammack said that when someone asks ChatGPT a question, the answer will reflect natural bias and “we want equal opportunity for people to use these systems with the understanding that there’s not going to be an equal outcome, but it’s going to be a truthful one,” adding that language models need “to be a position of digital liberty versus digital authoritarianism.”

“If we don’t approach the philosophical development of the language model, the brain, with a mindset of those basic values and tenets — equal opportunity, freedom, liberty, diversity of thought, expression [and] constitutional protections — then we are going to end up with what we currently have today,” Cammack said. “Which is, a system that will write a poem about Nancy Pelosi but not Donald Trump, where it paints conservatives in a harsh light but a glowing light when it comes to a Democrat.”

Export-Import Bank taking open-minded approach on the use of generative AI tools

The Export-Import Bank of the United States is among the agencies opting for a more permissive approach to generative AI tools, giving employees the same kind of access to them that the independent agency provides to the general internet, according to its top IT official.

“We do not block AI any more than we block general internet access,” Howard Spira, chief information officer of Ex-Im, said during a Thursday panel discussion hosted by the Advanced Technology Academic Research Center (ATARC).

Spira said the agency is approaching generative tools with discussions about accountability and best practices, such as not inputting private information into tools like ChatGPT or other public large language models. “But frankly, that is just an evolution of policies that we’ve had with respect to just even search queries on the general internet,” Spira said.

He emphasized the importance of context in AI usage, noting that the agency — whose mission is facilitating U.S. exports — makes the kinds of decisions that it believes constitute “a relatively low-risk environment” for AI. Most of the agency’s AI work involves “embedded AI” within its existing environments, such as those for cyber and infrastructure monitoring.

“We’re also actually encouraging our staff to play with this,” Spira said.

His comments come as agencies across the federal government have grappled with how to address the use of generative AI tools by employees and contractors. Those policies have so far varied by agency depending on their individual needs and mission, according to FedScoop reporting.

While some agencies have taken a permissive approach like Ex-Im, others are approaching the tools with more caution.

Jennifer Diamantis, special counsel to the chief artificial intelligence officer in the Securities and Exchange Commission’s Office of Information Technology Strategy and Innovation, said during the panel that the SEC isn’t jumping into third-party generative AI tools yet, citing unknowns and risks. 

There is, however, a lot of exploration, learning, safe testing and making sure guardrails are followed, Diamantis said. She added that while the agency is exploring the technical side, there is also an opportunity right now to explore the process, policy and compliance side of things to make sure they’re ready to manage risks if and when they do move forward with the technology. 

Diamantis, who noted she wasn’t speaking for the commission or commissioners, encouraged people to use this time to focus not just on the technology, “but also, what do you need in terms of governance? What do you need in terms of updating your lifecycle process? What do you need in terms of upskilling, training for staff?”

In addition to exploration, the SEC is also educating its staff on AI. Diamantis said those efforts have included trainings — such as a recent one on responsible AI — and having outside speakers, as well as establishing an AI community of practice and a user group.

Spira similarly noted that Ex-Im has working groups addressing AI and is including discussions about the technology in its continuous strategy process. This year, that process for its IT portfolio included having “the portfolio owners identify potential use cases that they were interested in exploring” and the identification of embedded use cases, he said.

Tony Holmes, another panelist and Pluralsight’s director of public sector presales solution consulting for North America, underscored the importance of broad training on AI to build a workforce that isn’t afraid of the technology.

“I know when I talk to people in my organization, when I talk to people at agencies, there are a lot of people that just haven’t touched it because they’re like, ‘we’re not sure about it and we’re a little bit scared of it,’” Holmes said. Exposure, he added, can help those people “understand it’s not scary” and “can be very productive.”

GSA working on corrective action plan following OIG report on ‘noncompliant’ video-conferencing camera purchase

Following scrutiny from both an agency watchdog and Congress for its purchases of Chinese-made video-conference cameras that were susceptible to security vulnerabilities, the General Services Administration said Thursday that it must deliver a corrective action plan to its inspector general’s office by March 25.

In a statement to FedScoop, a GSA spokesperson said the agency has put corrective actions in place and intends to provide the plan to OIG later this month. The spokesperson said the plan will include “enhancements to acquisition processing procedures that ensure that compliance with all applicable laws is precisely documented.”

GSA’s Office of the Inspector General released a report in January detailing the agency’s purchase and use of Chinese-manufactured video-conference cameras with “known security vulnerabilities” that were not compliant with the Trade Agreements Act of 1979, or TAA.

At the time of the original report, OIG shared that GSA records indicated that the non-compliant video cameras had not been updated and remained susceptible to vulnerabilities. Out of 210 active cameras, the OIG report noted that 37 had not been updated with the most recent software version, which was from September 2022. Additionally, 29 of the cameras “had not been updated to the June and July 2022 software versions that addressed the prior security vulnerabilities,” the report found.

The GSA spokesperson told FedScoop that as of Friday, the agency “has 172 OWL devices that are approved for use around our environment. All 172 devices have been updated to the latest software version.” The spokesperson added that the GSA has not found any additional security vulnerabilities and that it has a “strong zero trust architecture to prevent cyber threats and bad actors.”

“GSA is confident that the use of the OWL video conference cameras has been and remains secure under our security protocols,” the spokesperson said. “GSA took several measures to assure the ongoing security of these devices, including limiting their connectivity to the internet, discontinuing a subset of the cameras that did not meet our standards and conducting ongoing threat monitoring, patching and maintenance.”

The agency’s Office of Digital Infrastructure Technologies (IDT) “misled a contracting officer with egregiously flawed information” to purchase 150 video cameras as part of a pilot project overseen by the GSA’s Federal Acquisition Services’ Federal Systems Integration and Management Center (FEDSIM), according to the report.

GSA Chief Information Officer David Shive and Deputy Inspector General Robert Erickson testified Thursday before the House Subcommittee on Cybersecurity, Information Technology, and Government Innovation regarding the audit’s findings. Shive said he was unaware of “any evidence suggesting that GSA IT personnel sought to intentionally mislead acquisition.”

“As a result of this audit, GSA has put in place new processes and improved documentation requirements,” Shive said. “The team has strengthened our alternatives of analysis documentation … [allowing] for possible solutions to be adequately analyzed and locked down once the analysis is completed.”

In response to a question from subcommittee Chairwoman Nancy Mace, R-S.C., about possible intentions behind the purchase, Erickson said that the OIG’s report did not find any evidence of ill intent, referring to the purchase as “gross incompetence.”

The OIG recommended four action items for the GSA in its original report, including to “return, or otherwise dispose of, previously purchased TAA-noncompliant cameras.” The agency partially concurred with that point, stating that a subset of cameras that did not meet GSA standards was discontinued and that it is “confident that the use of the detailed video conference cameras are secure under our current security protocols.”

The headline of this story was updated March 4, 2024, to better characterize the OIG’s findings.

Senate bill calls on NIST to boost work on emerging tech standards

A newly introduced bipartisan Senate bill seeks to improve U.S. participation in international standards-setting bodies for emerging technologies by creating a pilot program that would fund the hosting of standard-setting meetings in the United States. 

Amid growing concern that U.S. companies and technologies are getting outmuscled by China in standard-setting bodies, the Promoting United States Leadership in Standards Act of 2024 from Sens. Mark Warner, D-Va., and Marsha Blackburn, R-Tenn., calls on the National Institute of Standards and Technology and the State Department to bolster U.S. participation in the creation and implementation of standards for AI and other emerging tech. 

“In recent years, the Communist Party of China has asserted their dominance in the global technology space, and as their status has risen, our authority and influence has fallen,” Warner said in a statement. “This legislation clearly outlines steps we must take to reestablish our leadership and ensure that we are doing all we can to set the global standards for critical and emerging technologies.”

According to a press release, the legislation aims to preserve U.S. influence when it comes to technical requirements as well as “values, such as openness, safety, and accessibility, embedded in emerging technologies.”

It’s the Chinese Communist Party’s “mission to undermine the U.S. and our interests around the globe by exploiting our deficiencies,” Blackburn said in a statement. “As they ramp up their efforts to dominate global standards for emerging technologies, the U.S. must be a global leader in innovation, and that includes setting standards that reflect our interests and values.”

The legislation directs NIST to deliver two reports to Congress: one covering current U.S. participation in the development of standards for AI and other emerging technologies and another assessing a pilot program that would award $10 million in grants over four years for the hosting of standards meetings in the U.S.

That second report, which would be due after the pilot program’s third year, would also detail expenses, identify the recipients of the grants, and highlight the geographic distribution of participants at the standards meetings. 

Finally, the bill calls on NIST’s director to launch a web portal that enables stakeholders to “navigate and actively engage in international standardization efforts,” in addition to featuring information on how to contribute to activities related to standards for AI and emerging technologies. 

“Nurturing open and global participation in standardization activities, especially when hosted in the United States, can address shared technical challenges while advancing American technology leadership,” Morgan Reed, President of ACT | The App Association, said in a statement. “This legislation represents a decisive step in the right direction.”

Federal leaders on accelerating the mission with AI and security

Artificial intelligence holds tremendous potential to help federal agencies augment security and workforce capacity to improve mission outcomes. In a recent executive interview series, government leaders discuss a number of programs and strategies their agencies are embracing to take full advantage of these new capabilities responsibly and ethically.

The series, “Accelerating the Mission with AI and Security,” produced by Scoop News Group for FedScoop and underwritten by Google for Government, invited leaders to share where they hope to see the most significant return on investment for AI implementation in the coming year.

Artificial intelligence to meet core mission needs

Workforce augmentation was a highly discussed use case for AI implementation in the series.

FEMA’s Office of the Chief Financial Officer is one office that has been strategically working on a generative AI tool to improve mission efficiency.

Christopher Kraft, assistant administrator for financial systems in FEMA’s OCFO, shared that his office is developing a proprietary generative AI tool, owned and operated by FEMA and DHS, to generate draft responses to budget requests that his team can review for accuracy.

Department of Labor CISO Paul Blahusch discussed how his agency is leaning into AI with a dedicated AI office inside the Office of the CIO that helps develop and implement tools and techniques to streamline workflows, which can translate into cost avoidance and improved programs. He pointed to three AI implementation areas his agency is focusing on: cybersecurity, back-office support and helping constituents access services more quickly.

At agencies like the U.S. Patent and Trademark Office, the use of AI as an augmented assistant has advanced even further over the past three years, according to CIO Jamie Holcombe, with each examiner now working alongside an augmented intelligence system.

“So, during its searches, it can bring up not just one thing but a myriad of things that pertain to the uniqueness of that patent application or trademark registration. So, you really have to think that the examiners don’t want one thing, they want a plethora of things to say, ‘yes,’ it is unique and novel, or ‘no, it’s not,’” Holcombe explains. “AI and generative AI has helped in that regard because each examiner has a customized version that just applies to them.”

Many leaders see generative AI as a way to improve standard workflow procedures. Department of Commerce CIO Andre Mendes said that for tasks that are incredibly onerous, his department is looking at how AI can be used to break through some of the clutter.

“In HR processes, for example, position descriptions are not really that exciting, but at the end of the day, consume an enormous amount of people and time and resources, and where we can, I think, leverage AI to dramatically improve and optimize those environments,” he explained.

Improved security for federal data

Agencies like U.S. Citizenship and Immigration Services (USCIS) are far along in their cloud migration strategies, which means that data security strategies must now shift to account for an explosion of digital resources.

“All the immigration data that has to be cataloged and identified and tagged is a monstrous task. And frankly, there is no easy button to push when you’re talking about the volume and scale of data that we have, and the amount of change that it goes through on even a daily basis,” shared USCIS CISO Shane Barney.

“We have, from a cybersecurity perspective, in my plans I am building, what we’re referring to as a security integration platform, which is an open source-based platform, and it has a whole AI/machine learning piece built into it based on open-source principles and practices, as well as some software platforms that will be integrated into the security program. And more on the threat hunting side of things where we’re looking for those abnormal changes in the environment that could indicate a breach.”
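Barney did not detail how the platform’s AI and machine learning piece works. As a rough, hypothetical illustration of the kind of anomaly-focused threat hunting he describes, the sketch below flags event counts that deviate sharply from a recent baseline; the metric, window size and threshold are assumptions chosen for illustration only.

```python
# Hypothetical illustration of anomaly-based threat hunting: flag days whose
# event counts deviate sharply from a rolling baseline. The metric, window
# size and threshold are illustrative assumptions, not USCIS's actual design.
from statistics import mean, stdev


def flag_anomalies(daily_counts, window=14, z_threshold=3.0):
    """Return indices of days whose count is an outlier vs. the prior window."""
    anomalies = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(daily_counts[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies


# Example: a sudden spike in failed-login events on the final day is flagged.
counts = [20, 22, 19, 21, 23, 20, 18, 22, 21, 19, 20, 23, 22, 21, 95]
print(flag_anomalies(counts))  # -> [14]
```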

His agency’s leadership is waiting on further White House guidance on AI implementation but is working on foundational principles that can help the organization move forward with implementation plans quickly, he said, pointing to an open cybersecurity schema framework USCIS has been working on.

“I see it as the future. It’s the way we have to handle it; the future of cybersecurity is data,” said Barney.

This sentiment was echoed by other leaders who want to improve how they manage, store and analyze data to strengthen their agencies’ security posture. Centers for Medicare and Medicaid Services (CMS) CISO Robert Wood said that his agency is building a security data lake to minimize data silos.

According to Wood, if the data is properly structured, generative AI models could play a more significant role in empowering the government workforce to ask plain-language questions that yield actionable insights from that data and to react more quickly to security threats and vulnerabilities.
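Wood did not describe a specific implementation. One common pattern matching what he alludes to is handing a plain-language question and the data lake’s schema to a generative model and running the structured query it produces; the sketch below is hypothetical, and the ask_llm placeholder, table and column names are assumptions, not any real CMS system or API.

```python
# Hypothetical sketch: turn a plain-language question into SQL over a
# security data lake table, run it, and return rows. ask_llm() and the
# table/column names are placeholders, not a real CMS system or API.
import sqlite3

SCHEMA = "findings(id, system, severity, status, opened_at)"


def ask_llm(prompt):
    """Placeholder for a call to a generative model that writes SQL."""
    # A real implementation would call a model; here we return a canned query.
    return ("SELECT system, COUNT(*) FROM findings "
            "WHERE severity = 'high' AND status = 'open' GROUP BY system")


def answer(question, conn):
    sql = ask_llm(f"Schema: {SCHEMA}\nWrite one SQL query answering: {question}")
    return conn.execute(sql).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE findings(id, system, severity, status, opened_at)")
    conn.executemany("INSERT INTO findings VALUES (?, ?, ?, ?, ?)",
                     [(1, "portal", "high", "open", "2024-02-01"),
                      (2, "portal", "low", "closed", "2024-02-02"),
                      (3, "api", "high", "open", "2024-02-03")])
    print(answer("Which systems have open high-severity findings?", conn))
```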

Other participants who shared their insights in this series included:

This video series was produced by Scoop News Group for FedScoop and sponsored in part by Google for Government.

DOE seeks information on AI uses for climate change mitigation, grid resilience

The Department of Energy is seeking information on a variety of artificial intelligence-related topics tied to the White House’s AI executive order, which calls on the agency to leverage the technology’s potential for everything from mitigating climate change risks to securing electric power.

In a document scheduled to post Friday on the Federal Register, the DOE said it is looking for information that will aid the agency in delivering its public report on AI, due within 180 days of the executive order’s issuance. The order called on the DOE’s report to detail the potential AI could have to “improve planning, permitting, investment, and operations for electric grid infrastructure and to enable the provision of clean, affordable, reliable, resilient, and secure electric power to all Americans.”

The request for information features a lengthy callout for responses on how AI can be used to “strengthen the nation’s resilience against climate change, including opportunities to help predict, prepare for, and mitigate climate-driven risk.”

The “non-exhaustive list of topics” it seeks comments on includes the forecasting of extreme climate-driven events such as hurricanes and wildfires, projections of long-term climate impacts on resource levels, and how to improve and expedite numerical weather prediction models.

AI’s potential to “improve the security and reliability of grid infrastructure and operations and their resilience to disruptions” is another callout in DOE’s RFI, which welcomes contributions from private actors, public-private partnerships and all levels of government.

The DOE said it is interested specifically in how AI can improve grid reliability through predictive maintenance for utilities, more efficient balancing of load and supply, better demand management for technologies like EV charging and smart devices, and improved flexibility of power systems models and related connected software.

On the topic of grid resiliency, the DOE is interested in the effects of climate hazards on electricity infrastructure, climate mapping for resilience and adaptation outputs, and AI-enabled threat detection and “real-time self-healing infrastructure.” 

Finally, the DOE wants to know how AI can “improve planning, permitting, and investment in the grid and related clean energy infrastructure.” Leveraging the technology to expedite siting and permitting, improve project planning, validate and monitor current projects, and enhance the compatibility of datasets are among the uses of interest to the agency.

The DOE will solicit information for 30 days following Friday’s publication. Comments can be submitted electronically or mailed to the agency’s Washington, D.C. headquarters.  

FBI, DHS lack information-sharing strategies for domestic extremist threats online, GAO says

The FBI and Department of Homeland Security’s information-sharing efforts on domestic extremist threats with social media and gaming companies lack an overarching strategy, a Government Accountability Office report found, raising questions about the effectiveness of the agencies’ communications to address violent warnings online.

In response to the proliferation in recent years of content on social media and gaming platforms that promotes domestic violent extremism, the FBI and DHS have taken steps to increase the flow of information with those platforms. But “without a strategy or goals, the agencies may not be fully aware of how effective their communications are with companies, or how effectively their information-sharing mechanisms serve the agencies’ overall missions,” the GAO said.

For its report, the GAO requested interviews with 10 social media and gaming companies whose platforms were most frequently connected with domestic violent extremism terms, based on searches of articles and reports. Discord, Reddit and Roblox agreed to participate, as did a social media company and a game publisher, both of which asked to remain anonymous.

The platforms reported using a variety of measures to identify content that promotes domestic violent extremism, including machine learning tools to flag posts for review or automatic removal, reporting by users and trusted flaggers, reviews by human trust and safety teams, and design elements that discourage users from committing violations.
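The GAO describes those measures only at a high level. As a hypothetical illustration of how a machine-learning flagging step might route content to automatic removal, human review or no action, consider the sketch below; the thresholds and score source are placeholders, not any platform’s actual policy.

```python
# Hypothetical illustration of score-based routing for flagged content:
# high-confidence violations are removed automatically, mid-range scores go
# to a human trust and safety queue. Thresholds are placeholder assumptions.
def route_post(violation_score, remove_at=0.95, review_at=0.6):
    """Map a classifier's violation probability to a moderation action."""
    if violation_score >= remove_at:
        return "auto_remove"
    if violation_score >= review_at:
        return "human_review"
    return "allow"


for score in (0.99, 0.7, 0.2):
    print(score, "->", route_post(score))
```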

Once those companies have identified a violent threat, there are reporting mechanisms in place with both DHS and the FBI. “However, neither agency has a cohesive strategy that encompasses these mechanisms, nor overarching goals for its information-sharing efforts with companies about online content that promotes domestic violent extremism,” the GAO noted.

The agencies are engaged in multiple other efforts to stem the tide of domestic extremist threat content. The FBI, for example, is a participant in the Global Internet Forum to Counter Terrorism, and in the United Nations’ Tech Against Terrorism initiative. The agency also employs a program manager dedicated to communications with social media companies, conducts yearly meetings with private sector partners and operates the National Threat Operations Center, a centralized entity that processes tips.

DHS, meanwhile, has participated in a variety of non-governmental organizations aimed at bolstering information-sharing, in addition to providing briefings to social media and gaming companies through the agency’s Office of Intelligence and Analysis. 

There are also joint FBI-DHS efforts in progress, including the issuing of products tied to the online threat landscape, and a partnership in which the FBI delivers briefings, conducts webinars and distributes informational materials on various threats to Domestic Security Alliance Council member companies. 

Though the FBI and DHS are clearly engaged in myriad efforts to stem domestic violent extremist threats made on social media and gaming platforms, the GAO noted that implementing strategies and setting specific goals should be considered “a best practice” across agencies.

With that in mind, the GAO recommended that the FBI director and the I&A undersecretary both develop a strategy and goals for information-sharing on domestic violent extremism with social media and gaming companies. DHS said it expects to complete the strategy by June.

AI advisory committee wants law enforcement agencies to rethink use case inventory exclusions

There’s little debate that facial recognition and automated license plate readers are forms of artificial intelligence used by police. So the omissions of those technologies in the Department of Justice’s AI use case inventory late last year were a surprise to a group of law enforcement experts charged with advising the president and the National AI Initiative Office on such matters.

“It just seemed to us that the law enforcement inventories were quite thin,” Farhang Heydari, a member of the National AI Advisory Committee’s Law Enforcement Subcommittee, said in an interview with FedScoop.

Though the DOJ and other federal law enforcement agencies in recent weeks made additions to their use case inventories — most notably with the FBI’s disclosure of Amazon’s image and video analysis software Rekognition — the NAIAC Law Enforcement Subcommittee wanted to get to the bottom of the initial exclusions. With that in mind, subcommittee members last week voted unanimously in favor of edits to two recommendations governing excluded AI use cases in Federal CIO Council guidance.

The goal in delivering updated recommendations, committee members said, is to clarify the interpretations of those exemptions, ensuring more comprehensive inventories from federal law enforcement agencies.

“I think it’s important for all sorts of agencies whose work affects the rights and safety of the public,” said Heydari, a Vanderbilt University law professor who researches policing technologies and AI’s impact on the criminal justice system. “The use case inventories play a central role in the administration’s trustworthy AI practices — the foundation of trustworthy AI is being transparent about what you’re using and how you’re using it. And these inventories are supposed to guide that.” 

Office of Management and Budget guidance issued last November called for additional information from agencies on safety- or rights-impacting uses — an addendum especially relevant to law enforcement agencies like the DOJ. 

That guidance intersected neatly with the NAIAC subcommittee’s first AI use case recommendation, which permitted agencies to “exclude sensitive AI use cases,” defined by the Federal CIO Council as those “that cannot be released practically or consistent with applicable law and policy, including those concerning the protection of privacy and sensitive law-enforcement, national security, and other protected interests.”

Subcommittee members said during last week’s meeting that they’d like the CIO Council to go back to the drawing board and make a narrower recommendation, with more specificity around what it means for a use case to be sensitive. Every law enforcement use of AI “should begin with a strong presumption in favor of public disclosure,” the subcommittee said, with exceptions limited to information “that either would substantially undermine ongoing investigations or would put officers or members of the public at risk.”

“If a law enforcement agency wants to use this exception, they have to basically get clearance from the chief AI officer in their unit,” Jane Bambauer, NAIAC’s Law Enforcement Subcommittee chair and a University of Florida law professor, said in an interview with FedScoop. “And they have to document the reason that the technology is so sensitive that even its use at all would compromise something very important.”

It’s no surprise that law enforcement agencies use technologies like facial or gait recognition, Heydari added, making the initial omissions all the more puzzling. 

“We don’t need to know all the details, if it were to jeopardize some kind of ongoing investigation or security measures,” Heydari said. “But it’s kind of hard to believe that just mentioning that fact, which, you know, most people would probably guess on their own, is really sensitive.”

While gray areas may still exist when agencies assess sensitive AI use cases, the second AI use case exclusion targeted by the Law Enforcement Subcommittee appears more cut-and-dried. The CIO Council’s exemption for agency usage of “AI embedded within common commercial products, such as word processors or map navigation systems” resulted in technologies such as automated license plate readers and voice spoofing often being left on the cutting-room floor.

Bambauer said very basic AI uses, such as autocomplete or some Microsoft Edge features, shouldn’t be included in inventories because they aren’t rights-impacting technologies. But common commercial AI products might not have been listed because they’re not “bespoke or customized programs.”

“If you’re just going out into the open market and buying something that [appears to be exempt] because nothing is particularly new about it, we understand that logic,” Bambauer said. “But it’s not actually consistent with the goal of inventory, which is to document not just what’s available, but to document what is actually a use. So we recommended a limitation of the exceptions so that the end result is that inventory is more comprehensive.”

Added Heydari: “The focus should be on the use, impacting people’s rights and safety. And if it is, potentially, then we don’t care if it’s a common commercial product — you should be listing it on your inventory.” 

A third recommendation from the subcommittee, which was unrelated to the CIO Council exclusions, calls on law enforcement agencies to adopt an AI use policy that would set limits on when the technology can be used and by whom, as well as who outside the agency could access related data. The recommendation also includes several oversight mechanisms governing an agency’s use of AI.

After the subcommittee agrees on its final edits, the three recommendations will be posted publicly and sent to the White House and the National AI Initiative Office for consideration. Recommendations from NAIAC — a collection of AI experts from the private sector, academia and nonprofits — have no direct authority, but Law Enforcement Subcommittee members are hopeful that their work goes a long way toward improving transparency with AI and policing.

“If you’re not transparent, you’re going to engender mistrust,” Heydari said. “And I don’t think anybody would argue that mistrust between law enforcement and communities hasn’t been a problem, right? And so this seems like a simple place to start building trust.”

US, partner countries preach open, secure and resilient principles for 6G systems

The U.S. and nine other countries are calling on other governments and organizations to support and uphold shared principles concerning open, secure and resilient 6G wireless communication.

In a joint statement issued Monday with Australia, Canada, the Czech Republic, Finland, France, Japan, the Republic of Korea, Sweden and the United Kingdom, the White House announced six principles regarding 6G: trusted technology that is protective of national security; affordability, sustainability and global connectivity; secure, resilient and protective of privacy; global industry-led and inclusive standard setting and international collaborations; cooperation to enable open and interoperable innovation; and spectrum and manufacturing. 

“Telecommunications must be open, free, interoperable, reliable, resilient, and secure to be trusted by citizens and countries alike,” Anne Neuberger, deputy national security advisor for cyber and emerging technologies, said in a statement to FedScoop. “That is why the U.S. and nine allies just launched joint principles to guide the development of 6G. These principles aren’t applicable to just one future ‘G.’ They matter now, as technology and networks evolve.”

The spectrum and manufacturing principle zeroes in on 6G technologies that “use spectrum efficiently and incorporate spectrum sharing mechanisms by design to coexist with incumbent service providers,” while also promoting “a globally competitive market along the [information and communications technology] value chain and in all elements of the compute and connectivity continuum, with multiple software and hardware suppliers.”

“We believe this to be an indispensable contribution towards building a more inclusive, sustainable, secure and peaceful future for all, and call upon other governments, organizations and stakeholders to join us in supporting and upholding these principles,” the statement said. “Collaboration and unity are key to resolving pressing challenges in the development of 6G, and we hereby declare our intention to adopt relevant policies to this end in our countries, to encourage the adoption of such policies in third countries and to advance research and development and standardization of 6G networks.”

This announcement follows the White House’s November 2023 release of the National Spectrum Strategy, which acknowledged that the “demand for spectrum access is growing rapidly” — including for the advancement of 6G technologies.

The strategy states that demand for 5G and 6G broadband networks is growing and “the United States is uniquely positioned to embrace a whole-of-nation approach to advance the state of technology for dynamic forms of sharing.”

This story was updated Feb. 28, 2024, with comments from Anne Neuberger.