CISA’s chief data officer: Bias in AI models won’t be the same for every agency
As chief data officer for the Cybersecurity and Infrastructure Security Agency, Preston Werntz has made it his business to understand bias in the datasets that fuel artificial intelligence systems. With a dozen AI use cases listed in CISA’s inventory and more on the way, one especially conspicuous data-related realization has set in.
“Bias means different things for different agencies,” Werntz said during a virtual agency event Tuesday. Bias that “deals with people and rights” will be relevant for many agencies, he added, but for CISA, the questions become: “Did I collect data from a number of large federal agencies versus a small federal agency [and] did I collect a lot of data in one critical infrastructure sector versus in another?”
Internal gut checks of this kind are likely to become increasingly important for chief data officers across the federal government. CDO Council callouts in President Joe Biden’s AI executive order cover everything from the hiring of data scientists to the development of guidelines for performing security reviews.
For Werntz, those added AI-related responsibilities come with an acknowledgment that “bias-free data might be a place we don’t get to,” making it all the more important for CISA to “have that conversation with the vendors internally about … where that bias is.”
“I might have a large dataset that I think is enough to train a model,” Werntz said. “But if I realize that data is skewed in some way and there’s some bias … I might have to go out and get other datasets that help fill in some of the gaps.”
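To make that kind of skew check concrete, here is a minimal sketch, in Python, of how an analyst might measure whether a training dataset over-represents one critical infrastructure sector before deciding to seek additional data. The file name, column name and 50% threshold are assumptions for illustration, not CISA's actual schema or criteria.

```python
# Hypothetical sketch: measure how much of a training dataset comes from
# each critical infrastructure sector. The "sector" column and the 50%
# flag threshold are illustrative assumptions, not CISA's schema.
import csv
from collections import Counter

def sector_share(path: str, column: str = "sector") -> dict[str, float]:
    """Return each sector's share of rows in a CSV training set."""
    with open(path, newline="") as f:
        counts = Counter(row[column] for row in csv.DictReader(f))
    total = sum(counts.values())
    return {sector: n / total for sector, n in counts.items()}

if __name__ == "__main__":
    shares = sector_share("training_data.csv")  # hypothetical file
    for sector, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        flag = "  <-- possible over-representation" if share > 0.5 else ""
        print(f"{sector}: {share:.1%}{flag}")
```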
Given the high-profile nature of agency AI use cases — and critiques that inventories are not fully comprehensive or accurate — Werntz said there’s an expectation of additional scrutiny on data asset purchases and AI procurement. As CISA acquires more data to train AI models, that will have to be “tracked properly” in the agency’s inventory so IT officials “know which models have been trained by which data assets.”
Adopting “data best practices and fundamentals” and monitoring for model drift and other potential problems is also top of mind for Werntz, who emphasized the importance of performance and security logging. That comes back to having an awareness of AI models’ “data lineage,” especially as data is “handed off between systems.”
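As a rough illustration of what “tracked properly” might look like, the sketch below keeps a simple lineage record linking each model to the data assets it was trained on, so questions like “which models were trained on this dataset?” can be answered later. The class and identifier names are hypothetical, not CISA's actual inventory design.

```python
# Minimal, hypothetical lineage record: which models were trained on which
# data assets. Not CISA's inventory schema; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    model_id: str
    trained_on: list[str] = field(default_factory=list)  # data asset IDs

class LineageInventory:
    def __init__(self) -> None:
        self._models: dict[str, ModelRecord] = {}

    def register_training(self, model_id: str, data_asset_id: str) -> None:
        """Record that a model was trained (or retrained) on a data asset."""
        record = self._models.setdefault(model_id, ModelRecord(model_id))
        if data_asset_id not in record.trained_on:
            record.trained_on.append(data_asset_id)

    def models_trained_on(self, data_asset_id: str) -> list[str]:
        """Answer the lineage question: which models used this data asset?"""
        return [m.model_id for m in self._models.values()
                if data_asset_id in m.trained_on]

# Example usage with made-up identifiers:
inventory = LineageInventory()
inventory.register_training("threat-triage-v2", "energy-sector-telemetry-2023")
inventory.register_training("phishing-classifier", "energy-sector-telemetry-2023")
print(inventory.models_trained_on("energy-sector-telemetry-2023"))
```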
Beyond CISA’s walls, Werntz said he’s focused on sharing lessons learned with other agencies, especially when it comes to how they acquire, consume, deploy and maintain AI tools. He’s also keeping an eye out for technologies that will support data-specific efforts, including those involving tagging, categorization and lineage.
“There’s a lot of onus on humans to do this kind of work,” he said. “I think there’s a lot of AI technologies that can help us with the volume of data we’ve got.” CISA wants “to be better about open data,” Werntz added, making more of it available to security researchers and the general public.
The agency also wants its workforce to be trained on commercial generative AI tools, with some guardrails in place. As AI “becomes more prolific,” Werntz said internal trainings are all about “changing the culture” at CISA to instill more comfort in working with the technology.
“We want to adopt this. We want to embrace this,” Werntz said. “We just need to make sure we do it in a secure, smart way where we’re not introducing privacy and safety and ethical kinds of concerns.”
Scientists must be empowered — not replaced — by AI, report to White House argues
The team of technologists and academics charged with advising President Joe Biden on science and technology is set to deliver a report to the White House next week that emphasizes the critical role that human scientists must play in the development of artificial intelligence tools and systems.
The President’s Council of Advisors on Science and Technology voted unanimously in favor of the report Tuesday following a nearly hourlong public discussion of its contents and recommendations. The delivery of PCAST’s report will fulfill a requirement in Biden’s executive order on AI, which called for an exploration of the technology’s potential role in “research aimed at tackling major societal and global challenges.”
“Empowerment of human scientists” was the first goal presented by PCAST members, with a particular focus on how AI assistants should play a complementary role to human scientists, rather than replacing them altogether. The ability of AI tools to process “huge streams of data” should free up scientists “to focus on high-level directions,” the report argued, with a network of AI assistants deployed to take on “large, interdisciplinary, and/or decentralized projects.”
AI collaborations on basic and applied research should be supported across federal agencies, national laboratories, industry and academia, the report recommends. Laura H. Greene, a Florida State University physics professor and chief scientist at the National High Magnetic Field Laboratory, cited the National Science Foundation’s Materials Innovation Platforms as an example of AI-centered “data-sharing infrastructures” and “community building” that PCAST members envision.
“We can see future projects that will include collaborators to develop next-generation quantum computing qubits, wholesale modeling, whole Earth foundation models” and an overall “handle on high-quality broad ranges of scientific databases across many disciplines,” Greene said.
The group also recommended that “innovative approaches” be explored on how AI assistance can be integrated into scientific workflows. Funding agencies should keep AI in mind when designing and organizing scientific projects, the report said.
The second set of recommendations from PCAST centered on the responsible and transparent use of AI, with those principles employed in all stages of the scientific research process. Funding agencies “should require responsible AI use plans from researchers that would assess potential AI-related risks,” the report states, matching the principles called out in the White House’s AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework.
Eric Horvitz, chief scientific officer at Microsoft, said PCAST’s emphasis on responsible AI use means putting forward “our best efforts to making sure these tools are used in the best ways possible and keeping an eye on possible downsides, whether the models are open source or not open source models. … We’re very optimistic about the wondrous, good things we can expect, but we have to sort of make sure we keep an eye on the rough edges.”
The potential for identifying those “rough edges” rests at least partially in the group’s third recommendation of having shared and open resources. PCAST makes its case in the report for an expansion of existing efforts to “broadly and equitably share basic AI resources.” There should be more secure access granted to federal datasets to aid critical research needs, the report noted, with the requisite protections and guardrails in place.
PCAST members included a specific callout for an expansion of NSF’s National Secure Data Service Demonstration project and the Census Bureau’s Federal Statistical Research Data Centers. The National Artificial Intelligence Research Resource should also be “fully funded,” given its potential as a “stepping-stone for even more ambitious ‘moonshot’ programs,” the report said.
AI-related work from the scientists who make up PCAST won’t stop after the report is edited and posted online next week. Bill Press, a computer science and integrative biology professor at the University of Texas at Austin, said it’s especially important now in this early developmental stage for scientists to test AI systems and learn to use them responsibly.
“We’re dealing with tools that, at least right now, are ethically neutral,” Press said. “They’re not necessarily biased in the wrong direction. And so you can ask them to check these things. And unlike human people who write code, these tools don’t have pride of ownership. They’re just as happy to try to reveal biases that might have incurred as they are to create them. And that’s where the scientists are going to have to learn to use them properly.”
404 page: the error sites of federal agencies
Infusing a hint of humor or a dash of “whimsy” in government websites, including error messages, could humanize a federal agency to visitors. At least that’s how the National Park Service approaches its digital offerings, including its 404 page.
“Even a utilitarian feature, such as a 404 page, can be fun — and potentially temper any disappointment at having followed a link that is no longer active,” an NPS spokesperson said in an email to FedScoop. “Similar to our voice and tone on other digital platforms, including social media, our main goal is to always communicate important information that helps visitors stay safe and have the best possible experience.”
A 404 page is what appears when a server cannot locate a page or resource at the requested URL. That can happen for a number of reasons: a misspelled URL, a page that no longer exists, or a page that was moved without a redirect. Rather than leave visitors staring at a generic error, many entities with websites — including state and local governments — have gotten creative, alerting users to the problem while having a bit of fun, and the same is true of some federal agencies.
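For readers curious about the mechanics, here is a minimal sketch of a custom 404 handler using Python's Flask framework. It is purely illustrative, since federal sites run on a variety of stacks: the application intercepts the “not found” error and returns a friendlier page while still sending the correct 404 status code.

```python
# Illustrative only: a custom 404 handler in Flask. Real agency sites use a
# variety of platforms, but the pattern is the same: catch the error and
# return a helpful page while preserving the 404 status code.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Welcome to the example agency site."

@app.errorhandler(404)
def page_not_found(error):
    body = (
        "<h1>Lost in the woods?</h1>"
        "<p>The page you requested doesn't exist, may have moved, "
        "or the link contained a typo.</p>"
        '<p><a href="/">Return to the homepage</a> or try the search box.</p>'
    )
    # Returning 404 keeps the response honest for browsers and search crawlers.
    return body, 404

if __name__ == "__main__":
    app.run(debug=True)
```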
While 404 pages might seem like a silly or trivial corner of the federal government’s use of technology, there has been a significant push in the Biden administration, specifically out of the Office of Management and Budget, to enhance the user experience of federal agencies’ online presence, with a focus on accessibility.
NPS’s spokesperson said the agency strives to make its website “as user-friendly as possible” and that it has “processes in place” to make sure links are working.
Currently, the park service’s site has a revolving 404 page that showcases several different nature-themed images, with puns or quotes alongside information on how to get back on the right track for whatever online adventure a visitor seeks.
NPS said that it doesn’t have any plans to update its error page, “but we’re always working to better understand our users and to improve the user experience of NPS.gov and all our digital products.”
So, until further notice, visitors can still see an artistic rendering of a bear — complete with a relevant pun — if they get a little turned around on NPS’s site.
NPS isn’t alone in walking the line between informing the public about website errors and showcasing a bit of humor. The Federal Bureau of Prisons, for one, told FedScoop in an email that it “seeks to optimize the user experience in performance, access and comprehension.”
“The design of the FBOP’s 404 page was meant to be both functional and informative; by combining imagery with text, we educate the user as to the nature of a 404 error beyond standard system language and provide explanations as to why the error occurred,” Benjamin O’Cone, a spokesperson for FBOP, said in an email to FedScoop.
Unlike other agencies’ pages, the FBOP’s 404 imagery isn’t tied directly to the bureau’s mission. Instead, it offers something a bit more meta than the others — referring to the 404 page as a “door that leads to nowhere.”
“While the Federal Bureau of Prisons (FBOP) seeks to ensure a fully responsive and evolving website, we recognize that there may be occasions where search engine indexing is outdated and may contain links to expired pages,” O’Cone said.
Similarly, NASA has a specific area of its 404 page that shares information about its updated, or “improved,” site, with an option to look at a sitemap and submit feedback. “Rockets aren’t the only thing we launch,” the agency muses.
The page itself is equally creative, telling visitors that the “cosmic object you were looking for has disappeared beyond the horizon,” against a backdrop of outer space.
Other websites, like the National Institute of Standards and Technology’s site, may not have artistic renderings or out-of-this-world visuals, but NIST instead shares a joke centered around the agency’s area of interest.
As NIST releases significant frameworks and updated guidance for different areas of federal technology use and deployment, it only makes sense that the agency refers to its error page as a request that isn’t standard.
While this collection of websites represents just a handful that add a creative touch to error messages, many other government entities’ error pages lack even the basic information and guidance these agencies provide.
For example, see the Department of Energy, which simply states that “the requested page could not be found” and offers no further clue as to what a user could be experiencing.
Oracle approved to handle government secret-level data
Oracle has added its name to the short list of cloud vendors approved to handle classified, secret-level data for the federal government.
The company on Monday announced that three of its classified, air-gapped cloud regions received accreditation from the Department of Defense to handle workloads at the secret level — what the department refers to as Impact Level 6 (IL-6).
The achievement comes after Oracle last August also earned a Top Secret/Sensitive Compartmented Information accreditation from the intelligence community. With both that and the latest secret-level cloud authorization, Oracle is approved to handle government information at any classification level in the cloud.
“America’s warfighters must have the world’s preeminent technology and our taxpayers insist that technology is delivered at competitive costs. Oracle is bringing both to the Department of Defense’s Secret networks,” Rand Waldron, vice president of Oracle, said in a statement. “Technology no longer sits outside the mission; technology is a part of the mission. In austere locations with limited communication, and in massive secure data centers, Oracle is bringing our best capabilities to serve the men and women that defend the U.S. and our Allies.”
While the news most directly benefits the DOD, which is expanding its use of cloud in the classified space and at the edge through its Joint Warfighting Cloud Capability, it ultimately puts Oracle on a level playing field with its top competitors in the federal cloud space — Amazon, Google and Microsoft, all of which earned secret and top-secret accreditations ahead of Oracle. Google announced its accreditations at the secret and top-secret levels just two weeks earlier.
Notably, those are the same companies Oracle is vying against for task orders under the DOD’s $9 billion JWCC cloud contract. Those companies, along with IBM, also hold spots on the intelligence community’s multibillion-dollar Commercial Cloud Enterprise (C2E) contract, which requires work at the secret and top-secret levels as well.
Generative AI could raise questions for federal records laws
The Department of Homeland Security has been eager to experiment with generative artificial intelligence, raising questions about what aspects of interactions with those tools might be subject to public records laws.
In March, the agency announced several initiatives that aim to use the technology, including a pilot project that the Federal Emergency Management Agency will deploy to address hazard mitigation planning, and a training project involving U.S. Citizenship and Immigration Services staff. Last November, the agency released a memo meant to guide the agency’s use of the technology. A month later, Eric Hysen, the department’s chief information officer and chief AI officer, told FedScoop that there’s been “good interest” in using generative AI within the agency.
But the agency’s provisional approval of a few generative AI products — which include ChatGPT, Bing Chat, Claude 2, DALL-E 2 and Grammarly, per a privacy impact assessment — calls for closer examination with regard to federal transparency. Specifically, an amendment to OpenAI’s terms of service uploaded to the DHS website addresses whether content generated with the tool constitutes federal records, along with referencing freedom of information laws.
“DHS processes all requests for records in accordance with the law and the Attorney General’s guidelines to ensure maximum transparency while protecting FOIA’s specified protected interests,” a DHS spokesperson told FedScoop in response to several questions related to DHS and FOIA. DHS tracks its FOIAs in a public log. OpenAI did not respond to a request for comment.
“Agency acknowledges that use of Company’s Site and Services may require management of Federal records. Agency and user-generated content may meet the definition of Federal records as determined by the agency,” reads the agreement. “For clarity, any Federal Records-related obligations are Agency’s, not Company’s. Company will work with Agency in good faith to ensure that Company’s record management and data storage processes meet or exceed the thresholds required for Agency’s compliance with applicable records management laws and regulations.”
Generative AI may introduce new questions related to the Freedom of Information Act, according to Enid Zhou, senior counsel at the Electronic Privacy Information Center, a digital rights group. She pointed to nuances related to “agency and user-generated content,” since the DHS-OpenAI clause doesn’t make clear whether user prompts, the outputs produced by the AI system, or both count as records. Zhou also pointed to records management and data storage as a potential issue.
“The mention of ‘Company’s record management and data storage processes’ could raise an issue of whether an agency has the capacity to access and search for these records when fulfilling a FOIA request,” she said in an email to FedScoop. “It’s one thing for OpenAI to work with the agency to ensure that they are complying with federal records management obligations but it’s another when FOIA officers cannot or will not search these records management systems for responsive records.”
She added that agencies could also try shielding certain outputs of generative AI systems by citing an exemption related to deliberative process privilege. “Knowing how agencies are incorporating generative AI in their work, and whether or not they’re making decisions based off of these outputs, is critical for government oversight,” she said. “Agencies already abuse the deliberative process privilege to shield information that’s in the public interest, and I wouldn’t be surprised if some generative AI material falls within this category.”
Beryl Lipton, an investigative researcher at the Electronic Frontier Foundation, argued that generative AI outputs should be subject to FOIA and that agencies need a plan to “document and archive its use so that agencies can continue to comply properly with their FOIA responsibilities.”
“When FOIA officers conduct a search and review of records responsive to a FOIA request, there generally need to be notes on how the request was processed, including, for example, the files and databases the officer searched for records,” Lipton said. “If AI is being used in some of these processes, then this is important to cover in the processing notes, because requesters are entitled to a search and review conducted with integrity.”
White House hopeful ‘more maturity’ of data collection will improve AI inventories
An expansion of the process for agencies’ AI use case inventories outlined in the Office of Management and Budget’s recent memo will benefit from “clearer directions and more maturity of collecting data,” Deputy Federal Chief Information Officer Drew Myklegard said.
Federal CIO Clare Martorana has “imbued” the idea of “iterative policy” within administration officials, Myklegard said in an interview Thursday with FedScoop at Scoop News Group’s AITalks. “We’re not going to get it right the first time.”
As the inventories, which were established under a Trump-era executive order, enter their third year of collection, Myklegard said agencies have a better idea of what they’re buying, and communication — as well as the skills for collecting and sorting the data — is improving.
On the same day OMB released its recent memo outlining a governance strategy for artificial intelligence in the federal government, it also released new, expansive draft guidance for agencies’ 2024 AI use case inventories.
Those inventories have, in the past, suffered from inconsistencies and even errors. While they’re required to be published publicly and annually by certain agencies, the disclosures have varied widely in terms of things like the type of information contained, format, and collection method.
Now, the Biden administration is seeking to change that. Under the draft, information about each use case would now be collected via a form, and agencies would be required to post a “machine-readable” comma-separated value (CSV) inventory of public uses to their websites, in addition to other changes. The White House is currently soliciting feedback on that draft guidance, though a deadline for those comments isn’t clear.
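For a sense of what a “machine-readable” CSV inventory involves mechanically, here is a minimal sketch that writes one use case to a CSV file. The column names and the sample entry are assumptions for illustration; the draft guidance defines the actual required fields.

```python
# Illustrative sketch: publish an AI use case inventory as machine-readable CSV.
# Column names and the sample row are hypothetical, not OMB's required fields.
import csv

use_cases = [
    {
        "use_case_name": "Hazard mitigation planning assistant",
        "agency": "Example agency",
        "purpose": "Draft hazard mitigation plan sections for review by staff",
        "stage": "Pilot",
    },
]

with open("ai_use_case_inventory.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(use_cases[0].keys()))
    writer.writeheader()
    writer.writerows(use_cases)
print("Wrote ai_use_case_inventory.csv with", len(use_cases), "use case(s).")
```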
In the meantime, agencies are getting to work on a host of other requirements OMB outlined in the new AI governance memo. According to Myklegard, the volume of comments was the highest the administration had seen on an OMB memo.
“We were really surprised. It’s the most comments we’ve received from any memo that we’ve put out,” Myklegard said during remarks on stage at AI Talks. He added that “between those we really feel like we were able to hear you.”
The memo received roughly 196 public comments, according to Regulations.gov. By comparison, OMB’s previous guidance on the Federal Risk and Authorization Management Program (FedRAMP) process drew 161.
Among the changes in the final version of that memo were several public disclosure requirements, including requiring civilian agencies and the Defense Department to report aggregate metrics about AI uses not published in an inventory, and requiring agencies to report information about the new determinations and waivers they can issue for uses that are assumed to be rights- and safety-impacting under the memo.
Myklegard told FedScoop those changes are an example of the iterative process that OMB is trying to take. When OMB seeks public input on memos, which Myklegard said hasn’t happened often in the past, “we realize areas in our memos that we either missed and need to address, or need to clarify more, and that was just this case.”
Another addition to the memo was encouragement for agencies to name an “AI Talent Lead.” That individual will serve “for at least the duration of the AI Talent Task Force” and be responsible for tracking AI hiring in their agency, providing data to the Office of Personnel Management and OMB, and reporting to agency leadership, according to the memo.
In response to a question about how that role came about, Myklegard pointed to the White House chief of staff’s desire to look for talent internally and the U.S. Digital Service’s leadership on that effort.
“It just got to a point that we felt we needed to formalize and … give agencies the ability to put that position out,” Myklegard said. The administration hopes “there’s downstream effects” of things like shared position descriptions (PDs), he added.
He specifically pointed to the Department of Homeland Security’s hiring efforts as an example of what the administration would like to see governmentwide. CIO Eric Hysen has already hired multiple people with “good AI-specific skillsets” from the commercial sector, which is typically “unheard of” in government, he said.
In February, DHS launched a unique effort to hire 50 AI and machine learning experts and establish an AI Corps. The Biden administration has since said it plans to hire 100 AI professionals across the government by this summer.
“We’re hoping that every agency can look to what Eric and his team did around hiring and adopt those same skills and best practices, because frankly, it’s really hard,” Myklegard said.
Cybersecurity executive order requirements are nearly complete, GAO says
Just a half-dozen leadership and oversight requirements from the 2021 executive order on improving the nation’s cybersecurity remain unfinished by the agencies charged with implementing them, according to a new Government Accountability Office report.
Between the Cybersecurity and Infrastructure Security Agency, the National Institute of Standards and Technology and the Office of Management and Budget, 49 of the 55 requirements in President Joe Biden’s order aimed at safeguarding federal IT systems from cyberattacks have been fully completed. Another five have been partially finished and one was deemed to be “not applicable” because of “its timing with respect to other requirements,” per the GAO.
“Completing these requirements would provide the federal government with greater assurance that its systems and data are adequately protected,” the GAO stated.
Under the order’s section on “removing barriers to threat information,” OMB has only partially incorporated a required cost analysis into its annual budget process.
“OMB could not demonstrate that its communications with pertinent federal agencies included a cost analysis for implementation of recommendations made by CISA related to the sharing of cyber threat information,” the GAO said. “Documenting the results of communications between federal agencies and OMB would increase the likelihood that agency budgets are sufficient to implement these recommendations.”
OMB also was unable to demonstrate to GAO that it had “worked with agencies to ensure they had adequate resources to implement” approaches for the deployment of endpoint detection and response, an initiative to proactively detect cyber incidents within federal infrastructure.
“An OMB staff member stated that, due to the large number of and decentralized nature of the conversations involved, it would not have been feasible for OMB to document the results of all EDR-related communications with agencies,” the GAO said.
OMB still has work to do on logging as well. The agency shared guidance with other agencies on how best to improve log retention, log management practices and logging capabilities but did not demonstrate to the GAO that agencies had proper resources for implementation.
CISA, meanwhile, has fallen a bit short on identifying and making available to agencies a list of “critical software” in use or in the acquisition process. OMB and NIST fully completed that requirement, but a CISA official told the GAO that the agency “was concerned about how agencies and private industry would interpret the list and planned to review existing criteria needed to validate categories of software.” A new version of the category list and a companion document with clearer explanations is forthcoming, the official added.
CISA also has some work to do concerning the Cyber Safety Review Board. The multi-agency board, made up of representatives from the public and private sectors, has felt the heat from members of Congress and industry leaders over what they say is a lack of authority and independence. According to the GAO, CISA hasn’t fully taken steps to implement recommendations on how to improve the board’s operations.
“CISA officials stated that it has made progress in implementing the board’s recommendations and is planning further steps to improve the board’s operational policies and procedures,” the GAO wrote. “However, CISA has not provided evidence that it is implementing these recommendations. Without CISA’s implementation of the board’s recommendations, the board may be at risk of not effectively conducting its future incident reviews.”
Federal agencies have, however, checked off the vast majority of boxes in the EO’s list. “For example, they have developed procedures for improving the sharing of cyber threat information, guidance on security measures for critical software, and a playbook for conducting incident response,” the GAO wrote. Additionally, the Office of the National Cyber Director, “in its role as overall coordinator of the order, collaborated with agencies regarding specific implementations and tracked implementation of the order.”
The GAO issued two recommendations to the Department of Homeland Security, CISA’s parent agency, and three to OMB on full implementation of the EO’s requirements. OMB did not respond with comments, while DHS agreed with GAO recommendations on defining critical software and improving the Cyber Safety Review Board’s operations.
GSA administrator: Generative AI tools will be ‘a giant help’ for government services
Running 150 artificial intelligence pilots while using 132 different generative AI tools and technologies might seem like a lot for any federal agency. So, too, might a yearslong track record of using machine learning, large language models and language processing bots.
But for the General Services Administration, the decision to go all-in on AI wasn’t really up for debate.
“We’re doing this because it’s GSA’s job to have shared services for the government,” GSA Administrator Robin Carnahan said Thursday. “And generative AI tools are going to be a giant help in that.”
Speaking at AIScoop’s AITalks event, Carnahan said GSA is currently operating seven different sandbox environments, and there’s “more to come” across the agency with AI. Fully embracing the technology is a matter of recognizing that public- and private-sector tech leaders are “going to decide whether we’re on the right or wrong side of history on this topic, whether we get it right for the American people,” she said. “If we do, it opens up all kinds of possibilities.”
Exploring those possibilities to the fullest extent comes down to buying “best-in-class AI technologies,” Carnahan said. The agency plans to partner closely with industry, she added, and its IT category management office within the Federal Acquisition Service is in the process of developing an acquisition resource guide for generative AI and specialized computing infrastructure.
“This is a big deal,” Carnahan said, “because procurement officers need to know about these new technologies. A sneak peek of what you’re gonna see in there is going to identify a lot of common challenges. It’s gonna identify use cases. It’s gonna help procurement officers navigate the marketplace so the missions of these agencies can be fulfilled.”
The GSA is also focused on highlighting products that already have FedRAMP approval, part of the newly released roadmap for the federal government’s cloud services compliance program. Carnahan said that the strategy document is aimed at making FedRAMP more scalable, more secure and easier to use.
For any budget-strapped agency considering new AI projects, Carnahan pushed the Technology Modernization Fund as a means to “go outside your budget cycle and get access to funding for these new tools.” TMF is currently soliciting proposals from agencies with ideas for AI projects.
“We expect to see a lot of interest from across the government,” Carnahan said. “If your agency hasn’t thought about using the TMF for your AI proposals, you should do that. Now is the best time for it.”
For the GSA internally, a new Login.gov pilot leveraging facial matching technology best represents the agency’s commitment to “using technology ethically and responsibly and securely for the public good,” Carnahan said. The pilot will help people verify their identities remotely, though the GSA is pledging to minimize data retention and ensure “that personal information is protected and not shared. And it is never sold.”
This next phase of the GSA’s work on the governmentwide single sign-on and identity verification platform, which includes a partnership with the U.S. Postal Service, is emblematic of what the agency views as its mission to deliver secure and inclusive products. And although there are “precarious uncharted waters ahead” when it comes to full-scale adoption of AI tools and systems, Carnahan is bullish on the government’s prospects.
“We know that by working together through our government teams, industry teams, that we can get to the other side,” she said. “The American people are counting on us to get it right. There is no time to waste. So let’s all get to work.”
State Department encouraging workers to use ChatGPT
The State Department is encouraging its workforce to use generative AI tools, having rolled out a new internal chatbot to a thousand users this week. The move comes as the agency leans heavily on chatbots and other artificial intelligence-based tools amid the Biden administration’s push for departments to look for use cases for the technology.
“Of our workforce, there are a lot of people who haven’t been playing with ChatGPT,” State Chief Information Officer Kelly Fletcher said Thursday at AIScoop’s AITalks event in Washington, D.C. “We’re encouraging them to do so, but they need training.”
The internal chatbot, which FedScoop previously reported on, is an example of how the agency is weighing how generative AI might help with tasks like summarization and translation. It comes in response to staff demand.
Beyond the chatbot, the State Department is using artificial intelligence for other purposes, including declassifying documents, said Matthew Graviss, the agency’s chief data and artificial intelligence officer. The department is also using open-source models to help create a digital research assistant for certain mandated reports, though he didn’t name those documents.
The department is also using public tools with public information to help synthesize information for ambassadors, Graviss said. “You don’t need FedRAMP this and FISMA that to do that kind of stuff,” he added. “Public tools work.”
Earlier this month, FedScoop reported that the Department of State had removed several references to artificial intelligence use cases in its executive order-required inventory.
Other agencies, meanwhile, have taken a variety of approaches to generative AI, with some more cautious about exploring the technology. Others are setting up sandboxes to explore generative AI tools, working, for instance, with versions of OpenAI tools available on Azure for Government.
Keeping public sector data private and compliant with AI
Public sector and commercial enterprises are ingesting ever-growing amounts of data into their enterprise operations. That’s placing greater demands on enterprise IT executives to ensure the requisite data privacy and security controls are in place and functioning effectively.
At the same time, executives are also being asked to integrate smarter tools into their operations to help their employees work more productively.
At Google Cloud Next ’24, Google Cloud experts Ganesh Chilakapati, director of product management, and Luke Camery, group product manager, were joined by executives from the United Nations Population Fund (UNFPA), UK energy retailer OVO and Air Liquide, a global industrial gases supplier, to discuss how Google Cloud’s generative AI capabilities are helping to achieve those objectives.
How Gemini safeguards your data
Chilakapati and Camery demonstrated some of Gemini’s and Google Workspace’s signature capabilities, emphasizing features such as client-side encryption and comprehensive security frameworks. They also explained what happens to data inside Gemini.
“What is Gemini doing with all this data? How is it providing these customized and targeted responses that are so helpful? Is it learning and training on all of my enterprise data? No, it’s not. All of the privacy commitments we’ve made over the many decades to Google Workspace customers remain true,” said Chilakapati.
“Your data is your data and strictly stays within the workspace data boundary. Your privacy is protected, your content is not used for any other customers, and all of your existing data protections are automatically applied,” he added.
Your data, your trust boundary, managed by you
“Everything happens within your Google Workspace trust boundary. That means you have the ability to control whether or not Gemini stores not only the user prompts but also the generated responses. It’s completely up to you,” added Camery.
“One of the things we’re most excited to announce is the general availability of AI classification for Google Drive. This is a privacy-preserving customer-specific model that you have the option to train on your own specific corpus using your unique data class taxonomy,” said Camery. “Leveraging AI classification and the guarantees that we’ve built into Gemini itself, you can have a virtuous cycle where you are leveraging AI while protecting your organization from emerging threats.”
Unparalleled security: 5 key takeaways
Chilakapati and Camery stressed how the platform is designed to offer unparalleled security, built on the robust foundations of Google’s secure cloud infrastructure:
· Enterprise terms of operation: Gemini operates strictly under enterprise (data processor) terms, not consumer (data controller) terms, even when fetching the latest information from the internet.
· Client-side encryption extension: Enterprises that have traditionally leveraged client-side encryption capabilities, ensuring that sensitive data remains inaccessible, can extend that one step further to protect against access attempts by any unauthorized entity, including other generative AI models.
· Foundation on secure cloud infrastructure: Gemini is constructed on Google’s secure cloud platform, providing a solid foundation to enhance the overall security posture.
· Zero-trust architecture: Zero-trust protocols are built in, not bolted on, not just on Google Cloud’s foundation but all the way up the stack to Gemini itself.
· Sovereign controls integration: Gemini is also seamlessly integrated into an enterprise’s sovereign controls for Google Workspace, ensuring the integrity of data’s digital sovereignty journey wherever you are in the world.
How Gemini AI is boosting productivity for the global workforce
Those features are especially important to customers like Soren Thomassen, director of IT solutions at UNFPA, which operates in 150 countries. Thomassen started using Gemini in May 2023 to make chat functionality available to the fund’s entire user base, and began piloting Gemini Workspace last November.
“As an agency, safety and privacy is paramount. That’s why we were quick at rolling out the Gemini Chatbot because it’s covered by the same rules and the same controls as with Workspace,” Thomassen said.
Thomassen also pointed out how Gemini AI is helping UNFPA’s global workforce work more productively.
“Our users have been using it as a superpower writing assistant,” he said. Project managers spend a lot of time writing proposals. “Instead of starting out with a blank screen…they can at least have a zero-draft that they can start working with. But the feedback that’s on my heart the most was when I hear those who have English as a second language say that Gemini helps them get their ideas across a little bit more clearly. Gemini (helps) everybody write English perfectly. And I think that’s important for a global organization.”
Jeremy Gibbons, Air Liquide’s digital and IT CTO, and Simon Goldsmith, OVO’s enterprise security and platforms lead, echoed Thomassen’s praise for Gemini. Both described how the strategic deployment of Gemini within their organizations helped bolster productivity and ensure security. A recurrent theme throughout their conversation was the transformative potential of AI in reimagining work securely.
“I like to think of Workspace as kind of a walled garden of Eden,” said Goldsmith. “We want to give our people a really amazing experience in that garden… and allow them to experiment. But at the same time, within that safe environment, Workspace gives us the ability to, at an enterprise level, do the sensitive detective and corrective control work.”
Learn more about how Google Public Sector can help your organization “Kickstart your generative AI journey.”
This article was produced by Scoop News Group and sponsored by Google Public Sector. Google Public Sector is an underwriter of AI Week.