Gundeep Ahluwalia leaving Department of Labor after 8 years as CIO
Gundeep Ahluwalia is stepping down from his role as Department of Labor CIO after nearly eight years on the job.
In a letter to staff, obtained by FedScoop, Ahluwalia wrote that Friday will be his last day at the department, which he joined in August 2016 as deputy CIO. After roughly two months in that job, he was named CIO in October 2016.
“As I think back over the last eight years, there have been so many successes and milestones that propelled DOL’s digital infrastructure by leaps and bounds above other agencies,” Ahluwalia wrote to staff. “When the team chose the slogan and decided to be the ‘best in federal service’ almost seven years ago, many of us had our doubts, but as the saying goes, ‘When the going gets tough, the tough get going!’ and that is exactly what happened. Our successes in creating novel funding mechanisms, TMF wins, legendary TechDay, creating resilient infrastructure, websites, applications, mobile applications, data infrastructure, cybersecurity, AI, and emerging technologies are all things I can talk about for days!”
In his note, Ahluwalia called the department’s biggest accomplishment during his time its “ability to attract talent and create leaders.”
“We have a formidable leadership factory,” he said. “I have never seen so many talented, competent, and diverse group of professionals in one place. The team is always thinking of ways to get it done, constantly innovating to improve delivery at a lower cost.”
Ahluwalia, a winner of multiple FedScoop 50 awards, oversaw the Department of Labor’s IT portfolio during a period of major transformation, including the department’s work to modernize unemployment benefits delivery with states during the early days of the COVID-19 pandemic.
Under his leadership, DOL has also been a frequent winner of Technology Modernization Fund awards — five times in total for projects such as data modernization, cybersecurity, faster processing of permanent labor certifications and streamlining the Integrated Federal Employee Compensation System.
Ahluwalia spoke with FedScoop earlier this year at the 2024 AWS Innovate Day about how new technologies like artificial intelligence could impact Labor’s multi-faceted mission.
“Every time I say gen AI, my team makes me put a dollar in the jar now,” he joked. “But I think AI — and all the other capabilities that we’ve had: [robotic process automation], blockchain — we need to be mission-focused rather than thinking of it from a technology perspective, and that’s what my team is trying to do right now.”
Ahluwalia pointed to examples like workers’ compensation claims and injury reports submitted to the Occupational Safety and Health Administration as mission areas that can be made more efficient with AI.
Ahluwalia did not reveal what he would do after leaving the Labor Department.
Federal News Network first reported Ahluwalia’s departure.
NIST releases three encryption standards to prepare for future quantum attacks
The National Institute of Standards and Technology has officially released three new encryption standards that are designed to fortify cryptographic protections against future cyberattacks by quantum computers.
The finalized standards come roughly eight years after NIST began efforts to prepare for a not-so-far-off future where quantum computing capabilities can crack current methods of encryption, jeopardizing crucial and sensitive information held by organizations and governments worldwide. Those quantum technologies could appear within a decade, according to a RAND Corp. article cited by NIST in the Tuesday announcement.
“Quantum computing technology could become a force for solving many of society’s most intractable problems, and the new standards represent NIST’s commitment to ensuring it will not simultaneously disrupt our security,” Laurie E. Locascio, director of the Department of Commerce’s NIST and undersecretary of commerce for standards and technology, said in a statement. “These finalized standards are the capstone of NIST’s efforts to safeguard our confidential electronic information.”
The new standards provide computer code and instructions for implementing algorithms for general encryption and digital signatures — algorithms that serve as authentication for an array of electronic messages, from emails to credit card transactions.
For general encryption, the finalized Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM) is a standard built around comparatively small encryption keys that two parties can exchange quickly and easily, according to the release. Meanwhile, for digital signatures, NIST released the final Module-Lattice-Based Digital Signature Algorithm (ML-DSA) as the primary standard and the Stateless Hash-Based Digital Signature Algorithm (SLH-DSA) as a secondary line of defense based on different math.
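The standards specify the algorithms themselves rather than any particular software, but a minimal sketch of the two workflows is possible with liboqs-python, an open-source library that tracks the NIST selections. This is an illustration, not part of the standards, and it assumes a liboqs build recent enough to expose the “ML-KEM-768” and “ML-DSA-65” parameter-set names (older builds used “Kyber768” and “Dilithium3”).

```python
# Minimal sketch of the two workflows using liboqs-python ("pip install
# liboqs-python"), one open-source implementation -- not the NIST reference.
# Assumes a build exposing the "ML-KEM-768" / "ML-DSA-65" identifiers.
import oqs

# General encryption: ML-KEM key encapsulation.
with oqs.KeyEncapsulation("ML-KEM-768") as receiver:
    public_key = receiver.generate_keypair()  # receiver publishes this key

    with oqs.KeyEncapsulation("ML-KEM-768") as sender:
        # The sender derives a shared secret plus a ciphertext to transmit.
        ciphertext, sender_secret = sender.encap_secret(public_key)

    # The receiver recovers the same shared secret from the ciphertext.
    assert receiver.decap_secret(ciphertext) == sender_secret

# Digital signatures: ML-DSA.
message = b"an electronic message to authenticate"
with oqs.Signature("ML-DSA-65") as signer:
    signer_public_key = signer.generate_keypair()
    signature = signer.sign(message)

with oqs.Signature("ML-DSA-65") as verifier:
    assert verifier.verify(message, signature, signer_public_key)
```

In practice, the shared secret produced by the KEM step would then key a conventional symmetric cipher such as AES for the bulk of the encryption.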
“We encourage system administrators to start integrating them into their systems immediately, because full integration will take time,” Dustin Moody, a NIST mathematician who leads the post-quantum standardization project, said in a statement included in the release.
Future preparedness
The standards are based on four algorithms that NIST selected in 2022 after a six-year competition to craft new quantum-ready encryption methods. Those algorithms were CRYSTALS-Kyber, CRYSTALS-Dilithium, SPHINCS+ and FALCON. In 2023, NIST released draft versions of the three standards that were finalized Tuesday to solicit feedback. According to the agency, the standards haven’t substantially changed since then.
Additionally, while the newly finalized standards are based on the CRYSTALS-Kyber, CRYSTALS-Dilithium, and SPHINCS+ algorithms, another draft standard for digital signatures based on FALCON is on the way. That standard will be called the fast-Fourier transform over NTRU-Lattice-Based Digital Signature Algorithm (FN-DSA), NIST’s announcement said.
The agency is also in the process of evaluating two other sets of algorithms for general encryption and digital signatures “that could one day serve as backup standards,” NIST said.
During a White House event Tuesday, Locascio said there will be scenarios in which the first three standards might be insufficient, which is why NIST and its global partners will keep working on generating and testing additional algorithms.
“We will ensure a strong pool of alternates and backups to provide resiliency and redundancy in the case of any yet unknown leaps in quantum mathematics,” Locascio said. “Now, while we know that these leaps and technological advances are inevitable, we do not wait for that future. We act now.”
Scott Crowder, vice president of quantum adoption and business development at IBM, which with its collaborators developed three of the four algorithms NIST selected, told FedScoop in an interview ahead of the announcement that releasing the standards now serves a couple of aims.
The first is mitigating risks from bad actors collecting information now that they’ll try to decrypt when quantum computing is fully realized. For secure government work and industry areas where security is key, “that data has long-term value,” Crowder said. The second is giving organizations time to implement them, he said.
In a way, the situation resembles the two-digit-year software bug that was projected to wreak havoc in 2000, known as Y2K. Whereas developers then needed to find and change every place in code that used a two-digit year, organizations now need to find their cryptographic deployments and change them, Crowder said. The difference, he said, is that cryptography isn’t static and must keep evolving to meet new threats.
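As a toy illustration of that discovery step, a script along these lines could flag source files that reference quantum-vulnerable primitives. The keyword list and file patterns are assumptions for the example; a real inventory would also have to cover certificates, TLS configurations, libraries and hardware.

```python
# Toy illustration of a cryptographic inventory pass: flag source lines that
# mention primitives whose security rests on factoring or discrete logs,
# which a sufficiently large quantum computer is expected to break.
# Patterns and scope are illustrative only.
import re
from pathlib import Path

LEGACY = re.compile(r"\b(RSA|ECDSA|ECDH|DSA|DiffieHellman)\b")

def inventory(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line number, line) for every suspect reference."""
    hits = []
    for path in Path(root).rglob("*.py"):  # real scans cover far more than .py
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if LEGACY.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for path, lineno, line in inventory("."):
        print(f"{path}:{lineno}: {line}")
```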
The IBM-developed algorithms are CRYSTALS-Kyber, which is now the general encryption standard ML-KEM; CRYSTALS-Dilithium, which is now the primary digital signature algorithm ML-DSA; and FALCON, the forthcoming standard that will be called FN-DSA. The other finalized algorithm, which was called SPHINCS+ and is now SLH-DSA, was co-developed by a researcher who was later hired by IBM.
Crowder said that following the release of the NIST standards, compliance agencies around the world are likely to follow suit.
Government, industry security
In addition to the standards, other work to prepare the U.S. government for post-quantum cryptography is also underway.
The National Security Agency, for example, released its Commercial National Security Algorithm Suite 2.0 in 2022, outlining requirements for future quantum-resistant algorithms in national security systems. That same year, the Office of Management and Budget directed agencies to inventory cryptography on certain systems and estimate funding needed for migration to post-quantum standards.
Based on those estimates, the White House said the approximate funding needed to make the transition between 2025 and 2035 would be $7.1 billion. That estimate was part of a congressionally mandated report released last month that outlined a plan for migration to post-quantum cryptography, or PQC, standards in the federal government. OMB is required by statute to release guidance on agency migration plans within one year of the first NIST standards being published.
At the White House event Tuesday, Anne Neuberger, deputy national security advisor for cyber and emerging technology, said that through the process of inventorying cryptography, the government learned that it would be “wise” to do it in a more automated way. Neuberger also highlighted a need for prioritization.
“We’re learning that it’s important when you do those inventories to identify what are the most sensitive systems? What’s the most high-value data? Indeed, what’s the data that you’d care if an adversary could use a quantum computer in nine or 10 years to decrypt it?” Neuberger said. “We have lots of that in the intelligence community. We have lots of that in our Department of Defense.”
On an IBM press call ahead of the announcement, Lily Chen, a mathematician and NIST fellow, said “the PQC standardization process has been a community effort.” NIST worked with cryptographic researchers, industry and government for evaluation and feedback on the algorithms. That work with industry will need to continue as organizations make the transition, she said.
Similar to the government, some companies have also started looking at post-quantum standards ahead of the NIST announcement to ensure the safety of their information.
Richard Marty, the chief technology officer at LGT Financial Services, who also spoke on the IBM call, said it isn’t an option for his company to write off the arrival of quantum computers capable of breaking current encryption, sometimes called Q-day, as an industry or global problem that it will deal with later.
“We want to be ready for this, and we want to implement solutions as early as possible to specifically also address the threat of ‘harvest now and decrypt later,’” Marty said. “The less old our data is once Q-day happens, the better is our standing in the market, and we can keep up that trust with our clients.”
State Department aims for total of 52 bureau chief data officers in next five years
Nearly two years after launching its bureau chief data officer program, the Department of State is seeing success and aiming to almost quadruple the size of its current cohort, Farakh Khan, director of communications, culture and training at the agency’s Center for Analytics, told FedScoop in a recent interview.
The bureau CDO, or BCDO, program officially began in September 2022 as a way to improve the department’s use, governance, and management of data. So far, Khan said, it’s led to better knowledge sharing and collaboration among bureaus and advanced the department’s efforts on artificial intelligence.
Currently, the department has hired 14 people in BCDO roles, accounting for roughly 25% of its bureaus, Khan said. But more positions are expected to be added. Bureaus will be developing and submitting plans to request additional roles over the next five calendar years, she said.
“We’re aiming for a total of 52 across the department,” Khan said, which would mean having an official in all the bureaus and major offices.
The program sits within the Center for Analytics and is overseen by the department’s chief data officer, who is also its chief AI officer. According to the National Defense Authorization Act for Fiscal Year 2024 — which established and expanded the program — its goals are to promote “data fluency and data collaboration;” “increased data analytics use in critical decisionmaking areas;” and “data integration and standardization;” as well as to increase efficiencies in the department.
“The goal of this is really to harness the power of data to advance diplomatic efforts, improve operational efficiency, and ensure transparency and accountability in the Department of State’s activities,” Khan said.
While AI wasn’t the initial focus of the BCDO program, the program has grown during the boom of a technology that relies heavily on vast amounts of data. As a result, the officials — who are responsible for data and AI initiatives in their bureaus — are also playing an important role in the department’s AI efforts in support of its international affairs work.
“We won’t have AI if we don’t have good data,” Khan said. She added that “having a senior data professional in a bureau enables having a subject matter expert and a point of contact for all of these things that are related to data and AI.”
The Department of State has embraced AI over the past several years, releasing a strategy for the technology in October 2023 that included goals to provide broad access to secure AI in the department and use AI to advance diplomacy. Since then, it’s launched an internal AI chatbot, encouraged employees to use generative AI tools, and noted that the technology has freed the department up to focus on what its employees bring to work.
As an example of a BCDO program success story, Khan highlighted the Bureau of International Organization Affairs’ 2023 work to partially automate the quantitative part of an annual report to Congress on U.S. government financial contributions to international organizations. Khan said that improved the quality of the data and saved more than 600 labor hours.
The dashboards created as part of that effort, including information about United Nations financial contributions and personnel, are now also available to all department employees on an internal website, she said. That information has been used by personnel at the department’s overseas posts, for the bureau’s missions to international organizations, and by U.S. participants at the most recent UN General Assembly High-Level Week, Khan said.
That same bureau also developed an application that uses AI “to review draft UN resolutions for problematic language that would undermine UN principles,” Khan said. That use case will save the department time it would typically dedicate to manually finding and flagging that language, and will increase the accuracy of the process, she said.
A lot of those initiatives, Khan said, were led by that bureau’s BCDO.
As far as next steps for the program, the department aims to hire a consistent number of BCDOs each year to reach its goal of eventually having such an official in each bureau in the next five years, she said. And some of those postings could be on the horizon.
“We are currently working with a few bureaus on a plan for hiring in the near future and hope to have our next job posting go live this Fall,” Khan said.
Microsoft’s Azure OpenAI Service lands FedRAMP High authorization
Microsoft’s increasingly ubiquitous artificial intelligence service has gotten the green light for FedRAMP High authorization, the tech giant announced Monday, making the popular AI product available for some of the federal government’s most sensitive datasets.
Azure OpenAI Service’s approval as a service within the FedRAMP High authorization for Azure Government also includes the availability of GPT-4o, an OpenAI model that can be used for natural language understanding and processing, text summarization and classification, sentiment analysis, question answering, conversational agents and more, according to the company.
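For a sense of what using the service looks like in practice, the sketch below calls a GPT-4o deployment through the openai Python package’s AzureOpenAI client. The endpoint, deployment name and API version are placeholders, not values from the announcement; Azure Government resources typically sit under *.azure.us domains rather than the commercial *.azure.com, and agencies should confirm the values for their own authorized resources.

```python
# Hedged sketch of a chat completion against an Azure OpenAI deployment.
# Endpoint, deployment name and API version are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    # Azure Government resources typically live under *.azure.us.
    azure_endpoint="https://my-agency-resource.openai.azure.us",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # assumed; check the versions your resource supports
)

response = client.chat.completions.create(
    model="gpt-4o",  # the *deployment* name configured on the resource
    messages=[
        {"role": "system", "content": "You summarize documents for agency staff."},
        {"role": "user", "content": "Summarize the attached RFP section ..."},
    ],
)
print(response.choices[0].message.content)
```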
Microsoft in February first announced that it had expanded its Azure Government platform to allow agencies to use its Azure OpenAI Service, which at the time included GPT-4, and that it was working to earn FedRAMP High accreditation.
The FedRAMP High designation denotes that the OpenAI services have met a higher security threshold to work with sensitive civilian datasets, including those in the fields of health care, law enforcement, finance and emergency response, among others.
Other potential uses for GPT-4o, according to Microsoft, are: accelerated discovery, or the recognition of patterns and code anomalies to identify vulnerabilities and also offer suggestions to combat those vulnerabilities; augmented cognition, in which the product is paired with large datasets to help agencies spot trends and patterns; and enhanced productivity, including the creation of draft documents such as RFPs.
In a blog post accompanying the announcement, Douglas Phillips, Microsoft’s corporate vice president for Azure Edge + Platform, said the availability of Azure OpenAI Service in government clouds represents the company’s commitment to “enabling government transformation with AI.”
“Along with delivering innovations that help drive missions forward, we make AI easy to procure, easy to access, and easy to implement,” Phillips said. “Microsoft is committed to delivering more advanced AI capabilities across classification levels in the coming months.”
The announcement of Microsoft’s latest FedRAMP authorization comes after the company last week announced an expanded partnership with Palantir to boost offerings of AI capabilities to the intelligence and defense communities. Under the agreement, within Microsoft’s government and classified cloud environments, intelligence and defense officials can now utilize Palantir’s large language models through the Azure OpenAI Service within Palantir’s AI Platform (AIP).
This story was updated Aug. 12, 2024 to correct the name of the author of the Microsoft blog post.
White House, VA launch new website to protect veterans from fraud
The White House and the Department of Veterans Affairs launched a governmentwide website and call center Friday to better protect veterans and their families from scams and fraud attempts.
The new VSAFE.gov website offers veterans, service members and their loved ones a one-stop shop for fraud support and reporting, including resources for prevention, response information and reporting assistance, according to a release. Additionally, the VA stood up a single call line that will route callers with fraud-related questions “to the correct federal agency to address their specific concerns,” the release explains.
VSAFE combines resources from across the federal government — including the Department of Education, the Federal Trade Commission, the Internal Revenue Service, the Office of Management and Budget and others — to ensure there is no “wrong door” for veterans and service members to access tools and information to fight scam attempts.
VA Secretary Denis McDonough said in the release: “This new call center and website are a one-stop-shop for Veterans, service members, and their families to help avoid fraud and scams. We know that more Veterans than ever before are now receiving VA benefits, which sadly means that more bad actors are trying to steal those benefits. That’s why we’re launching these tools: to give these heroes every tool at the federal government’s disposal to protect themselves and their families.”
Coinciding with the launch of the new resources, the Office of Science and Technology Policy also released a five-year interagency strategic plan in support of the Promise to Address Comprehensive Toxics Act, also known as the PACT Act, which was signed into law two years ago.
In November, Reps. Elise Stefanik, R-N.Y., and Mike Bost, R-Ill., introduced the Veterans Scam and Fraud Evasion (VSAFE) Act, which would establish a veterans scam and fraud evasion officer role in the VA. The bill would also provide identity theft resources and promote a “whole-of-government approach to fraud prevention” through federal cross-agency coordination to “effectively field fraud and scam inquiries,” according to a release from Stefanik’s office.
Additionally, in July, 29 legislators signed off on a letter addressed to McDonough to request an update regarding the VA’s actions to defend against scammers and protect veterans who are applying for PACT Act benefits. Members of Congress asked what steps the agency has taken to protect veteran data and privacy from collection and monetization, and whether the VA has worked to ensure a directory of accredited veterans service organizations is placed higher up in search engine results, among other things.
GSA risks exposing systems and data due to weaknesses with RPA program, IG says
The General Services Administration’s robotic process automation (RPA) program is at risk of exposing agency systems and data to bots, and stronger security measures need to be put into place for the program, according to the agency’s inspector general.
In a Tuesday report, GSA’s Office of the Inspector General stated that the agency’s RPA program did not comply with IT security requirements to “ensure bots are operating securely and properly.” The agency did not update system security plans to manage bots’ access, and it removed or modified security requirements instead of addressing the underlying issues, according to the report.
The watchdog found a slew of security issues with the bots, ranging from the agency’s failure to establish a process for removing decommissioned bots’ access to a lack of monitoring and reporting of bot-related activity.
The OIG pointed to an executive guide for the federal RPA community of practice — which is housed within GSA — as a resource agencies should use to employ a secure framework for operating RPA programs with established guardrails.
However, the report says GSA did not follow monitoring requirements within that guidance, which include performing baseline monitoring to “alert RPA program management if a bot was accessing, reading, writing or moving more data than authorized,” conducting weekly log reviews to identify any errors with logic or processing in each bot’s operation, and performing annual bot reviews so the agency could approve each bot’s continued use on agency systems.
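To make the baseline-monitoring idea concrete, a check along these hypothetical lines would compare a bot’s observed activity against its authorized baseline and raise alerts for program management. The record structure and thresholds are invented for illustration, not drawn from GSA’s program.

```python
# Hypothetical baseline-monitoring check: alert when a bot reads or writes
# more records than its authorization allows. Structure and thresholds are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class BotBaseline:
    bot_id: str
    max_reads: int   # records the bot is authorized to read per week
    max_writes: int  # records the bot is authorized to write per week

def check_activity(b: BotBaseline, reads: int, writes: int) -> list[str]:
    """Return alert strings for any activity exceeding the baseline."""
    alerts = []
    if reads > b.max_reads:
        alerts.append(f"{b.bot_id} read {reads} records (authorized {b.max_reads})")
    if writes > b.max_writes:
        alerts.append(f"{b.bot_id} wrote {writes} records (authorized {b.max_writes})")
    return alerts

# Example weekly log review for one bot.
baseline = BotBaseline("invoice-bot-01", max_reads=5_000, max_writes=500)
for alert in check_activity(baseline, reads=12_000, writes=40):
    print("ALERT:", alert)
```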
Additionally, the IG reviewed the security plans for 16 agency systems that bots had access to and found that none had been updated to address how bots were accessing the systems. Seven of the plans did not mention bots at all, and 10 “failed to list and authorize non-person entities’ access to the systems.”
However, GSA pushed back on some of the inspector general’s conclusions. In response to the OIG’s finding that “a bot could erroneously delete or overwrite thousands of records before GSA could even identify that an issue has occurred,” the agency provided a clarification that it would be “technically impossible” for a bot to do that because of the agency’s controls. Additionally, GSA provided a list of comments on the findings, including additional context, updates and further clarifications.
“We do not entirely agree with the findings,” the agency said in response to the findings. “Because there is no federal guidance, as the agency has expanded to the size and scope of the RPA program, GSA has intentionally iterated on our security protocols to address new and emerging challenges in this novel space and is developing the security playbook that is being broadly leveraged across the government. GSA operational processes and capabilities have avoided any RPA-related security incidents to date.”
New FCC rules would require AI disclosures from robocallers
The Federal Communications Commission ratcheted up its fight against AI-generated robocalls and robotexts Wednesday with the proposal of new disclosure rules for the people behind the unwanted calls and messages.
In releasing a notice of proposed rulemaking adopted by the commissioners during the agency’s August open meeting, the FCC said it is seeking public comment on a measure to require callers to disclose up front their use of AI to generate calls and texts.
FCC Chairwoman Jessica Rosenworcel said the agency’s AI-related work has been grounded in transparency, specifically referencing its actions in response to a January robocall featuring an AI-generated imitation of President Joe Biden telling New Hampshire Democratic primary voters to stay away from the polls.
The FCC in May proposed multimillion-dollar fines against the Democratic operative behind the New Hampshire call and the telecom carrier that distributed the calls. The agency also issued a declaratory ruling that robocalls that use voice-cloning technology violate the Telephone Consumer Protection Act.
“If a campaign uses AI to create an ad, as the voter, viewer and listener, you have a right to know,” Rosenworcel said. “The concern about these technology developments is real, and rightfully so. But if we focus on transparency and taking swift action when we find fraud, I think we can look beyond the risks of these technologies and harness the benefits.”
AI disclosures would come at the beginning of a call, provided consumers consented to receiving the call in the first place. The proposed rules would also carve out protections for AI uses that help consumers with disabilities use phone networks, without weakening liability under the Telephone Consumer Protection Act.
Commissioners also voted Wednesday in favor of a series of improvements to the agency’s robocall mitigation database, a tool launched three years ago to help the FCC and its law enforcement partners monitor carriers’ efforts to stop junk robocalls.
The agency seeks comment on how to ensure that filings in the database are current and authenticated, part of what Rosenworcel said is an attempt to make the database “more accurate, effective and secure.”
“We also ask about penalties for false and inaccurate information,” she said. “This is not the only update we are working on to keep this junk off the line.”
Rosenworcel noted a memorandum of understanding signed with the Treasury Department’s Financial Crimes Enforcement Network last week that would give the FCC access to Bank Secrecy Act information, helping it track the entities behind the “flood” of unwanted calls and texts.
“When coupled with the work of the industry traceback group and the 49 state attorneys general partnering with us,” she said, “I think we can make real progress stopping the scammers behind these schemes.”
How enhanced identity data supports AI adoption, strengthens security, and improves public service
The advent of generative AI technologies promises to revolutionize public service delivery and the citizen experience. Most experts agree that the long-term potential of artificial intelligence (AI) depends on building a solid foundation of reliable, readily available, high-quality data.
One area where data quality and readiness play a particularly crucial role for federal, state, and local government agencies is identity management. Maintaining the quality and currency of identity data is essential to enhancing secure service access, meeting citizens’ expectations, and earning their trust.
The technical demands required to manage and leverage identity data properly have grown in complexity and scale as the variety of data parameters has multiplied along with the number of systems that now inject data into the mix.
This endangers the success of AI adoption. With so much on the line, trustworthy identity data management increasingly requires an enterprise-wide focus and commitment to data quality as well as responsible and intelligent AI practices.
Here are some measures agency leaders should consider:
Build a foundation for data literacy
The first step in harnessing AI’s power in government involves cultivating a culture of data literacy and effective data management. This has many dimensions but starts with a top-down and bottom-up commitment to promoting effective data governance and standards. That includes striving toward a single source of truth wherever possible and ensuring that the data agencies gather and manage, and the applications that harness that data, are used and handled responsibly.
Orchestrate data readiness
Data readiness encompasses many factors, from data quality and governance to data security and infrastructure. For AI to function optimally, agencies must invest in comprehensive data management strategies that address and orchestrate attention across these areas. This includes implementing robust data quality controls, establishing clear data governance frameworks, and ensuring data is stored securely and accessible to authorized personnel. The measure of success ultimately is reflected in how consistently services are delivered to the right person, at the right time, every time.
Foster superior data quality
Having suitable systems, controls, and governance in place won’t overcome deficiencies in the data itself. High-quality data has always been instrumental for sound decision-making, but it is crucial to AI applications. When agencies can draw on accurate and reliable data from multiple sources, they can unlock a wealth of benefits, from automating administrative processes to improving decision-making and tailoring services to the specific needs of verified individuals. However, they must also ensure that the AI applications used to augment decisions comport with responsible AI practices. To achieve this, agencies must prioritize data integration and standardization efforts and invest in AI-supported technologies that can help ensure data quality and consistency across the enterprise.
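As a simple sketch of what automated quality checks over identity data can look like, the snippet below profiles a small table for completeness, duplicate identifiers and basic format validity. The field names and rules are illustrative, not a specific agency schema or any particular product’s behavior.

```python
# Illustrative identity-data profiling with pandas: completeness, duplicate
# and format checks. Field names and rules are invented for the example.
import pandas as pd

records = pd.DataFrame({
    "person_id": ["A1", "A2", "A2", "A4"],
    "name": ["Ada Lovelace", "Grace Hopper", "Grace Hopper", None],
    "email": ["ada@example.gov", "grace@example.gov", "grace@example.gov", "not-an-email"],
})

profile = {
    # Completeness: share of non-null values in each field.
    "completeness": records.notna().mean().round(2).to_dict(),
    # Uniqueness: duplicated identifiers point to records needing resolution.
    "duplicate_ids": int(records["person_id"].duplicated().sum()),
    # Validity: a crude format rule for one field.
    "invalid_emails": int((~records["email"].str.contains("@", na=False)).sum()),
}
print(profile)
```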
Utilize AI to combat AI-driven cyber threats
An added benefit of AI enablement is the ability to help combat fraud and cyber threats. According to our latest analysis of 92 billion transactions processed through the LexisNexis Digital Identity Network® throughout 2023, the volume of global human-initiated digital attacks, including digital account takeovers, increased by 19% year-over-year. Attackers are moving quickly to leverage AI, but you can, too. By adopting a holistic AI-fueled risk decision engine like LexisNexis ThreatMetrix for Government, agencies can take a proactive approach to detecting fraud and reducing risk while getting better at distinguishing threats from trusted citizens. What is certain is that staying with older methodologies is a losing proposition.
Leverage identity data management provider relationships
Realizing the full potential of AI in government also means considering expanding relationships with data management solution providers that can supply specialized services, including:
- secure data-sharing tools
- master person indexes and
- automated data profiling and validation solutions.
Agencies may also want to take a fresh look at modernized data integration, standardization, and quality assurance solutions, which can improve decision-making while allowing them to focus on their core mission of serving the public.
These and other measures reflect our experience putting AI to work. For over a decade, AI has been integrated into LexisNexis Risk Solutions technology, where it plays a crucial role in driving innovation and enhancing products and services. LexisNexis Risk Solutions, together with parent company RELX, remains committed to the ethical and responsible use of AI technology, which includes protecting the privacy and security of our systems and working to eliminate bias.
The future of government service delivery lies in the responsible and intelligent use of AI for secure and seamless service access for constituents and optimized operational efficiencies for agencies. By prioritizing data quality, readiness, and security, government agencies can unlock the full potential of these technologies and deliver more efficient, effective, and secure services to the citizens they serve.
Learn more about how LexisNexis Risk Solutions can help your agency with successful AI adoption, risk reduction and enhanced government services through better identity management.
Government benefits information moving to USA.gov in September
Starting next month, U.S. adults looking online to locate benefits they are eligible for will no longer be able to use the government platform that has provided assistance with such inquiries for the past 22 years.
The General Services Administration and the Department of Labor announced Wednesday that USA.gov platforms will absorb the responsibilities of Benefits.gov, in a move to comply with President Joe Biden’s 2021 executive order on customer experience and service delivery. The Spanish version of USA.gov will reflect the English version updates as well.
The migration is part of a broader government-wide effort to make USA.gov into a “federal front door,” a term coined in the executive order.
Newly available services on USA.gov include the benefit finder to determine eligibility and locate benefits programs (though it’s unclear whether this will use the same process), category-based navigation that organizes benefits information into specific types, and new landing pages that offer a one-stop location for information, according to the release.
“We are excited to build off of the years of Benefits.gov expertise and create a seamless benefits experience,” Ann Lewis, the director of the GSA’s Technology Transformation Services, said in the release. “As the front door to government information and services, USAGov is committed to using data-driven and human-centered design to ensure the public has equitable access to the information it needs.”
The CX-focused executive order states that the GSA administrator is required to consolidate content that currently appears on Benefits.gov and move it to the redesigned USA.gov website “from which customers may navigate to all government benefits, services and programs.” Other “appropriate” websites for this process include Grants.gov.
Benefits.gov launched in 2002 to increase public access to information, reduce costs and improve interactions between the public and the government. The website has served more than 220 million people and increased access to more than 1,100 government benefits, according to the release.
“For 20 years, Benefits.gov was a leading example of interagency collaboration, technological innovation and customer experience,” DOL CIO Gundeep Ahluwalia said in the release. “The collaboration with GSA to have USA.gov become the new destination for benefits information demonstrates what’s possible in providing streamlined services for the public.”
Microsoft, Palantir partner to make AI and data tools available for national security missions
The federal intelligence and defense communities are getting access to an array of artificial intelligence and analytics products to support mission-planning in their classified networks, two of the country’s top tech giants said in announcing a new partnership Thursday.
Under the agreement between Palantir and Microsoft, national security leaders will be able to leverage a “first-of-its-kind, integrated suite of technology” to operationalize their missions. In Microsoft’s government and classified cloud environments, intelligence and defense officials can utilize the company’s large language models through the Azure OpenAI Service within Palantir’s AI Platform (AIP), according to the announcement.
“This expanded partnership between Microsoft and Palantir will help accelerate the safe, secure, and responsible deployment of advanced AI capabilities for the U.S. government,” Deb Cupp, president of Microsoft Americas, said in a statement. “Palantir, a leader in delivering actionable insights to government, will now leverage the power of Microsoft’s government and classified clouds and robust Azure OpenAI models to further develop AI innovations for national security missions.”
In addition to the availability of its AI platform, Palantir’s Gotham and Apollo products — a data-driven enterprise mission-planning platform and an operational software deployment control center, respectively — will be installed in Microsoft Azure Government, as well as in the Azure Government Secret (Defense Department Impact Level 6) and Top Secret clouds.
Palantir’s Foundry product, which leverages data integration and ontology capabilities, will also be available in those Microsoft cloud environments, providing mission operators with AI tools to help with everything from logistics to contracting to action planning.
The Palantir Federal Cloud Service, which includes Gotham, Foundry, AIP and other products, will also be authorized for use on Microsoft Azure for IL5 environments, the announcement noted.
The use of all Palantir and Microsoft services included in the deal is contingent on intelligence and defense staffers completing authorization and accreditation requirements as determined by the relevant federal agencies.
“Bringing Palantir and Microsoft capabilities to our national security apparatus is a step change in how we can support the defense and intelligence communities,” Shyam Sankar, Palantir’s chief technology officer, said in a statement. “It’s our mission to deliver this software advantage and we’re thrilled to be the first industry partner to deploy Microsoft Azure OpenAI Service in classified environments.”
News of the partnership comes in the wake of a blockbuster quarter for Palantir in government sales. The company this week reported $371 million in government revenue, a 23% year-over-year increase, and said its trailing 12-month revenue from federal government work surpassed $1 billion for the first time.
Microsoft, meanwhile, is reportedly in line for an expansion of services across all Pentagon components, and continues to be the top heavyweight among government contractors, even as it fends off critics of its cyber practices following a series of security failures.