Five takeaways from the AI executive order’s 180-day deadline

AI talent recruiting is surging, while DOE, USDA, DOL and other agencies issue new AI-related guidance.
President Joe Biden hands Vice President Kamala Harris the pen he used to sign an executive order regarding artificial intelligence during an event at the White House on Oct. 30, 2023, in Washington, D.C. (Photo by Chip Somodevilla/Getty Images)

Many federal agencies were up against the clock this weekend to complete requirements outlined in the October artificial intelligence executive order, ahead of a Monday announcement from the White House that all 180-day actions in the order had been completed. 

The order’s requirements range from the tech talent surge to guidance for various types of AI. Announcements tied to this deadline include guidance on generative AI tools for hiring, a safety and security board focused on AI and new generative AI guidance for federal purchasers.

The White House credited federal agencies with completing the deadline’s requirements and also included announcements tied to executive order requirements due at later dates. Additionally, the executive branch reported that “agencies also progressed on other work tasked by the E.O. over longer timeframes.”

Here are five takeaways from the White House’s 180-day announcement:


1. The AI talent surge’s progress report

The AI and Tech Talent Task Force reported a 288% increase in AI job applications via a combination of agency hiring, the U.S. Digital Corps, the U.S. Digital Service and the Presidential Innovation Fellows program.

Additionally, the task force offered 10 recommendations throughout the federal government for “further increasing AI capacity.”

The task force recommends institutionalizing the U.S. Digital Corps and other technology recruitment programs, enhancing the user experience on USAJOBS by updating digital service capabilities, exploring a talent exchange with foreign partners that are also looking to invest in AI-related talent, and more.

The report calls on Congress to grant agencies the ability to use flexible hiring authorities for the AI talent surge, while also offering pay incentives and support for rotational practices.

Significantly, the task force reported that the Office of Personnel Management has “developed a legislative proposal” that aims to enhance compensation flexibilities. That proposal “has been transmitted to Congress.”

2. New actions from the Department of Energy

The DOE announced several AI-related actions at the deadline focused on both cybersecurity and environmental concerns, including a new website that exhibits agency-developed AI tools and models.

The agency’s Office of Critical and Emerging Technologies released a report addressing the potential AI has to “significantly enhance how we manage the [electric] grid” and how climate change’s effect on the environment “will require a substantial increase in the rate of modernization and decarbonization” of the grid. The report offers considerations for how large language models might assist compliance with federal permitting, how AI could enhance resilience and more.

DOE also announced a $13 million investment to build AI-powered tools that improve the siting and permitting of clean energy infrastructure as part of its new VoltAIc initiative. Significantly, the agency announced that it is establishing a working group to make recommendations by June on meeting the energy demands of AI and data center infrastructure.

Additionally, the agency’s Cybersecurity, Energy Security and Emergency Response (CESER) unit worked with energy sector partners — with support from the Lawrence Livermore National Laboratory — to create an interim assessment identifying opportunities and potential risks regarding AI use within the sector.

3. Department of Labor guidance on AI and tech-based hiring systems

The DOL was six months early in meeting its requirement to publish guidance for contractors on nondiscrimination in talent acquisition involving AI and other technology-based hiring programs.

The report points to the use of AI systems as having the potential to perpetuate discrimination and unlawful bias. It requires federal contractors to cooperate with the Office of Federal Contract Compliance Programs (OFCCP) by providing requested information on their AI systems in order to prevent discrimination.

Contractors are not insulated from the risk of violating equal employment opportunity obligations if they use automated systems, the agency states in the report. OFCCP also noted obligations related to AI with regard to investigations into compliance evaluations and complaints to identify whether a contractor is abiding by nondiscrimination requirements.

While OFCCP reported that it does not endorse products or issue compliance certifications, it does encourage federal contractors to be transparent about AI use in the hiring process and with employment decisions, while safeguarding the private information of all involved parties.

4. USDA’s framework for state, local, tribal and territorial (SLTT) public administrative use of AI

The U.S. Department of Agriculture issued a framework for SLTTs to use AI to administer the agency’s Food and Nutrition Service (FNS) programs, which include school breakfast, summer food service, emergency food assistance and more.

The guidance states that FNS will work with SLTTs on risk management, and lays out four categories of risk for AI usage in regard to the service, ranging from low to high.

USDA recommends a “human in the loop” in AI implementation for risk mitigation. The framework recommends that staffers who provide human oversight for AI-enabled functions “should receive sufficient training” to assess AI models or functions for accurate outputs.

The agency also outlines how other uses of the technology may be “rights-impacting” or “safety-impacting,” as designated by FNS.


5. A framework for nucleic acid synthesis screening

The Office of Science and Technology Policy, the National Science and Technology Council and the Fast Track Action Committee for Synthetic Nucleic Acid Procurement Screening released a framework to encourage synthetic nucleic acid providers to implement screening mechanisms that prevent the misuse of AI for “engineering dangerous biological materials.”

This guidance builds on a Department of Health and Human Services strategy document released in October 2023.

OSTP said in a release that the National Institute of Standards and Technology “will further support implementation of this framework” through engagement with industry entities to “develop technical standards for screening.”
