
Apple agrees to White House AI commitments, as agencies meet more executive order deadlines

All 270-day mark actions from the AI executive order were also completed, per the White House, though some details were not immediately public.
The White House in Washington, DC, on July 2, 2023. (Photo by Daniel Slim/AFP via Getty Images)

The White House on Friday marked nine months since the signing of President Biden’s executive order on artificial intelligence with a new voluntary safety commitment from Apple and the announcement of several new completed actions on the technology across the government.

Apple’s agreement to safety, testing, and transparency measures outlined by the Biden administration brings the total number of AI companies that have signed on to the commitments to 16. The commitments were initially announced last year; previous signatories include Meta, OpenAI, IBM, and Adobe.

Meanwhile, federal agencies have completed a number of actions required within 270 days of the executive order’s issuance. Those include the AI Safety Institute’s first technical guidelines, issued for public comment; the development of initial guidance for agencies on AI training data; and the completion of a national security memo on AI. While the White House said all 270-day actions were completed, not all were made public. More information, however, is expected.

A White House spokesperson confirmed to FedScoop that the national security memo was sent to the president and that non-classified portions of the document would be made available. An exact release date for that information wasn’t given. 


The memo is essentially a national security version of an Office of Management and Budget memo finalized in March for agency AI governance and, per the order, is expected to address “AI used as a component of a national security system or for military and intelligence purposes.” 

In anticipation of the document, 15 civil society groups and individuals wrote to the administration urging it to establish “meaningful standards” for the use of AI in national security systems, arguing that the memo “risks endangering U.S. national security” if certain guardrails aren’t established.

Another action for which more details are expected is a report from the Department of Commerce’s National Telecommunications and Information Administration on the risks and benefits of dual-use foundation models — large, complex models trained on huge datasets and adaptable for an array of uses — that have widely available numerical parameters called model weights. That type of model is sometimes referred to as an open foundation model.

Although the report wasn’t available Friday, an NTIA spokesperson told FedScoop in an email that the report “has been delivered to the White House and we expect to publish our findings next week.”

The Chief Data Officers Council’s initial guidance for federal agencies on balancing data transparency with potential national security concerns when training AI systems was similarly marked as completed but didn’t appear to be publicly available. The council didn’t immediately respond to a request for a copy of that guidance or information about what it contained.


The latest updates come as the administration races to address the booming technology using mechanisms throughout the executive branch. Previous actions under the order included using the Defense Production Act to require companies operating large models to provide the government with their safety measures, setting up a pilot National AI Research Resource to provide access to tools needed for research, and launching an AI Talent Surge initiative for the federal government.

According to the Friday announcement, the administration has now hired more than 200 people through that recruitment effort, up from the over 150 it reported in April.

The Friday announcement also highlighted the work of Vice President Kamala Harris — the presumptive presidential nominee for the Democratic Party — on the administration’s AI efforts, including her actions announced after the executive order and her “major policy speech” ahead of the Global Summit on AI Safety in London last year. Harris has been the public face of several of the administration’s AI actions and was a leader in securing the voluntary commitments from AI companies.

Notably, the new guidelines from the AI Safety Institute, which operates under Commerce’s National Institute of Standards and Technology, focus on mitigating misuse of dual-use foundation models and ways to identify, measure and reduce those risks. 

The objectives in the guidance include anticipating potential misuse, ensuring that the risk of misuse is managed before a model is deployed, managing the risk of model theft, collecting information about misuse after deployment, and being transparent about those risks. Comments on the document are due Sept. 9.


President Biden was briefed by senior staff on AI on Friday, including Office of Science and Technology Policy Director Arati Prabhakar and National Security Advisor Jake Sullivan, per a White House pool report. That meeting included a discussion of “national security issues related to AI and AI research and development to achieve a better future for all,” in addition to updates on the implementation of the order, according to a statement from the White House.
