Commerce adds five members to AI Safety Institute leadership

The new AI Safety Institute executive leadership team members include researchers and current administration officials.
Commerce Secretary Gina Raimondo speaks during the daily press briefing at the White House on Sept. 6, 2022, in Washington, D.C. (Photo by Kevin Dietsch/Getty Images)

The Department of Commerce has added five people to the AI Safety Institute’s leadership team, including current administration officials, a former OpenAI manager, and academics from Stanford and the University of Southern California.

In a statement announcing the hires Tuesday, Commerce Secretary Gina Raimondo called the new leaders “the best in their fields.” They join the institute’s director, Elizabeth Kelly, and chief technology officer, Elham Tabassi, who were named in February. The new leaders are:

  • Paul Christiano, founder of the nonprofit Alignment Research Center who formerly ran OpenAI’s language model alignment team, will be head of AI safety;
  • Mara Quintero Campbell, who was most recently the deputy chief operating officer of Commerce’s Economic Development Administration, will be the acting chief operating officer and chief of staff;
  • Adam Russell, director of the AI division of USC’s Information Sciences Institute, will be chief vision officer;
  • Rob Reich, a professor of political science and associate director of the Institute for Human-Centered AI at Stanford, will be a senior advisor; and
  • Mark Latonero, who was most recently deputy director of the National AI Initiative Office in the White House Office of Science and Technology Policy, will be head of international engagement.

The AI Safety Institute, which is housed within the National Institute of Standards and Technology, is tasked with advancing the safety of the technology through research, evaluation, and the development of guidelines for those assessments. That work includes actions outlined for NIST in President Joe Biden’s executive order on AI, such as developing guidance on red-teaming and watermarking synthetic content.

In February, the AI Safety Institute launched a consortium that will contribute to the agency’s work carrying out the executive order’s actions. The consortium is made up of more than 200 stakeholders, including academic institutions, unions, nonprofits, and other organizations. Earlier this month, the department also announced a partnership with the U.K. under which the two countries’ AI safety bodies will work together.

“I am very pleased to welcome these talented experts to the U.S. AI Safety Institute leadership team to help establish the measurement science that will support the development of AI that is safe and trustworthy,” said Laurie Locascio, NIST’s director and undersecretary of commerce for standards and technology. “They each bring unique experiences that will help the institute build a solid foundation for AI safety going into the future.”