White House cyber adviser raises AI watermarking in private meeting with tech execs
The White House’s chief cyber adviser, Anne Neuberger, met privately with top tech industry executives from companies including OpenAI and Microsoft at the end of April to discuss new cybersecurity risks created by artificial intelligence technologies and tools, FedScoop has learned.
The White House National Security Council (NSC) official was invited by industry leaders to meet on the sidelines of the RSA cybersecurity conference in San Francisco, according to a senior administration official familiar with the conversation.
During the talks, Neuberger urged industry leaders to consider watermarking online content they generate in an attempt to tackle AI-generated disinformation, according to the official. She also called on the companies to ensure greater transparency over the use of AI training data and models, and to increase the sharing of best practices.
“[W]e talked to them and said: would you be in a position to consider watermarking content you generate so we know what you generated, what the model generated versus what’s real,” the official said.
The use of watermarking to identify AI-generated content has drawn a degree of consensus among technologists within government and across the private sector. Companies such as Getty Images have long added a visible watermark to all digital images in their catalog to verify the authenticity of content.
Earlier this week, the White House’s former top AI official, Lynne Parker, also proposed watermarking online content to trace where a photo, video or text originally came from and determine whether that content is real or manipulated.
The meeting on the sidelines of RSA came shortly before a May 4 discussion held between Biden administration officials including Vice President Kamala Harris and the leaders of tech companies including OpenAI, Anthropic, Microsoft and Alphabet.
According to a transcript provided by the White House, the Harris meeting focused on three key areas: the need for companies to be more transparent with policymakers, the public, and others about their AI systems; the importance of being able to evaluate, verify, and validate the safety, security, and efficacy of AI systems; and the need to ensure AI systems are secure from malicious actors and attacks.
Speaking with FedScoop, the senior administration official said they pushed tech company executives on the sidelines of RSA to be more transparent about their training data as well as their trust and safety standards.
“Would you be more transparent about the data your systems are learning on? Because if it’s garbage in, it’s garbage out. If it’s learning on data that reflects the web today, if it’s learning on 4chan then three guesses as to what the model is going to produce. Will you share more information about how you’re implementing trust and safety?”
The Biden administration official also encouraged tech companies that are leading on AI to be more collaborative with each other when it comes to cybersecurity standards and protocols.
“It would actually be very cool if companies in the AI sector didn’t compete on trust and safety but then shared what they are learning amongst each other so that everybody’s learning from each other and making trust and safety as rigorous as possible,” the official added.
The official said that the White House also frequently coordinates for AI and tech industry leaders “to meet with different government experts so they can learn from each other,” including sharing best practices regarding cybersecurity as well as AI trust and safety issues.
Microsoft declined to comment on the meeting, while OpenAI did not respond to requests for comment.