
AI watermarking could be exploited by bad actors to spread misinformation. But experts say the tech still must be adopted quickly

As Washington putters on AI watermarking legislation, TikTok and Adobe are leading the way with transparency standards.

By and large, government and private-sector technologists agree that the use of digital watermarking to verify AI-generated content should be a key component for tackling deepfakes and other forms of malicious misinformation and disinformation. 

But there is no clear consensus regarding what a digital watermark is, or what common standards and policies around it should be, leading many AI experts and policymakers to fear that the technology could fall short of its potential and even empower bad actors.

Industry groups and a handful of tech giants — most notably TikTok and Adobe — have been singled out by experts as leading the charge on AI watermarking and embracing a transparent approach to the technology. They’ll need all the help they can get during what promises to be an especially chaotic year in digital spaces. 

With over 2 billion people expected to vote in elections around the world in 2024, AI creators, scholars and politicians said in interviews with FedScoop that standards on the watermarking of AI-generated content must be tackled in the coming months — or else the proliferation of sophisticated, viral deepfakes and fake audio or video of politicians will continue unabated.


“This idea of authenticity, of having authentic trustworthy content, is at the heart of AI watermarking,” said Ramayya Krishnan, dean of Carnegie Mellon University’s information systems and public policy school and a member of President Joe Biden’s National Artificial Intelligence Advisory Committee. 

“Having a technological way of labeling how content was made and having an AI detection tool to go with that would help, and there’s a lot of interest in that, but it’s not a silver bullet,” he added. “There’s all sorts of enforcement issues.” 

Digital watermarking “a triage tool for harm reduction”

There are three main types of watermarks created by major tech companies and AI creators to reduce misinformation and build trust with users: visible watermarks added to images, videos or text by companies like Google, OpenAI or Getty to verify the authenticity of content; invisible watermarks that can only be detected through special algorithms or software; and cryptographic metadata, which details when a piece of content was created and how it has been edited or modified before someone consumes it.
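To make the invisible variety concrete, here is a minimal Python sketch, assuming a simple least-significant-bit scheme: a known bit pattern is hidden in pixel values where only software that knows where to look can find it. It is a toy for intuition, not the actual method used by Google, OpenAI, Getty or any other company named in this story.

```python
import numpy as np

# Hypothetical 8-bit identifier standing in for a provider's watermark payload.
WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(image: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Overwrite the least significant bit of the first pixels with the mark."""
    out = image.copy()
    flat = out.reshape(-1)
    flat[: mark.size] = (flat[: mark.size] & 0xFE) | mark
    return out

def detect(image: np.ndarray, mark: np.ndarray) -> bool:
    """Return True if the first pixels' least significant bits match the mark."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: mark.size] & 1, mark))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for a photo
marked = embed(img, WATERMARK)
print(detect(marked, WATERMARK))  # True: the invisible mark is readable
```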

Using watermarking to try to reduce AI-generated misinformation and disinformation can help when the average consumer is viewing a piece of content, but it can also backfire: bad actors can manipulate a watermark and create even more misinformation, AI experts focused on watermarking told FedScoop.



“Watermarking technology has to be taken with a grain of salt because it is not so hard for someone with knowledge of watermarks and AI to break it and remove the watermark or manufacture one,” said Siwei Lyu, a University at Buffalo computer science professor who studies deepfakes and digital forgeries.

Lyu added that digital watermarking is “not foolproof” and invisible watermarks are often more effective, though not without their flaws. 

“I think watermarks mostly play on people’s unawareness of their existence. So if they know they can, they will find a way to break it.”
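Continuing the toy sketch above (it reuses that snippet’s `marked`, `detect` and `WATERMARK`), here is a hedged illustration of the fragility Lyu describes: a ±1 pixel jitter, invisible to any viewer, is enough to erase a naive least-significant-bit mark.

```python
import numpy as np

# An imperceptible +/-1 jitter, as from re-encoding or a slight edit,
# flips least significant bits and destroys the naive watermark above.
jitter = np.random.default_rng(1).integers(-1, 2, size=marked.shape, dtype=np.int16)
attacked = np.clip(marked.astype(np.int16) + jitter, 0, 255).astype(np.uint8)
print(detect(attacked, WATERMARK))  # almost certainly False: the mark is gone
```

Robust invisible watermarks spread the signal across many pixels in the frequency domain precisely to survive this kind of edit, but that arms race is the point Lyu is making.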

A senior Senate independent staffer deeply involved in drafting legislation related to AI watermarking said the concern of bad actors using well-intentioned watermarks for manipulative purposes is “1,000% valid. It’s like Olympic athletes — now that I know that you’re looking for this drug, I’ll just take another drug. It’s like we need to try the best we can to keep pace with the bad actors.”


When it comes to AI watermarking, the Senate is currently in an “education and defining the problem” phase, the senior staffer said. Once the main problems with the technology are better defined, the staffer said they’ll begin to explore whether there is a legislative fix or an appropriations fix.

Senate Majority Leader Chuck Schumer said in September that tackling fake or deceptive AI-generated content, which can fuel widespread misinformation and disinformation, was an exceedingly time-sensitive problem ahead of the 2024 election.

“There’s the issue of actually having deepfakes, where people really believe … that a candidate is saying something when they’re totally a creation of AI,” the New York Democrat said after his first closed-door AI Insight Forum.

“We talked about watermarking … that one has a quicker timetable maybe than some of the others, and it’s very important to do,” he added.

Another AI expert said that watermarking can be manipulated by bad actors in a small but highly consequential number of scenarios. Sam Gregory, executive director at the nonprofit WITNESS, which helps people use technology to promote human rights, said it’s best to think of AI watermarking as “almost a triage tool for harm reduction.” 


“You’re making available a greater range of signals on where content has come from that works for 95% of people’s communication,” he said. “But then you’ve got like 5% or 10% of situations where someone doesn’t use the watermark to conceal their identity or strip out information or perhaps they’re a bad actor.

“It’s not a 100% solution,” Gregory added.

TikTok, Adobe leading the way on watermarking

Among major social media platforms, Chinese-owned TikTok has taken an early lead on watermarking, requiring users to be highly transparent when AI tools and effects are used in their content, three AI scholars told FedScoop. The company has also fostered a culture in which users are comfortable sharing the role AI plays in altering their videos or photos in fun ways.

“TikTok shows you the audio track that was used, it shows you the stitch that was made, it shows you the AI effects used,” Gregory said. And as “the most commonly used platform by young people,” TikTok makes it “easy and comfortable to be transparent about how a piece of content was made with presence of AI in the mix.” 


TikTok recently announced new labels for disclosing AI-generated content. In a statement, the social media platform said that its policy “requires people to label AI-generated content that contains realistic images, audio or video, in order to help viewers contextualize the video and prevent the potential spread of misleading content. Creators can now do this through the new label (or other types of disclosures, like a sticker or caption).”


Major AI developers, including Adobe and Microsoft, also support some forms of labeling AI in their products. Both tech giants are members of the Coalition for Content Provenance and Authenticity (C2PA), which addresses the prevalence of misinformation online through the development of technical standards for certifying the source and history of online content.

Jeffrey Young, a senior solutions consultant manager at Adobe, said the company has “had a big drive for the Content Authenticity Initiative” due in large part to its awareness that bad actors use Photoshop to manipulate images “for nefarious reasons.”

“We realized that we can’t keep getting out in front to determine if something is false, so we decided to flip it and say, ‘Let’s have everybody expect to say this is true,’” Young said. “So we’re working with camera manufacturers, working with websites on their end product, that they’re able to rollover that image and say, this was generated by [the Department of Homeland Security], they’ve signed it, and this is confirmed, and it hasn’t been manipulated since this publication.”
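The workflow Young describes — sign at publication, then let anyone verify the content is untouched — is the core idea behind C2PA-style provenance metadata. The sketch below illustrates it using the Python `cryptography` package’s Ed25519 signatures; the manifest fields are hypothetical stand-ins and far simpler than a real C2PA manifest.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

publisher_key = Ed25519PrivateKey.generate()  # stand-in for a publisher's signing key

def publish(content: bytes, history: list[str]) -> tuple[bytes, bytes]:
    """Bundle a content hash with its edit history and sign the bundle."""
    manifest = json.dumps({
        "sha256": hashlib.sha256(content).hexdigest(),
        "history": history,  # e.g. ["captured", "cropped", "AI effect applied"]
    }, sort_keys=True).encode()
    return manifest, publisher_key.sign(manifest)

def verify(content: bytes, manifest: bytes, signature: bytes) -> bool:
    """Check the signature, then check the content still matches the signed hash."""
    try:
        publisher_key.public_key().verify(signature, manifest)
    except InvalidSignature:
        return False
    return json.loads(manifest)["sha256"] == hashlib.sha256(content).hexdigest()

image = b"...image bytes..."
manifest, sig = publish(image, ["captured", "AI effect applied"])
print(verify(image, manifest, sig))         # True: intact since publication
print(verify(image + b"!", manifest, sig))  # False: altered after signing
```

A failed check means the content changed after it was signed — the “hasn’t been manipulated since this publication” guarantee Young describes.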


Most major tech companies favor labeling AI content through watermarking and are working to create transparent watermarks, but the industry recognizes that watermarking alone is a simplistic solution; other measures are needed to comprehensively reduce AI-generated misinformation online.

Paul Lekas, senior vice president for global public policy & government affairs at the Software & Information Industry Association, said the trade group — which represents Amazon, Apple and Google, among others — is “very supportive” of watermarking labeling and provenance authentication but acknowledges that those measures do “not solve all the issues that are out there.” 

“Ideally we’d have a system where everything would be clear and transparent, but we don’t have that yet,” Lekas said. “I think another thing that we are very supportive of is nontechnical, which is literacy — media literacy, digital literacy for people — because we can’t just rely on technology alone to solve all of our problems.”

In Washington, some momentum on AI watermarking

The White House, certain federal agencies and multiple prominent members of Congress have made watermarking and the reduction of AI-generated misinformation a high priority, putting forward a patchwork of proposed solutions to regulate AI and create policy safeguards around deepfakes and other manipulative content.


Through Biden’s October AI executive order, the Commerce Department’s National Institute of Standards and Technology has been charged with creating authentication and watermarking standards for generative AI systems — following up on discussions in the Senate about similar kinds of verification technologies.

Alondra Nelson, the former White House Office of Science and Technology Policy chief, said in an interview with FedScoop that there is enough familiarity with watermarking that it is no longer “a completely foreign kind of technological intervention or risk mitigation tactic.”

“I think that we have enough early days experience with watermarking that people have to use,” she said. “You’ve got to use it in different kinds of sectors for different kinds of concerns, like child sexual abuse and these sorts of things.” 

Congress has also introduced several pieces of legislation related to AI misinformation and watermarking, such as a bill from Rep. Yvette Clarke, D-N.Y., to regulate deepfakes by requiring content creators to digitally watermark certain content and make it a crime to fail to identify malicious deepfakes that are related to criminal conduct, incite violence or interfere with elections.

In September, Sens. Amy Klobuchar, D-Minn., Josh Hawley, R-Mo., Chris Coons, D-Del., and Susan Collins, R-Maine, proposed new bipartisan legislation focused on banning the use of deceptive AI-generated content in elections. In October, Sens. Brian Schatz, D-Hawaii, and John Kennedy, R-La., introduced the bipartisan AI Labeling Act of 2023, which would require clear labeling and disclosure on AI-generated content and chatbots so consumers are aware when they’re interacting with any product powered by AI.


Meanwhile, the Federal Election Commission has been asked to establish a new rule requiring political campaigns and groups to disclose when their ads include AI-generated content.

With no AI legislation in Congress having become law or garnered significant bipartisan consensus, the White House has pushed tech giants to sign voluntary commitments governing AI, which require steps such as watermarking AI-generated content. Adobe, IBM, Nvidia and others are on board. The private commitments backed by the Biden administration are seen as a stopgap.

From Nelson’s point of view, NIST’s work on the creation of AI watermarking standards will “be taking it to another level.” 

“One hopes that CIOs and CTOs will take it up,” she said. “That remains to be seen.”
