Canadian researchers create tool to remove anti-deepfake watermarks from AI content

OpenAI CEO Sam Altman participates in a panel discussion during the annual meeting of the World Economic Forum in Davos, Switzerland, Thursday, Jan. 18, 2024. OpenAI was one of the major tech firms that promised to pursue watermarking technology. (AP Photo/Markus Schreiber)

OTTAWA - University of Waterloo researchers have built a tool that can quickly remove watermarks identifying content as artificially generated — and they say it proves that global efforts to combat deepfakes are most likely on the wrong track.

Academia and industry have focused on watermarking as the best way to fight deepfakes and "basically abandoned all other approaches," said Andre Kassis, a PhD candidate in computer science who led the research.

At a White House event in 2023, the leading AI companies — including OpenAI, Meta, Google and Amazon — pledged to implement mechanisms such as watermarking to clearly identify AI-generated content.

AI companies’ systems embed a watermark, which is a hidden signature or pattern that isn’t visible to a person but can be identified by another system, Kassis explained.
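To illustrate the concept in the simplest possible terms, the sketch below hides a short bit pattern in the least significant bits of pixel brightness values, where flipping a low bit changes a pixel by at most one step out of 255. This least-significant-bit scheme, the signature, and the function names are purely illustrative assumptions; the watermarking systems actually deployed by AI companies are far more sophisticated and are not described in the article.

```python
# Toy illustration of an invisible watermark: a hidden bit pattern a
# viewer cannot see but a detector can check. This is NOT the scheme
# used by OpenAI, Meta, Google or Amazon -- only a minimal sketch.

SIGNATURE = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical hidden bit pattern

def embed_watermark(pixels):
    """Hide SIGNATURE in the least significant bits of the first pixels."""
    marked = list(pixels)
    for i, bit in enumerate(SIGNATURE):
        # Overwriting the low bit shifts brightness by at most 1 of 255,
        # which is imperceptible to the human eye.
        marked[i] = (marked[i] & ~1) | bit
    return marked

def detect_watermark(pixels):
    """Return True if the hidden signature is present."""
    return [p & 1 for p in pixels[:len(SIGNATURE)]] == SIGNATURE

image = [200, 131, 64, 18, 255, 90, 77, 43, 12]  # toy 8-bit pixel values
marked = embed_watermark(image)
print(detect_watermark(image))   # False -- unmarked image
print(detect_watermark(marked))  # True  -- watermark detected
```

A removal tool like UnMarker, by contrast, works without knowing the pattern or the embedding scheme at all, which is what makes the vulnerability the researchers describe systemic rather than specific to one design.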

He said the research shows the use of watermarks is most likely not a viable shield against the hazards posed by AI content.

"It tells us that the danger of deepfakes is something that we don't even have the tools to start tackling at this point," he said.

The tool developed at the University of Waterloo, called UnMarker, follows other academic research on removing watermarks. That includes work at the University of Maryland, a collaboration between researchers at the University of California and Carnegie Mellon, and work at ETH Zürich.

Kassis said his research goes further than earlier efforts and is the "first to expose a systemic vulnerability that undermines the very premise of watermarking as a defence against deepfakes."

In a follow-up email statement, he said that "what sets UnMarker apart is that it requires no knowledge of the watermarking algorithm, no access to internal parameters, and no interaction with the detector at all."

In testing, the tool removed watermarks more than 50 per cent of the time across different AI models, a university press release said.

AI systems can be misused to create deepfakes, spread misinformation and perpetrate scams — creating a need for a reliable way to identify content as AI-generated, Kassis said.

After AI-generated content became too sophisticated for detection tools to identify reliably, attention turned to watermarking.

The idea is that if we cannot "post facto understand or detect what's real and what's not," it's possible to inject "some kind of hidden signature or some kind of hidden pattern" earlier on, when the content is created, Kassis said.

The European Union’s AI Act requires providers of systems that put out large quantities of synthetic content to implement techniques and methods to make AI-generated or manipulated content identifiable, such as watermarks.

In Canada, a voluntary code of conduct launched by the federal government in 2023 calls on those behind AI systems to develop and implement "a reliable and freely available method to detect content generated by the system, with a near-term focus on audio-visual content (e.g., watermarking)."

Kassis said UnMarker can remove watermarks without knowing anything about the system that generated the content, or anything about the watermark itself.

"We can just apply this tool and within two minutes max, it will output an image that is visually identical to the watermark image" which can then be distributed, he said.

"It kind of is ironic that there's billions that are being poured into this technology and then, just with two buttons that you press, you can just get an image that is watermark-free."

Kassis said that while the major AI players are racing to implement watermarking technology, more effort should be put into finding alternative solutions.

Watermarks have "been declared as the de facto standard for future defence against these systems," he said.

"I guess it's a call for everyone to take a step back and then try to think about this problem again."

This report by The Canadian Press was first published July 23, 2025.

The Canadian Press. All rights reserved.
