Google unveils SynthID, a watermark for AI images designed to resist removal
Google DeepMind is launching SynthID, a tool for watermarking and identifying AI-generated images, with a watermark designed to survive editing. Here are the details.
Google DeepMind, the company's AI division, is launching a tool that both watermarks and identifies images created with artificial intelligence. The announcement came on Tuesday, August 29, when the DeepMind team revealed the product for the first time. The watermark addresses the ongoing challenge of deepfakes, where it can be very difficult to tell an artificially generated image apart from a real one. By enabling people to identify fake images, the detection tool can help them avoid falling into the traps set by cybercriminals. The new tool is called SynthID.
Announcing the tool, the DeepMind team said in a blog post, “Today, in partnership with Google Cloud, we're launching a beta version of SynthID, a tool for watermarking and identifying AI-generated images. This technology embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification”.
Since it is still in the beta testing stage, it is being released to a limited number of Google Cloud's Vertex AI customers using Imagen, the company's native text-to-image AI model.
Google to fight deepfakes using SynthID
Traditional watermarks aren't sufficient for identifying AI-generated images because they're often applied like a stamp on an image and can easily be edited out.
Unlike a stamp, this new watermark is embedded directly into the image's pixels rather than layered on top. It is designed to remain detectable even when the image is cropped, edited, or run through filters. The watermark does not visibly alter the image, but it shows up in SynthID's detection tool.
The best way to understand the watermark is to think of it as lamination on a physical photo: it does not hinder viewing of the photo, yet it cannot be cropped or edited out. SynthID essentially creates a digital version of that lamination.
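To make the idea of an invisible, pixel-level watermark concrete, here is a minimal sketch using the classic least-significant-bit (LSB) technique. This is emphatically not SynthID's actual method, which Google has not published; the function names, the 8-bit signature, and the toy pixel list are all illustrative assumptions. It only shows the general principle: a signal can be hidden in pixel values without visibly changing the image, yet still be recovered by a detector.

```python
# Toy LSB watermark sketch (NOT SynthID's real algorithm, which is unpublished).
# Pixels are modeled as a flat list of 8-bit grayscale values (0-255).

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature


def embed(pixels: list[int], mark: list[int]) -> list[int]:
    """Hide each watermark bit in the lowest bit of successive pixels."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        # Overwriting the lowest bit changes brightness by at most 1/255,
        # which is imperceptible to the human eye.
        out[i] = (out[i] & ~1) | bit
    return out


def detect(pixels: list[int], mark: list[int]) -> bool:
    """Check whether the watermark's bit pattern is present."""
    return [p & 1 for p in pixels[: len(mark)]] == mark


image = [200, 201, 198, 197, 203, 205, 199, 202, 150, 149]
marked = embed(image, WATERMARK)

print(detect(marked, WATERMARK))   # watermark found in the marked image
print(detect(image, WATERMARK))    # not found in the original
print(max(abs(a - b) for a, b in zip(image, marked)))  # max pixel change: 1
```

Note the key limitation of this toy version: a simple LSB mark is destroyed by cropping, resizing, or filtering, which is precisely why a production system like SynthID needs a far more sophisticated (and, in this case, undisclosed) embedding scheme to stay detectable after such edits.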
“While generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information — both intentionally or unintentionally. Being able to identify AI-generated content is critical to empowering people with knowledge of when they're interacting with generated media, and for helping prevent the spread of misinformation,” the post added.