OpenAI Just Made It Easier to Tell If an Image Was Created by DALL-E 3

OpenAI is finally making it easier to determine whether an image was created using DALL-E 3. The company shared the news this week, noting that it will soon begin adding two types of watermarks to all images created by DALL-E 3, following standards established by the C2PA (Coalition for Content Provenance and Authenticity). The change already applies to images created via the website and API, and mobile users will begin receiving watermarks starting February 12th.

The first of the two watermarks exists strictly in the image metadata. You can verify an image's creation credentials using the Content Credentials Verify website, as well as other similar tools. The second watermark is a visible CR symbol in the top-left corner of the image.
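For the curious, the metadata watermark is a C2PA manifest, which in JPEG files is carried in APP11 marker segments as JUMBF data. As a rough illustration (not a substitute for a real verifier such as Content Credentials Verify), the sketch below scans a JPEG's marker segments for an APP11 segment. A hit only means JUMBF data is present; actually validating the manifest and its signature requires a C2PA-aware tool.

```python
def has_app11_segment(data: bytes) -> bool:
    """Scan JPEG marker segments for APP11 (0xFFEB), the segment type
    where C2PA manifests are embedded as JUMBF boxes.

    This only detects the presence of APP11 data; it does not parse or
    verify a C2PA manifest.
    """
    if data[:2] != b"\xff\xd8":  # must start with SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:      # lost sync with marker structure
            break
        marker = data[i + 1]
        if marker in (0xD8, 0xD9):   # SOI/EOI carry no length field
            i += 2
            continue
        if marker == 0xDA:           # SOS: entropy-coded data follows
            break
        if marker == 0xEB:           # APP11 found
            return True
        # segment length includes the two length bytes themselves
        length = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + length
    return False
```

For example, `has_app11_segment(open("dalle3.jpg", "rb").read())` would report whether the file contains any APP11 payload at all; a screenshot of the same image would typically report `False`, which is exactly the limitation discussed below.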


This is a good change that moves DALL-E 3 in the right direction, making it easier to identify when something was generated using AI. Other artificial intelligence systems use similar metadata watermarks, and Google has implemented its own watermark to help identify images created using its image generation model, which has recently migrated to Google Bard.

As of this writing, only still images carry a watermark; video and text output remain unwatermarked. OpenAI states that the metadata watermark should not cause any latency issues or affect the quality of image generation, though in some cases it will slightly increase file size.

If this is the first time you’ve heard of it, the C2PA is a group made up of companies like Microsoft, Sony, and Adobe. These companies have been pushing for Content Credentials watermarks to help determine whether images were created using artificial intelligence systems. In fact, the Content Credentials symbol that OpenAI adds to DALL-E 3 images was created by Adobe.

While watermarks can help, they are not a foolproof way to prevent misinformation from spreading through AI-generated content. Metadata can still be stripped simply by taking a screenshot, and even visible watermarks can be cropped out of photos. However, OpenAI believes these techniques will help users realize that these “signals are key to improving the reliability of digital information” and that they will lead to less abuse of the systems it makes available.
