OpenAI Watermarks Images Created by DALL-E 3 To Combat Deepfakes and Misinformation

Updated on February 8, 2024

As artificial intelligence (AI) systems become more capable of creating ultra-realistic images, videos and text, companies like OpenAI aim to get ahead of the risk that these technologies enable the spread of deepfakes and misinformation at scale.

This week, OpenAI announced that images created through ChatGPT and the DALL-E 3 API will now include visible and invisible watermarks.

Why Add Watermarks to AI Media?

With image generators reaching new levels of quality, there are urgent concerns over how seamlessly falsified photos and videos could be used to impersonate real people or fabricate evidence online. Malicious actors could leverage advanced AI to undermine truth and spread misinformation across social media or news outlets.

To get ahead of these risks, OpenAI has adopted watermarking techniques for imagery created by DALL-E 3, its cutting-edge text-to-image generator launched last year. The goal is to increase transparency and accountability around synthetic AI media.

OpenAI Begins Rollout of Watermarks in DALL-E 3

Image: watermarked DALL-E 3 output

As of this week, all images created through ChatGPT and the DALL-E 3 API contain both a visible watermark and an invisible watermark embedded in the metadata.

The visible watermark consists of a stylized “CR” symbol in the top-left corner, allowing people to identify at a glance whether an image was human-made or created with assistance from AI systems.

The embedded invisible watermark aligns with pioneering standards established by the Coalition for Content Provenance and Authenticity (C2PA), an organization consisting of major technology and media entities. The metadata provides certified provenance details that confirm the image originated from DALL-E 3 along with the associated creation timestamp.

Combined, these two complementary watermark techniques promote transparency around synthetic AI content as it spreads across the internet.
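As a rough illustration of how such embedded provenance can be detected, the sketch below scans a JPEG byte stream for APP11 segments, the marker type the C2PA specification uses to carry its JUMBF manifest store, and checks for the `c2pa` label. This is a heuristic byte-level check written for illustration, not a real C2PA validator; actual verification of provenance and signatures should use official C2PA tooling.

```python
def find_app11_segments(jpeg_bytes: bytes) -> list:
    """Walk the JPEG marker structure and collect APP11 (0xFFEB) payloads,
    the segment type C2PA uses to embed its JUMBF manifest store."""
    segments = []
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or SOS: metadata segments end here
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xEB:  # APP11 segment
            segments.append(jpeg_bytes[i + 4:i + 2 + length])
        i += 2 + length
    return segments


def looks_like_c2pa(jpeg_bytes: bytes) -> bool:
    """Heuristic: does any APP11 payload carry the 'c2pa' JUMBF label?"""
    return any(b"c2pa" in seg for seg in find_app11_segments(jpeg_bytes))
```

A check like this can only show that a manifest appears to be present; whether the manifest is intact and its certificate chain is trustworthy is a separate cryptographic question.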

No Impact on Image Quality or Latency

OpenAI states that adding the C2PA watermark metadata has a “negligible effect on latency and will not affect the quality of the image generation.” While image file sizes may increase slightly for images created through its API and website, the visual quality remains unaffected.

The C2PA standards, now backed by major technology and media companies, promote transparency around AI-generated media. Adobe, Microsoft, the BBC, Intel and others support marking AI content with provenance details through metadata watermarks.

Limitations of AI Media Watermarking

OpenAI admits that watermark solutions have certain limitations. Social platforms often automatically strip out image metadata when users upload media, removing AI accountability details. In addition, taking a screenshot of an AI-generated image captures the visible watermark but drops the embedded metadata entirely.
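To see why metadata-based provenance is fragile, the sketch below simulates what many upload pipelines effectively do: it re-emits a JPEG's marker segments while discarding the APPn metadata segments, including the APP11 segments where a C2PA manifest would live. This is a simplified illustration that only rewrites the metadata region before the scan data; it is not a full JPEG transcoder, and real platforms typically strip metadata as a side effect of re-encoding.

```python
def drop_app_metadata(jpeg_bytes: bytes) -> bytes:
    """Copy a JPEG's marker segments but omit APP1..APP15 metadata
    segments (APP11 is where C2PA manifests live). Simplified sketch:
    everything from the start-of-scan marker onward is copied verbatim."""
    out = bytearray(jpeg_bytes[:2])  # keep the SOI marker
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or SOS: stop segment parsing
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if not (0xE1 <= marker <= 0xEF):  # keep non-APP segments
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    out += jpeg_bytes[i:]  # copy the remainder (scan data, EOI) unchanged
    return bytes(out)
```

After such a round trip the pixels are untouched, but any embedded provenance record is gone, which is exactly the accountability gap OpenAI acknowledges.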


While watermarks are a constructive step toward increasing trust and transparency, additional protections will be essential to prevent harm from advanced generative AI across sectors in the years ahead. Still, by proactively adopting emerging content provenance standards for its systems today, OpenAI is leading the way toward a more authentic future for AI-enabled media.