ChatGPT and Dall-E AI Should Watermark Their Results
Getty Images wants the AI to stop copying them
Shortly after rumors leaked of former President Donald Trump’s impending indictment, images purporting to show his arrest surfaced online. These pictures looked like news photos, but they were fake. They were created by a generative artificial intelligence system.
Generative AI, in the form of image generators such as DALL-E, Midjourney and Stable Diffusion, and text generators like Bard, ChatGPT, Chinchilla and LLaMA, has exploded into public view. By combining clever machine-learning algorithms with billions of pieces of human-generated content, these systems can do anything from creating an eerily realistic image from a caption, to synthesizing a speech in President Joe Biden’s voice, to replacing one person’s likeness with another’s in a video, to writing a coherent 800-word op-ed from a title prompt.
Even in these early days, generative AI is capable of creating highly realistic content. My colleague Sophie Nightingale and I found that the average person cannot reliably distinguish an image of a real person from an AI-generated one. Although audio and video have not yet fully passed through the uncanny valley (images or models of people that are unsettling because they are close to, but not quite, realistic), they likely will soon. When this happens, and it is all but guaranteed to happen, it will become increasingly easy to distort reality.
In this new world, it will be trivial to create a video of a CEO saying her company’s profits are down 20%, which could lead to billions in market-share loss, or to create a video of a world leader threatening military action, which could trigger a geopolitical crisis, or to insert someone’s likeness into a sexually explicit video.
The technology to create fake videos of real people is becoming increasingly available.
Advances in generative AI will soon result in fake but visually compelling content proliferating online, leading to an even more chaotic information ecosystem. A secondary consequence is that critics will be able to easily dismiss anything from police brutality and human rights abuses to a world leader burning top-secret documents as fake.
As society stares down what is almost certainly just the beginning of these advances in generative AI, there are reasonable and technologically feasible interventions that can be used to help mitigate these abuses. As a computer scientist who specializes in image forensics, I believe that a key method is watermarking.
There is a long history of marking documents and other items to prove their authenticity, indicate ownership and counter counterfeiting. Today, Getty Images, a massive image archive, adds a visible watermark to all digital images in its catalog. This allows customers to freely browse images while protecting Getty’s assets.
Imperceptible digital watermarks are also used for digital rights management. A watermark can be added to a digital image, for example, by adjusting every 10th image pixel so that its color (typically a number in the range 0 to 255) is an even value. Because this pixel adjustment is so minor, the watermark is imperceptible. And because this periodic pattern is unlikely to occur naturally, and can easily be verified, it can be used to verify an image’s provenance.
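The even-pixel scheme described above can be sketched in a few lines. This is a minimal illustration of the idea, not a production watermark; the function names and the use of NumPy are my own assumptions.

```python
import numpy as np

def embed_watermark(image: np.ndarray, step: int = 10) -> np.ndarray:
    """Nudge every `step`-th pixel to an even value (a change of at most 1)."""
    marked = image.astype(np.int16)          # work in a wider type, then cast back
    flat = marked.reshape(-1)                # view into `marked`
    flat[::step] -= flat[::step] % 2         # odd values drop to the even value below
    return marked.astype(np.uint8)

def has_watermark(image: np.ndarray, step: int = 10) -> bool:
    """A periodic all-even pattern is unlikely to occur by chance."""
    flat = image.reshape(-1)
    return bool(np.all(flat[::step] % 2 == 0))
```

Each adjusted pixel changes by at most one intensity level, so the mark is invisible to the eye, yet checking the parity of every 10th pixel verifies provenance.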
Even medium-resolution images contain millions of pixels, which means that additional information can be embedded into the watermark, including a unique identifier encoding the generating software and a unique user ID. The same type of imperceptible watermark can be applied to audio and video.
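Building on that idea, a short identifier can be hidden by writing successive bits of the ID into the least-significant bit of every 10th pixel. Again a simplified sketch: the bit layout, bit width and function names are illustrative assumptions, not an actual deployed scheme.

```python
import numpy as np

def embed_id(image: np.ndarray, uid: int, nbits: int = 32, step: int = 10) -> np.ndarray:
    """Hide `nbits` of `uid` in the least-significant bits of every `step`-th pixel."""
    marked = image.copy()
    flat = marked.reshape(-1)                    # view into `marked`
    for i in range(nbits):
        bit = (uid >> i) & 1
        flat[i * step] = (flat[i * step] & 0xFE) | bit   # overwrite the parity bit
    return marked

def read_id(image: np.ndarray, nbits: int = 32, step: int = 10) -> int:
    """Recover the hidden identifier from the pixel parities."""
    flat = image.reshape(-1)
    return sum(int(flat[i * step] & 1) << i for i in range(nbits))
```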
The ideal watermark is one that is imperceptible and also resistant to simple manipulations such as cropping, resizing, color adjustment and digital format conversion. Although the pixel color watermark example is not resilient because the color values can be changed, many watermarking strategies have been proposed that are robust – if not impervious – to attempts to remove them.
Watermarks and free AI image generators
These watermarks can be baked into generative AI systems by watermarking all of the training data, after which the generated content will contain the same watermark. This baked-in watermark is attractive because it means that generative AI tools can be open-sourced, as the image generator Stable Diffusion is, without concern that a watermarking step could be removed from the image generator’s software. Stable Diffusion has a watermarking function, but because it is open source, anyone can simply remove that part of the code.
OpenAI is experimenting with a system to watermark ChatGPT’s creations. Characters in a paragraph cannot, of course, be adjusted like a pixel value, so text watermarking takes on a different form.
Text-based generative AI works by producing the most sensible next word in a sentence. For example, starting with the sentence fragment “an AI system can…,” ChatGPT will predict that the next word should be “learn,” “predict” or “understand.” Associated with each of these words is a probability corresponding to the likelihood of each word appearing next in the sentence. ChatGPT learned these probabilities from the large body of text it was trained on.
Generated text can be watermarked by secretly tagging a subset of words and then biasing the selection of a word to be a tagged synonym. For example, instead of “comprehend,” the tagged word “understand” can be used. By periodically biasing word selection in this way, a body of text is watermarked based on a particular distribution of tagged words. This approach won’t work for short tweets, but is generally effective with text of 800 or more words, depending on the specific watermark details.
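The biasing idea can be illustrated with a toy sampler: choose the next word from its learned probabilities, but boost the weight of secretly tagged synonyms. The word list, probabilities and boost factor here are invented for illustration; a real scheme derives the tagged list pseudorandomly from a secret key.

```python
import random

# Toy next-word distribution for "an AI system can ..." (probabilities invented).
NEXT_WORD_PROBS = {"learn": 0.5, "predict": 0.3, "understand": 0.2}

# Hypothetical secret list of tagged words.
MARKED_WORDS = {"understand"}

def sample_next_word(probs: dict, boost: float = 4.0, rng=None) -> str:
    """Sample the next word, biasing selection toward tagged words."""
    rng = rng or random.Random()
    words = list(probs)
    weights = [probs[w] * (boost if w in MARKED_WORDS else 1.0) for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def marked_fraction(text: str) -> float:
    """Detection: an unusually high fraction of tagged words flags AI text."""
    words = text.split()
    return sum(w in MARKED_WORDS for w in words) / max(len(words), 1)
```

Over hundreds of words, the tagged words appear noticeably more often than their natural probabilities allow, which is the statistical signal a detector looks for.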
Generative AI systems can, and I believe should, watermark all of their content, allowing for easier downstream identification and, if necessary, intervention. If the industry won’t do this voluntarily, lawmakers could pass regulation to enforce the rule. Unscrupulous people will, of course, not comply with these standards. But if the major online gatekeepers — Apple and Google app stores, Amazon, Google, Microsoft cloud services and GitHub — enforce these rules by banning noncompliant software, the harm will be significantly reduced.
Sign authentic content
To approach the problem from the other side, a similar approach could be taken to authenticate original audiovisual recordings at the point of capture. A specialized camera app could cryptographically sign the recorded content as it is being recorded. There is no way to tamper with this signature without leaving evidence of the attempt. The signature is then stored in a centralized list of trusted signatures.
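The point-of-capture signing step can be sketched with a symmetric HMAC for brevity. This is an assumption-laden simplification: a real system would use asymmetric signatures (for example Ed25519) so verifiers never hold the signing secret, and the key and function names below are invented.

```python
import hashlib
import hmac

# Hypothetical per-device secret; real systems would use an asymmetric key pair.
DEVICE_KEY = b"example-device-secret"

def sign_capture(media: bytes) -> str:
    """Sign the recorded bytes at the moment of capture."""
    return hmac.new(DEVICE_KEY, media, hashlib.sha256).hexdigest()

def verify_capture(media: bytes, signature: str) -> bool:
    """Any tampering with the bytes invalidates the signature."""
    return hmac.compare_digest(sign_capture(media), signature)
```

Because the signature covers every byte of the recording, editing even a single frame breaks verification, which is what allows downstream viewers to trust content signed at capture.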
Although this does not apply to text, audiovisual content can then be verified as human-made. The Coalition for Content Provenance and Authenticity (C2PA), a collaborative effort to create a standard for media authentication, recently released an open specification supporting this approach. With major institutions including Adobe, Microsoft, Intel, BBC and many others joining this effort, the C2PA is well positioned to produce effective and widely deployed authentication technology.
The combined signing and watermarking of human-generated and AI-generated content will not prevent all forms of abuse, but it will provide some level of protection. All security precautions must be continuously adjusted and refined as adversaries find new ways to weaponize the latest technologies.
In the same way that society has fought a decades-long battle against other cyber threats like spam, malware and phishing, we should prepare ourselves for an equally protracted battle to defend against various forms of abuse perpetrated using generative AI.
Hany Farid, Professor of Computer Science, University of California, Berkeley
This article is republished from The Conversation under a Creative Commons license. Read the original article.