All the Ways to Tell a Photo Was Taken by AI
This post is part of Lifehacker’s Artificial Intelligence Debunked series. We explore six different types of AI-generated media and highlight the common tells, by-products, and quirks that will help you distinguish artificial content from human-generated content.
AI art is no longer just a concept: it’s everywhere, and you’ve probably encountered it in the wild, whether you knew it or not. AI art has won awards (albeit amid controversy), been used in Netflix movies (again, to criticism), and fooled countless Facebook users with fake images of babies, Jesus, and vegetables. It’s not going anywhere anytime soon.
As the underlying technology continues to improve, it’s more important than ever to learn how to recognize AI-generated images so you don’t get fooled by someone’s fake photos. Shrugging off a few AI-generated duck pictures may seem harmless, but when bad actors spread misinformation through AI images, the consequences can be dire. AI image generator companies are working on ways to watermark AI-generated images, but we’re not there yet: much AI art circulates online without any clear labeling warning users that the images aren’t real. In the meantime, these tips will help you spot fake images when they appear in your feed.
How AI Art Generators Work
Looking at a piece of AI art, it’s easy to assume the generator simply stitched the piece together from images in its database. That’s not what’s happening.
AI art generators are actually trained on huge image datasets, from works of art to photographs. But these tools can’t “see” those images the way humans do. Instead, they break each image down pixel by pixel. The AI won’t see an apple; it will see a group of pixels and their values. Over time, it learns that this particular group of pixel values tends to be an apple. Train the AI on enough images with enough context, and it starts drawing connections between all sorts of objects and styles. It learns how to render fruits, animals, and people in general, as well as artistic styles, colors, and moods.
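To make that idea concrete, here’s a deliberately tiny, hypothetical sketch (nothing like how a real generator is built) of associating pixel-value patterns with labels: average the pixel values of a few labeled example “images,” then label a new one by whichever average it sits closest to. The labels and numbers below are made up for illustration.

```python
# Toy illustration of "learning" pixel-value associations: the system
# never sees an apple, only numbers. Real generators learn vastly richer
# statistical relationships, but the principle -- patterns of pixel
# values, not objects -- is the same. (All data here is hypothetical.)

def mean(values):
    return sum(values) / len(values)

# Tiny 4-"pixel" grayscale examples (0 = dark, 1 = bright).
examples = {
    "apple": [[0.8, 0.7, 0.9, 0.8], [0.9, 0.8, 0.8, 0.7]],
    "night": [[0.1, 0.2, 0.1, 0.0], [0.0, 0.1, 0.2, 0.1]],
}

# One learned "association" per label: the average brightness of its examples.
centroids = {label: mean([mean(img) for img in imgs])
             for label, imgs in examples.items()}

def classify(img):
    """Label a new image by its nearest learned average."""
    m = mean(img)
    return min(centroids, key=lambda label: abs(centroids[label] - m))

print(classify([0.9, 0.9, 0.7, 0.8]))  # bright pixels -> "apple"
```

A real model tracks far more than average brightness, of course, but the takeaway is the same: it relates pixel statistics to concepts, rather than understanding the objects themselves.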
Modern AI image generators such as DALL-E use a technique called diffusion to learn and create images. Essentially, they take a training image and add visual noise (like static) until the entire image becomes meaningless. The model learns how each step of added noise affects the image; from there, it tries to do the opposite, reconstructing its own version of the original image from pure noise, which trains the AI to create images from scratch.
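The noise-adding half of that process can be sketched in a few lines. This is purely illustrative, and assumes a fixed noise level per step: real diffusion models use learned noise schedules and large neural networks to reverse the steps, and the step count and `beta` value here are arbitrary.

```python
import math
import random

def add_noise(pixels, steps=10, beta=0.15, seed=42):
    """Return a list of progressively noisier copies of `pixels`.

    Each step blends the current image with Gaussian noise: keep
    sqrt(1 - beta) of the signal, add sqrt(beta) worth of static.
    After enough steps, only noise remains; a diffusion model is
    trained to run this process in reverse.
    """
    rng = random.Random(seed)
    frames = []
    current = list(pixels)
    for _ in range(steps):
        current = [
            math.sqrt(1 - beta) * p + math.sqrt(beta) * rng.gauss(0, 1)
            for p in current
        ]
        frames.append(current)
    return frames

# A flat gray "image" (64 pixels, all 0.5) drifts toward pure static.
frames = add_noise([0.5] * 64)
```

By the last frame, the original flat gray values are almost entirely replaced by noise, which is exactly the state the generator learns to work backward from.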
This is an extremely simplified explanation, but the fundamental idea is worth knowing: these generators draw on a huge set of learned relationships. That lets them render complex scenes in seconds, but it also introduces strange quirks that can help us tell an AI image from the real thing.
Count the fingers
AI art is getting better, but it’s still not perfect. While these tools can produce high-quality images with realistic lighting, they still struggle with fine detail.
Perhaps most famously, the “human” subjects of AI images tend to have the wrong number of fingers. The rest of the image may seem convincing at first glance, but look at the hands and you’ll notice six, seven, or eight fingers on each. Or maybe it’s the other way around, and a hand has three fingers, two of which melt into one. Either way, fingers, and the hands they’re attached to, often come out mangled.
But while AI makes its most notorious mistakes with fingers, its problems aren’t limited to hands. Any repeating pattern, especially body parts, can come out wrong in AI art. Look at a subject’s teeth: sometimes there are too many of them, or they look distorted in ways real teeth aren’t. Most of us don’t have a perfect smile, but AI teeth are on another level.
You may even spot a subject with an extra limb: you’ll be staring at the image wondering what seems off about it, when suddenly you notice a third arm coming out of the subject’s hoodie.
Vox has a great video explaining why AI struggles with these repetitive elements, but it essentially comes down to a lack of experience. These tools are trained on huge amounts of data, but for something as complex as hands, that data doesn’t provide enough context for the bot to generate the element correctly and realistically. The AI doesn’t know how hands actually work; it can only draw from the hands it has seen. Images like these test the limits of that knowledge.
Watch out for mixing elements
Look at enough AI art and you’ll notice something strange: things everywhere blend and morph into one another. I already mentioned this happening with fingers, but it can happen to many other elements of a subject, including teeth merging into other teeth, clothing folding into itself, and eyes appearing to bleed into other parts of the subject’s head.
And it’s not just subjects: everything in the image is fair game for this kind of blending. Check out the image I generated with DALL-E below. The game board is wavy: parts of the board morph into other parts, and pieces merge with tiles. The woman on the right has teeth that blend together, while the other woman’s sweater cuffs fold in on themselves. (Not to mention her fingers merging into one.)
Scrutinize the text
AI may be able to generate text, but it often can’t render writing within images well. In many cases, AI art that includes writing will look off. Sometimes it’s a logo that resembles its real-life counterpart but doesn’t quite match. (You can tell it’s trying to say “Coca-Cola,” but the letters are all jumbled.) Other times it looks like someone invented a language, or like trying to read something in a dream. (In fact, many of these AI images feel dreamlike, at least to me.)
This side of AI art is rapidly improving, though. Both DALL-E and Meta AI were able to create an image of a cake that said “HAPPY BIRTHDAY KAREN” without any glaring problems. Meta wasn’t perfect, however: the second “P” in “HAPPY” looked more like a “Y,” and there were two lines through the “A” in “KAREN” rather than one. It’s also worth noting that these images come out better when you specifically ask the AI to include the text: when left to its own devices, the lettering often looks weird, so if whoever created the image didn’t think to fix it, that can be an obvious tell.
Look for things that just don’t make sense
Ultimately, AI art doesn’t really know anything. It creates art based on the relationships it has built from all its training data. It doesn’t know how a building should actually be constructed, how tennis is played, or how a human hand moves. It simply reproduces these requests as best it can. Look closely enough and you’ll see these knowledge gaps show up across AI art, especially in images where a lot is going on.
Take this image, for example: I asked DALL-E to create an image of a basement party where people are playing beer pong, drinking from red Solo cups, and chatting. A few big problems stand out right away: the eyes of the people in the frame are mostly off; the hand of the person playing beer pong is mangled; and why is he throwing the ping pong ball from the side of the table? Speaking of the table, it’s wavy and warps in a way a real table wouldn’t, and for some reason there are two sets of cups on one side.
Look at the background and things get even weirder. One man appears to be kneeling and drinking from his red cup as if it were a bottle. The man behind him seems to have a blue can inside his red cup, as if the cup were a koozie. And the face of the person standing behind him looks pasted on, even though the rest of the subject is blurred.
Even official examples from the companies behind these tools contain similar logical inconsistencies. OpenAI uses a fun image of an avocado in therapy to demonstrate DALL-E 3.
The avocado, whose pit has been scooped out, complains to its spoon therapist about feeling empty inside. Cute, but look at the therapist’s notebook: they’re writing backwards in it, with the pages facing outward. DALL-E has seen enough pictures of therapists to know what they typically write notes with, but doesn’t grasp how humans actually hold and use a notebook.
Of course, the AI posts that go viral on social networks like Facebook often make no sense at all. Who carves cat sculptures out of Oreo cookies? Why do so many sad artists build sand sculptures of Jesus? Art can be creative, but AI art is especially strange, in both its small details and its larger themes.
The AI sheen
After looking at AI images for a while, you start to notice something, especially among photorealistic ones: everything is shiny. AI images often have what some call an “AI sheen,” a glossiness that can give away an image’s origin if you know what to look for. Images are often overexposed or harshly lit, making subjects appear particularly radiant.
After a while, you can look at a photo like the one below and immediately know it was AI-generated just by the way it looks. (Even if the subject’s arms weren’t cut off, that is.)
Use a healthy level of skepticism
While these tips are relevant today, AI technology continues to evolve and improve, and they may not be useful for long. AI is already much better at rendering text, so who’s to say it won’t figure out how to generate realistic hands every time? Or stop blending elements of a photo into each other? Or quit adding weird crap to the backgrounds of its images? And while the tips above currently work for photorealistic images, identifying AI-generated artwork can be tougher: such pieces may have the same flaws noted above, but those inconsistencies are easier to disguise in painterly styles that are blended, less realistic, and more open to interpretation.
As we head into a particularly tumultuous election year, it will be more important than ever to keep your BS radar on as you browse the internet. Before you’re impressed by someone’s intricate artwork or angered by an inflammatory image, think twice: is the image even real?