To start, let’s go over how AI generates images. The AI takes input data in the form of human-made images, videos, or text, depending on the desired output. It then runs that input through proprietary algorithms to create an output matching what the prompt requests. Because it essentially just looks at reference material and rearranges pixels to resemble it, it has no idea what its output means. What follows is a collection of things AI does not inherently understand, along with the clues that may help you spot content created by one. Most of these depend on context, so there are obvious examples of human art violating these rules, but human art usually explains the violation or carries context that aids understanding.

AI does not understand anatomy:

The most famous one. If the image has misplaced, unusual, or missing limbs, digits, or features, it could be AI. This includes the well-known hand problem, as well as distortions in facial features and other small errors, like a limb blending into the background or an adjacent object. I am not trying to discriminate against amputees or against people deliberately reflecting these features in art or real life. When these problems are present, it is usually obvious whether they were included intentionally.

AI does not understand details:

This is the most important one, because it can help you detect an image even when it was chosen as the best of a batch for publishing; it is an inherent issue with all AI content. Of all the images I looked at while researching this, it was the main and often only giveaway. Sometimes a part of the image consists of unrecognizable blobs or shapes. Other times, what should be a smooth surface is rough, bent, warped, or covered in detail that should not be there. AI also blurs backgrounds very liberally to avoid potential mistakes in the fine details. In short, if you zoom in really close to an image and find yourself thinking “WTF is that,” it may be AI nonsense.

AI does not understand lighting:

Another easier one: lighting is difficult for AI to get right. It takes a good amount of practice for humans too, since you need to know where the light sources are and get the shadows right. AI-generated content tends to have uniform lighting, if it includes any deliberate lighting at all. Shadows may be missing, may not match the position or shape of the object casting them, or may point in the wrong direction relative to the apparent light source; sometimes everything is fully lit with no real shadow or obvious direction. This should not be your only evidence, since, as I said, lighting is hard and humans make mistakes too. Corporate art is often guilty of this because artists are rushed and doing their best on short timeframes.

AI does not understand proportion and distance:

This is all about relativity. The sizes of things in the image may not be consistent with each other. The proportions of a person or object might be odd or misshapen. The relative locations of things in the image or video might be unclear or unusual due to placement or size. Things can also simply sit in odd places, like an airplane and a football at the same vertical point in an image of an airplane over a football stadium. This one has the caveat of artistic license, or simply art style, so treat it as just one of many things that can point toward AI.

AI does not understand gravity:

Most art has gravity built in as an assumption because it is with us at all times. Sometimes you might notice something in an image or video that does not obviously obey gravity: there is no context suggesting it is flying, falling, or has any reason to hover. This can also include a person in a pose that looks relaxed but would be uncomfortable in real life because of gravity. I have seen an image where an ornament is shown close up without a string, just floating with no connection to a branch.

AI does not understand continuity:

This one is complex. First, if something in an image passes behind something else, a human artist takes that into account and extends or terminates it with purpose. If passing behind or around an object warps whatever is on the other side, or if something like an arm appears in two different places after going behind something, that is a big sign of AI generation. Second, if there is a time factor, as in a video or a series of images, the AI may disregard choices it made earlier. Colors, shapes, and sizes may become inconsistent and change where a human artist would instinctively keep them consistent. The last thing is obvious mistakes, like Earth’s moon hanging in the dark sky above a moon-landing photo. Artistic license applies here as well.

AI does not understand barriers:

Walls are not walls to AI; they are just another part of the image it is replicating. This one is rare, since the prompter will likely reject the output when they notice the obvious issue. If water flows through something it shouldn’t, if wind blows in the same direction regardless of barriers as if in a video game, or if something seems to clip into or disappear into a solid with no logical explanation for where it went, it could be AI. Some human art intentionally evokes this, but it is usually easy to tell whether it is intentional, especially because AI outputs with this little quality control usually have other defects.

AI does not understand diversity:

Humans are often the ones who put the diversity into AI images; otherwise the AI draws on aggregate data full of stereotypes and inherent human bias. That is true, but the main thing that will help you spot AI-generated images is the diversity of features among a group of people in a single image. This tip goes beyond the obvious visual artifacts or clone syndrome of AI. Think of a simple character creator, like making a Mii on the Wii console. One feature I found repeated while practicing is noses: how many people look like they picked the same nose in the creator? Compare it to a real photo of a group of people. Is the nose diversity natural? Eyebrows are another feature AI reuses across multiple people, and prompters tend to overlook them before publishing; they often look eerily similar compared to the variety of eyebrow shapes in real groups. The last is facial hair. AI tends to put the same or similar facial hair on everyone in a group who has any. Even if one face has a 5 o’clock shadow and another a beard, the same shape is still present.

AI does not understand art:

This is the hardest one to get right, because it could easily harm the artists most affected by theft. Use it only to inform a gut feeling, or to help determine whether a photo is real or just photorealistic AI. AI images are usually generic looking. For example, if you have seen one AI frog, you have seen them all; you would have to specify a difference in the prompt to get a frog that doesn’t look AI-generated. The glossiness inherent in AI images shows most in artificial photos, where human skin is often shiny and smooth as if a 5-hour makeup routine preceded the photoshoot. It also affects surfaces in the image that should be matte or less reflective. Because AI is trained on real artists, many artists have a style similar to what it produces, and people have been falsely accused of using AI when they created art in a style the AI had already absorbed.

Before I wrap this up: everything to this point is just observation that can be helpful, not formal advice or a real learning tool. I did not reference anything professional; I simply noted what I observed in databases of known AI images and compared it to real art and photos. It was an exercise in recognizing computer output versus human intent, much like the development of chess computers in the 80s and 90s. All of it could be obsolete in a year or less. What follows is very real advice that will work for as long as AI is a problem, and it will help you avoid the worst outcomes of even simple images.

The last bit: context

Reverse image search the image. Where else is it found? This can potentially tell you where it originated, or whether it is stored in a database that confirms it is real or fake. When was it first used? AI-generated images became harder to distinguish around 2022 with Midjourney; before then you had the “abstract art” era of AI, when they were easy to spot. Where did you find it? If you found it online, who published it? Do you trust them? What was their motivation to post it? This is massive to remember: the image is defined by the AI’s inability to think like a human, but the context is defined by a human with an agenda for posting it.
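Alongside reverse image search, a quick provenance check you can do yourself is looking at the file’s metadata. Real camera photos usually carry EXIF tags (camera make, model, capture time), while freshly generated images usually have none. This is only weak evidence either way, since platforms routinely strip metadata and it can be forged. Here is a minimal sketch using the Pillow library; the `camera_metadata` function name and the `blank.jpg` demo file are my own illustrative choices, not anything from the article:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def camera_metadata(path):
    """Return the EXIF tags found in an image file as a readable dict.

    An empty result is NOT proof of AI generation (social platforms
    often strip metadata), but consistent camera data is a weak hint
    that the file came from a real camera.
    """
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Demo: a freshly generated image carries no EXIF data at all.
blank = Image.new("RGB", (8, 8))
blank.save("blank.jpg")
print(camera_metadata("blank.jpg"))  # → {}
```

Treat this the same way as the visual clues above: one data point that shapes a question, not an accusation.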

The agenda is the biggest danger, beyond any artistic loss or devaluation of human work: the ability to spread misinformation and breed complacency. It is not limited to news or to any political side. Any information made by AI and presented as truth without extensive verification of every fact it contains is a massive risk. You are not immune to propaganda, and the sooner you can separate truth from falsehood, the sooner you can call out the harmful propaganda. AI images can also be a sign of an untrustworthy source: if you notice an AI image in a book about foraging, you should probably not eat any mushrooms it shows you. While you can and should let your judgments change whom you listen to, you should limit your accusations to questions. If they say no, trust them; most people who use AI are very open about their use and their reasons. If they say yes, tell them why they should pursue an alternative.

/as
