Great Job, Internet!: Learn how to identify AI videos with Jeremy Carrasco

If you're skeptical of a video of a cat fighting a bear or a possum eating Halloween candy, just check out Carrasco's Instagram page.

AI has advanced to a point where we can no longer rely on people having weird numbers of fingers or unhinging their jaws in horrifying ways to distinguish a real video from a fake one. For every obviously artificial clip of Donald Trump dousing protesters in shit, there are relatively innocuous videos of an unwanted animal in a family’s yard or two people having a meet-cute on the subway that trip far more people up. But there’s someone out there who can help. If you ever find yourself wondering whether you just got got by a clip of someone falling off of Mount Everest or a cat freaking out in a bathtub, just check out Jeremy Carrasco’s Instagram page.

Carrasco has become one of the internet’s preeminent AI video spotters. Having worked for a long time in the media industry as both a director and technical producer, Carrasco told The A.V. Club that he started picking out AI videos because he “knew what a huge range of typical ‘traditional’ errors looked like, and the AI ones stuck out to me as unique.” Eventually, he said, he developed an eye and language for it.

On his Instagram page, Carrasco uses those skills to talk through all the granular reasons he can tell viral videos are (or aren’t) AI. For a video about a possum stealing Halloween candy, for example, he directs his audience to look for evidence that a watermark from OpenAI’s Sora video generator has been removed, as well as a number of other tells, including magically appearing candy and the fact that the possum looks away from a scary Halloween decoration when it gets startled, because “AI mixes up directions.”

Carrasco sent The A.V. Club five general tips for spotting AI-generated videos:

1. Watermarks: The Sora app lets users generate Sora 2 AI videos for free, but there is a watermark on top. Since this is so popular, you will still see watermarks on many AI videos. However, there are watermark removers, which leave a blemish at the top or sides of the video. Most AI videos don’t have a watermark at all; Sora is the exception.

2. Formats with blurry cameras: Since Sora 2 has a noisy or staticky image, most of the viral videos made with it so far mimic security cameras, Ring doorbell cameras, police body cameras, or even action cameras. People have more tolerance for noisy images in these formats. This was also the case for Google Veo videos (think of the trampoline AI videos!).

3. Check the video source: Many AI videos stretch reality, rage-bait you, or are very bizarre. Meta-analyzing videos is becoming more important: Why does this video exist? What does the creator want me to feel? Then check their page. You’re looking for reliable, consistent content from a page with recurring characters, or from verified creators or news organizations. Be wary of repost accounts or accounts that use the same format over and over; many AI creators find one viral format and repeat it with slight modifications.

4. Look for typical tells: Watch for background issues like blurry or smudgy objects, poor spatial reasoning, and very noisy or wobbly textures. Look directly into the eyes of human subjects: does it look real, or does it feel uncanny? While hands and limbs are mostly sorted out and you’re unlikely to see six fingers, just look around to see what feels off. AI videos are often too well lit for the scenario and have a smooth look, but this is changing too.

5. Learn with the obvious ones: I point out AI videos of animals or other harmless videos because doing so can train your brain to see subconscious tells. For example, while a video’s frame rate and blurry image can be difficult for the average person to explain, there are subconscious tells that become apparent after watching enough of them. This can prepare you for if and when more misleading or harmful videos use AI.

Carrasco still has faith that “most people don’t want to watch AI generated videos, especially when they feel like they’re being tricked.” “The general population seems to understand it’s not good for their feeds and want to keep a grip on reality,” he continued. “Short-term, the increasing quality of AI videos has made people lose confidence in their ability to figure out what’s real, which can lead to cynicism and detachment. Long-term, this projects out to disinformation and distrust. While it may seem like a stretch to go from AI bunnies to political deepfakes, the slow normalization of deepfakes from Sora and synthetic media in general is pushing us further from what was good about the internet in the first place.” Carrasco can be found on Instagram and YouTube under the username @showtoolsai.
