What People Are Getting Wrong This Week: Recognizing Videos Using AI

You’ve probably already been fooled by an AI video, whether you realize it or not. On May 20, Google released Veo 3, its latest AI video generation tool, showing it off with a video of an AI-generated sailor, and the results were either impressive or terrifying, depending on your point of view.
Either way, we’re convinced. While the video has a slightly surreal quality when viewed up close, it’s good enough to fool most casual viewers. The barrier that once kept the average person from falling for computer-generated video has been broken: Veo 3 videos are so good that you can’t easily tell they’re not real, especially when you encounter them while casually scrolling through a social media feed. People are already using Veo 3 for profit, politics, and propaganda. As Lifehacker’s Jake Peterson put it, “You’re Not Ready for This Terrifying New Wave of AI-Generated Videos.”
Veo 3 produces hyper-realistic videos with natural lighting, believable physics, sound effects, camera movement, and dialogue. Unlike traditional CGI, this new generation of AI doesn’t require a Hollywood budget or a team of animators — you just need to write a few sentences’ worth of prompts and feed them to Veo 3. The output is free of many of the telltale distortions that used to mark content as obviously AI-generated.
Watch this video, made entirely on Veo 3, to see how persuasive it can be:
Making these videos is also incredibly easy — you don’t have to spend all day refining prompts to get good results. I went from “I don’t know how to do this” to making the video below in about half an hour, using nothing more than the free trial of Google’s AI tool:
Is there anything a dedicated truth-seeker can do in the face of the onslaught of AI video slop? Maybe, but not much. There are (a few) steps you can take to (sometimes, maybe) spot a fake video — at least until Veo 4 makes it even harder, or until a competing AI video-generating service releases an even better model.
Some tips for spotting fake AI videos (they might sometimes work)
Look for watermarks (both visible and invisible)
According to Google, a SynthID watermark is embedded in all content created by Google’s generative AI models. Unfortunately, it’s invisible, and there’s no easy way to check for it — at least not yet. The company says it’s testing a verification portal to “quickly and efficiently identify content created by Google’s AI,” but it isn’t live yet. (Maybe they could have finished it before launching Veo 3?) Either way, the hope is that soon anyone will be able to upload a piece of content and see whether it was created by one of Google’s AI tools.
Late last week, Google also rolled out a visible watermark on Veo 3 content via DeepMind’s SynthID. Unfortunately (again), it won’t apply to “videos created by Ultra members in Flow, our tool for AI filmmakers,” so anyone using the expensive, “pro” version of Veo 3 can still trick people with unmarked videos.
Non-technical, common-sense ways to spot AI-generated videos
Here are some more tips for spotting AI-generated videos that don’t require any tools more sophisticated than your own brain:
- Slow down. Don’t immediately trust videos you see online, even if they come from people or accounts you usually trust.
- Double-check. Has this clip been shared anywhere else? Who is sharing it, and why?
- Watch for clues. Even Veo 3 isn’t perfect yet. Look for weird physics, unnatural skin textures, odd mouth movements, or inconsistent lighting.
- Think critically. Always ask yourself: Who benefits from me believing this video is real? Consider whether it makes sense at all, who might have filmed it and why, and whether the people in the video behave the way real people actually behave.
The limitations of AI detection
These are concrete steps you can take, but I don’t think many people will take them while watching videos on social media. Even in a perfect world where everyone had access to verification tools that reliably identified fake content, plenty of people would still believe the AI garbage was real. Who’s going to bother investigating every TikTok video they scroll past and every photo that pops up on Facebook? It’s a lot of work, and I don’t think most people actually care whether what they see is real, as long as they like it.
I write about people being duped by AI creations in this column fairly regularly, and it doesn’t seem to matter how convincing the fakes are. Even the sloppiest, most obviously weird creations are good enough when the people who see them want them to be real . And that’s the hard part about detecting AI: We’re most vulnerable to fake content when it confirms our biases. Humans are the weak link in the chain.
These videos are horrifying, but fake news is nothing new
While AI programs like Veo 3 make it easier to create fake videos, effective disinformation was hardly impossible before. CGI has been convincing people of unreal things for decades. Before that, you could simply stage and film a realistic version of whatever you wanted people to see, no digital effects required. Before movies, people faked photographs, and before photographs, people lied in print. And people lie with their mouths all the time, sometimes while standing behind a podium with an official government seal. Deception, forgery, and fraud are as old as humanity. The only difference is that we can do it a lot faster now.
The best way to tell the real from the fake has always been to develop your own bullshit detector, but it’s also the hardest method to rely on. Confirmation bias is basic human nature, and while it’s easy to say “be especially suspicious of things you want to be true,” it’s not a skill many of us (or maybe any of us) actually possess.
Perhaps the biggest mistake this week (and every week) is that people like me think I can spot AI fakes when it really matters. It may be easy to spot and debunk Facebook’s fake AI , but how can I know that what I’m sure of is actually true? I can’t. No one can. And that’s a philosophical conundrum that no amount of watermarking or detection tools can solve.