This Trend Makes It Even Harder to Tell Whether a Video Was Created With AI


It’s frightening how realistic AI-generated videos are becoming. Even more frightening is how accessible the tools for creating them are. With an app like OpenAI’s Sora, users can create hyper-realistic short videos of almost anything, including real people like celebrities, friends, or even themselves.

OpenAI understands the risks associated with an app that makes it so easy to create realistic videos. That’s why the company adds a watermark to every Sora video you create through the app. So, if you’re scrolling through your social media feed and see a small Sora logo with a cute cloud and darting eyes, you’ll know it’s an AI-generated video.


Sora’s watermark can’t be trusted.

When OpenAI announced this app, my immediate worry was that someone would find a way to remove the watermark and sow confusion online. I wasn’t wrong: there are already plenty of options for anyone who wants to make their AI videos look even more real. What I didn’t expect was the opposite: people adding the Sora watermark to real videos to make them look like they were created by AI.

I was recently scrolling (or maybe doomscrolling) the X mobile app when I started seeing some of these videos, like this one featuring Apple executive Craig Federighi: the post says “Sora is getting so good” and carries a Sora watermark, so I assumed someone had made a cameo of Federighi in the app and posted it to X. To my surprise, however, the video was simply lifted from one of Apple’s pre-recorded WWDC events, the one where Federighi does parkour around Apple headquarters.

Later, I saw this clip, which also uses the Sora watermark. At first glance, you might take it for an OpenAI creation. Look closer, though, and you’ll notice the clip features real people: the footage is crisply rendered, without the blur and noise typically seen in AI-generated video. It simply mimics the way Sora tends to frame clips of people speaking. (Attentive viewers may also notice that the watermark is slightly larger and more static than a real Sora watermark.)


As it turns out, the account that posted the second video also created a tool for adding a Sora watermark to any video. They don’t explain how it works or why it exists, but it definitely does. Even if it didn’t, I’m sure adding a Sora watermark to a video would be easy enough, at least if you don’t try to replicate the way the official Sora watermark moves around the frame.


To be fair, people were posting similar videos even before the watermarking tool existed. The joke is to claim you made something with Sora, then post a popular or controversial video instead: Drake’s 15-year-old Sprite commercial, Taylor Swift’s dance from The Eras Tour, or the entire Sonic the Hedgehog movie. It’s a funny meme, especially when it’s obvious the video wasn’t created with Sora.


Real or not real?

But this is an important reminder to stay vigilant when scrolling through videos in your feed. You now have to watch for both fake clips and genuine ones passed off as AI-generated. The consequences are real. Sure, it’s fun to slap a Sora watermark on a viral video, but what happens when someone adds one to legitimate footage of illegal activity? “Oh, that video isn’t real. See the watermark? It’s AI.”

At this point, it seems no one has figured out how to perfectly replicate Sora’s watermark, so if someone does try to pass off a real video as AI-generated, there will be telltale signs. Still, all of this is troubling, and I don’t know what the solution will be. Perhaps we’re headed for a future where videos on the internet are universally treated as unreliable. If you can’t tell what’s real and what’s fake, why try at all?
