OpenAI Just Created an App for Sharing Hyper-Realistic AI Videos


Last year, I wrote that we should all be wary of Sora, OpenAI’s AI-powered video generator. Sora’s initial launch promised hyper-realistic videos, which, while exciting for some, terrified me. While AI enthusiasts see a future of AI-generated movies and shows, I envision a future where no one can distinguish truth from lies. In my view, the only real purpose of this technology is mass disinformation.

Over the past year and a half, these AI-generated videos have become not only more realistic but also more accessible, as companies like Google make their tools available to anyone willing to pay. This is precisely the situation we find ourselves in with OpenAI’s latest announcements: Sora 2, a new AI model for creating video with sound, and a new Sora app for creating and sharing AI-generated videos.


Sora 2

OpenAI positions Sora 2 as a significant upgrade over Sora, comparing the two models to GPT-3.5 and GPT-1, respectively. The company claims the new model can generate complex videos that were out of reach for previous models, including an Olympic gymnastics routine; a man performing a backflip on a paddleboard, with the physics of the water “accurately” simulated; and a figure skater performing a triple axel with a cat on his shoulder.


One of the common flaws of AI video models is their lack of understanding of real-world physics. While the visuals may appear realistic, elements may transform randomly, while others may disappear and reappear without reason. OpenAI claims that Sora 2 makes these kinds of errors less frequently. A basketball that misses the hoop won’t magically teleport into it; instead, it will bounce off the backboard as expected. The company cautions that the model is still imperfect, just improved. Sora 2 is also better at maintaining continuity between shots: according to OpenAI, your videos should stay consistent from take to take, and you can specify different styles, including “realistic,” “cinematic,” and “anime.”

Perhaps the biggest advancement in Sora 2 is the ability to incorporate real-world elements into the model. OpenAI calls this feature “Cameo.” You can insert real people into the Sora 2 model and ask the AI to place them in any video it generates. OpenAI demonstrates several examples of its employees adding themselves to various videos. While the quality isn’t always consistent, it’s a huge leap over JibJab.

Like Google’s Veo 3, Sora 2 can generate video with realistic sound. The announcement video clearly demonstrates this: an elephant roars; a skater glides across ice; water splashes on the ground. But what’s even more impressive (and disturbing) is that people are talking. An AI-generated Sam Altman explains the new model and app in the video, and while it’s obvious to those in the know that it’s AI, I suspect many won’t realize that it’s not the real Altman on screen.

The Sora app

OpenAI claims that the Sora app is the “natural evolution of communication.” The company sees it as a way to create and remix other users’ AI generations, especially given the ability to upload your own face and likeness to the model.

The app is currently invitation-only, though anyone can download it for free from the App Store today. Even without an invite, you can get a feel for it by watching the demo video OpenAI published on Tuesday, as well as by reading posts from users who already have access.

The first demo from OpenAI features a double Cameo with OpenAI researcher Bill Peebles and Sam Altman. The video includes an introductory shot of the two men conversing, followed by a close-up of Peebles quickly discussing the app’s rollout, a close-up of Altman delivering a tirade, and finally a return to the original introductory shot. At first glance, it’s the kind of video you’d binge-watch on TikTok or Reels, but it’s entirely AI-generated.

The OpenAI team showcases a number of other ready-made examples, including a Cameo that transforms into a cartoon, another rendered in an anime style, and yet another that creates a “news” report about one employee’s ketchup addiction. (The latter, I must say, is quite disgusting.) They also showcase remixes of videos found in the feed, since you can ask Sora to alter a video however you see fit. One video features Peebles in a “commercial” for Sora 2 cologne, but others have remixed it to advertise toothpaste instead, or to play entirely in Korean.

These videos are quite realistic: in one, you think you’re watching a clip of a tennis match, but it turns out to be a Cameo of OpenAI’s Rohan Sahay. After “Sahay” wins, the video cuts to his “interview,” in which he thanks his detractors. In other videos, the AI is more obvious, though again, perhaps not obvious enough for most viewers to notice.

Safety and security according to OpenAI

Cameo sounds like a privacy and security nightmare, though OpenAI does provide some protection. You can’t simply use someone else’s face for any videos, and you can only upload your own face to the platform. Setting up the Cameo feature in the app is simple, if extremely unsettling. The app scans your face, similar to Face ID on an iPhone, and then sends the data to OpenAI’s “systems” for “multiple verification” to block impostors or users who might want to create a Cameo with you without your consent. Once approved, you choose who can create Cameos with you: all users, friends, users you’ve specifically approved, or just you.


As for the videos themselves, the Sora app places a visible watermark on any clip exported from the app. If you’ve seen these videos online, you’ll notice a small “Sora” stamp on each one, similar to the watermark you might see on TikTok clips exported to other platforms. There are also moderation systems in place to block users from creating “harmful” content, particularly with Cameos.

If you’re a teenager and use the Sora app, you won’t be able to scroll endlessly. After a certain amount of scrolling, a cooldown period will begin to prevent you from wasting hours watching these AI-powered videos. While this limitation doesn’t apply to adult accounts, the app will encourage you to take a break.

Who asked for this?

With all due respect to OpenAI and its security team, this app looks like it will be a disaster, and there are many reasons for that.

First, OpenAI has made creating hyper-realistic short videos as easy as asking Siri about the weather. I understand that all these videos carry watermarks, but it doesn’t take much skill to crop or edit them out, at least not in a way that most people would notice. Once this becomes widely available, our social media feeds will be inundated with this content. And given that many of these clips feature fairly realistic video and audio, plenty of people will be misled by them.

It’s bad enough when it comes to silly videos like those of bunnies jumping on a trampoline. But what happens when “politicians” say something outrageous, or a “celebrity” shoplifts? In one viral Sora video, Sam Altman tries to make off with a GPU at a Target store, but is stopped by a security guard. How many more Sora videos will show Sam Altman, and everyone else who opens their Cameo up for remixing, committing crimes or simply doing something shameful? Those with enough power or fame might debunk these videos, but by then it’ll be too late: most people who watch the video will accept it as fact.


In this regard, it’s certainly good that there are security measures in place to prevent people from remixing other people’s Cameos without permission, but the risk of abuse is enormous: what if someone figures out how to “scan” a person’s face from a video, or hacks the settings that prevent others from using the original face scan? If they manage to bypass OpenAI’s security measures, they can insert that person’s face into any video the platform allows. That’s all it would take.

Look, I’m constantly online. I’m not going to pretend I don’t enjoy a good AI-generated meme when I come across one. But I’m not going to waste my free time scrolling through nothing but AI-generated crap. I’m sure people will find creative ways to make funny videos with Sora or have fun making Cameos with friends, but that’s the point: beyond the sheer novelty of the technology, nothing good will come of it.

It’s time to stop believing everything you see online: someone could have just faked it in an app.
