What Is Frame Generation and Should You Use It in Games?

Earlier this year, Nvidia announced its new 50-series GPU lineup with a hot new feature in tow: “Multi-Frame Generation.” Building on earlier frame generation technology, these new GPUs allow games to generate multiple video frames from a single frame rendered the normal way. But is this a good thing? Or is it just “fake frames”? Well, it’s complicated.

At its most basic level, “frame generation” refers to the technique of using deep learning AI models to generate frames between two frames of a game rendered by the GPU. Your graphics card does the more mundane work of creating “Frame One” and “Frame Three” based on 3D models, lighting, textures, etc., but then frame generation tools take those two images and guess what “Frame Two” should look like.
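The basic idea can be sketched in code. Real frame generation relies on trained neural networks and motion data, but as a toy illustration (my own simplification, not Nvidia's actual algorithm), imagine frames as tiny grids of grayscale pixel values and the generated frame as a blend of its two rendered neighbors:

```python
def interpolate_frame(frame_one, frame_three):
    """Guess 'frame two' by averaging each pixel of its two neighbors."""
    return [
        [(a + b) // 2 for a, b in zip(row_one, row_three)]
        for row_one, row_three in zip(frame_one, frame_three)
    ]

frame_one = [[0, 0], [100, 100]]    # rendered the normal way by the GPU
frame_three = [[100, 100], [0, 0]]  # also rendered by the GPU
frame_two = interpolate_frame(frame_one, frame_three)  # generated in between
print(frame_two)  # [[50, 50], [50, 50]]
```

A real model does far more than average pixels (it tracks motion, occlusion, and so on), but the shape of the problem is the same: two known frames in, one guessed frame out.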

Multi Frame Generation takes this a step further. Instead of just generating one extra frame, it generates multiple. This means that at the highest settings, three out of every four frames you see may be generated rather than rendered directly. Whether this is a good thing, however, depends largely on what game you’re playing and what you want your gaming experience to be.

What is the difference between scaling and frame generation?

Nvidia’s new Multi Frame Generation technology is part of the DLSS 4 announcement. DLSS stands for Deep Learning Super Sampling, and as the name suggests, earlier versions of it weren’t about frame generation, but rather about supersampling (or upscaling).

In this version of the technology, the GPU will render a lower-resolution version of the frame, such as 1080p, and then upscale it to a higher resolution, such as 1440p or 2160p (4K). The “deep learning” in DLSS refers to training a machine learning model on each game individually to give the upscaler a better idea of what the frame should look like at a higher resolution.
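As a rough sketch of what upscaling means in the first place, here's a naive nearest-neighbor doubling (not DLSS itself; DLSS replaces this crude step with a trained model that predicts plausible high-resolution detail):

```python
def upscale_2x(frame):
    """Double a frame's width and height by repeating each pixel."""
    out = []
    for row in frame:
        wide = [pixel for pixel in row for _ in range(2)]  # double the width
        out.append(wide)
        out.append(list(wide))                             # double the height
    return out

low_res = [[1, 2], [3, 4]]       # stand-in for a 1080p internal render
high_res = upscale_2x(low_res)   # stand-in for the upscaled output
print(high_res)  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

The difference is that a learned upscaler doesn't just copy pixels; it fills in edges and textures that the low-resolution render never contained.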

Nowadays, DLSS refers more to a whole set of tools that Nvidia uses to improve performance, and the method mentioned above is commonly referred to as Super Resolution. Frame generation, on the other hand, takes two full frames and generates a completely new frame in between from scratch.

Of course, it’s also possible to use all of these technologies at once. You could end up in situations where your GPU is technically rendering only one lower-resolution frame for every two — or more, on newer GPUs — full-resolution frames you see. If that sounds like a lot of extrapolation, well, it is. And, incredibly, it works pretty well. Most of the time.
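To get a feel for how much extrapolation that is, here's some back-of-the-envelope arithmetic with assumed numbers (a 1080p internal render, 4K output, and three generated frames per rendered one; these are illustrative, not Nvidia's published figures):

```python
# One natively rendered 1080p frame...
rendered_pixels = 1920 * 1080
# ...versus four 4K frames actually shown on screen.
displayed_pixels = 4 * (3840 * 2160)

share = rendered_pixels / displayed_pixels
print(f"{share:.2%} of displayed pixels were rendered directly")  # 6.25%
```

In that scenario, roughly fifteen out of every sixteen pixels you see were inferred rather than rendered.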

When is frame generation useful?

In a relatively short period of time, we’ve seen an explosion in demand for GPU power. A 4K frame contains four times as much pixel information as a 1080p frame. What’s more, while media like movies and TV have stayed at a relatively stable 24-30 fps, gamers are increasingly demanding at least 60 fps as a baseline, often pushing that figure to 120 or 240 fps on high-end machines. And don’t get me started on Samsung’s absurd 500 fps display.
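The pixel math behind that demand is straightforward to check:

```python
pixels_1080p = 1920 * 1080   # 2,073,600 pixels per frame
pixels_4k = 3840 * 2160      # 8,294,400 pixels per frame
print(pixels_4k // pixels_1080p)  # 4: four times the pixel information

# Moving from 1080p at 30 fps to 4K at 120 fps multiplies the work again.
throughput_1080p_30 = pixels_1080p * 30
throughput_4k_120 = pixels_4k * 120
print(throughput_4k_120 // throughput_1080p_30)  # 16 times the pixels per second
```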

If your GPU had to calculate every pixel of a 4K image 120 (or 500) times per second, the resulting fire emanating from your PC would be visible from space—at least in games with the kind of detailed ray-traced graphics we’re used to in AAA titles.

From this perspective, frame generation isn’t just useful, it’s essential. On Nvidia’s latest GPUs, Multi Frame Generation can push a game’s frame rate up by several hundred frames per second, even at 4K, while still looking pretty good. That’s simply not a frame rate today’s hardware could reach at that resolution by rendering every frame directly.

When it works (and we’ll get to that), frame generation can provide smoother motion and less eye strain. If you want to feel the difference, this little tool lets you experiment with different frame rates (assuming your display supports them). Try comparing 30 fps to 60 fps or 120 fps and follow each ball with your eyes. The effect is even sharper if you turn off motion blur, which many games enable by default.

For chaotic games with a lot of movement, these extra frames can be a huge benefit, even if they’re not quite perfect. If you were to step through the footage frame by frame, you might spot some artifacts, but they should be less noticeable while playing – at least, that’s how it works in theory.

What are the disadvantages of frame generation?

In practice, how well this technology works varies greatly depending on the game, as well as the power of your computer. For example, using frame generation to go from 30 frames per second to 60 can look choppier than going from 60 frames per second to 120. This is due, at least in part, to the fact that at lower base frame rates there is more time between reference frames, which means more guesswork for the generated frames. That results in more noise and artifacts.
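The arithmetic behind that guesswork is simple: the gap the model has to bridge is just the time between rendered reference frames.

```python
def reference_gap_ms(base_fps):
    """Milliseconds between consecutive rendered (reference) frames."""
    return 1000 / base_fps

print(round(reference_gap_ms(30), 1))  # 33.3 ms of motion to guess across
print(round(reference_gap_ms(60), 1))  # 16.7 ms: half as much guesswork
```

Twice the base frame rate means half as much on-screen motion between the two frames the model gets to look at.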

Whether these artifacts bother you is also highly subjective. For example, if you’re swinging through a city in Spider-Man 2 and the trees in the background look weirder than they should, will you even notice? On the other hand, in slower, atmospheric games like Alan Wake II, where graphical detail and set design are more important to the atmosphere, the ghosting and blurring may seem more pronounced.


It’s also worth noting that artifacts aren’t necessarily inherent to all frame generation. For starters, better input frames can lead to better frame generation. Nvidia, for example, is touting new models behind Super Resolution and Ray Reconstruction — an entirely different technology for improving ray tracing results that we simply don’t have time to get into — to improve the images that are fed into the frame generation portion of the pipeline.

You can think of it as a giant, complex version of the game of telephone. The only way to get the most accurate, detailed frames from your game is to render them directly. The more steps you add to extrapolate additional pixels and frames, the more chances there are for errors. However, our tools are gradually getting better at reducing those errors. So it’s up to you to decide whether more frames or more detail is worth it.

Why Frame Generation Is (Probably) Bad for Competitive Gaming

There’s one major exception to this whole argument, and that’s competitive gaming. If you’re playing online games like Overwatch 2, Marvel Rivals, or Fortnite, smooth movement isn’t necessarily your main concern. You might be more concerned with latency — that is, the delay between when you react to something and when your game registers your reaction.

Frame generation complicates latency because it forces frames to be held back and shown out of order from how they were produced. Recall our earlier example: the GPU renders frame one, then frame three, and only then can the frame generator work out what frame two should be. In this scenario, the game can’t show you frame two until frame three has already been rendered.
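A simplified model of the cost (my own assumption for illustration, not Nvidia's published numbers): because frame two can't appear until frame three exists, every displayed frame is held back by roughly one rendered-frame interval.

```python
def holdback_ms(rendered_fps):
    """Extra delay: roughly one rendered-frame interval, in milliseconds."""
    return 1000 / rendered_fps

print(round(holdback_ms(60), 2))   # 16.67 ms of added delay at a 60 fps base
print(round(holdback_ms(120), 2))  # 8.33 ms at a 120 fps base
```

Note the irony: the lower your base frame rate (where frame generation helps the most visually), the bigger this latency penalty gets.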

Now, in most cases, this isn’t a problem. At 120 frames per second, each frame is on screen for only about 8.33 milliseconds. Your brain can’t even register a delay that short, so it’s unlikely to cause much trouble. In fact, human reaction times are typically measured in hundreds of milliseconds. For completely unscientific proof, try this reaction time test. Let me know when you get under 10 milliseconds. I’ll wait.
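Those numbers are easy to verify (the 250 ms reaction time below is a typical figure I'm assuming for illustration):

```python
frame_time_ms = 1000 / 120   # how long each frame stays on screen at 120 fps
reaction_ms = 250            # a typical human visual reaction time (assumed)

print(round(frame_time_ms, 2))             # 8.33
print(round(reaction_ms / frame_time_ms))  # ~30 frames pass before you can react
```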

However, this becomes a problem in competitive gaming, as frame lag is not the only latency issue you encounter. Latency exists between the keyboard and the computer, between the computer and the server, and between the server and other players.

Most of these individual links in the chain may add only a little delay, but they all have to be synchronized somewhere. That “somewhere” is the game’s tick rate. This is how often the game you’re playing is updated on the server. For example, Overwatch 2 has a tick rate of 64. This means that every second, the server updates the state of the game 64 times, or once every 15.63 milliseconds.
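That tick interval comes straight from the tick rate:

```python
tick_rate = 64                       # Overwatch 2 server updates per second
tick_interval_ms = 1000 / tick_rate
print(tick_interval_ms)              # 15.625 ms between server updates
```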

This matters if, say, your game is still showing you our hypothetical frame one, where the enemy Cassidy is in your crosshair, but hasn’t yet updated to frame three, where he isn’t. The server may have already ticked past his old position before your screen catches up. That could mean your shot registers as a miss, even though it looked like it should have hit. And it’s the one problem that can actually get worse with Multi Frame Generation.

There are ways to soften the blow, like Nvidia’s Reflex technology, which reduces input lag in other areas, but it’s not something you can avoid entirely. If you’re playing competitive online games, you’re better off lowering your graphics settings to get a better frame rate rather than using frame generation, at least for now.
