NVIDIA SlowMo Even Better than Real SloMo

FrgMstr

NVIDIA is using its neural network to interpolate frames in video to produce smoother slow-motion playback, and quite frankly the results are tremendously impressive. It gets even more impressive when NVIDIA takes slomo video and makes it even slomoer. It has to be some of the slomoest slomos ever slomoed. Thanks cageymaru.

Check out the SloMo.

Researchers from NVIDIA developed a deep learning-based system that can produce high-quality slow-motion videos from a 30-frame-per-second video, outperforming various state-of-the-art methods that aim to do the same.
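
For a rough sense of what that means mechanically, here is a minimal sketch (a naive cross-fade stand-in, nothing like NVIDIA's actual network): the job is to synthesize the in-between frames a real high-speed camera would have captured.

Code:
# Naive placeholder for multi-frame interpolation: turning 30 fps footage into
# 8x slow motion played back at 30 fps needs 7 new frames between every pair
# of captured frames. A real system warps pixels along estimated motion
# instead of cross-fading, which is why NVIDIA's results look sharp rather
# than ghosted.
import numpy as np

def interpolate_pair(frame_a, frame_b, n_intermediate):
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)              # fractional time between the two frames
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

# Two dummy 1080p frames, 7 synthesized frames between them (30 -> 240 fps)
a = np.zeros((1080, 1920, 3), dtype=np.float32)
b = np.ones((1080, 1920, 3), dtype=np.float32)
print(len(interpolate_pair(a, b, 7)))             # 7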
 
How is this any different than TV interframe interpolation? Analyze the motion vectors and interpolate.
 
How is this any different than TV interframe interpolation? Analyze the motion vectors and interpolate.

I've seen similarly impressive stuff from an old Premiere plugin that did the calcs offline. Took a while, but looked great.
 
How is this any different than TV interframe interpolation? Analyze the motion vectors and interpolate.
You could have answered that yourself.
It's better.
Processors in TVs can't do much.

I use SVP 3.1 for real-time video playback.
It's great with a fast CPU when set to max smoothness and artifact masking at its strongest.
It will be interesting to compare if we get to use the NVidia tool ourselves.
 
Pretty neat little teaser video.

Processors in TVs can't do much.
Depends on the TV. My JS9500 does a fantastic job. It can take horrible content and post-process it to make it at least acceptable, if not very good. On the other hand, my LG OLED, which has a much better panel but a much shittier "computer", basically feeds whatever content it gets straight to the panel. On UHD Blu-ray it smokes the Samsung, but anything under 720p looks awful and it can't even properly stretch the image. The Samsung was a bit more expensive, but there are a number of TVs out there that can do some pretty impressive processing; it just comes at a cost (I wasn't willing to spend more than $3k on a secondary TV or I would have gone with a flagship OLED)... Much like spending money on a nice GPU.
 
Seems interesting for entertainment purposes, though I wonder if the tech becoming ubiquitous could eventually lead to misapplication. I'm not sure I trust John Q. Juror to grasp the concept of interpolation.
 
When I saw the first part of this I thought NV was saying "this 30fps video slowed down doesn't look as good as one we took at 240fps and slowed down even more" :D
 
You could have answered that yourself.
It's better.
Processors in TVs can't do much.

I use SVP 3.1 for real-time video playback.
It's great with a fast CPU when set to max smoothness and artifact masking at its strongest.
It will be interesting to compare if we get to use the NVidia tool ourselves.

The point is NVIDIA makes it sound revolutionary when in fact they are doing the same thing TVs have been doing for you, with perhaps a slight advantage in that they can analyze a couple of frames ahead to create better motion interpolation. But that isn't real time. TVs can't do that, because they sometimes have to sync audio with external drivers, so additional delay isn't an option.
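
To put a rough number on that lookahead cost (my own back-of-the-envelope math, nothing from the article):

Code:
# Why buffering future frames costs latency on a TV doing real-time interpolation.
fps = 30
frame_time_ms = 1000 / fps       # ~33.3 ms per source frame
lookahead_frames = 2             # analyzing a couple of frames ahead
added_delay_ms = lookahead_frames * frame_time_ms
print(round(added_delay_ms, 1))  # ~66.7 ms of extra delay the audio sync has to absorb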

Sorry not impressed.
 
The point is NVIDIA makes it sound revolutionary when in fact they are doing the same thing TVs have been doing for you, with perhaps a slight advantage in that they can analyze a couple of frames ahead to create better motion interpolation. But that isn't real time. TVs can't do that, because they sometimes have to sync audio with external drivers, so additional delay isn't an option.

Sorry not impressed.
And what TVs are doing is nowhere near this level of definition or granularity.
 
Here's a question: can this work on video games? Can it be done "on the fly" with low input lag?

Assuming it requires fewer hardware resources than just rendering at the effective FPS.
 
Someone at nVidia PR deserves a cookie. Share it with the programmers that actually did something useful though.
 
Here's a question: can this work on video games? Can it be done "on the fly" with low input lag?

Assuming it requires fewer hardware resources than just rendering at the effective FPS.
It requires major hardware to do well at 1080p, and that's not even to the standard NVidia is touting.
I can't see it being feasible when the hardware is already being punished, especially at higher resolutions.
 
The tech looks good, and I know you guys didn't use it, but 2018 is already burning out the "AI" term.

That said, I had to do a double take here - thought I was watching a man in a thong for a second.
manthong.jpg
 
Here's a question: can this work on video games? Can it be done "on the fly" with low input lag?

Assuming it requires fewer hardware resources than just rendering at the effective FPS.

It doesn't do anything for video games; you might as well just display the same frame 4x (like a 2x better AFR SLI).
 
I don't know; looking at the videos, they seem to be losing a whole lot of definition. The Slow Mo Guys examples certainly didn't shine a good light on the technology: look, it's slower and a shitload blurrier.
Ok
 
Could it be that whatever program nvidia is using creates a sort of dynamic vector field off of the bitmap data in the video, and then interpolates from that instead of from the frame as a whole? I guess that would be more suited for a gpu to handle than full frame interpolation.
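
Flow-based interpolation in general works roughly like that (whether it's what NVIDIA does internally is just a guess on my part): estimate a dense per-pixel motion vector field between two frames, then warp a frame partway along that field instead of cross-fading. A minimal sketch using OpenCV's Farneback optical flow:

Code:
# Minimal sketch of flow-based frame interpolation (a generic technique, not
# NVIDIA's implementation): compute a dense motion vector field from frame A
# to frame B, then sample frame A half a vector back to approximate the frame
# at t = 0.5.
import cv2
import numpy as np

def midpoint_frame(frame_a, frame_b):
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # One 2D motion vector per pixel, estimated from image content
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)

# Example with two dummy frames (in practice they come from the video decoder)
a = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
b = np.roll(a, 4, axis=1)          # simulate everything shifting 4 px sideways
print(midpoint_frame(a, b).shape)  # (480, 640, 3)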
 
Oh I think I could have produced a waaaaaaay better slo-mo demo starting with better content.
 
It's a shame that nVidia never shares tools like this. They have shown off some lovely demos over the years, and they never share or open-source them for download.

At least we have AMD, who share many things that benefit others, for free.
 
Although it's cool to see it slowed down.
Since my degree was in digital media and design, watching this didn't really impress me that much, because I could tell it was just adding frames in between that didn't exist. The tech itself is impressive for creating the "extra" frames, but the results (to me) weren't that great, because the level of detail is lost since it isn't a true frame capture. This is why the Slow Mo Guys sometimes capture stuff at 100K fps... they capture the detail frame by frame. Clearly in this video example the details are muddied versus what they had with the raw capture.
 
Here's a question: can this work on video games? Can it be done "on the fly" with low input lag?

Assuming it requires fewer hardware resources than just rendering at the effective FPS.

All interpolation implementations that I know of interpolate between frames. That means interpolation would introduce input lag into games, no matter how fast your algorithm is.

However, VR does something kinda similar with frame warping, and it only works with the previous frame. It's not actually interpolating moving objects in the frame IIRC, but the idea is similar.
https://xinreality.com/wiki/Asynchronous_Spacewarp


There was some discussion about this idea in the ReShade forums, but nothing ever came of it: https://reshade.me/forum/suggestions/3826-frame-interpolation-shader-for-reshade
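
For what it's worth, here's a toy sketch of why interpolation adds input lag but warping doesn't (my own reasoning and made-up numbers, not anything from those links):

Code:
# Interpolation needs the *next* real frame before it can show the in-between
# one, so the synthetic frame is always at least one frame period stale.
# Warping/extrapolation reuses only frames that already exist.
def interpolated_frame_delay_ms(frame_period_ms, processing_ms):
    return frame_period_ms + processing_ms   # must wait for the next real frame

def warped_frame_delay_ms(processing_ms):
    return processing_ms                     # only the cost of the warp itself

print(interpolated_frame_delay_ms(1000 / 30, 2.0))  # ~35.3 ms behind the action
print(warped_frame_delay_ms(2.0))                   # ~2 ms behind the action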

It's a shame that nVidia never shares tools like this. They have shown off some lovely demos over the years, and they never share or open-source them for download.

This ^

Speaking of AMD, they did something similar years ago with FluidMotion. Unless they recently ripped it out of the driver, you can do it in real time, on your AMD GPU, right now:



It totally flew under the radar though, even among AMD fans. I think most people aren't really that concerned about video quality, even when major breakthroughs are happening, and the tech won't work on the big streaming video platforms since they all use crappy, closed browser renderers.
 
I don't know; looking at the videos, they seem to be losing a whole lot of definition. The Slow Mo Guys examples certainly didn't shine a good light on the technology: look, it's slower and a shitload blurrier.
Usually the higher the frame rate, the lower the resolution, especially when hitting really high frame rates in the high hundreds or thousands. Nothing new here, other than it's pretty sick to take slow mo and make it slow-mo'er!
 
The Nvidia-assisted slow mo looks odd to me, like you can tell they are digitally altered images; they look a bit CGI. I wonder if it's just my brain being crazy or if I would see it in a double-blind comparison.
 
You could have answered that yourself.
It's better.
Processors in TVs can't do much.

I use SVP 3.1 for real-time video playback.
It's great with a fast CPU when set to max smoothness and artifact masking at its strongest.
It will be interesting to compare if we get to use the NVidia tool ourselves.

Doesn't setting Artifact Masking to strongest almost disable the whole motion interpolation? In my eyes, when I set it to strongest, all fluidity is gone. I always used no artifact masking and the lowest setting for motion interpolation strength. Slightly smoother, but not artificially so, and it does not hurt the oh-so-precious "film look". (I'm slightly sarcastic on that one.)
 
The Nvidia-assisted slow mo looks odd to me, like you can tell they are digitally altered images; they look a bit CGI. I wonder if it's just my brain being crazy or if I would see it in a double-blind comparison.

I mean... I think it's basically creating data. Interpolation involves adding information where there previously wasn't any. Whatever the algorithms are, it's artificial footage. Once it gets up to a certain percentage of the footage, you reach the point where more of the footage is fake than real.
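
Quick math to illustrate that point (my own example numbers): an 8x slowdown of a 30 fps source means 7 of every 8 displayed frames never came out of the camera.

Code:
# Fraction of synthesized frames for an N-times slowdown of real footage.
slowdown = 8                                  # e.g. 30 fps source shown as 8x slow motion
synthesized_per_real = slowdown - 1           # 7 fabricated frames per captured one
synthetic_fraction = synthesized_per_real / slowdown
print(f"{synthetic_fraction:.1%} of the output frames are generated")  # 87.5%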
 
Doesn't setting Artifact Masking to strongest almost disable the whole motion interpolation? In my eyes, when I set it to strongest, all fluidity is gone. I always used no artifact masking and the lowest setting for motion interpolation strength. Slightly smoother, but not artificially so, and it does not hurt the oh-so-precious "film look". (I'm slightly sarcastic on that one.)
It isn't quite as smooth, but it's still a lot better than without.
Artefacts trailing things that move across the screen stand out too much for me.
Masking on max gets rid of the artefacts.
 
Now if only we could get nvidia to use a neural network to release the next gaming card 4x faster.
 