Why is Nvidia better than AMD for video editing?

Peat Moss

Limp Gawd
Joined
Oct 6, 2009
Messages
466
The consensus seems to be that Nvidia is better (or at least faster) than AMD for video editing. I was just curious about why that is, the mechanics of it. Is it the clock speed? The CUDA cores? The drivers? Etc.

I don't game, but would rather support AMD than the greedy green team.

So, even though AMD is not as good as Nvidia, what kind of specs should I look for in an AMD card that would provide a decent video editing experience? Number of stream processors? ROPs? Clock speed?
 

LukeTbk

2[H]4U
Joined
Sep 10, 2020
Messages
3,603

DooKey

[H]F Junkie
Joined
Apr 25, 2001
Messages
12,430
The consensus seems to be that Nvidia is better (or at least faster) than AMD for video editing. I was just curious about why that is, the mechanics of it. Is it the clock speed? The CUDA cores? The drivers? Etc.

I don't game, but would rather support AMD than the greedy green team.

So, even though AMD is not as good as Nvidia, what kind of specs should I look for in an AMD card that would provide a decent video editing experience? Number of stream processors? ROPs? Clock speed?
Nvidia has a long history of supporting their software well. AMD not so much.
 

emphy

Limp Gawd
Joined
Aug 31, 2016
Messages
369
My guess would be the tensor cores; image and video manipulation are some of the more prominent applications for that machine learning stuff, and amd is rather well-known for running a few leagues behind nvidia in that area.
 

TrunksZero

Limp Gawd
Joined
Jul 15, 2021
Messages
324
It all comes down to developers coding their tools to make exclusive use of CUDA, which only works on Nvidia hardware, or coding their GPU acceleration to only use CUDA. That's about it. The reason is that Nvidia pours money into providing CUDA support and helping this along.
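
Just to make that concrete, here's a rough sketch of the kind of gatekeeping many tools do at startup: probe for the Nvidia/CUDA stack and only enable the GPU path if it's there, otherwise fall back to the CPU. This isn't code from any real editor; the function names and fallback logic are made up for illustration.

```python
"""Hypothetical sketch: gate GPU acceleration on the Nvidia stack being present.
Not taken from any real editor; names and logic are illustrative only."""
import shutil
import subprocess


def detect_nvidia_gpu():
    """Return the first Nvidia GPU name reported by nvidia-smi, or None."""
    if shutil.which("nvidia-smi") is None:  # no Nvidia driver tools installed
        return None
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
    except (subprocess.CalledProcessError, OSError):
        return None
    names = [line.strip() for line in out.stdout.splitlines() if line.strip()]
    return names[0] if names else None


def pick_render_backend():
    """Choose the acceleration path the way many CUDA-only tools do."""
    gpu = detect_nvidia_gpu()
    if gpu is not None:
        return f"cuda ({gpu})"
    return "cpu"  # no CUDA available, so effects fall back to the CPU


if __name__ == "__main__":
    print("render backend:", pick_render_backend())
```

On a box with only an AMD card this prints "cpu", which is exactly the behaviour people complain about.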
 

Brackle

Old Timer
Joined
Jun 19, 2003
Messages
8,258
It's not only because of CUDA, but also market share. Nvidia has what, 80%, and AMD 20%? Because of that there are way more people who use CUDA, which is why CUDA is better supported. Nvidia wants to keep those customers happy!

That is going to be hard to overcome.
 

Bankie

2[H]4U
Joined
Jul 27, 2004
Messages
2,144
It all comes down to developers coding their tools to make exclusive use of CUDA, which only works on Nvidia hardware, or coding their GPU acceleration to only use CUDA. That's about it. The reason is that Nvidia pours money into providing CUDA support and helping this along.
AMD couldn't even be bothered to fix their own encoder for far too long...

It was basically unusable in things like OBS for so long that I was surprised when I heard that they actually made improvements to it.
 

E4g1e

Supreme [H]ardness
Joined
May 21, 2002
Messages
7,373
The problem with AMD GPUs for video editing is OpenCL (or rather AMD's support of that API). OpenCL is already at version 3.0, but GPU-wise only Nvidia and Intel support that newest version. The latest AMD GPUs' OpenCL support is stuck at version 2.0 (2.1 for the RX 6000 series), and OpenCL 2.x has been problematic for video editing software when it comes to GPGPU acceleration. As a result, some of the rendering features that would normally have gone to the GPU on an Nvidia-powered system instead go straight to the CPU (or aren't rendered at all) on an AMD-powered system.
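
If you're curious what your own card actually reports, here's a quick sketch using the pyopencl package (pip install pyopencl). The version strings printed depend entirely on your driver; the strings in the comment are just examples.

```python
"""Print the OpenCL version each platform/device reports.
Requires `pip install pyopencl` and a working OpenCL driver."""
import pyopencl as cl

for platform in cl.get_platforms():
    # e.g. "OpenCL 3.0 CUDA ..." on Nvidia, "OpenCL 2.1 AMD-APP (...)" on AMD
    print(f"Platform: {platform.name}  ->  {platform.version}")
    for device in platform.get_devices():
        print(f"  Device: {device.name}  ->  {device.version}")
```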
 

Peat Moss

Limp Gawd
Joined
Oct 6, 2009
Messages
466
Interesting. Thanks for all the replies.

Is there an Nvidia 3000 series card with more than 8 GB of vram? It's the amount of vram that partly made me interested in AMD.
 

Sycraft

Supreme [H]ardness
Joined
Nov 9, 2006
Messages
5,304
Interesting. Thanks for all the replies.

Is there an Nvidia 3000 series card with more than 8 GB of vram? It's the amount of vram that partly made me interested in AMD.
Ya there's a couple. The 3090 and Ti variants have 24GB of RAM. They are pricey but they have boatloads of VRAM if you need it. The 3080Ti is cheaper and has 12GB. If you want to go ham and get a pro card, they have Quadros with 48GB.

Also, something not mentioned by others yet is that with the 2000 series, Nvidia really upped their game on NVENC, their dedicated hardware encoder, and it is now something that is worth using. It won't get you quite as good quality as a good software encoder, but damn near, and since it runs on a dedicated part of the chip it can really speed up renders sometimes. While you probably wouldn't use it for the master file going to make a Blu-ray, it works perfectly well for previews or the like.
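
If your editor can't do that split internally, you can get the same preview-vs-master trade-off from the command line with ffmpeg. Rough sketch below: the file names and the bitrate/CRF numbers are placeholders, and it assumes an ffmpeg build with h264_nvenc enabled.

```python
"""Sketch: NVENC for a quick preview, software x264 for the final master.
Assumes ffmpeg is on PATH and was built with h264_nvenc; file names and
quality settings are placeholders."""
import subprocess

SOURCE = "timeline_export.mov"  # hypothetical intermediate export

# Fast preview: h264_nvenc runs on the GPU's dedicated encode block.
subprocess.run([
    "ffmpeg", "-y", "-i", SOURCE,
    "-c:v", "h264_nvenc", "-b:v", "8M",
    "-c:a", "aac",
    "preview.mp4",
], check=True)

# Final master: libx264 in software, slower but a bit more quality per bit.
subprocess.run([
    "ffmpeg", "-y", "-i", SOURCE,
    "-c:v", "libx264", "-crf", "18", "-preset", "slow",
    "-c:a", "aac",
    "master.mp4",
], check=True)
```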
 

WilyKit

Gawd
Joined
Dec 18, 2020
Messages
618
My guess would be the tensor cores; image and video manipulation are some of the more prominent applications for that machine learning stuff, and amd is rather well-known for running a few leagues behind nvidia in that area.
CUDA, not Tensor. I'm not aware of any video editing apps that use Tensor cores; pretty much all of them, including free ones, use CUDA.
 

OFaceSIG

2[H]4U
Joined
Aug 31, 2009
Messages
3,619
Software support in the drivers has a lot to do with it. Better support makes it easier for devs to implement their features. I can attest to NVENC. I use HandBrake from time to time to make video files out of ripped discs. I went from 10 minutes to encode an episode of TV on my 5800X to 1 minute using the NVENC encoder on my 3080. Mind you, from what I understand it's the same encoder even on the "cheaper" cards. It blew my mind.
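
If anyone wants to reproduce that kind of comparison on their own clips, here's a rough sketch that times a CPU encode against an NVENC encode of the same file. It uses ffmpeg rather than HandBrake just to keep it to one tool, and it assumes an ffmpeg build with h264_nvenc plus a placeholder input file.

```python
"""Time a software (libx264) encode vs. an NVENC encode of the same clip.
Sketch only: uses ffmpeg instead of HandBrake, assumes h264_nvenc support,
and the input file name is a placeholder."""
import subprocess
import time

SOURCE = "episode.mkv"  # hypothetical ripped TV episode


def timed_encode(video_args, outfile):
    """Run one ffmpeg encode and return how long it took in seconds."""
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, *video_args, "-c:a", "copy", outfile],
        check=True, capture_output=True,
    )
    return time.perf_counter() - start


cpu_secs = timed_encode(["-c:v", "libx264", "-preset", "medium", "-crf", "20"], "cpu.mkv")
gpu_secs = timed_encode(["-c:v", "h264_nvenc", "-b:v", "6M"], "nvenc.mkv")

print(f"libx264:    {cpu_secs:7.1f} s")
print(f"h264_nvenc: {gpu_secs:7.1f} s")
```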
 

LukeTbk

2[H]4U
Joined
Sep 10, 2020
Messages
3,603
So, 3080 > rx 6800 XT if doing video editing, right? :-/
Well, it's a very general statement; it could depend on what you do and in which software. If one means Premiere Pro:

[Puget Systems chart: Premiere Pro GPU benchmark scores]

And playing back something like 4K RED footage:

[Puget Systems chart: 4K RED playback performance]


https://www.pugetsystems.com/recomm...obe-Premiere-Pro-143/Hardware-Recommendations
 

Peat Moss

Limp Gawd
Joined
Oct 6, 2009
Messages
466
Ya there's a couple. The 3090 and Ti variants have 24GB of RAM. They are pricey but they have boatloads of VRAM if you need it. The 3080Ti is cheaper and has 12GB. If you want to go ham and get a pro card, they have Quadros with 48GB.

Also, something not mentioned by others yet is that with the 2000 series, Nvidia really upped their game on NVENC, their dedicated hardware encoder, and it is now something that is worth using. It won't get you quite as good quality as a good software encoder, but damn near, and since it runs on a dedicated part of the chip it can really speed up renders sometimes. While you probably wouldn't use it for the master file going to make a Blu-ray, it works perfectly well for previews or the like.

Thanks. I also just noticed in the Puget graphs that there is a 3060 with 12 GB.

Kind of strange that AMD scores are that low when Apple used AMD GPUs in their Macs for so many years. I guess AMD must have created drivers specifically for Final Cut Pro since FCP is pretty fast.
 

LukeTbk

2[H]4U
Joined
Sep 10, 2020
Messages
3,603
Thanks. I also just noticed in the Puget graphs that there is a 3060 with 12 GB.

Kind of strange that AMD scores are that low when Apple used AMD GPUs in their Macs for so many years. I guess AMD must have created drivers specifically for Final Cut Pro since FCP is pretty fast.
I am not sure if it is relevant, but they had extra silicon for what their GPU lacked:
https://www.digitaltrends.com/compu...s that the Afterburner,RAW video at 29.97 fps.

And the AMD cards were often the Radeon Pro W5700X or Pro Vega type (which I am not sure had significantly better support in those applications either).
 

Sycraft

Supreme [H]ardness
Joined
Nov 9, 2006
Messages
5,304
Thanks. I also just noticed in the Puget graphs that there is a 3060 with 12 GB.

Kind of strange that AMD scores are that low when Apple used AMD GPUs in their Macs for so many years. I guess AMD must have created drivers specifically for Final Cut Pro since FCP is pretty fast.
Maybe, they also may not have effectively used the acceleration. Apple doesn't always make the best choices when it comes to the hardware they put in their pro devices. They have, on numerous occasions in their history, put hardware in that was not useful in most of the software they ran but had another reason for its choice. A good example is way back in the day when they first had a dual CPU system. MacOS didn't support two CPUs in the OS scheduler, so a program itself had to support it and basically nothing did, so you paid for a more expensive system that got you nothing.
 

TrunksZero

Limp Gawd
Joined
Jul 15, 2021
Messages
324
Maybe, they also may not have effectively used the acceleration. Apple doesn't always make the best choices when it comes to the hardware they put in their pro devices. They have, on numerous occasions in their history, put hardware in that was not useful in most of the software they ran but had another reason for its choice. A good example is way back in the day when they first had a dual CPU system. MacOS didn't support two CPUs in the OS scheduler, so a program itself had to support it and basically nothing did, so you paid for a more expensive system that got you nothing.
They also used custom drivers with custom Apple-only APIs to accelerate things. So something like OpenCL only going up to version 2.x doesn't necessarily matter in Apple land, because it's not going to be accelerated using that anyway. It would probably use something like Apple's own Metal API instead. Completely different world to PC.

And I do find it kind of funny, as historically for Radeon GPUs before AMD, in the ATi era, video was a huge part of their wheelhouse. Curious how things have changed over the years.
 

funkydmunky

2[H]4U
Joined
Aug 28, 2008
Messages
3,332
My guess would be the tensor cores; image and video manipulation are some of the more prominent applications for that machine learning stuff, and amd is rather well-known for running a few leagues behind nvidia in that area.
No. It is all CUDA support.
 

pavel

Limp Gawd
Joined
Apr 8, 2014
Messages
469
LukeTbk, yeah, I am familiar with Puget Systems and their graphs.

They have compared hardware performance in DaVinci Resolve, Premiere Pro and Photoshop. Isn't it wild that the 3060 12GB outperforms the RX 6900 XT (quite significantly) in most of these programs? The RX 6900 XT is 2.5x more $$, at least in my country.

Yeah, and the points above this post about Apple Macs: they shipped with Radeon GPUs/iGPUs and run this same software, yet theoretically they aren't as good as a desktop with an Nvidia card (according to these benchmarks)?

I dunno if it makes sense to get a 3090 (24GB of VRAM!) since it's about $200 more than the 3080 12GB, and from what I read it can at times consume quite a bit of power, which might require a PSU upgrade. I have a Corsair RM850x, so I really don't want to upgrade that, since it would turn a GPU upgrade into the cost of the card plus a PSU (add $200 more for the PSU). A 3080 12GB probably slots just under the Ti version and slightly above the 10GB version? Even the 10GB versions are pretty decent in 4K tests, and any of these will slay in games (some at 4K?).

The 3060 12GB appears to have lost a bit of value nowadays, although new prices seem to be around the same as I paid. Used, I would probably have to take at least $100 off. Tough call. I probably don't need an upgraded card, but you know, new toys, and who knows what will happen later. If these 40xx cards aren't in huge demand, or the industry goes crazy again for some reason, I might be glad I upgraded before it all happens. Time to brainstorm....

Last question: is it too risky buying used? Some 3080s are about $200-$300 cheaper than new, but you have to consider/assume they were mined. Many are MSI Ventus and EVGA cards, too.
 

TheSlySyl

2[H]4U
Joined
May 30, 2018
Messages
2,414
Mining cards should be more than adequate. They're only risky if you're running em overclocked for MAXIMUM FPS BENCHMARKS. Basically run the card stock and it'll probably be fine for years.

Been looking at used 3090s myself for the VRAM.
 

pavel

Limp Gawd
Joined
Apr 8, 2014
Messages
469
Mining cards should be more than adequate. They're only risky if you're running em overclocked for MAXIMUM FPS BENCHMARKS. Basically run the card stock and it'll probably be fine for years.

Been looking at used 3090s myself for the VRAM.
Thanks for your reply. So, you think the extra VRAM is worth the extra price (well, even used - the sellers want around $200 more than the 3080/3080 Ti sellers - at least, in my area).

On Puget Systems, their benchmarks seem to show the 3090 is not significantly more effective (unless I'm interpreting the data inaccurately?), although if you factor in gaming as well it's a good boost. The question is whether it's worth the extra asking price you're going to find.
 

Sycraft

Supreme [H]ardness
Joined
Nov 9, 2006
Messages
5,304
Thanks for your reply. So, you think the extra VRAM is worth the extra price (well, even used - the sellers want around $200 more than the 3080/3080 Ti sellers - at least, in my area).

On Puget Systems, their benchmarks seem to show the 3090 is not significantly more effective (unless I'm interpreting the data inaccurately?), although if you factor in gaming as well it's a good boost. The question is whether it's worth the extra asking price you're going to find.
Probably depends on what you are doing. RAM is one of those things where it is extremely important to have more, until you have enough, and then more doesn't help at all. So if you have a project that uses, say, 8GB of VRAM, you will see no improvement moving from a card that has 10GB to one with 12GB, 24GB or more, unless the card itself is faster. But if you then tried to do a project that needed 11GB of VRAM, it would tank in performance as it had to swap to system RAM. So it kinda depends on what you are doing with it. I don't really know how much video editing tends to use, as the editing I do is pretty simple, so RAM usage is always very low.
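
One way to take the guesswork out is to watch actual VRAM use while you scrub and render a representative project. Rough sketch below: it just polls nvidia-smi every couple of seconds and tracks the peak, so it's Nvidia-only and the interval is arbitrary.

```python
"""Poll VRAM usage while you work, so you know whether a 10/12/24 GB card
would actually matter for your projects. Nvidia-only: needs nvidia-smi on PATH."""
import subprocess
import time


def vram_used_total_mib():
    """Return (used, total) VRAM in MiB for the first GPU, via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    used, total = out.stdout.strip().splitlines()[0].split(",")
    return int(used), int(total)


peak = 0
try:
    while True:  # leave this running while you edit or render, Ctrl+C to stop
        used, total = vram_used_total_mib()
        peak = max(peak, used)
        print(f"VRAM: {used} / {total} MiB (session peak {peak} MiB)")
        time.sleep(2)
except KeyboardInterrupt:
    print(f"Peak VRAM during session: {peak} MiB")
```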
 

staknhalo

2[H]4U
Joined
Jun 11, 2007
Messages
4,072
Artifacts in the output are more common with AMD encodes.

I notice it a lot on some YouTubers' videos: "Ah, they used an AMD card for this video."
 

TheSlySyl

2[H]4U
Joined
May 30, 2018
Messages
2,414
I want the VRAM 100% for productivity reasons; if you're just gaming it'll likely be wasted.
 

philb2

[H]ard|Gawd
Joined
May 26, 2021
Messages
1,178
Thanks for your reply. So, you think the extra VRAM is worth the extra price (well, even used - the sellers want around $200 more than the 3080/3080 Ti sellers - at least, in my area).

On Puget Systems, their benchmarks seem to show the 3090 is not significantly more effective (unless I'm interpreting the data inaccurately?), although if you factor in gaming as well it's a good boost. The question is whether it's worth the extra asking price you're going to find.
Just curious. Which benchmark are you using?

I know that Lightroom, one of my "daily driver" programs, doesn't use cores beyond a certain amount. I know this guy who has a Threadripper and runs Lightroom. So I wrote to him, "Dude, isn't that Threadripper kind of overkill for Lightroom?" His answer was that he also does a lot of astrophotography, for which lots of threads are needed.

It all depends on the use case.
 

bigshell

n00b
Joined
Aug 24, 2021
Messages
3
The consensus seems to be that Nvidia is better (or at least faster) than AMD for video editing. I was just curious about why that is, the mechanics of it. Is it the clock speed? The CUDA cores? The drivers? Etc.

I don't game, but would rather support AMD than the greedy green team.

So, even though AMD is not as good as Nvidia, what kind of specs should I look for in an AMD card that would provide a decent video editing experience? Number of stream processors? ROPs? Clock speed?
Nvidia GPUs tend to be favored for video editing because of their better CUDA performance and hardware-accelerated video encoding and decoding capabilities. Nvidia has also established a strong ecosystem of software and hardware solutions that are optimized for their GPUs. Additionally, the company's long history in the professional graphics market and close relationships with major video editing software vendors have led to more robust support for Nvidia hardware.
It depends on what you want to use Movavi Video Editor for and the specifications of your NVIDIA computer (https://www.movavi.com/). Movavi Video Editor is a basic video editing software that can handle simple tasks like trimming and splitting videos. If your NVIDIA computer has a good amount of RAM and a dedicated graphics card, it should be able to run the software smoothly. However, if you want to perform more demanding video editing tasks, you might consider using a more powerful video editing software that is optimized for NVIDIA hardware.
 

UnknownSouljer

Supreme [H]ardness
Joined
Sep 24, 2001
Messages
7,803
New Puget bench and guidance is up. The 7900XTX performs better than the 4090 specifically in DaVinci Resolve, although only just. There is no benefit to spending $600 over a 7900XTX to buy a 4090, or $200 over to buy a 4080, for Resolve.
In Premiere and After Effects both cards are similar, but the 4090 is slightly ahead. My specific commentary on that is that it's probably not worth the $600 premium for the small increase in Premiere or AE performance on a 4090 vs a 7900XTX; it definitely falls inside the margin.

https://www.pugetsystems.com/labs/articles/amd-radeon-rx-7900-xtx-24gb-content-creation-review/

For things like 3D rendering though (Blender, Unreal), the 4090 destroys the 7900XTX. In those cases it absolutely does make sense to spend the extra $600, provided of course that the money is there to do so. For a cost closer to the 7900XTX, the 4080 at $1200 is still far ahead in both of those apps. This is of course owing to the CUDA implementation, which has already been thoroughly discussed.
 

philb2

[H]ard|Gawd
Joined
May 26, 2021
Messages
1,178
FWIW, Adobe seems to favor Nvidia over AMD for its photo editing programs. That's why I got a 3060 Ti (at a time when prices were crazy-high).
 