Why is Nvidia better than AMD for video editing?

Peat Moss

The consensus seems to be that Nvidia is better (or at least faster) than AMD for video editing. I was just curious about why that is? The mechanics of it. Is it the clock speed? The CUDA cores? The drivers? Etc.

I don't game, but would rather support AMD than the greedy green team.

So, even though AMD is not as good as Nvidia, what kind of specs should I look for in an AMD card that would provide a decent video editing experience? Number of stream processors? ROPs? Clock speed?
 
The consensus seems to be that Nvidia is better (or at least faster) than AMD for video editing. I was just curious about why that is? The mechanics of it. Is it the clock speed? The CUDA cores? The drivers? Etc.

I don't game, but would rather support AMD than the greedy green team.

So, even though AMD is not as good as Nvidia, what kind of specs should I look for in an AMD card that would provide a decent video editing experience? Number of stream processors? ROPs? Clock speed?
Nvidia has a long history of supporting their software well. AMD not so much.
 
My guess would be the tensor cores; image and video manipulation are some of the more prominent applications for that machine learning stuff, and AMD is rather well-known for running a few leagues behind Nvidia in that area.
 
It all comes down to developers coding tools to make exclusive use of CUDA programming that will only work on nVidia hardware, or coding GPU acceleration to only use CUDA. That's about it. The reason for that is nVidia pours money into helping this along and providing CUDA support.
 
It's not only because of CUDA, but also market share. Nvidia has what, 80%, AMD 20%? Because of that there are way more people who use CUDA... thus the reason why CUDA is better supported. Nvidia wants to keep those customers happy!

That is going to be hard to overcome.
 
It all comes down to developers coding tools to make exclusive use of CUDA programming that will only work on nVidia hardware, or coding GPU acceleration to only use CUDA. That's about it. The reason for that is nVidia pours money into helping this along and providing CUDA support.
AMD couldn't even be bothered to fix their own encoder engine for far too long...

It was basically unusable in things like OBS for so long that I was surprised when I heard they actually made improvements to it.
 
The problem with AMD GPUs for video editing is OpenCL (or rather AMD's support of that API). OpenCL is already at version 3.0, but only Nvidia and Intel (GPU-wise) support that newest OpenCL version. The latest AMD GPUs' OpenCL support is stuck on version 2.0 (2.1 for the RX 6000 series), and OpenCL 2.x has been problematic for video editing software when it comes to GPGPU acceleration. As a result, some of the rendering features that would normally have gone to the GPU on an Nvidia-powered system instead went straight to the CPU (or were not rendered at all) on an AMD-powered system.
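If anyone wants to check what their own card actually reports, here is a minimal sketch using the pyopencl Python package (my assumption is that you have an OpenCL runtime installed and pyopencl from pip); it just prints the OpenCL version string each GPU driver exposes:

[CODE]
# Minimal sketch: list the OpenCL version each installed driver reports.
# Assumes pyopencl is installed (pip install pyopencl) and at least one
# OpenCL runtime (AMD, Nvidia or Intel) is present on the system.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name}")
    for device in platform.get_devices():
        # device.version is the driver-reported string,
        # e.g. "OpenCL 3.0 CUDA" or "OpenCL 2.0 AMD-APP (...)"
        print(f"  {device.name}: {device.version}")
[/CODE]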
 
Interesting. Thanks for all the replies.

Is there an Nvidia 3000 series card with more than 8 GB of vram? It's the amount of vram that partly made me interested in AMD.
 
Interesting. Thanks for all the replies.

Is there an Nvidia 3000 series card with more than 8 GB of vram? It's the amount of vram that partly made me interested in AMD.
Ya there's a couple. The 3090 and Ti variants have 24GB of RAM. They are pricey but they have boatloads of VRAM if you need it. The 3080Ti is cheaper and has 12GB. If you want to go ham and get a pro card, they have Quadros with 48GB.

Also something not mentioned by others yet is that with the 2000 series, nVidia really upped their game on NVENC, their dedicated hardware encoder, and it is now something that is worth using. It won't get you quite as good quality as a good software encoder, but damn near, and since it runs on a dedicated part of the chip it can really speed up renders sometimes. While you probably wouldn't use it for the master file going to make a Blu-ray, it works perfectly well for previews or the like.
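If anyone wants to try NVENC outside an editor, here is a rough sketch of the idea using ffmpeg's h264_nvenc encoder from Python (the file names are placeholders, and it assumes an ffmpeg build with NVENC support on the PATH):

[CODE]
# Rough sketch: spit out a fast NVENC preview with ffmpeg's h264_nvenc encoder.
# Assumes ffmpeg was built with NVENC support and is on the PATH;
# "master.mov" and "preview.mp4" are placeholder file names.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "master.mov",    # source file (placeholder)
        "-c:v", "h264_nvenc",  # dedicated NVENC hardware encoder
        "-b:v", "10M",         # bitrate good enough for previews, not a Blu-ray master
        "-c:a", "copy",        # pass the audio through untouched
        "preview.mp4",
    ],
    check=True,
)
[/CODE]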
 
My guess would be the tensor cores; image and video manipulation are some of the more prominent applications for that machine learning stuff, and AMD is rather well-known for running a few leagues behind Nvidia in that area.
CUDA, not Tensor. Not aware of any video editing apps that use Tensor cores; pretty much all of them, including free ones, use CUDA.
 
Software support in the drivers has a lot to do with it. Easier support makes it easier for devs to implement their features. I can attest to NVENC. I use HandBrake from time to time to make video files out of ripped discs. I went from 10 minutes on my 5800X to encode an episode of TV to 1 minute using the NVENC encoder on my 3080. Mind you, from what I understand it's the same encoder even on the "cheaper" cards. It blew my mind.
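For anyone curious how that kind of speedup gets measured, here is a quick sketch timing a software x264 pass against an NVENC pass with HandBrakeCLI (assumes HandBrakeCLI is on the PATH; "episode.mkv" is a placeholder file name):

[CODE]
# Quick sketch: time a CPU (x264) encode against an NVENC encode in HandBrakeCLI.
# Assumes HandBrakeCLI is installed and on the PATH; "episode.mkv" is a placeholder.
import subprocess
import time

def encode(encoder: str, output: str) -> float:
    start = time.perf_counter()
    subprocess.run(
        ["HandBrakeCLI", "-i", "episode.mkv", "-o", output,
         "--encoder", encoder, "--quality", "22"],
        check=True,
    )
    return time.perf_counter() - start

cpu_seconds = encode("x264", "episode_x264.mp4")        # software encode on the CPU
gpu_seconds = encode("nvenc_h264", "episode_nvenc.mp4") # hardware encode on NVENC
print(f"x264: {cpu_seconds:.0f}s, NVENC: {gpu_seconds:.0f}s")
[/CODE]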
 
So, 3080 > RX 6800 XT if doing video editing, right? :-/
Well, it is a very general statement; it could depend on what you do and in which software. If one means Premiere Pro:

[Puget Systems Premiere Pro GPU benchmark chart]

And playing something like 4K RED footage:

[Puget Systems 4K RED playback benchmark chart]


https://www.pugetsystems.com/recomm...obe-Premiere-Pro-143/Hardware-Recommendations
 
Ya there's a couple. The 3090 and Ti variants have 24GB of RAM. They are pricey but they have boatloads of VRAM if you need it. The 3080Ti is cheaper and has 12GB. If you want to go ham and get a pro card, they have Quadros with 48GB.

Also something not mentioned by others yet is that with the 2000 series, nVidia really upped their game on NVENC, their dedicated hardware encoder, and it is now something that is worth using. It won't get you quite as good quality as a good software encoder, but damn near, and since it runs on a dedicated part of the chip it can really speed up renders sometimes. While you probably wouldn't use it for the master file going to make a Blu-ray, it works perfectly well for previews or the like.

Thanks. I also just noticed in the Puget graphs that there is a 3060 with 12 GB.

Kind of strange that AMD scores are that low when Apple used AMD GPUs in their Macs for so many years. I guess AMD must have created drivers specifically for Final Cut Pro since FCP is pretty fast.
 
Thanks. I also just noticed in the Puget graphs that there is a 3060 with 12 GB.

Kind of strange that AMD scores are that low when Apple used AMD GPUs in their Macs for so many years. I guess AMD must have created drivers specifically for Final Cut Pro since FCP is pretty fast.
I am not sure if it is relevant, but they had extra silicon for what their GPU lacked:
https://www.digitaltrends.com/compu...s that the Afterburner,RAW video at 29.97 fps.

And the AMD cards were often the AMD Radeon Pro W5700X or Pro Vega type (which I am not sure had significantly better support in those applications either).
 
Thanks. I also just noticed in the Puget graphs that there is a 3060 with 12 GB.

Kind of strange that AMD scores are that low when Apple used AMD GPUs in their Macs for so many years. I guess AMD must have created drivers specifically for Final Cut Pro since FCP is pretty fast.
Maybe, they also may not have effectively used the acceleration. Apple doesn't always make the best choices when it comes to the hardware they put in their pro devices. They have, on numerous occasions in their history, put hardware in that was not useful in most of the software they ran but had another reason for its choice. A good example is way back in the day when they first had a dual CPU system. MacOS didn't support two CPUs in the OS scheduler, so a program itself had to support it and basically nothing did, so you paid for a more expensive system that got you nothing.
 
Maybe, they also may not have effectively used the acceleration. Apple doesn't always make the best choices when it comes to the hardware they put in their pro devices. They have, on numerous occasions in their history, put hardware in that was not useful in most of the software they ran but had another reason for its choice. A good example is way back in the day when they first had a dual CPU system. MacOS didn't support two CPUs in the OS scheduler, so a program itself had to support it and basically nothing did, so you paid for a more expensive system that got you nothing.
They also used custom drivers with custom Apple-only APIs to accelerate things. So something like OpenCL only going up to version 2.x doesn't necessarily matter in Apple land, because it's not going to be accelerated using that anyway. It would probably use something like Apple's own Metal API instead. Completely different world from the PC.

And I do find it kind of funny, as historically, for Radeon GPUs before AMD - so the ATi era - video was a huge part of their wheelhouse. Curious how things have changed over the years.
 
My guess would be the tensor cores; image and video manipulation are some of the more prominent applications for that machine learning stuff, and AMD is rather well-known for running a few leagues behind Nvidia in that area.
No. It is all CUDA support.
 
LukeTbk, yeah, I am familiar with Puget Systems and their graphs.

They have compared hardware and performance in DaVinci Resolve, Premiere Pro and Photoshop: isn't it wild that the 3060 12GB outperforms the RX 6900 XT (quite significantly) in most of these programs?!? The RX 6900 XT is 2.5x more $$ - at least in my country.

Yeah, and the points above this post - about Apple Macs using Radeon GPUs/iGPUs with this same software - yet theoretically not being as good as a desktop with an Nvidia card (according to these benchmarks?!?).

I dunno if it makes sense to get a 3090 (24GB of VRAM!) since it's about $200 more than the 3080 12GB, and from what I read it supposedly, at times, consumes quite a bit of power, which might require a PSU upgrade. I have a Corsair RM850x, so I really don't want to upgrade that, since it would make a GPU upgrade the cost of the card PLUS a PSU (add $200 more for the PSU). A 3080 12GB probably slots just under the Ti version and slightly above the 10GB version? Even the 10GB versions are pretty decent in 4K tests - and any of these will slay in games (some at 4K?).

The 3060 12GB appears to have lost a bit of value nowadays - although new, the prices seem to be around the same I paid. But used, I would probably have to take at least $100 off. Tough call. I probably don't need a new (upgraded) card but, you know, new toys - and who knows what will happen later. If these 40xx cards aren't in huge demand - or the industry goes crazy again for some reason - I might be glad I upgraded before it all happens? Time to brainstorm...

Last question: is it too risky buying used? Some 3080s are about $200-$300 cheaper than new, but you have to consider/assume they were mined. Many are the MSI Ventus and EVGA cards, too.
 
Mining cards should be more than adequate. They're only risky if you're running em overclocked for MAXIMUM FPS BENCHMARKS. Basically run the card stock and it'll probably be fine for years.

Been looking at used 3090s myself for the VRAM.
 
Mining cards should be more than adequate. They're only risky if you're running em overclocked for MAXIMUM FPS BENCHMARKS. Basically run the card stock and it'll probably be fine for years.

Been looking at used 3090s myself for the VRAM.
Thanks for your reply. So, you think the extra VRAM is worth the extra price (well, even used - the sellers want around $200 more than the 3080/3080 Ti sellers - at least, in my area).

On Puget Systems, their benchmarks seem to show the 3090 is not significantly more effective (unless I'm interpreting the data inaccurately?) - although, if you factor in gaming as well - it's a good boost - the question is whether it's worth the extra ask $ you are going to find?
 
Thanks for your reply. So, you think the extra VRAM is worth the extra price (well, even used - the sellers want around $200 more than the 3080/3080 Ti sellers - at least, in my area).

On Puget Systems, their benchmarks seem to show the 3090 is not significantly more effective (unless I'm interpreting the data inaccurately?) - although, if you factor in gaming as well - it's a good boost - the question is whether it's worth the extra ask $ you are going to find?
Probably depends on what you are doing. RAM is one of those things where it is extremely important to have more, until you have enough; then more doesn't help at all. So if you have a project that uses, say, 8GB of VRAM, you will see no improvement moving to 12GB, 24GB or more from a card that has 10GB, unless the card is faster. But if you then tried to do a project that needed 11GB of VRAM, it would tank in performance as it had to swap to system RAM. So it kinda depends on what you are doing with it. I don't really know how much video editing tends to use, as the editing I do is pretty simple so RAM usage is always very low.
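One way to find out how much your own projects actually use is to just watch VRAM while you scrub or render. Here is a small sketch that polls nvidia-smi (assumes an Nvidia card with the driver's nvidia-smi tool on the PATH):

[CODE]
# Small sketch: poll VRAM usage via nvidia-smi while a render or scrub is running.
# Assumes an Nvidia GPU and that nvidia-smi is on the PATH; reads the first GPU only.
import subprocess
import time

for _ in range(10):  # sample ten times, one second apart
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    used, total = map(int, result.stdout.strip().splitlines()[0].split(", "))
    print(f"VRAM: {used} / {total} MiB")
    time.sleep(1)
[/CODE]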
 
Artifacts in output on AMD encodes are more common

I notice it a lot on some YouTuber videos "Ah, they used an AMD card for this video"
 
I want the VRAM 100% for productivity reasons, if you're just gaming it'll likely be wasted.
 
Thanks for your reply. So, you think the extra VRAM is worth the extra price (well, even used - the sellers want around $200 more than the 3080/3080 Ti sellers - at least, in my area).

On Puget Systems, their benchmarks seem to show the 3090 is not significantly more effective (unless I'm interpreting the data inaccurately?) - although, if you factor in gaming as well - it's a good boost - the question is whether it's worth the extra ask $ you are going to find?
Just curious. Which benchmark are you using?

I know that Lightroom, one of my "daily driver" programs, doesn't use cores beyond a certain amount. I know this guy who has a Threadripper and runs Lightroom. So I wrote to him, "Dude, isn't that Threadripper kind of overkill for Lightroom?" His answer was that he also does a lot with astrophotography, for which lots of threads are needed.

It all depends on the use case.
 
The consensus seems to be that Nvidia is better (or at least faster) than AMD for video editing. I was just curious about why that is? The mechanics of it. Is it the clock speed? The CUDA cores? The drivers? Etc.

I don't game, but would rather support AMD than the greedy green team.

So, even though AMD is not as good as Nvidia, what kind of specs should I look for in an AMD card that would provide a decent video editing experience? Number of stream processors? ROPs? Clock speed?
Nvidia GPUs tend to be favored for video editing because of their better CUDA performance and hardware-accelerated video encoding and decoding capabilities. Nvidia has also established a strong ecosystem of software and hardware solutions that are optimized for their GPUs. Additionally, the company's long history in the professional graphics market and close relationships with major video editing software vendors have led to more robust support for Nvidia hardware.
It depends on what you want to use Movavi Video Editor for and the specifications of your NVIDIA computer (https://www.movavi.com/). Movavi Video Editor is a basic video editing software that can handle simple tasks like trimming and splitting videos. If your NVIDIA computer has a good amount of RAM and a dedicated graphics card, it should be able to run the software smoothly. However, if you want to perform more demanding video editing tasks, you might consider using a more powerful video editing software that is optimized for NVIDIA hardware.
 
The new Puget bench and guidance is up. The 7900XTX performs better than the 4090 specifically in DaVinci Resolve, though only just. There is no benefit to spending $600 over a 7900XTX to buy a 4090, or $200 over to buy a 4080, for Resolve.
In Premiere and After Effects both cards are similar, but the 4090 is slightly ahead. My specific commentary on that is that it's probably not worth the $600 premium for the small increase in Premiere or AE performance on a 4090 vs a 7900XTX. It definitely falls inside the margin of error.

https://www.pugetsystems.com/labs/articles/amd-radeon-rx-7900-xtx-24gb-content-creation-review/

For things like 3D rendering though (Blender, Unreal), the 4090 destroys the 7900XTX. In those cases it absolutely does make sense to spend the extra $600, provided of course that the money is there to do so. For a cost closer to the 7900XTX, the 4080 at $1200 is still far ahead in both of those apps. This is of course owing to the CUDA implementation, which has already been thoroughly discussed.
 
FWIW, Adobe seems to favor NVidia over AMD for its photo editing programs. That's why I got a 3060 Ti (at a time when prices were crazy-high.)
 
The new Puget bench and guidance is up. The 7900XTX performs better than the 4090 specifically in DaVinci Resolve, though only just. There is no benefit to spending $600 over a 7900XTX to buy a 4090, or $200 over to buy a 4080, for Resolve.
In Premiere and After Effects both cards are similar, but the 4090 is slightly ahead. My specific commentary on that is that it's probably not worth the $600 premium for the small increase in Premiere or AE performance on a 4090 vs a 7900XTX. It definitely falls inside the margin of error.

https://www.pugetsystems.com/labs/articles/amd-radeon-rx-7900-xtx-24gb-content-creation-review/

For things like 3D rendering though (Blender, Unreal), the 4090 destroys the 7900XTX. In those cases it absolutely does make sense to spend the extra $600, provided of course that the money is there to do so. For a cost closer to the 7900XTX, the 4080 at $1200 is still far ahead in both of those apps. This is of course owing to the CUDA implementation, which has already been thoroughly discussed.
Have you seen Techgage's recent benchmarks/tests? The RX 7900 series *seems* a lot better - yes, the usual Blender Cycles etc. still shows the 7900 series behind, but that's because Nvidia is using OptiX and AMD's HIP-RT still isn't functioning. There's a bit of ground to catch up, but even if it ends up only somewhat effective, the 7900 series could be a half-decent card for Blender - that is, if one needs AMD for something, e.g. Linux etc. The benefit, too, of those cards is 20+ GB of VRAM - and you only get 20+ GB of VRAM with Nvidia if you choose either a 3090 (24GB), where you almost have to go used if you don't have one, or a 4090 (flagship card - most expensive - 24GB). A 4080 provides 16GB, which is pretty good, although it's priced higher than the 7900 XTX. The VRAM factor might be more applicable to video editing - so this is mostly just an observation. The 7900 XTX seems to perform adequately in DaVinci Resolve?
 
and you only get 20+ GB of VRAM with Nvidia
Or SLI/NVLink is available, and/or a Titan/Quadro/A-series card.

People have NVLinked 2080 Tis (as many pairs as you want) for a 22GB VRAM system for Blender, for example, in the past. Used headless 24GB Quadro RTX 6000s seem possible to find for as low as $1000 now (not sure if there is something wrong with them or a failed AI startup is liquidating already); combine two and you get 48GB.
 
Or SLI/NVLink is available, and/or a Titan/Quadro/A-series card.

People have NVLinked 2080 Tis (as many pairs as you want) for a 22GB VRAM system for Blender, for example, in the past. Used headless 24GB Quadro RTX 6000s seem possible to find for as low as $1000 now (not sure if there is something wrong with them or a failed AI startup is liquidating already); combine two and you get 48GB.
Unless something has changed in recent years, SLI doesn't work that way. You don't get double the VRAM, the data is mirrored on both cards so if you're using 11GB cards you get 11GB of VRAM. If you're using 24GB cards, you get 24GB of vram
 
Unless something has changed in recent years, SLI doesn't work that way. You don't get double the VRAM, the data is mirrored on both cards so if you're using 11GB cards you get 11GB of VRAM. If you're using 24GB cards, you get 24GB of vram
That's what happens if you put many cards together with no NVLink (and in many programs it will be limited to the card with the lowest memory, as it will try to copy everything onto every card exactly the same). The point of NVLink was to be able to present one virtual card to the machine that shares the memory:

https://www.pugetsystems.com/suppor...dro-and-geforce-rtx-cards-in-windows-10-1266/

CUDA offers NVLink-pooled memory allocation.

How well it works will be heavily dependent on the application; for example:
https://home.otoy.com/render/octane-render/faqs/
e.g., If you combine a 6 GB Titan Black with a 4 GB 980 for rendering, the rendering speed increases, but the memory size available for the rendering is effectively 4 GB.
However, OctaneRender 2018.1 and higher support NVIDIA NVLink, which allows sharing data between two GPUs via an NVLink Bridge, on supported RTX GPU configurations.

Maybe it did not work with OptiX specifically and only with some parts like Cycles; it appeared in late 2.x:
https://wiki.blender.org/wiki/Reference/Release_Notes/2.90/Cycles
  • NVLink support for CUDA and OptiX. When enabled in the Cycles device preferences, GPUs connected with an NVLink bridge will share memory to support rendering bigger scenes.
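If you want to check from code whether two cards in a box can actually reach each other's memory (the prerequisite for NVLink/peer-to-peer pooling), here is a small sketch using PyTorch; it assumes a machine with two Nvidia GPUs and a CUDA build of PyTorch installed:

[CODE]
# Small sketch: check whether GPU 0 can access GPU 1's memory directly
# (needed for NVLink / peer-to-peer pooling). Assumes two Nvidia GPUs
# and a CUDA-enabled PyTorch install.
import torch

if torch.cuda.device_count() >= 2:
    p2p = torch.cuda.can_device_access_peer(0, 1)
    print(f"GPU 0 -> GPU 1 peer access: {p2p}")
else:
    print("Fewer than two CUDA devices found.")
[/CODE]

Note this only tells you peer access is possible (over NVLink or PCIe); whether an application actually pools the memory is still up to the application, as the Octane and Blender examples above show.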
 
That's what happens if you put many cards together with no NVLink (and in many programs it will be limited to the card with the lowest memory, as it will try to copy everything onto every card exactly the same). The point of NVLink was to be able to present one virtual card to the machine that shares the memory:

https://www.pugetsystems.com/suppor...dro-and-geforce-rtx-cards-in-windows-10-1266/

CUDA offers NVLink-pooled memory allocation.

How well it works will be heavily dependent on the application; for example:
https://home.otoy.com/render/octane-render/faqs/
e.g., If you combine a 6 GB Titan Black with a 4 GB 980 for rendering, the rendering speed increases, but the memory size available for the rendering is effectively 4 GB.
However, OctaneRender 2018.1 and higher support NVIDIA NVLink, which allows sharing data between two GPUs via an NVLink Bridge, on supported RTX GPU configurations.

Maybe it did not work with OptiX specifically and only with some parts like Cycles; it appeared in late 2.x:
https://wiki.blender.org/wiki/Reference/Release_Notes/2.90/Cycles
  • NVLink support for CUDA and OptiX. When enabled in the Cycles device preferences, GPUs connected with an NVLink bridge will share memory to support rendering bigger scenes.

Are there any applications that can actually make use of this? If not, it's pretty useless. It's like the promise of DX12 and support for mismatched multi-GPUs. It's one thing if the tech is capable; it's another if anyone is actually willing to go through the trouble to code for it.
 
Are there any applications that can actually make use of this? If not, it's pretty useless. It's like the promise of DX12 and support for mismatched multi-GPUs. It's one thing if the tech is capable; it's another if anyone is actually willing to go through the trouble to code for it.
Do you mean outside the two applications in the message you quoted? PyTorch model training, apparently; according to ChatGPT, some examples would be:

  • Adobe Premiere Pro
  • Autodesk Arnold
  • Blender
  • Chaos Group V-Ray
  • Dassault Systèmes CATIA
  • Dassault Systèmes SOLIDWORKS Visualize
  • Luxion KeyShot Pro
  • OctaneRender
  • Redshift
A quick Google of some of those suggests it is right.

DaVinci Resolve is another one:
https://www.pugetsystems.com/labs/a...ce-in-DaVinci-Resolve-17-0-2079/#Introduction
 