GPU for Rendering Video

vidoprof

So I currently have a GTX 680 (not sure of the clock speed or anything) with 1GB of RAM.

I am a photographer and was only doing photo work, but have now ventured into the world of video. Rendering takes a long time, but I am not sure if it's the GPU or the processor that is the problem.

I am currently running:
an i5 3570K
32GB RAM
250GB SSD (for C:)
4TB WD Black drive (working projects)
GTX 680

How do I find out what the bottleneck is, and if it is the CPU, should I get a whole new rig or just upgrade the CPU if possible?

Thanks for any and all help.


Thanks
Ryan G
 
Ctrl-Alt-Del (Task Manager) will let you see CPU usage. A program like MSI Afterburner will show you GPU usage (and CPU, actually).

Some programs don't use the GPU at all; for example, I use Adobe Premiere Elements, which is CPU-only.
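
If you'd rather log the numbers than eyeball them, here's a rough Python sketch (my own, not from any of those tools) that samples CPU load with psutil and GPU load via nvidia-smi while an export runs; whichever one sits pinned near 100% the whole time is your bottleneck. Assumes psutil is installed and the NVIDIA driver's nvidia-smi is on the PATH.

# Sample CPU and GPU utilisation roughly once per second during an export.
# Requires: pip install psutil, plus nvidia-smi from the NVIDIA driver.
# Ctrl+C to stop.
import subprocess
import psutil

while True:
    cpu = psutil.cpu_percent(interval=1)   # averaged over all cores for 1 second
    gpu = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    ).stdout.strip()                        # e.g. "97"
    print(f"CPU: {cpu:5.1f}%   GPU: {gpu}%")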
 
Rendering takes a long time, but I am not sure if it's the GPU or the processor that is the problem.
What software are you using? Are you rendering video with a 3D graphics program (and if so, which renderer), a 2D compositor, or do you mean encoding video (and if so, which encoder)? Not all software can be GPU-accelerated yet, and it may not always be the best option (e.g. x264's very minimal GPU acceleration).
 
I am using either the GoPro Studio software or Adobe Premiere (the full version), and Adobe After Effects as well.

The export (render) is what is taking an insanely long time.

Thanks for the Afterburner suggestion; I will be downloading that and seeing what can be done. Any other tips or ideas, I am all ears.



Thanks

Ryan G
 
I am using either the GoPro Studio software or Adobe Premiere (the full version), and Adobe After Effects as well.

The export (render) is what is taking an insanely long time.

Thanks for the Afterburner suggestion; I will be downloading that and seeing what can be done. Any other tips or ideas, I am all ears.



Thanks

Ryan G

The full Adobe Premiere uses CUDA cores, so having a 680 is a good thing. Make sure your renderer is set to CUDA. I read once that you may have to download the NVIDIA CUDA driver to be able to select it in Premiere and After Effects. Note: this information may be outdated and I don't own the full programs :)
 
What software are you using? Are you rendering video with a 3D graphics program (and if so, which renderer), a 2D compositor, or do you mean encoding video (and if so, which encoder)? Not all software can be GPU-accelerated yet, and it may not always be the best option (e.g. x264's very minimal GPU acceleration).


Also note that if you're encoding video on a GPU the graphic fidelity is very poor compared to CPU encoding. The benefit is speed; GPUs can tear through both rendering and encoding video/graphics. Rendering graphics isn't really an issue, but encoding the video on the GPU is definitely a problem that won't be going away anytime soon.
 
So would I benefit from spending the $300-ish on the newest 970, or something higher-end like that?

Thanks
Ryan G
 
A program like MSI Afterburner will show you GPU usage

Probably too late to add, but GPU-Z is lighter weight, and unless you want to OC, it's probably a little easier for viewing stats and details.
 
Ctrl+Shift+Esc will give you direct access to the Task Manager. You can check the CPU usage there ;)
 
So would I benefit from spending the $300-ish on the newest 970, or something higher-end like that?

Thanks
Ryan G

If you have no plans of gaming on that computer, you should invest in a workstation-class card, something like an NVIDIA Quadro or AMD FirePro. They will be better suited for your photo editing and video editing work.
 
Also note that if you're encoding video on a GPU the graphic fidelity is very poor compared to CPU encoding. The benefit is speed; GPUs can tear through both rendering and encoding video/graphics. Rendering graphics isn't really an issue, but encoding the video on the GPU is definitely a problem that won't be going away anytime soon.
Oh boy yes. There's encoding with x264 using CRF, and then there's just mucking about.
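
For anyone following along, a CRF encode is basically a one-liner; here's a rough sketch of what I mean (the filenames and CRF value are just examples), driving ffmpeg's libx264 from Python:

# Re-encode a lossless master with x264 in CRF mode via ffmpeg.
# CRF 18 is near-transparent for most footage; raise it for smaller files.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "master_export.mov",   # lossless export out of the NLE
    "-c:v", "libx264", "-crf", "18",       # constant rate factor: quality-targeted
    "-preset", "slow",                     # slower preset = better compression per bit
    "-c:a", "copy",                        # pass the audio through untouched
    "delivery.mp4",
], check=True)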
 
To be blunt, rendering with CUDA in supported software can be a hell of a lot faster if you have something like a Tesla or a Titan: lots of simple workers that each finish their piece of the task quickly and then grab the next one. With rendering you can break the screen down into BSP regions or buckets, depending on whether they can be accessed in real time or are locked once rendering starts. You can also set objects to render to separate layers and control how shadows interact with other objects in the scene.

H.264 is nice, but it can introduce artifacting if not used correctly, and the learning curve is steep. The GTX 680 is the lowest card I'd use, and it will run hot over time. Really, the cost of a Core i7 4770K makes more sense if you are not making a lot of money off the card or writing it off on your taxes. I'm running a Titan (the plain one) mostly because I load scene files with a billion polygons and don't want the computer to crash when I separate the files... but really the codec matters more than the hardware. It absolutely needs to be configured correctly, or you may as well not bother rendering; it is going to look like crap, and you may not notice until the render can no longer find the files or settings it references. Always move a copy to a different machine that does not have access to the first one to check the output. If you don't know the correct settings, you can find them by trial and error, but I find it is better to see what the rest of the footage was done with.
 
Also note that if you're encoding video on a GPU the graphic fidelity is very poor compared to CPU encoding. The benefit is speed; GPUs can tear through both rendering and encoding video/graphics. Rendering graphics isn't really an issue, but encoding the video on the GPU is definitely a problem that won't be going away anytime soon.

I wonder why the fidelity is poor on a GPU. What comes to mind is that maybe FP64 arithmetic is used on the CPU for good-quality encoding? In that case, a high-end graphics card like the Titan Black, which has good FP64 support, could offer good quality? Or is it due to some other problem?

BTW, before going to the workstation-class graphics cards, one can check out the Titan Black, which is much cheaper but essentially the same. Just no certified drivers, no professional support, and maybe no compatibility with some extreme high-end software.
 
GPU encoding uses a fixed-function block, which is geared towards encoding quickly and with minimal power usage (because why else would you implement a fixed-function block). This means that image quality sits at a fixed level or a fixed set of discrete steps, and that the card's performance is otherwise unrelated to encoding performance.

There is also GPU acceleration of software encoders, but encoding isn't as easily parallelisable as decoding. Encoding efficiently requires lots of cross-checking between adjacent blocks (both spatially and temporally), which means you can't easily split things into nicely unrelated chunks. Everything is interrelated, so it ends up being faster with one or two fast cores than trying to spread things over a lot of slow cores and having to wait for them all to finish and share work between each step.
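
A toy sketch of the dependency problem (grossly simplified, ignoring motion estimation entirely): each block is predicted from its already-reconstructed left and top neighbours, so block (y, x) can't be encoded until those neighbours are finished, and you can't just hand each core an arbitrary slice of the frame.

# Toy intra-prediction loop: every block depends on neighbours that must
# already be reconstructed, which is what kills naive parallelisation.
import numpy as np

blocks = np.random.randint(0, 256, size=(8, 8))   # pretend 8x8 grid of block values
recon = np.zeros_like(blocks)

for y in range(8):
    for x in range(8):
        left = recon[y, x - 1] if x > 0 else 128    # needs the finished left neighbour
        top = recon[y - 1, x] if y > 0 else 128     # needs the finished top neighbour
        pred = (left + top) // 2
        residual = blocks[y, x] - pred              # only the residual would be coded
        recon[y, x] = pred + residual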
 
Also note that if you're encoding video on a GPU the graphic fidelity is very poor compared to CPU encoding. The benefit is speed; GPUs can tear through both rendering and encoding video/graphics. Rendering graphics isn't really an issue, but encoding the video on the GPU is definitely a problem that won't be going away anytime soon.

So wait... using the GPU makes the video quality worse? I've never heard that.

I assumed it was faster because you were throwing more processing cores at it... not because it was reducing quality.

Every article I've read says using the GPU... particularly using CUDA with Adobe products... is the way to go.
 
If you're doing a lot of work with After Effects, CUDA will certainly help in that regard. For straight-up rendering of captured video, you're better off getting lots of cores that are clocked fast. We built a video render machine for our company and wish we had gotten dual hexa-core Xeon processors (Intel now produces 12-core processors in a single package) in addition to the Quadro K5000 we did put in there.

In regard to configuring Adobe to use your video card for rendering (and this works for AME as well), make sure to place your video card's device name in supported_cuda_cards.txt (for NVIDIA) or supported_opencl_cards.txt (for AMD). This is for pre-CC. In CC you should be able to enable it as a preference setting (but we don't have CC, so I can't tell you for sure).
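
If it helps, here's a quick sketch of doing that edit programmatically for a pre-CC install. The install path, filename, and device name below are examples only, so check your own install directory for the exact file your version uses, and run it elevated since Program Files is normally write-protected.

# Append the GPU's device name to Adobe's supported-cards list (pre-CC versions).
# Path, filename, and device name are assumptions: verify them against your install.
path = r"C:\Program Files\Adobe\Adobe Premiere Pro CS6\supported_cuda_cards.txt"
card = "GeForce GTX 680"   # must match the name the NVIDIA driver reports

with open(path, "r+") as f:
    existing = f.read().splitlines()
    if card not in existing:
        f.write("\n" + card)   # read() left us at end-of-file, so this appends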
 
OP, whatever you do, DO NOT GO WORKSTATION-CLASS/QUADRO GPU.

Read here, do your research thoroughly, and be extremely, super-duper sure before pulling the trigger on any Quadro. You're going to be throwing a lot of money away on piss-poor performance.

Quadro or GeForce Video Cards

The only reason to use a Quadro video card with Adobe programs is if you are using 10-bit source material and a 10-bit monitor like the HP DreamColor or similar. Otherwise, the Quadros are underpowered and overpriced.

Let's take a look at some of the Quadro cards.

The Quadro 2000 - This video card only has 192 CUDA cores and a 128-bit memory interface. Basically, it is just a GTS 450 with a slower clock speed; in other words, the GTS 450 would be slightly faster. Also, the GTX 550 Ti, with its 192 CUDA cores and 192-bit memory interface, would be faster than the Quadro 2000, due to the wider memory interface and slightly faster clock speed.

The Quadro FX 3800 - This card is now three generations old and is based on the GTX 260, but with only a 256-bit memory interface. The GTX 260 has a wider 448-bit memory interface and would produce faster results than the FX 3800.

The Quadro 4000 - This video card is based on the same GPU that was used in the GTX 470. However, it performs much slower than the GTX 470; in fact, the performance level is about that of the GTX 460 SE. Even a regular GTX 460 (not the SE version) would give you better performance, because the Quadro 4000 has only 256 CUDA cores, while the GTX 460 has 336 CUDA cores.

The Quadro 5000 - This is based on the GTX 465, with a wider 320-bit memory bus giving it an edge over the GTX 465. However, it would be slower than a GTX 470 or GTX 570.

The Quadro 6000 - This video card is on par with the GTX 470, although the Quadro 6000 is much more expensive.

I do not recommend a Quadro video card unless you have a specific program that requires one or you have the HP DreamColor monitor. You will get better performance for a lot less money with the GeForce cards.
From some first-hand experience, I've seen the difference, and the whole push to go Quadro is full of horseturd (unless of course you have valid justifications based on technical benefits, such as what is mentioned above, that actually matter to you).

And don't go with AMD. CUDA has been thoroughly broken in for years, and although Adobe "supports" OpenCL, it's still in its infancy in their products and has known issues (I know this from first-hand experience on multiple occasions).



If you're doing a lot of work with After Effects, CUDA will certainly help in that regard. For straight-up rendering of captured video, you're better off getting lots of cores that are clocked fast. We built a video render machine for our company and wish we had gotten dual hexa-core Xeon processors (Intel now produces 12-core processors in a single package) in addition to the Quadro K5000 we did put in there.

In regard to configuring Adobe to use your video card for rendering (and this works for AME as well), make sure to place your video card's device name in supported_cuda_cards.txt (for NVIDIA) or supported_opencl_cards.txt (for AMD). This is for pre-CC. In CC you should be able to enable it as a preference setting (but we don't have CC, so I can't tell you for sure).
Could you tell me more about the kind of work/type of videos your company produces and edits? Curious. :)
 
If you're doing a lot of work with After Effects, CUDA will certainly help in that regard. For straight-up rendering of captured video, you're better off getting lots of cores that are clocked fast. We built a video render machine for our company and wish we had gotten dual hexa-core Xeon processors (Intel now produces 12-core processors in a single package) in addition to the Quadro K5000 we did put in there.

In regard to configuring Adobe to use your video card for rendering (and this works for AME as well), make sure to place your video card's device name in supported_cuda_cards.txt (for NVIDIA) or supported_opencl_cards.txt (for AMD). This is for pre-CC. In CC you should be able to enable it as a preference setting (but we don't have CC, so I can't tell you for sure).

It looks like rendering/exporting in Premiere Pro gets quite a boost with a GPU.

I found this (long) article: http://www.studio1productions.com/Articles/PremiereCS5.htm

Here are some of their Premiere Pro results for reference:

Timeline Render
CPU-only: 327 seconds
GTX980 CUDA: 17 seconds

MPEG-2 DVD Export
CPU-only: 789 seconds
GTX980 CUDA: 378 seconds

H.264 BluRay Export
CPU-only: 590 seconds
GTX980 CUDA: 225 seconds
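
That works out to roughly a 19x speedup on the timeline render (327 / 17), about 2.1x on the MPEG-2 export (789 / 378), and about 2.6x on the H.264 export (590 / 225).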

They're showing much faster results when using the GPU vs just using the CPU.

So we know it's fast... but Liger88 said the quality is worse. That's the part I had a question about.

I've never heard that quality is reduced when you use the GPU/CUDA.

I'd like some clarification about that.
 
I'm not sure how accurate that article really is. For the same video, I can outperform the hexa-core Xeon + Quadro K5000 with a MacBook Pro laptop with an i7 + NVIDIA 650M, both using Premiere CS6. Sure, there are some differences between the platforms, but if CUDA were the deciding factor, you'd see much faster render times from a graphics card with 1536 CUDA cores vs. 384 CUDA cores on the laptop.

We also don't use the workflows listed in that article, so that may account for some of the differences in our observed output (Premiere's H.264 output is acknowledged crap, so we output lossless and use a dedicated x264 render for final output to customers), not to mention that the type of video you are rendering will almost certainly change your render times.

I read an article very similar to your linked one when we decided to purchase the Quadro K5000 but hindsight is always 20/20. I wish we had spent the money on the CPU cores instead now that we've thrown thousands of hours of captured video through our workflow.

Could you tell me more about the kind of work/type of videos your company produces and edits? Curious. :)

We do training videos on technology subjects, almost all captured video (screencast/live action), edited in Premiere with some special effects added in After Effects. My domain is mostly video output for customers depending on the delivery platform, but you can safely assume we do a lot of H.264 output for low(er)-bandwidth devices (for which the amount of compression typically applied to our videos completely dwarfs the errors introduced via GPU rendering).
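
For anyone wanting to try the same lossless-master-then-x264 approach, here is a generic two-pass sketch (not our exact command; the bitrate, filenames, and audio settings are placeholders, and it drives ffmpeg's libx264 rather than the standalone x264 binary):

# Two-pass x264 encode of a lossless master for a fixed bandwidth target.
import subprocess

common = ["ffmpeg", "-y", "-i", "lossless_master.mov",
          "-c:v", "libx264", "-b:v", "2000k", "-preset", "slow"]

# Pass 1: analyse only, discard the video output.
subprocess.run(common + ["-pass", "1", "-an", "-f", "null", "-"], check=True)
# Pass 2: produce the deliverable using the stats gathered in pass 1.
subprocess.run(common + ["-pass", "2", "-c:a", "aac", "-b:a", "128k", "delivery.mp4"],
               check=True)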
 
I'm not sure how accurate that article really is. For the same video, I can outperform the hexa-core Xeon + Quadro K5000 with a MacBook Pro laptop with an i7 + NVIDIA 650M, both using Premiere CS6. Sure, there are some differences between the platforms, but if CUDA were the deciding factor, you'd see much faster render times from a graphics card with 1536 CUDA cores vs. 384 CUDA cores on the laptop.

We also don't use the workflows listed in that article, so that may account for some of the differences in our observed output (Premiere's H.264 output is acknowledged crap, so we output lossless and use a dedicated x264 render for final output to customers), not to mention that the type of video you are rendering will almost certainly change your render times.

I read an article very similar to your linked one when we decided to purchase the Quadro K5000 but hindsight is always 20/20. I wish we had spent the money on the CPU cores instead now that we've thrown thousands of hours of captured video through our workflow.

Gotcha.

But can anyone comment on what Liger88 said about "if you're encoding video on a GPU the graphic fidelity is very poor compared to CPU encoding."

I understand faster CPUs and tons of CUDA cores... but I'm not finding any evidence that the GPU provides poorer quality.
 
The very first link I got searching on Google:

https://forums.creativecow.net/thread/24/977972

It's dated about a year ago, and in the linked thread there's anecdotal evidence that the GPU render output is inferior to an x264 encode (which mirrors the anecdotal evidence I'd read elsewhere).

I just wanted to add that the 'general consensus' as to why hardware-accelerated encoding is not as good as software is that software has had literally years to bring the coding algorithms to maturity, whereas hardware encoding is still a work in progress and you're left with whatever shortcomings are baked into the hardware design.
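
If you want more than anecdotes, you can measure it yourself: encode the same lossless master once with the GPU encoder and once with x264, then score both against the master using ffmpeg's SSIM filter. A rough sketch (the filenames are hypothetical):

# Compare two encodes of the same source against the lossless master.
# ffmpeg prints the SSIM score to stderr; closer to 1.0 means closer to the original.
import subprocess

for candidate in ["export_gpu.mp4", "export_x264.mp4"]:
    subprocess.run([
        "ffmpeg", "-i", candidate, "-i", "lossless_master.mov",
        "-lavfi", "ssim", "-f", "null", "-",
    ], check=True)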
 
The very first link I got searching on Google:

https://forums.creativecow.net/thread/24/977972

It's dated about a year ago, and in the linked thread there's anecdotal evidence that the GPU render output is inferior to an x264 encode (which mirrors the anecdotal evidence I'd read elsewhere).

I just wanted to add that the 'general consensus' as to why hardware-accelerated encoding is not as good as software is that software has had literally years to bring the coding algorithms to maturity, whereas hardware encoding is still a work in progress and you're left with whatever shortcomings are baked into the hardware design.

Thanks for the info!

I will be building a new computer this summer... including an Nvidia card with CUDA.

I guess I'll be experimenting with the various export options :)
 