Should I enable the iGPU or leave it disabled? (re: GTX 980)

I have an Nvidia GTX 980 that I use 100% for video output. However, I might want to use Intel QuickSync in the future, so is it worth enabling the iGPU?

Is there any performance difference if I leave it enabled but don't connect any monitors to it?

What is the recommended approach when using a dedicated PCIe card like my GTX 980?

(Also, if it matters, my CPU is an i7-6700k)

EDIT: It looks like DirectX 12 has a multi-adapter feature that can use both the integrated and discrete GPUs for increased performance?
 
If you want to use QuickSync, enable it.

Windows will grab the drivers on the next boot and that's pretty much it.
 
You'll lose a small amount of RAM and that's it. I have mine off on my 6700k because I don't plan on using it and I don't care for the extra complications caused by having multiple GPUs and GPU drivers. I doubt DX12 multi-GPU will be worth using with an Intel HD 530 and an Nvidia 980, but who knows.
 
EDIT: It looks like DirectX 12 has a multi-adapter feature that can use both the integrated and discrete GPUs for increased performance?
It's called explicit multi-adapter, and the game has to implement it to take advantage. So far the only game to support explicit multi-adapter is Ashes of the Singularity. In short, I wouldn't depend on this feature becoming any more widespread in the future.
 
Intel QuickSync is used to rapidly convert media files (videos/movies) from one format and encoding to another. It's actually a very useful feature.
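
For anyone curious what that looks like in practice, here's a minimal sketch driving ffmpeg's h264_qsv encoder from Python. This assumes an ffmpeg build with QSV support and the iGPU enabled in the BIOS; the file names and quality value are just placeholders.

```python
# Minimal sketch: a QuickSync transcode via ffmpeg's h264_qsv encoder.
# Assumes an ffmpeg build with QSV support and the iGPU enabled in the BIOS.
# File names and the quality value are placeholders.
import subprocess

def quicksync_transcode(src: str, dst: str) -> None:
    """Re-encode src to H.264 on the Quick Sync hardware block."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,                # input file
            "-c:v", "h264_qsv",       # Quick Sync H.264 encoder
            "-global_quality", "23",  # ICQ quality target (lower = better)
            "-c:a", "copy",           # pass the audio through untouched
            dst,
        ],
        check=True,
    )

quicksync_transcode("input.mkv", "output.mp4")
```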

For the OP: he has nVidia, and even if DX12's new multi-adapter doesn't work out, I believe nVidia Optimus technology will enable some power-saving features. It's designed for laptops to power off the dedicated GPU when there's no load, and I read somewhere they enabled it on desktop cards as well.
 
QuickSync/ShadowPlay/AMD GVR don't come anywhere near a CPU encode in terms of file size/quality.

Even at the ultrafast preset, x264 is miles ahead of QuickSync.
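
For comparison, the software side of that claim would look something like this through ffmpeg's libx264; just a sketch, and the preset and CRF values are only examples.

```python
# Sketch of the x264 software encode being compared against QuickSync above.
# The preset trades speed for compression efficiency: ultrafast is the
# fastest/least efficient, veryslow the opposite. Values are examples only.
import subprocess

def x264_encode(src: str, dst: str, preset: str = "ultrafast") -> None:
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,
            "-c:v", "libx264",  # CPU software encoder
            "-preset", preset,  # ultrafast ... veryslow
            "-crf", "20",       # constant-quality target (lower = better)
            "-c:a", "copy",
            dst,
        ],
        check=True,
    )

x264_encode("input.mkv", "x264_test.mp4")
```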
 
It was also reported by AnandTech that Windows is more responsive on the iGPU due to lower latency. This is particularly true with Iris GPUs: Windows should run faster on an Iris GPU and other iGPUs versus dedicated GPUs.

QuickSync/ShadowPlay/AMD GVR don't come anywhere near a CPU encode in terms of file size/quality.

Even at the ultrafast preset, x264 is miles ahead of QuickSync.

Last I read, that wasn't the case.
 
Explicit multi-adapter is up to the game devs to implement; not all games will use it.

But for QuickSync streaming, keep the iGPU enabled.
 
Last I read, that wasn't the case.

Depends on whether speed or quality is important. Quick Sync is faster than x264, but it ends up with larger files (around 20% larger) of lower quality than x264. I've all but moved on to x265 now, so Quick Sync is kinda pointless for me.
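
For anyone wanting to try the x265 route, a rough sketch via ffmpeg's libx265; the settings here are illustrative, not tuned.

```python
# Rough sketch of an x265 (HEVC) encode through ffmpeg's libx265.
# Much slower than Quick Sync, but typically smaller files at the same
# quality. The HEVC CRF scale maps differently than x264's; 23 is just
# an example value.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-i", "input.mkv",
        "-c:v", "libx265",
        "-preset", "medium",
        "-crf", "23",
        "-c:a", "copy",
        "output_x265.mkv",
    ],
    check=True,
)
```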
 
You'll lose a small amount of RAM and that's it. I have mine off on my 6700k because I don't plan on using it and I don't care for the extra complications caused by having multiple GPUs and GPU drivers. I doubt DX12 multi-GPU will be worth using with an Intel HD 530 and an Nvidia 980, but who knows.

You lose a small amount of system RAM, but the upside is you save a bit of VRAM if you're running dual monitors and have your secondary display connected to the iGPU. This is how I have my dual displays set up on both my IB and SB boxes. It works great, with zero complications caused by running them this way for the past 3 years. Which complications might you be referring to? I've literally not noticed a single difference in usability or issues between running them like this vs. both screens on the same video card.
 
For the OP: he has nVidia, and even if DX12's new multi-adapter doesn't work out, I believe nVidia Optimus technology will enable some power-saving features. It's designed for laptops to power off the dedicated GPU when there's no load, and I read somewhere they enabled it on desktop cards as well.

Doesn't exist for desktops. Nvidia solved this problem by vastly reducing idle power:

NVIDIA GeForce GTX 980 Ti 6 GB Review

Their most power-hungry card uses 11W at idle, and it gets down to as low as 4W. You're really not going to notice the difference between that and the IGP unless you're running on batteries.
 
There is one case where enabling the iGPU might help.

If you're like me, and you watch videos with both frame interpolation (SVP) and lots of post-processing, you will want to have the iGPU enabled. You can decode using Intel QuickSync or NVIDIA CUDA, use the discrete GPU for post-processing/scaling effects, and run SVP on the iGPU.
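
As a rough illustration of splitting just the decode off to the iGPU, here's a sketch of the idea using ffmpeg's -hwaccel flag. This is not SVP's actual pipeline, and it assumes an ffmpeg build with QSV support and an H.264 input file.

```python
# Illustrative only: decode on the Quick Sync fixed-function block while the
# rest of the pipeline runs on the CPU. This mirrors the decode/post-processing
# split described above, not SVP's actual setup.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-hwaccel", "qsv",    # hardware-accelerated decode on the iGPU
        "-c:v", "h264_qsv",   # Quick Sync decoder (assumes an H.264 input)
        "-i", "input.mkv",
        "-c:v", "libx264",    # everything downstream stays on the CPU
        "-crf", "20",
        "output.mp4",
    ],
    check=True,
)
```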
 
Doesn't exist for desktops. Nvidia solved this problem by vastly reducing idle power:

NVIDIA GeForce GTX 980 Ti 6 GB Review

Their most power-hungry card uses 11W at idle, and it gets down to as low as 4W. You're really not going to notice the difference between that and the IGP unless you're running on batteries.

Glad to note this. Then I guess there really is no need to enable the iGPU unless you use QuickSync, but nVidia covers that pretty well with CUDA and the video engines in their GPUs.
 
Glad to note this. Then I guess there really is no need to enable the iGPU unless you use QuickSync, but nVidia covers that pretty well with CUDA and the video engines in their GPUs.
Depends on whether you want it for streaming or storage. NVENC is fine for streaming, but it's utter garbage compared to even QuickSync (I've tried both: NVENC on a 970 and QuickSync on a recent Intel proc).
I already covered QuickSync above.
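
To make the streaming-vs-storage distinction concrete: a streaming-style encode is bitrate-capped rather than quality-targeted. A sketch via ffmpeg's h264_nvenc, with placeholder numbers and file names:

```python
# Sketch of a streaming-style NVENC encode via ffmpeg's h264_nvenc:
# a capped bitrate (what a streaming service ingests) rather than a
# constant-quality target. Bitrates and file names are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-i", "capture.mkv",
        "-c:v", "h264_nvenc",
        "-b:v", "6M",        # target bitrate
        "-maxrate", "6M",    # cap, streaming-style
        "-bufsize", "12M",   # rate-control buffer
        "-c:a", "aac", "-b:a", "160k",
        "stream_test.mp4",
    ],
    check=True,
)
```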
 
Depends on whether you want it for streaming or storage. NVENC is fine for streaming, but it's utter garbage compared to even QuickSync (I've tried both: NVENC on a 970 and QuickSync on a recent Intel proc).
I already covered QuickSync above.

Encoding quality still differs that much?
 
I honestly don't understand how... everything is 0s and 1s... I've never seen a good explanation of how a GPU H.264 encode vs. a CPU x264 encode vs. an iGPU H.264 encode can vary.

The same way "Constant Quality" in x264 can give you better results than the classic 2-pass, all without any target-bitrate guesswork? You can do a lot for encode efficiency if you're flexible. The only thing that's standardized is the result :D (The H.264 spec only defines the bitstream and the decoder; encoders are free to differ in how they produce it.)

I'm pretty sure the hardware encoders are single-pass, with few options to enable. Their algorithms are optimized to minimize die space AND power consumption.
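
To put the CRF-vs-2-pass distinction in concrete terms, here's a sketch of both x264 approaches through ffmpeg; file names, the CRF value, and the bitrate are placeholders.

```python
# Sketch of the two x264 strategies mentioned above, via ffmpeg.
# File names, CRF, and bitrate are placeholders.
import subprocess

SRC = "input.mkv"

# Constant quality (CRF): one pass. You pick a quality level and the encoder
# spends whatever bitrate each scene needs to hit it.
subprocess.run(
    ["ffmpeg", "-i", SRC, "-c:v", "libx264", "-crf", "20",
     "-c:a", "copy", "crf_out.mp4"],
    check=True,
)

# Classic 2-pass: you pick a target bitrate. Pass 1 analyzes the video and
# writes a stats file; pass 2 distributes the bitrate using that analysis.
# (On Windows, replace /dev/null with NUL.)
subprocess.run(
    ["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-b:v", "4M",
     "-pass", "1", "-an", "-f", "null", "/dev/null"],
    check=True,
)
subprocess.run(
    ["ffmpeg", "-i", SRC, "-c:v", "libx264", "-b:v", "4M",
     "-pass", "2", "-c:a", "copy", "twopass_out.mp4"],
    check=True,
)
```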
 
I tend to tweak/care about encoding quality only if I'm planning to watch the content on a Retina screen or another really good screen. If it's going on a cheap $50 smartphone from 1.5 years ago, I tend to just go with the lower-quality/faster-speed route. If it's being played back on a 4K screen, I let the sucker take its time with CPU software encoding.
 
The same way "Constant Quality" in x264 can give you better results than the classic 2-pass, all without any target-bitrate guesswork? You can do a lot for encode efficiency if you're flexible. The only thing that's standardized is the result :D

I'm pretty sure the hardware encoders are single-pass, with few options to enable. Their algorithms are optimized to minimize die space AND power consumption.

With NVENC and MediaCoder you can do 2-pass AFAIK.
 
The same way "Constant Quality" in x264 can give you better results than the classic 2-pass, all without any target-bitrate guesswork? You can do a lot for encode efficiency if you're flexible. The only thing that's standardized is the result :D

I'm pretty sure the hardware encoders are single-pass, with few options to enable. Their algorithms are optimized to minimize die space AND power consumption.
Sorry, I don't exactly understand this. I know very little about encoding, so "constant quality" means nothing to me :/ You mention flexibility and I don't get it. Again, sorry for not having a clue on this.
 