i5 NUC and GTX980

wormholev2

n00b
Joined
Jun 13, 2015
Messages
9
Okay, so here's the idea: I have a NUC5i5RYK lying around and my GTX980 from my main mATX build.
The NUC has an M.2 slot which can provide up to x4 PCIe lanes, which in theory should be plenty, even for the GTX980.

I got an M.2 to PCIe x4 adapter and a PCIe x4 to x16 riser.
Since the M.2 slot will be used for the GPU and I can't seem to find the tiny 5-pin to SATA power cable for the NUC, I'll try installing Windows on a USB drive and hope for the best.

In theory, this could end up being a <2L system (excluding the power supply).

Thoughts?

----------------------------------------------
Edit: Actually got this working!



Here are some more pictures and details about the current setup.
 
Last edited:
Don't you have to use a powered PCIe riser so you don't mix the 12V rails (from the NUC mobo and from the external GPU)?
Hehe, I wanna see this working, hope you don't fry anything! :D
 
So the M.2 adapter needs its own 12V power, and the GPU will also need extra juice, as the puny NUC power brick just won't be enough.
I have a Dell DA-2 somewhere in storage that I'm going to try digging up; if I remember correctly that thing could provide up to 220W of 12V power which should allow me to power at least the riser and the GPU.

The NUC needs 19V though, so if I want to power everything using just one power supply, I'll probably need an extra dc-dc converter.
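
For a rough sanity check, here's the back-of-the-envelope power budget (nominal figures only: ~165W TDP assumed for the 980 and 12V/18A for the DA-2; actual draw will vary):

```python
# Back-of-the-envelope power budget for running the GTX 980 off a Dell DA-2 brick.
# All figures are nominal assumptions: ~165 W TDP for the GTX 980, 12 V @ 18 A for the DA-2.

DA2_VOLTS = 12.0
DA2_AMPS = 18.0
da2_watts = DA2_VOLTS * DA2_AMPS            # ~216 W available on the 12 V rail

GPU_TDP_WATTS = 165.0                       # GTX 980 reference TDP (peaks can be higher)
RISER_OVERHEAD_WATTS = 10.0                 # rough guess for the M.2 adapter + riser

headroom = da2_watts - GPU_TDP_WATTS - RISER_OVERHEAD_WATTS
print(f"DA-2 capacity: {da2_watts:.0f} W, estimated headroom: {headroom:.0f} W")
# Roughly 40 W of headroom: enough for the GPU and riser, but not for the NUC's own input as well.
```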
 
So the M.2 adapter needs its own 12V power, and the GPU will also need extra juice, as the puny NUC power brick just won't be enough.
I have a Dell DA-2 somewhere in storage that I'm going to try digging up; if I remember correctly that thing could provide up to 220W of 12V power which should allow me to power at least the riser and the GPU.

The NUC needs 19V though, so if I want to power everything using just one power supply, I'll probably need an extra dc-dc converter.

The NUC can actually run on anything from 10V up to 19V. They only list a spec of 12V-19V, though.

Be careful with the card power, as it might be more difficult to find a brick that can provide the kind of power the 980 needs on JUST the 12V rail. The old-school Dell 12V 18A bricks are awesome and do a decent job of it. Just remember that the Broadwell NUC's M.2 slot is only PCIe 2.0, not PCIe 3.0, so it might not be as good an experience as you are hoping for.

Good luck and have fun with it, though. Just make sure you aren't introducing any crosstalk on the NUC's power feed, which shouldn't be an issue because M.2 slots by default do not push 12V into or out of the board; that's why the adapter has the floppy-pin power connector. You may need to add some capacitors to the riser or video card to help deal with vdroop on the system.

https://communities.intel.com/thread/49219

I believe this applies to the current run of NUCs too.
 
I've heard this idea being thrown around multiple times now, and I personally like it a lot. You'll have to experiment to see whether your GPU actually requires the 75W of power an x16 slot can provide, or whether it will work with power provided mostly via the PEG connectors.
I tested this with a thin mITX board that only has an x4 slot as well, and my GTX970 booted up fine on it, no issues whatsoever.
Something you may want to consider, if you don't mind a bit of tinkering, is using a FlexATX PSU instead of an external brick. It will only increase the volume of the build by <1L, and since the NUC can be powered by 12V, that could be a very good option. If you count the volume of the external bricks, the difference in size isn't even that large anymore.
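
For a sense of scale, assuming the common FlexATX PSU footprint of roughly 150 x 81.5 x 40.5 mm, the added volume works out to about half a litre:

```python
# Approximate volume of a FlexATX PSU, assuming the common 150 x 81.5 x 40.5 mm footprint.
length_mm, width_mm, height_mm = 150.0, 81.5, 40.5
volume_l = (length_mm * width_mm * height_mm) / 1_000_000   # mm^3 -> litres
print(f"FlexATX PSU volume: ~{volume_l:.2f} L")             # ~0.50 L, well under 1 L
```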

If Kaos_Drem is right about the M.2 only being PCIe2.0 instead of 3.0, though, the performance hit will be higher than you may have expected, up to 14%.

Other than that, I'd really like to finally see this being done by someone :D
 
Just remember that the Broadwell NUC's M.2 slot is only PCIe 2.0, not PCIe 3.0, so it might not be as good an experience as you are hoping for.
If Kaos_Drem is right about the M.2 only being PCIe2.0 instead of 3.0, though, the performance hit will be higher than you may have expected, up to 14%.

This would be a shame, although 86% of a 980 is still a hell of a lot more than the onboard HD 6000. I can't find documentation backing this up, though; I guess I'll have to check once I have everything put together.

The NUC can actually run on anything from 10V up to 19V. They only list a spec of 12V-19V, though.

This is good to know, it'll definitely make powering the whole thing a little easier.
 
This would be a shame, although 86% of a 980 is still a hell of a lot more than the onboard HD 6000. I can't find documentation backing this up, though; I guess I'll have to check once I have everything put together.

You won't find any documentation because the performance impact is dependent on the application. The more bandwidth used, the higher the performance impact is going to be.
If you look through the whole article I posted, you can see that the impact on every game they tested is different.
But yeah, that's still pretty damn powerful, no matter what.
 
You won't find any documentation because the performance impact is dependent on the application. The more bandwidth used, the higher the performance impact is going to be.
If you look through the whole article I posted, you can see that the impact on every game they tested is different.
But yeah, that's still pretty damn powerful, no matter what.

Sorry, I wasn't clear; I actually meant I couldn't find any documentation specifying that the NUC's M.2 is only PCIe 2.0 as opposed to PCIe 3.0.
I realise performance will be very application (and apparently resolution) dependent.
 
In Intel's tech sheet, it states "Using PCIe x4 M.2 SSD maximum bandwidth is approximately 1600 MB/s". PCIe 3.0 can transfer about 985 MB/s per lane, which means even x2 would be about 2000 MB/s. Considering that it would take four PCIe 2.0 lanes (at roughly 500 MB/s each) to transfer that amount of data, it seems to me that this is probably a PCIe 2.0 bus.
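
Roughly, the arithmetic looks like this (using the usual nominal per-lane figures after encoding overhead; treat these as approximations):

```python
# Infer the likely PCIe generation of the NUC's M.2 slot from Intel's quoted ~1600 MB/s.
# Nominal per-lane throughput after encoding overhead (approximate):
PCIE2_MBS_PER_LANE = 500    # 5 GT/s with 8b/10b encoding
PCIE3_MBS_PER_LANE = 985    # 8 GT/s with 128b/130b encoding

quoted_mbs = 1600           # Intel's tech sheet figure for an x4 M.2 SSD in this NUC

for gen, per_lane in (("2.0", PCIE2_MBS_PER_LANE), ("3.0", PCIE3_MBS_PER_LANE)):
    for lanes in (2, 4):
        print(f"PCIe {gen} x{lanes}: ~{per_lane * lanes} MB/s theoretical")
print(f"Quoted figure: ~{quoted_mbs} MB/s")
# PCIe 2.0 x4 tops out around 2000 MB/s, which lines up with the ~1600 MB/s real-world figure;
# PCIe 3.0 x4 would be closer to ~3900 MB/s, so the slot is probably a 2.0 link.
```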

With respect to the build: I am currently working on a similar build involving the new i7 NUC and a Galax short-form-factor 970. Contrary to what you have in mind, I am planning to do a brickless implementation using a 500W FlexATX PSU, as I believe it will be easier to make work. I think most of the difficulties with this build will center around powering both devices from a single source and a single switch. If anyone has any cool ideas for doing this, please let me know.

Also, with respect to the SATA cable for the NUC, you can buy that and other compatible NUC parts directly from Intel.

http://www.intel.com/support/motherboards/desktop/sb/CS-034599.htm
 
Last edited:
@wormholev2
Finally! I was waiting for this project to come out! HOAH!
I did a similar setup last year with my laptop and a desktop GPU.
I was thinking NUCs could be the future if they implement support for dual MXM GPUs.
Think GTX980Ms in SLI in a NUC with a Skylake K CPU and boom, a new form factor that will shake the market!
 
Don't know if anyone here considers themselves knowledgeable enough to figure out a power solution to get something like this running, but if anyone does I could really use some help.
 
Or simply use this, buddy:

PE4C_V3.0_All_1.jpg


http://www.bplus.com.tw/Adapter/PE4C%20V3.0.html

Or an EXP GDC V6 or V7 will do the trick, like I used last year.
 
It doesn't do M.2, and ExpressCard and mPCIe are both limited to a single PCIe 2.0 lane, which doesn't give very satisfactory performance. Add to that a bulky, inflexible cable and a large base, and it's a pretty crappy option for a small form factor PC.
 
It doesn't do M.2, and ExpressCard and mPCIe are both limited to a single PCIe 2.0 lane, which doesn't give very satisfactory performance. Add to that a bulky, inflexible cable and a large base, and it's a pretty crappy option for a small form factor PC.

They sell a separate cable with an M.2 end, I'm pretty sure, but I don't know if it's still in stock. Yes, it can be quite bulky, at least until Intel wakes up and puts MXM GPU modules on NUCs, like the external GPU support over USB 3.1 they're bringing to laptops soon. Many people asked for that, and thank god they listened.
 
Think GTX980Ms in SLI in a NUC with a Skylake K CPU and boom, a new form factor that will shake the market!

It won't "shake the market", as Joe Sixpack still thinks a full ATX tower with watercooling is what performance means, but for some of us it would be pretty cool!
 
It works!

Please forgive the terrible phone-camera photography and the fact that my test-bench is a random box I had lying around...
I'm going to be playing around with this a bit more, trying to clean up the powering solution and maybe looking into a nice way to bundle this all together into a custom case or something.

Let me know if you guys want me to run any specific benchmarks or tests on the system in the meantime.

Think GTX980Ms in SLI in a NUC with a Skylake K CPU and boom, a new form factor that will shake the market!
If a Skylake NUC comes out with 8 or more PCIe lanes over m.2 (or some other connector) then you could in theory do something like this using this.
If it wasn't so expensive and if I could borrow another 980, I'd be willing to try this out with my current setup to see what happens; each GPU would end up having only 2 PCIe lanes though which I imagine would hurt performance a lot...
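
Just to put rough numbers on how starved each card would be (same nominal per-lane figures as the bandwidth estimate earlier in the thread, and assuming the link stays PCIe 2.0):

```python
# Rough bandwidth per GPU if the NUC's x4 link were split into x2 + x2 for two cards.
# Assumes PCIe 2.0 at ~500 MB/s per lane; a desktop PCIe 3.0 x16 slot shown for comparison.
PCIE2_MBS_PER_LANE = 500
PCIE3_MBS_PER_LANE = 985

per_gpu_x2_gen2 = 2 * PCIE2_MBS_PER_LANE     # ~1000 MB/s per card
full_x16_gen3 = 16 * PCIE3_MBS_PER_LANE      # ~15760 MB/s in a normal desktop slot

print(f"x2 PCIe 2.0 per GPU: ~{per_gpu_x2_gen2} MB/s")
print(f"x16 PCIe 3.0 for comparison: ~{full_x16_gen3} MB/s")
# Each card would see well under a tenth of normal slot bandwidth,
# so SLI scaling would almost certainly suffer badly.
```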
 
Last edited:
Don't know if anyone here considers themselves knowledgeable enough to figure out a power solution to get something like this running, but if anyone does I could really use some help.

Take a look at my post about powering a thin mITX board with a FlexATX PSU. That may be helpful.

It works!

Please forgive the terrible phone-camera photography and the fact that my test-bench is a random box I had lying around...
I'm going to be playing around with this a bit more, trying to clean up the powering solution and maybe looking into a nice way to bundle this all together into a custom case or something.

Let me know if you guys want me to run any specific benchmarks or tests on the system in the meantime.


If a Skylake NUC comes out with 8 or more PCIe lanes over m.2 (or some other connector) then you could in theory do something like this using this.
If it wasn't so expensive and if I could borrow another 980, I'd be willing to try this out with my current setup to see what happens; each GPU would end up having only 2 PCIe lanes though which I imagine would hurt performance a lot...

Very nice, now to put it in a custom case :p
 
Nice! I've never seen it done before.

Can we have some more details on the cabling for the GPU?
 
Have you thought about powering the NUC via the auxiliary power connector under the 2260-to-2280 part of the M.2 card? Wondering if you could get that running from another splice.
 
Can we have some more details on the cabling for the GPU?

It's nothing that special, actually: it turns out that the 8-pin PCIe power splitter that came with the GPU lines up and fits on the DA-2's connector if you offset it correctly.

37afw9X.png


Pins 2, 4, and 6 on the PCIe connector are also 12V, and the shapes match up.

Once that's in place, just hook the "remote" pin to ground to turn the power supply on.
The power to the M.2 adapter is just two wires going from 12V/ground to a floppy Molex connector.

Have you thought about powering the NUC via the auxiliary power connector under the 2260-to-2280 part of the M.2 card? Wondering if you could get that running from another splice.
I think finding the right connector would be tricky, but I ordered a couple of standard 2.1mm DC connectors which I can plug into the back panel. As Kaos_Drem pointed out, the NUC is perfectly happy with 12V through there, so that'll just be an extra pair of wires from the power supply.
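
As a quick sanity check on that extra 12V feed (assuming the whole NUC pulls on the order of 30W under load; barrel connector ratings vary, so check whatever you buy):

```python
# Estimate the current the NUC would draw through a 2.1 mm barrel jack at 12 V.
# Assumes a rough ~30 W whole-system load for the NUC; actual draw may differ.
NUC_LOAD_WATTS = 30.0
SUPPLY_VOLTS = 12.0

amps = NUC_LOAD_WATTS / SUPPLY_VOLTS
print(f"Estimated NUC current at 12 V: ~{amps:.1f} A")
# ~2.5 A, which typical 2.1 mm barrel connectors and sensible-gauge hookup wire should handle,
# but it's worth confirming the connector's current rating.
```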
 
Last edited:
It works!

Please forgive the terrible phone-camera photography and the fact that my test-bench is a random box I had lying around...
I'm going to be playing around with this a bit more, trying to clean up the powering solution and maybe looking into a nice way to bundle this all together into a custom case or something.

Let me know if you guys want me to run any specific benchmarks or tests on the system in the meantime.


If a Skylake NUC comes out with 8 or more PCIe lanes over m.2 (or some other connector) then you could in theory do something like this using this.
If it wasn't so expensive and if I could borrow another 980, I'd be willing to try this out with my current setup to see what happens; each GPU would end up having only 2 PCIe lanes though which I imagine would hurt performance a lot...

Wow, what a marvelous result! I sure do hope Intel is seeing this; they've opened their minds to external GPUs for future laptops over USB 3.1 connections!
I suggest you post the same images and setup over at the Tech Inferno forums, where a lot of external GPU enthusiasts hang out. Congrats!
 
Wooooooow.... it worked! :cool:
Is there any sort of risk connecting the GPU directly to the AC-DC brick? Like it getting power even when the system is off?
 
Is there any sort of risk connecting the GPU directly to the AC-DC brick? Like it getting power even when the system is off?

As far as I know, it's not a problem. I did, however, look into a method discussed on SPCR for controlling the on/standby state of the DA-2 using the front panel headers (power switch and power LED), but I'm still not sure it's worth the effort.
 
Very cool, love to see people trying new stuff with the NUC.

I hope the next generation has a PCIe 3.0 x4 M.2 slot, because that should alleviate a lot of the performance issues.

You might also want to try running furmark to see if it has any limitations there.
 
ASRock's new Skylake NUC specs are out, and it looks like a PCIe 3.0 U.2 slot (x4) will be available on it. Bodes well for future iterations of this build.
 
ASRock's new Skylake NUC specs are out, and it looks like a PCIe 3.0 U.2 slot (x4) will be available on it. Bodes well for future iterations of this build.
Do you have a link to this? Does that mean that the u.2 slot will be replacing the m.2?

Also sorry I haven't posted any updates for a while.
I've been looking into possible cases/enclosures for the build and maybe a more elegant storage solution.

You might also want to try running furmark to see if it has any limitations there.
I'll run furmark and a couple more benchmarks when I get a chance.
 
Nice, gj. :)

It's a shame you don't have the H version of the NUC, because then you wouldn't have to use a USB enclosure for the SSD, and you could probably also mod the case a bit (cut a thin hole for the riser cable) so you don't have to keep it open, etc.
 
Do you have a link to this? Does that mean that the u.2 slot will be replacing the m.2?

Also sorry I haven't posted any updates for a while.
I've been looking into possible cases/enclosures for the build and maybe a more elegant storage solution.


I'll run furmark and a couple more benchmarks when I get a chance.

When you do your benchmarks, could you also measure power consumption? I'm wondering just how capable the DA-2 brick really is.
 
Furmark preset:1080 results:


Edit: 3DMark benchmark results:
Seems like it struggles with physics which is understandable given the CPU.

I ordered a power usage meter so I can measure wattage at idle and under load; I'll let you guys know once I have the numbers in.
 
Last edited:
Nice build, I have been dreaming of building something like this with a 970 Mini in a custom box.
 
Wow, what a marvelous result! I sure do hope Intel is seeing this; they've opened their minds to external GPUs for future laptops over USB 3.1 connections!
I suggest you post the same images and setup over at the Tech Inferno forums, where a lot of external GPU enthusiasts hang out. Congrats!

Considering Intel stopped eGPU solutions over Thunderbolt, I would hope Intel isn't seeing this. The holy grail of owning just one laptop on the go and plugging an external screen and external GPU into it when coming home... yeah, they effectively stopped that.

I don't think the industry in general wants this. The life cycle of laptops/NUCs would become longer, and the market for gaming rigs and gaming laptops would shrink. Intel's own integrated GPUs would become even more worthless.

In general, the industry believes in product differentiation. They want you to own multiple devices, one for each purpose, and they want to control the lifecycle of that hardware precisely, making it obsolete in two years so you need to upgrade or buy new. That's the successful formula of Apple.
 
Considering Intel stopped eGPU solutions over Thunderbolt, I would hope Intel isn't seeing this. The holy grail of owning just one laptop on the go and plugging an external screen and external GPU into it when coming home... yeah, they effectively stopped that.

I don't think the industry in general wants this. The life cycle of laptops/NUCs would become longer, and the market for gaming rigs and gaming laptops would shrink. Intel's own integrated GPUs would become even more worthless.

In general, the industry believes in product differentiation. They want you to own multiple devices, one for each purpose, and they want to control the lifecycle of that hardware precisely, making it obsolete in two years so you need to upgrade or buy new. That's the successful formula of Apple.
Your argument is a little flawed, as top-of-the-line tech will keep progressing and your tiny little NUC + mini-GPU combo will be 'obsolete' within a year or two anyway, so you'll need to get the next iteration of it.
It's even better with the NUC, as I wouldn't expect them to put sockets on there for you to be able to swap CPUs.. ;)

With AMD's HBM and Nvidia going down similar paths (form factor/size), it's just a matter of time until it's physically possible to have a current 11"-class card in a form factor that can sit on top of a NUC-sized board.
 
Physical possibility doesn't come into it; it's been physically and technically possible for the last few years. Indeed, people have done it home-brew by hacking external Thunderbolt boxes intended for storage controllers, using non-certified Thunderbolt interface cards, and many companies have demonstrated (and subsequently not released) all-in-one GPU adapter boxes. The problem is that Intel effectively has sole control over Thunderbolt certification (and particularly trademarking), and repeatedly torpedoes attempts to release PCIe-over-Thunderbolt GPU boxes.
Notice how recent laptops using outboard GPUs (e.g. the MSI Gaming Dock, Alienware Graphics Amplifier) use custom PCIe interfaces rather than Thunderbolt, even though Thunderbolt has already been proven to work for this application AND has already had all the interface R&D done (and would be an extra value-add as an interface anyway)? Blame Intel.
 
So, here's a question. Let's say I have an ASRock X99E-ITX/ac. Is it possible to SLI two cards between the M.2-adapted slot and the true PCIe x16 slot? Yes, I want to build a Frankenstein. Has anyone tried?
 