S4 Mini - An ULTRA-SFF Chassis shipping this December!

Guys I made a build guide. This might answer some questions for people looking at moving to the S4 Mini.

If you purchased an S4 Mini, I HIGHLY recommend watching BEFORE you install! It should make things go much more smoothly. Because it's a small case, there are some tricky things that are made simple with the right technique.

Thanks!

S4 MINI ASSEMBLY GUIDE

Just watched the video. Thank you. I sincerely appreciate the legitimacy and professionalism you are bringing to the SFF scene. Your presentation and that case oozes craftsmanship and class. I'm hoping that the S4 will be a huge success for you. Not to mention that the S4 in orange is pretty tempting for us Clemson Tiger fans. Merry Christmas all!
 
Nicely done video - looks really professional. For future assembly videos (if planned) I'd recommend some secondary angled views for better depth perception when relevant.

Using two zip ties to clamp the 24-pin cable's wires together could do the trick.

Also, you need to figure out how to connect the 8-pin PEG cable to the 970 Mini - the R9 285 in your video has its PEG connector facing the front of the case instead of the side.

Mounting 2.5" drives with the GPU installed looks like quite a pain - you might want to include that in future videos (if planned).
 
Wow, thanks for the compliments!

Using two zip ties to clamp the 24-pin cable's wires together could do the trick.

Agreed, I just wanted to show it stuffed in there so people could know how much room there is (or isn't).

Also, you need to figure out how to connect the 8-pin PEG cable to the 970 Mini - the R9 285 in your video has its PEG connector facing the front of the case instead of the side.

Apparently my language is still confusing, and thank you for staying on me about it. I will reiterate: I sell a computer case and can't guarantee compatibility with anything outside the Mini-ITX "standards."

The 970 is just not a good fit for this chassis, and I don't have one and don't plan to get one. I would love to borrow one...but I doubt anyone will take me up on that offer.

All I can do is offer the Sketchup model for download, and show how stuff mounts. Now you were kind enough to point out in this thread that you don't think it would fit. You are obviously an expert, so people can use your recommendation if they want to--and personally I would back you up.

I have a feeling that if the 285 ITX had connectors on the side the install would be difficult, but could be done if the GPU is angled 30 degrees.

At the end of the day I could just not say ANYTHING about hardware or DC-DC power supplies and let the customer figure it out...but I'm trying to share my excitement and enthusiasm for tinkering, and people should just take my opinions as opinions and nothing else. :)

Mounting 2.5" hard drives with gpu installed looks like quite a pain - you might want to include that in future videos (if planned).

It's not (if you are talking about the SSD bracket). If I thought it was difficult at all, I would have shown it. The only difficult part is installing the GPU if you can't move the drives over the motherboard...but it's not really HARD, it's just not a cakewalk.

Thank you for your concerns, SaperPL, you no doubt are voicing the concerns of the community and I appreciate it.
 
About the 970 and its PEG connector - in general, that's how cards are made now, with the connector facing this way instead of the classic front-facing connector as on your Radeon and on the R9 Nano.

There are also quite a few GTX 960s with the same - as in identical - oversized 'ITX' PCB, which might look optimal in terms of TDP for this case, so it's not necessarily a problem limited to 970s. It looks like this PCB format might become a new standard.

Hopefully some of your customers will try it out on their own and let you know.

About the drive installation - I get the feeling that might be the depth-perception problem I mentioned with the camera angle used in this video. The problem I have with it - and I really didn't notice this before - is that assembling something inside the case feels problematic: if I have to mount the drives over the motherboard and then move the whole bracket, drives and all, over the GPU, that's what feels off to me. What happens if someone has a slightly taller CPU cooler here?

I know it's still a case that has its limits - I'm just noting the things that, in my opinion, were missing from the video, or that I wished had been there, after watching it.

By the way - I don't think I'm the only one who was waiting for the video to finish with a completed build, and that didn't happen :)
 
About the 970 and its PEG connector - in general, that's how cards are made now, with the connector facing this way instead of the classic front-facing connector as on your Radeon and on the R9 Nano.

There are also quite a few GTX 960s with the same - as in identical - oversized 'ITX' PCB, which might look optimal in terms of TDP for this case, so it's not necessarily a problem limited to 970s. It looks like this PCB format might become a new standard.

Hopefully some of your customers will try it out on their own and let you know.

About the drive installation - I get the feeling that might be the depth-perception problem I mentioned with the camera angle used in this video. The problem I have with it - and I really didn't notice this before - is that assembling something inside the case feels problematic: if I have to mount the drives over the motherboard and then move the whole bracket, drives and all, over the GPU, that's what feels off to me. What happens if someone has a slightly taller CPU cooler here?

I know it's still a case that has its limits - I'm just noting the things that, in my opinion, were missing from the video, or that I wished had been there, after watching it.

By the way - I don't think I'm the only one who was waiting for the video to finish with a completed build, and that didn't happen :)

This is a custom case with limitations; this has been reiterated a hundred times. He is going above and beyond by showing hardware not recommended for the case crammed in there anyway. He isn't SilverStone and doesn't have to design a case for the idiot masses.
 
As I said before, I like the nice quality of your videos, Josh. This is how a YouTube tutorial should look. Great work :cool:

Now the technical part:

1. If I understood right, you have to remove almost all of your PC components if you want to remove your SSD from the SSD bracket? I know you won't be doing it very often, but is it possible to remove the SSD just by sliding it out over the GPU PCB?

2. How long will your PCIe riser last, bent in that Z-shape? In one of your previous videos you said you want your case to be suitable for LAN parties. If you move the case often and use a "heavy" graphics card, I think the weight of the card can damage the ribbon connections simply by flexing a little during travel. Most graphics cards use at least two stable mounting points: the PCIe slot and the screws next to the DVI connectors. You have only one of them, because the riser is "in the air." Did you tell your manufacturer how you intend to use the riser, and how does that affect its warranty? Maybe there is a non-flex 90-degree riser somewhere on the market that would fit in your case. Do you recommend something instead of this flex riser?

I asked my second question because I travel a lot, and it would be nice if I didn't need to buy a new riser every 2 or 3 months. The riser ribbon is connected to the PCB with SMD soldering - I know how that works. Thanks for the answer.
 
Great work on the video!

I wish I could get a 12-core Xeon v3 (or 16-core v4) and something like an EVGA GTX 960 4GB Mini. I just don't think I could find a picoPSU to power that. Even though a Google search says they're available in 400 W and 500 W versions, I have a feeling there's a big catch somewhere. I'm also wondering whether a slim closed-loop water cooling solution would fit, like SilverStone's low-profile slim-radiator model.
 
1. If I understood right, you have to remove almost all of your PC components if you want to remove your SSD from the SSD bracket? I know you won't be doing it very often, but is it possible to remove the SSD just by sliding it out over the GPU PCB?

If you want to remove your SSDs I don't think you would have any trouble doing so inside the chassis--especially 7mm SSDs. I just recommend you install them while the bracket is out because...why not? It is easier then. ;)


2. How long will your PCIe riser last, bent in that Z-shape? In one of your previous videos you said you want your case to be suitable for LAN parties. If you move the case often and use a "heavy" graphics card, I think the weight of the card can damage the ribbon connections simply by flexing a little during travel. Most graphics cards use at least two stable mounting points: the PCIe slot and the screws next to the DVI connectors. You have only one of them, because the riser is "in the air." Did you tell your manufacturer how you intend to use the riser, and how does that affect its warranty? Maybe there is a non-flex 90-degree riser somewhere on the market that would fit in your case. Do you recommend something instead of this flex riser?

I asked my second question because I travel a lot, and it would be nice if I didn't need to buy a new riser every 2 or 3 months. The riser ribbon is connected to the PCB with SMD soldering - I know how that works. Thanks for the answer.

I don't know--I have a level of confidence in it as over the past two years I have shipped out over 85 systems with ribbon cables, but this is a new design and you make good points.

What is your solution for your chassis and your full length cards? Is there anything you can recommend?

Great work on the video!

I wish I could get a 12-core Xeon v3 (or 16-core v4) and something like an EVGA GTX 960 4GB Mini. I just don't think I could find a picoPSU to power that. Even though a Google search says they're available in 400 W and 500 W versions, I have a feeling there's a big catch somewhere. I'm also wondering whether a slim closed-loop water cooling solution would fit, like SilverStone's low-profile slim-radiator model.

Thanks!

Yeah you can't have an internal AIO cooler AND an expansion card. I think it's sad that the Intel Xeons I've looked at don't have graphics built in!
 
E3 Xeons with a "5" at the end of their product number have an iGPU; however, note that Xeon v5 (Skylake) can only be used in server boards (C232/C236 chipset), and most of those don't have onboard display outputs for the iGPU. The intended use of the iGPU is actually virtualization, where a client would remote into a box and leverage the iGPU. It's like Nvidia's GRID, except Intel's implementation is tied to a specific VM and can't be reassigned to different VMs based on usage.

http://ark.intel.com/Search/Advanced?s=t&FamilyText=Intel® Xeon® Processor E3 v5 Family

Keep an eye out, though. Manufacturers are starting to release "gaming" versions of server-chipset boards, and there are some rumours of overclocking boards coming within the next month or two. It's more likely these boards will have display outputs on them, though I don't know how many will end up being ITX boards.
 
This is a custom case with limitations; this has been reiterated a hundred times. He is going above and beyond by showing hardware not recommended for the case crammed in there anyway. He isn't SilverStone and doesn't have to design a case for the idiot masses.

I know, I'm just addressing two things here:

1) There are things that could be added to the video, since it helps end users make a better-informed decision - and that's not only about whether to buy the case, but about which components to buy. So far I've seen a lot of people asking what will fit in our case, dondan's A4, etc., so they can buy components now and move them to that case in the future. Josh addressed some of those concerns by simply stating (or making educated guesses) that something should work. Omitting something from the video just because it looks obvious to someone who already knows how it works is not the best decision.

2) Josh used a GPU that has its PEG connector facing the front of the case, which isn't the standard at this point. Regardless of whether the PCB is oversized or not, combining that plugged-in cable with everything else you need to do to install the card with the riser, it's not obvious to me that you'll be able to do it without problems. For example, you need additional room to slide the bracket in, and the PEG connector might block that maneuver.


If you want to remove your SSDs I don't think you would have any trouble doing so inside the chassis--especially 7mm SSDs. I just recommend you install them while the bracket is out because...why not? It is easier then. ;)

I think that was the point Zombi made - you need to remove the GPU to remove the bracket, which means it isn't easy to access the drives, which are usually the most accessible parts of the system.

I don't know--I have a level of confidence in it as over the past two years I have shipped out over 85 systems with ribbon cables, but this is a new design and you make good points.

I actually didn't notice this before, but I am afraid that what you're doing to the riser (the forceful bending before installation) is really prone to damaging the ribbon's solder points. And while you might not notice it, there may be damaged lanes even if the card is working - it might just be unstable. I damaged the power lane on my first full-length riser, and while one GPU worked properly, the others crashed at random until we replaced the riser.

What is your solution for your chassis and your full length cards? Is there anything you can recommend?

Our riser's "slot end," into which we plug the GPU, is rigidly attached to the center wall, so there are no big forces acting on the riser's ribbon and thus on the solder points.
 
I think that was the point Zombi made - you need to remove the GPU to remove the bracket, which means it isn't easy to access the drives, which are usually the most accessible parts of the system.

You can remove the SSDs while the system is assembled.

You need to remove parts of the system to remove the bracket.

There is a big difference here.
 
Yup, it's just that the "easier" way to install the drives on the bracket means the bracket is outside the case - that was the point. Anyway, if it's doable, it's not such a big problem.
 
NFC, do you think this case is easily mountable on the back of a monitor? It sure looks like it. Would you happen to have any recommendations for this idea?

Also, you said you plan on doing cool niche production cases in the future, so does this mean I'll probably be able to buy the S4 Mini 1 or 2 years from now? (Or even an evolution of this case?)
 
NFC, do you think this case is easily mountable on the back of a monitor? It sure looks like it. Would you happen to have any recommendations for this idea?

I should have thought of that...but I didn't. So you would have to drill holes, but I suppose the design does lend itself to working in this manner. You could add a cross plate and VESA-mount it.

Also, you said you plan on doing cool niche production cases in the future, so does this mean I'll probably be able to buy the S4 Mini 1 or 2 years from now? (Or even an evolution of this case?)

I plan on it, but it's just feelings at this point--I can't guarantee anything :)
 
My S4 just came in!!!!!

So excited, this thing is really beautiful.

I know there has been a ton of talk as to whether a 970 will fit in here. I am glad to report that there is a ton of room for side-PEG video cards.

[Photo: the GTX 970 installed in the S4 Mini]


If you plug the connector in first and then put the card in, it makes life a lot easier, but honestly it wasn't even a challenge to get it in there.

Look out for my full build log in a couple of days. I am so excited to see my temps first!

Thanks Josh!
 
That's really exciting news. I want this case so badly (it's my favorite of the projects I've seen on HardForum due to its size and looks), but I currently only own a full-length 970. :( I'm planning on doing a completely new build once Pascal comes out though, so this gives me hope that a future mini Pascal card can fit in here.
 
Looks freaking amazing, Updawg! Still trying to justify buying this case but I can't seem to pull the trigger!
 
The first bit of bad news: the 4790T + Nano is too much power for the HDPLEX 250 and the Dell 330-watt power adapter. It hard powers off when playing a game.

I also have a Pico 160-XT that I might be able to run in tandem with the HDPLEX. What are everyone's thoughts?

Lastly, are all Dell power adapters made equal? I picked up an XM3C3 - it doesn't seem to be getting warm when using the system - not sure what to think.
 
The first bit of bad news: the 4790T + Nano is too much power for the HDPLEX 250 and the Dell 330-watt power adapter. It hard powers off when playing a game.

I also have a Pico 160-XT that I might be able to run in tandem with the HDPLEX. What are everyone's thoughts?

Lastly, are all Dell power adapters made equal? I picked up an XM3C3 - it doesn't seem to be getting warm when using the system - not sure what to think.

QinX over on SFF has reported that his R9 Nano is spiking hard and causing resets while benchmarking.

Sapphire was kind enough to loan me an R9 Fury for my test builds, and I had no issues, but since I had an early model this summer, I am raising my eyebrows. I contacted them to ask if I could test a new Nano so I can run it through its paces with various bricks. I have a DA330PM111, an HP 466954-001, a PA-9E, and a couple of test Meanwells that I would like to do another run-through with.

I am wondering if, with their Crimson software, some tweaking can be done to control peaks, as the Nano should operate at 175W but apparently it can spike much higher for brief periods. :mad:
 
I'm taking the Nano off my site until I can get a retail version in to rerun everything. I hope that I haven't elevated anyone's expectations based on that example config, but I probably did, and I apologize.
 
I was playing Battlefront and it crashed within seconds. So it wasn't even an unrealistic benchmark scenario.
 
The R9 Nano does spike much harder than the 970. Tom's Hardware reported some brief 437W pulls from the GPU alone. QinX's thread on SFF is seeing brief spikes over that.

@UpDawg Would you mind upgrading your power brick to the HP Firebird Voodoo 350W? That is the most powerful power brick that is widely available. If the 350W (underrated) Firebird can't sustain the Nano, then we'd all know that the 970 is the end of the road for the HDPlex. Thanks! Good luck!
 
Hey guys,

Thought I would report back after playing games in this thing for a while. After playing Just Cause 3 for about an hour, the highest temp I hit was 84 degrees. If I elevated the side where the video card is, the temps would drop to around 79 and stay there. I should also point out that I think there was something wrong with my fan control settings when I was doing this, as the fan only increased in speed once and topped out at about 44%. I'm sure that by playing around with it I can find the optimal fan-to-temp ratio.

Updawg, the power supply I am using is a 350-watt unit - the HP Voodoo 350W - and it hasn't given me any issues yet (I've done minimal testing). It is comically large for a power supply, but it does the trick.

I was just playing around with it, though; over the next couple of days I'm going to do some proper benchmarks and see the whole story. So far I'm incredibly happy and excited that I got this much power into this little guy. It's smaller than my Xbox One and it can legitimately do some 4K gaming (at 30 fps and not maxed settings, but it's a start!). I'm sure once Pascal is here it will get a nice upgrade.

I should also point out that the CPU in here isn't a T variant. I have a regular 4790 in here.
 
@UpDawg Would you mind upgrading your power brick to the HP Firebird Voodoo 350W? That is the most powerful power brick that is widely available. If the 350W (underrated) Firebird can't sustain the Nano, then we'd all know that the 970 is the end of the road for the HDPlex. Thanks! Good luck!

I got one on the way as of tonight. If nothing else, I can at least do some fun modding with custom bricks. I'm hoping I can find a plug-and-play solution with the retail card.

Hey guys,

Thought I would report back after playing games in this thing for a while. After playing Just Cause 3 for about an hour, the highest temp I hit was 84 degrees. If I elevated the side where the video card is, the temps would drop to around 79 and stay there. I should also point out that I think there was something wrong with my fan control settings when I was doing this, as the fan only increased in speed once and topped out at about 44%. I'm sure that by playing around with it I can find the optimal fan-to-temp ratio.

Dude those temps are promising...970 meta confirmed?
 
Updawg,

I know you have a T variant for your processor, but do you think you could try undervolting it? I wonder if that would help the situation. Though if what an earlier poster said is true, about the Nano drawing 490 watts on its own, then that might be a problem.

It uses an 8-pin connector, so the maximum it should draw is 150 watts + 75 watts - the 150 coming from the connector and the 75 from the motherboard slot. Total that and it's 225 watts. According to the math, that video card should not be able to draw more than that, which still leaves wiggle room for the processor and motherboard.

The HDPLEX, though, is rated for 250 watts continuous with a 400-watt peak. I don't know if it can do more than 250 continuously, like 275 or 300. I remember reading on the Small Form Factor forums that the HDPLEX has a regulated power input, so its cap is fixed. Contrast that with the picoPSU, which relies on regulation from an external source, mainly the actual power adapter. What the pico mainly does is switch the power on and off, usually when you turn the computer on and off. Power loads during those moments are low, around 60 or 70 watts.
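That budget arithmetic can be sanity-checked in a few lines. The PCIe spec limits and HDPLEX ratings are the figures quoted in this thread; the 100 W platform allowance is an assumption for illustration only:

```python
# Back-of-envelope check of the power budget described above.
# Spec limits and HDPLEX ratings are as quoted in the thread;
# the platform allowance is an assumed round number.

PCIE_SLOT_W = 75        # max the motherboard slot may supply (PCIe spec)
PEG_8PIN_W = 150        # max one 8-pin PEG connector may supply (spec)
CPU_PLATFORM_W = 100    # rough allowance for CPU + board + drives (assumption)

HDPLEX_CONTINUOUS_W = 250
HDPLEX_PEAK_W = 400

gpu_spec_max = PCIE_SLOT_W + PEG_8PIN_W      # 225 W by the spec
system_max = gpu_spec_max + CPU_PLATFORM_W   # worst case for the whole box

print(f"GPU spec ceiling: {gpu_spec_max} W")
print(f"System worst case: {system_max} W")
print(f"Over the HDPLEX continuous rating by {system_max - HDPLEX_CONTINUOUS_W} W")
```

So even staying strictly inside the PCIe spec, a worst-case load already exceeds the HDPLEX's continuous rating, though it stays under its 400 W peak figure.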

This is interesting. Josh, I'm seriously considering your case. I hope you haven't sold out yet! :D
 
It's nice to see the oversized PCB 970 fitting.

First set of bad news is the 4790t + Nano is too much power for the HDPLEX 250 and the Dell 330 Watt power adapter. It hard powers off when playing a game.

I also have a Pico 160XT that I might be able to run in tandem with the HDPLEX. What is everyone's thoughts?

Lastly are all Dell Power adapters made equal? I picked up a XM3C3 - it doesn't seem to be getting warm when using the system - not sure what to think.

Before ordering a bigger brick, I'd check out the config without the riser, or with some other riser, to make sure that's not the problem.

I've already said this, but I'll clarify: with a damaged PCIe power lane, GPUs without a PEG connector or with a TDP over 150W will be unstable, and what NFC told you to do with the riser in the video looks quite worrying and prone to damaging the ribbon, unless there's some really flexible connection between the PCB and the ribbon instead of solder joints.

I'd also try lowering the GPU power draw limit in Catalyst Control Center (or in this new Crimson thing) to see if that lets you run without hard crashes.
 
It's nice to see the oversized PCB 970 fitting.



Before ordering a bigger brick, I'd check out the config without the riser, or with some other riser, to make sure that's not the problem.

I've already said this, but I'll clarify: with a damaged PCIe power lane, GPUs without a PEG connector or with a TDP over 150W will be unstable, and what NFC told you to do with the riser in the video looks quite worrying and prone to damaging the ribbon, unless there's some really flexible connection between the PCB and the ribbon instead of solder joints.

I'd also try lowering the GPU power draw limit in Catalyst Control Center (or in this new Crimson thing) to see if that lets you run without hard crashes.

I can back up SaperPL on that. A torn PCIe riser cable will exhibit similar symptoms: you can boot and do some things, but you'll get extremely low FPS, reboots, etc. Retest with the GPU inserted directly into the PCIe slot.
 
I'd like to chip in, as I've been doing some digging into this as well.

I've done all my testing with either the HP 350W Voodoo and the HDPLEX 250, or the Meanwell RSP-1000-24 and the HDPLEX 250.
I'm also using my custom riser, which has separate power for the 12V rail.
The R9 Nano is the only part using the HP 350W; the rest has separate power.

You can also lower the power limit of the card, but honestly I wouldn't even want to use the R9 Nano then. I'd rather get the GTX 970 Mini or the Sapphire 285 Compact.

My own conclusion is that AMD made the power limit management way too aggressive.
In order for the GPU to be stable, you need it to be either:
1) Thermally limited, which takes a little bit of time.
2) 100% power limited. You can't give the card any breathing room or else it will spike.

Thermally limiting the card is the "best" option but that might be hard to do.

Here is a quote from my topic over on SFF:
I've been running Heaven for 1 hour and all seems stable, even with the Nano power limit set to +50%.
The power meter at the wall reports:
Peak power: 451W
0.17kWh, so 170W average power consumption over 1 hour.

The RSP-1000-24 is fairly efficient, its typical rating is 88% for the 24V version.

I'm going to run Furmark next at the previously impossible resolutions and see if the system crashes.
AND it works, I'll leave it running for a little bit to get some data.

I think it's safe to say that running an R9 Nano off a 350W AC-DC adapter is not enough to guarantee a 100% stable system.

The RSP-1000 has fans that are controlled by power draw.
What's funny is that Furmark forces the Nano to change clock speed between 700MHz and 880MHz, and you can hear the RSP-1000 rev up and down as the clock (and with it the power draw) changes.

Here are the Furmark runs/data in the same format as a couple of posts back

Furmark 1080p 0xAA = Seems stable after 5 minutes (PowerLimit +50% - Fanspeed Fixed 100% - GPUTemp 63C - Core Clock ~840MHz - Core Voltage ~1.00V)
Furmark 1080p 2xAA = Seems stable after 5 minutes (PowerLimit +50% - Fanspeed Fixed 100% - GPUTemp 63C - Core Clock ~800MHz - Core Voltage ~0.96V)
Furmark 1080p 4xAA = Seems stable after 5 minutes (PowerLimit +50% - Fanspeed Fixed 100% - GPUTemp 67C - Core Clock ~967MHz - Core Voltage ~1.14V)
Furmark 1080p 8xAA = Seems stable after 5 minutes (PowerLimit +50% - Fanspeed Fixed 100% - GPUTemp 64C - Core Clock ~1000MHz - Core Voltage ~1.19V)

I might have some more info:
Why did Furmark crash at 0xAA and 2xAA before but not at 4xAA and 8xAA?

0xAA Peak Power was 457W
2xAA Peak Power was 450W
4xAA Peak Power was 410W
8xAA Peak Power was 315W

Peak Power is higher when using 0xAA and 2xAA. 4xAA drops about 40W and 8xAA a whopping 100W!

Remember the 1024x768 crashing?
Now a 614W peak power!
Rerunning the test, the highest I've seen now is 542W. It is a very short peak.

Running at a regular Power Limit of 0%:
Furmark 1080p 0xAA (PowerLimit 0% - Fanspeed Auto 40% - GPUTemp 68C - Core Clock 727MHz - Core Voltage ~0.93V)
Furmark 1080p 2xAA (PowerLimit 0% - Fanspeed Auto 40% - GPUTemp 70C - Core Clock ~687MHz - Core Voltage ~0.92V)
Furmark 1080p 4xAA (PowerLimit 0% - Fanspeed Auto 38% - GPUTemp 72C - Core Clock ~796MHz - Core Voltage ~0.98V)
Furmark 1080p 8xAA (PowerLimit 0% - Fanspeed Auto 100% - GPUTemp 64C - Core Clock ~1000MHz - Core Voltage ~1.19V)

0xAA Peak Power was 399W
2xAA Peak Power was 428W
4xAA Peak Power was 375W
8xAA Peak Power was 321W

At 0xAA and 2xAA I consistently see the power meter hit 350W.
At 4xAA it drops to hitting 320W consistently.
At 8xAA I rarely see it go over 270W.

Even more fascinating is that the power meter goes from 60W all the way to 350W in seconds - the power management is real, guys.
Only at 8xAA does the power meter stay very stable, between 200W and 270W, almost never dipping below 200W or going over 270W. This is most likely because it is not lowering clock speed for power reasons but for temperature reasons.
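QinX's average-vs-peak observation can be reproduced directly from the two meter readings he quotes (0.17 kWh logged over one hour, against a 451 W peak). A quick sketch of that arithmetic:

```python
# Reproducing the average-power arithmetic from the post above:
# a wall meter logging 0.17 kWh over a 1-hour run implies ~170 W
# average, even though the same meter caught a 451 W peak.

energy_kwh = 0.17   # meter reading quoted above
duration_h = 1.0    # length of the Heaven run
peak_w = 451        # peak reading quoted above

average_w = energy_kwh * 1000 / duration_h   # kWh over the interval -> watts
print(f"Average draw: {average_w:.0f} W, peak draw: {peak_w} W")
print(f"Peak is {peak_w / average_w:.1f}x the average")
```

Which is exactly why a brick sized for the average draw can still brown out: the PSU has to survive the millisecond-scale peaks, not just the hourly average.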
 
It's nice to see the oversized PCB 970 fitting.



Before ordering a bigger brick, I'd check out the config without the riser, or with some other riser, to make sure that's not the problem.

I've already said this, but I'll clarify: with a damaged PCIe power lane, GPUs without a PEG connector or with a TDP over 150W will be unstable, and what NFC told you to do with the riser in the video looks quite worrying and prone to damaging the ribbon, unless there's some really flexible connection between the PCB and the ribbon instead of solder joints.

I'd also try lowering the GPU power draw limit in Catalyst Control Center (or in this new Crimson thing) to see if that lets you run without hard crashes.

I tried the R9 directly in the motherboard and it exhibited the same behavior, so we can definitively say it is not the riser.

I'd also like to mention that NFC is awesome to deal with; we exchanged some PMs, worked out a trade, and he is taking the Nano off my hands. You can tell he is really passionate about his work. So I would like to publicly thank NFC!
 
This R9 Nano issue seems quite worrying; I hope some news site picks up on it so we get a statement from AMD. A 473W peak on a 150W card is just too much.

Congratulations on the build, virus897, it seems like a 970 does work in the case quite well after all!
 
Besides the R9 Nano, I'm assuming the GTX 970 is the second most powerful Mini-ITX graphics card? Anyone have any opinions on the Asus or Gigabyte GTX 970? Which one is quieter, with better temperatures under load? Also, any fan issues on either one? My current GTX 750 Ti is nice, but the fan makes a strange sound; it's annoying.

Also, I still find it strange how the R9 Nano is pulling so much wattage. 400+ watts?! Where is that coming from? My electrical theory is pretty basic, but I'm pretty sure PCIe has a hard limit of 150 + 75 with one 8-pin connector. That's nowhere near 400 watts. I mean, yeah, if the wires are thick enough they could supply a lot of current, but that would be outside the PCIe specifications. I've always thought these video cards have a hard limit on how much wattage they can pull - that's why you have two 8-pin or 6-pin connectors, or a combination of both, on some video cards. Standard wire is 18 AWG, so that's about 10 amps max per conductor. An 8-pin has three 12V conductors, so that's 30 amps total. 12 × 30 is 360 watts, though that's ignoring the PCIe specifications. Still not 400, though. I don't know, it's weird. I could understand if that 400 number were total system wattage. It would be nice if these tests were done with something like a Kill A Watt. I have one; it's nice.

I must know the reason why!
 
Besides the R9 Nano, I'm assuming the GTX 970 is the second most powerful Mini-ITX graphics card? Anyone have any opinions on the Asus or Gigabyte GTX 970? Which one is quieter, with better temperatures under load? Also, any fan issues on either one? My current GTX 750 Ti is nice, but the fan makes a strange sound; it's annoying.

Also, I still find it strange how the R9 Nano is pulling so much wattage. 400+ watts?! Where is that coming from? My electrical theory is pretty basic, but I'm pretty sure PCIe has a hard limit of 150 + 75 with one 8-pin connector. That's nowhere near 400 watts. I mean, yeah, if the wires are thick enough they could supply a lot of current, but that would be outside the PCIe specifications. I've always thought these video cards have a hard limit on how much wattage they can pull - that's why you have two 8-pin or 6-pin connectors, or a combination of both, on some video cards. Standard wire is 18 AWG, so that's about 10 amps max per conductor. An 8-pin has three 12V conductors, so that's 30 amps total. 12 × 30 is 360 watts, though that's ignoring the PCIe specifications. Still not 400, though. I don't know, it's weird. I could understand if that 400 number were total system wattage. It would be nice if these tests were done with something like a Kill A Watt. I have one; it's nice.

I must know the reason why!

Yes, the 970 is the second strongest ITX card.

Well, physics doesn't care about the PCIe specifications, and both the cables and connectors are capable of a lot more, about 350W for the 8-pin, I think.
Keep in mind that the spec figure is the minimum power those connectors must be able to handle continuously without heating up.
Much higher currents are possible for short periods, and that's exactly what peak load is: a really short spike of current, maybe less than 1 ms, that the cables and connectors can easily handle but that the PSU can't supply.

Why the card actually does that is hard to answer; AMD themselves would probably have to look into it for days or weeks. It could be a bug in the software or a hardware design mistake, nobody knows.
What we do know is that the Fury X, which uses the exact same chip, can have a continuous power draw of over 400W, and the R9 Nano is just an underclocked, power-limited version of that.
So most likely something in the power management is at fault.
 
I think you guys didn't quite get what TDP means - it's THERMAL design power.

That means the GPU consumes its TDP on AVERAGE: it behaves like a heater of that wattage, releasing that amount of heat, and that's the amount of heat that needs to be dissipated.

At the same time, TDP as an average tells you how much power your PC will draw on average and how your electricity bill will scale with it (that's mostly relevant to servers and supercomputers).

In reality there is power modulation: GPU workloads go through phases of low power consumption and spikes that exceed the TDP, but the average power drawn over any window of time shouldn't exceed the TDP.
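The averaging argument can be shown with a toy power trace in Python. The numbers below are invented purely for illustration (1 ms samples, a 175 W TDP like the figure discussed in this thread):

```python
# A 1-second power trace sampled every 1 ms: two samples spike to
# 400 W, the rest sit just under TDP. Invented numbers, only meant
# to show how big spikes can coexist with an in-spec average.
TDP_W = 175.0
trace = [400.0] * 2 + [174.0] * 998  # 1000 samples of 1 ms each

average_w = sum(trace) / len(trace)
peak_w = max(trace)
print(f"peak {peak_w:.0f} W, average {average_w:.2f} W, TDP {TDP_W:.0f} W")
```

The peak is more than twice the TDP, yet the average stays under it, which is exactly the distinction this post is drawing.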

For reference:
Gigabyte 970 ITX - recommended psu of 400W
MSI R9 Nano - recommended psu of 600W

So, taking a rough number of 100W for a basic Intel platform (the kind of casual-user build those PSU recommendations assume) and today's standard of 80 Plus efficiency, you get something like:

65W CPU TDP + say 35W for motherboard + memory -> 100W / 0.8 (PSU efficiency) -> 125W (what the platform draws from the wall, efficiency included)

400W - 125W = 275W (real max/spike wall draw of 970?)
600W - 125W = 475W (real max/spike wall draw of Nano?)

275W * 0.8 = 220W (real max/spike direct draw of 970?)
475W * 0.8 = 380W (real max/spike direct draw of Nano?)

Those numbers might be somewhat overestimated, depending on how much headroom the manufacturer builds into the recommendation for safety and liability reasons and what PSU efficiency they assume, but that's how I would figure out how much power a GPU needs. The only question is whether the manufacturer's recommended PSU wattage is realistic or exaggerated to cover themselves.
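That estimate can be written out as a small Python helper. This is only a sketch of the post's reasoning; the 100 W platform figure and 0.8 efficiency are its assumptions, not vendor data:

```python
# Back out a GPU's worst-case draw from a recommended PSU rating,
# following the estimation method in the post above.
def gpu_power_estimate(recommended_psu_w, platform_w=100, efficiency=0.8):
    platform_wall_w = platform_w / efficiency         # platform draw at the wall
    gpu_wall_w = recommended_psu_w - platform_wall_w  # wall draw left for the GPU
    gpu_direct_w = gpu_wall_w * efficiency            # GPU draw before PSU losses
    return gpu_wall_w, gpu_direct_w

print(gpu_power_estimate(400))  # GTX 970:  ~275 W wall, ~220 W direct
print(gpu_power_estimate(600))  # R9 Nano: ~475 W wall, ~380 W direct
```

Changing `platform_w` or `efficiency` shows how sensitive the estimate is to those two guesses, which is the post's caveat about recommended-PSU numbers.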
 
This R9 Nano issue seems quite worrying. I hope some news site picks up on this so we get a statement from AMD. A 473W peak on a 150W card is just too much.

Tom's covered it here. I wish AMD had engineered the peak draw of this card with us SFF DC-DC folks in mind. They had a knockout for our niche with that shrunken PCB and HBM. I guess we'll all be waiting for Pascal.
 
True, but I'm going more on the specifications and the physics of the device. The PCIe 8-pin specification has a limit of 150 Watts.

http://cdn.overclock.net/a/ac/1000x2000px-LL-ac82eb1d_pinout.png

As you can see, the 8-pin has 5 ground pins and 3 hot pins. Two of the ground pins serve as sense pins, which let the GPU know that it can draw 150 Watts from the connector. I've always thought GPUs had some sort of BIOS-like chip that puts a hardware limit on the amount of power the card draws, which in turn would enforce the 150 Watt cap the specification calls for.

Now, if that's just a minimum rating instead of a maximum, that is unusual. Every source I have seen lists it as a maximum of 150 Watts from the connector. Even if the conductors can supply more current, which at 18 AWG they can, the hardware on the GPU itself should limit the current draw. The same goes for the PCI Express slot: for the x16 slot, the maximum power draw is 75 Watts. All of this wattage is at 12 V, by the way.

https://en.wikipedia.org/wiki/PCI_Express#Power

Now, if the GPU can simply ignore the power limit, even a hardware one, and max out the current draw by physics alone, then that can be a problem.

http://www.powerstream.com/Wire_Size.htm

Taking a look at this, 18 AWG wire can carry 16 amps of current under the chassis-wiring rating. That's a lot, and with three 12 Volt conductors that's 16 * 3 = 48 amps. 12 V * 48 A = 576 Watts. Huh, what do you know, that number matches what updawg and the other poster were getting. So it looks like, even with a hardware limit (if the GPU has one), the card can still ignore it for a fraction of a second and follow the laws of physics, drawing as much current as the wires can support. That doesn't seem good, though. As a manufacturer, you're supposed to abide by the specifications: if you put on one 8-pin connector with a maximum power consumption of 150 Watts, then your video card should follow that to the letter. Anything outside of that, even for a millisecond, would seem to be a defect and out of spec.
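Spelled out in Python for clarity (illustrative only; the 16 A figure is PowerStream's chassis-wiring rating for 18 AWG, not anything from the PCIe spec):

```python
# What three 18 AWG +12 V conductors could carry at the 16 A
# chassis-wiring rating, ignoring the PCIe specification entirely.
VOLTS = 12
AMPS_PER_CONDUCTOR = 16   # PowerStream chassis-wiring rating for 18 AWG
CONDUCTORS = 3            # +12 V wires in an 8-pin PEG connector

copper_ceiling_w = VOLTS * AMPS_PER_CONDUCTOR * CONDUCTORS
print(copper_ceiling_w)   # 576 W
```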

Hmm, maybe AMD shouldn't have shrunk the full-size Fury if their power management operates this way.

On another note, GPUs do have a BIOS, don't they? I remember reading that they do.

Edit: I thought it was the GPU that picked up on the two sense pins in an 8-pin connector. Apparently it's the PSU, according to this thread:

http://forums.anandtech.com/showthread.php?t=73012

It's pretty old, from 2007, but if it's the PSU's job, then I can see the problem. I'm pretty sure not every PSU has this sort of circuitry, especially something like the PicoPSU, and I don't know whether the HDPLEX has it either. Then again, I don't even know if what that thread says is accurate, i.e. whether it's the PSU or the GPU that reads the sense pins and determines power output. But if it's the PSU and it doesn't have the sense circuitry, then it may just come down to the laws of physics: how much current can the conductors carry, and how much current is the video card requesting?
 
AMD isn't going out of PCIe spec: the card averages the 175W TDP it is designed for. That this means it can pull 400W for 2 milliseconds and 170W for the remaining 998 milliseconds is another story.

How do you think overclocking works? You go way over the PCIe specification at that point.
The PCIe spec is just a sticker for AMD and Nvidia, a way to make sure the card meets certain requirements - in this case, that average power consumption is within spec.

I fully agree that the GPU should stick to what it was designed for, at least when used out of the box. But if being flexible with the standard means they can build the R9 Nano the way it is and have it run in 99% of the computers out there, then I don't blame them.

I'm just really excited and scared at the same time for the next GPU cycle. GPU Boost 3 and PowerTune are only going to be more aggressive.

Also, yes, GPUs do have a BIOS.
 
Huh, so the PCIe specification only limits average power consumption instead of maximum. What do you know? In that case, this totally makes sense. Hmm... so how would you ever choose the right power supply? Theoretically, power consumption would then only be limited by the conductors. And even then, if the video card wants more, it'll get more, conductors be damned. Lol.

So I found this part on Molex's website:

http://www.molex.com/molex/products/datasheet.jsp?part=active/0455870002_CRIMP_HOUSINGS.xml

Looks like it'll accept 16 AWG wire. Molex says 13 Amps max, but according to the physics, each 16 AWG conductor can carry up to 20 amps before exceeding the chassis-wiring rating. Technically it can go higher; it just won't be comfortable. Combine that with three 12 V lines and that's 60 Amps.

Ugh, maybe a maximum power limit instead of an average power limit would have been better.

Actually, I just realized something: QinX mentioned GPU overclocking. Say the TDP of the R9 Nano is 175 Watts - that's its power consumption. With one 8-pin power connector, the card would be allowed a maximum of 225 Watts including the motherboard's PCIe slot. This is how I thought overclocking video cards worked: you would still abide by the PCIe specifications as long as you did not exceed a total power consumption of 225 Watts, with that ceiling changing depending on how many PCIe connectors your video card has. With an average limit, though, that changes things: overclocking goes beyond the specifications and all bets are off.

Edit: Forgot to include this link from Molex's website:

http://www.molex.com/pdm_docs/ps/PS-45750-001.pdf

It details the type of wires you can use for these connectors.

2nd Edit: Take a look at this power consumption graph from this pcper website:

http://www.pcper.com/reviews/Graphi...ano-Review/Power-Consumption-and-Overclocking

According to the graph, the 175 Watts appears to be a maximum power consumption rather than an average; only when overclocking is that amount surpassed. It does look like the overclocked runs exceed the 225 Watt PCIe limit for one 8-pin connector, though. Hmm...

3rd Edit: Here's another power consumption graph:

http://www.guru3d.com/articles-pages/amd-radeon-r9-nano-review,10.html

Looks like it peaks at 213 Watts there. If 175 is the max, then the extra wattage could come from other components active during the test. I know my GTX 750 Ti is supposed to draw 75 Watts max, but that graph lists its consumption at 93 Watts, which can be accounted for by the idle CPU and other components. When I've run tests stressing just the GPU, I get around the same wattage.
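As a quick sanity check of those two graphs (using the post's numbers, not new measurements), the unexplained overhead in each case works out to:

```python
# Difference between graphed draw and the rated maximum, which the
# post attributes to other components active during the GPU test.
readings = {
    "R9 Nano":    {"graphed_w": 213, "rated_w": 175},
    "GTX 750 Ti": {"graphed_w": 93,  "rated_w": 75},
}

for card, r in readings.items():
    overhead_w = r["graphed_w"] - r["rated_w"]
    print(f"{card}: {overhead_w} W above the rated maximum")
```

Both cards show a similar-sized gap, which is consistent with the gap being platform overhead rather than the card itself running out of spec.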
 