Supermicro 846 / 847 chassis and power supply noise questions.

Johnyblaze

Hi everyone,

I'm planning to build either a 24- or 36-bay FreeNAS ZFS SAN with WD 6TB Red drives and one or two 9211-8i HBAs (two SFF 8087 connectors per HBA), and I want to use a Supermicro board and chassis. I only have a budget for 24 drives, but I figure why not get a 36-bay chassis for future expandability, as long as it's not much more expensive or considerably louder? I have narrowed my selection down to these components:

Motherboard
X10SLR-F motherboard

24 bay E16 (single expander) chassis options
846BE16-R920B
846BE16-R1K28B

36 bay E16 (single expander) chassis options
847E16-R1K28LPB
847E16-R1400LPB

Now for my questions:

  1. Is there a huge difference in noise output between the Supermicro 846 24 bay vs. the 847 36 bay chassis?
  2. In the case of the 24 bay chassis options, am I correct in assuming that I can simply plug two SFF 8087 connectors into the backplane and spin up all my WD 6TB Red drives?
  3. In the case of the 36 bay chassis options, am I correct in assuming that I can plug one SFF 8087 connector into the 24 port backplane, and the other SFF 8087 connector into the 12 port backplane?
  4. Regarding the 847 36 bay chassis that has one 24 and one 12 port backplane, would there be any performance benefit to using two 9211-8i HBA's so that I could plug two SFF 8087 connectors into each backplane, keeping in mind I'm only using SATA drives?
  5. I know these servers are not silent, but I would like to get one that is as quiet as possible as it will be living in a rack in my basement. I understand that the Supermicro "SQ" power supply line stands for "Super Quiet", so I would assume that's my best bet to keep things quiet, but I just wanted to know if anyone had any personal feedback regarding loudness of the PWS-920P-SQ, PWS-1K28P-SQ, or PWS-1K41P-1R power supplies? I know the PWS-1K41P-1R is not "SQ", but based on this post, it may be a low noise option.
  6. Is one out of the three of these particular PSU's the clear winner in terms of lowest noise output? Thanks for any insight.
 
1. no

2. yes and no - they have various backplanes

3. yes

4. doubtful

5. the "SQ" ending PSU are the "Super Quiet" and what you want for silent pSU fans. I have the SQ, 1R Platinum, and the 1R 80+.... Platinum is good enough for me, and the price diff. between plat and plat sq is a big jump, 80+ gold are CHEAP but prob too loud for same-room usage.

6. The Platinum Super Quiet is the clear winner for noise and for power consumption both at idle and under load. They also often go for $80+ each, where the others are under $40 each.

The price difference between an 847 and an 846 is also significant, not trivial.

You can get an 846 with a SAS2 expander backplane (only one SFF 8087 needed), Gold PSUs, and rails for around $400 to your door.

An 847 is going to be at least $500-600.


(Talking eBay here.)

Experience:
I have numerous 846s and one 847, plus 30+ Supermicro PSUs ;) since I also have some 3Us and other chassis.

I also believe I have that same motherboard -- any questions let me know.

Todd
 
Thanks so much, Todd! You answered all my questions, which I really appreciate. I'll definitely keep an eye out on eBay. As of now, I don't see any 846 or 847 E16 chassis with SQ PSUs, but it would be awesome to get either one for cheap!

As for the motherboard, CPU, and RAM options, I'm trying to decide between these combos:

1x - Intel Xeon E5-1620 v3
4x - SAMSUNG 32GB M393A4K40BB0-CPB0

Pros: 3.5 GHz
Cons: quad core, doesn't support LRDIMMs

vs.

1x - Intel Xeon E5-2620 v3
4x - SAMSUNG 32GB M386A4G40DM0-CPB

Pros: six core, supports LRDIMMs
Cons: 2.4 GHz

This SAN will host media along with a homelab that consists of an iSCSI target backing a VMware datastore. Plex will run on a separate ESXi host that will be in HA. For my particular setup, I'm not sure which is better: 4x 3.5 GHz cores and no LRDIMMs, or 6x 2.4 GHz cores with LRDIMMs... Any thoughts on which of the two combos to go with?
 
In my testing with E5s and high-performance (NVMe) storage, either CPU will perform about the same. Once you start hitting the iSCSI target hard (production/work loads) the CPU load could spike, but I doubt that will ever happen at home.

For reference, my ESXi "all-in-one" is running two E5-2690s because I wanted the higher frequency for Windows and Linux VMs running software that benefits from it.

For my bare-metal Napp-IT SAN I personally have an E5-1620 v3 too.

IMHO:
ESXi host = high frequency
SAN = doesn't matter too much until you start doing a lot of heavy usage.

I went with the E5-1620 v3 because I have an E5-1650 v3 in there and don't need that ;)

If you ever plan to go with two CPUs then the 2620 is a nice choice too.

IMHO the E5-1620 v1 / v2 / v3 lineup is awesome for a home SAN and ESXi host due to the high frequency, high RAM options, and very low cost. I have an 8-node setup with all 1620s, and another with 1650s and two 10-cores for work... for the price/power they're great.

The other thing to consider is PCIe LANES if you start adding a lot of NVMe drives, HBAs, etc.

eBay sellers will NOT include the Platinum or SQ PSUs with the 846; you'll have to ask if they have them, and they likely don't. This is why I mention they cost more: you have to buy them separately from the chassis.
You just want to make sure you get a SAS2 or SAS3 backplane WITH an expander, and if you want the better backplane, the "2" (E26 vs. E16) means two expanders.
 
Perfect. I'll go with the E5-1620 v3 for the higher frequency and not worry about the LRDIMMs. I'm not too worried about PCIe lanes as it will only ever have one HBA. Also noted about the eBay sellers and PSUs. Thanks again.
 
No problem!

Be sure to "make offer" and not pay their asking price too :)

Also, be sure to inspect the chassis closely when it arrives. I've gotten broken PSU handles, dented PSU plugs, a chassis bent so badly I couldn't pull a caddy out, and other random problems. The "big" e-recycler sellers are really good about refunding a percentage for repair costs or taking the item back for replacement; "individual" sellers are hit or miss with their packaging, which can break rack ears.
 
Thanks again for the tips on the eBay sellers, they're highly appreciated!

OK so I've got another question for you...

Given 36 total disks, which of the following ZFS layout configurations do you go with and why?

  1. 3x 11 disk vdevs in RAID Z3 with 3 hot spares (3 disks can fail per vdev, 3 hot spares)
  2. 6x 6 disk vdevs in RAID Z2 (2 disks can fail per vdev, no hot spares)
  3. Some other configuration?
The way I see it, though both configurations have 24 disks of usable space, the 6x 6-disk RAID Z2 vdevs would be better: each vdev can tolerate 33% of its disks failing vs. 27% for the 11-disk RAID Z3 vdevs, the pool should have better IOPS with double the number of vdevs, and resilvers should be faster when a disk fails (the quick sketch below runs the numbers). Does this make sense?

Which do you go with and why? If neither is ideal, is there another option I'm missing?
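
For reference, the back-of-the-envelope math behind my guess looks roughly like this in Python. It's only an illustration: raw decimal TB per drive, ignoring padding, metadata, and the usual fill guidelines, and assuming random IOPS scales roughly with the number of vdevs.

```
# Rough comparison of the two candidate pool layouts above.
# Not FreeNAS tooling -- just illustrative arithmetic.

DRIVE_TB = 6  # WD Red 6TB, raw decimal capacity (assumption; ignores overhead)

def layout(name, vdevs, disks_per_vdev, parity, spares=0):
    data_disks = disks_per_vdev - parity
    usable_tb = vdevs * data_disks * DRIVE_TB
    total_disks = vdevs * disks_per_vdev + spares
    tolerance_pct = 100 * parity / disks_per_vdev
    print(f"{name}: {total_disks} disks, ~{usable_tb} TB usable, "
          f"survives {parity} failures per vdev ({tolerance_pct:.0f}% of each vdev), "
          f"random IOPS roughly {vdevs}x a single vdev")

layout("3x 11-disk RAID Z3 + 3 spares", vdevs=3, disks_per_vdev=11, parity=3, spares=3)
layout("6x 6-disk RAID Z2",             vdevs=6, disks_per_vdev=6,  parity=2)
```

Both layouts come out to the same ~144 TB of raw usable space from 24 data disks; the RAID Z2 layout just spreads it across twice as many vdevs.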
 
I'm not running anything that big, and am new to ZFS myself, so I'm not the best to answer that.


GEA will chime in; he's the maker of Napp-IT and has great input on ZFS.
 
That's great, thanks.

I also asked on the FreeNAS forums and was reminded of a few more possible configurations:

  1. 4 drives in a striped mirror for the iSCSI target backing the VMware datastore (2 drives usable; really only 1 drive in this case, since it's not recommended to go over 50% capacity when using iSCSI on ZFS), with the other 32 drives in 4x 8-disk RAID Z2 vdevs (24 drives usable).
  2. 36 drives in 4x 9-disk RAID Z2 vdevs (28 drives usable).
  3. 36 drives in 4x 9-disk RAID Z3 vdevs (24 drives usable).

It seems I'm just going to have to build each one and test, which I'm definitely not looking forward to doing...
 
Yes, that's what I'm doing: building out, testing, rebuilding, etc...

For the VM datastore, are you thinking of using 4x SSDs? That's the route I'd go if I weren't going with NVMe SSDs.

Either way, I would separate the VM store from the general storage pool.
 
I was going to try the VM datastore with 4x spindles first just to see how it is, and if it's terrible then I'll go with 4x SSDs. Since it's just a homelab I am wondering if I can get by with regular spindle disks.
 
As for the noise, I found something out with this chassis while testing: the fan speed setting in the BIOS only changed the fans connected directly to the motherboard headers, which in my case was just one fan. All the others were connected via the SAS backplane. Rewiring them to use the nearby connectors on the motherboard made a big difference. I also pulled the three fans that wouldn't reach the connectors on the far side of the board without extensions, which I didn't have or feel I needed; my servers are in a light-duty role.

With your SAS expanders, if you're REALLY worried about throughput, try connecting two cables from the HBA to your 24-port backplane for aggregation, and then two more from the 24-port backplane to the 12-port backplane downstream. If the aggregation works, that'll get you the best bandwidth short of having a card for each backplane.
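
To put rough numbers on when the single link actually matters, here's a quick Python sketch. Purely illustrative: the per-drive sequential rate is an assumed figure, and expander/protocol overhead is ignored.

```
# Back-of-the-envelope check on whether a single SFF 8087 link bottlenecks
# the 24-bay backplane. Assumed numbers, not a benchmark.

SAS2_LANE_MBS = 600    # 6 Gb/s per lane, ~600 MB/s after 8b/10b encoding
LANES_PER_8087 = 4     # one SFF 8087 cable carries a 4-lane wide port
DRIVE_SEQ_MBS = 175    # rough sequential rate of a WD Red 6TB (assumption)

def link_mbs(cables):
    return cables * LANES_PER_8087 * SAS2_LANE_MBS

drives = 24
aggregate_drive_mbs = drives * DRIVE_SEQ_MBS

print(f"24 drives streaming sequentially: ~{aggregate_drive_mbs} MB/s")
print(f"single SFF 8087 link:             ~{link_mbs(1)} MB/s")
print(f"dual-link (2x SFF 8087):          ~{link_mbs(2)} MB/s")
# So the single link only becomes the ceiling when many drives stream
# flat-out at once; for random I/O on spinners the disks are the limit.
```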
 
Thanks for the feedback regarding the fans, that's good info to know.

To connect the SAS expanders as you described, that would require the dual link expanders that are in the 847E16-R1K28LPB, correct? Are you able to use SATA drives with those dual link expanders?
 
Just to be clear, the fan control issue I mentioned isn't a problem with the SAS2 backplane, only the older one.
PWM control for all the fans on SAS2 :)
 