[H]ard Forum Storage Showoff Thread

Anyone have a recommendation for a good power supply that will power 12+ drives (3.5 and 2.5)?
 

As long as you have staggered spinup, you don't have to worry about having a crazy power supply. Drives use by far the most power when they are spinning up at power on. If they are staggered so they aren't all spinning up at the same time the power requirements go way down.
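To put very rough numbers on it, here's a quick sketch with ballpark per-drive figures I'm assuming (not datasheet values, so check the 12V spin-up current for your actual drives):

# rough PSU budget sketch -- per-drive wattages below are assumptions, not datasheet values
drives = 12
spinup_w = 25           # assumed peak draw of a 3.5" spinner while spinning up (mostly 12V)
running_w = 8           # assumed steady-state draw per drive once spinning
rest_of_system_w = 150  # assumed budget for CPU, board, SSDs, fans

all_at_once = rest_of_system_w + drives * spinup_w             # every drive spins up together
staggered = rest_of_system_w + drives * running_w + spinup_w   # only one drive spinning up at a time

print(f"simultaneous spin-up: ~{all_at_once}W, staggered: ~{staggered}W")

With those assumptions the worst case drops from roughly 450W to under 300W once spin-up is staggered, which is why a modest PSU is fine.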

I've been powering my old dual socket X8DTE and L5640 system with 12x 7200rpm spinners and 10x SATA SSD's using a puny little 550W Antec Earthwatts Platinum PSU for years now. Never had an issue.

The dual EPS connectors (needed for my server board) and stellar efficiency sold me on this PSU.
 
Ok.

I guess the part I don't understand is, of what possible value could it be to have a hard drive not spin up if a system has 3.3v power?

Why is this a part of the spec?

It allows management software to do a hard power reset of a specific drive or array when they want/need to. This is in the SAS and SATA spec. The problem comes in the consumer space, where power supplies don't follow the new spec and feed 3.3v to the drives all the time, even though the drives don't use 3.3v for anything else. That causes the drive to never spin up, when the feature is only supposed to apply a "disable voltage" for a short duration during diagnosis or maintenance, without shutting down a server or pulling a drive from the chassis.



Legacy SATA power spec:

Legacy.gif




Newer 2016 specifications:

NewSATA.png
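(For anyone who can't see the attachments, roughly what the two pinouts boil down to, going from memory of the SATA 3.3 change, so double-check against the actual spec:

Pin   Legacy SATA power     2016 spec (SATA 3.3)
P1    3.3v                  3.3v
P2    3.3v                  3.3v
P3    3.3v                  PWDIS -- drive stays powered down while this pin is driven high

P3 is the pin that matters for the tape trick mentioned below.)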
 
Western Digital "shucked" drives follow the newer spec, which means they use the enterprise 3.3v disable pin, so the drive won't spin up while power is present on that pin. To get around this, you either tape the pin off so it doesn't get power (that's what I do), cut the 3.3v wire in the SATA power cable, or use a molex adapter that carries no 3.3v at all (what Spartacus09 is doing).
Correct, at the time I didn't have any kapton tape, and I didn't want to go the destructive route of snipping the pins on the drive or the cables themselves.

Hmm.

I foresee this causing real problems with older backplanes. Much more difficult to disable one 3.3v connector in those. Unless - of course - the backplane didn't wire 3.3v to begin with since hard drives don't use the 3.3v rail for anything.


Actually, since my backplanes are all powered by 4-pin molex connectors (12v, 5v, and two grounds), they probably don't.
You hit the nail on the head; most backplanes don't have the 3.3v, which is why most consumer QNAP and Synology enclosures can take the shucked drives with no issue.
My assumption is that it costs more money to add the extra circuitry, so most backplanes don't bother with it.
 
my case came in and WOW.. so easy to work with and so clean... well made and I love it.
I did not calculate something into the mix...

I have 2x 8i HP HBAs and I'm using the 6 onboard SATA 3Gb/s connectors....for 23 drives.

I need to get more SATA power adapters.... so that is easy...

the concern...

I can remove the 10Gb NIC in my PCI Express slot (not gonna do that) and pop in my Dell 8i HBA... but even that won't be enough as I have more drives to put in, like another 6x 2.5" Icy Dock (after I buy it)...
and 3 more 3x bays... so 3x3+6 = 15 more SATA ports needed....

now I know there are HBA add-on cards but they run off the ports of other cards, right? so a 16i add-on will give 16 ports but feeds off another card? haven't played with that yet...

so what is a cheap alternative HBA to get 32 internal HBA ports?

im using the 6 onboard SATA for regular drives but the HBAs are for SSDs so I want 6Gb/s...


I've been out of the hardware market for so long and all of this is new to me..

20190820_150959.jpg 20190820_151009.jpg 20190820_151635.jpg
 

The "add on cards" would be considered expanders or multipliers (or backplanes for when you attach drives directly to the PCB they are on). SAS calls them SAS Expanders, sometimes I see ones for SATA just called multipliers. I believe you must match the chipset on the expander to the chipset used by the HBA. For instance there is a Marvel 88SM9715 multiplier that turns 1 port into 5 that matches with the Marvell 88SE9485 and supposedly 88SE9215 sata card chips. These are what BackBlaze used for sata cards and backplanes. LSI has the 9305 series chip (what I have) and it matches to their SAS3x40 expander. Intel also sells expanders that match to their raid cards.


So you either need to use some expanders that match the cards you already have, or buy something like the LSI 9305-24i HBA to get 24 ports in a single slot.
 
The "add on cards" would be considered expanders or multipliers (or backplanes for when you attach drives directly to the PCB they are on). SAS calls them SAS Expanders, sometimes I see ones for SATA just called multipliers. I believe you must match the chipset on the expander to the chipset used by the HBA. For instance there is a Marvel 88SM9715 multiplier that turns 1 port into 5 that matches with the Marvell 88SE9485 and supposedly 88SE9215 sata card chips. These are what BackBlaze used for sata cards and backplanes. LSI has the 9305 series chip (what I have) and it matches to their SAS3x40 expander. Intel also sells expanders that match to their raid cards.


So you either need to use some expanders that match the cards you already have, or buy something like the LSI 9305-24i HBA to get 24 ports in a single slot.

that's almost $600...

could just do 2x 16i cards for $119 each. way cheaper...
 
Keep in mind though that the multipliers aren't intended for the SSD use case; don't go thinking you can get away with a SAS2008 controller, you need a 2308 or better.
 
runnin hp h220... lsi 9205-8i x 2 now...another system has a dell h310 in my freenas... both do ssd just fine
 

Aren't those SAS 2308 products though?

He was talking about SAS2008 products, like the oft recommended IBM M1015.

Those are great for large arrays of hard drives, but do fall short on SSD's, in large part because they are PCIe 8x gen2 and thus don't have enough PCIe bandwidth to saturate all the SAS2 connections.



But the "add-on cards" you are talking about are "SAS Expanders". These are great if you want to attach a large number of drives, but those drives will still be sharing the total bandwidth available on the HBA.

For instance, that H220 has 8x 6gbit, for a total of 48gbit. Add a SAS expander and you are simply splitting that bandwidth over more drives. For hard drives this likely won't matter, as they tend to max out at 250MB/s, but for modern SATA SSD's, they are already pretty much maxing out the 6gbit bandwidth as it is, so if you use an expander you will definitely become controller limited in that scenario.

So, SAS Expanders, great for adding large quantities of (comparatively) slow drives, not great for setups where you care about drive performance, which I am guessing you do, or you wouldn't be using SSD's
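A quick way to see the sharing effect in numbers (rough sketch using the same figures as above, ignoring protocol overhead):

# raw bandwidth per drive if every drive behind the expander is busy at once
lanes = 8                        # 8x 6Gb/s links on the H220
lane_mb_s = 6 * 1024 / 8         # 6Gb/s ~= 768 MB/s raw per link
hba_total = lanes * lane_mb_s    # 6144 MB/s across the whole controller

for drives in (8, 16, 24):
    print(drives, "drives ->", round(hba_total / drives), "MB/s each")

Fine for spinners that top out around 250MB/s, but past roughly a dozen SATA SSD's pushing ~550MB/s each you're clearly controller limited.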
 
for now I said screw it... I removed my 10gb nic and put the dell perc 310 in... got 4x 2tb in.. need power adapters then will get 4x 2tb more added...
I gotta ebay some stuff to get $ to then get 2x 16i cards... I figure 32 on hba plus 6 onboard is more than enough....for now... lol

so I ran out of drive letters in windows.. seems stablebit drive pool doesn't require letters.. so guess ill remove them all later.. going to bed now hah...

upload_2019-8-20_21-33-2.png
 


To expound on my earlier point about the SAS 2008 cards: I actually have two of those older SAS 2008 HBA's (crossflashed M1015's) in my main server. They are soon to be upgraded to 9300-8i's, but in the meantime I only use 6 drives on each to make sure I don't become HBA limited.

My calculations are as follows:

PCIe Bandwidth: Gen 2 has 500MB/s per lane. 8 lanes means 4000MB/s. Unclear how much overhead the SAS controllers have on the PCIe bus.

SAS bandwidth: 6Gb/s * 8 = 48Gb/s = 6144MB/s.

So, if all drives hit at the same time, this adapter is going to be limited by the PCIe bus. It cannot saturate all the drives at the same time.


Now, with your H220 (and your H330, I believe) they similarly have 8x 6Gb/s SAS connections, but they are Gen3, which has 985MB/s per lane. So on these you have 7880 MB/s to play with, which is greater than the 6144MB/s total of the drives, so those can max out all of the drives before they saturate the PCIe bus.

Note that they have reintroduced these problems with later controllers.

The 9305-16i and 9305-24i are still "just" x8 Gen 3, but now they are 16 and 24 drives at 12Gb/s respectively. There is no way you can come even close to maxing all of those at the same time.

The theory of course is that if you use all of the channels, you are likely using lots of slow spinners, or have SSD's which don't all hit max burst at the same time, so it doesn't matter, but if you are configuring several SSD's in some form of RAID this is not going to be true.

Also worth noting here: while you technically have 6Gb/s per SAS connection, which corresponds to 768MB/s, the SATA protocol has a nasty amount of overhead (between 25% and 30%), resulting in actual max performance over SATA falling somewhere in the high 500s MB/s.
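Same math as a plug-in-your-own-numbers sketch (per-lane figures as above):

# effective ceiling = whichever is smaller, PCIe bandwidth or total SAS bandwidth
def ceiling_mb_s(pcie_lanes, pcie_mb_per_lane, sas_ports, sas_gb_per_port):
    pcie = pcie_lanes * pcie_mb_per_lane
    sas = sas_ports * sas_gb_per_port * 1024 / 8
    return min(pcie, sas), pcie, sas

print("SAS2008, x8 Gen2:", ceiling_mb_s(8, 500, 8, 6))   # PCIe limited at 4000 MB/s
print("SAS2308, x8 Gen3:", ceiling_mb_s(8, 985, 8, 6))   # SAS limited at 6144 MB/s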
 
for now I said screw it... I removed my 10gb nic and put the dell perc 310 in... got 4x 2tb in.. need power adapters then will get 4x 2tb more added...
I gotta ebay some stuff to get $ to then get 2x 16i cards... I figure 32 on hba plus 6 onboard is more than enough....for now... lol

so I ran out of drive letters in windows.. seems stablebit drive pool doesn't require letters.. so guess ill remove them all later.. going to bed now hah...

View attachment 182149


Yikes, you are using Windows for this? :eek:
 
gotcha. been using this for years and so far all good... i know it's not as fast as raid levels...
There are definitely better solutions out there, but it's also tough to move to something else when what you have works; I know how you feel.
However, it's good to get out of your comfort zone sometimes. There are a lot of options for your use case, even if it seems daunting.
Frankly though, for how you have it set up, the SAS2008 controllers might be sufficient, as you likely wouldn't hit more than a couple of SSDs at a time.

To expand on SAS2008 vs SAS2308: that's just the underlying chipset the RAID card uses. As Zarathustra[H] already explained, you can only do about 6x SSDs of simultaneous usage on the SAS2008 controllers, which include the H310s, H200s, and M1015 (there's a list of them here).
The 2308s are the 9207-8i, H220, etc. that can handle 16 simultaneous SSDs. The key with the multipliers/expanders is that they act as a switch, so make sure your controller can handle all the drives connected to the expander.
For the cost, your best bang for the buck while still maxing out the SSDs would be to pick up an HP H220 (9207-8i equivalent) for about $75 and an IBM SAS expander for about $40 (I recommend Art; he's on the forums here and thoroughly tests the stuff he sells).
 
Here's my mini-showcase. I haven't done a lot of cleanup yet and only have 11 of the drives and the SSD in it currently.


Edit: also, apparently I have Parkinson's; I didn't realize the pictures were that shaky.
 

i have 2 - hp 220s as noted a few posts up.. so guess im good there. you mention an ibm sas expander.. is that pci-e? googled for a minute and haven't found that answer...

so to confirm... i'd buy the expander... plug both ports of my h220 into the expander and I would gain 16 ports... but i'd lose 8?
 
Correct, think of it as a splitter that shares the controller bandwidth so you can connect more drives; it doesn't actually use any bandwidth on the PCIe slot though, it just uses that for power.
There are some Intel ones that offer molex power instead of PCIe (model RES2CV240), but they cost about 4x as much, so if you have a spare slot that can power it, that's the ideal option for cost.
There are also models with 8 connectors instead of 6 (2 in, 6 out), but again, cost.
 
over 400 when i have added rest of the hdds i can add at this point..

doin ssd cache? I just saw an nvme slot on my mobo (friend gave me mobo recently and cpu/memory..) I hit him up and he didn't know it was there..
so now I have stablebit use the ssd cache plugin on the nvme... so cache before sata...
 
The 2308s are the 9207-8i, H220, etc. that can handle 16 simultaneous SSDs. The key with the multipliers/expanders is that they act as a switch, so make sure your controller can handle all the drives connected to the expander.

I'd challenge this. It can handle the SSD's at full speed, but not if they are all being maxed out at the same time.

The SAS 2308's have 8x 6Gb/s ports and are connected to the host via x8 Gen3 PCIe.

The PCIe Bandwidth is going to be 8*985MB/s = 7880MB/s

The SAS bandwidth is going to be 8 * 6Gb/s = 48Gb/s = 48 * 1024 / 8 = 6144MB/s

So, it looks like you'll be limited by SAS bandwidth if you use more than 8 drives on one of these. With 16 SSD's all fully loaded at the same time, the most you can hope for is ~275MB/s per drive because your bottleneck will be the SAS controller.

Now, will you really notice? Really depends on your workload.
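(Rough arithmetic behind that figure: 6144 MB/s shared across 16 drives is ~384 MB/s raw each, and taking off the 25-30% SATA overhead mentioned earlier lands you in the ~270-290 MB/s range.)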


There are many different brands of expanders. The HP ones used to be very popular because they had 9 ports in total, 1 external and 8 internal, and supported dual linking to the controller. They are also very cheap (seriously, lots of them on eBay for under $15). The downside of those is that, despite supporting 6Gb/s in SAS, when you connect a SATA drive they drop down to 3Gb/s.

The IBM one mentioned above I am not familiar with, but just reading up about it, it seems like a good bang for the buck.

Another one that is very good and popular is the Intel RES2SV240. It has an optional molex power connector, is pretty highly regarded, but also costs more.

So, let's be very clear about what SAS expanders are and what they do:

The PCIe connector is for power only. They do not communicate with the host at all. They should be invisible to the host and its drivers. The SAS controller and the expander handle everything and just present the drives to the host. Some people, if they don't have a spare PCIe slot, will power these with molex-to-PCIe adapters just for power, and then screw or tape them to the side of the case somewhere (being careful not to short anything to the metal of the case). The Intel RES2SV240 is actually designed with this in mind, which is why it has that molex connector. You unscrew the PCIe bracket and use standoffs to mount it to the side of the case, powering it via the molex connector.

So, you connect the SAS expander to your SAS controller using one or two SAS cables (usually SFF-8087, but newer parts also use SFF-8643; check to make sure you get the right cables). Then connect your drives to the SAS expander. If both your SAS controller and your SAS expander support it, you can run two SAS cables from the controller to the expander in so-called "dual linking" mode, which allows for greater bandwidth. If you are using SSD's, this is probably what you want to do.

Think of it essentially like an Ethernet switch. Plug your router in on one end and you get more ports for various devices, but you are still going to be limited by your upstream bandwidth.
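Rough picture of the hookup described above, with the expander powered from a spare slot or molex:

  [SAS HBA, PCIe x8] ==(1 or 2x SFF-8087, dual link if supported)==> [SAS expander] ---> remaining ports to drives
                                                                      (slot/molex = power only, no data)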
 
doin ssd cache? I just saw an nvme slot on my mobo (friend gave me mobo recently and cpu/memory..) I hit him up and he didn't know it was there..
so now I have stablebit use the ssd cache plugin on the nvme... so cache before sata...


Yes i do.. atm 2x 860 evos on the mobo sata connectors in raid, but those are too slow so i gotta move those to the sas card and maybe add 1-2 more ssds for the 10 gbit network..
 
my case came in and WOW.. so easy to work with and so clean... well made and I love it. ...

Hey TeleFragger, what are the HDD/CPU temps like in this box?

I assume you have 2x 140mm intake fans and the CPU fan attempting to pull air up and out of the case. Good to see the adapter has a dedicated fan.
 
Spartacus09 - THX DUDE!!!! 4 cages ordered....
I was going to follow this DIY but hell DIY $3 or Corsair $54...



so I will have to check. haven't checked that.... I believe the mobo software will tell me.. but do you have a good utility to check with?

and yes.. standard 3 fans only that came with it....
 


Damn, that is sexy power spacing and connectors on that stack of 2.5" drives. I need to look at doing a custom cable like that; all my stuff is just shoved in there currently.

I did actually end up scrapping the enterprise SSDs for generic 860 Evos so I could use the internal SSD backside trays on the 750D.
They may have insane endurance, but the heat and thickness were too much; they can go in my backup unRAID box.
 

I was looking to use that for SSD's, thus not worried about heat... still wonder if I should have gone that route.. but for $10 each... for OEM cages for the future.. if I ever need to sell it.. doubt it...

so to do the power like that, I found the connectors for like $1-$2 each... so more expensive doing diy vs buying the 1 male to 5 female...
 
I'm doing all of the above, but using the thinner normal SSDs lets me fit more disks in before I have to add my DAS; I should be able to get at least 17 of my data disks in (possibly all 20) instead of only 12-15.
 

using hw monitor...

System 93F
CPU 119F
Hard drives - from 87F to 140F

fans running at 700rpm..
 

Side note: if you keep the 2.5" caddies on the back side, I recommend right-angle SLIM SATA connectors and power connectors; it makes the fitting a lot easier back there.
140F~60c :hungover:
All my drives so far sit in the 35-45c range (45c is only during the parity check and the highest I've seen on the drive that has the least airflow, most are 39-42c), tbf they sit idle most of the time and are the 5400rpm drives except the parity.

My front fractal design fans are 1000rpm though the noctua on the cpu and in the enclosure are 1200.

My System and CPU temps are about the same though (slightly lower generally depending on load).
 
well it's my fault. i got those drives stacked in the case. i put another fan blowing on them and it's lowering the temp
when my bays come in.. i hope the correct spacing helps with temps.
 

Mine are at about 35-37C no matter what they are doing. All air intake is over the drives, so if the drives are working hard, the system temp goes up, which results in faster fan speeds.


I agree that the 30-40C range is probably the safest for long-term survival. That said, there have been many datacenter papers suggesting that drive temperature has a very poor correlation with drive reliability, so it may not matter all that much. Then again, I don't recall those studies actually testing any temps above 40C...

You should check the operating temperature spec for the drives you use on the manufacturer's page.

I've only used two types of drives in recent years: 4TB WD Reds and 10TB Seagate Enterprise drives.

Both of these drives have a specified max operating temperature of 60C.
 

Most drives are rated to 60c, consumer and enterprise alike.
That being said, most of the data and stats I've read (mainly on enterprise drives) showed no increased failure rate as long as the drives stay under about 50c, with only a small increase in failures above 50c (personal preference of under 45c for me though).

I use the WD red and white shucks, with good luck and no failures so far in 3y (only 20 drives currently, so a small sample size).
 
logged into corsair and looked at my order.. i was billed $53 so this made no sense ...4 bay kits..

Screenshot_20190912-170833_Chrome.jpg


so i clicked tracking...

wooooot
but im at the movies...lol

Screenshot_20190912-170858_Chrome.jpg
 