[H]ard Forum Storage Showoff Thread

Ugh, I said garage. :banghead: That's a good-sized space, and the cooling effect of being underground probably helps keep the temperature in check then. Surprised it's 85F in there though--that's pretty hot!

Well, the servers may contribute to that temperature, but there is also the gas furnace. It's not running as much in the summer as in the winter, but it still fires up every now and then to maintain the hot water tank.

The basement tends to stay cool for several days after the summer heat hits here, but it doesn't last forever; the temperature eventually creeps up.
 
Can't wait to see that--37 drives is amazing. I thought we did it big back in the late 1990s with a case similar to this one by California PC Products:
View attachment 181098
Ours has drive bays all the way to the floor, a total of 14 half-height 5.25" bays. This was a Cyrix P166 setup back in the day with a Mylex DAC960SUI SCSI-to-SCSI RAID controller running RAID 5 and Seagate Cheetah drives. It would do 10MB/sec raw in DOS. I still have the system and need to get the Supermicro motherboard repaired; it still booted until the battery must have leaked and damaged the board. :(

dude.. best system ive ever had..
i had a cyrix 6x86 p166
adaptec aha-2924 or something like that.
14x 1gb scsi uw drives running dr dos 7 and wfw 3.11 lol

i remember having a full tower.. was huge. how tall is your case?
 
So far, one of the better options I've seen for itx form factor combined with hot swap bays:

http://www.istarusa.com/en/istarusa/products.php?model=S-917

7x 5.25" bays, so depending on hard drive size, 10-12 drives minimum.
Nice! Using those 2x 2.5" to 1x 5.25" cages you could easily put 28 drives that way. :eek:

And with 2TB 2.5" Seagates pretty common and 4TB SSDs common (but pricey), that's 56TB/112TB in that little box. :eek:
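Quick napkin math on that, just to show where the raw numbers come from (a sketch only, assuming all 28 slots get filled and ignoring any redundancy):

```python
# Raw capacity for a fully loaded 28-slot build; the 2TB spinner / 4TB SSD
# sizes are the figures from the post, and redundancy isn't accounted for.
slots = 28
for label, size_tb in (('2TB 2.5" HDDs', 2), ('4TB 2.5" SSDs', 4)):
    print(f"{slots} x {label} = {slots * size_tb} TB raw")
```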
 
Well, the servers may contribute to that temperature, but there is also the gas furnace. It's not running as much in the summer as in the winter, but it still fires up every now and then to maintain the hot water tank.

The basement tends to stay cool for several days after the summer heat hits here, but it doesn't last forever; the temperature eventually creeps up.
Yeah a gas furnace will do it for sure. We're lucky that all we have is a commercial gas water heater with recirculation pump (same setup we would put in hotels), so it's really efficient and doesn't need to be on that often. It's also in its own little 'storm shelter' which is basically just the elevator pit.

Oh, the heat hits hard here--to the point that I've had to turn servers off this year after an AC finally gave out. I'll be able to turn them back on in the winter, but I do have to watch out for runaway temps: once the room goes over 74F, the already-warm air going into the systems comes back out even hotter, the BTUs compound, and it will hit 90F before I know it (and then something will pop a cap and die :().
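To put rough numbers on how fast that compounds: every watt the servers draw turns into heat in the room. Here's the back-of-the-envelope conversion (the 800W figure is just an example load, not my actual draw):

```python
# Every watt a server draws ends up as heat in the room: 1 W ~= 3.412 BTU/hr.
# The 800 W below is an example load, not a measured figure.
server_watts = 800
btu_per_hr = server_watts * 3.412
print(f"{server_watts} W ~= {btu_per_hr:.0f} BTU/hr of heat")
print(f"That's ~{btu_per_hr / 12000:.2f} tons of AC just to break even")
```

So once the AC is out, even a modest server load keeps pumping a few thousand BTU/hr into a closed room, which is why it runs away so quickly.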
 
dude.. best system ive ever had..
i had a cyrix 6x86 p166
adaptec aha-2924 or something like that.
14x 1gb scsi uw drives running dr dos 7 and wfw 3.11 lol

i remember having a full tower.. was huge. how tall is your case?
Ah yes! Your system sounds like a twin of ours! The Adaptec 2940UW was probably it, or maybe the 3940UW like we had if you ran dual channels. That's a sick number of drives! What RAID level did you run? Ours also ran DOS 6.0/Win 3.1 with 3x 9GB UW Cheetah drives, and it screamed! It was solid at multitasking too, even though it was time slicing--you could play an MP3, scan at 600dpi, write a CD, and copy a file all at the same time. I wish I had the files with the original specs on the beast, but they're on the beast, lol.
 
Yeah a gas furnace will do it for sure. We're lucky that all we have is a commercial gas water heater with recirculation pump (same setup we would put in hotels), so it's really efficient and doesn't need to be on that often. It's also in its own little 'storm shelter' which is basically just the elevator pit.

Oh, the heat hits hard here--to the point that I've had to turn servers off this year after an AC finally gave out. I'll be able to turn them back on in the winter, but I do have to watch out for runaway temps: once the room goes over 74F, the already-warm air going into the systems comes back out even hotter, the BTUs compound, and it will hit 90F before I know it (and then something will pop a cap and die :().

Where is "here"?


I was born in the U.S. but grew up in Scandinavia. Long cold winters, sure, but the summers were pretty dry and tolerable.

When I later moved back to the U.S. and settled in New England, I had no idea I was moving into a humid jungle climate in the summer. Spring, fall and winter are pretty OK here, but summer is pretty goddamned brutal.
 
Where is "here"?


I was born in the U.S. but grew up in Scandinavia. Long cold winters, sure, but the summers were pretty dry and tolerable.

When I later moved back to the U.S. and settled in New England, I had no idea I was moving into a humid jungle climate in the summer. Spring, fall and winter are pretty OK here, but summer is pretty goddamned brutal.
Ah, yes, I couldn't have been more vague, right? :ROFLMAO: "Here" is Alabama--it hit a 110F heat index earlier this week. :dead: To give you some idea of the comparison, I would probably consider your humidity bone dry. Here you're drenched in your own sweat 5 minutes after you leave air-conditioned space. Only the tropics are worse imo.

Ah that sounds like a wonderful place to grow up. :) I don't mind snow when everyone is used to it and therefore life is safe. It's unreal how dangerous it gets here in the US when it snows and life must continue as normal.
 
working on friends' computers, I get to keep all the old hardware as they don't want it... with that I have a ton of drives in different sizes...

I was googling and landed on a site where a guy used this same case, but he used paper to label his drives, and he said he should have used a labeler... was this you? did you label your drives with a labeler later???

my machine is just a file server for family pics/movies, family data, plex movies and dvr tv shows on a windows server 2019 box...

anywho... this case seemed to intrigue me and I almost went his way... but then I was looking at the Deep Silence 5, which I figured I would get instead; however my OCD went even farther and I landed on my next case...
hope ebay links are allowed
https://www.ebay.com/itm/264403896522

Corsair Obsidian Series 750D CC-9011078-WW Airflow Edition Computer Case Black

specs say..
Case Drive Bays 3.5" 6
Case Drive Bays 2.5" 10

but I'm not sure how you get 10x 2.5" in it... but doesn't matter.. (re-reading, I think 6 in the 3.5" bays and 4 hidden behind the side panel)

trying to find the optional 3-drive bay cage for this case (3 of them; I can see 2 will fit but I'm hoping 3 will too...)... I think the 450 series all the way up to the 750D are the same...
but with that I can get (if I can get the 3x cages added)...

15x 3.5"
18x 2.5" (2x icy dock 6x2.5" owned - need to buy 1 more)
_____________
33 drives

BUT another killer thing about this case.. there are also 4 spots for ssd behind the side compartment where all the cables run so...
33 + 4 = 37 drives.....

I am running Server 2019 now with StableBit DrivePool and love it. I get that it isn't real RAID and isn't gonna be the best performance, but the SSD pool alone is not bad. I have 8x 500GB SSDs now and my plans are as follows:
3x 4tb - 3.5"
12x 2tb - 3.5"
12x 500gb - 2.5" ssd
_________________
27 drives and room to grow....

ive got 1tb drives and other stuff too.. so who knows.
since DrivePool allows you to add drives of any size to the pool, it works well...

the case should be here by next Friday, but I'm so busy with my kid in travel soccer, and both of us in MMA, that it doesn't leave much time after work... and we're going on vacation, so it'll be some time till I can get to it...
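For anyone curious how the mixed sizes pan out under DrivePool, here's some rough napkin math for the planned drive list above (a sketch only; the 2x duplication factor is just an assumption for the example, since DrivePool actually lets you set duplication per folder):

```python
# Rough usable-space estimate for the planned mix above under DrivePool.
# DrivePool happily pools mixed-size drives; usable space is roughly the raw
# total divided by the duplication factor (2x assumed here for the example).
planned_drives = {
    '4TB 3.5" HDD':   (3, 4.0),
    '2TB 3.5" HDD':   (12, 2.0),
    '500GB 2.5" SSD': (12, 0.5),
}
duplication = 2  # assumed; in practice set per folder in DrivePool
total_tb = sum(count * size for count, size in planned_drives.values())
drive_count = sum(count for count, _ in planned_drives.values())
print(f"{drive_count} drives, {total_tb:.0f} TB raw")
print(f"~{total_tb / duplication:.0f} TB usable with {duplication}x duplication")
```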

I just bought 3 of them from corsair directly a couple of weeks ago, they go outta stock quick though.
I was checking daily ~ I was actually able to buy them before I got the email saying they came back in stock (took almost 48 hours).
FYI a triple stack does secure to the 5.25" cage, super sturdy ~ the smaller stack though I'm gonna have to secure better somehow, it's a bit wobbly.
I'm planning on using a Supermicro 5x 3.5"-in-3x 5.25" enclosure to get 20x 3.5" internal total.

[image: oDTkqqPl.jpg]
 
I just bought 3 of them from corsair directly a couple of weeks ago, they go outta stock quick though.
I was checking daily ~ I was actually able to buy them before I got the email saying they came back in stock (took almost 48 hours).
FYI a triple stack does secure to the 5.25" cage, super sturdy ~ the smaller stack though I'm gonna have to secure better somehow, it's a bit wobbly.
I'm planning on using a Supermicro 5x 3.5"-in-3x 5.25" enclosure to get 20x 3.5" internal total.

View attachment 181210


damn 5x of those!!!!..
ill keep looking for them.
 
working on friends' computers, I get to keep all the old hardware as they don't want it... with that I have a ton of drives in different sizes...

I was googling and landed on a site where a guy used this same case, but he used paper to label his drives, and he said he should have used a labeler... was this you? did you label your drives with a labeler later???

my machine is just a file server for family pics/movies, family data, plex movies and dvr tv shows on a windows server 2019 box...

anywho... this case seemed to intrigue me and I almost went his way... but then I was looking at the Deep Silence 5, which I figured I would get instead; however my OCD went even farther and I landed on my next case...
hope ebay links are allowed
https://www.ebay.com/itm/264403896522

Corsair Obsidian Series 750D CC-9011078-WW Airflow Edition Computer Case Black

specs say..
Case Drive Bays 3.5" 6
Case Drive Bays 2.5" 10

but I'm not sure how you get 10x 2.5" in it... but doesn't matter.. (re-reading, I think 6 in the 3.5" bays and 4 hidden behind the side panel)

trying to find the optional 3-drive bay cage for this case (3 of them; I can see 2 will fit but I'm hoping 3 will too...)... I think the 450 series all the way up to the 750D are the same...
but with that I can get (if I can get the 3x cages added)...

15x 3.5"
18x 2.5" (2x icy dock 6x2.5" owned - need to buy 1 more)
_____________
33 drives

BUT another killer thing about this case.. there are also 4 spots for ssd behind the side compartment where all the cables run so...
33 + 4 = 37 drives.....

I am running Server 2019 now with StableBit DrivePool and love it. I get that it isn't real RAID and isn't gonna be the best performance, but the SSD pool alone is not bad. I have 8x 500GB SSDs now and my plans are as follows:
3x 4tb - 3.5"
12x 2tb - 3.5"
12x 500gb - 2.5" ssd
_________________
27 drives and room to grow....

ive got 1tb drives and other stuff too.. so who knows.
since DrivePool allows you to add drives of any size to the pool, it works well...

the case should be here by next Friday, but I'm so busy with my kid in travel soccer, and both of us in MMA, that it doesn't leave much time after work... and we're going on vacation, so it'll be some time till I can get to it...

I use a 750D for my desktop. I don't use any of the drive cages though. I have two PCIe SSDs and that's it.

For storage servers I have quickly found that life is too short to not use something with a hot-swappable backplane.

Considering how cheaply you can get a 12-24 bay 2U-4U barebones server used online these days, I just don't bother with consumer hardware anymore for my servers.
 
I was googling and landed on a site where a guy used this same case, but he used paper to label his drives, and he said he should have used a labeler... was this you? did you label your drives with a labeler later???
Nope, not me. Up till this recent drive install I was doing a thing where I had to pull each drive out and look at the fine text, which was tedious and time-consuming. Then I realized "wait a second, I have a label maker" and just went from there.
 
Nope, not me. Up till this recent drive install I was doing a thing where I had to pull each drive out and look at the fine text, which was tedious and time-consuming. Then I realized "wait a second, I have a label maker" and just went from there.


I used to do that as well back when I used a desktop case. (I used to use an NZXT Source 210: a large number of bays for a very low price.) It was a pain in the ass to unscrew drive by drive looking for the right serial number when one died, so I labeled them all with my label maker.

These days I just keep a spreadsheet of disk serial numbers and which bay of the backplane each one is in.
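If anyone wants to generate the serial-number half of that spreadsheet automatically, a little script like this works on a Linux box with smartmontools installed (just a sketch; the device glob, the parsing, and the CSV name are assumptions, and the bay column still has to be filled in by hand since the OS has no idea which physical slot a disk sits in):

```python
#!/usr/bin/env python3
# Sketch: dump each disk's model and serial number to a CSV so the bay each
# drive sits in can be noted next to it. Assumes Linux + smartmontools
# (smartctl); the /dev/sd? glob and the output filename are just examples.
import csv
import glob
import re
import subprocess

rows = []
for dev in sorted(glob.glob("/dev/sd?")):
    info = subprocess.run(["smartctl", "-i", dev],
                          capture_output=True, text=True).stdout
    model = re.search(r"Device Model:\s*(.+)", info, re.I)
    serial = re.search(r"Serial Number:\s*(.+)", info, re.I)
    rows.append([
        dev,
        model.group(1).strip() if model else "unknown",
        serial.group(1).strip() if serial else "unknown",
        "",  # bay number: fill in by hand, the OS can't see the physical slot
    ])

with open("drive_bays.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["device", "model", "serial", "bay"])
    writer.writerows(rows)
```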
 
damn 5x of those!!!!..
ill keep looking for them.
Well it comes with 2 of’em :).

I use a 750D for my desktop. I don't use any of the drive cages though. I have two PCIe SSDs and that's it.

For storage servers I have quickly found that life is too short to not use something with a hot-swappable backplane.

Considering how cheaply you can get a 12-24 bay 2U-4U barebones server used online these days, I just don't bother with consumer hardware anymore for my servers.
Problem is, the reasonably priced ones are generally loud af and not very power efficient unless you put quite a bit into modding and/or replacing fans.
I looked at the Norco/Chenbro route as well as the Supermicro 4U.
The redundant PSU is the only real miss, but that's semi-negligible with a good PSU and BBU.
I was patient and picked up the Corsair case for $35 off Craigslist plus another $40 for the cages, and the 5x drive enclosure for $50 on eBay.
I agree on the non-consumer aspect; I'm using a Supermicro X10 board and Dell SAS cards, but the affordable enterprise stuff just generally isn't intended to be quiet, unfortunately.
The full-size case is a lot lighter and more manageable than the racked server too (even though I still have a half-rack enclosure in my office).
For my use case, running unRAID where I have to stop the array anyway, hot swap is at best a minor convenience; I'm sure others might get more benefit out of it.
Just laying out my 2 cents and my specific use-case justifications.
 
Well it comes with 2 of’em :).

yeah but still u landed 3 more!!!!
good job. i got an email into them...

Problem is, the reasonably priced ones are generally loud af and not very power efficient unless you put quite a bit into modding and/or replacing fans.
I looked at the Norco/Chenbro route as well as the Supermicro 4U.
The redundant PSU is the only real miss, but that's semi-negligible with a good PSU and BBU.
I was patient and picked up the Corsair case for $35 off Craigslist plus another $40 for the cages, and the 5x drive enclosure for $50 on eBay.
I agree on the non-consumer aspect; I'm using a Supermicro X10 board and Dell SAS cards, but the affordable enterprise stuff just generally isn't intended to be quiet, unfortunately.
The full-size case is a lot lighter and more manageable than the racked server too (even though I still have a half-rack enclosure in my office).
For my use case, running unRAID where I have to stop the array anyway, hot swap is at best a minor convenience; I'm sure others might get more benefit out of it.
Just laying out my 2 cents and my specific use-case justifications.

picked the case up for $35 off CL? what was the catch? i got mine.. open box with a ding in the back... $119 shipped...
 
Idk how that dude made do with...
I looked for that Lian Li case too, but couldn't find it anywhere. I got my idea from EniGmA1987, from his post here:
You could grab a Corsair 750D and get 12 drive bays without modification. Or use some snips and modify the case and fit 24 drives into it

View attachment 165472
(not my pic but I also have the same tower and drive setup)

The Corsair was cheaper, much more readily available, and easier to get parts for from the Corsair store.
My hope was to use the 5.25" bays and only get 20 drives instead of 24 so I wouldn't have to mod the case, and it worked out perfectly.
I’ll be doing a DAS to get to a full 30 drives for my unraid build.
 
Idk how that dude made do with...
I looked for that Lian Li case too, but couldn't find it anywhere. I got my idea from EniGmA1987, from his post here:


The Corsair was cheaper, much more readily available, and easier to get parts for from the Corsair store.
My hope was to use the 5.25" bays and only get 20 drives instead of 24 so I wouldn't have to mod the case, and it worked out perfectly.
I’ll be doing a DAS to get to a full 30 drives for my unraid build.

enigma1987 helped me thoroughly with my 10gb network... very helpful.

there's no link in your post to your idea...
 
I just bought 3 of them from corsair directly a couple of weeks ago, they go outta stock quick though.
I was checking daily ~ I was actually able to buy them before I got the email saying they came back in stock (took almost 48 hours).
FYI a triple stack does secure to the 5.25" cage, super sturdy ~ the smaller stack though I'm gonna have to secure better somehow, it's a bit wobbly.
I'm planning on using a Supermicro 5x 3.5"-in-3x 5.25" enclosure to get 20x 3.5" internal total.

View attachment 181210
Nice! How do you plan to keep it cool enough? 20x 3.5" is a lot of heat. :eek:
 
enigma1987 helped me thoroughly with my 10gb network... very helpful.

there's no link in your post to your idea...
My bad, the picture is what made the idea click, and it was directly included in the quote, but if you click on the quote it takes you to that post. I shoulda posted it directly though: https://hardforum.com/goto/post?id=1044218286#post-1044218286
picked the case up for $35 off CL? what was the catch? i got mine.. open box with a ding in the back... $119 shipped...
The catch was that it didn't come with any fans and it was a bit dusty (however, unlike eBay I didn't have to pay for shipping, and I get to choose which fans I want, so it's a win-win). Everything else was in perfect condition, including all the covers, cables, dust filters, etc.
Since I was ordering the drive cages anyway, it was only an extra $4 to get the missing screws and misc accessories.
I would have replaced all of the fans anyway; the Corsair stock ones in the 750D tended to have great static pressure but were a bit loud, and I'm aiming for sub-30dB.
Nice! How do you plan to keep it cool enough? 20x 3.5" is a lot of heat. :eek:
1) It's primarily for Plex media storage running unRAID, so 95% of the time the drives will be spun down, except when serving media or running a parity check twice a month.
2) I'm using WD white-label shucks, which are cool-running 5400 RPM drives; when they are in use, the 140mm fans in the front are pretty sufficient.
3) The Supermicro 5x enclosure (model CSE-M35T-1B) has a fan slot in the back that cools it, and I replaced the stock fan there with a quiet 92mm Noctua.

I know the enclosure keeps the drives under 35C running full load.
The build is still in progress overall though, as the system is sitting in a Rosewill Nighthawk 117 (also CL for $35 :D, it'll become a second backup unRAID box eventually).
So the cooling is all theoretical but calculated; I used some spare 140mm Fractal Design GPs that I had in the front for the main 9 drives.
If there's room I'll aim to mount a couple of 120mm fans to pull air through the smaller tower (or maybe just rely on the 140mm in front).
I have two PCIe brackets that I'll be mounting 120mm or 140mm fans to, to pass air over the mobo LSI heatsink and the SAS/10Gb SFP+ add-on cards (still working that out); that will likely help pull air over that smaller tower as well.
Unsure which fans I'll go with; I like the Fractal Design ones, but I may just end up with Noctuas as I'm using an NH-U12DX i4 for the CPU and aiming for quiet. It all depends on the temps I get with the stack of generic fans I'll be using to test.


Edit:
Btw, this is what I'm doing for the 4x cache drives and additional 10 drives to get it to 30 data drives total (still need one more 5x 3.5" enclosure hence the hole, but already have the LSI 9206-16e):
[image: b7WDprgl.jpg]
 
Huh, never noticed this thread here before. Thanks for the tag, and good to see you again Spartacus09 and TeleFragger!

Here is a current picture of my setup:
[image: PLEXserver.jpg]



That is a Corsair 750D case with seven 8TB drives, ten 10TB drives, five 12TB drives, and a 6TB drive on its own for now. I am going to be migrating this whole system into a BackBlaze storage pod I got sometime soon, and that will raise my capacity to 45 drives! I already have the new PSU for the storage pod, and an RTX 4000 ready to go in as well. Just kinda waiting on the Ryzen 3k Threadrippers to come out before I do the move. Hopefully the rumors are true and they will be out within a couple months.
I used a mix of Seagate IronWolf and Exos drives for the 12TBs, an HGST 10TB drive, and then Western Digital "shucks" for the rest of the 8TB and 10TB drives.
 
Huh, never noticed this thread here before. Thanks for the tag, and good to see you again Spartacus09 and TeleFragger!

Here is a current picture of my setup:
View attachment 181595


That is a Corsair 750D case with seven 8TB drives, ten 10TB drives, five 12TB drives, and a 6TB drive on its own for now. I am going to be migrating this whole system into a BackBlaze storage pod I got sometime soon, and that will raise my capacity to 45 drives! I already have the new PSU for the storage pod, and an RTX 4000 ready to go in as well. Just kinda waiting on the Ryzen 3k Threadrippers to come out before I do the move. Hopefully the rumors are true and they will be out within a couple months.
I used a mix of Seagate IronWolf and Exos drives for the 12TBs, an HGST 10TB drive, and then Western Digital "shucks" for the rest of the 8TB and 10TB drives.

how do you currently get all those powered? power expanders or power supply have enough?
 
how do you currently get all those powered? power expanders or power supply have enough?
Depending on your wattage, current-gen PSUs come with 1-2+ Molex lines and 2-3+ SATA lines.

Since I'm using the shucked drives to avoid the 3.3v issue I have several of these: https://www.performance-pcs.com/premium-ribbon-wire-4-pin-molex-to-3x-sata-adapter-cable-black.html
I plan to get a couple of sata to molex adapters as well so not all my hard drives are on a pair of cables.
I'm considering purchasing an additional molex modular cable so I don't need as many conversions/adapter cables as well.
 
Depending on your wattage, current-gen PSUs come with 1-2+ Molex lines and 2-3+ SATA lines.

Since I'm using the shucked drives to avoid the 3.3v issue I have several of these: https://www.performance-pcs.com/premium-ribbon-wire-4-pin-molex-to-3x-sata-adapter-cable-black.html
I plan to get a couple of sata to molex adapters as well so not all my hard drives are on a pair of cables.
I'm considering purchasing an additional molex modular cable so I don't need as many conversions/adapter cables as well.

hmm i gotta find my adapters. i have the 2009 620hx or something modular power supply. bought it when i built my q6600...
im running a few sata adapters like that. guess i gotta find my molex ones too....
nice thing on my icy dock 6x 2.5 is they only take 2x sata... so 4 for 12 drives...
 
how do you currently get all those powered? power expanders or power supply have enough?

My PSU has four SATA power output cables with four connectors on each (I had to buy an extra SATA one direct from the Corsair store when I got more drive cages), so that covers the majority of the drives; then I use a few of these splitters to get more connectors for the rest of the drives:
https://www.amazon.com/gp/product/B0086OGN9E

My PSU is fully modular, so I didn't plug in any of the Molex cables; I just used all 4 output connections to plug SATA cables in and then used the SATA splitters.


For the new case I will be moving to, I had to get a new Corsair HX1200 PSU, since it is one of only three power supplies I could find with 30A on the 5V rail to power 48 hard drives and the rest of the things in the system. The new case has a bunch of backplanes that each take one power input and split it to the five drives that plug into that backplane.
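For anyone wondering why the 5V rail is the limiting factor with that many spinners, a rough current budget looks like this (the per-drive figure is an assumed typical value, not from any specific drive's datasheet; check your own drives' labels):

```python
# Rough 5V rail budget. A 3.5" drive typically pulls on the order of 0.5-0.7A
# from the 5V rail while running (the spin-up surge lands mostly on 12V).
# The 0.6A figure is an assumed typical value, not a datasheet number.
drives = 48
amps_5v_per_drive = 0.6   # assumption; check the label on your own drives
total_amps = drives * amps_5v_per_drive
print(f"{drives} drives x {amps_5v_per_drive} A = {total_amps:.1f} A on 5V")
print(f"That's {total_amps * 5:.0f} W from the 5V rail alone, before fans/SSDs")
# -> just under the 30A figure mentioned above, which is why the rail rating matters
```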
 
My PSU has four SATA power output cables with four connectors on each (I had to buy an extra SATA one direct from the Corsair store when I got more drive cages), so that covers the majority of the drives; then I use a few of these splitters to get more connectors for the rest of the drives:
https://www.amazon.com/gp/product/B0086OGN9E

My PSU is fully modular, so I didn't plug in any of the Molex cables; I just used all 4 output connections to plug SATA cables in and then used the SATA splitters.


For the new case I will be moving to, I had to get a new Corsair HX1200 PSU, since it is one of only three power supplies I could find with 30A on the 5V rail to power 48 hard drives and the rest of the things in the system. The new case has a bunch of backplanes that each take one power input and split it to the five drives that plug into that backplane.

whats this new case you speak of?
 
I've seen those in the past. I've always been amazed they can get enough air through those to adequately cool them. The fans must be high speed (and loud!)

Where does the server itself reside? Does it connect via an external SAS connector?
There's enough room in those pods for a full-size ATX board; it's all internal, just really long.
 
Nice! How do you plan to keep it cool enough? 20x 3.5" is a lot of heat. :eek:
I finally got most of the system moved into the case (I'll have some pics later), using the 2x 140mm Fractal Design GPs in front, 1x 140mm generic Thermaltake fan blowing on the mobo/add-on cards, and my Noctua fan on the CPU pointing upwards. (No exhaust yet, but the top is open and I can feel heat venting well.)
Running a parity check now, which is the most intensive thing it'll do at any point; 2 hours in I haven't seen over 44C on any of the drives.
About half are 38-40C, the other half are 41-44C; I imagine that's down to how much air is blowing through each drive enclosure (side note: the second cage tower does get airflow from the front fans, I can feel the air coming out of it).
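If anyone wants to spot-check temps the same way outside of the unRAID dashboard, a quick smartctl loop does it (a sketch only, assuming Linux with smartmontools installed; the device glob and the column parsing are assumptions that may need tweaking for your particular drives):

```python
#!/usr/bin/env python3
# Sketch: print the current temperature of every drive using smartmontools.
# Assumes Linux with smartctl in PATH; the /dev/sd? glob and the column
# parsing below are assumptions and may need adjusting per drive model.
import glob
import subprocess

for dev in sorted(glob.glob("/dev/sd?")):
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    # Quick-and-dirty parse of the "194 Temperature_Celsius" attribute line;
    # on most ATA drives the 10th field is the raw value in degrees C.
    fields = next((line.split() for line in out.splitlines()
                   if "Temperature_Celsius" in line), [])
    temp = fields[9] if len(fields) > 9 else "n/a"
    print(f"{dev}: {temp} C")
```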
 
Depending on your wattage, current-gen PSUs come with 1-2+ Molex lines and 2-3+ SATA lines.

Since I'm using the shucked drives to avoid the 3.3v issue I have several of these: https://www.performance-pcs.com/premium-ribbon-wire-4-pin-molex-to-3x-sata-adapter-cable-black.html
I plan to get a couple of sata to molex adapters as well so not all my hard drives are on a pair of cables.
I'm considering purchasing an additional molex modular cable so I don't need as many conversions/adapter cables as well.


Could you expand on or link to the 3.3v issue?

Do shucked drives have different electronics that don't draw 3.3v?
 
Could you expand on or link to the 3.3v issue?

Do shucked drives have different electronics that don't draw 3.3v?
Western Digital "shucked" drives are built to a very modern spec, which means they use the 3.3V power-disable pin from the enterprise world; the drive won't spin up while power is present on that pin. To get around this, you either tape the pin off so it doesn't get power (that's what I do), cut the 3.3V wire in the SATA power cable, or use a Molex adapter that carries no 3.3V at all (what Spartacus09 is doing).
 
Western Digital "shucked" drives are built to a very modern spec, which means they use the 3.3V power-disable pin from the enterprise world; the drive won't spin up while power is present on that pin. To get around this, you either tape the pin off so it doesn't get power (that's what I do), cut the 3.3V wire in the SATA power cable, or use a Molex adapter that carries no 3.3V at all (what Spartacus09 is doing).

Hmm. Why do they do this? Just to discourage shucking?
 
Hmm. Why do they do this? Just to discourage shucking?
It's part of the SATA power specification, so technically they are following the newer spec. Though you are probably right that they follow that particular spec to discourage shucking, since their enclosures' power system doesn't provide 3.3V and normal consumer connectors do.
 
It's part of the SATA power specification, so technically they are following the newer spec. Though you are probably right that they follow that particular spec to discourage shucking, since their enclosures' power system doesn't provide 3.3V and normal consumer connectors do.

Ok.

I guess the part I don't understand is, of what possible value could it be to have a hard drive not spin up if a system has 3.3v power?

Why is this a part of the spec?
 