old server hardware / ebay? / suggestions for building with old server parts

Pocatello

I remember seeing someone post something here about a group of hardcore storage people purchasing cheap but high-end server hardware that is out of date.

Server cases with server CPUs and server HBAs.

I wish I had saved the link.

Any suggestions for what to look for? Where to look for it?

I've got two Norco server cases at home, and I think they were a great deal for the price brand new... but they never impressed me with their build quality. That was about 5 years ago.

I would like to create a server with 200tb of storage capacity.

Thanks in advance!
 
I don't know the link you're referring to, but I've mostly done this.

I've got a Dell-rebadged LSI 9265-8i SAS raid card plugged into an HP SAS expander.

I bought the controller here and I elected to get the $75 one, which is the refurb unit that comes WITH the battery backup.

I bought the SAS expander on eBay a couple years ago, before I knew about the serversupply website. They carry them here and here. Those are slightly different model cards, and to be honest I don't actually know which one mine is or what, if any, differences they have. Please note, SATA drives connected to these will run at 3 Gb/s and not 6 Gb/s, which for 'bulk storage' is absolutely fine, but it's obviously not good for wiring up a bunch of SSDs or whatever.

My stuff is stuffed inside a NORCO 24-bay case, but I've only got 12 drives hooked up right now. But you could easily drive two of those expander cards for 48 drives total capacity, or more if you really need it.

I'll also note that I bought the mid-frame replacement on the RPC-4224 case that removes the 80mm fan mounting brackets and replaces them with 120mm brackets. Coupled with replacing all the fans, this case is now nearly silent, where before it was a small airplane, and everything still stays cool since the 120mm fans are so much more effective. Since it's in my house and not in a datacenter, this matters to me.

And you can slap it all on a server mobo if you want, or really they'll work in anything. Mine is running on a Supermicro X8DTL-iF with 2x Xeon X5680 CPUs and 96 GB of RAM, because I got the CPUs and RAM for free so that was a no brainer. But if all you're looking to do is store a bunch of stuff then this would be an inexpensive method.

Alternatively, if you wanted to use something like FreeNAS you'd want to find just an HBA and not a RAID card (mine is running in RAID), but the rest of the recommendation stands.
 
Oh, and just an FYI: if you end up with a server mobo that requires dual 12v connectors like the X8DTL-iF I got does, but you don't have a power supply with dual 12v connectors, you can use this to convert an unused PCIe power connector to the second EPS 12v connector. I was in this position, and since this was my 'server' I didn't have a video card installed anyways, and the converter was *way* cheaper than the least expensive power supply with dual 12v outputs.
 
I bought my Dell T3500 from Server Supply on their eBay store, and the SCSI controller card I have came from them as well. I've bought other stuff from them too, though I've forgotten what it was. They are my go-to place for parts.
 
Get the following on eBay:

Supermicro 846E16 - $400
Supermicro X9SRH-7F - $180
Xeon E5-2630L - $35
2 x reverse 8087 breakout cables - $15

Add however much RAM you want for about $2-3 per GB. Then get whatever 8TB drives you want at $160-200 each.

All said and done, you'll be looking at about $5000. That's really all there is to it. Pretty simple to be honest.
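
For a rough sanity check on that figure, here's a back-of-the-envelope sketch in Python. The part prices are the ones quoted above; the RAM size is just an example assumption, not a spec:

```python
# Rough sanity check on the ~$5000 figure above. Part prices are the
# ones quoted in this post; the RAM size is just an example assumption.

chassis = 400        # Supermicro 846E16
board = 180          # Supermicro X9SRH-7F
cpu = 35             # Xeon E5-2630L
cables = 15          # 2x reverse SFF-8087 breakout cables

ram_gb = 64                    # assumption: 64 GB at ~$2.50/GB
ram = ram_gb * 2.50

drives = 24                    # the 846E16 is a 24-bay chassis
drive_price = 180              # midpoint of the $160-200 range above
storage = drives * drive_price

total = chassis + board + cpu + cables + ram + storage
print(f"Raw capacity: {drives * 8} TB")        # 192 TB with 8TB drives
print(f"Estimated total: ${total:,.0f}")       # ~$5,100
```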
 
Just beware if you go Blue Fox's route: the onboard RAID controller on the X9SRH-7F is an LSI 2308-based device. It has no cache and no support for parity-based RAID levels (5, 6, 50, 60, etc.). In other words, you'd want to treat it like an HBA and use FreeNAS or something. Otherwise, I generally agree with anything the man says and this is no different.
 
Great! Thanks for the information!

I have a question about this Supermicro motherboard regarding the 8x SAS2 (6Gb/s) ports via Broadcom 2308. http://www.supermicro.com/products/motherboard/Xeon/C600/X9SRH-7F.cfm

Does this motherboard really have 8 SAS ports which can each support 4 SATA drives for a total of 32 drives using the reverse 8087 breakout cables? Am I correct that each breakout cable supports 4 SAS/SATA drives?

Thanks!
 
He's saying reverse breakout cables; they take single-channel SAS inputs and 'collapse' them into a multichannel cable. They have 4x SATA type "inputs" and one SFF-8087 (4-channel multilane SAS) type "output" you'd plug into a backplane.

On that motherboard there are 8 SAS ports, shaped like SATA ports. So you're connecting those 8x single-port SAS connections to two SFF-8087 multilane inputs on the backplane, which acts as a SAS expander to the 24 drive bays. So those 24 drives will share 8 channels of bandwidth. 8 ports on the mobo, converted to 2 ports that plug into the backplane, then expanded to 24 ports for drives to connect to.
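
To put rough numbers on that shared bandwidth (a back-of-the-envelope sketch only; 6 Gb/s is the SAS2 link rate per lane, and the MB/s conversion ignores encoding overhead):

```python
# Back-of-the-envelope bandwidth math for 24 drives behind an expander
# backplane fed by the 8 onboard SAS2 ports described above.

lanes = 8            # SAS ports on the motherboard (LSI/Broadcom 2308)
lane_gbps = 6        # SAS2 link rate per lane, in Gb/s
drives = 24          # drive bays behind the expander backplane

total_gbps = lanes * lane_gbps                 # 48 Gb/s shared uplink
per_drive_gbps = total_gbps / drives           # 2 Gb/s worst case
per_drive_MBps = per_drive_gbps * 1000 / 8     # ~250 MB/s, ignoring overhead

print(f"Shared uplink: {total_gbps} Gb/s")
print(f"Worst case per drive: {per_drive_gbps:.1f} Gb/s "
      f"(~{per_drive_MBps:.0f} MB/s) with all {drives} streaming at once")
# Still more than a typical spinning disk sustains, so for bulk storage
# the expander sharing isn't the bottleneck.
```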

With all of that said, proper SAS controllers work like switches; you can daisy chain stuff. In theory, you could plug those 8 ports on the mobo into a SAS expander, and then plug that SAS expander into 8 more SAS expanders, and so on and so forth until you have 512 or 1024 drives plugged in. Obviously at the end of the day all these drives would be sharing the initial 8-ports worth of bandwidth, but you could technically do it.

Not that you should.
 
Okay. I think I got it. 8 ports on the motherboard connect to 2 inputs on the backplane which is enough for 24 drive bays?
 
Thanks for the clarification. I am thinking back to my two Norco cases. Both can hold 24 drives, but I think one is SATA only and has lots of SATA connections on the backplane that need individual SATA cables coming from somewhere: the motherboard or expansion SATA cards. My new Norco case is more complicated and modern... and maybe more like a standardized case such as the case mentioned above. I think it has SAS and SATA connectors on the backplane.

What I really don't like about my Norco cases is the "dead" spots in the 24 drive bay. I spent lots of time trying to get them running and then realizing I had a problem with the backplane. The other problem was the hard drive caddies didn't work half of the time and I had to seat the drives without the caddy... which makes removal kinda difficult.
 
I would like to create a server with 200tb of storage capacity.

That's a lot of storage. Very expensive and IMHO, not necessarily something you want to go cheap on (because of the size+cost).

A good and supported 108TB 2U storage unit would cost about $32K (that would be with redundant active/active controllers, 10Gbit iSCSI). Same with 216TB would be about $60K.

Why so high? Well, that gets you 3yrs of warranty, drives included and it's renewable. Obviously, you can over purchase your drives and maintain your own replacements. But, you'd have to do that. Otherwise you could put the whole thing at risk, you know?



So... let's say you still want to move forward, you want to maintain your own drive inventories and you want to go ultra-cheap, again, not my recommendation, but here goes:

https://www.supermicro.com/products/system/4U/6047/SSG-6047R-E1R36N.cfm (maybe $1500 - 2000 used)

4TB drives aren't going to be enough and 6TB is almost there... so 36 * $150 USD (I'm assuming you get a good deal on drives) = $5400. Again, if you don't buy a lot of spares, I think you'll be making a huge mistake that could be very costly.
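
To make the drive math concrete, here's a quick sketch; the RAID-Z2 vdev layout is just an example assumption on my part, not something you have to use:

```python
import math

# How many drives it takes to reach ~200 TB usable in that 36-bay
# chassis. RAID-Z2 in 12-wide vdevs is just an example assumption.

target_usable_tb = 200
drive_tb = 6             # the "almost there" size mentioned above
vdev_width = 12          # example layout: 12-drive RAID-Z2 vdevs
parity_per_vdev = 2

usable_per_vdev_tb = (vdev_width - parity_per_vdev) * drive_tb   # 60 TB
vdevs_needed = math.ceil(target_usable_tb / usable_per_vdev_tb)  # 4
drives_needed = vdevs_needed * vdev_width                        # 48

print(f"{drives_needed} x {drive_tb} TB drives "
      f"-> {vdevs_needed * usable_per_vdev_tb} TB usable")
# 48 drives won't fit in 36 bays, which is why 6 TB drives are only
# "almost there" once you budget for parity; 8 TB+ drives (or a second
# chassis) get you to 200 TB usable with room to spare.
```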

As with most things, realize that the equipment is going to age quickly and will be painful to maintain after about 5 years.

The enterprise solution I mentioned at the top: if you keep paying for the maintenance after the 3 years, the support continues. That means you'll always get certified drives, can always get controller replacements, etc. Even past 5 years.

Even so, most enterprises will consider the equipment aged out in the 5 - 7 year time frame.

In other words, it's a lot of money to spend for 5 - 7 years before you spend it all again.

Btw, I've worked in a datacenter doing the storage using the Supermicro solution above. One day the LSI firmware crashed and we lost it all (not kidding). That could happen to anyone, just remember the size+cost in this case.

Building with used drives just means a lot more drive replacements. Just saying. Might be an ok answer. We do this where I work today and it's not unusual for us to see a replacement drive fail in only 1 year. And as the years go by, it's harder to find replacement drives.

(This is going to be loud, heavy, and power hungry, btw.)

I am making some assumptions. There can be cheaper options. But really depends a lot on what you need the 200TB for.
 
So, my NORCO case is an RPC-4224. It has 6x of these 4-drive backplanes that are long 'strips'. Each of these 6x backplanes takes one of the SFF-8087 type connectors into it, and splits it out into the 4 individual drive connections.

The key point here is that these 6 'backplanes' are individually removable and replaceable. So if yours is similar, and you can figure out which ones are having problems, you can replace them. I've used 6 or so of these RPC-4224s over time, and one of them initially had some problems with 2 of the backplane pieces; I was able to get them replaced, under warranty at the time. I don't know how expensive they would be individually, but I would guess not particularly, since they're relatively simple devices.

I've never had any problems with the drive trays, and I prefer them to the Supermicro ones because they all simultaneously support 2.5" and 3.5" drives, whereas Supermicro uses different drive trays for the different drive sizes, necessitating an additional purchase if I want to plug in a 2.5" drive. In general the Supermicro chassis as a whole is better built, though, so I prefer everything about the Supermicro except the trays.

The expander type backplane, like used on the linked supermicro, is different. It has 'actual electronics' on it, and actual logic and such. The SAS switch on the expander does actual work. The simple breakout backplanes like on my NORCO are essentially just a normal SFF8087-SATA breakout cable committed to PCB instead of as a cable.
 
Thank you for the suggestions in this thread. I've been searching the internet, the SuperMicro website, as well as eBay. Lots of things to learn.

Is there a reason to max out at 8TB for hard drives, or are the 10 or 12 TB drives worth looking at?
 
Something else to think about: what's your backup strategy for this server? As folks have mentioned above, doing this project with used and/or cheap hardware increases your risk. HBA card failure as cjcox mentioned, motherboard failure, power supply, fire in the data center, etc. - there are many possible single points of failure that could lock your data away temporarily or permanently. Better to think about it now and pick some sort of backup strategy that you can live with when something breaks. Even if it's just the religious backup (i.e. pray nothing goes wrong).
 
It's really easy and relatively inexpensive to pick up refurbished full servers from Ebay or Amazon these days. Do a search for Dell R710, R520, or HP DL360s. (Do NOT get an HP DL180, those suck, badly.) There are even some IBM P-series on ebay from time to time if you want to get an old school AIX (Unix) server going.
 
It's really easy and relatively inexpensive to pick up refurbished full servers from Ebay or Amazon these days. Do a search for Dell R710, R520, or HP DL360s. (Do NOT get an HP DL180, those suck, badly.) There are even some IBM P-series on ebay from time to time if you want to get an old school AIX (Unix) server going.
None of those can hold the number of drives/capacity that the OP requested. AIX is also a horrible choice. I would not go for anything you've recommended based on the presented requirements.
 
None of those can hold the number of drives/capacity that the OP requested. AIX is also a horrible choice. I would not go for anything you've recommended based on the presented requirements.
With the Dell units, an external RAID controller, an H800, H810, or H830, or an external SAS HBA can easily be added coupled with MD1200 expansion units for ramping up the number of drives, and all those are available used from both eBay and Amazon.

BTW, I have an H830 and an external enclosure (not Dell, but 3rd party, and not suitable for enterprise level drives) that I'm trying to sell, if anyone is interested.
 
With the Dell units, an external RAID controller, an H800, H810, or H830, or an external SAS HBA can easily be added coupled with MD1200 expansion units for ramping up the number of drives, and all those are available used from both eBay and Amazon.
You can get a Supermicro 24 bay chassis with SAS expanders and such for way less than the MD1200. A full server like I described as above actually winds up being similarly priced to an MD1200, which is just an expensive 12 bay chassis with SAS expanders. No reason to go for Dell when it offers no benefits (I'd argue the Supermicro is actually more flexible).
 
You can get a Supermicro 24 bay chassis with SAS expanders and such for way less than the MD1200. A full server like I described as above actually winds up being similarly priced to an MD1200, which is just an expensive 12 bay chassis with SAS expanders. No reason to go for Dell when it offers no benefits (I'd argue the Supermicro is actually more flexible).
Yeah, but the way Supermicro does their SAS expanders really hurts performance. The Supermicro JBOD enclosure only uses 2 SAS channels out of the 4 in an external SAS cable, and those two are split into 6 drives each with single 1-to-8 SAS expander chips. The MD1200 uses all 4 SAS channels with a more sophisticated expander chip that does a 4-to-12 expand.

I ought to know about those. I dealt with Dell's MD1200, MD1400, and the Supermicro while working in Quantum's test lab as the sysadmin. Quantum's DXi 6500 and 6700 line used the Supermicro JBOD trays, while the DXi 4700, 6800, and 6900 uses the MD1200 and MD1400 trays. The DXi4700 even uses the Dell H830 RAID controller. (I got mine from Dell at a deep discount specifically because of my position with Quantum, as well as many other things.) I helped set up all the pre-production test units for every one of those product lines.
 
Yeah, but the way Supermicro does their SAS expanders really hurts performance. The Supermicro JBOD enclosure only uses 2 SAS channels out of the 4 in an external SAS cable, and those two are split into 6 drives each with single 1-to-8 SAS expander chips. The MD1200 uses all 4 SAS channels with a more sophisticated expander chip that does a 4-to-12 expand.
I don't know where you're getting that information. Supermicro does not just use 2 out of the 4 lanes per cable, which I can confirm with my 2 personal file servers. They also do not use 1-to-8 expander chips. Their backplanes use the industry standard LSI SAS2X36 expanders. Dell also uses this chip in their products, and the MD1200 is probably using the 28 port version (LSI SAS2X28). Dell does not have some more sophisticated setup. I think you might need to do some more research as to how SAS topology works, as most of what you've stated is incorrect.
 
I don't know where you're getting that information. Supermicro does not just use 2 out of the 4 lanes per cable, which I can confirm with my 2 personal file servers. They also do not use 1-to-8 expander chips. Their backplanes use the industry standard LSI SAS2X36 expanders. Dell also uses this chip in their products, and the MD1200 is probably using the 28 port version (LSI SAS2X28). Dell does not have some more sophisticated setup. I think you might need to do some more research as to how SAS topology works, as most of what you've stated is incorrect.
The LSI SAS2X28 is a 1-to-8 port SAS2 (6Gb) expander, and the Supermicro backplane in their JBOD enclosure uses 2 of them, meaning only 2 SAS channels get used. The Dell MD1200 uses a PLX 4-to-12 SAS2 hub switch. The Dell MD1400 uses an updated Broadcom (formerly Avago, which bought up PLX and Broadcom and changed its name to Broadcom) SAS3 4-to-12 hub switch. The big difference between the LSI and the PLX/Broadcom is latency. The LSI takes traffic from 1 of the 6 (or 8, depending on the enclosure) drives it is attached to and channels it down one SAS channel at a time, with a 2ms switch latency to move from one drive to another, while the PLX/Broadcom can channel a single drive down all 4 of the SAS channels if necessary, with a 200-300 microsecond switching latency. The PLX/Broadcom chip costs over ten times as much as the LSI chip.

I know this because when developing the DXi6700, the developers kept running into a serious performance block that they couldn't figure out. I had to look up the specs for the LSI chip and figure out how the Supermicro JBOD, as well as the backplane of the Supermicro 3U 16-drive server chassis, was operating. We pinpointed the issue to the LSI expander chip. So, when we were working on the successor to the 6700, we figured out that the Supermicro equipment would not be suitable and changed to Dell because of their use of the PLX chip in the MD1200 and how it didn't have the same bottleneck and did have good failover. (The LSI chip isn't even capable of utilizing a second expander on a second board to allow for a failover. If the backplane dies, the whole JBOD is useless. So, the Dell trays CAN'T be using the LSI chip.) That eventually became the DXi4700.
 
The LSI SAS2X28 is a 1-to-8 port SAS2 (6Gb) expander, and the Supermicro backplane in their JBOD enclosure uses 2 of them, meaning only 2 SAS channels get used. The Dell MD1200 uses a PLX 4-to-12 SAS2 hub switch. The Dell MD1400 uses an updated Broadcom (formerly Avago, which bought up PLX and Broadcom and changed its name to Broadcom) SAS3 4-to-12 hub switch. The big difference between the LSI and the PLX/Broadcom is latency. The LSI takes traffic from 1 of the 6 (or 8, depending on the enclosure) drives it is attached to and channels it down one SAS channel at a time, with a 2ms switch latency to move from one drive to another, while the PLX/Broadcom can channel a single drive down all 4 of the SAS channels if necessary, with a 200-300 microsecond switching latency. The PLX/Broadcom chip costs over ten times as much as the LSI chip.
If Supermicro only uses 2 channels, then how am I able to use all 4 per cable? I have both the 28 port and 36 port ones in service, the latter of which I'm using 2 cables on:

[Attached screenshots: areca_sasx28.png and areca_sas2x36.png, Areca controller views of the SAS2X28 and SAS2X36 expanders]


All the other stuff about latency makes no difference for the average user on this forum. They want cheap 200TB, not an enterprise solution.
 
All the other stuff about latency makes no difference for the average user on this forum. They want cheap 200TB, not an enterprise solution.

Anything for that much storage is an enterprise solution. There simply is no mundane storage solution for that much data. It would take 25 8TB drives to store that much data, and I would expect they would not want to lose it on the first drive failure. That is going to take an enterprise level solution.
 
Anything for that much storage is an enterprise solution.

I completely disagree.

For me, the defining components of 'enterprise' are high-end warranty support, high-end pricing to go with it, and occasionally proprietary components and software design. As soon as the warranty runs out, it's just hardware - sometimes, hardware that users are in a worse position to maintain long-term (looking at you, SAN devices that take drives in proprietary drive caddies) than consumer-level devices. The capabilities of the device are often not the defining point of an enterprise device; an 8TB 'enterprise' drive holds the same amount of data and performs around the same with mostly equal reliability to an 8TB 'consumer' drive.

So if you're not going to get the warranty, then what the heck is the point?

25x 8TB drives is ~$6000 brand new. That's a decent chunk of change, but it's *not* exorbitant by any means and certainly by itself doesn't move the project into 'enterprise' territory. My company has 396 TB of storage on commodity level devices for backup and archive storage that we collectively paid ~$34k for over the course of several years (it's spread across four 24-bay servers we bought one at a time). If we'd bought it brand new? Well shit, even with today's lower pricing a Dell MD1200 with 12x8TB drives in it is ~$14k and we'd need four of the damn things to get into the neighborhood of our storage capacity so that'd be $56k, and those are just JBODs and you'd need a server to actually drive them as well, further adding to cost. *that* is getting closer to enterprise, and would presumably come with the warranty and support that might justify the cost.
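
Rough numbers behind that comparison, using the figures quoted in this paragraph (the per-TB split is just my own arithmetic):

```python
# Cost-per-TB comparison using the figures from the paragraph above.

# DIY / commodity route: four 24-bay servers bought over time.
diy_cost = 34_000
diy_capacity_tb = 396

# New-hardware route: four MD1200 JBODs with 12x 8TB each at ~$14k apiece,
# not counting the head server you'd still need to drive them.
md1200_units = 4
md1200_cost_each = 14_000
md1200_capacity_tb = md1200_units * 12 * 8      # 384 TB raw

print(f"DIY: ${diy_cost / diy_capacity_tb:,.0f}/TB")        # ~$86/TB
md1200_per_tb = md1200_units * md1200_cost_each / md1200_capacity_tb
print(f"MD1200: ${md1200_per_tb:,.0f}/TB")                  # ~$146/TB
```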

Of course, I obviously disagree with that value proposition or that's the direction we would have gone, versus assembling the storage myself like I did.
 
I remember seeing someone post something here about a group of hardcore storage people purchasing cheap but high-end server hardware that is out of date.

Server cases with server CPUs and server HBAs.

I wish I had saved the link.

Any suggestions for what to look for? Where to look for it?

I've got two Norco server cases at home, and I think they were a great deal for the price brand new... but they never impressed me with their build quality. That was about 5 years ago.

I would like to create a server with 200tb of storage capacity.

Thanks in advance!
Saw this on r/datahoarders a couple months ago and saved it: https://imgur.com/a/y9CtE/

Supermicro case filled with shucked 8TB drives plus two 1TB SSDs for ZFS
$12k
 