SAS confusion

grimster

I plan on getting one of the 1200-watt Supermicro SC216 series. It's between the SC216E16-R1200LPB and the SC216E26-R1200LPB. The only difference is one has one SAS expander daughterboard on the backplane, and one has two.

The system is going to be used for encoding and storage. I was going to go with an ARC-1882-i card. So which case do I choose? I want to utilize all 24 drives. Do I get the version with one daughterboard and run one cable to the RAID card, or run two cables? Or get the version with two daughterboards and run a cable from each to the RAID card? Or do I need an expander and have to run six cables?

From what I'm assuming, I will need the two daughterboards, an expander card, and six cables (because an SFF-8087 is just four SAS/SATA lanes in one cable, right?). Everything is in the planning stage, but the case is going to be the first thing I buy.
 
The E versions of Supermicro chassis have built-in expanders. You connect one 8087 cable to the expander, and that one connection gives access to all 24 drive bays.

In the case of the SC216, each expander also has 2 downstream ports. This is where you could connect an 8087 cable internally, have it terminate at an external port in the PCI slots, connect "dumb" standalone enclosures to it, and control them with the SAS card connected to the upstream port.

With E2, you have dual expanders. You have the option of connecting one card to both, or connecting two cards. This allows one connection/expander, or even one complete controller, to fail without service interruption. Both of these are beyond what you would normally use at home, but if you wanted to, you could contact your SAS controller manufacturer and see if they support either mode.

In your case you can get the E1 version, connect a single 8087 cable from your SAS card to the expander, and access all drives.

And just so that you are aware, these are 2.5" enclosures, meaning laptop/SSD/10-15k SAS drives.
 
What kind of bandwidth hit would I take running everything over one cable versus, say, getting the A version with the six 8087 cables and a more direct connection to the drives? Basically...can one cable handle 24 drives' worth of data? If I go the cheaper A version route, is 900 watts even enough? And yes, I am aware that these are for 2.5" drives. Thank you for your input.
 
Someone might have to correct me, but I believe a single 8087 will be 2400MB/s @ 6Gb.

Most SAS controllers do staggered spinup, so 900W should be enough. Take note though, the 900W chassis are 3Gb, and the 1200W are 6Gb.
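A quick back-of-the-envelope of where that 2400MB/s figure comes from (this assumes 8b/10b line encoding and ignores protocol overhead, so real numbers come in a bit lower):

```python
# Rough SAS bandwidth math for a single SFF-8087 cable (4 lanes).
# Assumes 8b/10b encoding (6 Gbit/s raw -> ~600 MB/s usable per lane)
# and ignores protocol overhead.

LANES_PER_8087 = 4

def cable_throughput_mb_s(lane_gbit: float) -> float:
    """Usable MB/s for one SFF-8087 cable at a given per-lane line rate."""
    usable_per_lane = lane_gbit * 1000 / 10  # 8b/10b: 10 raw bits per usable byte
    return LANES_PER_8087 * usable_per_lane

for rate in (3, 6):
    total = cable_throughput_mb_s(rate)
    print(f"{rate} Gbit lanes: ~{total:.0f} MB/s per cable, "
          f"~{total / 24:.0f} MB/s per drive with all 24 drives sharing it")
# 3 Gbit lanes: ~1200 MB/s per cable, ~50 MB/s per drive
# 6 Gbit lanes: ~2400 MB/s per cable, ~100 MB/s per drive
```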
 
I guess I am going to go with the SC216A-R900LPB then and just pop in an expander card. Storage-wise 900 watts is enough, but is that true even with two Xeons (say X5650) at full load?
 
You'd only want the dual expander version if you need HA and are using SAS drives. If you are using SATA, don't bother.
 
I have no idea what drives I am going to use yet. I was even considering holding off on looking until this shortage price scam is over, in like a year. I am not going to use the version with any expander; I'm going to use the version with just a plain backplane with six SFF-8087 jacks. That is, only as long as 900 watts is enough to power my processing needs.
 
Quad 10-core Xeons would be hitting 900W, but dual X5670s are under 400W without disks (full load).

My UPS shows around 400W for 18 disks, 1 dual Xeon box, 1 s1156 system, a switch, and an Atom firewall under low usage.

Why go with a controller, an expander, and 7+ cables instead of a controller, an integrated expander, and 1 cable?
 
Because I don't like the idea of trying to cram 24 disks' worth of bandwidth into one cable...and I don't want to go 1200 watts if I don't need to, even though that knocks me from 4x 6Gbps down to 4x 3Gbps. It will only be 6 cables, by the way. Good to know that 900 watts will easily handle what I need. Too bad their 900 watt line isn't 80+ Gold rated.
 
It's 7 cables: 6 from the expander to the backplane, plus 1 from the controller to the expander (in a tight loop, generally crammed in there).

Which means you are still limited by the single cable from the controller to the expander. You can also double up there if the expander supports it, but the controller has to support it too. Which is the same as with the Supermicro E2 chassis.

If you don't want to be limited by one cable, you need a controller with 6 (or however many you want) 8087 connectors on it, and to remove the expander from your setup.
 
I thought an expander connects to a RAID card through the PCI-e bus...

EDIT: more specifically, an expander card, like the HP ones a whole topic here is dedicated to.
 
I thought an expander connects to a RAID card through the PCI-e bus...

EDIT: more specifically, an expander card, like the HP ones a whole topic here is dedicated to.

Nope, the HP expander has 8 SAS ports, 2 for input, 6 for output. The PCI-e pins are for power only, no data.
 
So I guess I am going to totally scrap this project, since that is very very stupid.
 
@grimster:

It doesn't matter whether you use the SM chassis with the built-in expander on the backplane and a single SFF-8087 cable between the HBA and the backplane, or the SM chassis without the expander on the backplane and an HP (or equivalent) expander in a PCIe slot connected via a single SFF-8087 cable to your HBA. The setups are more or less identical in terms of bandwidth; the only difference is that the built-in expander saves you a lot of cabling in the chassis (and actually keeps an extra PCIe slot free for other things).

If you look at most HBAs that have more than 2 SFF-8087 connectors, they usually incorporate an expander chip directly on the HBA, and depending on the design there are either 4 or 8 lanes (equivalent to one or two SFF-8087 cables) between the controller chip and the expander. So again, depending on what HBA you pick, you may still only have 4 concurrent connections into your drive pool. There are a few HBAs with more than two SFF-8087 connections (some LSI-based models come to mind, but I don't have the exact model numbers handy right now) that allow more than 4 drives to be accessed simultaneously. LSI has a controller chip that can handle up to 16 lanes (4 SFF-8087 connectors).

Another alternative would be to use 3 dual SFF-8087 HBAs and have 6 cables connected to the backplanes. That's how I wired my second server.

In regards to the 900W vs 1200W question, my 100TB server has 4 redundant 600W power supplies, and only during spin-up (all 50 drives at once) does it hit just over 1000W. When idling (all drives in standby/sleep mode) the server consumes about 180W (I'm still working on getting this down a bit). This is with a single Xeon E3-1240 processor board. When all drives are spinning and being accessed (calculating parity data) with a processor load of approximately 40 to 50%, the server goes as high as just under 500W. Even with 24 2.5" drives and a dual Xeon processor board, I doubt you will need the 1200W power supply.

My second server is powered by a 750W power supply, and even with 24 drives during spin-up, I never see it going past half its capacity. This server has the same processor and motherboard as the first server but uses 3 HBAs to access all drives directly. The 100TB server has a single dual SFF-8087 HBA connected to two HP SAS expanders (one SFF-8087 connection per expander), and the expanders are connected via 12 SFF-8087 cables to the 12 backplanes.
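
For what it's worth, a rough power-budget sketch lands in the same neighborhood as those readings. The per-drive wattages and the system baseline below are ballpark assumptions, not measured values, so check your own drive datasheets:

```python
# Rough power-budget sketch. The per-drive figures are ballpark assumptions;
# 3.5" drives typically pull 2-3x their active power during spin-up,
# mostly on the 12V rail.

SPINUP_W_PER_DRIVE = 18   # assumed peak spin-up draw per drive
ACTIVE_W_PER_DRIVE = 7    # assumed active (spinning/seeking) draw per drive
BASE_SYSTEM_W = 150       # assumed board + CPU + fans baseline

def worst_case_spinup_w(n_drives: int) -> int:
    """Peak draw if every drive spins up simultaneously (no staggered spin-up)."""
    return BASE_SYSTEM_W + n_drives * SPINUP_W_PER_DRIVE

def active_w(n_drives: int) -> int:
    """Approximate draw with all drives spinning and moderate CPU load."""
    return BASE_SYSTEM_W + n_drives * ACTIVE_W_PER_DRIVE

print(worst_case_spinup_w(50))  # ~1050 W, roughly the >1000W spin-up peak above
print(active_w(50))             # ~500 W, roughly the under-500W load figure above
print(worst_case_spinup_w(24))  # ~580 W for a 24-bay chassis, well under 900W
```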

Hope this helps.
 
I was considering going with the 1200 watt version, despite it being overkill on power, simply because of the 6 Gbps backplane, and then using the 2 expanders. I really don't want all my data going over 1 cable. 2 is fine. But 1200 watts is overkill and probably really inefficient. Who knows how the 900 watt one would compare. Would an HBA connect to a RAID controller through the PCI-e bus?
 
The 1200w power supply is more efficient than the 900w one. Power supplies also have nothing to do with data throughput rates on the backplanes nor the SAS expanders. I think you need to do a bit more research before you embark on a project like this.
 
I know the power supplies have nothing to do with the transfer rates :) Look at this line of cases: the 900w ones are 3Gbps-based and the 1200w ones are 6Gbps-based. The 1200w may be more efficient at higher loads, but since there will be a smaller % load on a 1200w supply than on a 900w one, is it still as efficient? They don't seem to publish any kind of load vs. efficiency curves.
 
I'm quite familiar with those cases. The 1200w one is more efficient at any load.
 
Then it looks like I will be going with the 1200w case with two expanders and two cables to one controller. This should give me the bandwidth of two cables, correct? This is my research. There really aren't any "server hardware for dummies" books or tutorials. In the server world it seems you either buy a pre-built server from Dell, HP, or some other maker, or the knowledge gets passed down on the job.
 
The 1200W power supply may be marginally more efficient than the 900W at the load you will be putting on the power supply, especially if you don't populate all the drives.

You don't need the dual-expander backplane if you don't intend on spending the money to get SAS drives for the server. SAS drives are considerably more expensive than their SATA equivalents. Do some research on this BEFORE you spend money on something you will most likely never use!

You haven't really mentioned what this server is going to be used for. If it's to house your media library and do some occasional transcoding, you do not need the power (power supply as well as CPU processing power) or the throughput that the chassis and configuration you posted above will provide. If you are going to stream video from the drive pool, even with only four 3Gbps connections this server would be more than sufficient to handle multiple concurrent high-def video streams!

Another thing you need to consider is that these types of server chassis are intended for data centers and/or server rooms, so they are not designed with low-noise fans. They are in fact VERY loud, and unless you have a soundproof room to stick this server into, you will have to look into replacing the factory fans, which were specced according to the chassis airflow requirements.

There are LOTS of threads on this forum with a TON of useful (technical) info especially on D-I-Y home servers! All you need to do is search and READ, READ, READ!
 
I want the dual-expander backplane because I do plan to populate all of the bays, and I don't want to be limited to a theoretical 125 MB/s per drive. Headroom.

I did mention what I was going to be using this for. Media and encoding. I don't do occasional encoding; I do at least 8 Blu-ray movies per week. I won't accept anything less than real-time (or faster) transcoding. Currently, on my i7-970 I can only hit real-time for very simple, wide-aspect-ratio stuff.

I know how loud servers are. I don't plan on being near it. I will probably get or make a controller for the fans, because fans constantly screaming suck down a SHITLOAD of power.

I read plenty here, and I haven't seen much that covers the way I like to do things.
 
Encoding isn't I/O bound, it's CPU bound. 24 SSDs are not going to help your encoding time, and will only help if you are transferring to or from the encoding box over a 10 gigabit network.

Uncompressed blu-ray is around 50MB/s tops, and if you are going for real time, you would be needing (off the top of my head) 8+ cores depending on options.

Maxing out 8+ cores in 2U means the fans will have to run at least at 50%, which, for my 2U Supermicro, is much louder than, say, a vacuum cleaner. I wouldn't bother manually controlling them; Supermicro motherboards have decent PWM fan management that will automatically ramp them up to screaming demons when you are encoding.
 
I'm not using SSDs, I'm using hard disks. I need 24 disks for storage, not speed. Speed will be nice because I am going to have a 4 Gb link between the server and my computer. Good to know about the Supermicro boards controlling the fans! That didn't seem to be the case on a server I worked on at work; the fans were full blast all the time.
 
What are you even doing with this server? What kind of drives are you going to be using? Having multiple expanders isn't going to help you unless you require specific configurations that I mentioned earlier. I also don't see why you need the bandwidth of two links. How would that benefit you?
 
What are you even doing with this server? What kind of drives are you going to be using? Having multiple expanders isn't going to help you unless you require specific configurations that I mentioned earlier. I also don't see why you need the bandwidth of two links. How would that benefit you?

Encoding and massive storage. I am going to worry about drives when the time comes. By that time they will be cheap again. I don't want 24 drives' worth of data and I/Os going through one cable. Theory never pans out in the real world, so I want headroom. I am not going to be operating on the machine other than encoding. Muxing and other shit is going to be done over a fibre channel link, so there's no excuse to just cheap out and say I don't need the bandwidth.
 
Why are you using 2.5" drives for large scale storage? You obviously aren't going to be using SAS disks so you couldn't even utilize a second expander. You also need to look into how much throughput any given RAID HBA is capable of if you think that a single cable is going to be a limitation...not to mention the fact that 4 SAS lanes provide a maximum of 24gbit throughput and you only need 4gbit for your FC HBA. You have much research to do (also, you can't just buy disks on a whim, HBAs are only compatible with certain drives).
 
Because I can. Because in the future I will be using SSDs once they become massive and cheap (a long wait) and I don't want to put 2.5" SSDs in 3.5" bays. I still haven't completely written off a 3.5" system, I just like the density of a 2.5" system. Who says I am not going to buy SAS drives? Never did I say anything obvious enough to come to that conclusion.
 
You said you were after massive storage, which SAS disks aren't. The fact that you are going to be using SSDs at some point changes nothing. All the limitations I mentioned are still in place. There are many better and significantly cheaper options that fit the roles you want out there and you are just wasting money if you must have SAS disks, multiple expanders, and so on...
 
Since when does SAS mean a drive has to be a small one? There aren't many, but there ARE large drives that are SAS. And why would I want to create a bottleneck at one point just because there is a bottleneck elsewhere, even if the latter is the main bottleneck? I would much rather not use an expander at all, but no one ever answered whether there is any solution to expand what a RAID card can see without running more cables (i.e., one that goes through the PCI-e bus).

A good reason for needing the quickest link possible between the RAID controller and drives is creating and expanding arrays. I am most likely not going to go for 24 drives right off the bat.
 
Because I can. Because in the future I will be using SSDs once they become massive and cheap (a long wait) and I don't want to put 2.5" SSDs in 3.5" bays. I still haven't completely written off a 3.5" system, I just like the density of a 2.5" system. Who says I am not going to buy SAS drives? Never did I say anything obvious enough to come to that conclusion.

Because a single 5400 RPM drive can feed your CPU's encoding power. Which makes SSDs and SAS drives a giant sinkhole of money with absolutely zero return. Which completely negates the need for 2.5" bays. Not to mention that you are planning around technology that is 2 years down the road (massive and cheap SSDs), which might even be a different form factor by then.

I am going to have a 4 Gb link between the server and my computer

I don't want 24 drives' worth of data and I/Os going through one cable. Theory never pans out in the real world

Your single 4Gb link is slower than the single 8087 link to an expander. Even if you upgrade to 8Gb, you are still 2.4x slower than the single 8087 link.
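
Rough numbers, assuming 8b/10b line encoding on both FC and SAS and ignoring protocol overhead (so treat these as ballpark figures rather than exact ratios):

```python
# Ballpark link-throughput comparison. Both 4/8Gb Fibre Channel and 3/6Gb SAS
# use 8b/10b encoding, so roughly 10 raw bits per usable byte; protocol
# overhead is ignored, so real-world numbers come in a bit lower.

def fc_mb_s(nominal_gbit: float) -> float:
    """Usable MB/s for a Fibre Channel link at the given nominal rate."""
    return nominal_gbit * 1000 / 10

def sff8087_mb_s(lane_gbit: float) -> float:
    """Usable MB/s for one SFF-8087 cable (4 lanes) at the given lane rate."""
    return 4 * lane_gbit * 1000 / 10

print(fc_mb_s(4), fc_mb_s(8), sff8087_mb_s(6))  # 400.0 800.0 2400.0

# Assuming ~100 MB/s sequential per 7200rpm drive, a handful of drives
# already saturates the 4Gb FC link to the other machine:
print(fc_mb_s(4) / 100)  # 4.0 drives
```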
 
I am not going to use a 5400 RPM drive. Lots of assumptions flying around here. I don't care how fast the drive can feed the CPU for encoding. You seem to think that because I came here asking specifics about these SAS things for these specific cases, I know nothing at all. I don't want any bottlenecks I don't need just because of cheaping out. Drive form factors haven't changed in 20 years. About 20 years ago my first hard drive was a 256 MB one, and it was a 3.5" drive. I have a 5.25" drive sitting in a dresser drawer, too. The standards for how things mount haven't changed, and won't. If they do, it will be decades, closer to a century. I see drives as we know them completely disappearing before the form factor ever changes.
 
Since when does SAS mean a drive has to be a small one? There aren't many, but there ARE large drives that are SAS. And why would I want to create a bottleneck at one point just because there is a bottleneck elsewhere, even if the latter is the main bottleneck? I would much rather not use an expander at all, but no one ever answered whether there is any solution to expand what a RAID card can see without running more cables (i.e., one that goes through the PCI-e bus).

A good reason for needing the quickest link possible between the RAID controller and drives is creating and expanding arrays. I am most likely not going to go for 24 drives right off the bat.
This is because nearline drives are 3.5", not 2.5". Your first bottleneck is network throughput. With the 5400rpm drives from the example above, you could still saturate multiple 10gbit links. The second bottleneck is your RAID HBA. You want multiple cables when it can't even saturate one. You are concentrating on meaningless bits and ignoring the important stuff.
 
I'll do things the way I want, because I want to, and I know about it. It's like that bullshit with Steve Jobs and some circuit board. He didn't like the way the wires looked, despite the fact that they would never be seen or cared about by the customer, yet he insisted on having them look a certain way even if that meant it possibly wouldn't work. The only thing that got him to compromise and leave it the way he didn't like was that it literally would not have worked otherwise.
 
If you are adamant on doing things your way even when presented with better alternatives, why even bother asking for help?
 
Welp, I'm done. If you buy new drives every 3 months, you are spending about 10x what you need for .38% time savings.

You are asking what your disk subsystem should be for encoding. We've answered that a single disk is enough.
You don't want everything going through a single connector (2400MB/s minimum), yet you use a single connector (400MB/s) to access the drives.
You are looking at spending around $4000 on this, when a $79 drive would give the same results, or maybe 3 drives if you want some headroom.

One more thing for your research, the way you have this as 2 completely separate computers is the least efficient and lowest performance option.
 
1. You all seem to act like I am ungrateful for all the advice given because I don't follow your line of thinking or way of life, and want to do things a certain way. It's pretty much strayed from what I first asked into flaming. Fine, be that way. I love how people get offended at another person having an opinion. You've all given me good insight, but it's not like I am going to just go out tomorrow and put all this shit on a credit card...jeez.

2. I've repeated over and over that I need tons of storage space. A single disk is not enough. I already have 12 TB worth of disks in an 8 TB array and only have 1.5 TB free. It's losing space quick. I don't even have any bays left in my case, and I only built this array about 6 months ago. It's nice to see you so offended and accusing me of ignoring your facts when you have blatantly been ignoring what I've said my uses for this are.

3. I have no budget. Where did I ever say "I am looking to spend $4,000 for this"?

4. Why would I sit next to a computer with many drives buzzing (which I need for my storage demands, as I've repeated over and over, only to be ignored) and the fans needed to cool such a system? I am going to have the server in a basement or something, and my computer, with no drives, in my room, completely silent. My 6 drives and 3 big fans on low speeds already annoy me to the point where it's hard to fall asleep in the same room as them.
 
Too bored to stay away

I've repeated over and over that I need tons of storage space. A single disk is not enough. I already have 12 TB worth of disks in an 8 TB array and only have 1.5 TB free. It's losing space quick. I don't even have any bays left in my case, and I only built this array about 6 months ago. It's nice to see you so offended and accusing me of ignoring your facts when you have blatantly been ignoring what I've said my uses for this are.

When hard drive prices are normal, you can get a 3TB 3.5" drive for $79. The enterprise 3TB 2.5" drives are normally $250. $6000 vs $1900 for 2.5" vs 3.5" just for drives. You want to spend $4000 more because you "like the density".

3. I have no budget. Where did I ever say "I am looking to spend $4,000 for this"?

Case is $1400, a 24-port RAID controller is $1300. Add in motherboard, CPU, memory, and cables, and you are looking at over $3k with no disks.
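
Tallying the rough prices thrown around in this thread (pre-flood pricing; the motherboard/CPU/RAM/cables line is my own loose assumption, not a quote):

```python
# Rough cost tally using the ballpark prices quoted in this thread
# (pre-flood pricing; actual prices will vary).

DRIVES = 24
PRICE_25_ENTERPRISE = 250    # quoted per-drive price, 2.5" enterprise
PRICE_35_3TB = 79            # quoted per-drive price, 3TB 3.5" at normal pricing

drives_25 = DRIVES * PRICE_25_ENTERPRISE   # $6000
drives_35 = DRIVES * PRICE_35_3TB          # ~$1900

chassis = 1400               # quoted SC216 chassis price
raid_controller = 1300       # quoted 24-port RAID controller price
misc = 500                   # assumed motherboard + CPU + RAM + cables

print(chassis + raid_controller + misc)    # ~$3200 before any disks
print(drives_25 - drives_35)               # ~$4100 premium for the 2.5" route
```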

Why would I sit next to a computer with many drives buzzing (which I need for my storage demands, as I've repeated over and over, only to be ignored) and the fans needed to cool such a system? I am going to have the server in a basement or something, and my computer, with no drives, in my room, completely silent. My 6 drives and 3 big fans on low speeds already annoy me to the point where it's hard to fall asleep in the same room as them.

If you want the best disk performance, you would want to eliminate your biggest bottleneck, which here is the connection between your workstation and the disk chassis. You make it sound like you want to do the encoding on your workstation with this system as the storage box over a 4Gb link. In that case, speed will increase with the addition of the first 4-5 drives, and then the other 19 drives will not give your workstation any additional performance. That in turn is why everyone, including myself, keeps pointing out that investing in a 2.5" chassis vs a 3.5" chassis is insanely unnecessary.

1. You all seem to act like I am ungrateful for all the advice given because I don't follow your line of thinking or way of life, and want to do things a certain way. It's pretty much strayed from what I first asked into flaming. Fine, be that way. I love how people get offended at another person having an opinion. You've all given me good insight, but it's not like I am going to just go out tomorrow and put all this shit on a credit card...jeez.

Look at it from our end. You seem dead set on using 2.5" drives and show very little understanding of the majority of the subject in this thread (like having to physically connect the RAID controller to an expander). We're trying to show you that you can still get your required capacity with 3.5" drives, with the same bottleneck of your 4Gb link, for massive cost savings.

If you want to buy a 2.5" chassis, go ahead. The end result will be that you spend 3-4x more than you would if you bought a 3.5" chassis, with the same performance.

Going back to your original question, it will be a few years before the connection between your workstation and this storage chassis makes the single connection between a cheap RAID card and expander a bottleneck.

Everything else has been pointing out that using 2.5" drives for encoding over a 4Gb link is magnitudes more costly than using 3.5" drives.
 
When hard drive prices are normal, you can get a 3TB 3.5" drive for $79. The enterprise 3TB 2.5" drives are normally $250. $6000 vs $1900 for 2.5" vs 3.5" just for drives. You want to spend $4000 more because you "like the density".
Biggest 2.5" disk available currently is actually 1TB (and SAS tops out at 600gb with a cost of about $500 per unit). 3TB 3.5" enterprise drives were well over $300 before the flooding occurred...nearline SAS tacked on an extra $100 on top of that.
 
I don't want 3 TB drives, I want smaller ones, like 1 TB. I don't need my RAID expansion to take days. I personally just like the look of these 2.5" cases versus the 3.5" ones. If none of this mattered, I could just put this all in a cardboard box with a cage fan blowing on it.

Holy shit! So I didn't know one fact about something I asked, that you have to physically connect an expander to a RAID card. I must be a total dumbass who doesn't know shit about anything! I bow down to people more worthy.

Don't have a budget...

My computer will not be a so-called workstation, more a terminal to the server. The server is going to be doing all the work.

I also never said I am dead set on this. I don't care about cost, but if space gets away from me too fast I will just go the 3.5" route, use my current 6 drives, buy more of the same ones, and get a case to hold 16 of them, one of the 836 or 936 series, and do this all over GbE. My only budget is time.
 
I don't want 3 TB drives, I want smaller ones, like 1 TB. I don't need my RAID expansion to take days. I personally just like the look of these 2.5" cases versus the 3.5" ones. If none of this mattered, I could just put this all in a cardboard box with a cage fan blowing on it.

Holy shit! So I didn't know one fact about something I asked, that you have to physically connect an expander to a RAID card. I must be a total dumbass who doesn't know shit about anything! I bow down to people more worthy.

Don't have a budget...

My computer will not be a so-called workstation, more a terminal to the server. The server is going to be doing all the work.

I also never said I am dead set on this. I don't care about cost, but if space gets away from me too fast I will just go the 3.5" route, use my current 6 drives, buy more of the same ones, and get a case to hold 16 of them, one of the 836 or 936 series, and do this all over GbE. My only budget is time.

So instead of saying you want massive storage, you could have said that nothing matters other than RAID rebuild/expansion times (i.e., cost, complexity, and power consumption come last, behind rebuild speed). In that case 2.5" is your only option, as you will only want 15k SAS drives and SSDs. 7200 rpm drives will increase RAID rebuild and expansion times.

In order to meet your rebuild time requirements with 2.5" drives, you will need a chassis that holds at least 40 drives, taking into consideration 600GB 15k SAS drives for minimum rebuild times.

Since you are encoding directly on this box, networking needs are minimal, but if you have spare 4Gb cards and a switch to use, more power to you. Although I know nothing about using 4Gb FC for TCP/IP protocols.

Biggest 2.5" disk available currently is actually 1TB (and SAS tops out at 600gb with a cost of about $500 per unit). 3TB 3.5" enterprise drives were well over $300 before the flooding occurred...nearline SAS tacked on an extra $100 on top of that.
Oops, that's what I thought; I forgot to look at the fine print when doing a quick search. The 3TB Seagate Constellations are SAS, but 3.5".
 