SuperMicro Storage Build - some questions

CombatChrisNC

[H]ard|Gawd
Joined
Apr 3, 2013
Messages
1,140
So this is the plan: I'm getting a used Supermicro server (12, 24, or 36 bay) and using it for archival storage, media streaming, and potentially hosting a few VMs.

The host will be running Server 2012.

The question I've got is about the SAS connectivity that the backplanes provide.

This is the server I'm strongly considering. http://www.ebay.com/itm/Supermicro-...016?pt=LH_DefaultDomain_0&hash=item35e9b05888

The backplanes that come with it, according to that chassis number from Supermicro, are these: BPN-SAS2-826EL1 and BPN-SAS2-846EL1

Which according to these... http://www.supermicro.com/manuals/other/bpn-sas2-846el.pdf - page 17

http://www.supermicro.com/manuals/other/BPN-SAS2-826EL_1.0.pdf - page 15

.... are each capable of handling all the disks plugged into them via a single SAS cable. That's OK, I'll get myself an LSI 9211-8i and call it a day if that is indeed how it works.

But this is the real question: I want to use Storage Spaces for the needs listed above while running the OS on RAID 1 from the motherboard's SATA ports. So how can I get the SATA ports connected to the disks if they're sitting in their trays and connecting right up to the backplane? What's even the point of having those motherboard SATA ports if you can't house the disks in a way that lets you connect them! I'm considering even getting a pair of ~128GB SSDs and ZIP TYING them somewhere in the case to hold my OS install.

Thoughts? Anyone with hands on experience with these chassis?
 
You get what's called a reverse breakout cable. It will aggregate 4 SATA ports into one SFF-8087 connector that plugs right into your backplane. Mount your SSDs in the drive sleds like any normal 2.5" drive.
 
Brilliant! Then I just need to ID which 4 of the 24 ports are lit up by the reverse breakout and let the others feed off the SAS connection from my 9211-8i.

That 9211-8i, by the way: I'm planning on having one SAS connection to the front 24 (20, really, since 4 will be fed from the reverse breakout) and the second SAS connection going to the back 12. That shouldn't be too much for the card? I've seen documentation stating 16 devices per card and some documentation saying up to 256 drives. The reason I'm setting this up with this card is so that in the event Server 2012 and Storage Spaces turn out to be a complete failure, I know it'll work well with FreeNAS.
 
I'm going to be including pictures when I get these set up and populated. I've got a friend who can provide pretty much unlimited 1TB 7200RPM drives from an R&D lab.
 
You get what's called a reverse breakout cable. It will aggregate 4 SATA ports into one SFF-8087 connector that plugs right into your backplane. Mount your SSDs in the drive sleds like any normal 2.5" drive.
That will NOT work with the backplane/chassis mentioned.

You have no way to connect the SATA ports on the motherboard to the backplane. Don't despair, however. The 24 bay chassis has 2 internal spots which can each mount either a 3.5" drive or a pair of 2.5" ones. The 36 bay one has 4 internal spots for that.

As for your LSI card: for maximum bandwidth, run both ports to the 24 port backplane and then a cable between the 24 port one and the 12 port one. That requires one extra cable over going straight to the 24 port and 12 port backplanes, but considering cables are $10, why not go for the extra bandwidth? That card will have no problem with that many disks either.
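To put rough numbers on why the dual-link wiring is worth the extra cable - just a back-of-the-envelope sketch, where the 600 MB/s per-lane figure is my assumption rather than anything measured:

    lane_mbps = 600              # ~usable MB/s of one 6 Gb/s SAS2 lane (assumed)
    dual_link   = 8 * lane_mbps  # both x4 HBA ports wired to the 24-bay expander
    single_link = 4 * lane_mbps  # a lone x4 link, e.g. the cascade to the 12-bay backplane
    print(dual_link, single_link)  # 4800 vs 2400 MB/s of uplink

Double the uplink for the price of one more cable.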

This is the part you need to mount drives internally on the 36 bay: MCP-220-84701-0N
The one for the 24 bay: MCP-220-84603-0N

I can photograph what this looks like if you'd like. I own both the 24 bay and 36 bay chassis.
 
That backplane uses a SAS expander, making it incompatible with SATA controllers (SATA disks will still work). Also, because it uses a SAS expander, there is no direct connection from any HBA-facing port to any disk; everything goes through the aforementioned chip first.
 
I have a couple of these. You can get a mounting plate from SuperMicro that lets you install two 3.5 inch or four 2.5 inch drives internally. These won't be hot swappable, but you can connect the drives using standard SATA cables and boot the OS that way.

For what it's worth, I have one of these systems set up with internal drives that boot the OS (Server 2012 R2). I use Storage Spaces to combine the hot-swap drives into whatever MS calls Storage Spaces' equivalent of RAID 10. Works nicely, extremely stable.
 
The case is a CSE-847E, so a SAS HBA is required; SATA ports won't work.

If the case were a CSE-847A or a CSE-847TQ, then what you suggested would be fine.

I would set it up as described above, two cables to the front and then one to the rear, OR you could use two HBAs, one for the front and the second for the rear, if you feel you need that much bandwidth.
 
This is different, but still very good news.

I'll get the brackets and get the disks mounted.

In a similar vein, do you know if I can set up a 4-disk RAID 1?
 
2 servers, 2 SAS cards, and 6 SAS cables. Thanks, Blue Fox, for the idea on how to set up the wiring for max throughput.
 
So...

Things are moving. Let's see where to begin on this update.

1: The fans: SUPER LOUD! 7K RPM 80mm fans. Only one of them was fed from the mobo and would respond to fan management; all the others were powered by the SAS backplanes. I pulled three of them out of the system entirely, and now a total of four are connected to the mobo. At the lowest setting they're tolerable, but you know sure as hell there's a computer nearby. That's OK for this setting. I might put the other three fans back in, but I would have to get some PWM extensions to feed them off the mobo ports, which are out of reach.

2: The SAS cards: The M5110s don't work, since there's no way to get a true IT mode out of them. Disks would show up in Storage Spaces as being on a 'RAID' bus rather than a 'SAS' bus. I shipped the cards back and got a pair of M1015s, and after flashing them to stock 9211-8i firmware in IT mode, things are working wonderfully. I've found that 2 of my disks have reallocated sectors, so I'll replace them when I get more disks on hand.

3: Storage Spaces: It's easy to work with and allows for setting up hot spares in a pool of disks, which I like. But it's slow. Not slow enough to kick out of use, but man, it's just slow.

Also, I can't tell if I'm getting the full bandwidth of all 8 SAS lanes connected to the first backplane. I don't have enough disks to stress it hard enough to see if that's where I'm capped! When I fully populate this server I'll set up a 36-disk RAID 0 and see what it'll do. :) It should hit the cap at either the 8 x 600 MB/s SAS lanes or the PCIe 2.0 x8 slot.
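For reference, a quick sanity check on where that 36-disk RAID 0 might top out - assumed round numbers only, not measurements:

    sas_cap  = 8 * 600    # eight 6 Gb/s SAS2 lanes from the 9211-8i, ~600 MB/s usable each (assumed)
    pcie_cap = 8 * 500    # PCIe 2.0 x8 slot at ~500 MB/s per lane before protocol overhead (assumed)
    disk_agg = 36 * 150   # 36 drives at a rough 150 MB/s sequential each (assumed)

    print("SAS lanes   :", sas_cap,  "MB/s")   # 4800
    print("PCIe 2.0 x8 :", pcie_cap, "MB/s")   # 4000
    print("36 disks    :", disk_agg, "MB/s")   # 5400

With these guesses the PCIe 2.0 x8 slot would be the first ceiling, just under the SAS lanes.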
 
Under no circumstances should anyone be using the disaster that is Storage Spaces.
 
Under no circumstances should anyone be using the disaster that is Storage Spaces.

+100

Poor man's RAID/SAN. Interesting concept, but it just isn't there. You need a lot of resources to make it move properly, and at that point there are far better solutions.
 
Yeah, that's definitely slow enough to kick it out of use.

Dude, this system is a PERFECT candidate for ZFS! Man, just DO IT! If you need a Windows server, run it as a VM. You can even do ZFS on Linux now; it works GREAT!
 
Hey, it's for archive purposes. Write once, read rarely. I won't be hosting any VMs off of this. And no problems with AD-based file permissions.

Unless someone has a REALLY good idea, this is good enough.

Trust me, if I NEEDED the speed I could get it by going ZFS; I'll be doing that for my home rig. This is archive and some media storage. My IT manager doesn't want something without 'Enterprise Support' behind it like MS can offer. And MS is CHEAP for us as a not-for-profit: a $57 Server 2012 license?
 
Currently Microsoft is years behind ZFS in the storage sector regarding performance, features, and stability. ReFS is on the right track, as it adopts the two main features of ZFS: realtime checksums (by default only on metadata) and copy-on-write. But like ZFS it needs time (ZFS is now 10 years on from its first steps).

A ZFS option with full AD support (including Windows SIDs, ACLs, and Previous Versions, which is usually not available on Unix with SAMBA) is a Solaris-based solution with its built-in CIFS server.

Enterprise support is available from Oracle with Solaris, from OmniTI with the free Solaris fork OmniOS, or from Nexenta with NexentaStor. For me this is more or less a cheaper NetApp alternative (which offers checksums and copy-on-write as well).
 

Just some more information on Samba - could you swing by the Samba site, please:
https://wiki.samba.org/index.php/Setup_and_configure_file_shares_with_Windows_ACLs
Setup and configure file shares with Windows ACLs
https://wiki.samba.org/index.php/Samba_AD_DC_HOWTO
Samba AD DC HOWTO

If you were following SAMBA's progress, you would retract what you said...

I prefer that SMB not be baked into the filesystem ecosystem. Why? Troubleshooting is easier and moving forward is much less of a pain (time-wise).
 
SAMBA (on Linux, Solaris, Unix) has many feature advantages over the Solaris CIFS server. But the goal of SAMBA is mainly to allow a Windows client to access any Unix filesystem built around uid/gid, Unix permissions, and POSIX ACLs, while Windows NTFS uses NTFS ACLs and Windows SIDs as its reference.

Only the Solaris CIFS server can store real AD Windows SIDs as extended ZFS file attributes. While ACLs can be assigned via SAMBA and uid/gid mappings as well, it becomes an issue if you back up AD data and restore it on another AD server. In that case ZFS + Solaris is the only system besides NTFS that can keep permissions intact (NTFS-like ACL permissions, not POSIX ACLs).

While Solaris CIFS keeps the credentials, they may be lost with SAMBA (on Solaris or elsewhere). Beside that, Solaris CIFS is fast and ultra easy to configure - mainly switch it on and you're done.

- this is at least my knowledge of SAMBA, as I have not used it for these reasons -
 
EDIT: OOPS! Those were numbers from the OS's RAID 1 setup, outside of Storage Spaces.

This is the 10-disk RAID 6 (Storage Spaces double parity)...

-----------------------------------------------------------------------
CrystalDiskMark 4.0.3 x64 (C) 2007-2015 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 91.013 MB/s
Sequential Write (Q= 32,T= 1) : 8.623 MB/s
Random Read 4KiB (Q= 32,T= 1) : 0.712 MB/s [ 173.8 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 0.136 MB/s [ 33.2 IOPS]
Sequential Read (T= 1) : 152.683 MB/s
Sequential Write (T= 1) : 32.086 MB/s
Random Read 4KiB (Q= 1,T= 1) : 0.615 MB/s [ 150.1 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 0.209 MB/s [ 51.0 IOPS]

Test : 1024 MiB [P: 0.5% (32.6/6509.8 GiB)] (x2)
Date : 2015/05/28 11:39:59
OS : Windows Server 2012 R2 Server Standard (full installation) [6.3 Build 9600] (x64)


I still aim to start with a 12-disk pool with 2 hot spares, for 10 TB of raw space. Then I can expand the same pool twice if I choose, or make new pools, and have plenty of space to do it with.
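In case anyone wants to check the math, here is the raw-versus-usable arithmetic as I understand it - a sketch that assumes 1 TB drives and that double parity costs roughly two drives' worth of capacity (the exact usable figure depends on the column count Storage Spaces picks):

    drive_tb    = 1.0                      # 1 TB drives (assumed)
    total_disks = 12
    hot_spares  = 2

    pool_disks = total_disks - hot_spares  # 10 drives actually holding data/parity
    raw_tb     = pool_disks * drive_tb     # 10 TB raw, as stated above
    usable_tb  = raw_tb - 2 * drive_tb     # rough double-parity overhead of ~2 drives (assumed)

    print("raw   :", raw_tb, "TB")         # 10.0
    print("usable:", usable_tb, "TB")      # ~8.0, give or take the column layout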
 
I'm trying to figure out if I can use a RAM disk as a write cache, but haven't been able to make it work. With 32 GB of RAM in this server, I can spare 10 for a RAM disk.
 


The little dips are when it's got to write a bunch of small files, but for the most part it saturates the gigabit link just fine - that's good enough for this application!
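For context, "saturating the gig" works out to roughly this much in MB/s - the 10% overhead factor is just a rule-of-thumb assumption:

    wire_mb_s   = 1000 / 8           # 1 Gb/s link = 125 MB/s on the wire
    typical_smb = wire_mb_s * 0.9    # knock off ~10% for Ethernet/TCP/SMB overhead (assumed)
    print(round(wire_mb_s), round(typical_smb), "MB/s")   # 125 112 MB/s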
 

Some pros and cons:

As I said, I don't like storing that data in ZFS extended attributes. Linux/SAMBA is more modular, and that is good for troubleshooting.

To understand why, dig deeper into Samba 4.

CIFS is always slow compared with NFS 4 :p... but CIFS is a plus when sharing among Windows machines.

-------
The reason I'm replying to your post: to clarify the difference between Solaris (and its open forks) and Linux (SAMBA running on top, with ZoL running alongside it).

What is the best solution? Both - it depends on your expertise and your goal.

I'm trying to give a slightly clearer picture: Samba 4 is starting to nail every corner. If someone prefers Solaris CIFS or Samba 4, let them decide, rather than just declaring this one bad and that one good.

Cya
 