Cheap Whitebox SAN

Sorry BDS, I guess I should have quoted. That post was pointed at REDYOUCH. I have greatly appreciated the discussion you and I have had. I think you understand what I am trying to accomplish here.
 
I think you misunderstood, I agree with you 100%. No offense taken at all!

I just wanted to point out to REDYOUCH that your goals were being achieved.
 
I picked up 4 of these for $75 each

SuperMicro 113-6 1026T-URF

Would one of these serve OK as a small SAN, from a concept perspective anyway? Either way, I don't think we have gotten into a whole lot of discussion around disks. Would a handful of 73GB or 146GB SAS drives and an SSD for cache work out OK, or am I chasing after a headache?
 
Jesus, that's a killer deal.

Yes, they would work for a small iSCSI SAN.
 
So if I use it as an iSCSI SAN, how do I determine what my requirements are from a disk perspective? I would like to see at least 500GB of space, and as fast as possible within reason.
 
Absolutely ignore anyone dissing the Fibre Channel option here.

Remember that it wasn't that long ago that 4Gb fibre was one of the fastest things you could get.

4Gb FC is dirt cheap. I run a VoIP company and have now swapped exclusively to 4Gb FC for storage, simply for the price.

I have 10Gb Ethernet, but it's just unnecessary for us and the switches still don't come cheap. FC for serving up ESXi VMs is perfect. Look at any documentation that compares the technologies and check out the very low latency of FC.

We have a couple of Brocade 200E switches now, but until very recently we just had multiple QLE2462 cards in our storage box. If you put two of these in a box you can serve up to 4 hypervisors.

It's cheap, very easy to set up, and unless you want a redundant SAN it will serve any lab or small production setup very, very well. Just my 2c worth :)
 
So if I use it as an iSCSI SAN, how do I determine what my requirements are from a disk perspective? I would like to see at least 500GB of space, and as fast as possible within reason.

If you are converting these servers into SAN systems, then it's a matter of finding the right software to put on them to make them appear as a SAN.

Grab Openfiler; it's an ISO, it's easy to set up, and it gets your feet wet with iSCSI.

1. Install Openfiler
2. Configure the disks to be iSCSI volumes
3. Have your systems talk to the server via the iSCSI protocol (Google for Win/Mac/Linux instructions; there's a Linux sketch below)

You should have a decent iSCSI SAN running in a matter of hours.
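For the "talk to the server" step on a Linux box, the open-iscsi initiator side looks roughly like this (the portal IP and target IQN below are just examples; use whatever Openfiler shows in its web UI):

# Discover the targets the Openfiler box is advertising (example IP)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the target that discovery returned (example IQN)
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.example -p 192.168.1.50 --login

# The LUN now shows up as a local block device (e.g. /dev/sdb)
lsblk

Windows has the built-in iSCSI Initiator control panel and ESXi has a software iSCSI adapter that do the same job.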
 
The more disks you put in it, the faster it will be. Remember that if you're using gigabit Ethernet, the maximum sequential read is going to be about 110-120 megabytes per second irrespective of the number of disks you put in there.

Random I/O, which will usually be your main workload when accessing VMs, will be completely determined by the number of disks in the array and the array configuration itself, e.g. RAID10, RAID5, etc.

For the fastest performance you will usually want RAID10. I find the best performance, hands down, is using ZFS with striped mirrors and an SSD for L2ARC, but if you choose this route you really need to be using a board with ECC RAM, and plenty of it.
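For reference, the striped-mirrors-plus-L2ARC layout is just this (device names here are made-up illumos-style names; substitute your own):

# RAID10 equivalent in ZFS: a pool of striped mirrors
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0

# Add the SSD as L2ARC read cache
zpool add tank cache c1t6d0

zpool status tank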
 
My server supports ECC, as I will be running this SuperMicro 113-6 1026T-URF.

I have 8 drive bays, so would I want to run 7x 15K SAS drives and one SSD for L2ARC?
 
OmniOS + napp-it really is the way to go with an FC setup. Makes things easy.

With 8 bays you should run 6x spinners in 2x RAIDZ1 vdevs plus an SSD ZIL. For the ZIL you should be running a low-latency drive like an Intel S3700 100GB. The read cache is actually the RAM; 16GB is usually plenty of memory for home labs. Always use the most memory you can afford. If you have extra SSDs around you can use one as a read cache (L2ARC).

For a home lab I have no issues with using 2.5" SATA drives as long as you have some redundancy, like RAIDZ. With 1TB or smaller drives your rebuild time won't be too bad, so double parity (RAIDZ2) isn't really necessary.

Edit:
Forgot to add the reasoning behind the pool I recommended. You want to stick to an even number of "data drives". With a 3-drive RAIDZ1 you have 2 data drives and 1 parity; with a 5-drive RAIDZ1 you have 4 data drives and 1 parity, and so on. With an even number of data drives (2, 4, 8) you will get the best performance. You could also use mirrors, but your usable capacity loss will be 50%.

Having two 3-drive RAIDZ1 vdevs in one pool gives you good performance. Basically you are striping the two vdevs together. Think of having 2x RAIDZ1 vdevs as RAID50.
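As a sketch, the 6-spinner-plus-ZIL pool described above would be built something like this (napp-it can do the same thing from its menus; the device names are examples):

# One pool, two 3-disk RAIDZ1 vdevs striped together ("RAID50" style)
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 raidz1 c1t3d0 c1t4d0 c1t5d0

# Dedicated log device (ZIL/slog) on the S3700
zpool add tank log c1t6d0

# Optional: a spare SSD as L2ARC read cache
# zpool add tank cache c1t7d0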
 
The server will have roughly 30GB of RAM, so now I just have to find my 2.5" SATA or SAS drives as well as a ZIL drive and I should be good to roll.
 
Honestly, the S3700 100GB is the way to go for a ZIL. Nothing else (under $500) even comes close. You can find a Dell variant on eBay for less than $100. The ZIL is not something you want to cheap out on.

For spinners, have you got a budget? Don't go with fewer than 6 drives or you'll be passing up performance.

6 of these would be good.

If you need to spend a bit less, these are good too.

Can't beat the price (considering they are new) and I've had very good luck with the seller. They stand behind their drives and warranty.
 
The guy I bought the servers from will give me drives for free; I need to go through his inventory and see if he has anything worth having.
 
Modder man,

It seems you are well on your way to outfitting your home lab.

I would just like to suggest staying away from the easy software solutions like Openfiler and such, not because they are bad products (they are in fact very good), but because the purpose of your setup is to learn.

I did the same thing, and while it took me a lot of time to actually get good at it, I remember that time fondly. It is very, very interesting to dive into things like the Linux kernel SCSI target (it can do iSCSI, Fibre Channel, SRP, ...).

I did all my learning on Ubuntu Server. Not because it is the best, but because there is lots of information out there. The information available is usually enough to get you unstuck, but it still requires piecing things together to make the whole solution work.

Maybe before you decide anything, read up on ZFS on Linux and the Linux kernel SCSI target.
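If you do go the ZFS on Linux plus kernel target (LIO) route, exporting a zvol over iSCSI with targetcli looks roughly like this; the pool, zvol and IQN names below are made up for the example:

# Create a zvol to hand out as a LUN
zfs create -V 400G tank/vmstore

# Back a LUN with the zvol and publish it over iSCSI
targetcli /backstores/block create name=vmstore dev=/dev/zvol/tank/vmstore
targetcli /iscsi create iqn.2015-01.lab.example:san1
targetcli /iscsi/iqn.2015-01.lab.example:san1/tpg1/luns create /backstores/block/vmstore
targetcli /iscsi/iqn.2015-01.lab.example:san1/tpg1/acls create iqn.2015-01.lab.example:esxi1
targetcli saveconfig

The same backstore can then be exported over Fibre Channel or SRP via the other fabric modules, which is exactly the kind of thing worth learning by hand.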

Just my 2(euro)cents. :)
 
What is the difference between an S3500 and an S3700? S3500s seem to be quite a bit cheaper. Are they vastly different?
 
The S3500 has much lower endurance than the S3700. The endurance on the S3700 is in the PB range versus the hundreds-of-TB range for the S3500.
 
Do you have any tips for setting up or configuring the Brocade 200E? I haven't found much about these in home labs. Seems solid for the price, though, that's for sure.

Also, no luck with drives so far. Hanging tight hoping this local guy will come through for me. Haven't had much time to play with the lab anyway.
 
I had to set the management IP via a serial terminal; from there I just set all the ports to auto-negotiate and enabled all 3 port types. The default IP is 10.77.77.77/24, so try that first. If someone before you changed the IP you'll have to use the serial console.


If you don't need FC zoning (think VLANs) it really is plug and play.

Defining which volumes each WWN can see is done on the FC target (the SAN server), not on the switch.
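For anyone following along, the serial-console part on Brocade FOS is roughly this (values are examples and the prompts vary a little between FOS versions):

# Log in as admin over the serial port, then set the management IP interactively
ipaddrset

# Once the HBAs are cabled up, verify the ports logged in (you want to see F-Ports)
switchshow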
 
Received my RAM for the hosts today; all three hosts are now running 24GB and the SAN node is running 27GB.
 
Awesome, have you started installation of the software on the SAN node and the FC setup?
 
No, unfortunately the SAN node has no CPUs in it right now; I'm looking for one or two more on eBay and then I will be set. Would these drives work for the spinning disks?

Savvio 15K.2 or Fujitsu MBE2147RC
 
For the amount of space you are looking for, those disks will work just fine. Be aware that 2.5" 15K disks are a bit of a power hog, but that shouldn't be much of a concern because you aren't running a 25-drive array. The disks aren't any more power hungry than your typical 3.5" 7200RPM disk.

Individually those disks will do 90-130MB/sec. In a 6-disk, 2-vdev RAIDZ1 array I would expect to see 375-500MB/sec reads.

If you have the array set to "sync always", combined with the S3700 ZIL you would see 80-100MB/sec sequential writes. I know that sounds really low, but keep in mind that you are looking for high I/O per second, which the ZIL will help you achieve. The big advantage of the ZIL is that your disks can still read while the ZIL takes on writes, keeping your access times extremely low. Add in the 24GB of read cache you will have and you will be rocking.

Cached reads over the MPIO 4Gb FC link should be around 700-775MB/sec.

I think you are going to find yourself investing in 6 cheap(er) SAS SSDs by 2016. :D Can you say 750MB/sec sequential reads/writes and 300MB/sec 4K QD32 writes at the ESXi host, all while maintaining data integrity?
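The "sync always" bit is just a ZFS property on whatever dataset or zvol you export (the name below is an example):

# Force every write through the ZIL (i.e. the S3700 slog), even async ones
zfs set sync=always tank/vmstore

# Confirm it took
zfs get sync tank/vmstore

On the ESXi side you'd also want the round-robin path policy (VMW_PSP_RR) on the device so both FC paths actually get used.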
 
PCIe NVMe SSD to rule them all. Hopefully by 2016 we will have better support for NVMe devices in Solaris variants.
 
I am sure I will swap to SSDs at some point, but I want to keep this cheap-ish for now. It's the back-end configuration stuff I want to learn, and SSDs or spinning disks will both take care of that role for now. This should get me enough of an array to handle a few VMs. I know this is a highly subjective question, but could I expect this to run 20-30 reasonable VMs?
 
As long as you aren't running a highly active SQL server you absolutely will be able to run 30 VMs. Most applications are more read intensive than write. Even with a few VMs writing heavily you'll still be fine on the reads, because of the read cache and the ZIL absorbing the writes.
 
Doesn't my RAID card have to support JBOD to be able to run RAIDZ? It seems that this card is only able to do hardware RAID. Does that make any sense?
 
Does your 1026T-URF have the optional UIO SAS controller? If it is the UIO card, what model is it?

To answer your question, you should not be using a RAID card, you should be using a SAS HBA, but you can get by with JBOD mode as long as the RAID card doesn't have a cache. If the RAID card doesn't support JBOD you could make multiple single-disk RAID0 arrays and get by, but you will be missing out on the majority of the "wanted features" of ZFS. You could also see performance suffer.
 
The card is an AOC-USAS-S8iR.

EDIT: After some digging, there is no way for this card to serve as an HBA, so all drives will have to be single-disk RAID0 arrays or I will need to find a new card.

Here is the list of cards from Supermicro that supposedly fit in there. I have read about cards being flashed; I don't know if that's a possibility in this scenario.

http://www.supermicro.com/products/nfo/storage_cards.cfm
 
Unfortunately there are only two of those available on eBay and they are quite pricey.
 
I would just get an IBM M1015 and flash it to IT mode. If you look hard enough you can find one for $80 or so. Worth every penny.
 
The Dell H200 has a lower price than the M1015.

The H200 is a 9211-IR that can be flashed to 9211 IT firmware, either with dellizer or with the LSI tools.

I flashed mine with dellizer 9211 IT, since it is easy to follow and I'm sure it always works.

http://www.ebay.com/itm/131414269154
The seller accepts a $55 best offer...

The reason the Dell costs less comes down to the popularity of the IBM M1015.
If you want a 9240, get the Dell H310, which can be flashed the same way as the M1015.
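For what it's worth, the crossflash itself (whichever guide you follow) boils down to a few sas2flash steps from a DOS or EFI boot stick; the firmware file name and controller index below are examples, and on the M1015 most guides have you clear the board with megarec first:

# List the controllers and note the adapter index and SAS address
sas2flsh -listall

# Erase the existing IR flash (controller 0 in this example)
sas2flsh -o -e 6 -c 0

# Flash the LSI 9211-8i IT firmware; skip the boot ROM if you don't need to boot from the card
sas2flsh -o -f 2118it.bin -c 0

# Restore the SAS address you noted earlier
sas2flsh -o -sasadd 500605bxxxxxxxxx -c 0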
 
Will that fit in the UIO slot? I guess what's throwing me off is that a normal card would be upside down in there, wouldn't it? I found this Supermicro AOC-USAS-L8i; I think it is only 3Gb, so will that be a bottleneck? Basically the only slot I have is the UIO, so it has to fit there.
 
The AOC-USAS-L8i is an HBA, but it's based on the LSI 1068E. You won't get more than 350MB/sec per SFF-8087 port. With spinners you won't see a bottleneck.

That being said, you are better off putting a regular SAS2008-based HBA in the PCIe slot and diverting the cables over there. It may take some longer SFF-8087 cables, but it'll be done the right way.
 
I don't have a regular PCIe slot. The QLogic Fibre Channel cards are in the other slot.
 
The AOC-USAS-L8i is an HBA, but it's based on the LSI 1068E. You won't get more than 350MB/sec per SFF-8087 port. With spinners you won't see a bottleneck.

That being said, you are better off putting a regular SAS2008-based HBA in the PCIe slot and diverting the cables over there. It may take some longer SFF-8087 cables, but it'll be done the right way.

Please remember, the 1068E is very old and does NOT support drives greater than 2TB.

It would also be a bottleneck if you are using 10 or more drives, or an expander with more than 10 drives.
 