The absolute cheapest SAN solution possible

lozaning · 2[H]4U · Joined Mar 30, 2005 · Messages: 3,757
As the title says, I'm curious as to your opinions on how to go about building the cheapest possible SAN.

The goals for this project are in order of importance:
  • Spend less than $800
  • Have a lot of storage, at least 1TB
  • Fast transfer/access speed
  • Fancy software features

This is for my own home lab and my own personal learning experience with the technology. None of the data that will be stored is mission critical or really of any significant financial value, so that is a rather low priority. I've looked into doing iSCSI with FreeNAS and Openfiler, but that seems much easier and not as much of a challenge, and like I said, the main reason I'm doing this is to learn about something I have relatively little experience with.

So far I've done somewhere in the neighborhood of 10 hours of reading on this stuff, and have gathered that to do this I'll need:
  • a disk enclosure
  • a fibre switch
  • HBA cards
  • GBICs
  • fibre cable

I already have the HBA cards for all my servers, and enough GBICs to fill a potential switch for connectivity to all servers and the disk array, and can get free fibre cable from a friend.

All of this stuff can be found easily enough on eBay, but my main concern is the interoperability among devices. If I buy a Xyratex RAID array and connect it to an IBM switch, will that still work? My second question concerns the controller: from what I've read, I need some kind of controller to sit between the servers and the disk array so that the servers can actually access the disks. From what I've gathered, sometimes this is built into the disk array enclosure and sometimes it's not.

So far I'm looking at something along the lines of this for the storage, and this for the switch. Will these work together, or does that EMC DAE work strictly as part of a larger EMC system?

If those wouldn't work, what would you guys recommend to get the most storage for under $800? I could also prolly stretch the budget up to $1,000 if the storage enclosure had the ability to use SATA drives, so I could potentially upgrade all the drives to 2TB ones later on down the road and get a lot more storage.

Also if you have any good links to places to read more about this stuff that would be awesome too.
 
Look for an old Left Hand SAN.

You're not gonna get one for $800, btw.

If you want a new SAN, it's gonna run you $15k+. Otherwise you're just building a NAS device.
 
Out of curiosity, what does the Left Hand solution offer that the things I linked to don't, aside from much more storage?
 
You would probably be looking at iSCSI vs fibre channel.

Fibre channel is expensive as hell to get in a SAN. Left Hand is now owned by HP, so you will have to research the feature sets of each SAN.

An EMC for $600? I would question that, to be honest, but give it a try.
 
I'm actually looking at the same sort of thing...

Currently I've got a Rackable Systems server with 4x SATA hot swap (running 4x 2TB in RAID 5) running FreeNAS (this was dirt cheap). The files are shared via Samba for my Windows/Apple boxes, and to my ESX & XenServer hosts via NFS.

I've heard that between NFS and iSCSI there isn't much in it (and NFS seemed a damn sight easier to configure :)
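
To show what I mean by "easier", here's a rough sketch of all NFS needs on the Linux side; the export path, client subnet, and server address are just placeholders for whatever your box actually uses:

```python
# Roughly all NFS takes, sketched as a Python driver on the client side.
# The export itself is one line on the server (Linux /etc/exports syntax):
#
#   /mnt/tank  192.168.1.0/24(rw,async,no_subtree_check)
#
# followed by `exportfs -ra` to apply it. The path, subnet, and server
# address here are placeholders, not anything FreeNAS-specific.
import subprocess

subprocess.run(
    ["mount", "-t", "nfs", "192.168.1.5:/mnt/tank", "/mnt/nfs"],
    check=True,
)
# Compare with iSCSI: define a target + LUN on the server, run discovery
# and login on every initiator, then partition and format the block device
# that appears -- noticeably more moving parts.
```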

I'm keen to understand my options for a new fileserver.

L
 
To be frank, I don't think it's possible to set up a SAN environment for $800. This type of environment is best learned when you work for a company, since they will have support and access to software to make this all work. The only home labs I have seen with a SAN environment are those whose owners bought or got EOL devices from their employer.

The EMC DAE will not work independently of a DM or controller. Since we are talking about a CX series chassis, that's going to be big bucks.

At a minimum you are going to need to buy an array (meaning disks + RAID controller), an FC switch (with recent software), and hosts. Some of the FC switches may not even come with software, and you may not be able to find any online (without a software support contract).

If you can get an iSCSI network running and fundamentally understand it, that will give you a good foundation for learning FC.
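
For instance, on a Linux host with open-iscsi installed, bringing a target online is only a few commands. A minimal sketch driven from Python, where the portal address and the target it returns are stand-ins for your own lab:

```python
# Minimal open-iscsi bring-up, driven from Python. The portal address is a
# placeholder for whatever box (FreeNAS, Openfiler, ...) is exporting the
# target; nothing here is specific to any one vendor.
import subprocess

PORTAL = "192.168.50.10"  # hypothetical address of the iSCSI target

def run(*args):
    """Run a command, raise on failure, and hand back its stdout."""
    return subprocess.run(args, check=True, capture_output=True,
                          text=True).stdout

# 1. SendTargets discovery: ask the portal which targets it offers.
targets = run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL)
print(targets)

# 2. Log in to the first advertised target. Each discovery line looks like
#    "192.168.50.10:3260,1 iqn.2006-01.com.openfiler:tsn.example".
iqn = targets.splitlines()[0].split()[1]
run("iscsiadm", "-m", "node", "-T", iqn, "-p", PORTAL, "--login")

# 3. Verify the session; the LUN now appears as an ordinary /dev/sdX disk.
print(run("iscsiadm", "-m", "session"))
```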

You might be able to pick up a Dell/EMC AX100 series for around $2K, and that would cover your array. You'd still need a fabric switch from Cisco, Brocade, etc. to simulate actually having a SAN. You could just plug a host directly into an AX100, but there isn't too much to learn beyond having a LUN presented to your hosts.
 
I was able to pick up an AX150i dual iSCSI, with 12x 250GB HDs, for under $3,000.

Above your budget, but for something decent it's not bad.

You might need to settle for something like these if you are on a small budget.
http://www.newegg.com/Product/Product.aspx?Item=N82E16816111135
http://www.newegg.com/Product/Product.aspx?Item=N82E16816111094

You can definitely get more bang for your buck with used or refurbished gear, though there are other risks involved with that. You could probably piece together a small system as well. Either way you are going to be limited, but if you piece together a system you can leave it open for expansion. One of these two is just stuck with 4 drives.

EDIT: You might have better luck if this was moved to the Data Storage Systems category on the forum.
 
LOL, this is a good thread; I was gonna create a thread myself on this exact same question. I can create a new thread if you guys would rather, or if the OP doesn't mind I'll ask in this thread as well.
 
I think I'm currently weighing up Windows Home Server vs. Openfiler.

It looks like many of those involved in the storage showoff thread are running WHS. I don't know if I've fully understood this, but it looks like it doesn't use RAID; it just "pools" your disks into a big storage pot which you can then share. The beauty of this is that you can mix/match drives and add more at any time. The disadvantage, presumably, is that you are limited by the speed of a single disk. I think there is an option to use one or more drives for redundancy... still need to do more research :)

I'm going to start hunting for a rackmount chassis which has 8 or more hot-swap SATA bays...

L
 
Contemplating the Norco 2208 (or 2008) chassis, but not sure if I can get one in the UK.
 
Initially my thought was "not gonna happen", but if it doesn't need to be cutting edge then there is probably a lot of room out there. I know that when I've seen refurbished fibre channel drives, the price is comparatively super cheap (since the demand for refurbished fibre channel drives is probably TINY). Things should interoperate fairly well, but do check beforehand to see if the manufacturer of any piece of equipment has a compatibility matrix, especially for GBICs, which some manufacturers are picky about.

For $800 you're going to need to make your own.

ebay has some interesting stuff: http://cgi.ebay.com/LOT-24-HITACHI-..._Components&hash=item4cf07b9701#ht_1136wt_911
 
WHS is great for multiple HDDs of different types. And yes, you can pick what you want duplicated. One downside is only being able to use one gigabit NIC; WHS won't allow the use of more than one. I'm not sure if they changed this in Vail (the new version).
 
I just did some googling and it appears link aggregation/NIC teaming/bonding is doable in WHS; not quite sure what you've heard/read? Possibly you're referring to trying to make the WHS multi-homed?

I'm keen to understand where the bottleneck would be: Disk(s) / Network.

If it's the network, then LAG/teaming would increase performance... if it's the disk(s), I'm not sure if WHS has a way of tackling this. More research :)
 
I would maybe look for a used NetApp FAS270 in a DS14mk2, pick up a cheap 2Gb FCP switch (a McData 4500 or something?), fibre cables from Monoprice are cheap, and you can pick up QLogic fibre HBAs and 2Gb SFPs dirt cheap on eBay.

Pretty sure we recently bought a spare FAS270 for one of our lab filers for like $600-$800. And if it's a lab, you only need one head in your filer.

Fiber switch: http://cgi.ebay.ca/McData-Sphereon-...=ViewItem&pt=COMP_EN_Hubs&hash=item53e21cb672

SFPs: http://cgi.ebay.ca/FTRJ-1319-3-2-5-...tem&pt=LH_DefaultDomain_0&hash=item5193716b15

Disk Shelves: http://cgi.ebay.ca/23R2965-IBM-NETA..._EN_Networking_Components&hash=item1e5caa8a3e

Filer head (this particular one is kinda pricey, but I'm sure you could find cheaper): http://cgi.ebay.ca/IBM-N3700-NetApp...tem&pt=LH_DefaultDomain_0&hash=item53e1be4b94

QLogic fibre HBA 2Gb: http://cgi.ebay.ca/QLogic-QLA2340-2..._EN_Networking_Components&hash=item335b7ea757
 
Apple xServe RAID. They can take up to 14 750GB ATA drives and perform relatively OK. For the 1TB+ that you require, you could probably get one in very good shape with all the drive trays, cables, and documentation for about $1,000. The only downside is they are limited to 7,200 RPM drives, so you don't get the speed of Ultra SCSI or SAS 10,000/15,000 RPM drives in RAID.

It may not be the best, but it's about as close to your budget as you are going to get.

Also, you could look into the Buffalo TeraStation. We just got a TS-RIX6.0TL/R5 (6TB) for $1,800. Currently it is set up as an iSCSI SAN hanging off our backup server for extra room. It works well so far.
 
Also, to those talking about WHS and teaming: while you may or may not be able to team in WHS, the preferred method for an iSCSI SAN is to use two independent NICs plus a dedicated iSCSI switch, and use multipathing + jumbo frames. Doing it this way will end up giving you more throughput and lower latency.
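
As a rough illustration of that layout on a Linux initiator (interface names, subnets, and addresses below are invented; WHS/Windows would do the equivalent through MPIO and driver settings):

```python
# Two-NIC iSCSI layout: each storage NIC gets its own subnet and jumbo
# frames, and dm-multipath then balances I/O across the two sessions.
import subprocess

IFACES = {"eth2": "192.168.60.11/24", "eth3": "192.168.61.11/24"}

for dev, addr in IFACES.items():
    # Jumbo frames only help if every hop (NIC, switch port, SAN
    # controller) agrees on the same MTU.
    subprocess.run(["ip", "link", "set", dev, "mtu", "9000"], check=True)
    subprocess.run(["ip", "addr", "add", addr, "dev", dev], check=True)
    subprocess.run(["ip", "link", "set", dev, "up"], check=True)

# After logging in to the target once per NIC (see the iscsiadm sketch
# earlier in the thread), dm-multipath folds the resulting /dev/sdX paths
# into a single /dev/mapper device; this just prints the path map:
subprocess.run(["multipath", "-ll"], check=True)
```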

For switches I would recommend a Dell 53xx/54xx or an HP 2810/1810. (I have used both HP switches; the 2910 is really the preferred one, but most can't afford it. The 2810 has excellent throughput even when completely saturated, and the 1810 works well in a small environment.)
 
2 questions :)

The Apple xServe RAID: is it ready to rock? What "hardware" does it contain (like CPU/RAM etc.), and what operating system does it run? If it's not "ready to rock", what does one need to get it up and running?

With regard to the NetApp route... I was looking at that at one point but have to admit I got completely lost. What are the components one needs to have a functioning "SAN"? i.e. shelf + head + fibre switch? Or?

Re: the switch: I have a Linksys/Cisco SRW2024, which supports VLANs/LAG and quite a few other useful features, from what I understand. I've yet to purchase a fibre switch (or fibre module(s) for the SRW2024); again, I have a lot of questions/research I need to complete first :)

Really appreciate everyone's help, and feel free to tell me to start a new thread or threads if you feel I'm jacking this one!

L
 
I was able to pick up an AX150i dual iSCSI, with 12x 250GB HDs, for under $3,000.

I use this very same SAN at my job, though I've been trying to convince my supervisor to upgrade the processor in it to a dual-core. ;)
 
2 questions :)

The Apple xServe RAID: is it ready to rock? What "hardware" does it contain (like CPU/RAM etc.), and what operating system does it run? If it's not "ready to rock", what does one need to get it up and running?

The xServe RAID is a closed system, so other than giving the specs and abilities of the RAID controllers, there isn't anything to be swapped or upgraded. The SAN is ready to go out of the box and just needs to be cabled and configured. I have only worked with them on Macs before; however, according to Apple they will work just fine as an iSCSI target for Windows and Linux. As Apple doesn't support virtualization on any platform, they wouldn't comment on using it with VMware.

Re: the switch: I have a Linksys/Cisco SRW2024, which supports VLANs/LAG and quite a few other useful features, from what I understand. I've yet to purchase a fibre switch (or fibre module(s) for the SRW2024); again, I have a lot of questions/research I need to complete first :)

If you go with the Apple you would need fibre; if you go with another type you may not need fibre. In the last four SAN installs I have done, we have used all copper instead of fibre.

Just remember that if you decide not to get a dedicated SAN switch, you will need to VLAN off the two fibre ports + two gigabit ports for your server.
 
Wow, that makes the xServe RAID a really cheap, tempting option (cheap on eBay). Presumably I would need to mount the iSCSI device on my domain controller or something to allow network users to use it via Samba?
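
Something like this rough sketch is what I'm picturing once the LUN is formatted and mounted; the share name, mount point, and group are all made up:

```python
# Hypothetical: the box that mounted the LUN (at /srv/san, say) re-exports
# it to Windows users by appending one share stanza to smb.conf.
import subprocess

SHARE = """
[sanstorage]
   path = /srv/san
   read only = no
   valid users = @users
"""

with open("/etc/samba/smb.conf", "a") as conf:
    conf.write(SHARE)

# Ask the running smbd to re-read its config without a restart.
subprocess.run(["smbcontrol", "all", "reload-config"], check=True)
```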

Is the VLAN separation to prevent broadcast traffic slowing things down?
 
I would suggest getting a cheap DAS, or a cheap used server with a bunch of local storage, and putting Openfiler on it. It has a built-in iSCSI target, and you can add a fibre card if needed.
 
Sounds like the option I'm going to end up going with. My main problem is that I'm struggling to find a used rack server w/ 8 hot-swap SATA bays (I don't really know what to search for, lol). I was bidding on a Dell PowerEdge R510 but the price went up too high!

I might install Openfiler and run some speed tests, then WHS and do the same. I don't expect there to be much difference, in which case I'll go with Openfiler :)
 
I would maybe look for a used NetApp FAS270 in a DS14mk2, pick up a cheap 2Gb FCP switch (a McData 4500 or something?), fibre cables from Monoprice are cheap, and you can pick up QLogic fibre HBAs and 2Gb SFPs dirt cheap on eBay.

Pretty sure we recently bought a spare FAS270 for one of our lab filers for like $600-$800. And if it's a lab, you only need one head in your filer.

Fiber switch: http://cgi.ebay.ca/McData-Sphereon-...=ViewItem&pt=COMP_EN_Hubs&hash=item53e21cb672

SFPs: http://cgi.ebay.ca/FTRJ-1319-3-2-5-...tem&pt=LH_DefaultDomain_0&hash=item5193716b15

Disk Shelves: http://cgi.ebay.ca/23R2965-IBM-NETA..._EN_Networking_Components&hash=item1e5caa8a3e

Filer head (this particular one is kinda pricey, but I'm sure you could find cheaper): http://cgi.ebay.ca/IBM-N3700-NetApp...tem&pt=LH_DefaultDomain_0&hash=item53e1be4b94

QLogic fibre HBA 2Gb: http://cgi.ebay.ca/QLogic-QLA2340-2..._EN_Networking_Components&hash=item335b7ea757

What are your thoughts on this disk shelf: http://cgi.ebay.com/NetApp-DS14MK2-...tem&pt=LH_DefaultDomain_0&hash=item3a567c2bee

It already has drives, power supplies, and 2 X5612A cards. Are those cards in any way, shape, or form going to work with what I'm trying to do?

That McData switch looks great, and I already have HBAs, SFPs, and plenty of cable.

Apple xServe RAID. They can take up to 14 750GB ATA drives and perform relatively OK. For the 1TB+ that you require, you could probably get one in very good shape with all the drive trays, cables, and documentation for about $1,000. The only downside is they are limited to 7,200 RPM drives, so you don't get the speed of Ultra SCSI or SAS 10,000/15,000 RPM drives in RAID.
I looked into this, as I have an Xserve Dual G5 render node that I could repurpose as the controller, but I was under the impression that one has to purchase the Xraid software in order to get any functionality out of the Xraid unit. I'm also interested in using this with my ESXi host, so if it doesn't support any virtualization it's prolly a no-go.
 
I looked into this, as I have an Xserve Dual G5 render node that I could repurpose as the controller, but I was under the impression that one has to purchase the Xraid software in order to get any functionality out of the Xraid unit. I'm also interested in using this with my ESXi host, so if it doesn't support any virtualization it's prolly a no-go.

You don't need the XSAN software if you are using the XRAID as a DAS. And the XSAN software is basically just an iSCSI initiator and controller software for multipathing, as OS X doesn't support these out of the box.

As far as ESXi goes, I have no idea. Apple had no comment that I was able to get out of them. As a DAS I can see it working just fine; as a SAN I am not sure.
 
As far as ESXi goes, I have no idea. Apple had no comment that I was able to get out of them. As a DAS I can see it working just fine; as a SAN I am not sure.

DAS + Controller w/Openfiler = iSCSI SAN Friendly :D
 
DAS + Controller w/Openfiler = iSCSI SAN Friendly :D


What would this look like with an xServe RAID? Would I still need to use fibre anywhere, and where would the switch go: between the DAS and the controller (seems like that contradicts the definition of DAS), or between the controller and the rest of the servers? Is it possible to use link aggregation and use both fibre ports on the xServe RAID to the controller with Openfiler, then load that controller up with NICs and point each of the servers needing access at a different NIC on my VLAN?
 
What would this look like with an xServe RAID? Would I still need to use fibre anywhere, and where would the switch go: between the DAS and the controller (seems like that contradicts the definition of DAS), or between the controller and the rest of the servers? Is it possible to use link aggregation and use both fibre ports on the xServe RAID to the controller with Openfiler, then load that controller up with NICs and point each of the servers needing access at a different NIC on my VLAN?

The way this would work is:

The xServe connects to the xServe RAID using fibre.

The xServe connects to a SAN switch (or VLAN) that then connects to all the servers that will be accessing the iSCSI target.

As far as LAG goes, LAG is not used for a SAN. The reason is that the controller on the SAN is not capable of teaming its NICs. That is why you use multipathing; it allows I/O jobs to fully saturate multiple NICs without teaming.

As far as where the switch goes, it sits between the SAN and the servers using the iSCSI target. You are creating a whole separate LAN for your SAN traffic, which is why an independent switch is recommended, specifically one that will perform well under full load. You can use a single switch and do VLANs; however, as your switch becomes more saturated with normal LAN traffic, your iSCSI performance will suffer.
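
If you do go the single-switch VLAN route, on a Linux host the tagged interface is a couple of iproute2 calls. A hedged sketch, with the VLAN ID and addressing invented:

```python
# Single-switch fallback: put iSCSI traffic on its own tagged VLAN so
# storage I/O is at least logically separated from normal LAN traffic.
# The switch port has to be configured to carry the same tag.
import subprocess

PARENT, VLAN_ID = "eth0", "100"
vlan_if = f"{PARENT}.{VLAN_ID}"  # e.g. eth0.100

# Create the tagged sub-interface, give it a SAN-only address, bring it up.
subprocess.run(["ip", "link", "add", "link", PARENT, "name", vlan_if,
                "type", "vlan", "id", VLAN_ID], check=True)
subprocess.run(["ip", "addr", "add", "192.168.100.11/24", "dev", vlan_if],
               check=True)
subprocess.run(["ip", "link", "set", vlan_if, "up"], check=True)
# The SAN's ports go in the same VLAN, and the initiator then discovers and
# logs in over 192.168.100.0/24 exactly as it would on a dedicated switch.
```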
 
I've ended up with a 3U Supermicro chassis with 15x hot-swap SATA bays (the server is 2x dual-core AMD Opterons w/ 8GB RAM + 2x 8-port LSI MegaRAID controllers) from eBay for $550 (yes, it's going to cost me $350 to get it shipped to the UK).

Now I'm looking at drives; thinking of starting with 8x 1.5TB WD Green disks.

If I have the patience I'm going to try FreeNAS with hardware RAID, FreeNAS with ZFS, and WHS.
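
If the ZFS route wins, pool creation at least is trivial; a sketch assuming FreeBSD-style device names (da0-da7 are placeholders for my eight drives, and "tank" is an arbitrary pool name):

```python
# ZFS pool creation is a single command once the disks are visible.
import subprocess

disks = [f"da{i}" for i in range(8)]

# raidz is single-parity: roughly one drive's worth of capacity goes to
# parity, and the pool survives one drive failure.
subprocess.run(["zpool", "create", "tank", "raidz", *disks], check=True)
subprocess.run(["zpool", "status", "tank"], check=True)
```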
 