SAN for Small Business Recommendations

What is the most bang-for-the-buck SAN? I have a few companies that plan to go virtual and I'm wondering what the best model to look at is. We are a Dell reseller, but they seem to start at around $13-15K with SAS drives.

Just curious thanks!
 
How small is the business? If it's really small (like 40 or 50 people), local storage and a VSA might also be an option.
 
Most of the businesses are under 50 people. We tend to use Hyper-V instead of ESX.
 
$13K to $15K actual cost for a decent starter SAN is about as good as you're going to get. What price range are you targeting?
 
Wasn't targeting anything specific, just wanted to see what I should be looking at.

So there's nothing like a VSA for Hyper-V?

Another member here told me about using Hyperoo as a type of backup to another physical server.
 
Use something like a larger Synology or QNAP, a 5-, 8-, or 12-bay unit. They work well. They don't have all the features, but you can do it, drives included, for half the price of the units you mentioned.
 
Now is that better than just doing local storage?

I am still a noob at this, so I'm not sure what the benefit is compared to local.

I know I could do high availability if I use higher-end packages. But I also see that warranty repair will be slower compared to local storage on a Dell server.

I appreciate the fast responses.
 
I like to have my storage separate from my hosts for maintenance purposes. You can use DAS and some sort of VSA and it works... but then everything depends on that host.
 
It also depends on how many VMs you are going to run off of it, and what the VMs are.

I currently use an EOL QNAP R410U with 6 VMs on it and it all runs perfectly.

Adding Exchange put more load on it.

[Attached screenshot: QNAP resource usage after adding Exchange]


I know a few people who use QNAPs and LOVE THEM!
 
The newer Iomega line demoed at EMC World looked promising as well. I love my PX4-300d. I currently run about 20 VMs over NFS and it works great, including an SRM lab with vSphere Replication.
 
What is the most bang-for-the-buck SAN? I have a few companies that plan to go virtual and I'm wondering what the best model to look at is. We are a Dell reseller, but they seem to start at around $13-15K with SAS drives.

Does a 50-person company really need a SAN?
 
Does a 50-person company really need a SAN?
Depends. I think we need to stop thinking that "SAN" is this behemoth storage system that you have to spend hundreds of thousands of dollars on.

There are several products out there that are positioned for the SMB market, e.g. EMC VNXe, Dell EqualLogic, HP LeftHand, not to mention some of the beefier products coming out of Synology and Iomega, that provide enterprise features at SMB prices.

Some of these smaller businesses may need those features, some may require more capacity than the lower end can offer, etc. It all depends on the specific needs.
 
We use an HP P2000, which is an MSA "SAN". With dual controllers and 15K SAS it comes in right around that price point.

You could pare it down by using a single controller and midline SAS drives, but it depends on its function; I wouldn't recommend that for production.
 
I keep trying to get one sent over as well... but you / Varrow have a bit more pull. :-P

Talking on Twitter, talking on here...

I'll ping him when I'm done reviewing the Synology DS3611xs....

And as for a 50-person company needing a SAN... Vader nails it. SANs aren't just 6- or 7-figure storage frames... they can be $5K, $15K, whatever. Storage is hella cheap these days, and if you really want to use virtualization you need one.
 
NetJunkie, what site do you review for?

So another question: what is a good way to estimate disk needs?

From experience, in physical servers I never use SATA drives unless it's a very, very small office with just file serving. Normally I do RAID 1 for the OS and RAID 5 for data, in 15K SAS.
 
Really depends on your business requirements.

I personally don't like how everyone automatically thinks SAN when they want virtualisation, or how it is pushed by the reseller.

If you need vMotion/HA and so on, then yes, you need shared storage. Does it need to be a SAN? No.

Dedicated SANs are very expensive. What happens if that SAN dies? You lose all the VMs on those hosts, so then you need to look at having two SANs; there is no point in having a resilient front end and not looking at your storage. A good product is VMware's VSA appliance: you get the best of both worlds, within reason (there are limitations and slight downsides).

The question is price vs. functionality. If the downtime of the business costs more than implementing shared storage (done correctly) and maintaining it, then go for it; if it doesn't, then don't (unless you have lots of spare cash).

If the business doesn't require it (which covers most SMBs), then get yourself 2 or 3 nodes with local drives and a good backup plan; you can even get a NAS and back up from your local storage to the NAS with something like Veeam. This works out cheaper, gives increased resilience (compared to one SAN), reduces complexity, and leaves you with a good amount of flexibility.

These are just my thoughts; lots of people don't agree with local storage, but it really all depends on your business requirements. I will only look at a SAN/NAS for shared storage when I must have it (a last resort); prior to that, I will only look at local storage. Don't get me wrong, I love vMotion and all of that, but does my business really need it in a production environment? No.



:)
 

I've had really, really bad luck with Iomega. Went through two IX-series rack-mount NAS units with total data loss, which they replaced with a PX-series rack mount that is poor at best. Bad product, bad support, bad results. I picked Iomega due to it being on the HCL and its association with big brother EMC. Mistake.

If I had used Iomega as a shared storage medium I'd be out of a job. I only used it as a dumping ground for backups and some misc. storage.

I'm on the Veeam + 2 hosts + local storage model right now. Replicate from one host to another in VMware, back up to a NAS. (Plus replicate offsite to a 3rd host.)

35 employees. Right now I can't justify the cost and complexity of a SAN. Also, it's not only the cost of the SAN but another $5K for VMware licensing for all the grown-up functionality.

Honestly I could get the funds, but when I factor in the complexity and the single point of (total) failure, I find myself just not able to justify it. VSA makes me think a lot... Having things so simple has its own advantages for small shops.
 
Iomega is pretty nasty.

I would definitely recommend the QNAP Pros; they are unbelievably good buys and on the VMware HCL.
 
Really depends on your business requirements.

I personally don't like how everyone automatically thinks SAN when they want virtualisation, or how it is pushed by the reseller.

If you need vMotion/HA and so on, then yes, you need shared storage. Does it need to be a SAN? No.

What other shared storage is there? I sure hope you're not planning on putting production VMs on a NAS :eek:

Dedicated SANs are very expensive. What happens if that SAN dies? You lose all the VMs on those hosts, so then you need to look at having two SANs; there is no point in having a resilient front end and not looking at your storage.

The IBM DS3500 is an excellent SAN and comes in at around 15K for about 8TB of 10K SAS, dual controller. Same for Dell MD3600, HP MSA. That is not expensive.

If your SAN dies, you call the tech support that was included in your package (24x7x4 support). They should respond within 4 hours and you should be back in action pretty quickly. SANs don't just die; it has to be a catastrophic disaster. They have redundant pathways to the drives, redundant power, redundant controllers... everything is redundant. Of course, that is exactly why you back up your VMs from the SAN to a cheap Netgear or similar NAS, and then to tape if you want to. Anyone not doing this is missing a few bolts in the noggin.

A good product is VMware's VSA appliance: you get the best of both worlds, within reason (there are limitations and slight downsides).

This could potentially work, but it's not ideal for 2+ servers attempting to use vMotion/HA. But yes, it'll function as the poor man's SAN. You have to ask yourself about the administrative cost of maintaining that vs. a true SAN, and maybe then you'll fork over the extra $5K for the DS3500/MD3620/MSA.

The question is price vs. functionality. If the downtime of the business costs more than implementing shared storage (done correctly) and maintaining it, then go for it; if it doesn't, then don't (unless you have lots of spare cash).

Very true. However I can pretty much guarantee you that any company with 50 employees will lose far more than the 15K a small SAN costs if they're down for a day. Employee salaries, product not sold, customer confidence in the company goes down, etc.
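
To put rough numbers on that (the figures below are purely illustrative assumptions, not from anything above), even a modest loaded cost per employee makes a day of downtime dwarf the price of an entry-level array:

```python
# Back-of-envelope downtime cost, using made-up illustrative numbers.
EMPLOYEES = 50
LOADED_COST_PER_EMPLOYEE_HOUR = 40   # salary + overhead, assumed
LOST_REVENUE_PER_HOUR = 1000         # product not sold, assumed
HOURS_DOWN = 8                       # one working day

cost_of_outage = HOURS_DOWN * (EMPLOYEES * LOADED_COST_PER_EMPLOYEE_HOUR
                               + LOST_REVENUE_PER_HOUR)
print(f"One day down: ~${cost_of_outage:,}")   # ~$24,000 vs. a ~$15,000 SAN
```

And that doesn't even count the hit to customer confidence.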

If the business doesn't require it (which covers most SMBs), then get yourself 2 or 3 nodes with local drives and a good backup plan; you can even get a NAS and back up from your local storage to the NAS with something like Veeam. This works out cheaper, gives increased resilience (compared to one SAN), reduces complexity, and leaves you with a good amount of flexibility.


These are just my thoughts; lots of people don't agree with local storage, but it really all depends on your business requirements. I will only look at a SAN/NAS for shared storage when I must have it (a last resort); prior to that, I will only look at local storage. Don't get me wrong, I love vMotion and all of that, but does my business really need it in a production environment? No.

I bet if I did an analysis of your environment, you are spending more money on local storage and its administration, with less functionality and less resilience, than you would on a small SAN. You still have to buy the drives, and you probably have to purchase more drives than you really need because the storage is spread out in silos. You spend more time using Veeam to move data around and micro-manage things than you would with a SAN.
 
The new Iomega stuff is nicer than QNAP.

The EMC software ROCKS!

Proof :p

There is nothing wrong with having a production NAS for VMs. They work pretty well. I just don't use iSCSI: NFS, plus Veeam to another NAS or USB. Granted, though, my environments don't go over 75 users.

I also use the VMware VSA (storage appliance); it's not bad either, you just need your own server. This is another alternative. It works great, and if you have a license for Essentials Plus you already have it.
 
Don't some NAS units do iSCSI? I thought that was preferred for VMs?

When do you use NFS vs. iSCSI?

Thanks, this thread is great.

I also have been doing everything with Hyper-V. My first installation was for a small police station: 8 users inside and 5 in cars. The host is a Dell T410, 16GB, with 4x300GB 15K SAS plus a hot spare, in RAID 10.

VM1 is Server 08 with 8GB for DC, file, print, and 2 small applications.
VM2 is a Server 03 box with an old records program in ColdFusion.
VM3 is Server 08 with a SQL database for another, newer records program.

Running great. I mis-ordered memory (should have had 32GB), but the machine is fast. It went smoothly. Did a P2V using Disk2vhd for VM3.

I plan to wipe the old server (RAID 1, T310, SATA drives) and use Hyperoo for backup.
 
Don't some NAS units do iSCSI? I thought that was preferred for VMs?

When do you use NFS vs. iSCSI?

Thanks, this thread is great.

Some SANs can do it all: FC, iSCSI, and NFS. There is no "preferred" storage protocol; each presents advantages and trade-offs to weigh against your organizational needs.
 
Don't some NAS units do iSCSI? I thought that was preferred for VMs?

When do you use NFS vs. iSCSI?

Thanks, this thread is great.

Some people use NFS, some use iSCSI. There is no right or wrong protocol. It depends on the situation, the budget, and ultimately preference.
 
What will you be hosting on the VMs? That will decide whether you need a NAS vs. a SAN (and most likely you need a NAS). Go NFS; it's easier to manage. The biggest obstacle is determining how you will back up and/or replicate it.

And a full SAN will not be $13-15K. I think you may have your terminology slightly skewed...
 
The Dell unit I looked at was near $15K with a single controller and only something like 6-8 drives.
 
@OP

I'm in a similar position to you, except that I'm a customer, not a reseller. I was looking at the Dell MD3200i as a SAN node; with dual controllers, 2x 500GB Near-Line SAS drives, and 10x 1TB Near-Line SAS drives, it comes in at $14K.

As another has said, you don't necessarily need a SAN, and an iSCSI box isn't a SAN anyway - it's a node in a SAN. But that node might be all that you need. Or you might not need it at all - it depends on how much data your VMs will be handling.

For my company's environment, I plan on using a Dell R510 to hold and run the VMs - they'll be run from a RAID 10 array off an H700 controller, and backed up to tape.
 
So Hyper-V doesn't support NFS or am I reading this wrong?

So since iSCSI is block-based, do I need to cut a section of the storage pie out for each VM? Can you still do dynamic disks?
 
Hyper-V in its current state does not support NFS; however, 2012 will support file protocols (SMB 3.0).

You do not need to carve out storage for each VM when using block storage. You create a volume and assign it to Hyper-V or vSphere; Hyper-V puts .vhd files, along with config files, on that storage for each VM, and vSphere does the same, except it uses .vmdk files in the case of block storage. vSphere, however, does support NFS.
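
To illustrate the point that one shared volume simply ends up holding per-VM disk files, here's a minimal sketch; the mount path and folder layout are assumptions for illustration, not any vendor tooling:

```python
# Minimal sketch: walk an assumed volume mount point and group per-VM disk
# files by folder, showing that many VMs share one volume as plain
# .vhd/.vhdx/.vmdk files rather than each needing its own carved-out LUN.
from collections import defaultdict
from pathlib import Path

DISK_FORMATS = {".vhd": "Hyper-V", ".vhdx": "Hyper-V", ".vmdk": "vSphere"}

def inventory_volume(mount_point: str) -> dict:
    """Return {vm_folder: [(disk_file, hypervisor), ...]} for one shared volume."""
    vms = defaultdict(list)
    for f in Path(mount_point).rglob("*"):
        fmt = DISK_FORMATS.get(f.suffix.lower())
        if fmt:
            vms[f.parent.name].append((f.name, fmt))
    return dict(vms)

if __name__ == "__main__":
    # e.g. a CSV mounted at C:\ClusterStorage\Volume1, or an NFS datastore path
    for vm, disks in inventory_volume(r"C:\ClusterStorage\Volume1").items():
        print(vm, disks)
```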
 
Hyper-V in its current state does not support NFS; however, 2012 will support file protocols (SMB 3.0).

You do not need to carve out storage for each VM when using block storage. You create a volume and assign it to Hyper-V or vSphere; Hyper-V puts .vhd files, along with config files, on that storage for each VM, and vSphere does the same, except it uses .vmdk files in the case of block storage. vSphere, however, does support NFS.

The other way is to create a block-based storage volume for each VM's OS disk, and then when you create the VMs you tell it to "use existing disk", and point it at the iSCSI target. This way is more involved though, and more complicated to manage.
 
What other shared storage is there? I sure hope you're not planning on putting production VMs on a NAS :eek:

Tell me what is the real difference between a NAS and a SAN, if they both support the same protocols and have the same hardware ;)

The IBM DS3500 is an excellent SAN and comes in at around 15K for about 8TB of 10K SAS, dual controller. Same for Dell MD3600, HP MSA. That is not expensive.

They are excellent units, but here in the UK they cost quite a bit more, especially if you aren't constantly spending money with the manufacturers; you don't get the premium discounts.

If your SAN dies, you call the tech support that was included in your package (24x7x4 support). They should respond within 4 hours and you should be back in action pretty quickly. SANs don't just die; it has to be a catastrophic disaster. They have redundant pathways to the drives, redundant power, redundant controllers... everything is redundant. Of course, that is exactly why you back up your VMs from the SAN to a cheap Netgear or similar NAS, and then to tape if you want to. Anyone not doing this is missing a few bolts in the noggin.

If you are backing up from a SAN to a NAS and then to tape/online, how is that different from local storage to a NAS and then tape/online?

My point is, in a smaller environment, say you have a single SAN and 3 hosts with 30 servers per host. That is a total of 90 servers. If your SAN dies (which, I agree with you, is unlikely if it's redundant), that is 90 servers down for 4 hours (let's be honest, it will be more than 4 hours).

If a host dies, that is 30 servers down instead of 90, and depending on how the business is run, they can continue using the other 60. If they have a spare old host, they can get it up and running from the backups.
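
To make the blast-radius point concrete, here is the same comparison as a quick sketch, using the illustrative numbers above and assuming the 4-hour replacement window holds:

```python
# Rough blast-radius comparison using the illustrative numbers above:
# 3 hosts, 30 VMs each, and an assumed 4-hour hardware-replacement window.
HOSTS, VMS_PER_HOST, REPAIR_HOURS = 3, 30, 4

total_vms = HOSTS * VMS_PER_HOST                    # 90

# Single shared SAN fails: every VM on every host is down until it is fixed.
san_outage_vm_hours = total_vms * REPAIR_HOURS      # 360 VM-hours

# One host fails (local storage model): only that host's VMs are down.
host_outage_vm_hours = VMS_PER_HOST * REPAIR_HOURS  # 120 VM-hours

print(f"SAN failure:  {san_outage_vm_hours} VM-hours of outage")
print(f"Host failure: {host_outage_vm_hours} VM-hours of outage")
```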

This could potentially work, but it's not ideal for 2+ servers attempting to use vMotion/HA. But yes, it'll function as the poor man's SAN. You have to ask yourself about the administrative cost of maintaining that vs. a true SAN, and maybe then you'll fork over the extra $5K for the DS3500/MD3620/MSA.

VSA isn't as functional as a real SAN, you're right. However, even though it has a limited feature set, the redundancy is actually better than a single SAN's (within reason) if you are using 2N+1, as you are spreading the risk. But it really sucks on storage space, as you can only use 25% of your total raw capacity if you are using RAID 10 underneath. It's not an extra $5K by the way, but I get what you are saying.
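
The 25% comes from two layers of mirroring; here's a rough sketch of the math, assuming local RAID 10 in each node plus a VSA replica on a second node (the node and drive counts are made up):

```python
# Why "25% usable": two layers of mirroring. Each node runs local RAID 10
# (50% usable), and the VSA then mirrors every datastore to a second node
# (another 50%). Node/drive counts below are made-up illustrative numbers.
def vsa_usable_tb(nodes: int, disks_per_node: int, disk_tb: float) -> float:
    raw = nodes * disks_per_node * disk_tb
    after_raid10 = raw * 0.5               # local RAID 10 inside each node
    after_vsa_mirror = after_raid10 * 0.5  # VSA replica on a peer node
    return after_vsa_mirror

raw_tb = 3 * 8 * 1.0                       # e.g. 3 nodes x 8 x 1TB = 24TB raw
print(f"{vsa_usable_tb(3, 8, 1.0)} TB usable of {raw_tb} TB raw")  # 6.0 of 24.0
```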

Very true. However I can pretty much guarantee you that any company with 50 employees will lose far more than the 15K a small SAN costs if they're down for a day. Employee salaries, product not sold, customer confidence in the company goes down, etc.

Not true; the number of employees is irrelevant, it's the data setup and complexity that matter.

For instance, my brother used to work for a company with 6 employees and a turnover of 60 million. Very complex setup.

I help a company of 150 users with a turnover of 12 million. Basic setup.

:)

I bet if I did an analysis of your environment, you are spending more money on local storage and its administration, with less functionality and less resilience, than you would on a small SAN. You still have to buy the drives, and you probably have to purchase more drives than you really need because the storage is spread out in silos. You spend more time using Veeam to move data around and micro-manage things than you would with a SAN.

Well, it all depends on how it's done, but I think in this instance, no :)

Excellent feedback though and thanks for all your input.

Good stuff :)
 
I am somewhat late to the discussion but I can recommend JetStor iSCSI http://www.acnc.com/
I bought three 616iS units last year and couldn't be happier with them.

No annual maintenance fee.
Lifetime (of the unit) phone/email support.
Dual controller units.
Will accept foreign disks without fuss (save BIG $$$ when you upgrade capacity).
Not a single array like many low-end "enterprise" units. You can configure multiple arrays on one unit any which way you like, meaning you can have different RAID configurations on a single unit.

IOPS doesn't cap at 3500 like the shitty EQL PS5000E I had before.

Worth noting that the 616iS doesn't do VAAI, not sure which of their models do it. Not a big deal to me but if you require VAAI then verify that the model you are looking at supports it.

All units are VMware certified.

Didn't have any support issues in the year I've had the units. I did email them with a FW question and to check on the progress of VAAI, and received replies within the hour.

Whenever I am going to buy more storage it will be JetStor yet again. The lack of annual maintenance fees and the ability to plug in your own disks make those units come in way ahead of the competition for my needs. YMMV.

EDIT: I bought 16TB raw units with 7200rpm SAS drives at the time, and while I received a discount they came in under $15K each with dual controllers. That was for the 1GbE version, not the 10GbE.
 
So I was looking at the 8-bay Synology, which is like $2,200 not including drives.

The client I was thinking about is a municipality in one building with 3 separate networks: 1 for the court, 1 for the police, and 1 for the village.

The village would have an SBS server (running Exchange) for 15 users.
The police would need a DC/file server, an Exchange server, and an application server (SQL-driven); this is for 10 users.
The court is like 5 computers, with a flat database program and a DC all on one server.

I was thinking of loading that Synology up with 7 drives, one as a hot spare, in RAID 10, putting a 4-port NIC in a host, and running it all on one host with the SAN. We did put a server in there which could also be repurposed into a second host. Not sure if we could do live migration.

Would this be a good idea? Would the performance be there?
What is needed for live migration? How would the Synology be hooked up? I imagine to a NIC that is set to Internal in Hyper-V?
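
For rough sizing of that layout, here's a quick sketch of the usable space; the drive size is an assumption, since none has been decided yet:

```python
# Rough usable-capacity math for the proposed layout: 7 drives, one kept as a
# hot spare, the remaining 6 in RAID 10 (mirrored pairs, so 50% usable).
# The drive size is an assumption; plug in whatever is actually purchased.
def raid10_usable_tb(total_drives: int, hot_spares: int, drive_tb: float) -> float:
    data_drives = total_drives - hot_spares
    assert data_drives % 2 == 0, "RAID 10 needs an even number of data drives"
    return (data_drives / 2) * drive_tb

print(raid10_usable_tb(7, 1, 2.0), "TB usable")  # 6.0 TB with assumed 2TB drives
```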
 
If you are going to run on 1 physical box, I'd go local storage, provided you get a nice RAID controller with cache and a BBU. 512MB or 1GB of cache on the controller can make a pretty dramatic performance difference over a small NAS unit. That's especially true if you want to use something like CacheCade on the LSI/Dell PERC controllers. Your health status info will all be in one place, your server will (or should, anyway) have dual power supplies (which the Synology doesn't), etc.
 
I was thinking of loading that Synology up with 7 drives, one as a hot spare, in RAID 10, putting a 4-port NIC in a host, and running it all on one host with the SAN. We did put a server in there which could also be repurposed into a second host. Not sure if we could do live migration.

Would this be a good idea? Would the performance be there?

You want to watch out for legal issues wrt confidentiality etc. You may need to ensure physical separation of the networks and data. 3x MS SBS with internal storage and tape backup. Job done.
 