Business SAN -> iSCSI

I have a lot of questions, just to get a basic understanding of what's common, what's needed, etc., to virtualize a bunch of servers.

The case in point right now is a company with 100 users, 10 servers, in-house Exchange, SQL, a web server, a terminal server, a few business applications (running on SQL Server), and a file/print server.

Exchange is 2007 and uses about 360 GB.
Home/share/etc. directories use about 400 GB of space.

There are two domain controllers on site and two at remote offices, all on 2008 R2 with the domain functional level at 2008 R2 (the two off-site ones can just stay as they are).

So we had some Dell guys come in and tell us we should get an EqualLogic for $16,000, two layer 2 switches, two R710 servers, and VMware vSphere Essentials for $4,000+, plus another $2,000 for three years of VMware support.
And AppAssure for backups.

So I have looked at other options: EMC VNXe, HP 3PAR, NetApp... They all seem to come in at the same price point of $16-20k.

My basic understanding of how this works:
2-3 servers run ESXi (which technically can boot from a 70 MB flash drive).
These then boot up a bunch of VMware images (VMs) that reside on whatever SAN device is out there.
These virtual servers use iSCSI to connect to the SAN/NAS device and use that disk space as if it were a local drive.
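As I understand it, the conversation between a host (or a guest) and the SAN boils down to "discover the targets on the portal, then log in." ESXi has its own software iSCSI initiator for this, so purely as a rough sketch of the idea from a Linux box using open-iscsi (the portal address below is a made-up placeholder, not our SAN):

```python
"""Rough sketch of the iSCSI "discover, then log in" handshake using the
Linux open-iscsi tools.  ESXi uses its own software iSCSI initiator, so
this is only an illustration of the concept, not an ESXi procedure.
The portal address is a made-up placeholder."""

import subprocess

PORTAL = "192.168.10.50:3260"  # hypothetical SAN iSCSI portal (IP:port)

def run(cmd):
    """Run a command and return its stdout, raising if it fails."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Step 1: ask the portal which iSCSI targets it exposes.
targets = run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
print("Discovered targets:\n" + targets)

# Step 2: log in to each target; its LUNs then appear as local block
# devices that the host can format (or, on ESXi, turn into a VMFS datastore).
for line in targets.splitlines():
    portal, _, iqn = line.partition(" ")
    if iqn:
        run(["iscsiadm", "-m", "node", "-T", iqn, "-p", portal.split(",")[0], "--login"])
        print("Logged in to " + iqn)
```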

How far off is this?

I hear about all of these "features" of the out-of-the-box $20k devices, where you can spin up 1 TB of space in a minute or 1,000 virtual desktops in 10 minutes, but I don't know what we would actually need or want to use.

Questions:
I thought VMware used to be free, and now it is something like $80 a computer. What is that $4000 VMware quote for?
What needs to be purchased to run ESXi and some VMware images on a server?

There is a Synology DS509+ here with about 6 TB of drives in it.
Could we technically use it for the same type of setup? It wouldn't be fast, highly available, redundant, etc., but it would technically work, right?

I also created a Server 2008 R2 VM with Exchange 2010 in VMware Player on a computer. That image (VMDK) could be put onto a NAS/SAN over iSCSI and booted up on a server running ESXi, right?

Thanks
 
Well, you got some things right.
ESXi was never free.

It actually requires a 4 GB flash drive, but the installed image is only about 144 MB.

SAN/NAS quotes are all in about the same ballpark. You can look at something like JetStor if you need a cheaper SAN.
They are quoting you VMware Essentials Plus, which actually comes with built-in site recovery and backup. It also comes with the VSA, so if you populate the internal storage on the servers you can use that as a SAN as well.
Stupid Dell being stupid Dell: they should have said get 3 hosts, not 2. Building a cluster with 2 hosts is, I think, asking for trouble.
I would also think about getting 10GbE switches to make the back end fly. You can now purchase Netgear 10GbE switches for about $1,700 (I have two now and my cluster flies).

I use Veeam for backups, and I also use it to replicate whole servers offsite.
The Synology will not hold up to the amount of traffic you want to throw at it. You could use it as a backup target.
As for the last part: yes, you could move your build to a SAN, but you wouldn't just drop the VMDK onto the SAN; it doesn't work that way. You actually need to run it through a tool (vCenter Converter) to upload it to your cluster.
 
As wrench said, you need three hosts, not two.
I personally would consider NetApp or Nimble for an iSCSI SAN. Nimble is definitely worth a look, and their OS is very similar to ONTAP.
 
Dell told us to run at least 3 hosts. We have old servers that would be re-purposed.

It seems like to get one of these "setups" you have to throw about $30k at it.

What is the next level down? What do you lose?

I think we are looking at some redundant servers, redundant switches, and ESXi can somehow move VMs around among the physical servers on its own.

Ten minutes of downtime, or even an hour, is actually not going to hurt this business too badly. I don't know if it is worth the money to them.
 
Well, it depends on how important you think these systems are. When building clusters, always think of failure first. Build things to tolerate failure; if the customer doesn't care, they will pay the price later down the road. You can do it cheaper, but you would need to cut down on SAN costs.
 
My suggestion:
Front-load the budget with a very nice storage device. This SAN is your lifeline, and data loss isn't an option for any IT shop. Everyone says "backup, backup, backup," but restoring data back to a crappy array is only asking for trouble.
NetApp is amazing for smaller builds, has great monitoring and backup tooling, and is easy to expand. I've used EqualLogic iSCSI SANs from before the Dell brand was slapped on them through to their new SSD class. The Dell is a disk array; the NetApp is a disk array with an amazing software stack.

For your hardware, you can use anything as long as your CPU core count and memory are high enough to cover your workload, plus a little room for failure. The debate between two and three hosts only matters if your entire workload cannot run on a single host. I can say with confidence that a 100-user domain controller and a 100-user Exchange mailbox server aren't going to need much CPU. Exchange will need a chunk of RAM, and I'm sure the MSSQL servers will want even more.
Do this:
Find out what your operating workload is, add 30 to 50 percent to it, and buy that much compute power. These days you can buy a 2U server with 48 cores and 256 GB of RAM, which would leave plenty of room for expansion for what you've listed. Throw in the IOPS that the NetApp can provide to your SQL servers and you are golden.
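As a quick back-of-the-envelope for that rule, something like the sketch below works; the per-VM numbers are illustrative guesses, not measurements from your environment:

```python
"""Back-of-the-envelope sizing for the "measure the workload, then add
30-50% headroom" rule.  The per-VM figures are illustrative guesses only."""

# (vCPUs, GB RAM) per workload -- hypothetical numbers for illustration.
workloads = {
    "domain controller": (2, 4),
    "Exchange":          (4, 16),
    "SQL Server":        (4, 24),
    "terminal server":   (4, 16),
    "file/print":        (2, 4),
    "web server":        (2, 4),
}

headroom = 0.4  # middle of the suggested 30-50% margin

vcpus = sum(c for c, _ in workloads.values())
ram = sum(r for _, r in workloads.values())

print(f"Measured workload: {vcpus} vCPUs, {ram} GB RAM")
print(f"Buy at least:      {vcpus * (1 + headroom):.0f} vCPUs worth of cores, "
      f"{ram * (1 + headroom):.0f} GB RAM (plus N-1 host failover capacity)")
```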

Now for VMware costs.
If you want redundancy and the ability to live-migrate VMs between hosts and storage locations, then you will need to buy host licenses and vCenter to manage the hosts.
You could run ESXi free if you want to go thin, but check the limitations of the free version before making that decision.

I hope this helps start the conversation.
If you would like more details let me know.

Nicholas Farmer
www.pcli.me
 
Two hosts is fine for your size. We run this setup all the time in smaller offices. Just put a bunch of memory in the hosts and you'll be fine (get dual CPUs in each host). The storage recommendations are all great. Although, if you're looking for something even cheaper, consider DAS, such as Dell's MD3200 or MD3220. Still very fast, and cheap. Ideal for small setups with two hosts. Use a mix of SAS and SATA depending on what data needs to go where (SATA works well for general file storage, SAS for Exchange and SQL, although technically you can get away with SATA for Exchange in smaller environments on Exchange 2010+).

The Synology is not an alternative; as mentioned, use it for backup.
 
You can also just forget the SAN. Run 3 hosts with disks in them and just use the VSA; you still get a faux SAN, plus backup and recovery, with VMware Essentials Plus. I am building more and more without SANs.

My favorite setup is 3 Lenovo RD630s with RAID 50 (or 60), 8 drives each, 10GbE LAN, a couple of Netgear 10GbE switches, and the VSA. You can essentially get the whole thing for around $10-15k.
 
These are the three servers that I think we could turn into hosts:

R710: 2x Xeon E5530 2.4 GHz (Family 6, Model 26, Stepping 5), 32 GB RAM

R710: 2x Xeon E5504 2.0 GHz (Family 6, Model 26, Stepping 5), 8 GB RAM

R610: 1x Xeon E5530 2.4 GHz (Family 6, Model 26, Stepping 5), 32 GB RAM



We should be able to buy another Xeon E5530 for the R610 and some RAM for the second R710 and run fine on those, right?
 
Yeah, these will do just fine. The more RAM the better, IMHO. Remember, if one host fails you need to move its workload elsewhere, so you need to make sure the remaining hosts can cover each other's workloads (a quick check like the sketch below helps).
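Something like this quick N-1 check is what I mean; the host sizes and in-use numbers below are placeholders, not your actual boxes:

```python
"""Quick N-1 check: if any single host fails, can the survivors absorb the
whole running workload?  Host capacities and in-use RAM are placeholders."""

hosts = {"esx1": 96, "esx2": 96, "esx3": 64}        # usable GB RAM per host
ram_in_use = {"esx1": 60, "esx2": 55, "esx3": 40}   # GB RAM currently in use

total_used = sum(ram_in_use.values())
for failed in hosts:
    survivors = sum(ram for name, ram in hosts.items() if name != failed)
    verdict = "OK" if survivors >= total_used else "NOT ENOUGH"
    print(f"If {failed} fails: workload needs {total_used} GB, "
          f"survivors offer {survivors} GB -> {verdict}")
```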
You can use these with a SAN or even a fast QNAP NAS with a 10GbE card. I have a similar setup and run only 2 hosts (HP G6s with 76 GB of RAM each), one QNAP 879U-TS (order extra drives in case of failure) with an X520 10GbE card, and Netgear XS712T switches. I have a lower-end QNAP 410 to handle backups. I don't use iSCSI, just NFS, which works just fine (I should switch them to iSCSI since QNAP does VAAI now). I use basic VMware Essentials for 600 bucks, and I use Veeam Essentials Standard to back up VMs offsite and onsite. This supports about 135 users and about 110 workstations, and the bulk of them have redirected folders as well. I have not received a complaint yet. I have retired 6 servers and am down to only two now. Makes my life so much easier.
 
For storage you should look into brands that don't shaft you as the customer.

The EqualLogic offer for $16k is meh because you are stuck paying them $2k per year for support and maintenance, and you cannot put your own disks into them; you have to buy disks from Dell at HUGE markups.

Not sure exactly which EQL they offered you, but for a few thousand dollars less you can buy equivalent storage from http://www.jetstor.com, where you will receive free lifetime support and can later load the array with your own disks when it's time to upgrade.

I run one EQL array and multiple JetStor arrays, so I have first-hand knowledge of how those units compare.

What is true is that you don't get the "same business day" level of parts replacement from JetStor. The way I hedged against failure is that I buy 2 extra disks from JetStor at the time of array purchase, so I have the exact same disks that the array is loaded with on the shelf in case of a disk failure.

Over the years I had one disk go bad (obviously not JetStor's fault). The RMA was easy: I requested a replacement, sent my old disk in, received a new one, done.

In fact, for not much more money than the 16k for one EQL you can actually buy two JetStor units.

The cost savings of a solution like JetStor are simply undeniable. It may be different if you run a datacenter with a million arrays and benefit from the management software that NetApp, EMC, etc. have, but if you are only deploying a small number of arrays there's no reason not to buy JetStor or a similar brand and save yourself tens of thousands of dollars over the life of the array.

I have tried the VSA and found it to be meh.
The recommended amount of memory for running it is 24 GB, and both the number of disks and the total amount of storage you can get out of it are rather limited. I wouldn't recommend running it in any kind of production environment due to those limitations. You basically have to tie up a server that could easily run dozens of VMs just to serve up meh storage.
 
If I may ask, what kind of business are we talking about here? 24/7 shop or banker's hours? How much of a hit to the business does an outage cause?
 
Property Management
90% of business is 9-5 but it is 7 days a week.
But, being down for a few hours isn't going to be a huge hit.

We got an EMC quote that seems more reasonable:
EMC VNXe 3150
Dual processor
12x 600 GB 15K SAS
$12,500

Dell sent a new EqualLogic quote for $31k:
Dell EqualLogic PS6100XV, 24x 600 GB 15K SAS drives
And 2 switches:
PowerConnect 6224, 24 GbE ports, managed switch, $2,300 each

IOPS are in the low hundreds. We don't even need a 10GbE capable switch. We are sticking with Gigabit.
 
I can configure a PowerVault MD3200i with 24x 300 GB 10K SAS drives for $10k; that's 7.2 TB raw. Way more than we need.
What does the EqualLogic, VNXe, or NetApp give us that an MD3200i wouldn't?
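Here is the rough math I'm using to compare the quotes. Usable capacity assumes RAID-10, and the per-drive IOPS figures are just common rules of thumb, so treat it as a sketch rather than vendor specs:

```python
"""Rough comparison of the quoted arrays: usable capacity after RAID and a
crude spindle-count IOPS estimate.  RAID-10 and the per-drive IOPS numbers
(~125 for 10K SAS, ~175 for 15K SAS) are assumptions, not vendor specs."""

arrays = {
    # name: (drive count, drive size in GB, rough IOPS per drive)
    "MD3200i, 24x 300GB 10K":   (24, 300, 125),
    "PS6100XV, 24x 600GB 15K":  (24, 600, 175),
    "VNXe 3150, 12x 600GB 15K": (12, 600, 175),
}

data_needed_tb = 0.76  # ~360 GB Exchange + ~400 GB home/share directories

for name, (count, size_gb, iops) in arrays.items():
    raw_tb = count * size_gb / 1000
    usable_tb = raw_tb / 2        # RAID-10: half of raw
    spindle_iops = count * iops   # best case, ignores write penalty and cache
    print(f"{name}: {raw_tb:.1f} TB raw, ~{usable_tb:.1f} TB usable, "
          f"~{spindle_iops} back-end IOPS (we need ~{data_needed_tb} TB "
          f"and a few hundred IOPS)")
```

Even the smallest of those covers our ~760 GB of data many times over, and the spindle IOPS dwarf our requirement.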
 
Probably some features, probably some optimization here and there. But for a shop your size, and your needs, it would probably do just fine. Make sure you have plenty of NIC ports if you're doing 1GbE iSCSI.
 
I'd stick with a PowerVault or a JetStor. No need to make it any more complex than it needs to be. Toss your backups on the Synology.
 
Also, don't discount backup software. If you don't do Essentials Plus, you're going to need something like Veeam.
 
For one, EqualLogic comes with every feature out of the box: block-level tiering, snaps, sync and async replication, SAN HQ, and Dell VSM, which is the best out-of-the-box plugin I've used so far, and I've used NetApp, EMC VNX and CLARiiON, and HP LeftHand.

There are no additional hidden costs with EqualLogic; you get it all, and with the next software release you get dedupe.

If you require IOPS in the low hundreds, why are you even looking at 10K or 15K SAS when 7.2K NL-SAS or SATA will do the trick?

As for the VSA, I would wait for vSAN if that's the route you think is feasible; meaning, are you really just going to use this storage for your virtual environment?

I also don't rule out what Thuleman suggested. Even though I have very little experience outside the big vendors, there seem to be some very intriguing products coming out of companies like JetStor, Nexenta, Nimble, etc. that are very cost-effective and provide a solid feature set.

The PowerVault shouldn't even be on your list, really; to me, Dell has no clue how to price it and compete against its own EqualLogic line. You can get an entry-level 4100 with all the features I listed for $10k.

BTW, yes, I used to work for a Dell partner; starting Monday, I'm back to my roots with an EMC partner, and yes, I still stand behind what I said about EqualLogic. It's like storage Legos, but all the features are included: just add shelves with different capabilities for IOPS, tiering, etc.
 
Except an EqualLogic is complete overkill.
 
I don't think I recommended EqualLogic. If you read my post, I was simply stating what you get with EQL if it's under consideration.

I don't recommend anything without knowing all the facts, such as IOPS, capacity, usage, and most importantly, business needs.
 
For the most part, the biggest difference between the Dell PowerVault iSCSI stuff and EqualLogic is in the software and licensing. As Vader said, the EQLs are all-inclusive: snapshotting, replication, tiering, etc., with no licensing to buy EVER. The basic Dell PV stuff is just an iSCSI disk array with separate licenses for snaps and replication, and I don't believe automated tiering is possible with those. Now the $64,000 question is the OP's near-future (<3 yrs) usage of these features...

That being said, what support level do you need for this equipment?
As others have said, if your IOPS demand at present is only a few hundred, what do you expect business growth to raise it to in the near future?
Do you have a local vendor you trust for ANY of the products you've mentioned? What are their thoughts?
 
Agent-based backup software. Bleah

I get it if you're keeping physical servers around and you want an all-in-one solution, but if you're going all virtual, agent-based is silly. I'm sure they're giving a good discount since you're keeping it 'in the family'.
 
I noticed some comments about an EMC SAN. One thing I can say is that if you're looking at EMC, I highly recommend staying away from the 3000-series SANs; they are far more limited than the 5000 series. We purchased a 3300 in 2011 for $90k on the advice of our vendor and an EMC certified professional. The problem was that the 3000 was limited in its drive configurations, and the backup appliance required 250% overhead for replication.

After working with EMC for a year, they made it right with our company, as the above items did not meet the requirements we had told them we needed. In the end they let us keep the 3300 and sold us a 5300 valued at $200k for $100k. I guess we got a sweet deal in the end, but we dealt with performance issues and limited drive space. It did not help that we purchased the initial unit during the hard drive shortage and ended up 20 drives short of what we required due to the replication overhead.
 