VM storage planning

I only recently found this forum, but haven't found an answer to this yet.

Goal: a virtual-cluster home rig for training, testing, etc. - an enterprise-type setup at home. (I will need to know all of this in the near future, to start planning a blade/VM implementation and migration at work.)


Infrastructure:
3x Dell PowerEdge 2450s: 2 GB RAM, 2x 1 GHz CPUs, 3x 18.4 GB drives (RAID 1 + 1 hot spare)

Currently purchasing:
8x blades: 2x 3 GHz CPUs, 2 GB RAM, 2x 36 GB drives (RAID 1)

Planning:

For <$500 (best case <$300): storage for the VMs, plus general network file storage.

I am thinking the best plan is to get an external SCSI array and just have one of the Dells run storage for everything.

The Dells all have SC fiber, so running a fibre SAN is possible (but I have only limited experience using a SAN and wasn't involved in the implementation... a Brocade switch is out of the price range for this project).

More details:

I am not currently running gigabit Ethernet in my rack at home.
The majority of guests/servers will be running Windows Server 2003 Standard, Cisco devices, and some virtual appliances for testing.
ESXi is the host (already running on the Dells, but that will probably change once the new hardware arrives).


The question:

Suggestions for a storage solution? (Go ahead with the external array and move to gigabit Ethernet/fibre between servers, or something else I haven't thought of yet?)
 
What is the problem you are trying to solve? Are you looking for cheap disk for a SAN, or performance disk for an NFS datastore? Where do you want the guest OSes to live - on the host or on the backend?

If you're looking for a pool of cheap disk, you could always put together an Openfiler NAS with this combo:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813121359

Either way, you should upgrade your network to gig.
 
I'm not sure what I was questioning now.

1. Upgrade to gigabit
2. One of the 2450s running Openfiler with an external array (StorageWorks, etc. - a cheap SAN, 500 GB-1 TB)
3. Guests live on the backend

With this I should have everything I need for now, or am I missing something (again)?
 
1. Openfiler for iSCSI, FreeNAS for NFS. Repeat it to yourself. Pick either - they both work for the appropriate task, but don't try to mix them. OF's NFS performance is meh, and FreeNAS's iSCSI target is broken. (VPD page 80 doesn't work right, so ESX loves to eat the entire filesystem.)
2. You can't afford fibre at that level - you'd have to pay the OF guys to help you make a fibre target, and it wouldn't be all that robust yet. Go iSCSI or NFS.
3. FAST SATA drives or SCSI. Don't mess with the cheap SATA drives - trust me.
4. Get a real disk controller with a battery-backed cache.

Local storage will probably beat OF or FreeNAS if you can afford to have disks in every machine, but the closer-to-enterprise solution is the filer.
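
(For the NFS route, hooking a filer export up as an ESX datastore is one line from the ESX 3.x service console. Rough sketch - the hostname, export path, and label below are placeholders; on ESXi you'd do the same thing through the VI Client or the remote CLI's vicfg-nas instead:)

Code:
  # add the filer's NFS export as a datastore called "filer-nfs"
  esxcfg-nas -a -o freenas.lab.local -s /mnt/vmstore filer-nfs
  # list the configured NFS datastores to confirm the mount
  esxcfg-nas -l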
 
You may want to consider other options if you plan to use iSCSI. After I finished my tests with FreeNAS, Openfiler, and Ubuntu Server, I found that IET (iSCSI Enterprise Target) under Ubuntu was much faster than the others.

I was getting around 50-70 MB/s over gig-E using a P4 630 / 1 GB RAM / 80 GB 7200 RPM hard disk combination. You'll have to spend a bit of time getting IET up and running on Ubuntu, but I love it and it works quite well.
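
For anyone who wants to try the same thing, here's roughly what getting IET going on a 2009-era Ubuntu looks like. Just a sketch - the IQN and the backing disk path are made-up examples, so point it at whatever you're actually exporting:

Code:
  # install the iSCSI Enterprise Target (userland + kernel module;
  # some releases also need the iscsitarget module/source package)
  sudo apt-get install iscsitarget

  # Ubuntu ships the daemon disabled; flip it on in /etc/default/iscsitarget:
  #   ISCSITARGET_ENABLE=true

  # /etc/ietd.conf - one target, one LUN backed by a whole disk
  Target iqn.2009-03.lab.storage:vmfs0
      Lun 0 Path=/dev/sdb,Type=fileio

  # restart so ietd picks up the config
  sudo /etc/init.d/iscsitarget restart

After that, point the ESX software iSCSI initiator at the box and rescan.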
 
Odd - Openfiler also uses IET, so I'm not sure why you'd see much of a performance difference. What RAID level / controller were you using?

Openfiler does give you one thing, though: you can carve LUNs and do load-balanced pathing on a per-target basis, which isn't so easy to configure with IET alone, and that can help with network congestion.
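
In ietd.conf terms that per-target carving looks something like this (target names and backing paths are made up) - one LUN per target, so each target can be pinned to its own path on the ESX side:

Code:
  # /etc/ietd.conf - carve one LUN per target instead of
  # stacking several LUNs behind a single target
  Target iqn.2009-03.lab.storage:vmfs0
      Lun 0 Path=/dev/vg0/lun0,Type=fileio
  Target iqn.2009-03.lab.storage:vmfs1
      Lun 0 Path=/dev/vg0/lun1,Type=fileio

ESX then sees two separate targets, and you can send each one down a different NIC instead of funneling everything over one link.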
 
I was just using the basic ICH7. No RAID, just a single disk. The network controllers in the VM server and the iSCSI server were both Intel Pro/1000 GT. I must have spent a week going through all three of these solutions, doing OS installs, network transfer tests, and uploading ISOs through VI3 to the iSCSI box. Every time, Ubuntu was faster.
 
Single disk in OpenFiler? Get a good raid controller - it'll switch around then :) At least, we got far better speeds in our tests.
 
Yeah, that will be in the works, as the office will need the redundancy once we start doing our P2Vs. Speaking of which, has 3ware been good to you?
 
Pretty solid for a consumer card. Make sure it has cache. The other ones to look for are the Dell PERC cards - especially with a battery-backed cache, they perform great.
 
> 3. FAST SATA drives or SCSI. Don't mess with the cheap SATA drives - trust me.

What do you consider cheaply made SATA drives? Just curious, because I'm considering something like this and was thinking of using SATA drives.

Thanks in advance :)
 
If you can find decent 10-15k RPM ones, go with those (yay, VelociRaptor). A 7200 RPM drive will need 32 MB of cache - don't get the cheap 8 MB ones, and whatever you do, don't even try 5400 RPM drives.
 
Google around for the Dell PERC 5/i. It's pretty much a re-badged LSI, though motherboard compatibility is something to look at.

You can get one for ~100 bucks on fleabay.
 
Make sure it has the battery-backed cache, and GET THE FIRMWARE UP TO DATE - but once you've done those two things, it's an AWESOME card.
 
> If you can find decent 10-15k RPM ones, go with those (yay, VelociRaptor). A 7200 RPM drive will need 32 MB of cache [...]

Ah, OK. I'd figured on 7200 RPM with 32 MB cache.

Thanks

> Google around for the Dell PERC 5/i. It's pretty much a re-badged LSI, though motherboard compatibility is something to look at. You can get one for ~100 bucks on fleabay.

There's a fix for the issue that causes problems on certain motherboards - some people put tape over the 5th pin.

Here's a thread about it; Nitrobase24 mentions the tape part. There are also links in there covering the PERC card troubles.
 
I just built a new iSCSI rig for my two dual quad-core ESXi servers. So far (it's still in boxes...) it's an older Pentium D 2.8 / 2 GB / PERC 5/i + 4x 640 GB WD AALS drives in RAID 10. I will let you know how it performs. I typically run 8-12 machines on each box.

Upgrade to gigabit with a managed switch - the HP ProCurve 1800-24G is my choice. Grabbed one for $75 on fleabay.
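
If anyone's curious, wiring the hosts to a rig like that from the classic ESX 3.x service console is roughly this (the target IP is a placeholder, and the software iSCSI HBA name varies by version - vmhba32 or vmhba40 - so check yours; on ESXi you'd do the same thing through the VI Client):

Code:
  # enable the software iSCSI initiator
  esxcfg-swiscsi -e
  # add the target box as a SendTargets discovery address
  vmkiscsi-tool -D -a 192.168.1.50 vmhba40
  # rescan the adapter so the new LUNs show up
  esxcfg-rescan vmhba40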
 