SAN Vdisk / LUN Performance Discussion

SKiTLz

This is something I've been battling over for some time now. We've really been pushing ESX / Hyper-V clusters as of late, which obviously rely on shared storage. We've been using HP's MSA line of SANs since we don't deal with massive companies.
My question shouldn't be too SAN-specific though. There is always a lot of talk about the ideal vdisk/LUN layout for a SAN, and there is one scenario where I've never worked out which approach is better for performance.

The SAN is going to host VMFS/CSV and also the Exchange DB.

I can either do

Scenario 1.
16 x 146 GB SAS in RAID 10 for my VMFS/CSV, with 2 x 146 GB SAS in RAID 1 for Exchange (I know that's not ideal for Exchange, but for the sake of this example go with it).

Or

Scenario 2.
18 x 146 GB SAS in RAID 10 for everything, carved into different volumes.

My thinking is that in Scenario 1 Exchange only has the performance of 2 drives (effectively 1, since it's a mirror). In Scenario 2 Exchange can pull performance from all 18 spindles, and the VMFS also gains a few more IOPS from the extra spindles.

Keep in mind these are 100 user max clients.

Again, this could be anything: file storage, SQL, whatever. Not just Exchange; I just used Exchange for the sake of the example. What are people's opinions? A bunch of smaller vdisks, or one large vdisk with different volumes/LUNs? In my head the one big vdisk offers better performance (rough math sketched below).
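
To put rough numbers on it, here's a quick back-of-envelope sketch. The per-spindle figure, the 70/30 read/write mix and the mirror write penalty are all assumptions I've plugged in for illustration, not measurements from any particular array:

Code:
# Rough spindle-count comparison of the two scenarios.
# Assumed numbers: ~175 random IOPS per 15K SAS spindle, a 70/30 read/write
# mix, and a write penalty of 2 for mirrored RAID (RAID 1 / RAID 10).

IOPS_PER_SPINDLE = 175   # assumed per-drive figure; adjust for your disks
READ_RATIO = 0.70        # assumed workload mix
WRITE_PENALTY = 2        # each host write lands on both halves of the mirror

def frontend_iops(spindles):
    """Host-visible IOPS the set can sustain at the assumed read/write mix."""
    raw = spindles * IOPS_PER_SPINDLE
    return raw / (READ_RATIO + (1 - READ_RATIO) * WRITE_PENALTY)

# Scenario 1: 16-disk RAID 10 for VMFS/CSV, separate 2-disk RAID 1 for Exchange
print("Scenario 1 - VMFS/CSV:", round(frontend_iops(16)))
print("Scenario 1 - Exchange:", round(frontend_iops(2)))

# Scenario 2: one 18-disk RAID 10 shared by everything
print("Scenario 2 - shared  :", round(frontend_iops(18)))

Of course, in Scenario 2 Exchange only sees that bigger number if nothing else is hammering the same spindles at the same time, which is really the heart of my question.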
 
Your question is very interesting... I wish I could answer it =)
In the absence of concrete information, I always assume, as you did, that more IOPS is better and that attempts to improve performance by tweaking caching or spindle placement are doomed to failure.
 
I can say our mail server killed our SAN performance when it was all on the same volume. Things improved significantly when we got a new shelf of disk for the mail.

Now this might not be very useful for your case, since we don't have Exchange, and I don't know how comparable our mail loads are (considering you'd manage with a 146 GB volume for mail, I'm guessing not very comparable). We have a few terabytes of mail for about 300 users, running on Zimbra. Also, you make no mention of what you are running on the virtual machines, and I can't figure out what you mean by CSV (did you mean CVS?).

I'd start by trying to measure how much IO your current mail server setup produces, compare this with the IO from other services, and see how close to capacity you end up. The thing with mail is that, at least for us, it keeps rippling along quite evenly through office hours. Virtual machines don't have such even IO patterns, so they might not disturb each other as much (depending on what you are running on them)...
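
If it helps, here's roughly how I'd sample it on a Linux box; the device name and interval are just placeholders, and it assumes the psutil package is installed. On Windows you'd pull the same numbers from perfmon's PhysicalDisk counters instead.

Code:
# Minimal IOPS sampler: polls per-disk read/write counters and prints the
# rate over each interval. DEVICE and INTERVAL are placeholders - point it
# at whatever volume holds the mail store and let it run through office hours.
import time
import psutil

DEVICE = "sda"     # hypothetical device name
INTERVAL = 60      # seconds per sample

prev = psutil.disk_io_counters(perdisk=True)[DEVICE]
while True:
    time.sleep(INTERVAL)
    cur = psutil.disk_io_counters(perdisk=True)[DEVICE]
    reads = (cur.read_count - prev.read_count) / INTERVAL
    writes = (cur.write_count - prev.write_count) / INTERVAL
    print(f"{time.strftime('%H:%M:%S')}  read IOPS {reads:7.1f}  write IOPS {writes:7.1f}")
    prev = cur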
 
I guess the real question isn't what I'm running. For anything IO-intensive like SQL, Exchange, etc. that usually likes its own spindles, does the advantage of the additional spindles in one big vDisk outweigh the fact that it's no longer on its own spindles?

That's what I'm trying to get at. I've got NetApp lined up to lend us a SAN for 30 days, so I might be able to answer my own question with some testing. Unfortunately all my SANs are live, so it's a little hard to play around and test things out.
 
Well, it's always a trade-off.

Having the data spread over more spindles will increase performance for the IO-heavy applications; however, IO-intensive services will severely hamper the performance of other services that have more linear access patterns.
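
As a toy illustration (made-up numbers, just to show the shape of the problem): take one spindle that can stream sequentially at ~120 MB/s when left alone, and charge every random IO that gets mixed in about 8 ms of seek and rotation.

Code:
# Toy model with assumed numbers: how random IO mixed onto a spindle eats
# into the time available for a sequential stream.
SEQ_MBPS = 120.0            # assumed streaming rate when undisturbed
RANDOM_IO_COST_S = 0.008    # assumed seek + rotation cost per random IO

for random_iops in (0, 25, 50, 100):
    seek_time = random_iops * RANDOM_IO_COST_S        # seconds per second spent seeking
    seq_mbps = SEQ_MBPS * max(0.0, 1.0 - seek_time)   # what's left for streaming
    print(f"{random_iops:3d} random IOPS mixed in -> ~{seq_mbps:5.1f} MB/s sequential")

With 100 random IOPS mixed in, the stream drops to a fraction of what the disk could do on its own, which is basically what we saw with mail on shared spindles.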

We have two NetApp SANs, one with FC shelves and one with SATA shelves. Originally the plan was to put "not so heavy" services on the SATA disks and IO-heavy stuff on the FC disks. However, it ended up so that all our email, file servers and virtuals were on SATA disk, while we mainly had databases on the FC disk.

Pretty soon it became clear that we just could not have the email on the same aggregates, so we got a new shelf of FC disk for email and everything started working again. While you could argue that we were asking for trouble by putting so much on the SATA disks (which the salesman said was OK), that only amplified the real underlying issue: once all the heads are jumping around with IO-heavy stuff, it will start to show. However, any vendor should be able to help you with capacity planning; it could be that your workloads are small enough that you won't come close to the levels where performance becomes a huge problem.

While we also have an HP EVA and some EMC SANs, I have never touched them in my line of work, so I can't say how different the setup is there. With NetApp, while planning you also have to take into consideration how much extra space you want to reserve for snapshots (and I do recommend doing that, it's a super useful feature); I would imagine a 2-disk mirror will run out very quickly.
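
Just as a rough illustration of that last point (the reserve percentage and overhead factor here are assumptions, not defaults from any particular array):

Code:
# Rough capacity check for a 2 x 146 GB mirror once a snapshot reserve is
# carved out. Reserve and overhead figures are assumptions.
RAW_GB = 146              # usable side of the RAID 1 mirror
SNAP_RESERVE = 0.20       # assumed snapshot reserve
OVERHEAD_FACTOR = 0.90    # assumed right-sizing / filesystem overhead

usable_gb = RAW_GB * OVERHEAD_FACTOR * (1 - SNAP_RESERVE)
print(f"Roughly {usable_gb:.0f} GB left for the mail store")

That lands at roughly 105 GB of usable space before the mail store takes a single byte, so the mirror fills up fast once snapshots are in the picture.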
 