Raid question for the vmware tinkerers...

nicholasfarmer

Limp Gawd
Joined
Apr 19, 2007
Messages
238
I'm playing with a new Adaptec HBA/RAID adapter and would like some feedback from the board about stripe size. My options are 64K, 128K, 256K, and 512K. The default is 256K, but I figured 64K would be better? I've tried 256K and 64K and ran some IOMeter tests inside a VM. The RAID rebuild time is a little over eight hours for my 24TB, so I'm fishing for input instead of rebuilding over and over and waiting a week to pick between RAID type and stripe size.

My thought at the moment is that if I select 512K, each small read will pull in more data than needed and cause latency, or cause more churn in the cache.
Or if I go with a 64K stripe, it will cause excessive IOPS on the disks (one 1MB read is 16 IOPS?).
VMFS 5 uses an 8K sub-block size but a 1MB block size, so maybe 256K? Bleh!!!
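To put numbers on that back-of-envelope guess, here's a quick sketch of how many stripe-sized disk ops a single host read turns into at each candidate stripe size (this assumes reads are stripe-aligned; an unaligned read can touch one extra stripe):

```python
# How many backend ops one host read costs at a given stripe size.
def ops_per_read(read_kib, stripe_kib):
    # Ceiling division: a read smaller than one stripe still costs one op.
    return -(-read_kib // stripe_kib)

for stripe in (64, 128, 256, 512):
    print(f"{stripe:>3}K stripe: 1MB read = {ops_per_read(1024, stripe):>2} ops, "
          f"8K sub-block read = {ops_per_read(8, stripe)} op")
# 64K stripe turns one 1MB read into 16 ops; 512K turns it into 2.
# An 8K sub-block read is a single op at every candidate size.
```

So the "16 IOPS" figure above checks out for 64K stripes, and the small 8K reads don't get cheaper with a smaller stripe, which is part of why there's no obviously right answer.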

I know the type of workload has an impact, whether it's large sequential reads or lots of random IO from many VMs.

Thanks for any feedback
Nick
 
What did IOMeter tell you? My guess is that your results varied by the I/O size... which is why you need to understand your I/O to fine-tune the RAID stripe size. There is no magic answer.
 
I was just looking for some input based on the VMFS block size, the sub-block size, and the wasted IO from making the RAID stripe too small or too large.

Kind of like the old Windows 2003 disk alignment issue, where a VM could generate eight IOs on the array per single IO in the VM.
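For anyone who hasn't hit that one: Windows 2003 started the first partition at sector 63 (a 32,256-byte offset), which isn't a multiple of any power-of-two stripe size, so guest IOs straddled stripe boundaries. A rough sketch of the effect (the specific numbers here are illustrative, not measured):

```python
# Count how many array stripes one guest IO touches, given the
# partition's starting offset.
def stripes_touched(offset_bytes, io_bytes, stripe_bytes):
    first = offset_bytes // stripe_bytes
    last = (offset_bytes + io_bytes - 1) // stripe_bytes
    return last - first + 1

legacy = 63 * 512       # old Windows 2003 default partition offset
modern = 1024 * 1024    # modern 1MB-aligned partition offset
stripe = 64 * 1024

print(stripes_touched(legacy, 64 * 1024, stripe))  # 2 stripes (misaligned)
print(stripes_touched(modern, 64 * 1024, stripe))  # 1 stripe (aligned)
```

A misaligned stripe-sized write also forces a read-modify-write on both stripes it touches, which is how a single guest IO fanned out into several array IOs.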
 
Yeah it's tricky, the worst is when you're running a bunch of different loads and have to try and figure out what is the least bad way to configure it for everything. The bigger you go the more wasted space you can be looking at too... Gah it is a brain buster at times.

What RAID level are you running, too? Because RAID 5 or 6 is probably going to cause you more IO issues than a non-optimal stripe size would.
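The usual rule-of-thumb write penalties (not vendor-specific, and write-back cache on the controller softens them in practice) make the point: parity RAID multiplies your backend write IOs far more than a bad stripe choice does.

```python
# Standard rule-of-thumb backend IOs per small random front-end write.
WRITE_PENALTY = {"RAID 0": 1, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def backend_iops(read_iops, write_iops, level):
    # Reads pass through 1:1; writes are multiplied by the penalty.
    return read_iops + write_iops * WRITE_PENALTY[level]

# Example: a 70/30 read/write mix at 1000 front-end IOPS.
for level in ("RAID 10", "RAID 5", "RAID 6"):
    print(level, backend_iops(700, 300, level))
# RAID 10 needs 1300 backend IOPS; RAID 5 needs 1900; RAID 6 needs 2500.
```

That gap swamps the difference between, say, a 64K and a 256K stripe for most mixed VM workloads.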
 