sorry, been tied up - sec...
I did this on a test Win2K3 VM - and writes went from 160MB/s to 260MB/s (with a 16K cache block size on the SAN). Using a 4K cache block size it went from ~80MB/s to 160MB/s.
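For reference, the cache block size change itself was done from the Storage Manager script window (Tools->Execute Script). Going from memory on the DS3000/DS4000 script syntax here, so double-check against your Storage Manager's command reference before running it, but it's along these lines:

set storageSubsystem cacheBlockSize=16;
show storageSubsystem profile;

The second command dumps the subsystem profile so you can confirm the new cache block size actually took.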
Holy Jeebus.
Oi.
edit: Removed for correction below.
http://www.vmware.com/pdf/esx3_partition_align.pdf - follow this on a test VM. Align the partition with the VMFS volume, and let's see what performance is then, as we'll minimize split writes. This normally gains about 3-5%, but if it's a major issue on this SAN, we might get a lot more out of it.
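In practice the doc boils down to two things: create the VMFS partition from the ESX service console with fdisk and force the start to sector 128 (64KB), and align the partitions inside the guest. From memory it goes roughly like this - treat it as a sketch and verify the sector offset and partition type against the PDF before touching anything real:

fdisk /dev/sdX (the LUN you're carving up for VMFS)
n, p, 1 - new primary partition 1, accept the defaults
t, fb - set the partition type to fb (VMware VMFS)
x - expert mode
b, 1, 128 - set the start of partition 1 to sector 128 (64KB)
w - write and exit

Inside the Win2K3 guest, the SP1 version of diskpart takes an align value in KB, so the data partition can be created already aligned:

diskpart
select disk 1
create partition primary align=64
assign
exit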
Let's take our 260MB/s beast here. I think we can get you ~300MB/s by getting our RAID segment right. What's the segment size on the array that did 260MB/s?
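To illustrate why the segment size matters (numbers invented for the example): on a 4+1 RAID5 with a 64K segment, a full stripe is 4 x 64K = 256K of data. A write that covers a whole stripe lets the controller calculate parity straight from the data it already has; anything smaller turns into a read-modify-write - read old data and old parity, recompute, write both back - and that's what hammers throughput. Lining the segment size up with the alignment and the typical IO size is the whole game here.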
Can you explain why the boot volume does not need to be aligned? Also, per Microsoft, Server 2008 automatically aligns its partitions to 1MB. I assume that's OK because it can be divided by 32?
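(Checking my own arithmetic here: 1MB is 1024KB, and 1024 / 32 = 32 exactly, so a 1MB offset stays aligned on a 32K boundary - and likewise for 64K, 128K, 256K and 512K segments. That's my reading of why the 2008 default is considered safe, anyway.)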
That's the funny part - 64K, 128K, 256K, and 512K all yielded 260MB/s!!
Any change with different RAID levels?
Nope. There will be no change except in RAID0. We're gonna need to retest the direct attach Windows box with the same alignment. I wanna see if that does what I think it will - namely, the exact same. I think we're hitting a cache limit, but I don't see cache mirroring turned on.
Given the overhead for differing RAID calcs, there should be some change, from what all I'm reading. Not major, but some.
On other arrays, sure. Part of it is understanding that FC is NOT 4Gbit! It is 2x2Gbit. Presuming utilization of both paths, with the DS3400 having two internal loops, that gives us ~500MB/s with cache enabled. The DS3400 is a different beast, much like the DS4200 - except it contains less failure than the DS4200, typically. The CPU basically has enough headroom in the design that there's far less variance between RAID types. We definitely should not be hitting CPU limits on the DS3400 at 260MB/sec in RAID5.
The other reason I know it's not the CPU is that it's 260MB/s at multiple segment sizes. Segment size has a significant effect on CPU when you're doing less than full stripe writes; the read-calc-write penalty would be hammering the numbers down very hard. The same goes for smaller segment sizes - there should be at least 20MB/s of variance from 64K to 256K. I may be misremembering the DS3400's controllers, but with dual, it should have 4 host ports total (2 per controller), so we shouldn't be hitting a limit at one path plus cache. Just looking at it, ignoring all our other information, 260MB/s says "single path limit" to me. I think it may be that ESX is only using one path, so I want to see the numbers out of Windows direct attach, where I know it'll use both paths.
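Rough back-of-envelope on the path math, taking the 2x2Gbit figure above at face value: divide the Gbit rate by 10 to cover 8b/10b encoding and framing, so a 2Gbit path is good for roughly 200MB/s of payload and a 4Gbit path roughly 400MB/s. A number that sits flat at 260MB/s regardless of segment size behaves like a transport/path ceiling plus cache, not a parity-calc CPU ceiling - which is exactly why I want the Windows direct-attach numbers.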
Oh, we're definitely using one path. We ~only~ use one path. ESX multipathing in 3.5 is failover only, with load balancing per LUN on the SP (path balancing really, vs. load) - but only path balancing. We do not offer MPIO or any kind of bandwidth-improving multipathing yet. It's coming in 4.
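If you want to see which path each LUN is actually riding, from the ESX 3.x service console:

esxcfg-mpath -l

That lists every LUN with its paths and which one is currently active/preferred. The per-LUN policy (Fixed vs MRU) and the preferred path are set per LUN in the VI client's Manage Paths dialog - spreading preferred paths across the SPs is the path balancing I mentioned: it balances LUNs across paths, not bandwidth within a LUN.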
AreEss - another question. I've removed the LUNs that I created and it has left me with discontiguous free space that I need to be one big glob. My once-contiguous 500GB of free space has become 150 and 350. Now I need to recreate the original 500GB LUN that it once held - any ideas how to do this without having to redo all my LUNs and shuffling data?
I suspect changing segment sizes is what caused it all to shift around.
Open Storage Manager.
Select the DS3400 in the left window.
Tools->Execute Script
Run: start Array[N] defragment;
Note- a defrag will generally knock any ESX hosts offline.
Ok - kicked it off. Why is defragmenting used instead of something simple-sounding like "move"?
It's not the blocks being gone that matter - it's the performance. We tend to time out a lot and drop connections as a result of the load, especially at higher-priority rebuilds. ~shrug~
SpaceHonkey said: Beautiful - worked like a charm!
FYI - apparently AreEss is the ONLY freaking source of good info related to these DS series enclosures on the internets. I've searched high and low and it just doesn't exist out there. Thanks again!