AX4-5i Disk Performance | RAID10, 14 7.2k 1TB

RiDDLeRThC

Thoughts?

The VM is running on ESXi 4.1, configured to use MRU; no PowerPath/VE.


[Attached benchmark screenshots: SAN-filebenchmark, SAN-benchmark-read/write, SAN-randomaccess-read/write, SAN-extra-read/write]

 
From what I know about partition alignment, it is aligned. The VMFS was created within the vSphere client, and the VM OS is Windows 2008 R2, which automatically aligns the partition.

The only thing that may not be aligned is the LUN, and I'm not even sure alignment applies there. I don't remember coming across any recommendations when creating it; there weren't many options in Navisphere Manager when creating the RAID group or the LUN.
 

The LUN should have been created with 64k, 128k, 256k, etc. striping so you're aligned properly.

Write performance seems low, however.
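A quick way to sanity-check the guest-partition side of this is to confirm the partition's starting offset divides evenly by the stripe element size. The 64 KB element and the offsets below are illustrative numbers, not values pulled from this array:

```python
# Sanity check for partition alignment (hypothetical values).
# A partition is aligned when its starting offset is an even multiple
# of the RAID stripe element size.

STRIPE_ELEMENT = 64 * 1024          # assumed 64 KB stripe element on the LUN

def is_aligned(partition_offset_bytes, element=STRIPE_ELEMENT):
    """Return True if the partition start falls on a stripe-element boundary."""
    return partition_offset_bytes % element == 0

# Windows Server 2008 R2 creates partitions at a 1 MiB offset by default,
# which divides evenly into 64 KB, 128 KB, and 256 KB stripe elements.
print(is_aligned(1024 * 1024))      # True  - W2K8R2 default (1 MiB)
print(is_aligned(63 * 512))         # False - old 63-sector offset (31.5 KB)
```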
 
iSCSI? You're up to the max of a single Gb link. How much were you expecting?
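For context, a rough ceiling calculation for a single GbE path; the ~8% protocol overhead figure is an assumption, since actual overhead varies with frame size and iSCSI settings:

```python
# Back-of-envelope throughput ceiling for one 1 GbE iSCSI path.

link_bits_per_sec = 1_000_000_000                      # 1 Gb/s
raw_mb_per_sec = link_bits_per_sec / 8 / 1_000_000     # 125 MB/s on the wire
overhead = 0.08                                        # assumed TCP/IP/iSCSI overhead
usable = raw_mb_per_sec * (1 - overhead)

print(f"Raw line rate : {raw_mb_per_sec:.0f} MB/s")
print(f"Usable (est.) : {usable:.0f} MB/s")            # roughly 110-118 MB/s in practice
```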
 
Yes, it's iSCSI. For this array we're just trying to squeeze every bit we can out of it.

With the AX4 being an active/passive array I shouldn't use RR, but when I look at the paths screen with MRU active I see that it's talking to SP A ports 0 and 1. Can't I push data to both of those ports with RR?

Would PowerPath/VE help?
 

Bringing back an old thread here. Trying to get to the bottom of whether we can use round robin without PowerPath on our ESX servers.

[Attached screenshot: port config.JPG]


On the AX4 I set up each SP on a different subnet. Each ESX server has two iSCSI links (one to each subnet).
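In other words, something like the layout sketched below; all addresses and interface names here are made up for illustration, not the actual config:

```python
# Hypothetical addressing plan for the two-subnet layout described above.
# Every address and port name is invented for illustration only.

iscsi_network = {
    "SP A":       {"port 0": "10.10.1.10/24", "port 1": "10.10.1.11/24"},   # subnet A
    "SP B":       {"port 0": "10.10.2.10/24", "port 1": "10.10.2.11/24"},   # subnet B
    "esx-host-1": {"vmk1":   "10.10.1.101/24", "vmk2": "10.10.2.101/24"},
}

# Each ESX host gets one VMkernel port per subnet, so it can reach both
# storage processors without routing between the iSCSI subnets.
for node, ports in iscsi_network.items():
    for port, addr in ports.items():
        print(f"{node:>10}  {port:>6}  {addr}")
```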
 
With the exception of LUN design (we created one 14-disk RAID 10 group with 1.5 TB LUNs), I followed that guide to a T.
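For what it's worth, a back-of-envelope random-IOPS ceiling for a pool like that looks roughly like this; the per-spindle figure is a rule-of-thumb assumption for 7,200 rpm drives, not a measured value for this array:

```python
# Rough random-IOPS ceiling for 14 x 7.2k disks in RAID 10.

disks = 14
iops_per_disk = 80            # assumed small-block random IOPS for a 7.2k spindle
raid10_write_penalty = 2      # each host write lands on two mirrored spindles

read_iops = disks * iops_per_disk
write_iops = disks * iops_per_disk / raid10_write_penalty

print(f"Random read ceiling : ~{read_iops} IOPS")        # ~1120
print(f"Random write ceiling: ~{write_iops:.0f} IOPS")   # ~560
```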
 
Interesting little snippet out of one of the EMC whitepapers on Powerlink (document H5773):

Network separation

It is important to separate the storage processor management ports onto subnets separate from the iSCSI network ports. Also, try to separate the CLARiiON's storage processors onto separate subnets. Do this by placing SP A's ports on one subnet and SP B's ports on a different subnet. If enough network resources are available, you may even consider putting each SP port on its own subnet. VLANs are a convenient way to configure the network.
 
So the network separation is important, but that's due to an EMC FLARE issue that will result in you getting kicked off the array. Suffice it to say, that's not so much a performance problem as a "WTF happened to all the paths to SP B" issue ;)

Still reviewing the performance numbers - I may have missed it, but how many disks / what speed?
 

RR is fine on A/P arrays - we simply RR to the active paths and ignore the standby ones unless we have to fail over. PowerPath might get you more if the limitation is your interconnects - but in an actual production shared-storage environment, it almost never (95%+ of the time) is.
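Conceptually, that path selection behaves something like the sketch below; the path names are invented and this is obviously not the actual NMP code, just an illustration of round robin skipping standby paths:

```python
# Conceptual sketch of "RR to the active paths only" on an active/passive array.

from itertools import cycle

paths = [
    {"name": "vmhba33:C0:T0:L0", "sp": "SPA", "state": "active"},
    {"name": "vmhba33:C1:T0:L0", "sp": "SPA", "state": "active"},
    {"name": "vmhba33:C0:T1:L0", "sp": "SPB", "state": "standby"},
    {"name": "vmhba33:C1:T1:L0", "sp": "SPB", "state": "standby"},
]

# Round robin only rotates across paths to the owning (active) SP;
# standby paths to the peer SP are ignored unless a trespass/failover occurs.
active = cycle(p for p in paths if p["state"] == "active")

for io_number in range(4):
    path = next(active)
    print(f"I/O {io_number} -> {path['name']} ({path['sp']})")
```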
 