SpaceHonkey
Gawd
How can I verify in 3.5 if disk write caching is enabled? The lun on the SAN is set to enable but I'm having some bad performance issues - Read = 349MB/s, Write = 90MB/s. Any clues?
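As far as I know the write-cache setting itself lives on the DS3400 controller (you'd check it in DS Storage Manager), but you can at least sanity-check raw sequential throughput from the ESX service console with dd. Paths and sizes below are placeholders; point the test file at the VMFS volume under test:

```shell
# Rough sequential-throughput check from the ESX service console.
# TESTFILE is a placeholder -- point it at the VMFS volume under test.
TESTFILE=/tmp/ddtest.bin

# Sequential write, 128 x 2 MB blocks. conv=fsync makes dd flush before
# reporting, so a big write cache can't inflate the number (on an older
# dd without conv=fsync, run a plain `sync` afterwards instead).
dd if=/dev/zero of=$TESTFILE bs=2M count=128 conv=fsync

# Read it back with the same block size.
dd if=$TESTFILE of=/dev/null bs=2M

rm -f $TESTFILE
```

dd prints MB/s on stderr when it finishes, so you get comparable read and write numbers from the same tool.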
Wait... what? You're running disk performance metric tools inside of a VM, using VMFS3 on a DS3400? If that's the case, no wonder your numbers are shit. You can't run accurate SAN performance tests from inside a guest VM; there are two file systems per VM, in this case NTFS on top of VMFS3, and the host is arbitrating all of the I/O and other resources.
Did I completely misread this, or is this test completely FUBAR?
Let me recap what is going on. SAN write performance blows, in a VM or directly attached, on a 5-disk RAID5 or a 2-disk RAID0. Heck, at one point I even tried a 5-disk RAID0 and it still sucked.
The problem is that read performance is more or less excellent, while write speeds are at best 1/2 (usually 1/3) of read speeds, even with RAID0. For the fun of it, I'm going to try a single drive too.
On a 5-disk RAID5 LUN, tested from a physical server (no VM), disabling write caching on the LUN only drops sequential writes (2MB blocks) from 73MB/s to 67MB/s. That to me is awful.
My concern is that I can't put this thing into production if P2V'd servers are going to have to deal with a massive decrease in storage performance. I'm trying to troubleshoot what could be causing the problem, since IBM refuses to provide performance-based support without a "software" contract. When I asked them why RAID0 write speeds were 1/2 of read speeds, they asked me if that was a bad thing?!
I'm open to all suggestions and testing methodologies. I've got very little hair left to pull out at this point.
I really think it must be related to write caching, but I can't figure out how. One other thing: disabling write caching kills IOPS, so I'm certain it's on.
Surely all SANs don't perform like this!
I'll take a look - you're now the 4th performance case I'm working on
Ok, well I'm stuck again. I just got done speaking with the currently assigned tech, and he said that after talking to the escalation engineers, it seems my numbers are actually about average. I re-ran the same IOmeter test on the physical server, and it saw nearly identical numbers.
OK - so IOPS are good. But I've still got problems with real world file copies. Physical = ~250MB/s, VM = ~75MB/s.
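For the file-copy comparison, it may be worth timing the exact same copy in both places so the numbers are apples-to-apples. Something like this (paths and the 256 MB size are placeholders) spits out a MB/s figure you can compare between the physical box and the VM:

```shell
# Time an identical large-file copy on the physical box and in the VM.
# SRC/DST are placeholders; 256 MB is small enough to rerun quickly.
SRC=/tmp/copytest.src
DST=/tmp/copytest.dst

dd if=/dev/zero of=$SRC bs=1M count=256 2>/dev/null   # build the test file

START=$(date +%s)
cp $SRC $DST
sync                                                  # count the flush, not just the cache fill
END=$(date +%s)

SECS=$((END - START))
[ $SECS -eq 0 ] && SECS=1                             # guard against sub-second copies
echo "copied 256 MB in ${SECS}s = $((256 / SECS)) MB/s"

rm -f $SRC $DST
```

The explicit sync matters: without it a copy that lands entirely in cache looks much faster than it really is.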
I feel like I'm at square one again. Basically I was told that there was nothing else he could do, so no more love from VMware or IBM.
The vmdks are aligned on the LUNs, I've tried a 32K block size in the VM (which changed it to 88MB/s), there's no other SAN activity, and no pathing issues. dd in the service console writes at about 42MB/s or so. I'm at a complete loss. The problem only exists on ESX, yet we're out of solutions.
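One thing worth double-checking: vmdk alignment on the VMFS is only half of it; the partition *inside* the guest also has to be aligned, and the old Windows 2003-era default start sector of 63 misaligns every I/O. A quick check on a Linux guest might look like this (sda/sda1 are placeholders for whatever disk the vmdk presents):

```shell
# Check whether a guest partition starts on a 64 KB boundary.
# sda/sda1 are placeholders -- substitute the disk the vmdk presents.
# With 512-byte sectors, "aligned" means the start sector divides by 128.
START=$(cat /sys/block/sda/sda1/start 2>/dev/null || echo 63)

if [ $((START % 128)) -eq 0 ]; then
    echo "start sector $START: 64 KB aligned"
else
    echo "start sector $START: NOT aligned (the old default of 63 splits I/Os)"
fi
```

On Windows guests the equivalent check is the partition offset reported by diskpart.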