ESXi iSCSI Benchmarks

nicka
After much Google and many YouTube videos, I have come to ask the advice of the pros. I had higher performance expectations for my current setup in the area of throughput over iSCSI between my ESXi 5 host and shared-storage SAN/NAS. I have attached as many screenshots as I could and listed the settings and hardware, to see if there are any options you guys can think of to help me get a better "realistic" benchmark out of dual gigabit pipes; or have I hit as good as it's going to get with my current hardware? What are my upgrade paths, or should I just settle for what I have since this is where I should be?

ESXi Host
  • Xeon 1230
  • MBD-X9SCL+-F / 16GB ECC
  • 500 GB WD Blue (Local storage)
  • Intel PRO/1000 PT Dell X3959 Dual Port Gigabit (VM Network / Management)
  • ESXi 5 installed to USB Flash Drive

SAN/NAS
  • Celeron G530
  • Gigabyte GA-H61M-S2H / 8GB
  • x2 Intel Gigabit CT PCI-E EXPI9301CTBLK (iSCSI traffic)
  • x1 Intel PWLA8391GT PRO/1000 GT PCI (Management)
  • x4 Hitachi HDS72105 connected to AOC-USAS2-L8i in Raid0
  • OpenIndiana 151a / Napp-it installed on some leftover 2.5" drive

Switch: NETGEAR ProSafe GS108T

Network Settings:

The Netgear has DHCP disabled, jumbo frames enabled, and no LACP settings in use. Everything sits on the 192.168.10.* range with a 255.255.255.0 subnet mask and 192.168.10.1 as the gateway. Four ports are in use, two to each box, all Cat 6e cables. The NICs on the ZFS side are set to static IPs with mtu=9000; no aggr0s or lagg0s configured.
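
For what it's worth, here's roughly how the jumbo frame path can be sanity-checked end to end (the vmkernel interface, NIC name, and target address below are just examples, not necessarily what's on my boxes):

  # On the ESXi host: check the vSwitch and vmkernel MTU, then ping the SAN
  # with a don't-fragment 8972-byte payload (9000 minus IP/ICMP headers)
  esxcli network vswitch standard list
  esxcli network ip interface list
  vmkping -d -s 8972 192.168.10.20

  # On the OpenIndiana box: confirm the link MTU on the iSCSI NICs
  dladm show-linkprop -p mtu e1000g1

If the vmkping with -d fails while a plain vmkping works, something in the path (vmk port, vSwitch, switch port, or the ZFS-side NIC) isn't actually passing 9000-byte frames.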

Benchmarks with Delayed ACK disabled, in a W7 VM:


CDM:


Screenshots of all other settings I can think of:
http://imgur.com/a/yWTgA#0

Thanks in advance!
 
Go run Iometer, 4/8/16k, mixed reads and writes, and show me what it gets back for those. Run it on a second, 5 GB disk, on a separate vSCSI controller, no filesystem.

Pipes never matter; spindles = IOPS matter, given what VMs do.
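
If it helps, here's roughly how that second test disk could be created from the ESXi shell; the datastore and VM folder names are just examples:

  # Create a 5 GB eager-zeroed VMDK on the iSCSI datastore
  vmkfstools -c 5g -d eagerzeroedthick /vmfs/volumes/iscsi_datastore/W7VM/iotest.vmdk

Then attach it to the VM on a new SCSI controller (e.g. SCSI 1:0) in Edit Settings and leave it unpartitioned in Windows; Iometer will list it as a raw physical drive target.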
 
4/8/16k:




Running it on a separate disk now (if I can figure out how to do that without a filesystem).
 
I'm a little confused here. CDM shows you getting about 900 Mb/sec over a gigabit link. Why would you expect much more than this? At least for the sequential throughput? If it's the 4K throughput that bothers you, you haven't really said...
 
I'm a little confused here. CDM shows you getting about 900 Mb/sec over a gigabit link. Why would you expect much more than this? At least for the sequential throughput? If it's the 4K throughput that bothers you, you haven't really said...

CDM is showing 100MB/s over two gigabit links.
 
1. You don't get to use 2 links. That's not how NMP works :) You're getting pretty close to single link speed from your test (1 Gbit is about 125 MB/s before protocol overhead, so ~100 MB/s is essentially wire speed for one link), which is what you want for that kind of test (but doesn't matter in our world).
2. Spindle speed and IOPS are what you care about, unless you're using 1 host and a single VM (in which case, why virtualize?).
3. Numbers look good. 1500-2500 CMD/s is great for 4 SATA disks (if my googlefu is good). Good caching there :)
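
If you want to confirm what ESXi is actually doing with those two links, the host shell will show the path selection policy in use and which path each LUN is going down (ESXi 5 syntax):

  # Per-LUN NMP settings, including the Path Selection Policy and working path
  esxcli storage nmp device list

  # Every path to the iSCSI target and its state (active / standby)
  esxcli storage core path list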
 
Not an expert on iSCSI here, but I seem to recall that any individual stream is only going to use one NIC (or somesuch...)
 
Not an expert on iSCSI here, but I seem to recall that any individual stream is only going to use one NIC (or somesuch...)

Path to logical unit (LUN).

ESX uses a single path at a time to each LUN. When you have multiple paths, you can utilize them via Round Robin (switching every 1000 IOs by default), or by pinning each LUN to a different path (plus failover, of course). With enough LUNs, RR will get you full utilization... BUT at any one given time, each LUN is only going through a single path, unless you have PowerPath.
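
If you do want a single LUN spreading load across both links, the usual tweak is to set that LUN to Round Robin and, optionally, drop the path-switch threshold from the default 1000 IOs to 1. Roughly like this from the ESXi shell, where the naa ID is a placeholder for your LUN:

  # Set the LUN's path selection policy to Round Robin
  esxcli storage nmp device set --device naa.XXXXXXXXXXXXXXXX --psp VMW_PSP_RR

  # Switch paths after every IO instead of every 1000
  esxcli storage nmp psp roundrobin deviceconfig set --device naa.XXXXXXXXXXXXXXXX --type iops --iops 1

Even then, any one outstanding IO still rides a single path; RR just interleaves them across the links.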
 
So I will benefit from more disks, and more RAM in the ZFS unit, you think? SSD?
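
From what I've read, if an SSD did go into the ZFS box it would be attached either as a read cache or as a separate log device for sync writes; something like the following, where the pool and device names are just placeholders:

  # Add an SSD as L2ARC read cache
  zpool add tank cache c5t0d0

  # Or add it as a separate log (ZIL) device, which helps sync-heavy iSCSI writes
  zpool add tank log c5t1d0

  # Watch how the pool and cache/log devices get used under load
  zpool iostat -v tank 5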
 