Adaptec 81605ZQ Thread (Powering 12 x 2TB WD RE Drives)

Hurin
2[H]4U
Joined: Oct 8, 2003
Messages: 2,410
Hi All,

I should have put something like "modest" in the thread title since this is not a "barn burner" setup by any means.

Been upgrading the CPUs and storage subsystem in our relatively simple Hyper-V server that uses direct-attached storage.

Specs:

  • Asus Z8NA-D6 1366 Motherboard
  • 2 x Intel Xeon X5675 (12 Cores Total)
  • 96GB RAM
  • Adaptec 81605ZQ
  • Twelve WD RE 2TB 7200 RPM 64MB Cache (WD2000FYYZ)
  • Windows Server 2008 R2 w/ no Hyper-V while testing (will be installing 2012 R2 after testing)

The point of the thread is both to share and to ask whether things look right...

All RAID arrays use a 256KB stripe. All NTFS volumes have the default cluster size (4KB or 8KB depending on volume size).
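
For context on why the parity results below take such a hit on small random writes, here's the stripe geometry spelled out as a quick sketch (just arithmetic on the numbers above, nothing controller-specific):

```python
# Plain arithmetic on the array geometry above (12 drives, 256KB per-drive
# stripe, 4KB IOMeter transfers). Nothing here is controller-specific.

DRIVES = 12
STRIPE_KB = 256        # per-drive stripe (chunk) size
IO_KB = 4              # IOMeter transfer size

full_stripe_raid0_kb = DRIVES * STRIPE_KB          # 3072 KB
full_stripe_raid6_kb = (DRIVES - 2) * STRIPE_KB    # 2560 KB of data per stripe

print(f"RAID-0 full stripe: {full_stripe_raid0_kb} KB")
print(f"RAID-6 full stripe: {full_stripe_raid6_kb} KB of data")
print(f"A 4KB random write touches {IO_KB / full_stripe_raid6_kb:.2%} of a RAID-6 stripe,")
print("so each one is a read-modify-write against the data chunk plus both parity chunks.")
```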

IOMeter Results
1 Worker, 32 QD/Outstanding I/O, 4KB Aligned, 100% write, 100% random

Single Drive
344 IOPS
1.41 MB/s
92ms average seek
128ms worst seek

RAID-0 24TB (all drives)
5,360 IOPS
21.96 MB/s
5.96ms average seek
792ms worst seek

RAID-10 12TB (all drives)
2,479 IOPS
10.16 MB/s
12.9ms average seek
821ms worst seek

RAID-6
871 IOPS
3.57 MB/s
36.73ms average seek
1076ms worst seek
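
For anyone else eyeballing the write numbers, here's a back-of-envelope check against the usual write-penalty rule of thumb (2x for RAID-10, 6x for RAID-6). It assumes IOPS scale linearly with spindle count and ignores the controller's write-back cache, so treat it as a rough floor rather than a prediction:

```python
# Back-of-envelope check of the random-write IOPS above using the common
# write-penalty rule of thumb: RAID-0 = 1, RAID-10 = 2, RAID-6 = 6.
# Assumes linear scaling with spindle count and ignores write-back cache.

single_drive_iops = 344   # measured above
drives = 12

write_penalty = {"RAID-0": 1, "RAID-10": 2, "RAID-6": 6}
measured = {"RAID-0": 5360, "RAID-10": 2479, "RAID-6": 871}

for level, penalty in write_penalty.items():
    expected = single_drive_iops * drives / penalty
    print(f"{level:8s} expected ~{expected:5.0f} IOPS, measured {measured[level]}")
```

Every level lands a bit above the naive estimate (4,128 / 2,064 / 688), which is about what I'd expect with write-back caching in the picture, so nothing here looks broken to me.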

I'm pretty new to IOMeter and to taking this much care with a RAID array. Until now I'd just been using a series of two-disk RAID-1 mirrors, each hosting 2-3 lightly used Hyper-V guests, and prior RAID-5 and RAID-6 arrays have been pretty much "build 'em and forget 'em" affairs on non-Hyper-V servers. Our relatively light I/O usage will likely let us get away with throwing all the VMs on a single RAID-6 array, which will make it easier to deploy more storage capacity to VMs as needed.

So, again, really just posting to share some numbers since I wasn't able to find anything for a modest setup with this particular controller. Would also like to hear from anyone who spots anything egregiously out of whack with those numbers. The seek/access times seem high to me, but I guess that's a result of using a queue depth of 32 on spindles?
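
I suspect that's exactly it; the "seek" figures are really average response times at 32 outstanding I/Os, and a quick Little's Law check (average latency ≈ queue depth ÷ IOPS) reproduces them from the measured IOPS:

```python
# Little's Law: average latency = outstanding I/Os / IOPS.
# Using the measured IOPS from the runs above.

QUEUE_DEPTH = 32
measured_iops = {"Single drive": 344, "RAID-0": 5360, "RAID-10": 2479, "RAID-6": 871}

for config, iops in measured_iops.items():
    latency_ms = QUEUE_DEPTH / iops * 1000
    print(f"{config:12s} predicted ~{latency_ms:5.1f} ms average")
```

That works out to roughly 93ms, 6.0ms, 12.9ms and 36.7ms respectively, matching the averages above, so the latencies are just a consequence of queuing 32 I/Os against spindles.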

I've got two Intel 730 480GB SSDs I was going to use for maxCache, but I'm just not sure they have the endurance to act as cache for 12+ VMs running 24/7, and IOMeter doesn't leverage them at all anyway, so I've left them disabled during this phase of testing. They'll possibly end up as boot drives in a RAID-1 mirror.
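
To put some rough numbers on that worry: as far as I recall the 480GB Intel 730 is rated for about 70GB of host writes per day over its 5-year warranty (roughly 128 TBW), and the write rate below is a made-up placeholder for our VM load, so double-check both before leaning on this:

```python
# Rough endurance math behind my maxCache hesitation. The 128 TBW rating is
# my recollection of the 480GB Intel 730 spec sheet -- verify it. The write
# rate is a made-up placeholder, and I'm assuming a mirrored write cache puts
# the full write stream on each SSD.

RATED_TBW = 128                 # assumed per-drive rating, terabytes written
vm_write_mb_per_sec = 5         # hypothetical sustained write load across the VMs

daily_writes_gb = vm_write_mb_per_sec * 86400 / 1024     # ~422 GB/day
years = RATED_TBW * 1024 / daily_writes_gb / 365
print(f"~{daily_writes_gb:.0f} GB/day through the cache -> ~{years:.1f} years of rated endurance")
```

Even at that modest (made-up) write rate the rated endurance is gone in under a year per drive, which is why I'm leaning toward using them as boot drives instead.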

If anyone is curious to see other I/O tests run with this controller, I'll have it in a testable state for another couple of days before I wipe it and deploy it for realsies.

--H
 