Disk IO speed is terrible on R710

tgs

Server - DELL R710

All speeds are designated in megaBYTEs, NOT BITS.

Setup 1:
Drive 1 - 240 GB Revodrive (PCIe SSD in RAID 0, ridiculously fast speeds)
Drive 2 - 1 TB WD drive which shipped with the server.
Controller - Crap $15 "raid" card bought from Newegg.
OS - Server 2003 x64
Result of 5 GB file FROM Revo TO WD - Steady speed of about 80 MB/s with an initial burst of 300 MB/s.
Result of 5 GB file TO Revo FROM WD - Steady speed of about 60 MB/s with an initial burst of 100 MB/s.


Setup 2:
Drive 1 - 240 GB Revodrive (PCIe SSD in RAID 0, ridiculously fast speeds)
Drive 2 - 1 TB WD drive which shipped with the server.
Controller - SAS 6/iR
OS - Server 2003 x64
Result of 5 GB file FROM Revo TO WD - Inconsistent speed of about 18-30 MB/s (only 3 MB/s before checking "Enable Advanced Performance" on the WD drive in Windows, a setting change which was NOT REQUIRED by the cheap $15 Newegg controller to hit its speeds).
Result of 5 GB file TO Revo FROM WD - Steady speed of about 55 MB/s.


Setup 3:
Drive 1 - 1 TB WD drive which shipped with the server. CONNECTED VIA $15 ADDON CARD.
Drive 2 - 2x 2 TB WD drives. CONNECTED TO ONBOARD SATA.
OS - Server 2003 x64
Result - 80-90 MB/s sustained reads (this is while transferring to both destination drives at the same time, each getting 40+ MB/s).


I've done a lot of googling on this issue. So let me pre-emptively address some things:
  1. Through testing, I've determined that the 1 TB and 2 TB drives have a real-world throughput of 80-90 MB/s read and 60 MB/s write. These are real-world copy numbers, not synthetic benchmarks (see the rough timing sketch after this list).
  2. It doesn't matter if the destination drives are single or RAID 1. Write speeds suck through the SAS controller.
  3. The requirement to check "Enable Advanced Performance" in Windows is ridiculous, considering the onboard SATA and the cheap controller don't need it to saturate the drives' bandwidth. Additionally, even with it enabled, we only get HALF of the bandwidth we should be seeing.
  4. No configuration changes made through wdidle3, LSIUtil_1.62, or Hitachi Feature Tool 2.12 have resolved the issue. All settings look optimal on both the drives and the controller.
  5. The 1 TB WD shipped with the server, and as such has been the focus of my testing (since some people claim that DELL optimizes their shipped drives for their controllers). The 2 TB drives are third-party purchases. All three drives achieve their proper speeds when not connected through the SAS controller. Once on the SAS controller, it's 3 MB/s or 20-30 MB/s depending on whether that write cache option is enabled.
  6. Firmware and drivers for the SAS 6/iR are the most recent that can be found online.
  7. The drives have been formatted a number of different ways with different block sizes during testing. None made any notable difference in performance with the SAS controller.
  8. The backplane has already been replaced by DELL; it is not the culprit. Neither are the cables.
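
For anyone who wants to check the numbers from point 1, this is roughly the kind of timed copy I'm talking about. A minimal Python sketch; the source and destination paths are just placeholders for whatever drives you're testing, and a big (multi-GB) file keeps the Windows cache from skewing the result too much:

# Rough real-world throughput check: copy one big file and report the average MB/s.
# SRC/DST are placeholder paths -- point them at the drives under test.
import os
import shutil
import time

SRC = r"D:\testfile.bin"   # e.g. a ~5 GB file on the Revodrive
DST = r"E:\testfile.bin"   # destination on the drive behind the controller under test

size_mb = os.path.getsize(SRC) / (1024.0 * 1024.0)
start = time.time()
shutil.copyfile(SRC, DST)
elapsed = time.time() - start

print("Copied %.0f MB in %.1f s -> %.1f MB/s" % (size_mb, elapsed, size_mb / elapsed))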

So here I sit with a $1500 server in my home office that gets shitty write speeds, and there's no solution to be had that doesn't involve spending more money (which I am not willing to do and should not have to do). I've been patient waiting for certain back-ordered parts to arrive so that the server could finally be configured the way it should be, but with the problems I'm running into, I'm wondering whether it's within my rights to return it after three months of trouble. The disk IO speeds are just the straw that broke the camel's back; I've had other issues such as parts back-ordered for months, idiot reps who can't place simple orders, etc.

I'm frustrated that DELL would sell garbage, and I'm frustrated with myself for having bought this instead of building one myself and saving both time and money.

I wouldn't even be so bothered about needing to mess with the write cache setting in Windows if it actually delivered the full performance I expect from these drives. It's not like 60 MB/s is breaking any land speed records. The crap "raid" card from Newegg handles it, and so does the onboard SATA. But for some reason it's too much load for little ol' SAS 6/iR. That's unacceptable.
 
I got a little lost on what speed you're complaining about, as it sounds like you are complaining about the 6/iR 30MB/s as well as the 60-70MB/s speeds.

From my quick glance it sounds like you need to replace the 6/iR.

The 1068e isn't a real powerhouse, although it doesn't have to be to support only RAID 1 and 0, but I never had write issues on mine.
 

I'm only complaining about the 3 MB/s and 30 MB/s.

I'd be quite happy if it were handling 60 MB/s writes.
 
In OpenManage on the server, check whether the data cache on the SAS 6/iR is enabled.
 
@OP

Try using HD Tune and doing a write test on the WD when it's connected to the SAS 6/iR. This should tell you if the SAS 6/iR really is the culprit.

I have one of these cards, with 7 Samsung HD204UIs attached to it. Benching them individually, I got average reads and writes of ~113MB/sec. What model is the WD?
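
If you'd rather script it than run HD Tune, something like this rough Python sketch does a basic sequential write test against the drive on the 6/iR. The path and sizes are only examples, and the flush/fsync at the end is there so Windows write buffering doesn't flatter the number:

# Crude sequential write test: write 2 GB in 8 MB chunks and report the average MB/s.
# TEST_FILE is a placeholder path on the drive hanging off the SAS 6/iR.
import os
import time

TEST_FILE = r"E:\write_test.tmp"
CHUNK = b"\0" * (8 * 1024 * 1024)   # 8 MB per write
TOTAL_MB = 2048                     # 2 GB total

start = time.time()
f = open(TEST_FILE, "wb")
for _ in range(TOTAL_MB // 8):
    f.write(CHUNK)
f.flush()
os.fsync(f.fileno())                # push data out of the Windows cache before the timer stops
f.close()
elapsed = time.time() - start

print("Wrote %d MB in %.1f s -> %.1f MB/s" % (TOTAL_MB, elapsed, TOTAL_MB / elapsed))
os.remove(TEST_FILE)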
 
Have you tried testing with Windows Server 2008 R2/Win7? The 6/iR is no powerhouse but should be capable of 75-80MB/sec transfers. I am surprised that you are using Windows Server 2003 R2 x64 on such new hardware.
 
Can I just ask, did you have any issues getting your RevoDrive to work with the R710 or was it just a case of install it and away you go?
 
The 6/iR is a turd. I was getting 80-ish on SAS drives. I've got a PowerVault hooked up now (I don't recall the card), and I'm at something like 400-600ish write speeds and 600-800ish read speeds.
 