Perc 5/i Bad Speeds

truffle00

So I bought a Perc 5/i and have a RAID 5 array of 5 x 750 GB Seagate HDs on there. Some may be 7200.10s, some may be 7200.11s, I don't know.

Anyway, according to HD Tach I only get about 113 MB/sec average read speed. Isn't that a bit low? I have write-back enabled (without a battery backup unit; I forced it on in the RAID card BIOS and will get one soon) and read-ahead enabled.

The system specs are:

E6600 @ 2.40 GHz (will overclock AFTER setting up system)
IP35 Pro
4 GB RAM
Vista 64 Home Premium
4870 512 MB
Dell Perc 5/i w/ 256 MB RAM

Everything else is probably irrelevant. I thought I should be getting upwards of 200 MB/sec on a system like this? It doesn't really matter in practice, since my network copy speeds are only ~70 MB/sec or less, but what can I do to maximize the potential of this array?
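
Here's my back-of-the-envelope math for why I expected more (the per-drive figure is just my guess for 7200.10-class drives, and the ~350 MB/sec ceiling is a number I've seen quoted for this card, not something I've measured):

Code:
    # Rough RAID 5 sequential-read expectation (all assumptions, not measurements):
    # ~78 MB/s sustained per 7200.10-class drive, reads scaling somewhere between
    # (N-1)x and Nx a single drive, and a ~350 MB/s ceiling on the card itself.
    drives = 5
    per_drive_mb_s = 78
    card_cap_mb_s = 350

    low = (drives - 1) * per_drive_mb_s                 # 312 MB/s
    high = min(drives * per_drive_mb_s, card_cap_mb_s)  # 350 MB/s
    print(f"expected sequential read: roughly {low}-{high} MB/s")

Even the low end of that range is a long way above the 113 MB/sec I'm actually seeing.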
 
I know this is impatient, but I would like some feedback, especially from people who own the Perc 5/i's...

Basically, I don't want to transfer all my videos over until I'm confident it's safe to do so... and that won't happen unless I'm sure I won't have to reformat everything later just to get the settings right. What should I do to get 200 MB/sec+ read speeds?
 
What block size are you using when testing with HD Tune? I have a Perc 5/i with 4 x 750 GB 7200.10's on it. I'll run a quick test to see how far off you are from my numbers.
 
In the RAID card BIOS I left it at the default of 64 KB. I did not change any of the HD Tune defaults. Is that what you mean?

Anyway, I appreciate your effort. Anything I can get will help, even though I am about 99% sure that something is wrong. It just helps to know how far off the mark I am.

Thank you.
 
I really don't think you're that far off... this is an old screenshot from when I had 3 x 750 GB in RAID 5:

[screenshot: perc.jpg]


Here is one I just ran; the array is now 4 x 750 GB 7200.10 drives:

[screenshot: percnew.jpg]
 
Basically, my 5 x 750 GB array benchmarks just under your 3 x 750 GB one. HD Tune also shows my array as 2199 GB even though it's 2.8 TB, but maybe that's just an idiosyncrasy of the program. Again, thanks for posting something to compare to.

Can you please post your settings? It's entirely possible that I've screwed this up.
 
Those results seem kind of low to me. This is what I got on my sister's computer with a Perc 5 running 3 x 500 GB drives + 1 x 400 GB in RAID 5:

[screenshot: hardware_hdtune.png]

[screenshot: hardware_hdtach.png]


This is better than what you are getting, and these drives are pretty slow. The upper limit of this card (for reads) is 350 MB/sec from what I have heard, which should be easily doable with 5 x 750 GB drives.
 
Your results seem low to me. I'm running 8x GreenPowers in RAID 5 and getting over 200 MB/s. Keep in mind that the Perc isn't designed to be run in anything other than a Dell workstation and might have issues. Mixing different drive revisions could also cause performance issues with the controller.

Another concern is that you should be seeing more than 2.2 TB on the array. Did you convert the array to GPT?

Things to try:
Is write-back or write-through in use?
Try a different (perhaps larger) stripe size.
Try switching which drive revisions are on which fan-out connector.
Try a different PCI-e slot if you have another available.
 
Another concern is that you should be seeing more than 2.2 TB on the array. Did you convert the array to GPT?

HD Tune has a 2 TiB limit (it looks like it only handles 32-bit sector counts rather than full 48/64-bit LBA), so anything bigger gets reported at that cap. In the hard-drive makers' decimal scale, 2 TiB = 2,199,023,255,552 bytes, which comes out to the 2199 GB you're seeing, the same way the old 128 GiB (binary) LBA limit was marketed as 137 GB.
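
Spelled out, it's just a 32-bit sector count times 512-byte sectors:

Code:
    # Why a 2 TiB cap shows up as "2199 GB".
    cap_bytes = 2**32 * 512        # 2,199,023,255,552 bytes
    print(cap_bytes / 1000**3)     # ~2199 GB in drive-maker decimal units
    print(cap_bytes / 1024**3)     # 2048 GiB in binary units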

HD Tach does not have this limitation. Here is what HD Tune looks like on my 18 TB array (also capped at 2 TB):

[screenshot: hdtune_raw.png]
 
Your results seem low to me. I'm running 8x GreenPowers in RAID 5 and getting over 200 MB/s. Keep in mind that the Perc isn't designed to be run in anything other than a Dell workstation and might have issues. Mixing different drive revisions could also cause performance issues with the controller.

I realize it's supposed to be run in a Dell computer, but since other people don't seem to be running into this, I'm calling that into question. The mixed drive revisions could be an issue, but I was hoping that would only drag things down to the speed of the slowest drive (i.e. everything should still be at least as fast as a 7200.10).


Another concern is that you should be seeing more than 2.2 TB on the array. Did you convert the array to GPT?

It was set up as GPT, and I see 2.8 TB in the RAID card BIOS as well as in Vista. The only place I don't see it is in HD Tune.


Things to try:
Is write-back or write-through in use?
Try a different (perhaps larger) stripe size.
Try switching which drive revisions are on which fan-out connector.
Try a different PCI-e slot if you have another available.

Write-back is in use, which I thought was faster. Should I be using a stripe size larger than 64 KB? I thought that was a fairly standard and unrestrictive size.

Fan out connector? I don't know what that means.

I can only use that PCI-e slot because the other is occupied by my video card.
 
What is your burst speed like?

You might want to look into updating the BIOS with an LSI BIOS file. When I did, I saw a big performance boost.

Here are some HD Tach comparisons of the various setups I tested.

http://docs.google.com/Doc?id=ddnjmgzg_148gxdpdvfv

My burst speed is 274 MB/sec, which seems low. It should also mean that my slot isn't restricted to x1.
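
My reasoning on the slot, in case anyone wants to check it (this assumes the card links at PCIe 1.x speeds):

Code:
    # One PCIe 1.x lane: 2.5 GT/s with 8b/10b encoding, per direction.
    lane_mb_s = 2.5e9 * (8 / 10) / 8 / 1e6
    print(lane_mb_s)    # 250.0 MB/s per lane
    # A 274 MB/sec burst wouldn't fit through a single lane,
    # so the link must be wider than x1.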

I tried to update to an LSI BIOS, but it wasn't working the last time I tried it. I forget the error that it was giving me, but basically, it refused to reflash the card.

Also, all my data is now on the array. Whatever I do to it from here on out can't require me to move the data off to preserve it. If that is too restrictive, then I can always live with the array as it is.
 
Make sure you have "Enable advanced performance" checked in Vista for the controller.

[screenshot: 10035288ji3.jpg]


There should be a "DELL PERC 5/i SCSI Disk Device" in Device Manager under Disk drives. Open its properties and click the Policies tab. Oddly enough, most people don't mention this when troubleshooting slow RAID speeds.
 
serbiaNem is right; you need to play with combinations of your RAID stripe size and your Windows allocation unit (logical block) size to get maximum performance. When testing on large disk arrays (14-drive arrays), I've found that tweaking stripe size and allocation unit size can make a huge difference in transfer rates, because it affects whether a given transfer hits all the drives in the array at once for optimal speed.
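
To put rough numbers on it for this particular array (just a sketch using the 5 drives and 64 KB stripe from this thread; the full-stripe rule of thumb is generic RAID 5 advice, nothing PERC-specific):

Code:
    drives = 5
    stripe_kb = 64                        # per-drive chunk set in the card BIOS
    data_chunks = drives - 1              # one chunk per stripe holds parity
    full_stripe_kb = data_chunks * stripe_kb
    print(f"full stripe = {data_chunks} x {stripe_kb} KB = {full_stripe_kb} KB")

    # Writes that cover a whole 256 KB stripe avoid the RAID 5 read-modify-write
    # cycle, and reads at least that large keep all the data drives busy at once,
    # which is why stripe size and the NTFS allocation unit size chosen at format
    # time show up in the benchmark numbers.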
 