Dell PERC 5/i slows down...

So I've had this Dell PERC 5/i in my file server for a year now.
Lately it's been slowing down drastically.
When I transfer a file from another drive, the array runs at a good 110 MB/s for the first 10 seconds, but after that it falls to 25 MB/s.
If I do two transfers at once it gets even worse and drops to 2 MB/s.
I'm almost at the point of killing the RAID and going to JBOD, but that's a big hassle.

CrystalDiskMark shows a sequential speed of 615 MB/s, but 4K is a different story: I get 44 MB/s read and 0.533 MB/s write!!!
It's an 11 TB RAID 5 with 7 drives and 2.7 TB of free space.
All drives are WD EARS or EARX.
Stripe size is 64 KB.
I just noticed that the BBU is bad, but my Adaptec 31205 did not slow down when that happened.

Anybody got some ideas?
Thanks in advance, [H]!
 
Replace the BBU, or go into the controller BIOS and check the box to force write-back (WB) with no battery.
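Not from the thread itself, just a rough sketch of making that same change from the running OS with LSI's MegaCli utility (the PERC 5/i is an LSI-based card, so MegaCli generally talks to it). The binary path and the exact property flags are assumptions and vary between MegaCli releases, so check MegaCli's help output before trusting them.

```python
# Hedged sketch: query and force write-back caching on an LSI/PERC controller
# via MegaCli. Binary name and flag spellings are assumptions -- they differ
# between MegaCli versions (MegaCli, MegaCli64, /opt/MegaRAID/MegaCli/...).
import subprocess

MEGACLI = "MegaCli"  # adjust to your install

def megacli(*args: str) -> str:
    """Run MegaCli with the given arguments and return its stdout."""
    result = subprocess.run([MEGACLI, *args], capture_output=True, text=True)
    return result.stdout

# Show the current cache policy of every logical drive on every adapter.
print(megacli("-LDGetProp", "-Cache", "-LAll", "-aAll"))

# Force write-back even with a bad/missing BBU (same effect as the BIOS checkbox).
print(megacli("-LDSetProp", "-ForcedWB", "-Immediate", "-LAll", "-aAll"))
```

Only worth doing if you accept that an unexpected power loss can cost you whatever was sitting in the write cache, since there is no battery left to protect it.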
 
You may want to see if your RAID 5 array is degraded due to a failing or failed drive.

The BBU won't cause the type of slowdown you are describing.

Fire up MegaRAID Storage Manager and see what the health status of each physical and virtual drive is.
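For what it's worth, here is a minimal sketch of pulling the same health info from the command line with MegaCli instead of the GUI; the binary path is an assumption, and the filter strings simply match the usual field names in MegaCli's output.

```python
# Minimal sketch: dump virtual-drive and physical-drive health from an
# LSI/PERC controller with MegaCli and keep only the interesting lines.
# Binary path is an assumption; adjust to wherever MegaCli lives on your box.
import subprocess

MEGACLI = "MegaCli"

def megacli(*args: str) -> str:
    return subprocess.run([MEGACLI, *args], capture_output=True, text=True).stdout

# Virtual drives: "State: Optimal" is healthy, "Degraded" means a member is gone.
for line in megacli("-LDInfo", "-LAll", "-aAll").splitlines():
    if "Virtual Drive" in line or "State" in line:
        print(line)

# Physical drives: a rising "Media Error Count" / "Predictive Failure Count",
# or a "Firmware state" other than "Online, Spun Up", points at the bad disk.
for line in megacli("-PDList", "-aAll").splitlines():
    if any(key in line for key in ("Slot Number", "Firmware state",
                                   "Media Error Count", "Predictive Failure Count")):
        print(line)
```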
 
It's now on write-through.
Doing a consistency check on it as we speak! 8 hours remaining...
I am also doing one on the 9 TB array.
I don't have an individual health indicator for each drive in MegaRAID Storage Manager...

It would suck if a drive is failing. I'm in the middle of my exams :(.

Thanks for the info!
 
Definitely a failing drive. I have been using PERC 5/i controllers for several years and that is the telltale sign of a drive going south. If a drive does fail, you can still limp along in degraded mode until you can replace it. The thing that sucks about the PERC 5/i is that individual SMART drive errors are not reported, only outright drive failures. I guess it's one of those 'you get what you pay for' things...
 
Dammit! Dammit!
That's not what I need with exams and high-priced HDDs :mad:

So I ran a consistency check on both arrays (the Adaptec and the Dell); neither warned me about or fixed anything.

Also ran an HD Tune error scan on all drives, which came back clean. :cool:

Now, is there a way to locate the bad drive :confused: ?
Because I am at my boiling point with this PERC 5/i crap!
I'm thinking of going back to JBOD. It's a big hassle, but I never lost data that way :rolleyes: .
This 7x 2 TB array of EARXs and EARSs makes me nervous.

And I will need careful planning to transfer 9 TB of data... I only have 4.5 TB left, and all my other systems have SSDs... :(
But I am willing to spend €450 on 4x 2 TB drives :rolleyes:

Thanks!
 
Remember that a lot of RAID cards will shift to write-through when the BBU is not working, and write-through will give you abysmal random write performance. I would watch eBay for a while and see if you can find one of those cheap BBU units from overseas.
 
Both BBUs died recently, but my data is not critical; the most important data is backed up in the cloud.
I also never have power outages here... I've already replaced 2 UPS batteries and they never had to do their job.
So I forced both cards to write-through.
I get:
Sequential: 595 MB/s read and 111 MB/s write.
Random 512K: 587.8 MB/s read and 21.24 MB/s write!!! :rolleyes:
4K is even worse... 59.17 MB/s read and 0.459 MB/s write :( , compared to the Adaptec, which manages 25.09 MB/s write...
 
Look at the SMART data on the individual drives. One or more could be dying.

Is there a program for that while the system is running?
Or do I have to remove the RAID drives and test them? And how do I do that without losing data?

Thanks guys! I appreciate the responses!

P.S.: I am ordering two 3 TB EARZ drives for €170 each.
 
Nobody has any suggestions?

I already started transferring the 11 TB of data to various other drives...
It took me almost 2 days of careful planning and transferring.
Still 500 GB to go and then I'm done.
I think I am going to remove the virtual drive and individually test each drive...
I am so disappointed in the PERC 5/i and those WD EARS drives...
 
Personally?

I would label each drive with which port it was plugged into, then remove the PERC 5/i card.

Then connect the drives directly to the motherboard. Boot off something like the Linux SystemRescueCd, run a smartctl check on each drive, and also run the long SMART self-test on them.

Reboot again to test more drives.

When you're done, just put the PERC 5/i card back in, reattach the drives, and you haven't lost any data.

Then it all depends on what SMART reported for each drive.
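In case it helps, a minimal sketch of that smartctl pass from a Linux live environment such as SystemRescueCd (which ships smartmontools); the device names below are assumptions, so check lsblk or dmesg first and adjust the list.

```python
# Minimal sketch of the smartctl pass described above, run from a Linux live
# environment (e.g. SystemRescueCd) with the drives on motherboard SATA ports.
# Device names are assumptions -- check `lsblk` first and adjust the list.
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # assumed; one entry per tested disk

for dev in DRIVES:
    # Full SMART report: reallocated, pending and offline-uncorrectable sector
    # counts are the attributes that usually betray a dying drive.
    subprocess.run(["smartctl", "-a", dev])

    # Start the long (extended) self-test; it runs inside the drive's firmware,
    # so come back later and read the result with `smartctl -l selftest <dev>`.
    subprocess.run(["smartctl", "-t", "long", dev])
```

Both steps only read from the disks, so the array metadata on them stays untouched, which is what lets you drop the drives back onto the PERC afterwards without losing anything.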
 