Stable RAID5 on ICH10R with 3x Samsung F4 2TB HD204UIs, anyone?

KrazeyKami

n00b
Joined
Dec 31, 2010
Messages
61
Hi all,

I bought 3x WD20EARS for my ICH10R RAID5 setup. That turned out to be a big mistake, and I suspect the problem is TLER not being enabled / available on the WD20EARS. They failed twice during initialization, so this is my fourth day trying to initialize the array -.-
I'm using the Intel Rapid Storage Tool 9.6.

I'm looking to trade them in for 3x HD204UIs.
I can't seem to find any reports of problems with this drive regarding CCTL (supposedly the equivalent of WD's TLER). Does anyone know whether they have CCTL enabled, whereas the WD20EARS no longer have TLER?
I think this is one of the key factors in keeping a RAID5 stable (so the drives won't drop out of the array when a bad sector is hit).

I hope these drives (with the necessary firmware upgrade, of course: http://www.samsung.com/global/busine...bbs_msg_id=386) will result in a stable RAID5 on the onboard ICH10R. I use an Asus P6T SE.

I'm really curious whether anyone has been running a stable RAID5 with the HD204UIs on an ICH10R.


PS: What's the deal with this alignment? I understand that with Windows 7 x64 there is no need to change anything manually to get the drives properly aligned, yet some people claim that in a RAID array they still need to be aligned?
Could anyone explain how and where you make this change, and especially why?


Thanks in advance.

Regards,
Kami.

My setup:

Asus P6T SE, Bios v.0808
Intel i7 920 @ 2.6 GHz *stock*
ProlimaTech MegaHalems + 2 Cooler Master120mm fans
6 GB OCZ Gold PC3-8500U and 6 GB OCZ Gold PC3-10700U
nVidia GTX295
Sound Blaster Fatal1ty X-Fi
Cooler Master 1000W Real PowerPro
1x Intel X25-M SSD 80GB (SATA 0) (non-raid)
2x 300GB Maxtor 6L300S0 (SATA 1, 2) (non-raid)
3x 2TB WD20EARS (SATA 3, 4, 5) (RAID5)
Sweex PU102 SATA150 Controller
1x 1TB WD Caviar (PU102-Port 1)
1x GGW-H20L Blu-ray (PU102-Port 2)
Cooler Master Stacker 831 case with 6x Cooler Master 120mm fans
Windows 7 x64 Professional
 
Anyone? Sorry for my persistence, but I am really curious what the experience of others is with this kind of setup, and whether I have to make some additional adjustments.

So far I've done the following:

I flashed the HD204UIs with the latest firmware and put them in a RAID5 array via the ICH10R.
It's currently initializing via the Intel Rapid Storage Tool; I set a 64k data stripe and enabled write-back cache just for the initialization. I'll turn it off when it's complete.

Anything I forgot or that is recommended? Do I need to do anything with the CCTL settings or with the alignment / 4K sectors?
 
Samsung supports CCTL, but the setting won't be retained after a reboot. There is no truly reliable RAID on the Windows platform, but you can get it decently reliable with TLER disks, RAID6, and a very good backup. Even then, there are issues you would have limited protection against, such as the bit error rate.
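The bit-error-rate concern can be put into rough numbers. A back-of-the-envelope sketch, assuming the 1-error-per-10^14-bits unrecoverable-read-error (URE) rate commonly quoted on consumer drive spec sheets; this is an illustration, not an exact reliability model:

```python
# Rough probability of hitting at least one unrecoverable read error (URE)
# while reading back an entire array, e.g. during a RAID5 rebuild.
# Assumes the spec-sheet figure of 1 URE per 1e14 bits for consumer drives.

def rebuild_ure_probability(bytes_read, ure_rate_bits=1e14):
    """P(at least one URE) = 1 - (1 - 1/rate)^bits_read."""
    bits = bytes_read * 8
    return 1.0 - (1.0 - 1.0 / ure_rate_bits) ** bits

# Rebuilding a degraded 3x2TB RAID5 means reading the two surviving drives:
print(f"{rebuild_ure_probability(2 * 2e12):.0%}")  # roughly 27%
```

In other words: with these drive sizes, a rebuild after one failure has a non-trivial chance of tripping over a second (latent) error, which is exactly why RAID6 plus backups is the usual recommendation.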

Alignment should be fine when using Windows 7 and letting the Win7 installer create the partition for you during Setup. You can check alignment with AS SSD, which also makes a good benchmark, just like CrystalDiskMark.
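For what it's worth, the arithmetic behind such an alignment check is simple: the partition's starting byte offset must be a whole multiple of the drive's physical sector size (4096 bytes for these 4K drives). A minimal sketch; the offsets shown are the well-known Windows 7 and XP partitioning defaults:

```python
# A partition is "aligned" when its starting byte offset is a whole
# multiple of the drive's physical sector size (4096 bytes for 4K drives).

def is_aligned(starting_offset_bytes, physical_sector=4096):
    """True if the partition start falls on a physical-sector boundary."""
    return starting_offset_bytes % physical_sector == 0

# Windows Vista/7 start the first partition at 1 MiB, which is 4K-aligned;
# the old XP-style layout started at LBA 63 (63 * 512 bytes), which is not.
print(is_aligned(1 * 1024 * 1024))  # True
print(is_aligned(63 * 512))         # False
```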

Be aware that enabling write-back caching is required for decent RAID5 write speeds, but you should know that a simple power cycle could kill your data.

Consider the RAID5 to be less secure than a single drive without any protection; in other words, you should use backups to protect your data! It's easy to overestimate the protection that RAID provides, especially on the Windows platform.
 
Hi Sub.Mesa,

Thank you for your reply. I've done my homework (been reading for almost a week now ;)) on these subjects. It's just that there are so many contradictions that it gets confusing as to what applies where.

The only thing I hope to prevent right now is the disks dropping out of the array. But as far as I have read, there is no automated way to set a CCTL value through the onboard RAID controller.

Are there any recommendations on how to get this setup running as stably as possible with the current hardware?
 
Not without changing your operating system to Linux/BSD, where you won't need TLER/CCTL enabled disks any longer.

You can use this setup on Windows if you have a very good backup. If a bad sector occurs and kicks two drives out, you can just delete the RAID5 and recreate it from backup. If you're set on a Windows-based storage solution, I would concentrate on the backup instead; no RAID is really reliable on Windows, in my opinion.

Why not just use RAID0 for performance on Windows, and build a FreeNAS box or something for backup? That would be a lot more reliable than a Windows-based RAID5, IMO.
 
Hey Sub.Mesa,

The array will be used for multimedia storage only; tough luck if I lose it, but nothing to go (very) insane over. The documents and data I want to keep live on two different disks; a Robocopy batch keeps the directories in sync in both locations.

My Windows OS runs on a single SSD, so the OS is out of scope.

Personally, I'm not too fond of RAID0. Sure, it's faster, but with two disks the risk of losing the array doubles (if one disk fails, the entire array is gone), whereas a RAID1 keeps your data at the cost of 50% of the capacity. I was considering RAID10, but at the moment I only have 3x F4 disks. And I'm still not sure whether the CCTL/TLER issue applies to RAID1/RAID10, or whether it only causes trouble on RAID5.
Overall, I think the storage capacity loss in GB is too big with RAID1 / RAID10.
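For anyone weighing the same trade-off, the rough odds can be sketched with a simple independent-failure model; the per-disk probability below is an arbitrary illustrative number, not a drive spec, and the model ignores rebuild windows and correlated failures:

```python
# Back-of-the-envelope odds of losing the whole array, assuming each disk
# independently fails with probability p over some period.

def array_fail_prob(p, layout):
    if layout == "raid0_2disk":   # striped: any single failure is fatal
        return 1 - (1 - p) ** 2
    if layout == "raid1_2disk":   # mirrored: both disks must fail
        return p ** 2
    if layout == "raid5_3disk":   # two or more failures out of three are fatal
        return 3 * p**2 * (1 - p) + p**3
    raise ValueError(layout)

p = 0.05  # arbitrary illustrative per-disk failure probability
print(array_fail_prob(p, "raid0_2disk"))  # ~0.0975, about double one disk
print(array_fail_prob(p, "raid1_2disk"))  # 0.0025
print(array_fail_prob(p, "raid5_3disk"))  # ~0.00725
```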

I'm also looking into a Synology NAS solution.
From what I've heard, those NAS units don't have problems with desktop disks that don't have TLER/CCTL enabled, as long as the disks are capable of it (like the HD204UIs), because the built-in OS is Linux-based and uses its own commands to keep the disks from dropping out of the array.

The only problem right now is money. That's the reason I'm trying to use the onboard ICH10R.
If, however, one or, God forbid, two drives drop out within a month or so, I think I'll buy a Synology NAS and put the drives in there as a RAID5, maybe even a RAID6 for more reliability.

Hopefully that will be a more stable / reliable solution for those drives.

Don't get me wrong: if I had the cash, I'd be doing this on a hardware controller with RAID / enterprise disks, or even a NAS solution.

Thank you for your time and responses.
If anyone is interested, I'll post my findings and experiences with the current setup in this thread as they unfold. Maybe it will help someone else make the right decision.

Kind regards,
Kami.
 
Thank you for your time and responses.
If anyone is interested, I'll post my findings and experiences with the current setup in this thread as they unfold. Maybe it will help someone else make the right decision.

Actually, I'm very interested in your findings and experiences with 3x Samsung F4 2TB in RAID5 on the ICH10R. I've been wanting these questions answered for a while now:
1) What's the CPU usage like when the RAID 5 array/system is idle?
2) What's the CPU usage like when data is being read from the RAID 5 array?

Pretty much anything related to CPU usage while the RAID5 array is active would be helpful!
 
No problem, I'll get you that info. It's currently initializing... 68% so far :)

It's estimated to take about 24 hours of non-stop initializing (write-back cache enabled).
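That estimate is in the right ballpark: initialization has to touch every sector of every drive, so the duration is roughly drive capacity divided by sustained throughput. A quick sanity check; the 25 MB/s effective rate is an assumption, not a measured figure:

```python
# Initialization has to write (or at least verify) every sector, so a
# lower bound on its duration is capacity divided by sustained throughput.

def init_hours(drive_bytes, sustained_mb_per_s):
    """Hours to sweep one full drive at the given sustained rate."""
    return drive_bytes / (sustained_mb_per_s * 1e6) / 3600

# A 2 TB drive at an assumed ~25 MB/s effective initialization rate:
print(f"{init_hours(2e12, 25):.1f} hours")  # ~22.2 hours
```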
 
No problem, I'll get you that info. It's currently initializing... 68% so far :)

It's estimated to take about 24 hours of non-stop initializing (write-back cache enabled).

Remember, with write-back cache enabled and no battery backup, a brownout could leave you with corrupted data. I'd strongly suggest getting a UPS to guard against this.
 
Well,

Here are the test results on the write / read speeds.
@ Danny Bui,
I will get you your CPU usage data after my second initialization run.
It's currently initializing the 128k-stripe array, so I think I'll have your data by Friday.

@ParityBoy,
It's mainly storage of large multimedia files (movies, for example), so in the event of a power outage while copying a movie, I'm not really bothered about losing that movie. All my important documents and such are backed up on different disks and kept in sync with Robocopy.



After 24 hours of initializing, I was dying to play around with stripe size vs. NTFS cluster size. And the results are... surprising.

[Image: RAID speeds.jpg — stripe/cluster benchmark results]


I apologize for the large image.

Long story short, my best results on the current setup:
RAID5 / 3 disks (HD204UIs): 128k stripe size with 32k NTFS cluster size, with write-back cache on.
In that configuration:

Seq. Read: 240 MB/s
Seq. Write: 261 MB/s.

I am stunned. I've read threads all over the place of people trying to get the best out of their setup while stuck at ridiculously low write speeds, and on various forums I read that the optimal setting for a 3-disk RAID5 is: stripe size × 2 (3 disks minus 1 parity disk) must equal the cluster size. So I expected the 32k stripe / 64k cluster combination to win.
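For reference, that rule of thumb is just the full-stripe-width calculation: in RAID5, one disk's worth of each stripe goes to parity, so the usable data per full stripe is stripe size × (disks − 1). A minimal sketch:

```python
# Full-stripe width of a RAID5 set: one disk per stripe holds parity,
# so the usable data per full stripe is stripe_size * (disks - 1).

def full_stripe_kb(stripe_kb, disks):
    return stripe_kb * (disks - 1)

print(full_stripe_kb(32, 3))   # 64  -> the "32k stripe needs 64k cluster" rule
print(full_stripe_kb(128, 3))  # 256 -> larger than NTFS's biggest cluster (64k)
```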

As you can see in my results, that configuration only gives me:

Seq. Read: 219 MB/s
Seq. Write: 222 MB/s.

Furthermore, there aren't many big differences between the various combinations.
Two combinations suffer heavily from having WBC turned off, but for the most part, it didn't matter that much for sequential writes.

The thing that stood out most, and was kind of interesting, is that the 64k stripe / 32k cluster combo with WBC off only gives 25 MB/s sequential write, and the 128k stripe / 32k cluster combo gives around 38 MB/s.
With the further exception of 128k stripe / 64k cluster (168 MB/s), everything is above 200 MB/s sequential write, whether WBC is on or off.

So, basically... I'm stunned.

Any comments on this? And some feedback on whether this 240 MB/s read / 260 MB/s write is any good?

*Note that I didn't do *anything* to manually align partitions or the like.
This is almost an out-of-the-box installation, and frankly, you would have to try very hard to get bad results, no matter which stripe/cluster combo you choose (see the results).


P.S.: I didn't bother letting the benchmark run through the entire 4k / 64k tests, as at first glance they never got above 0.8 MB/s.

However, when copying a 4 GB file from my SSD to the array, I got 260-320 MB/s according to Windows, and the file was there in no time.

This practical test was also done to make sure I wasn't suffering from cache / benchmark pollution. I ran the tests a number of times and re-created the entire volume / partition on every try.

I will run a final check with ATTO on the 128k stripe / 32k NTFS cluster combo after initializing, and post the results.

Kind regards,
Kami.
 
Don't want to thread-jack or anything, but I need some help with a setup similar to the OP's. I'm running 5x 2TB Samsung HD204UIs in RAID5 on an ICH10R with a Gigabyte 775 board. I know, I know, I should be on a dedicated RAID card (saving for an Areca 1880i and a 4U server case), but I'm making do.

These drives are an upgrade from 4x 1TB WD RE3 drives, which didn't have the write issues I've had with these new drives. Below is the best benchmark I was able to achieve, after turning off the write-back cache in the latest Intel RST driver (10.1.0.1008).

http://tinypic.com/r/2nqfpsj/7

The write performance is abysmal for anything but the 4k transfer size, which would make sense since these are 4K-sector drives. I found a lot of articles on formatting and aligning single 4K drives in Windows, but nothing on formatting a RAID5 set of them. I picked a 128k stripe size and a 4k NTFS cluster size in Windows (formatted in Windows 7 x64 Ultimate using a GUID partition table). I made one big partition and transferred all my old data.

The read speeds are great, and streaming full-quality BD rips is flawless. Does anyone know what I need to do to improve these write speeds? Is this an alignment issue? If I can't fix this, I'm going to need to get that RAID card sooner rather than later. Thanks in advance!
 
@ Jahjahwarrior:

Format your volume with a 32k NTFS cluster and try again.
Although I have to say, my first try (which I didn't list) was a 64k stripe size with a 4k NTFS cluster size, and that resulted in 220 MB/s read + 210 MB/s write.
The only big difference I see between our setups is that I use an ASUS P6T SE and have the Intel Rapid Storage Manager v9.6 installed. I don't think that would make such a large difference in write speed, though.
Of course, you have 5 disks where I have 3, but that doesn't matter for the stripe/cluster relation.

One more thing... why turn off write-back cache?
All that can happen is that if you experience a power loss while (or shortly after) copying a file to the array (a pretty small window for a single desktop user), you can lose the file you were copying.
I take it the files you really need safeguarded are stored on another disk, or multiple disks. A RAID is not a backup.

What is the purpose of your RAID5 array, anyway?

Kind regards,
Kami.
 
@Kami

It's too late to reformat. I have nowhere to temporarily store the data. I don't really write to the array unless I'm ripping something. I'll be moving to a RAID card in the next 4 months.

I turned off write-back cache because, for some odd reason, with it on the write results were even slower than in my chart above. I have read about people discovering similar quirks on the Intel chipset.

I have all my irreplaceable data backed up in multiple locations, along with using SugarSync. I don't have enough redundant storage for my 30 or so full Blu-ray rips; I'm okay if I ever lose those (though it would be a PITA). As you can infer from the above, the array is mostly for media storage. If I had the resources right now, I'd be on a RAID6 card.

I might try to install the same driver you're using to see if that makes a difference.
 
After 24 hours of initializing, here are my final results:

Onboard ICH10R (Asus P6T SE)
RAID5:
3x HD204UI
128k Stripe
32k Cluster
WBC ON
No Manual Alignment or other tweaks needed.

READ: 251 MB/s
WRITE: 265 MB/s

[Image: RAID5.jpg — final benchmark results]


Wonder how long it stays stable ;)

@Danny Bui:
As per your request, here is the data on CPU usage vs. RAID5 activity:

BLUE LINE: CPU USAGE %
RED LINE: ARRAY WRITE TIME %
GREEN LINE: ARRAY READ TIME %

First, the IDLE load:
[Image: CPU vs RAID5 - IDLE.jpg]



Next, the WRITE load, while writing a 3 GB file from my SSD to the RAID5 Array:
[Image: CPU vs RAID5 - WRITE1.jpg]


[Image: CPU vs RAID5 - WRITE2.jpg]



And finally, the READ load, while playing the DVD "300" from my RAID5 Array:
[Image: CPU vs RAID5 - READ.jpg]



As you can see:

IDLE: CPU and array load are both close to 0%.
WRITE: CPU load is around 10% at its peak (with array write time at 80-100%).
READ: CPU load is between 5% and 10%. The funny thing is that array read time is virtually 0; probably because all the data was already cached in RAM.


Do note that this was tested on an i7 920 with 4 cores / 8 threads (Hyper-Threading) and 12 GB RAM.

If you need additional data, let me know.

Kind regards,
Kami.
 
Wonder how long it stays stable ;)

Me too! I just installed 3 of the same drives in my PC about an hour ago. I went with a 32k stripe size based on what I had read; however, after seeing your results, I'm going to switch to 128k. Same use case as you, apparently: a media rig. I hope these drives don't get puked out of the array!
 
Just got done with most of my own testing. I don't know what to make of my results, especially within AS SSD. I can rerun a test back to back (with a clean format in between) and get results that are 5 MB/s off... so I think the margin of error is about 5 MB/s, which puts all my results at roughly the same speed no matter the cluster size (with a 128k stripe).

ATTO isn't much better. Pictures are worth a thousand words...
Note: the NTFS default cluster size is 4k... so I don't get why the run where I formatted to 4k is so far off. I might have overwritten the default-allocation run with something else.
At this point I don't know what to do, lol. So if anyone has any insights, I'm open to them. I think I'm going to format my small-file partition with a 4k cluster size, as it hosts smaller files (documents, text files, etc.). Then for my large media-storage partition I'll use 32k, as the average file is over 1 GB. Otherwise, I can't get a reliable difference in performance no matter the cluster size.


My system specs are as follows.
Q9550 @ 3.8 GHz
8 GB DDR2 @ 1080 MHz
HD 6970
OCZ Vertex 2 (boot)
2x VelociRaptor RAID0 - programs (RAID not enabled in these tests)
3x Samsung HD204UI RAID 5 - storage
 