RAID ZERO Question

dabiggoober

Just bought two 37GB SATA WD Raptor drives, 10k RPM.

Also bought an AMD Athlon 64 3200+ with a Gigabyte motherboard.


I have never hooked up a RAID Zero Array before and would greatly appreciate any ADVICE/WARNINGS or in general anything else I should be aware of when I get my new puter gear in this coming week.

I also have a normal 200GB IDE ATA drive as a storage drive. Going to use the two Raptors for the OS, programs, and temp space for video editing.

Thanks,

DABIGGOOBER :D
 
Sounds very similar to my system. First, I disconnected the power to the regular IDE drive until the OS was installed on my Raptors. This way, I was guaranteed to have the Raptors as the C: drive. Second, make sure your driver floppy is handy. You'll set up the array in the RAID controller's BIOS and then boot to the 2000/XP CD. It will ask you to hit F6, so do so, and then it will prompt for the floppy. Once that all happens, everything else is normal.
 
I have a similar setup, only with an Asus mobo and a smaller IDE drive.

There were only two bits that made it different from a regular setup (apart from configuring the SATA BIOS, but that was very straightforward).

First, the "F6" during Windows setup, and second, I had to tell the BIOS to boot first from SCSI.

Neither was a surprise, and it worked the first time (and ever since).
 
What should he set the block size to? I used to use 64K for best results on my old AMI RAID 0 setup, but that was after lots of reformatting and benching. I found that it makes a huge difference in performance.
 
Anything above 32k is meant for video editing. I recommend 32k or lower. I have 15k SCSI drives and run 16k blocks, but 32k is my second choice most of the time.
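To make the stripe-size debate concrete, here's a toy model of how RAID 0 maps a request onto its member disks. This is my own sketch for illustration only, not any controller's actual firmware; the function name and the sizes are made up.

```python
# Toy model of RAID 0 striping (an illustration, not any vendor's
# implementation): a request is split across disks in stripe-sized chunks,
# and consecutive stripes alternate between member disks.
def disks_touched(offset, length, stripe=32 * 1024, ndisks=2):
    """Return the set of member disks a request of `length` bytes
    starting at `offset` would hit, given the stripe size in bytes."""
    first = offset // stripe
    last = (offset + length - 1) // stripe
    return {s % ndisks for s in range(first, last + 1)}

# A 4KB read fits inside one 32KB stripe, so only one disk seeks:
print(disks_touched(0, 4 * 1024))     # {0}
# A 1MB video read spans many stripes, so both disks stream in parallel:
print(disks_touched(0, 1024 * 1024))  # {0, 1}
```

The trade-off the thread is debating is whether small requests should be spread across both disks (small stripes) or kept on one disk so the other is free to serve a different request (large stripes).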
 
Do not disconnect any drives; if you do, the drive letter allocation will end up in disarray. Follow this post for setting up drives correctly: http://www.hardforum.com/showthread.php?s=&threadid=732042

If I remember correctly, the best allocation size depends on the size of the files you are working with. 128k is considered the highest-performing. I have not tested a RAID array with various sizes myself, so I will not comment any further, but what I have read contradicts what many have posted here. Note: Intel's instructions are one piece of literature I am referring to: http://support.intel.com/support/chipsets/iaa_raid/sb/CS-009333.htm (manual 2 web p.34)

I do understand other RAID devices may have optimum settings that differ, but as I have said, all the literature I have read says performance = 128k.
 
I've never objectively benched cluster sizes, but I've been given to understand that it depends on the size of the traffic.
For instance if the drive is going to be used mostly for large individual files, like images or video, then a bigger cluster size is better.

But for lots of little files (like a system drive) a smaller cluster will be more efficient.

If, like me and most other users, it'll be a mix, 16k or so would probably be best all round.
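A quick back-of-the-envelope sketch of that point (my own toy numbers, not a benchmark): count how many stripe-sized segments a single file read turns into at different stripe sizes.

```python
# How many stripe segments (and therefore per-segment requests) does a
# file of a given size turn into? Ceiling division over the stripe size.
KB = 1024

def segments(file_size, stripe):
    """Number of stripe-sized segments a `file_size`-byte file spans."""
    return -(-file_size // stripe)  # ceiling division

for stripe in (16 * KB, 32 * KB, 128 * KB):
    small = segments(8 * KB, stripe)        # e.g. a small system file
    big = segments(2 * 1024 * KB, stripe)   # e.g. a chunk of video
    print(f"{stripe // KB:>3}k stripe: 8k file -> {small} segment(s), "
          f"2MB file -> {big} segment(s)")
```

Bigger stripes mean far fewer per-segment requests for large files, while an 8k file lands in a single segment either way; the small-file argument is more about letting concurrent small accesses land on separate disks.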
 
Originally posted by MartinX
I've never objectively benched cluster sizes, but I've been given to understand that it depends on the size of the traffic.
For instance if the drive is going to be used mostly for large individual files, like images or video, then a bigger cluster size is better.

But for lots of little files (like a system drive) a smaller cluster will be more efficient.

If, like me and most other users, it'll be a mix, 16k or so would probably be best all round.

I think another thing to factor in is the performance of the drives themselves. For instance, if you have, let's say, a pair of midrange drives, going with a smaller cluster size gives what is in their cache a chance to be written to the disk while the controller is sending data to the other disk.

With a faster drive with an 8MB cache, which can keep up with the controller better, a smaller cluster size doesn't help as much and just adds overhead to the system, since almost all onboard ATA/SATA controllers are software-controlled by the host CPU.

In the benchmark tests I ran, I saw CPU time increase substantially when the block size was smaller. It's a trade-off to find the "sweet spot" between RAID performance and system performance. I used HDTach, Sandra, and WinBench to benchmark the speed. I then copied and pasted a file with Windows Explorer and watched Task Manager. HDTach also measures CPU time. CPU time would be irrelevant if the controller were a true hardware controller and not an onboard software controller.
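The per-request CPU overhead effect is easy to see even in a crude userspace sketch (pure Python, nothing to do with a real RAID driver; the buffer and chunk sizes are arbitrary): copying the same total bytes in smaller chunks costs more time because there are more calls.

```python
import time

DATA = bytes(8 * 1024 * 1024)  # 8 MB of zeros to shuttle around

def copy_in_chunks(chunk_size):
    """Copy DATA into a new buffer `chunk_size` bytes at a time and
    return the elapsed wall-clock time in seconds."""
    out = bytearray(len(DATA))
    start = time.perf_counter()
    for off in range(0, len(DATA), chunk_size):
        out[off:off + chunk_size] = DATA[off:off + chunk_size]
    return time.perf_counter() - start

for chunk in (4 * 1024, 128 * 1024):
    print(f"{chunk // 1024:>3}k chunks: {copy_in_chunks(chunk):.4f}s")
```

The same 8MB moves either way; the 4k run just issues 32 times as many copy operations, which is roughly the kind of per-segment overhead a software RAID controller pays with small stripes.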
 
Block size is also a correct term for stripe size :D
It was when MartinX referred to it as cluster size that I thought I'd chime in.
That's done a lot and is generally confusing ;)

Most benchmarks tend to be biased towards smaller stripes, and in truth most "typical" desktop use is smaller accesses. Unless you're running graphics apps with big reads and writes, you're better off at 32k or lower.
 
I apologise.
I was talking about stripe/block size too; just sloppy use of terminology on my part.
 
Thanks for all the input. I will be assembling my new system this weekend.

Regards,

Dabiggoober
 