Best Stripe Size for SATA RAID 0?

USMC2Hard4U (Supreme [H]ardness, joined Apr 4, 2003, 6,157 messages)
I have 2 WD Raptors and was wondering what your opinions on the best stripe size are in general....

Right now I use 16K because someone told me to...

what about j00? :D
 
It all depends on your access patterns. Lots of random accesses -> smaller stripes. Lots of big multimedia files -> larger stripes. 16-32K are good general purpose stripe sizes, but to be honest, I've never been impressed with RAID0. If you're really picky about speed, just buy yourself a U160 or U320 controller and a 15krpm drive.
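To make the "access patterns" point concrete, here is a toy calculation (mine, not from the thread) of how many stripe segments a single contiguous request spans at different stripe sizes. A small random read that fits inside one stripe hits only one drive; a big multimedia read spans many segments and keeps both drives busy, which is why the two workloads pull the ideal stripe size in opposite directions.

```python
# Toy model: how many stripe-size segments one contiguous request
# touches in a RAID 0 array. Requests are assumed to start on a
# stripe boundary (a simplification).

def segments_touched(request_bytes, stripe_bytes):
    """Number of stripe segments a contiguous request spans."""
    return -(-request_bytes // stripe_bytes)  # ceiling division

for stripe_kb in (16, 32, 64, 128):
    stripe = stripe_kb * 1024
    small = segments_touched(4 * 1024, stripe)      # 4K random read
    large = segments_touched(1024 * 1024, stripe)   # 1MB sequential read
    print(f"{stripe_kb:>3}K stripe: 4K read -> {small} segment(s), "
          f"1MB read -> {large} segment(s)")
```

At 16K stripes a 1MB read is split into 64 pieces; at 128K it is only 8, with less per-segment overhead but fewer chances for a small request to stay on one drive.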
 
Originally posted by Snugglebear
It all depends on your access patterns. Lots of random accesses -> smaller stripes. Lots of big multimedia files -> larger stripes. 16-32K are good general purpose stripe sizes, but to be honest, I've never been impressed with RAID0. If you're really picky about speed, just buy yourself a U160 or U320 controller and a 15krpm drive.

I would buy a U320, but won't that be slower than SATA RAID, because I would have to use a regular PCI slot for the U320 controller card?
 
Not really. You could put two 15k3s on a U160 or U320 controller before you max the PCI bus out. With two raptors you'd be in about the same place, and depending on your SATA controller, it may be attached to the PCI bus as well. Either way the SCSI units will have lower access times and a better TCQ implementation. What it boils down to is how you use your computer. If you manipulate large files and don't do a whole bunch of stuff at once, the RAID stripe is probably your better bet. If you do many things simultaneously, the SCSI really helps out.
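A quick back-of-the-envelope check of the "two drives before you max the PCI bus out" claim: a shared 32-bit/33MHz PCI bus peaks at roughly 133 MB/s. The ~60 MB/s per-drive sustained figure below is my own rough assumption for a 15K SCSI drive or Raptor of that era, not a number from the thread.

```python
# Shared 32-bit/33MHz PCI bus: peak = bus width (bytes) * clock (MHz).
pci_bus_mb_s = 32 / 8 * 33.33          # ~133 MB/s theoretical peak

drive_mb_s = 60                        # assumed per-drive sustained rate
drives_before_saturation = int(pci_bus_mb_s // drive_mb_s)

print(f"PCI 32/33 peak: {pci_bus_mb_s:.0f} MB/s")
print(f"Drives at {drive_mb_s} MB/s before the bus is the bottleneck: "
      f"{drives_before_saturation}")
```

Two such drives land right at the bus ceiling, which is why the third drive (or anything else sharing the bus) starts eating into throughput.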
 
TCQ = Tagged Command Queueing. This facility allows the kernel to send several asynchronous requests at once to a block device. The device can service the requests as it sees fit; the advantage being that the device can arrange the transfers as best suits the device hardware. TCQ works for both reads and writes.


6. Tagged Command Queuing

"Unlike system memory, storage media are not separating command and address lines from the data bus, rather, commands are sent time multiplexed over the same bus as the data. This can cause bus contention and stalling of data transfer whenever additional commands are needed. A workaround for this situation is known as tagged command queuing. Depending on manufacturer and model, current parallel drives have a limited capability for TCQ, in the commodity market mostly the current series of IBM GXP drives are capable of TCQ. Briefly, TCQ means that the device itself can make intelligent decisions regarding the sequence of execution of tasks. That is, instead of being required to send each command by itself, the host can send an entire list of command to the drive which then can make decisions regarding the most economic order of command execution"

Native command queuing, part of the Serial ATA specification, allows up to 32 instructions to be queued and reordered by the hard disk controller itself.
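A minimal sketch of what queuing and reordering buys. This is illustrative only, not a real firmware algorithm: the LBA numbers are made up, and a real drive also weighs rotational position, not just seek distance. The idea is that servicing queued requests in a single sweep across the platter moves the head far less than first-come-first-served order.

```python
# Illustrative model: total head movement servicing a request queue
# in arrival (FIFO) order vs. reordered into one elevator-style sweep.

def total_seek(start, lbas):
    """Sum of absolute head movements servicing requests in order."""
    dist, pos = 0, start
    for lba in lbas:
        dist += abs(lba - pos)
        pos = lba
    return dist

queue = [7200, 150, 5000, 300, 6900]      # pending request LBAs (made up)
fifo = total_seek(0, queue)
reordered = total_seek(0, sorted(queue))  # one sweep, low to high

print(f"FIFO order seek distance:       {fifo}")
print(f"Reordered (one sweep) distance: {reordered}")
```

With this toy queue the sweep covers about a quarter of the head travel of FIFO order, which is the same effect behind the IOPS gap in the demo quoted below.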

SATA II Features Make a Mark at IDF
"There were a number of SATA II products shown in booths on the show floor. One had a SATA drive with native command queuing dueling an identical mechanism without the support. It was no contest: the drive with the technology averaged 335 I/Ops while its slower sibling ran at 160 I/Ops. Seagate made a splash announcing a drive with the technology. (For more information see Hard Disk Command Queuing Goes Native in Seagate Drive.)"

But I agree with Snugglebear's assessment:
it really depends on what you need it for ;)
 
The implementation of TCQ on ATA devices is still lacking, though. It runs in firmware along with everything else, is slower to execute, and traverses the wire through ATAPI encapsulation, which clogs up the interface. There are also some obscure issues with each implementation (follow kernel or driver mailing lists, esp. for FreeBSD).
 
I was wondering about that and a lot of other SATA "improvements" (Port Selector, Port Multiplier).
I'm hoping they will be addressed in the "As the Disk Spins" series (along with a SCSI comparison) @ Lost Circuits
 
The 64/66 PCI cards are backwards compatible. I've got a whole bunch of LSI U160 64/66 controllers that work fine in 32-bit slots. When you get a board that has a more robust bus you can migrate the card over (SCSI is one of those things you end up carrying with you for a long, long time). If not, all that spare bandwidth can still be used if you have multiple drives on the controller and transfer between them often, since that traffic doesn't have to traverse the PCI bus. Or you could buy a slower controller and save some cash. Any way you go, just do your homework.
 
Well, I know I would get a U320 Cheetah 15K drive... I just want a good card, I guess, one that will work great in my computer....
 
well...
it's always risky trying to guess which way standards will change in the future. What specifically affects you is that it's unlikely 64-bit/66MHz or PCI-X 64-bit 100/133MHz will show up in gaming boards.

Those two are the backwards-compatible PCI specs,
while PCI Express, which will probably be adopted in gaming boards as a replacement for the AGP 8X standard, isn't backwards compatible.

If I was guessing, I'd say that workstation/server boards
will eventually have both PCI Express and PCI-X slots,

while gaming boards will have PCI Express and PCI 32-bit/33MHz slots as a transition phase, with no real place for you to take full advantage of the U320.

My Tyan K8W is a good example of the current phase of workstation boards, with 2x PCI-X buses, a single legacy 32-bit/33MHz slot, and AGP 8X.

http://www.eetasia.com/article_content.php3?member=no&article_id=8800303971&DD=c040508b

"This seamless migration to PCI-X 2.0 is in sharp contrast to the discontinuity that would occur in a move to Express. Without backward compatibility of the slot/adapter, Express will not easily replace PCI slots in servers. Adapter vendors would need to provide two separate product lines during the transition, and server vendors would have to provide multiple servers with different mixes of PCI and Express slots to satisfy customers in various stages of transition. Customers would, for the first time in 10 years, have to manage deployment of incompatible adapter types among their servers. "


http://www.pcstats.com/articleview.cfm?articleID=1087

"Now there are three other PCI specifications in existence, all designed to increase the amount of available bandwidth. These are 66MHz PCI, PCI-X at 64bit/133MHz, and the soon to be introduced PCI-X 2.0.

The trouble is, while these technologies have, or soon will find a permanent home in the server market, the complexities and extra costs they introduce to motherboard manufacturing mean that they will be virtually unknown at the desktop level. PCI-X, for example, requires a controller for every slot and that is just too expensive."
 