is pci raid any good?

ug_rulz_all

Limp Gawd
Joined
Aug 28, 2002
Messages
378
I'm thinking about setting up an eventual RAID array.

Is a PCI RAID card any good as opposed to motherboard RAID? Are there speed differences, for example?
 
Originally posted by ug_rulz_all
I'm thinking about setting up an eventual RAID array.

Is a PCI RAID card any good as opposed to motherboard RAID? Are there speed differences, for example?

They should be identical. The controllers that're on the motherboard still hook into the PCI bus, just as though they were a card.
 
IDE and Serial ATA controllers now often have near-direct links to the southbridge, bypassing the 132 MB/s bandwidth ceiling imposed by the PCI bus. Most, if not all new Intel chipsets offer this advantage; other chipset makers may do this as well. The bottom line is that the speed differences may come into play with very fast hard drives in RAID 0, but other RAID levels won't be hampered by the bandwidth limitations of PCI.
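
To put rough numbers on that (a back-of-the-envelope sketch only; the drive figures below are assumptions, not benchmarks): a 32-bit/33 MHz PCI bus tops out around 133 MB/s on paper and closer to ~100 MB/s in practice once overhead and other devices are counted, and two fast drives striped in RAID 0 can already add up to more than that, while a mirror only ever moves data at roughly single-drive speed.

# rough bandwidth sketch; the drive numbers are assumed, not measured
pci_theoretical = 32 / 8 * 33.3        # MB/s for a 32-bit, 33 MHz PCI bus (~133)
pci_practical = 100.0                  # MB/s, assumed usable share after bus overhead
drive_sustained = 55.0                 # MB/s, assumed outer-zone rate of a fast drive

raid0_pair = 2 * drive_sustained       # stripes add up -> 110 MB/s, near the ceiling
raid1_pair = drive_sustained           # a mirror reads/writes at single-drive speed

print(raid0_pair > pci_practical)      # True: RAID 0 can bump the PCI ceiling
print(raid1_pair > pci_practical)      # False: other RAID levels usually won't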
 
Originally posted by xonik
IDE and Serial ATA controllers now often have near-direct links to the southbridge, bypassing the 132 MB/s bandwidth ceiling imposed by the PCI bus. Most, if not all new Intel chipsets offer this advantage; other chipset makers may do this as well. The bottom line is that the speed differences may come into play with very fast hard drives in RAID 0, but other RAID levels won't be hampered by the bandwidth limitations of PCI.

Really? Huh. I didn't know that. Thanks. I always thought they went through the PCI bus first.
 
My i875P chipset has that; both the onboard IDE and SATA bypass the PCI bus. I noticed a big difference in performance, which I mentioned in this thread. FWIW, someone commented that my Promise controller wasn't that hot, but it blew away my onboard VIA controller at the time.
 
Just to clarify:

If you are running the Intel ICH5 (or the ICH5-R), your two SATA channels are part of the Southbridge and have 266 MB/s of bandwidth (shared with everything else on the Southbridge). Same with your two native IDE channels.

Any onboard 3rd-party PATA/SATA controller will run off the PCI bus.

Using 3rd-party RAID 0 and the newest 8MB cache drives, you will probably hit the PCI ceiling during burst reads/writes. You might even hit it with sustained reads under ideal conditions. What effect this will have on performance, I have no idea; probably nothing significant. It also depends on what other devices you have on your PCI bus.
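
To put a rough number on the burst side of that (interface rates are nominal, the rest is assumed): even a single drive bursting out of its 8MB cache at the SATA interface rate is already above what a 32/33 PCI slot can pass, while the ICH5's 266 MB/s link has a bit more headroom.

# burst sketch; nominal interface rates, assumed cache size
pci_ceiling = 133.0    # MB/s, 32-bit/33 MHz PCI, theoretical
hub_link = 266.0       # MB/s, ICH5 Southbridge link
sata_burst = 150.0     # MB/s, nominal SATA 1.5 Gb/s interface rate
cache_mb = 8.0         # MB of cache per drive

print(sata_burst > pci_ceiling)            # True: one bursting drive tops the PCI slot
print(2 * sata_burst > hub_link)           # True: it takes two simultaneous bursts to top the ICH5 link
print(2 * cache_mb / pci_ceiling * 1000)   # ~120 ms to drain two full caches over PCI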

If you really care, get an Intel board with the ICH5-R and two SATA drives. Alternatively, Windows XP Pro can RAID 0 two drives off your native PATA/SATA channels using dynamic disks.

I have no idea what is on the AMD chipsets.
 
A few other considerations:

First
If your mobo goes tits up, you can migrate a RAID card to the new board, whereas if it's an onboard RAID chip, you need to replace the motherboard with another having an identical* RAID chip.

That happened to my 1st RAID array, and rather than do that I wrote it off.

*Sometimes you can migrate to a related chip from the same brand, e.g. from one Promise controller to another.

Second
You're only limited to 133 MB/s if you're using a 32-bit, 33 MHz PCI bus.
PCI Comparison, 32 Vs. 64-Bit and 33Mhz Vs. 66 MHz not to mention PCI-X & PCI Express
Of course, 64-bit slots are not currently something you see on enthusiast desktop boards, but that will be changing.
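
For reference, the theoretical peaks work out roughly like this (peak figures only; usable real-world bandwidth is lower):

# theoretical PCI-family bandwidth: bus width (bits) / 8 * clock (MHz) = MB/s
variants = {
    "PCI 32-bit / 33 MHz": (32, 33.3),
    "PCI 64-bit / 33 MHz": (64, 33.3),
    "PCI 64-bit / 66 MHz": (64, 66.6),
    "PCI-X 64-bit / 100 MHz": (64, 100.0),
    "PCI-X 64-bit / 133 MHz": (64, 133.3),
}
for name, (bits, mhz) in variants.items():
    print(f"{name}: ~{bits / 8 * mhz:.0f} MB/s peak")
# ~133, ~266, ~533, ~800 and ~1066 MB/s respectively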

bringing me to my final point

Third
As the Disk Spins
specifically Part IV: DMA Latencies and Speed Matching
Speed Matching Condition

"What happens if the effective internal performance exceeds the effective host transfer rate? As long as only the transfer capacity and potential bandwidth is concerned, nothing is going to happen. However, if there is a massive request of data from the drive and the effective host transfer rate is too low to handle the onslaught of data, the drive's cache will fill up to the point where the heads will no longer be able to read the data out from the platter into the cache until some more data have been dumped from the latter to the host bus adapter.

The problem in this case is that the "window of opportunity" is missed, that is the data are on the platter and the LBAs on those platters are moving away from the head before they can be read. That is, they can be read by the head but the head cannot write them to the cache because the cache is full. The solution in this case is very simple, just wait one rotation of the platter and by then, the cache will be emptied and the data can be written to the memory.

Once the host transfer rate is lower than the effective internal transfer rate (e.g. because of a jammed I/O bus as shown in the illustration or else because the combined media performance of two or more drives in a RAID Level0 configuration exceeds the host transfer rate), the cache fills up and the data can no longer be transferred from the media to the cache because there is no capacity left. In that case, the head will have to skip reading e.g. LBA #1 (as shown above) but since it is necessary for all further transfers, the drive will have to wait one full rotational latency to "get back to business as usual". During this artificially imposed latency, the cache can be emptied by outputting the data to the system bus.

This condition of having to wait one rotational latency is called a Speed Matching Condition, a term I have used occasionally in the last few articles. In single drive configurations, speed matching conditions rarely occur, however, all it takes is two modern drives in a RAID Level 0 configuration and the bus will fill up very fast in sequential transfer tests. In RAID Level 0 configurations, data transfers have to abide by the striping pattern, meaning they are internally scheduled to complement each other. As a consequence, if the first drive misses the target LBA because the cache is full, then the second drive will be subjected to some sort of domino effect, that is, it will have to wait for the first drive to complete the transfer before it can start its own transfer. In other words, each drive can cause a speed matching condition on the other drive(s).

Two Seagate Barracuda SATA-V on a SiliconImage 3112AR controller in RAID-Level0 configuration. The combined sequential transfer rates are higher than what the bus back-end can handle and, therefore, a speed matching condition is created. The problem is only solved once the head goes to more inward located "zones" with a lower sequential media performance. At that point, the measurements "stabilize", however, because of the algorithms used by HDTach 2.61, things still look rather ugly. Part of it relates to the fact that HDTach does not use true sequential reads, rather, it stacks eight consecutive block reads of 1,056 KB that are then written into the drive cache in write-through mode before they are burst into the bus. If both drives start bursting simultaneously or if the bursts are overlapping this will cause bus contention, speed matching errors and other artifacts. In real time operating systems this would not be a problem, neither is it with slower drives but in the current environment (OS AND drive technology), HDTach appears to be out of its league.

A single Maxtor Diamondmax 9 160GB (6Y160M0) in WinBench 99 on a SiI 3114 controller on the ABIT KV8 MAX3 (VIA chipset) appears to suffer from a similar Speedmatching condition as the RAID setup shown above. As soon as the sequential reads are reaching the second zone, which is within the transfer capabilities of the back-end bus, the sequential read data are graphing in a straight line.

In case the entire platter length of the drive(s) is measured from OD to ID, this speed matching condition will only pass when the heads are reading from more inwardly located tracks, that is, zones with a lower effective transfer rate. "
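
To put a rough number on what one of those missed rotations costs (the drive and bus figures below are my own assumptions, not the article's measurements): at 7,200 RPM a full rotation is about 8.3 ms, and a two-drive stripe whose combined media rate tops the usable host bandwidth will keep hitting that stall.

# speed matching sketch; drive and bus rates are assumed
rpm = 7200
rotation_ms = 60000.0 / rpm       # ~8.3 ms lost every time a drive misses its LBA window
drive_media_rate = 55.0           # MB/s, assumed outer-zone sequential rate per drive
host_rate = 100.0                 # MB/s, assumed usable 32/33 PCI bandwidth

combined = 2 * drive_media_rate   # RAID 0 pair -> 110 MB/s of media bandwidth
print(combined > host_rate)       # True: the caches fill up and speed matching kicks in
print(round(rotation_ms, 1))      # 8.3 -> each stall is ~8.3 ms of zero transfer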

That gives you a really good idea why the SX4000 did so well with a 64-bit slot and 256MB of cache.

I'm unaware of any onboard RAID controllers, SATA or PATA, that have large caches. I run 128MB in my SX6000 (32/33), but of course it's for a RAID 5 array and that's a different ball o' wax :p

Saturating the PCI bus with real-world data is much easier said than done. In all likelihood, if you actually need 64-bit/66 MHz or PCI-X 64/100-133, you probably have it already (animation, 3D CG, CAD, video, scientific modeling, etc.), and even that won't typically saturate a 64/66 bus; it's really for multi-user server environments.

Unfortunately, current 32/33 PCI cards can easily saturate the bus given a large enough stripe width (# of drives), and even with all that cache they suffer penalties. The difference between the 32/33 bus at 133 MB/s and SATA at 150 MB/s isn't all that great, and I'm not sure how a larger cache balances that out in real-world performance.
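
As a rough rule of thumb (the per-drive rate below is an assumption): divide the bus ceiling by the per-drive sustained rate to see how small a stripe width already fills a 32/33 slot, and note how little headroom a single SATA link adds on top of it.

# stripe-width sketch; the per-drive sustained rate is assumed
import math

pci_32_33 = 133.0    # MB/s, theoretical ceiling of a 32-bit/33 MHz slot
sata_link = 150.0    # MB/s, nominal single SATA 1.5 Gb/s link
per_drive = 55.0     # MB/s, assumed sustained rate of one fast drive

print(math.ceil(pci_32_33 / per_drive))   # 3: a three-drive stripe already tops the slot
print(sata_link - pci_32_33)              # 17.0: barely any gap between the two ceilings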
 