Issues with PCI-X raid card

Greg613

n00b
Joined
Apr 6, 2005
Messages
34
Thought I would try here first before I went to the manufacturer of my motherboard. My issue is with my RAID controller card (3ware 9500S-12), which is a PCI-X card. I currently have a RAID 50 set up across six 1TB HDDs. My problem is that when I plug the card into the PCI-X slot I get slow write speeds. For example, if I transfer a 3GB file to the array, the write speed drops to around 7MB/s. If I take the same card and plug it into the plain PCI slot on my motherboard, it gets 40-50MB/s. I have the write cache enabled on the card and the performance setting at maximum. I've googled a ton and didn't see anyone else having this type of issue with PCI-X. I noticed that quite a few people around here have 3ware 9500S cards, so I thought I would ask. I've read a few sites that say this card is for 64-bit PCI and some that say PCI-X.

Any ideas on what I can try to get this working properly? Has anyone else heard of this issue? If you need any more info or don't understand something, please ask.

One other question: shouldn't I be getting better than 40-50MB/s writes with RAID 50?
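(A rough back-of-envelope, purely as a sanity check - the per-drive speed and the 2x3 span layout below are assumptions, not measured numbers:)

    # RAID 50 = a RAID 0 stripe across RAID 5 spans; each span loses one drive to parity.
    spans = 2                    # assumed: 6 drives as 2x 3-drive RAID 5 spans
    data_drives_per_span = 3 - 1 # 3 drives per span, minus 1 for parity
    per_drive_mb_s = 70          # assumed sequential write for a 1TB WD Green
    ideal_mb_s = spans * data_drives_per_span * per_drive_mb_s
    print(ideal_mb_s)            # ~280 MB/s before controller and bus limits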

Thanks

Specs:
Windows 7 RTM 64-bit
750W PSU
6x 1TB WD Green
3ware 9500S-12 RAID controller
ASUS M3N WS http://www.newegg.com/Product/Product.aspx?Item=N82E16813131337
AMD Phenom 9650
 
I have heard of PCI-X issues on those Asus workstation boards, especially the Intel-based version, but I've never heard of a solution. There are some reviewers on Newegg talking about the PCI-X slot.
I think I remember hearing that the PCI-X slot is somehow tied to the PCIe bus on those boards, so maybe there are compatibility issues. That might be why it works better in the PCI slot.

I have one of the PCI-X 8-port 9550s, and I certainly get better write speeds than that, though I'm running RAID 5 on a dual-Opteron Supermicro server board.
 
I've got an Asus P5BV/SAS with PCI-X slots and a SM SAT2-MV8 card. No write speed issues for me. Heck, I'm getting better write speeds than your RAID 50 without any RAID at all.

Side note - Newegg reviews aren't worth a shit unless verified somewhere trustworthy.

Edit: Do you have up-to-date mobo drivers? I believe my Asus mobo has some sort of I/O bridge driver for interfacing the PCI-X to the northbridge.
 
I was thinking the same thing about my drivers, so I made sure they were up to date, and they were. I looked under all the OS listings and they didn't have any I/O bridge driver I needed. I may just have to email ASUS and see what they say; hopefully they know of something I can tweak to fix this. I looked on their forums and didn't find anything close to this happening. Anyone else with any ideas?
 
This is expected performance and behavior. This is also why I don't sell that board, or any other board that uses the NEC uPD.

The PCI-X is not a PCI-X. Period. It is a PCIe-to-PCI-X bridge via the NEC uPD, which is a steaming pile of crap. Under load, the clock differences (100MHz vs. 66MHz) create huge problems, which results in abysmal performance at best. This is true of any motherboard using the NEC bridge, doubly so on the Asus, where they used very, VERY bad signaling paths.
 
That still doesn't explain why my RAID performance in the plain PCI slot is only 40-50MB/s writes. I know PCI tops out at around 100MB/s or so.
 
Using SATA or SCSI drives on a 32-bit 33MHz PCI slot, I hit a wall at about 80MB/s on reads. With parity being calculated and written, I'm sure that number drops further. 40-50 sounds about right for a PCI bus to me.
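(The math behind that wall, for reference - the ~60% efficiency figure is a rough rule of thumb, not a spec:)

    # 32-bit/33MHz PCI moves 4 bytes per clock, shared by every device on the bus.
    peak_mb_s = 33.33e6 * 4 / 1e6   # ~133 MB/s theoretical peak
    typical_mb_s = peak_mb_s * 0.6  # shared-bus overhead often eats ~40%
    print(peak_mb_s, typical_mb_s)  # ~133, ~80 MB/s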
 
Not true of every board using the NEC uPD. I have an Asus P5BV/SAS with a Supermicro SAT2-MV8 SATA HBA in one of the PCI-X slots. Upon seeing your post I went and tried to test this "abysmal performance" you speak of. I was able to get 300+MB/s reads and writes from the SAT2 card through the PCIe-to-PCI-X bridge chip. The only reason I couldn't go higher was that I didn't have any more spare drives to attach.

To the OP, I can't help you with RAID, but the PCI-X slots on my Asus mobo work fine.
 
The SAT2-MV8 behaves VERY differently from the 3ware and doesn't significantly load the slot in certain ways. It's also a very insensitive card; honestly, I could probably throw a SAT2-MV8 in a PCI-X slot with ±5MHz of clock skew and have no issues. You're also trying to compare a P5BV with an M3N - that doesn't work. They're totally different boards.

Any card that demands, requires, or expects a stable clock will fail miserably in a uPD slot. NEC dropped the ball, then kicked it miles away, on the uPD's design and implementation. Asus compounded this on the M3N by routing signals very poorly, which causes greater instability all around. 3ware cards are notoriously pissy about slots that are out of spec. The numbers he's posting are low on the PCI side as well, period. The only way to be 100% certain about what's going on is to throw a bus analyzer in the mix, but that's not going to happen.
Thus, it's sufficient to say that it's the M3N and the uPD. Dollars to donuts, the problem will not reproduce on any other board.
 
What kind of drives are you using? Are they all matched?
Is the array full of data? Would it be possible to do some troubleshooting and set up a RAID 0 array? That would tell you whether your bottleneck is the PCI-X bus or the card.
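(If you do try that, a quick timed sequential write is enough to compare the two slots; here's a minimal Python sketch - the path and sizes are just examples:)

    import os, time

    path = r"E:\bench.tmp"       # example path on the array; adjust to taste
    block = bytes(8 * 1024**2)   # 8 MiB per write
    count = 256                  # 256 x 8 MiB = 2 GiB total

    t0 = time.time()
    with open(path, "wb", buffering=0) as f:
        for _ in range(count):
            f.write(block)
        os.fsync(f.fileno())     # make sure the data actually hits the drives
    elapsed = time.time() - t0
    print(f"{count * len(block) / elapsed / 1e6:.1f} MB/s")
    os.remove(path)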
 
They're all WD Green drives. I did notice that one of them is a 16MB-cache model. I wouldn't think just one drive would make the difference, though, would it? I could take that one out and put a 32MB-cache one in its place. I just want to handle this speed problem now because I plan on growing this to 10TB total one day. The 3.6TB array is currently about 3/4 full. Speed is not that big a deal for me, but if I could do better with the hardware I have, I'd like to improve it.


Has anyone tried this before? http://www.3ware.com/KB/article.aspx?id=12546 - I'm going to try it tonight to see if it fixes my issue.
 
One drive can make a big difference. I was playing with different setups on my 9550, and two Samsung SpinPoint 500GB drives in RAID 0 were faster than two Samsungs plus one Seagate in RAID 0. The Seagate actually seemed to be holding the Samsungs back.

The fact that you're seeing low numbers on two different buses tells me the issue is the card, the drives, or the setup, and not the motherboard, especially since that board looks like it has a genuine PCI-X bus with only one slot on it. Now, if the NICs are on the PCI-X bus too, that would create some overhead on that bus.

Do you have it set up as write-back or write-through?
 
I tried transferring an 8GB file over the network yesterday and it seems to be working better - I was getting 70-80MB/s transfer speeds, which isn't bad, because much more than that would max out gigabit anyway.

I'm not sure what you mean about write-back or write-through; I haven't seen that setting on my controller.
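(On 3ware controllers that setting usually surfaces as the unit write cache - cache on is write-back style, cache off is write-through. If 3ware's tw_cli command-line tool is installed, something like this should show and toggle it; the controller/unit numbers are assumptions:)

    tw_cli /c0/u0 show            # unit status, including the write cache setting
    tw_cli /c0/u0 set cache=on    # enable write-back caching on the unit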
 
70-80MB/s is perfectly reasonable considering network overhead and consumer-level switches.
 
Greg, 70-80MB/s IS the network maximum - gigabit is 125MB/s raw (1000Mbit/s ÷ 8) before protocol overhead, so that's about as good as you're going to get over the network.

I definitely want to see the PCI-X registers on the uPD and the HT1000. The HT1000 is a... pretty "ugh" board, in my experience. Not bad, just a marginally "average" board.
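(For anyone who wants to actually look at those registers: booting a Linux live CD and using lspci will dump the bridge's config space - the device address below is just a placeholder:)

    lspci -t                    # locate the NEC PCIe-to-PCI-X bridge in the bus tree
    lspci -vvxxx -s 03:00.0     # verbose decode plus raw config-space dump (example address)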
 