Highpoint 2320 issues

Akirasoft

I recently built what I thought was a pretty nice fileserver:

Cooler Master Stacker 810 case
Supermicro PDSLA 945G board (I'm utilizing the onboard video, which could be important later...)
3GHz Pentium D
2GB Corsair XMS memory
5x 500GB Seagate 7200.10 drives
IcyDock 5-bay drive enclosure
Highpoint 2320 RAID controller

After numerous issues getting the machine to boot an OS with a RAID array initialized, I flashed the card to disable INT13 and reallocate EBDA. All was well overnight as I transferred files to the new array... until this morning.

I woke up to find errors reported from the machines copying files to the array. Loading the management applet on the fileserver showed the card reporting that all 5 drives had failed at once... yeah, 5 brand new drives failing simultaneously? I shut the box down so I could go in and make sure all the cables were secure; sure enough, they were.

I proceeded to power the machine up and, lo and behold, the card is no longer even detected by the machine. I'm currently addressing the issue with Highpoint support.

One thing I noticed was that the heatsink on the card was quite warm. The 2320 does not use active cooling on the controller chip; it simply uses what I feel is a quite small aluminum heatsink. If I'm unable to get anywhere with Highpoint (I'm wondering if this card might need an RMA...), has anyone replaced the heatsink on these cards with an active cooling option? I've heard some people have used northbridge coolers on Areca cards, and I'm wondering if something similar would be possible on this card.
 
I work with the HP/Compaq SmartArray SCSI RAID adapters on a daily basis, and those run a lot warmer than I would like. Having said that, I have never had one fail me yet....unlike the utter crap Dell pushes out in their servers.

If I had that many drives reporting a failure, I'd start the troubleshooting by looking at the RAID adapter. The odds of that many drives failing at the same time are very slim. You are probably looking at an RMA of the Highpoint adapter. If the server doesn't even see the card anymore, you have probably had an adapter failure.
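
Just to put rough numbers on "very slim", here's a quick back-of-envelope in Python (the ~3% annualized failure rate and the independence assumption are both mine, purely illustrative):

# Chance of five independent drives genuinely failing on the same day
afr = 0.03                  # assumed annualized failure rate per drive (~3%)
p_day = afr / 365           # rough chance a given drive dies on a given day
p_all_five = p_day ** 5     # five independent same-day failures
print(f"per drive per day: {p_day:.1e}")       # ~8.2e-05
print(f"all five at once: {p_all_five:.1e}")   # ~3.8e-21

A number like that says the drives almost certainly didn't all die on their own; something upstream of them (controller, cabling, power) told the card they did.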

I would be curious to hear how you make out with their tech support. I am looking at the same Highpoint 2320/Icydock combination you have for my new server at home.

Are there any holes in the board surrounding the heatsink?
 
Can't say I've had any problems with my 2320 yet; the heatsink that came on mine, however, is much larger than the black square one they show in most pictures and on the box. It extends well over the Marvell chip that's on the card. Is yours the silver rectangular one or the black square one?
 
mps said:
I work with the HP/Compaq SmartArray SCSI RAID adapters on a daily basis, and those run a lot warmer than I would like. Having said that, I have never had one fail me yet....unlike the utter crap Dell pushes out in their servers.

If I had that many drives reporting a failure, I'd start the troubleshooting by looking at the RAID adapter. The odds of that many drives failing at the same time are very slim. You are probably looking at an RMA of the Highpoint adapter. If the server doesn't even see the card anymore, you have probably had an adapter failure.

I would be curious to hear how you make out with their tech support. I am looking at the same Highpoint 2320/Icydock combination you have for my new server at home.

Are there any holes in the board surrounding the heatsink?
I'm in the process of addressing things with Highpoint support. They responded to my email at 10pm on Christmas Eve, so I'm thinking perhaps they are offshore. Pretty commendable English in the email, but they asked me for information I had already provided in the initial email (motherboard/drive manufacturer/model). The support rep did say that "PCIe cards tend to run warm" and that all 5 drives failing at once seemed odd/unlikely.

We'll see how it goes; I can't fault their response time on this holiday weekend.
 
Sinclair said:
Can't say I've had any problems with my 2320 yet; the heatsink that came on mine, however, is much larger than the black square one they show in most pictures and on the box. It extends well over the Marvell chip that's on the card. Is yours the silver rectangular one or the black square one?

Mine is the large silver one. It does extend over quite a bit of the card.
 
Seems to be about normal then... mine runs pretty warm, no issues from heat so far though.
 
They've finally sent me a diagnostic utility to run... not sure how that is going to work out since the card is not being detected anymore, but I'll humor them... Perhaps the utility can enumerate all PCIe devices and dump the output to screen or something.
 
protias said:
You sure it's not the power supply?


Good catch. With a 550W Antec my drives would fail once I had 9 drives. With 8 drives it was fine.
 
Dew said:
Good catch. With a 550W Antec my drives would fail once I had 9 drives. With 8 drives it was fine.
I just seem to see those symptoms quite often when a system doesn't have enough power, so that is my guess.
 
Pretty sure it ain't the power supply. At this point the card is no longer being detected. It is a 600W power supply, no video card is loaded in this system, and it contains 6 drives total. That should be well within the limits of the power supply.

I've finally been told to pursue an RMA. I'm wondering if perhaps I should have just RMA'd it to Newegg from the get-go. The only bad thing is that now I need to pull all the SATA cables, and I didn't do a real good job of matching ports on the card to ports on the hot-swap enclosure. D'oh!
 
It couldn't be the PSU; I run my system (sig) with a 600W PSU and 9 drives total, so it shouldn't be a PSU issue.

The only thing I can think of besides a faulty card is that it "may" not recognize the .10 drives properly and make them usable. Check for a BIOS update maybe.
 
Akirasoft said:
Pretty sure it ain't the power supply. At this point the card is no longer being detected. It is a 600W power supply, no video card is loaded in this system, and it contains 6 drives total. That should be well within the limits of the power supply.

I've finally been told to pursue an RMA. I'm wondering if perhaps I should have just RMA'd it to Newegg from the get-go. The only bad thing is that now I need to pull all the SATA cables, and I didn't do a real good job of matching ports on the card to ports on the hot-swap enclosure. D'oh!
Should be within the limits there. It probably is the card (or maybe the mobo). Have you tried a different slot to make sure it isn't that particular slot? What brand and model is the PSU you are using?

Nacho said:
It couldn't be the PSU; I run my system (sig) with a 600W PSU and 9 drives total, so it shouldn't be a PSU issue.

The only thing I can think of besides a faulty card is that it "may" not recognize the .10 drives properly and make them usable. Check for a BIOS update maybe.

How is it absolutely not the PSU? I've said this before: if it is drawing too many amps, the system will not run.

I don't see how Seagate drives would be any different from any other drive in being recognized.
 
also having issues w/ my 2320.

the heatsink gets too hot to touch.

using 4x barracuda 7200.10's

want to do raid 5+0

but the array refuses to initialize.

also, no longer have access to a good chunk of my files without windows spitting out KERNEL INPAGE ERROR, making my computer emit a terrible beeping sound and blue screen.

blah.

going to try an RMA I guess.

haven't been able to test out my 8800GTS + E6600 combo yet b/c of this crap.
 
You can't do RAID 50 with only 4 drives; it requires a minimum of 6 disks, which may be why it's refusing to initialize the array. Or are you trying to do a 3-drive RAID 5 and a 1-drive RAID 0?
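
For reference, a small sketch of where that 6-disk minimum comes from, assuming the usual definition of RAID 50 as a RAID 0 stripe across two or more RAID 5 sets (the 320GB figure is just an example capacity):

# RAID 50 = RAID 0 striped across >= 2 RAID 5 sets, each set needing >= 3 drives
MIN_RAID5_SET = 3
MIN_STRIPE_MEMBERS = 2
min_raid50_drives = MIN_RAID5_SET * MIN_STRIPE_MEMBERS   # 6

def raid50_usable_gb(drives_per_set, sets, drive_gb):
    """Each RAID 5 set gives up one drive's worth of space to parity."""
    assert drives_per_set >= MIN_RAID5_SET and sets >= MIN_STRIPE_MEMBERS
    return (drives_per_set - 1) * sets * drive_gb

print(min_raid50_drives)            # 6
print(raid50_usable_gb(3, 2, 320))  # 1280 GB usable from six 320GB drives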
 
I've also just had the exact same issue as Akirasoft with my new 2320.

Running 4 Western Digital 320GB drives off it in RAID 5, it initially worked fine: it saw all the disks and initialised. Then, after a few hours of copying data, the card's alarm went off and it showed disks 2, 3 and 4 all failed at the same time.

After a restart it showed all 4 disks as working again, so I re-initialised, reformatted and started again. A few hours later it alarmed again, this time saying that disks 1 and 4 had failed.

However, after a reboot the card would not get past its BIOS POST and showed a "BIOS Checksum Error", which stopped the system from booting into Windows. I reseated the card in another PCIe slot, and now in either slot it doesn't even show its BIOS POST at all, just like what Akirasoft is reporting.

Not exactly impressed so far, I must say.
 
Been using my 2320 for several months now, no problems in a Stacker 810 or P180.
Seagate 7200.10 x 5
 
You can't do RAID 50 with only 4 drives; it requires a minimum of 6 disks, which may be why it's refusing to initialize the array. Or are you trying to do a 3-drive RAID 5 and a 1-drive RAID 0?


I suppose I meant a 3-drive RAID 5 and a 1-drive RAID 0.
 
I've found that WD drives are highly unstable in my RAID setup if they use the Molex connector. Using a Molex-to-SATA power adapter fixes the issue.
 
WD3200JD 8MB Cache and WD3200SD RE 8MB Cache

Well, as you probably know, the JD series is a bad idea for RAID because of the error recovery issue that causes controllers to drop them, although I don't see why using SATA power would help.
 
Yeah, but the SD drives drop as well. Since they have been on SATA power connectors, I have not had a single drop.
 
Finally received my card back from Newegg. I plopped the card back in the box, flashed the BIOS to 1.04, supposedly disabled reallocate EBDA and INT13 support (so the box would actually boot an OS), then shut the machine back down and plugged the array in. Now I'm basically back to square one: the machine won't even boot one of my troubleshooting CDs. I'm running a memtest on the machine now, just generally trying to see if I have stability issues unrelated to the RAID card that perhaps the card is exacerbating. I don't think that is the case, as the machine has been running ever since I RMA'd the card...

At this point I'm kind of wondering about the power supply running all 5 disks... It is a Thermaltake 600W, which isn't the greatest but isn't exactly a generic Hong Kong special either. One thing I might be bumping up against: I believe this is a dual-rail PSU... Each Seagate 7200.10 draws 2.8 amps...
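
For anyone who wants to sanity-check the rail math, here's the rough 12V arithmetic (a sketch only; the 18A per-rail figure is a hypothetical example, so read the real rating off the PSU label):

# Worst case is spin-up, when every drive can pull its peak 12V current at once
drives = 5
amps_each_12v = 2.8                     # the 2.8A figure quoted above
spinup_amps = drives * amps_each_12v    # 14.0 A
spinup_watts = spinup_amps * 12         # 168 W
print(spinup_amps, spinup_watts)
# On a dual-rail unit rated at, say, 18A per 12V rail (hypothetical number),
# 14A of drive spin-up on one rail doesn't leave much headroom, even though
# the 600W total looks fine on paper.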
 
Seems the issue with my server not POSTing now is related to INT13/reallocate EBDA. No matter what I do with the flash utility on this card, it won't update those settings. Sweet!

So far I give this card a -5 out of 10. I might just break it in half and investigate replacement options.
 
After flashing the BIOS on the card about 100 times, it seems my config change finally took hold. The box is up and running. Now the trick will be to see if the array is still online when I get home this evening.

Ran HDTach on the drive (the free version, so read-only); I'll put up a screenshot a bit later. 160MB/s sustained and 180MB/s burst. I'm not entirely sure where that stacks up compared to other solutions, but it is more than capable of saturating the gigabit ethernet interface in the server, so I'm not too concerned.
 
I'm going to follow this with interest... A 500GB drive of mine died last night, so I'm giving RAID 5 over three 500GB drives a lot of thought right now.
 
The array got a ton of work over the weekend when I had a friend over for some LAN action. I store my ISOs there and mount them via Daemon Tools vs. actually burning things. So multiple boxes were installing stuff from the array via gigabit, and it remained solid. Still not sure why I'm only getting around 160MB/s out of it vs. the 200 or so I was expecting, but perhaps my expectations were a bit high for a 5-spindle array on a consumer-level card.
 
I pull around 260MB/s reads on my 8-drive 320GB RAID 6 on a 2320, but that's completely in software. I seem to recall other reviews showing the onboard engine topping out well before the full 8 drives; you could be running into that.
 
The array got a ton of work over the weekend when I had a friend over for some LAN action. I store my ISOs there and mount them via Daemon Tools vs. actually burning things. So multiple boxes were installing stuff from the array via gigabit, and it remained solid. Still not sure why I'm only getting around 160MB/s out of it vs. the 200 or so I was expecting, but perhaps my expectations were a bit high for a 5-spindle array on a consumer-level card.

Over gigabit I'm surprised you're getting more than ~110 MB/s, and very, very surprised you're getting more than 125. Or is that just in local, synthetic benchmarks? How are speeds over the network?
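
For anyone wondering where the ~110 and 125 MB/s figures come from, the quick math (a sketch; the ~10% protocol overhead is a rule of thumb, not a measurement):

# Gigabit ethernet line rate vs. what a file copy can realistically see
line_rate_bps = 1_000_000_000                # 1 Gb/s
raw_ceiling_mb = line_rate_bps / 8 / 1e6     # 125.0 MB/s, absolute upper bound
practical_mb = raw_ceiling_mb * 0.90         # minus ethernet/TCP/SMB overhead
print(raw_ceiling_mb, round(practical_mb, 1))   # 125.0 112.5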
 
Hehe, I don't think he was doing 160 MB/s over the network -- he was referring to the HDTach benchmarks. I can comment on that.

IMO HDTach is not really reliable for such tasks. It's been reported elsewhere (Storage Review) to have issues with RAID arrays. Highpoint uses 64k stripes, and HDTach uses 64k requests. So we're "lucky" to see anything better than single drive performance out of HDTach results, and can't expect optimal STR.
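
Here's a little sketch of why 64k requests on a 64k stripe hurt; this is my own simplification, assuming no read-ahead and a queue depth of 1:

import math

def members_touched(request_kb, stripe_kb, data_drives):
    """Upper bound on how many array members one sequential request can span."""
    # +1 covers a request that straddles a stripe boundary
    return min(data_drives, math.ceil(request_kb / stripe_kb) + 1)

print(members_touched(64, 64, 5))    # 2: at best one or two drives busy per request
print(members_touched(1024, 64, 5))  # 5: larger requests can keep every member streaming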

I get the same sort of measurements with my 2320 6-drive RAID 5 array using HDTach.

Here's an actual file transfer though, which shows that HDTach is not right sometimes:

D:\tools>dir f:\test\test0\10.gb
Volume in drive F is hrd5-6-6464
Volume Serial Number is xxxx-xxxx

Directory of f:\test\test0

24/08/2006 05:01 PM 10,000,000,000 10.gb
1 File(s) 10,000,000,000 bytes
0 Dir(s) 11,779,047,424 bytes free

D:\tools>time 0<nul
The current time is: 20:54:10.58
Enter the new time:
D:\tools>xcopy /y f:\test\test0\10.gb n:\test\test0
F:\test\test0\10.gb
1 File(s) copied

D:\tools>time 0<nul
The current time is: 20:54:51.28

That's 10GB being transferred from the 2320 array to a local 4-drive RAID 0 array, in 40.7s, i.e. ~ 245 MB/s. I'm sure that with some fiddling, luck, etc., that number could go somewhat higher or lower. But at 245/5 = 49.1 MB/s per drive, it's a good number, and because this is a real file transfer, it's a valid number.
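
Spelling that arithmetic out (same numbers as the transcript above, just redone in a few lines of Python):

bytes_copied = 10_000_000_000
start_s = 54 * 60 + 10.58                  # 20:54:10.58
end_s = 54 * 60 + 51.28                    # 20:54:51.28
elapsed = end_s - start_s                  # 40.7 s
mb_per_s = bytes_copied / elapsed / 1e6    # ~245.7 MB/s
per_data_drive = mb_per_s / 5              # 6-drive RAID 5 has 5 data drives: ~49.1 MB/s each
print(round(elapsed, 1), round(mb_per_s, 1), round(per_data_drive, 1))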

FWIW, this was in Vista.
 