ARECA Owner's Thread (SAS/SATA RAID Cards)

Here are some dd results from my 23x ST32000444SS drives in RAID 6 on an ARC-1880ix-24.

read
Code:
dd bs=1M if=/array/52GB.bin of=/dev/null
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 26.2489 s, 2.0 GB/s

write (CPU limited)
Code:
dd bs=1M count=50000 if=/dev/zero of=/array/52GB.bin
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 52.0657 s, 1.0 GB/s

write (oflag=direct)
Code:
dd bs=1M oflag=direct count=50000 if=/dev/zero of=/array/52GB.bin.1
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 49.8367 s, 1.1 GB/s
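Worth noting: a sequential read like this can be served partly from RAM. A small sketch (same paths as above; needs root for drop_caches) that forces a cold-cache read, plus the byte arithmetic dd is reporting:

```shell
# Repeat the read test with a cold cache so the page cache
# doesn't inflate the result. The if-guard skips it when the
# test file or root access isn't available.
if [ -w /proc/sys/vm/drop_caches ] && [ -f /array/52GB.bin ]; then
    sync                                # flush dirty pages first
    echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries, inodes
    dd bs=1M if=/array/52GB.bin of=/dev/null
fi

# Sanity check: 50000 x 1 MiB blocks is exactly the 52428800000 bytes dd reports.
echo $((50000 * 1024 * 1024))           # 52428800000
```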
 
Yes, thanks, I know that, and I'm using 64-bit Linux. I've tried downloading the Linux 64-bit file, and the zip only contains one file. If I'm not mistaken, the original CD had two files?

Why not put just a tiny little bit of effort into your post here, and give us some information to help you?

What link are you downloading from? What file name are you downloading? What file(s) did you find in the zip? What file(s) are you looking for? What happened when you tried to install and run the file(s) you downloaded?
 
Why not put just a tiny little bit of effort into your post here, and give us some information to help you?
Seriously, that's why I pointed to the Windows driver. You'd hope a Linux admin would know better...

The Linux zip appears to contain one Linux ELF binary. Have you tried making it executable and running it? It is a bit disappointing that the archives for the HTTP, CLI and SNMP agents don't come with anything beyond the binary. I tried it on an Ubuntu box I've got here and it seems to work, in that it fires up, creates a .conf file and starts trying to find the card (which fails in this case, as my machine doesn't have an Areca card installed).
 
Why not put just a tiny little bit of effort into your post here, and give us some information to help you?

What link are you downloading from? What file name are you downloading? What file(s) did you find in the zip? What file(s) are you looking for? What happened when you tried to install and run the file(s) you downloaded?

My bad, I'll explain a little more; I just thought maybe someone else had had the same issue.

I have an ARC-1220 card and am using Debian 5 64-bit.

I've downloaded this file: http://www.areca.us/support/s_linux/http/x86_64/archttp64.zip

I don't know what the original files were named or how many there were, but I'm fairly certain there were two different files, one binary and one config file. I'm not 100% sure, though.
 
I just tested that file and it properly started the Areca HTTP server and created the archttpsrv.conf file. I do not remember any version of archttp coming with a configuration file. The program seems to create it on the fly, but you can edit it and restart the server to make changes.
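For anyone else stuck at this step, a minimal sketch of getting the agent going, assuming the zip name from the post above and that the binary inside is called archttp64 (the exact name may differ):

```shell
# Unpack and run the Areca HTTP agent. The binary name inside the zip
# is an assumption; adjust to whatever `unzip -l archttp64.zip` shows.
if [ -f archttp64.zip ]; then
    unzip archttp64.zip
    chmod +x archttp64
    ./archttp64 &        # writes archttpsrv.conf on first start
fi
```

The listening port and other settings end up in archttpsrv.conf; edit it and restart the agent to change them.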
 
I have a question or two concerning Areca cards. Firstly, who do they sell to? All of the major server vendors (Dell, HP, IBM, Sun) ship OEM RAID cards in their servers. This makes life easy for their customers, because there's one person to bellow at when things go wrong. Whiteboxers like Supermicro have their own UIO line they use in their servers. So where's the market for Areca? The Mac Pro?

Second question. The onboard Ethernet port is a very enterprise/datacentre-type feature, and is obviously very useful and most likely customer-driven. How is it then that none of the other RAID card manufacturers do this? How do they enable remote management of their respective cards?
 
I have a question or two concerning Areca cards. Firstly, who do they sell to? All of the major server vendors (Dell, HP, IBM, Sun) ship OEM RAID cards in their servers. This makes life easy for their customers, because there's one person to bellow at when things go wrong. Whiteboxers like Supermicro have their own UIO line they use in their servers. So where's the market for Areca? The Mac Pro?

Second question. The onboard Ethernet port is a very enterprise/datacentre-type feature, and is obviously very useful and mostly likely customer-driven. How is it then that none of the other RAID card manufacturers do this? How do they enable remote management of their respective cards?


I too wonder these things. I think the Ethernet port is a fantastic idea. I don't think the Mac Pro cards are Areca, though; I think they're Adaptec.

Dell's newer PERCs are LSI, and IBM's ServeRAID are LSI. Sun uses some LSI cards if I remember right, and I'm unsure what HP uses. Heck, even most of Supermicro's UIO cards are LSI.
 
They sell to some smaller system integrators out there, as far as I know. I've seen a bunch of off-lease Rackable servers with them on eBay, for example. As for the Ethernet management, they aren't the only ones that include it; HighPoint has some cards with it. Implementing it is rather simple, to be honest. It's no different from management software, as it just passes commands to the card and is merely a prettier version of the command line tools. You don't need more than a single ASIC to implement a web server (note how basic the management pages are), and all it would then do is send and receive commands over a serial bus like I2C or RS232.
 
They sell to some smaller system integrators out there as far as I know. I've seen a bunch of off-lease Rackable servers with them on eBay for example. As for the ethernet management, they aren't the only ones that include it. HighPoint has some cards with it. Implementing it is rather simple to be honest. It's no different than management software as it just passes commands to the card and is merely a prettier version of the command line tools. You don't need more than a single ASIC to implement a web server (note how basic the management pages are) and all it would then do it send and receive commands over a serial bus like say I2C or RS232.

Areca's and HighPoint's web interfaces are nearly identical; I think one licensed from the other, or there is a common OEM toolkit to do this.
 
I'm sick of this; I don't know how to fix it anymore. Today my RAID 5 kicked out channel 5 again.

Here's my earlier post:

http://hardforum.com/showthread.php?p=1036283937#post1036283937

I changed one of the disks to a new one just a couple of months ago, and it seems impossible that the same channel would have another broken drive within 3 months.

Earlier I had a Seagate 1.5TB disk in channel 5, which went back for RMA. The current channel 5 disk is a Seagate 5900 LP 2TB disk. So once again I'm experiencing the same issue.

Could it be that the LP went for a quick nap? I've heard those low-power drives can do that.
 
I don't think that's the main issue. Look at that crappy burst rate. It should be at least 800MB/sec for PCIe 8x. Probably the slot is actually running at PCIe 1x. Check your PCIe lane configuration in your BIOS.

Ding Ding Ding! We have a winner! I deleted the multiple volume sets and created a single volume set that I then let the OS partition up. Got the same results (reads: 2MB blocks = avg ~140MB/s, max 190MB/s). I then moved all of the PCIe cards one slot to the right and presto! I saw the performance I was looking for. Since seeing the better performance (reads: 2MB blocks = avg 524MB/s, max 624MB/s), I've moved the cards back to their original slots and am still seeing expected performance. Maybe the card wasn't properly seated?? Very odd, though. This was a SuperMicro mobo with 4 PCIe 2.0 x8 slots and 2 PCIe 2.0 x4 slots, model X8DTE-O.
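For anyone hitting the same wall, a hedged sketch of checking the trained link width from Linux (the bus address is an example; your card's will differ):

```shell
# Confirm the slot actually trained at the expected width.
# "LnkCap" is what the card supports, "LnkSta" is the live link.
lspci | grep -i areca || true                          # find the card's bus address
lspci -vv -s 03:00.0 2>/dev/null | grep -E 'LnkCap|LnkSta' || true   # 03:00.0 is an example

# Rough ceiling: PCIe 2.0 carries ~500 MB/s per lane, so x8 gives ~4000 MB/s,
# while a link stuck at x1 tops out near 500 MB/s -- consistent with the
# ~140 MB/s vs ~524 MB/s averages above.
echo $((8 * 500))                                      # 4000
```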

I'm going to guess I'm seeing poor numbers because I'm running with 20x 500GB Samsung Spinpoint drives? The next SAN will probably be full of SSDs, since I've caught the need for SAN speed :)
 
@WoooT

The common thread in your posts is "Seagate". Perhaps you ought to consider that to be a possible source of your issues. :)
 
Seagate drives should generally be avoided with Areca cards unless you're using one of the SATA cards (or enterprise drives).
 
I have a question or two concerning Areca cards. Firstly, who do they sell to? All of the major server vendors (Dell, HP, IBM, Sun) ship OEM RAID cards in their servers. This makes life easy for their customers, because there's one person to bellow at when things go wrong. Whiteboxers like Supermicro have their own UIO line they use in their servers. So where's the market for Areca? The Mac Pro?

Second question. The onboard Ethernet port is a very enterprise/datacentre-type feature, and is obviously very useful and mostly likely customer-driven. How is it then that none of the other RAID card manufacturers do this? How do they enable remote management of their respective cards?

As far as remotely managing Dell PERC cards, we do it via Dell's OpenManage Server Administrator webpages. I say webpages because it typically installs a webserver on your winblows box and accepts HTTPS connections on port 1311. It's fairly useless if you ask me, because you have to log in daily to check on your RAID cards. We tried setting up blat scripts to send emails on equipment failures, but they never seem to work when you need them to :(

'Course, all of our Dells are now 4+ years old. I've been building whitebox servers for my company for about 2 years now.
 
Ding Ding Ding! We have a winner! I deleted the multiple volume sets and created a single volume set that I then let the OS partition up.

Volume sets...these appear as logical drives to the OS, right? Then all you have to do is format them, right?

It sounds like Areca is trying to meld RAID sets with some kind of logical volume management. What would be the advantage of volume sets, then? Are they dynamically and easily expandable (unlike partitions)?
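They do show up as plain block devices. A minimal sketch of formatting one, assuming the volume set appears as /dev/sdb (hypothetical name; anything over 2TB needs a GPT label rather than MBR):

```shell
# Partition and format an Areca volume set exposed as /dev/sdb.
# DESTRUCTIVE: double-check the device with lsblk first, then set
# WIPE_OK=yes to actually run; /dev/sdb is hypothetical.
if [ "${WIPE_OK:-no}" = "yes" ]; then
    parted /dev/sdb mklabel gpt
    parted -a optimal /dev/sdb mkpart primary 0% 100%   # 1 MiB-aligned start
    mkfs.ext4 /dev/sdb1
    mount /dev/sdb1 /array
fi

# parted's default 1 MiB alignment equals 2048 512-byte sectors.
echo $((1024 * 1024 / 512))                             # 2048
```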
 
What drives would you recommend then?

Hitachi's 5K3000 for a start. Western Digital RAID Edition GP. Perhaps the Samsung F4EG, also.


As far as remotely managing Dell PERC cards we do it via Dell's OpenManage Server Administrator webpages. I say webpages because it typically installs a webserver on your winblows box and accepts https connections on port 1311. It's fairly useless if you ask me because you have to log-in daily to check on your raid cards. We tried setting up blat scripts to send emails on equipment failures but they never seem to work when you need them too :(

Course all of our Dell's are now 4+ years old. I've been building whitebox servers for my company now for about 2 years.

Since you whitebox, I assume you've built your own HCL? Also, can RAID cards be managed via IPMI?
 
Hitachi's 5K3000 for a start. Western Digital RAID Edition GP. Perhaps the Samsung F4EG, also.




Since you whitebox, I assume you've built your own HCL? Also, can RAID cards be managed via IPMI?


All my servers are whitebox, and no, you cannot manage RAID via IPMI, other than using KVM-over-IP to connect to the console at boot time and hit Ctrl-C. It'd be the cat's ass if it could hook into the RAID BIOS.

For LSI cards, you can use MegaRAID storage manager to connect to the remote/local server running an LSI card. You can use the Linux or Windows variety. The driver itself should carry the appropriate CIM provider to do this.

The one exception I've found here is that it doesn't work that way with vSphere ESXi 4.1. I used to connect to the vSphere CIM to view and manage my RAID array, but I can't do that anymore.
 
Hitachi's 5K3000 for a start. Western Digital RAID Edition GP. Perhaps the Samsung F4EG, also.




Since you whitebox, I assume you've built your own HCL? Also, can RAID cards be managed via IPMI?

My HCL is my house, sadly. I spend stupid amounts on computer hardware to see what is compatible and what isn't. But good question. I've been sticking with Intel S5520SCR boards recently and give them two thumbs up for vSphere 4.1 support. But I'm not sticking any HDs or RAID cards in them. I'm using a SuperMicro X8DTE-O, Linux, SCST and a 1680ix-24 card for a SAN at the moment. Give me a year and ask how stable it all is then :)
 
@WoooT

The common thread in your posts is "Seagate". Perhaps you ought to consider that to be a possible source of your issues. :)

Seagate drives should generally be avoided with Areca cards unless you're using one of the SATA cards (or enterprise drives).


Well, I have 8 Seagate drives that ran for over 2 years without problems; then one disk on channel 5 started to fail. I replaced it, and now, 3 months later, another disk on the same channel has failed.

What are the odds? I could understand if a disk on one of the other channels failed, but this disk is brand new and the Areca card is telling me it is broken?

Well, maybe it is broken, maybe not, but I'll bet something is FUBAR with channel 5!
 
@qwkhyena

Can I assume then that you actually own the company you whitebox for? If not, they must trust you with their gonads. :p
 
Well, maybe it is broken, maybe not, but I'll bet something is FUBAR with channel 5!

With the extra info you provided, it may well be that you have a flaky channel #5. The only way to check would be to buy a brand that is known to have zero issues with hardware RAID and see what happens.

Go buy a Hitachi 2TB or WD REx-GP 2TB and see what happens. If it fails, obviously it's the port. If not...
 
Looks like HighPoint does something to drives directly. I had a similar experience once. All evening I could not figure out why my newly installed Areca did not see the drives, which a minute ago had been working just fine on a RR2320; even the onboard controller could not do much. But after all RAID definitions were deleted and the staggered spin-up feature was turned OFF on the RR2320, everything returned to normal. It looks like RR controllers toggle something in the drives' logic persistently.

RAID definitions/signatures are stored on the drives.

I tried turning off the staggered spin up feature, that did not help.

I deleted the raid completely, no luck either.

If I attach the drive(s) to an onboard SATA port and boot into Windows 2003, they are not even recognized.

In the HighPoint BIOS they are shown as initialized, so I guess they are prepared to work with that controller only. I cannot get them to go into new mode, or even legacy.

I've searched the manual and Google, and can't seem to find a way to prepare them for use outside that HighPoint RAID card.

any other ideas?

thanks all
 
It's been a while since I last touched any HighPoint hardware, but I'd try to delete whatever was defined on the drives and configure them in JBOD mode or whatever is available (standalone, pass-through). You need to get rid of HighPoint's definition of a RAID config; sort of de-initialize the drives.
There was something, I don't remember what, that allowed me to get my drives going. Experiment a little.
If it's a no-go, I'll try to get a user's manual and see what I found back then. Currently I have too much on my plate and need a couple of days before I can get to this.
 
It's been a while since the last time touched any Highpoint hardware, but I'd try to delete whatever was defined on the drives and configure them in a JBOD mode or whatever is available (standalone, pass-through). You need to get rid of the Highpoint's definition of a RAID config, sort of de-initialize them.
There was something, I do not remember what that was, what allowed me to get my drives going. Experiment a little.
If it is no go, I'll try to get a user's manual and see what I had found back then. Currently I have too much on my plate and need a couple of days before I could get to this matter.

Thanks, I had already tried what you suggested, but it was a no-go: I created a JBOD array and then blew it away; that doesn't work.

That BIOS is very basic; there aren't many options to play with, and nothing in the GUI either to do anything. I just find this weird.

thanks again for the help.
 
OK, try this:
1. Delete definitions of all existing arrays if any.
2. Switch visual presentation to view the list of the channels.
3. For every channel use the Unplug button to evict the drives one by one from the config.

This should disable whatever the controller does to the drives during initialization.
 
@qwkhyena

Can I assume then that you actually own the company you whitebox for? If not, they must trust you with their gonads. :p

Good guess, but no. We're a small employee-owned company. I do own over 2%, but not much more.

I typically get it up and running at home first, then show them what I've done. It also helps that I've been with them for over 12 years now, so I do have some pull there too.

Plus, let's be honest, no one knows what IT does, just that it works one way or another! :)
 
So the raid card is killing my disks? or is this common for low power disks to shut down, on and off???

Your post is very vague, but some drives marketed as "green power" and 5400/5900RPM do have minds of their own, with firmware that does its own aggressive power-saving and head-parking thing. Conversely, I've never known a RAID card to "kill disks". That said, your SMART stats look pretty normal.

If you're having trouble understanding SMART stats, then you could either Google it, read THIS, or create a separate thread about it. This isn't RAID card related.
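For reference, a sketch of pulling the relevant counters with smartmontools (/dev/sdb is an example device; the sample attribute line and its raw value are made up for illustration):

```shell
# Check the counters that betray aggressive power saving on "green" drives.
# /dev/sdb is an example; guarded so this is a no-op without smartmontools.
if command -v smartctl >/dev/null; then
    smartctl -A /dev/sdb | grep -E 'Load_Cycle|Power_Cycle|Power_On' || true
fi

# The raw value sits in the last column of each attribute line, e.g.:
line="193 Load_Cycle_Count 0x0032 095 095 000 Old_age Always - 17000"
echo "$line" | awk '{print $NF}'        # 17000
```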
 
your post is very vague but some drives marketed as "green power" and 5400/5900RPM do have minds of their own with firmware that does its own aggressive power saving and head parking thing. conversely i've never known a raid card to "kill disks". that said, your SMART stats look pretty normal.

if you're having trouble understanding SMART stats then you could either google or read THIS or create a separate thread about it. this isn't raid card related.


So I'm having trouble understanding SMART? A disk with TWENTY THOUSAND power-on counts isn't normal for a disk 3 months old; I've had all kinds of disks and never experienced this.

Just to be sure, I ran SeaTools a couple of minutes ago and it failed (long test and generic test), so yes, the disk is broken.

Not to be rude, but how is this not Areca related? I'm having an issue with my Areca RAID card kicking disks out of the RAID; maybe it is just a coincidence that those two disks were both on the same channel.
 
conversely i've never known a raid card to "kill disks". that said, your SMART stats look pretty normal.

While I agree that it seems unlikely the RAID card is "killing" the disk, the SMART value for Power Cycle Count does NOT look normal:

0C Power Cycle Count: current 81, worst 37, threshold 20, raw 0x4F22

I think there is something wrong. But I would put the RAID card low on the list of suspects. I would start with the cables, the power supply, the backplane (if present), and the HDD itself before blaming the RAID card.
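For anyone decoding that raw field: printf understands the 0x prefix, so the conversion is a one-liner.

```shell
# Decode the raw Power Cycle Count from the SMART line above.
printf '%d\n' 0x4F22      # 20258 power cycles
```

20258 cycles is right in line with the "twenty thousand power on counts" mentioned earlier in the thread.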
 
I'm going to second Odditory's comment - you have the worst drives possible for your RAID config.

You could try disabling NCQ support and running the disks in SATA150 mode to mitigate the issue. If you don't like that, then get yourself some drives that are better suited for the job - Hitachi or WD REx ones.
 
Relax d00d, you had an HDD die; it happens all the time. The fact that it was the same port is probably a coincidence.

You say you have had this disk for ~3 months, but the power-on hours counter is at 1068hrs, which is barely 1.5 months, so maybe it is turning on and off a lot because it's a green drive. Do you have another one on this controller to compare to?

First, have you put a known working disk in there temporarily to see if the power-on count climbs excessively high?
Have you checked all of your connections?

RAID cards/SATA controllers rarely kill HDDs. Something else was wrong. It probably wouldn't kill you to use drives that are better suited for RAID... just sayin'.
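The arithmetic behind that 1.5-month figure, for anyone checking along (numbers from the post above):

```shell
# 1068 power-on hours versus roughly three calendar months of ownership.
echo $((1068 / 24))       # 44 days actually powered on
echo $((90 * 24))         # 2160 hours if it had run 24/7 for ~90 days
```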
 
Relax d00d you had a HDD drive die, it happens all the time. The fact that it was the same port is probably a coincidence.

You say you have had this disk for ~3months but the power on hours counter is at 1068hrs, which is hardly 1.5 months, so maybe it is turning on/off a lot because its a green drive. Do you have another one on this controller to compare to?

First have you put a known working disk in there temporarily to see if the power on count climbs excessively high?
Have you checked all of your connections?

RAID cards/sata controllers/ rarely kill HDDs. Something else was wrong. It probably would kill you to use drives that are better suited for RAID....just sayin.


I'm relaxed, just frustrated :). I said 3 months, but I looked at the receipt and it's from 14 Oct 2010, so actually 4 months. So yeah, obviously it hasn't been online all the time, going by the SMART values (44 days), and I haven't had time to install archttp to check disk status.

This is the only low-power disk in the RAID... the other drives, which are over 2 years old, have around 90 power-on counts.
 
I'm almost done initializing a quick 4-drive RAID 5 array of Seagate Barracuda LP 2TB ST32000542AS drives. I'm curious to see how it works out. If it gives me any timeouts, I'll probably just sell them off to people I know for dirt cheap and get the 5K3000 drives in their place for my back-up array.

I've seen one or two people using these drives with an 1880i, anyone else care to tell me if there are any settings I need to be aware of?
 
Well....that was a fast answer:
2011-02-23 00:17:41 Enc#1 Slot#6 Time Out Error
2011-02-23 00:12:28 SEAGATE Complete Init 007:43:23

Anyone need a Seagate LP 2TB on the cheap? haha...

Slot#1 2000.4GB Hitachi HDS723020BLA642
Slot#2 2000.4GB Hitachi HDS723020BLA642
Slot#3 2000.4GB Hitachi HDS723020BLA642
Slot#4 2000.4GB Hitachi HDS723020BLA642
Slot#5 2000.4GB ST32000542AS
Slot#6 2000.4GB ST32000542AS
Slot#7 2000.4GB ST32000542AS
Slot#8 2000.4GB ST32000542AS
 
Well....that was a fast answer:
2011-02-23 00:17:41 Enc#1 Slot#6 Time Out Error
2011-02-23 00:12:28 SEAGATE Complete Init 007:43:23

Anyone need a Seagate LP 2TB on the cheap? haha...

Slot#1 2000.4GB Hitachi HDS723020BLA642
Slot#2 2000.4GB Hitachi HDS723020BLA642
Slot#3 2000.4GB Hitachi HDS723020BLA642
Slot#4 2000.4GB Hitachi HDS723020BLA642
Slot#5 2000.4GB ST32000542AS
Slot#6 2000.4GB ST32000542AS
Slot#7 2000.4GB ST32000542AS
Slot#8 2000.4GB ST32000542AS


Hahaha, same drive as me. What gives the error? The disk dropping out of the RAID?


So it seems the LP is a no-go for this RAID card.
 
:) It did not take long to see why those Seagate drives should be avoided. They are probably fine for a standalone config, but by their nature they are not designed to run RAIDed.
Get those matching Hitachi drives and your setup will be almost golden. :)
 
It will probably do for now, until I have a chance to find them a home. I have 7 of them... boo. Oh well. The Hitachi 7K3000 drives are pretty impressive in just a four-drive array. Copying between arrays is unbelievably fast; I copied 4 GB from array to array seemingly instantly. :)
 