ARECA Owner's Thread (SAS/SATA RAID Cards)

I did a little digging, and none of the WD Red drives are on the compatibility list, nor are the new Seagate IronWolf drives.

Am I missing something?
Like every vendor out there, Areca probably only adds drives to their compatibility list that they have tested extensively in house.
Do you have drives already? What model are they?
 
Like every vendor out there, Areca probably only adds drives to their compatibility list that they have tested extensively in house.
Do you have drives already? What model are they?

I don't have the hardware yet... Still doing a little research to avoid any difficulties later...
 
Areca (and many other enterprise manufacturers) generally only add their own qualified models or third-party "enterprise" drives (meaning WD RE, Seagate Constellation/Enterprise, the higher-end Hitachi Ultrastars, etc.) to their HCLs, not SOHO/NAS drives.
 
Doesn't matter. Areca will run pretty much anything.

So I've ordered a controller from eBay and got this: Areca ARC-1880DI-IX-12.
(http://www.ebay.com/itm/272769283782)

Does anyone know what the "D" in the model name represents?

Is it just an ix-12 variant? Is it safe to flash the ix firmware?

Edit: Also, what does this message mean: "No BIOS disk found. RAID Controller BIOS not installed"? I can access the boot menu from the controller...
 
You bought an 1880ix-12. You can ignore the "D". There's no such thing as "ix" firmware; there's a single firmware across the 1880 line. You can also flash the SAS expander, which is specific to cards with the "ix" submodel, but I would advise against that. You probably don't have the right cable for it anyway.

As for the message you're getting, that's normal. You don't have a bootable array or disk configured.
 
So I've connected two 3TB HDDs to the card's first SAS port, and in the web interface the card shows the drives as connected to "Enclosure #2", which apparently is an expander and not the actual SAS controller.

Can anyone explain why?
 

That's expected. The 12 port and up models have integrated SAS expanders. The CPUs in modern RAID cards only have 8 lanes tops, so that's the only way to get more ports. The 8 lanes all go to the SAS expander and then all the drives hook up to it. You're not going to see any performance difference if that's what you're worried about.
 
That's expected. The 12 port and up models have integrated SAS expanders. The CPUs in modern RAID cards only have 8 lanes tops, so that's the only way to get more ports. The 8 lanes all go to the SAS expander and then all the drives hook up to it. You're not going to see any performance difference if that's what you're worried about.
Thanks for the answer :)

Do you happen to know why the controller reports my 2x3TB Red drives as having "Error Recovery Control (Read/Write): Disabled/Disabled"? Shouldn't a Red drive support that?

Also, should I leave TLER at the default or set it to, let's say, 7 seconds?
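(For what it's worth: when a drive is visible to the OS directly, e.g. on a plain HBA or a motherboard port, ERC/TLER can usually be checked and set with smartctl; a quick sketch, with /dev/sdX as a placeholder for the drive:)

Code:
# read the current Error Recovery Control (TLER) timeouts
smartctl -l scterc /dev/sdX
# set the read and write recovery limits to 7.0 seconds (values are in tenths of a second)
smartctl -l scterc,70,70 /dev/sdX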
 
You can click on each device to see the temperatures. The card will also alarm if the temperature on any given drive goes too high.
 
You can click on each device to see the temperatures. The card will also alarm if the temperature on any given drive goes too high.

Is 40C at idle and 44C under load decent?

I'm using 2 fans in the Node 804 HDD section...
 
Is 40C at idle and 44C under load decent?

I'm using 2 fans in the Node 804 HDD section...
Maybe a little on the high side, but still within thresholds. You should be fine. I think the alarm goes off at 50C or 60C. I forget which.
 
If anyone is interested, shoot me a PM; I've got 10+ Areca cards I picked up. Just learning more about them via this thread, really helpful info.


 
Hi.

I have a small problem.
Today my Areca 1280ML with 11x3TB in a RAID6 started to act funny.
Logged into web-gui and saw a lot of read errors on the last drive.
Immediately started to copy off critical data and managed to do so.

Looked at SMART status, but nothing special there.
Then I remembered something about Areca cards not passing SMART info through to the OS and installed HD Sentinel, which can get info for the individual disks in a RAID.
The same disk throwing the read errors now showed about 100 reallocated sectors.
That number rose to 123 during the copy, and I had to resume the backup several times before I got a complete backup.
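(Side note: smartmontools can apparently also query disks sitting behind Areca cards; a sketch of the Linux form, where the drive number 11, the enclosure number 2, and the /dev/sg0 device are all assumptions to adjust for your own setup:)

Code:
# SMART data for drive 11 attached directly to the Areca controller
smartctl -a -d areca,11 /dev/sg0
# if the drives sit behind an enclosure/expander, the slot is given as drive/enclosure
smartctl -a -d areca,11/2 /dev/sg0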

I then saw that uTorrent was having trouble.
All downloads said "files missing".
Checked the download directory and got this: "The file or directory is corrupted or unreadable".
Found a whole bunch of files in my backup dir also having the same issue.
It seems like my movies and series are unaffected.

I have ordered new disks to replace the failing one.
Will not do anything before that is done.

What can I do to try to fix the corrupted files/folders without losing data? (I have all 26TB on cloud backup, but it takes forever to get it back :))

Thanks for any help on this.
 
Hi.

I have a small problem.
Today my Areca 1280ML with 11x3TB in a RAID6 started to act funny.


Thanks for any help on this.

If you had an R6 setup, then a single failing drive should have just degraded the array and made access a bit slower, but it shouldn't have trashed the data on the array. You may have more serious problems going on. When is the last time you did an integrity or volume check?
 
If you had an R6 setup, then a single failing drive should have just degraded the array and made access a bit slower, but it shouldn't have trashed the data on the array. You may have more serious problems going on. When is the last time you did an integrity or volume check?

The last time I did a volume check would be some days after building the RAID and stress testing it. Maybe 2 years ago?

Here is a little more info;

- I have a server running ESXi
- The Areca is not a datastore, but a passed-through controller to one of the VMs
- This VM is a dedicated download VM with only uTorrent running on Windows Server 2012 R2
- RAID was originally 8x3TB HGST drives in RAID 6
- Expanded with 1x3TB TOSHIBA after 6 months
- Expanded further with 2x3TB TOSHIBAS 6 months after that.
- Since ESXi "just works", I have had 6 reboots of the host in those 2 years, mainly for updates.

In my last post I had 123 reallocated sectors.
Now the number is 171.
EDIT: 2 min later: 173

As you say, it should have just degraded the array. The problem is that the Areca web GUI thinks everything is normal.

Here is a screenshot from HD sentinel for drive 11.

upload_2017-10-7_9-41-23.png


Here is what Areca says

upload_2017-10-7_9-43-12.png


Here is disk 11.

upload_2017-10-7_9-44-6.png


Areca log

upload_2017-10-7_10-19-31.png


Should I pull the disk before it makes more problems?
Really don't know what to do now.
 
HD Sentinel now reports 699 Reallocated sectors and health is at 10%. The VM has started to bluescreen and restart from time to time.
The web GUI still says "Normal".
The new drives will arrive today, so I could really use some help with what to do next.
 
So while trying to figure out how to proceed with a potential new RAID card for my new drives (I asked on reddit since my last similar thread never got a reply here, see www.reddit.com/r/DataHoarder/comments/782hdh/planning_on_adding_another_8_disk_raid_6_array_to/ ), I picked up on something odd. The last two scheduled checks have taken twice the normal amount of time, without any significant increase in the stored data or other changes to the system. I did note here when I installed Windows 10 and a new Ryzen system, but since one check completed in its normal time after the install, I don't think that's the reason. The card is in a PCI Express 2.0 x16 (x4 electrical) slot currently, but it was also there during the last 74-hour check. I've attached the logs here; anyone have any ideas?

*EDIT* So I must have been mistaken, because I've now got it in the top PCI Express 3.0 x8 slot and the check is scheduled to finish with a total time of around 70-80 hours, which is typical. It really didn't like being in a PCI Express 2.0 x4 slot...

Areca 1220 log 10-23-2017.png
 
Hello,
since yesterday I've been having problems with my Areca controller and RAID 6 volume.
Without any warning it drops the complete array.
And when I check the controller, the expander name is totally garbled (see image below).
I've also attached the log of the controller.
When I reboot my server everything is fine again, only to get messed up after some time.
Is there anything that I can do please?
Thanks in advance!

Code:
Time    Device    Event Type    Elapse Time    Errors
2017-10-26 07:13:23    Enclosure#2    Removed        
2017-10-26 07:13:23    Enc#2 SES2Device    Device Removed        
2017-10-26 07:13:23    Enc#2 SLOT 03    Device Removed        
2017-10-26 07:13:23    Enc#2 SLOT 02    Device Removed        
2017-10-26 07:13:23    Enc#2 SLOT 01    Device Removed        
2017-10-26 07:13:22    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:22    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:22    Enc#2 SLOT 04    Device Removed        
2017-10-26 07:13:22    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:21    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:21    Enc#2 SLOT 20    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 19    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 18    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 17    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 16    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 15    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 14    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 13    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 12    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 11    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 10    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 09    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 08    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 07    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 06    Device Removed        
2017-10-26 07:13:21    Enc#2 SLOT 05    Device Removed        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:21    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:20    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:20    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:19    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:19    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:19    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:18    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:18    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:17    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:17    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:17    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:16    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:16    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:16    ARC-1880-VOL#001    Volume Failed        
2017-10-26 07:13:16    ARC-1880-VOL#001    Volume Degraded        
2017-10-26 07:13:16    ARC-7K4000    RaidSet Degraded        
2017-10-26 07:13:16    ARC-1880-VOL#001    Volume Degraded        
2017-10-26 06:56:54    192.168.001.002    HTTP Log In        
2017-10-26 06:55:34    H/W Monitor    Raid Powered On        
2017-10-26 06:52:22    192.168.001.033    HTTP Log In        
2017-10-26 05:06:42    Enclosure#2    Removed        
2017-10-26 05:06:42    Enc#2 SES2Device    Device Removed        
2017-10-26 05:06:42    Enc#2 SLOT 01    Device Removed        
2017-10-26 05:06:42    Enc#2 SLOT 04    Device Removed        
2017-10-26 05:06:42    Enc#2 SLOT 03    Device Removed        
2017-10-26 05:06:42    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:42    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:42    Enc#2 SLOT 02    Device Removed        
2017-10-26 05:06:41    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:41    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:41    Enc#2 SLOT 20    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 19    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 18    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 17    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 16    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 15    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 10    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 09    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 08    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 07    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 06    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 05    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 13    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 14    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 12    Device Removed        
2017-10-26 05:06:41    Enc#2 SLOT 11    Device Removed        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:41    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:41    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:40    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:39    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:39    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:39    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:38    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:38    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:37    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:37    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:37    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:36    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:36    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:35    ARC-1880-VOL#001    Volume Failed        
2017-10-26 05:06:35    ARC-1880-VOL#001    Volume Degraded        
2017-10-26 05:06:35    ARC-7K4000    RaidSet Degraded        
2017-10-26 05:06:35    ARC-1880-VOL#001    Volume Degraded        
2017-10-25 21:04:16    192.168.001.002    HTTP Log In        
2017-10-25 20:47:10    H/W Monitor    Raid Powered On

HTML:
 Controller:Areca ARC-1880IX-24 1.51
SAS Address    5001B4D10C74D000
Enclosure    ENC#1
Number Of Phys    8
Attached Expander    Expander#1[5001B4D7073A003F][7x6G]
 Expander#1:¼ƒ@pƒ¥@¥ï@0¸@ýö@ ÓŠ@¼ƒ@
SAS Address    5001B4D7073A003F
Component Vendor    ¼ƒ@
Component ID    9083
Enclosure    ENC#2
Number Of Phys    0
 
Well, I picked up an ARC-1883ix-16-2G open box from Newegg, and I wouldn't have been able to tell it wasn't new if I hadn't saved about half off their full price. I'd like to keep my existing array from my ARC-1220, and from what I've read this should be possible. It has been mentioned previously in this thread that I should connect the drives in the same order as I had them connected to the old controller. However, I'm having difficulty determining what order the ARC-1883ix-16-2G's ports go in. In the manual, the bottom two SFF-8643 ports are SCN1 and the upper two are SCN2, each a group of two SFF-8643 connections. But which of these two is first? Should I plug drives 1-4 into the SFF-8643 connector of SCN1 closest to the PCI Express connector (A), or the other SCN1 connector (B)?
areca scn1.png


Additionally, I was considering upgrading the firmware before I start using the card. However, I am a little nervous there because there is only firmware for the 1883 series, not specifically the 1883ix. There's also one for the ARC-1883ix expander, but again I think that's for an add-on. I am pretty sure the ix cards are just modifications of the base 1883 card, with the integrated expander for more ports. However, it would ease my mind if someone could confirm the 1883 series RAID firmware is appropriate for my card; I don't want to brick this thing!
 
Drive order is mostly critical for recovery purposes, not so much when moving between cards. If you wish to maintain the same order, however, you'll start with the bottom connector.

Regarding firmware, it's universal between the cards in the series. They're all identical internally except for how the ports are wired up. The expander does have separate firmware that you can update through the serial port on the back of the card, but I'd recommend against doing that unless you have a good reason to.
 
Well, I think something isn't right. The old array, which was created on the 1220, seems to be having a lot of issues. I made sure nothing was accessing the arrays before I ran the tests; Task Manager confirmed I had stopped the things that were constantly touching them (Backblaze and Plex, mainly). I can excuse the slower speed, I guess, because it's an older drive, but it spins faster than the new array (5700 vs. 5400 RPM). Both arrays were created with the same settings; maybe it's really not compatible with the way the 1220 created it? Various hard drive tools and SMART info don't report anything wrong with the drives. All these drops are going to be a problem, I think; I wasn't really looking to transfer everything over to the new array. I tried transferring some 8GB files over from old to new and the speed was very inconsistent. Any ideas?
Areca arrays old vs new 11-03-17.png
 
I have a couple of questions about my Areca (1882) setup (several 4-disk RAID 5s).

1. The transfer speed when transferring a lot of data seems to be uneven: it is quick for a good while, then very low for several seconds, and then it increases again. I think it is because the controller is flushing its cache. Is there an easy way to set up the preferences to get a more consistent transfer speed, tuned for more than one stream (I guess at the cost of maximum transfer rate)?

2. Is there a good/easy way to set up tiered storage together with the array? I have an SSD I want to use as a "cache", with external software or otherwise.
 
Well, I think something isn't right. The old array, which was created on the 1220, seems to be having a lot of issues. I made sure nothing was accessing the arrays before I ran the tests; Task Manager confirmed I had stopped the things that were constantly touching them (Backblaze and Plex, mainly). I can excuse the slower speed, I guess, because it's an older drive, but it spins faster than the new array (5700 vs. 5400 RPM). Both arrays were created with the same settings; maybe it's really not compatible with the way the 1220 created it? Various hard drive tools and SMART info don't report anything wrong with the drives. All these drops are going to be a problem, I think; I wasn't really looking to transfer everything over to the new array. I tried transferring some 8GB files over from old to new and the speed was very inconsistent. Any ideas?
Just wanted to update that I gave up and wiped out the old array. I created a RAID set and volume set the exact same way as I had it on the old controller, and now it's behaving. There was SOMETHING that carried over from the 1220's configuration of the array that didn't play nice on the 1883. Here are the same disks as the ones shown on the left of the previous image:
Areca 1883 hdtune & crystaldiskmark 11-08-2017.png
So while I was able to see the old array on the new controller and copy all the data off of it, there was something buggy about it. The new array took about 12 hours to initialize, which is also ridiculously faster than the 180 hours it took to initialize on the 1220.

Sorry Ugle43, I have no answers for your questions.
 
Hey folks, just signed up after stumbling on this thread. I'm in the process of setting up a 3x8TB (IronWolf) array. It's doing a background initialisation, set at 80%, and taking about 1.5 hours per percent, so it's going to take a week to initialise. Is this normal? I'm setting it up as RAID 3 after getting advice from the hardware folk on the Adobe forums.

The other thing I can't find an answer to: can you expand a RAID 3 array like you can a RAID 5?
 
Hey folks, just signed up after stumbling on this thread. I'm in the process of setting up a 3x8TB (IronWolf) array. It's doing a background initialisation, set at 80%, and taking about 1.5 hours per percent, so it's going to take a week to initialise. Is this normal? I'm setting it up as RAID 3 after getting advice from the hardware folk on the Adobe forums.

The other thing I can't find an answer to: can you expand a RAID 3 array like you can a RAID 5?
This depends entirely on the controller card, so let us know what you have. My old 1220 took 100 hours to initialize five 4TB drives in RAID 6; my new 1883 initialized eight 8TB drives in RAID 6 in 16 hours. Additionally, if you don't currently have an array set up, there's no reason to do a background initialization.

I can't comment on expanding it; RAID 3 is rarely used, it seems, and I have no experience with it.
 
This depends entirely on the controller card, so let us know what you have. My old 1220 took 100 hours to initialize five 4TB drives in RAID 6; my new 1883 initialized eight 8TB drives in RAID 6 in 16 hours. Additionally, if you don't currently have an array set up, there's no reason to do a background initialization.

I can't comment on expanding it; RAID 3 is rarely used, it seems, and I have no experience with it.

Ah yes, that would help: it's a 1223-8i. I selected background because I wasn't sure if foreground required staying in the firmware console. At that point I also had no idea it would take so long. So I might just ditch my 2.4% and start again; it could speed things up by a day!

EDIT: Foreground, that's more like it! 1% in 6 minutes, ETA 10 hours!
 
Ah yes, that would help: it's a 1223-8i. I selected background because I wasn't sure if foreground required staying in the firmware console. At that point I also had no idea it would take so long. So I might just ditch my 2.4% and start again; it could speed things up by a day!

EDIT: Foreground, that's more like it! 1% in 6 minutes, ETA 10 hours!
Yup, there you go. No, once you start an operation you don't need to sit and watch it, unless you started it from the BIOS before Windows loaded, of course!
 
I'm having an issue with extremely poor read/write performance, but only when I copy from a RAID 5 array to a passthrough disk. I'm running FreeBSD 11 on an (yes, it's old) Areca 1170. Mostly I've been using dd to try to figure out what's happening.

`dd if=/mnt/array/testfile of=/dev/zero` gives me a read speed of 300MB/s or something. Not great, but that beats single-spindle write speeds...
`dd if=/dev/zero of=/mnt/passthrough/testfile` gives me a write speed of ~140MB/s

The problem is that `dd if=/mnt/array/testfile of=/mnt/passthrough/testfile` gives me ~30-38 MB/s. And this is far from acceptable.

Ch/Id/Luns are 00/00/00 and 00/00/02 for the RAID 5 array and the passthrough; there's plenty of memory available and the processor is basically idle.

Reflecting on the fact that there is another array on 00/00/01, I tried moving files to and from there. Array 2's transfers to the passthrough are actually even slower (~22MB/s); array2->array1 and array1->array2 transfers are similar to the /dev/zero-to-array writes.

I'm curious why I'm seeing such poor performance. Copies from the passthrough disk to the first array are also slow (around 60 MB/s), but a lot better than the copies in the other direction.
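(One thing worth ruling out: all of the dd runs above use dd's default 512-byte block size, which turns an array-to-disk copy into a stream of tiny I/Os. Re-testing with an explicit block size would separate dd overhead from controller behaviour; a sketch, where the ddtest filename is just a placeholder:)

Code:
# the same copy, but with a 1 MiB block size instead of dd's 512-byte default
dd if=/mnt/array/testfile of=/mnt/passthrough/testfile bs=1m
# raw sequential read and write with the same block size, for comparison
dd if=/mnt/array/testfile of=/dev/null bs=1m
dd if=/dev/zero of=/mnt/passthrough/ddtest bs=1m count=16384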
 
I've been running the RAID for a few months now (1880ix-12), with 2 RAID sets:

2x3TB RAID 1 (WD Reds)
4x6TB RAID 6 (Seagate NAS drives)

Over the last couple of days, three disks have dropped off the arrays. It started with my RAID 1 becoming degraded and then gone,
and now one of my 6TB drives has dropped off too (marked as FREE on the controller).

The Event log states:

2018-02-06 06:57:17 Enc#2 SLOT 06 Device Inserted
2018-02-06 06:57:05 Enc#2 SLOT 06 Device Removed
2018-02-06 00:40:31 Ctrl 12V Recovered
2018-02-06 00:40:01 Ctrl 12V Under Voltage
2018-02-06 00:37:01 Ctrl 12V Recovered
2018-02-06 00:36:31 Ctrl 12V Under Voltage
2018-02-06 00:35:31 Ctrl 12V Recovered
2018-02-06 00:34:31 Ctrl 12V Under Voltage

Several times.

My Hardware info is:

CPU Temperature 42 ºC
Controller Temp. 27 ºC
12V 10.518 V
5V 5.080 V
3.3V 3.376 V
DDR-II +1.8V 1.856 V
CPU +1.8V 1.856 V
CPU +1.2V 1.264 V
CPU +1.0V 1.056 V
DDR-II +0.9V 0.928 V
Battery Status Charging(92%)


I got this message regarding the RAID 6 set, while with the RAID 1 it just dropped off.
The drives are different, bought separately, and it's highly unlikely that disks from 2 sets would drop out
suddenly (within 3 days).

I'm using a Xeon CPU, an ASRock Rack mobo, the Areca I got off eBay, and a SeaSonic SS-850-AM 850W (Bronze) PSU.

I'm sure it's not the SAS-SATA cables, because the dropped drives are connected via 2 sets of cables
into 2 different ports on the controller.

Is it the PSU? MOBO? CARD?

Update: My ASRock Rack 12V sensor reports the 12V rail as lower critical. Does that mean my PSU is dying?
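(For anyone wanting to keep an eye on that rail from the OS side: if the board's BMC is reachable, ipmitool can poll the voltage sensors; a sketch, assuming a Linux host with ipmitool installed and talking to the local BMC:)

Code:
# list all voltage sensors with current readings and thresholds
ipmitool sdr type Voltage
# or poll the full sensor table and filter for the 12V entry every 5 seconds
watch -n 5 "ipmitool sensor | grep -i 12V"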

Please help me :)
 
The Event log states:

2018-02-06 06:57:17 Enc#2 SLOT 06 Device Inserted
2018-02-06 06:57:05 Enc#2 SLOT 06 Device Removed
2018-02-06 00:40:31 Ctrl 12V Recovered
2018-02-06 00:40:01 Ctrl 12V Under Voltage
2018-02-06 00:37:01 Ctrl 12V Recovered
2018-02-06 00:36:31 Ctrl 12V Under Voltage
2018-02-06 00:35:31 Ctrl 12V Recovered
2018-02-06 00:34:31 Ctrl 12V Under Voltage

Several times.

My Hardware info is:

CPU Temperature 42 ºC
Controller Temp. 27 ºC
12V 10.518 V
5V 5.080 V
3.3V 3.376 V
DDR-II +1.8V 1.856 V
CPU +1.8V 1.856 V
CPU +1.2V 1.264 V
CPU +1.0V 1.056 V
DDR-II +0.9V 0.928 V
Battery Status Charging(92%)


The 12V rail looks a bit low.

Is there a different 12V rail that you can try?
If not, do you happen to have a spare PSU that you can swap out and test with?
 
By "rail" do you mean the 24-pin connector?
From the looks of it, it's one "big" 12V rail which is then divided at the modular connector, so if possible, try a different connector at the PSU.

Do you happen to have a power supply tester (or a multimeter)? You may want to test the PSU just by itself.
If you are using a multimeter, you can force a PSU on by grounding pin 14 (on 20-pin ATX) / pin 16 (on 24-pin ATX) (aka the green wire) to any black ground (pin 13/15, for example).
 
From the looks of it, it's one "big" 12V rail which is then divided at the modular connector, so if possible, try a different connector at the PSU.

Do you happen to have a power supply tester (or a multimeter)? You may want to test the PSU just by itself.
If you are using a multimeter, you can force a PSU on by grounding pin 14 (on 20-pin ATX) / pin 16 (on 24-pin ATX) (aka the green wire) to any black ground (pin 13/15, for example).

The PSU is semi-modular; the big 24-pin connector goes straight out of the PSU. The modular connections are for all sorts of peripherals, but the big one feeding the board comes straight out of the PSU.
 
The PSU is semi-modular; the big 24-pin connector goes straight out of the PSU. The modular connections are for all sorts of peripherals, but the big one feeding the board comes straight out of the PSU.
That card doesn't have its own power connector, so it depends on the power passed through the slot on the motherboard. As I'm sure you've figured out, it's almost certainly the power supply. It's possible the motherboard isn't passing through the juice, but with your Rack sensor reporting low voltage I'm leaning more toward the PSU not delivering the volts. How's the other motherboard connector (8-pin, but maybe 4-pin if it's old enough)? I would really suggest you find another power supply from some other machine to test with. Unfortunately, the arrays will likely be unrecoverable; who knows what kind of data corruption happens with inadequate voltage.
 
That card doesn't have its own power connector, so it depends on the power passed through the slot on the motherboard. As I'm sure you've figured out, it's almost certainly the power supply. It's possible the motherboard isn't passing through the juice, but with your Rack sensor reporting low voltage I'm leaning more toward the PSU not delivering the volts. How's the other motherboard connector (8-pin, but maybe 4-pin if it's old enough)? I would really suggest you find another power supply from some other machine to test with. Unfortunately, the arrays will likely be unrecoverable; who knows what kind of data corruption happens with inadequate voltage.

Is it possible my drives were damaged?
Also, the RAID 6 array is degraded; can't I just replace the PSU and reconnect the "free" drive?
 
Is it possible my drives were damaged?
Also, the RAID 6 array is degraded; can't I just replace the PSU and reconnect the "free" drive?
I think the drives themselves are likely OK; you were under voltage, not over. I'm sorry I replied too quickly regarding the arrays. The RAID 1 may be gone, but the RAID 6 array should be fine with only one drive dropped out.
 
I think the drives themselves are likely OK; you were under voltage, not over. I'm sorry I replied too quickly regarding the arrays. The RAID 1 may be gone, but the RAID 6 array should be fine with only one drive dropped out.

Thanks! Gonna buy a PSU and report back...
 
I think the drives themselves are likely OK; you were under voltage, not over. I'm sorry I replied too quickly regarding the arrays. The RAID 1 may be gone, but the RAID 6 array should be fine with only one drive dropped out.

Well, I've purchased a new PSU and also replaced my mobo.
The 12V is now spot on; however, the RAID still shows as degraded.
I've set the "failed" HDD to "Fail" and re-activated it, but I'm unable to
re-introduce it into the array.

How do I make the array accept the "failed" drive?

EDIT: Also tried making the drive a pass-through and deleted all its info,
but it still shows as "Free" on the controller.

Solved: I made the "failed" disk a hot spare and it started rebuilding :)
 