Blah... I unplugged my ARC-1170 while shuffling things around in the case, and now my machine refuses to POST with it attached. The web interface doesn't appear to be coming up either.
I'm now extremely frustrated, but I might as well check: do the various Areca cards use the same on-disk layouts? If I buy something (much) newer, will there be any issue just using the current arrays?
I upgraded from a 1220 to an 1883 and my old array was detected fine; it still worked, although performance had odd speed spikes. I can't rule out that I did something wrong, but I remade the array on the new controller and all has been fine since then, so the disks weren't to blame. So you should be fine, but don't be surprised if you notice some oddities.
I bet someone has already asked this, but I can't find a solid answer to it.
Currently I'm using SFF-8087 to 4x SATA cables from my HP SAS expander to all the disks in the array, but I'm planning to move to a new case that has a built-in SAS backplane for all the disks. I'm having a hard time figuring out which bay in the new case is number 1, 2, 3, and so on.
Is it critical that each disk have the same "Device Location" in the new case when I move them?
Location doesn't matter with an Areca array.
OK, thanks.
So I can put the disks in any port on the SAS expander and it will work?
It's never caused an issue for me. To be safe, turn on the setting that keeps an incomplete RAID set from being activated, then go into the card's BIOS on boot to confirm it found the array.
This is not completely accurate. For regular use, yes. But if you ever need to recreate the array with the SIGNAT or LeVeL2ReScUe commands, the drives need to be in the order the array was originally created in for the best chance of success. This has been shown time and time again in attempts to recreate arrays over the years.
Can't say I have ever had a need to use these commands. Should one need to do this, how would you know which drive goes where?
Label drives to drive cage positions, label the cable ends, etc.; whatever is appropriate for your situation. Imagine the drives got pulled and all the cables disconnected while you weren't there: what would you have had to do to reassemble them exactly as they were?
LOL, I think we all know how to make labels. Is there no identifying data on the drive itself that can be read by the card? It is probably a good idea to label them just in case.
Yeah, one of the benefits is that for the most part that data is readable by the Areca controller card, which is why putting them back in order usually doesn't matter. It's when you've had a drive failure, and some of that data may be corrupted, that knowing which drive went where is key.
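For anyone who wants a record that survives lost labels: on a Linux box, smartmontools can read drive identity pages through the controller, so you can dump a slot-to-serial map and file it away. A rough sketch along those lines; smartctl's "-d areca,N" device type is real, but the controller node and slot count below are assumptions you'd adjust for your own box.
[CODE]
#!/usr/bin/env python3
# Rough sketch: record which serial number sits in which Areca slot,
# so the original drive order can be reconstructed later. Requires
# smartmontools; smartctl's "-d areca,N" device type addresses the
# drive in slot N behind the controller.
import json
import re
import subprocess

CONTROLLER = "/dev/sg0"  # assumption: your controller's SCSI generic node
SLOTS = range(1, 9)      # assumption: an 8-bay setup

def drive_serial(slot):
    """Return the serial number smartctl reports for a slot, or None."""
    out = subprocess.run(
        ["smartctl", "-i", "-d", f"areca,{slot}", CONTROLLER],
        capture_output=True, text=True,
    ).stdout
    m = re.search(r"Serial Number:\s*(\S+)", out)
    return m.group(1) if m else None

mapping = {slot: drive_serial(slot) for slot in SLOTS}
print(json.dumps(mapping, indent=2))

# Stash the mapping somewhere that is NOT on the array itself.
with open("areca_slot_map.json", "w") as f:
    json.dump(mapping, f, indent=2)
[/CODE]
Keep the resulting file off the array; matching the serial numbers printed on the drive labels against it tells you exactly which cage position each disk came from.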
A couple days ago my Windows 10 Pro PC with an Areca ARC-1882ix-16 crashed and rebooted in the middle of the night, but got stuck at the Windows logo screen with the spinning circle. Thinking it was a corrupted Windows NVMe SSD, I did a fresh Windows install to an old spinning hard drive to try and troubleshoot. It booted fine, of course. So I installed the Areca drivers and rebooted. The new install of Windows took a long time at the restart screen, then blue-screened with a comment about a "driver power state error". When it tried to boot up, it got stuck at the Windows logo screen just like my original install on the SSD!
From this I drew the conclusion I have a problem with the RAID card, and it is likely causing Windows not to boot. Also, when the PC is powered off, about every 15 seconds the RAID card emits a short beep. It was getting late last night so I did not run any further experiments like removing the RAID card to confirm my theory. As I thought about it this morning, I wonder if these symptoms are an indication the RAID card fan has failed? Has anybody else experienced a fan failure, and is the beeping supposed to be an indicator of this? If this is the case, what is a good quality replacement fan?
If the RAID card is beeping with the power off, that kind of tells me you have a battery backup unit attached to it as well? Otherwise I can't figure out why a failed fan would be causing a beep with the system powered off. Anyway, on boot, get into the card's setup before Windows loads and see if all looks well in that environment. You should also unplug the RAID card as a test to see if you can boot Windows without it, as you mentioned you didn't have time for this.
Yes, I have a battery backup. PCs have a 5V standby power supply as well, which is hot even when the PC is "off". I have no idea which is powering the beeper, or if the periodic beeping is trying to tell me something. I'll report back after work this evening. I think after I switched to UEFI boot in the RAID card, the option to get into the RAID BIOS using the keyboard disappeared. How do I do this? Or maybe I won't be able to if the overheat self-protect is active. But I should be able to see that by watching the fan.
I don't know; my server motherboard boots UEFI and still shows the Areca initialization and status, with the few seconds' opportunity to enter the setup afterwards. To be fair, it's still consumer-grade stuff, just an Asus Prime X370-Pro with a Ryzen 1700X. I almost wish I could turn it off, as that's 80% of my boot-up time, though I never even considered it. You might also be able to connect over Ethernet if you have that set up.
So I have an Areca 1883ix-24 RAID card with an external SFF-8644 port for expansion. Once this box filled up, I went out and built another expansion box using the Areca ARC-8028 24-port expander box. I connected the two via each device's SFF-8644 port, but the 1883ix can't see the expansion box.
The light on the back of the 1883ix card where the port is has a slow blinking green light, but nothing on the ARC-8028 expander box that would indicate the connection is active. I've scoured the manuals to see if I need to do anything to initialize the connection, but I can't find anything. Also, my ARC-8028 didn't come with the RJ-11 serial cable, so I'm SOL there.
Has anybody gotten these two to work together before?
Thanks.
You're really going to need the serial cable so you can use the CLI (you should be able to build one with an RJ11 and a DB9; RJ11 pin 1 is RTS, 2 is RX, 3 is TX, and 4, 5, 6 are ground; there are pinout pictures in the manual). First, do you have the enclosure powered up, with either two Molex connectors to the adapter or one PCIe power connector? If you bought this pre-owned, I am guessing that either you have a bad expander, bad power, or it is in Zone mode instead of Normal mode, which you can only change with the CLI (hence the need for the cable).
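Once a cable is soldered up, it's worth sanity-checking the link before digging into the CLI proper. A minimal pyserial sketch for that: the 115200 8N1 settings are an assumption based on what Areca gear commonly uses (verify against the ARC-8028 manual), and /dev/ttyUSB0 is just where a typical USB-serial adapter enumerates.
[CODE]
#!/usr/bin/env python3
# Quick check that a homemade RJ11-to-DB9 cable works at all.
import serial  # pip install pyserial

PORT = "/dev/ttyUSB0"  # assumption: your adapter's device node
BAUD = 115200          # assumption: confirm in the manual

# 8 data bits, no parity, 1 stop bit; short timeout so read() returns
with serial.Serial(PORT, BAUD, bytesize=8, parity="N",
                   stopbits=1, timeout=2) as link:
    link.write(b"\r")         # nudge the CLI so it prints a prompt
    banner = link.read(4096)  # grab whatever comes back
    print(banner.decode(errors="replace"))
[/CODE]
If a prompt comes back, the cable and UART settings are good, and you can move to an ordinary terminal program for the actual CLI session.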
Has anyone tried any of the 14TB HGST drives?
I've got an Areca 1882ix and wanted to soon purchase some drives with model WUH721414AL5204 (i.e. P/N 0F31052) and hook them up.
They're not on the compatibility list, but that just means no one has tested them; it doesn't necessarily mean they don't work.
I honestly see no reason why they wouldn't, just thought I'd ask here first.
You are correct: just because they are not on the hardware compatibility list does not mean they will not work. That being said, it doesn't mean they will work without issue either. Since you are looking at the SAS model you are far less likely to have issues (compared to some of the large enterprise SATA drives that have had problems with timing and dropouts), but until each is on the other's HCL (and sometimes even that is not a guarantee), you aren't assured issue-free compatibility. One of the reasons you pay through the nose for HPE, Dell/EMC, NetApp, etc. is the fact that if they sell you something and it doesn't work, you can at least be put in touch with the people who can make it work, even if it involves spinning up a beta patch to bring you back up. White box can get you better performance for (lots) less money, but you are further out of the loop for weird compatibility issues.
How in God's name is this thread already a decade old?
Cards just won't die. I still have an 1882i in daily use in my colocated Supermicro 846. Never imagined I'd still be using Areca 13 years later after purchasing an 1130ML in 2007. Things were a little more exciting back then. Here's what it looked like in action with a whole 7TB:
I've noticed recently that my old array of 4TB HGST drives (#000) has started taking longer and longer to do its monthly checks. The 8TB WD Red array (#001) has maintained its usual time frame. The old array had always taken 14-15 hours, all the way back to April 2018 (as far back as the log goes). I haven't started using the computer or the array in a different way; does this mean anything?
Probably just more full, and fragmented.
Curious. I know it's different since it's spread across disks, but Windows does defragment them weekly. Anything else I should do?
It doesn't much matter if the data is static; performance is degraded because new writes are much more likely to be fragmented. Everybody knows that if you want maximum performance you should keep your hard drive(s) less than full. If you're under 80% full and still seeing poor performance, you might try a real defragment, which I don't think Windows does. Auslogics is good, but there are others.
They have been at 99% full since early 2018, when I filled up the unused space with Burstcoin plots. As time goes on I delete plots as I need space. On this array, 17TB out of 21.7TB usable are Burstcoin plot files that haven't changed since then. The bulk of new data goes to the other array (#001), so I would say easily 95-98% of the data on this array has been static for a long time.
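If anyone wants to put numbers on a slowdown like this, the event log can be saved out of the web GUI and the check durations computed from it. A rough sketch, assuming a plain-text export where each line starts with a YYYY-MM-DD HH:MM:SS timestamp and the check events contain "Start Checking" and "Complete Check"; the exact wording varies by firmware, so adjust the patterns to whatever your export actually says.
[CODE]
#!/usr/bin/env python3
# Rough sketch: measure how long each scheduled volume check took,
# from a saved Areca event log. Log format is an assumption; tweak
# the regex and the event strings to match your firmware's output.
import re
from datetime import datetime

STAMP = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(.*)")
started = None

with open("areca_events.txt") as log:
    for line in log:
        m = STAMP.match(line)
        if not m:
            continue
        when = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        event = m.group(2)
        if "Start Checking" in event:
            started = when
        elif "Complete Check" in event and started:
            hours = (when - started).total_seconds() / 3600
            print(f"{started:%Y-%m-%d}: check took {hours:.1f} h")
            started = None
[/CODE]
Plotting those durations month by month makes a gradual trend much easier to spot than eyeballing the log.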