ARECA Owner's Thread (SAS/SATA RAID Cards)

Regarding Areca and TLER/ERC/CCTL... well, guys, since going with 12TB drives I just have endless problems; i.e., in the end it does not work.
I have multiple Arecas: 1260, 1880, and 1882ix-24 (4 GB). The 1882 was even on a compatibility list for my HGST Ultrastar/Enterprise 12TBs, but no TLER/ERC/CCTL. Newest firmware on the Areca, newest BIOS, newest OS.
And with other drives (12TB IronWolf, non-Pro), TLER/ERC/CCTL works on the 1260 but NOT on the 1880, while the 1882 works again...

Areca support is somewhat unhelpful and basically just says: well, it recognizes the disk, so it's fine. I explained the problem carefully, but Kevin doesn't seem to bother.
I stress-tested the HGST 12TB drives on the Areca 1260 (with no TLER/ERC/CCTL), and it did not take long before some HDs just dropped out randomly (no surprise)... The drives are brand-new HGSTs and are fine. I have invested so much time in this BS, and even "both are on a compatibility list" helps nothing, as proven here. When you read the fine print, they don't do stress tests with most models anyway; then you can drop the whole compatibility BS altogether, since I have never had a single SATA disk that was not recognized by a controller, and I have a few hundred disks here.

Probably time to drop that Areca shit and move to ZFS.
I was very happy with Areca before, with smaller/older disks, which I used heavily for years; I did basically everything you can do with them... But it just doesn't work anymore.
Try posting those details here; you might get lucky and someone may help you. I just picked up my second Areca last year, and it's been great with my old 4TB drives as well as my new 8TB drives. The newness of your drives is likely the issue, but we need more information.
 

You mentioned 3 different cards, some of which are 10+ years old now. Please post the following information so we can better help you: the BIOS, boot, and firmware versions installed on each of your cards, and the specific drive models (including firmware revisions) that you are attempting, and whether they are connected directly or via a backplane. Are there any intermediary expanders between HBA and drive? Are they SAS or SATA? If SATA, are they negotiating the correct maximum link speed? Please post a complete log from the card. I know you said the dropouts are random, but if SATA, are they on the same fanout cable? Have you tried more than one server/power source? What motherboard and PCIe slot type (both physical and electrical) is the card mounted in? Is it a "server" motherboard or a consumer/prosumer gaming board? Do you have an nVidia or AMD PCIe graphics card mounted in the same machine/primary PCIe slot?
 
arnemetis:
It also worked (and still works great) with 12x 4TB, 12x 6TB, and even 4x 10TB (IronWolfs).
Until I upgraded to 12TB SATA enterprise disks (I tried several), I was the happiest Areca customer. They were rock solid and the performance is just beastly.
But since the 12TB ones, it's a complete mess. I tested so much and was endlessly frustrated (and I still am, since I still don't know what I should do now).


I already made two threads about this a while ago, for example:
https://hardforum.com/threads/no-tler-erc-cctl-on-wd-gold-12tb-hgst-ultrastar-12tb.1953576/
No one seems to use this combination, I guess...

mwroobel:
The 1882, for example, is also still on HGST's own HCLs (but I already mentioned that the HCL is really useless, since there is no stress testing on most models), so please don't pin it on the card being old; that makes me kind of upset, because it's exactly what everyone told me. OK, maybe you are partially right, so I specifically acquired the 1882 with 4GB ECC so that no one could say "don't use old bullshit"... and of course, still the same.
In short: of the SATA HDs (I had tested all usable enterprise 12TB SATA HDs available at the time, plus SOHO IronWolfs), only the IronWolfs (in some combinations) worked correctly with TLER/ERC/CCTL.
I tested on modern server boards as well as on prosumer/workstation boards (which had worked with Arecas for years); newest firmware means newest firmware everywhere. No expanders, of course, to rule everything out. No power supply issues. Different cables tried, of course. No backplanes; everything directly attached. PCIe slots which had worked for years with over 100TB attached (x16 and x8, PCIe 2.0 and PCIe 3.0). No graphics cards or other devices.
Oh, and just for fun I also tested with an HP SAS expander (the old/cheap one most people use): same results everywhere (only on the SAS cards, of course; the 1260 is SATA-only).
Dropouts during stress tests happen on a random hard drive in the 12TB array. The other controllers in the system (in this case the old 1260, with 4TB and/or 6TB drives and even four 10TB ones) are not affected and stay rock stable at the same time, as they have been for the past 6 years.
But this dropping behavior is expected in my eyes if TLER/ERC/CCTL is not working...

But I also tested standalone, with just one HD attached...
I'm really done with testing, and Areca has proved not to be interested.

I just cannot recommend it if you don't have a very good return policy. Feel free to try it out yourself... Only very few people seem to have these 12TB disks and to have tested around with them. Basically, all you need to do is hook a 12TB drive (WD Gold, Seagate Enterprise/Exos, or HGST) up to an Areca controller and tell people whether TLER/ERC/CCTL works or not. Nothing more is needed.

To be fair, TLER/ERC/CCTL also often did not work on normal HBAs (I tested on old consumer boards)... And hell, I don't even know if the HUH721212ALE600 REALLY supports TLER/ERC/CCTL (HGST support can't say either; they are totally clueless), and even the huge OEM drive manual says nothing explicit about it. But I know, for example, that the WD Golds 100% have TLER/ERC/CCTL (they promote it heavily, too). Well, it's not working so far.
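(If you want to check a drive yourself from a plain HBA under Linux, smartmontools can query and set the SCT ERC timers directly, assuming the drive implements the SCT feature set at all:

smartctl -l scterc /dev/sdX (shows the current read/write ERC timers)
smartctl -l scterc,70,70 /dev/sdX (tries to set both timers to 7.0 seconds)

If the first command reports that SCT Error Recovery Control is not supported, the drive simply doesn't expose it, whatever the marketing says. This only tells you about the drive itself, mind; whether a RAID card like the Areca issues the command on its own is a separate question.)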


Everyone can do what they want... but I just want to warn other people who plan to do the same: only try it IF you have a rock-solid return policy. We have nearly no info about this combination right now...
I guess most people here are from the US anyway, where you can get the cheap 8TB (shucked) drives; I would rather recommend going that way, since it's also much cheaper.
I've started to hate my "hobby", and that's not cool; that's the biggest issue of all. FreeNAS/ZFS was never my taste, since I'm a Windows guy and like the easy, rock-solid expandability and performance of Areca. But now I'll probably have to move anyway... I'm really pissed off, and I don't even know exactly who is to blame. I can say that I expected much more from Areca, since I thought they were a small, engaged company for whom storage is a passion. And I'm talking about enterprise gear here (cards and disks). And gosh, as Areca you would at least try to test it, right? (If only to tell the customer, in other words of course, that he's an idiot and it works fine on the Arecas...)
 
maxxmusterr:
You do present a unique scenario; very few people, I think, would be using Areca cards with new, unproven 12TB drives. I took a look at your other post as well, and it appears there is something off about these new 12TB drives. It's also worth noting these drives are (to me) insanely expensive, and at this time may be beyond Areca's outdated devices. Given the $400+ USD cost of these drives, I think you should be looking at a more expensive controller solution. I honestly don't know who's above Areca in this regard, but you're playing with some new toys that are obviously beyond them. I am sorry you had such poor support from them, but one of the ways Areca delivers value is solid hardware, so long as you don't need support. They're obviously too small to be testing all the new drives coming out. I think you'll have to start looking for a different brand which is on the cutting edge and making new controllers constantly, and you'll likely also get to enjoy the cost premium that comes with that. I cannot speak to what that brand would be, however; I don't even know how to approach this. Best of luck!
 
Well, I have some potentially good news. I looked back at the linked message from when we last spoke in January. Since then, Areca has updated their HDD HCL: where the December version I linked in that message had no 12TB drives at all, both your SATA drives and their SAS counterparts have now passed basic compatibility tests and are officially on the HCL. Once they start the stress testing, that may uncover changes that need to be made. Are you sure you updated your 1882 completely (all 4 files to 1.55 for the card, and 1.16 for the onboard expander)? Again, please post the complete log for the card. Do the drives ever drop out during array creation, or only after the array is up and available to the OS? What OS are you running, btw?
 
Hello guys,

Well, these drives were actually one of the "cheaper" options here in Europe; in fact among the cheapest enterprise disks, or even THE cheapest per TB at the time. We don't have those dirt-cheap 8TB WD drives. There are some drives that are cheaper per TB, yes, but not by much here, so I took the 12TBs for maximum density.
And they are PMR, fast as hell, have a 2.5 million hour MTBF, and are power-saving, which were all just pluses for me. They were technically simply the best choice, and also the cheapest per TB around. A no-brainer, you would think.
And HGST lists the 1882 controllers as compatible. The only size barrier I know of (and we all know it) was the 2TB mark, and since I had working 10TB drives, I trusted it even more.

The dropouts, I think, happen because of the deactivated TLER/ERC/CCTL.
The drives mostly drop out after they have pushed some TBs of data (usually just one of them at a time).

Btw, do you know where I could see whether TLER/ERC/CCTL is on or off for a specific HDD on an LSI RAID card? I couldn't find any info about it.
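(Side note, in case it helps anyone else on Linux: smartmontools can often reach drives behind an LSI controller through its megaraid passthrough, e.g.

smartctl -l scterc -d megaraid,0 /dev/sda

where the number after megaraid, is the drive's device ID on the controller. Whether this works depends on the specific LSI card and driver, so treat it as something to try rather than a guaranteed method.)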


mwroobel:
Thanks for the update. Honestly, I was so frustrated over the wasted time and money that I stopped following the updates on the HGST/Areca side, especially since it was clear that nothing was happening. Maybe some day it will look better.
And yes, the 1882 is fully updated. I flashed it very recently, when it arrived from the US (because, as mentioned, I got the 1882 specifically to rule out my "old" cards). Tested with RAID6 under Server 2012 R2 with ReFS, and later also (for testing) under Windows 10 (for the newest ReFS version).


Since I need a solution now (my file server has been basically full since then), I will probably have to move to FreeNAS anyway, even though I really don't like some aspects of it. The biggest one is simply that I don't know anything about BSD, and I usually only run stuff I know how to fix when it stops working. If something goes wrong on FreeNAS, I'm screwed.
Bottom line: if you have any Areca and plan to use 12TB enterprise disks, be cautious and expect it not to work. And only try it if you can return the stuff.
 
Again, could you please post a complete log from the card? And again, could you answer whether they drop out during the build too, or only once an OS is writing to them? Finally, did it happen with NTFS, or only with ReFS?
 
I need to upgrade my Areca 1882ix-16 to larger drives, ideally 12TB PMR. I have always been partial to HGST drives and currently have the well-respected 3TB models in my array. Unfortunately, this thread indicates there are/were significant problems with time-outs; the RAID-compatible error-recovery time limit feature is called TLER or CCTL depending on the manufacturer.

The thread linked above indicates the consumer-grade Seagate IronWolf 12TB as the only sure thing. Buying $1,600 worth of HDDs, finding they don't work, and then trying to get the vendor to accept a return (not!) seems like the wrong approach. Checking with the vendors (Areca, HGST) would be ideal, except we all know they are non-responsive.

Given that uncertainty, I guess I am left with using Seagate instead of the preferred HGST drives, unless I can find a vendor who will accept a return if CCTL proves not to be supported.
 
Unfortunately, being on the cutting edge of tech is usually costly. You could probably get away with ordering, say, two drives in order to test out TLER, no? Then at least the financial hit is a lot smaller, and you might be able to sell the drives to recoup most of your loss. I don't have enough experience here to give you any additional help, but I wish you luck.
 
I just expanded my RAID set (RAID 3, 4x 8TB), then went to "Modify Volume Set" and set the volume to 24TB.
I expected to go to Disk Management and see unused space on that drive that I could expand the partition into, but didn't. How do I get the OS to see the extra space?
 
To save myself some retyping, here is a link to an older post I did about the steps.
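(In short: once the volume set has been enlarged on the card, Windows still has to be told to rescan and grow the partition. Assuming an NTFS volume and no cluster-size ceiling in the way, an elevated diskpart session along these lines does it, where the volume number is whatever "list volume" shows for your array:

diskpart
rescan
list volume
select volume 3
extend

The same can be done from Disk Management via "Extend Volume..." once the unallocated space shows up after a rescan.)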
 
I don't suppose anybody has tried the new Toshiba MG07ACA14TA 14TB hard drives yet? Would love to start my new array with these, but as always, nobody can confirm they play nice with TLER and the Areca 1882ix-16.
 
Check the Areca and Toshiba HCLs, and if they both list it you should be OK. Keep in mind that you are looking to pair a bleeding-edge drive with an almost 8-year-old controller. Even if they go through the verifications, it will generally be 6-9 months after introduction, and they will usually certify the SAS versions first, and for their latest cards (1883 and 1884), with no guarantee that the firmware fixes will filter down to the older cards. Their HCLs are also meant for an enterprise environment, which means proper cooling, power, and host environment (e.g. a PCIe slot whose BIOS hasn't been optimized for CrossFire, SLI, or other consumer "enhancements" that shouldn't, but do, cause issues with HBA use, ranging from not seeing the card at all to spooky problems, whether or not they can be reproduced).
 
I checked Areca's HCL, but it hasn't been updated for 6 months. It doesn't seem like there are too many drives listed, though we know the list isn't exhaustive. You would think any decent HDD would have this feature by now, but sometimes the command to enable it has changed and the RAID card firmware hasn't been updated to match, which I think is the situation with the Hitachi 12TB.

My 8-year-old RAID card is just fine as far as I can tell; it would be a bit much to start buying $1,200 RAID cards every few years.
 
Looking for an 8-drive RAID10 recipe for the 1883x. My firmware only has a "Quick Create" option for 0+1; do later firmwares offer 1+0? Or is there a manual recipe for an 8-drive RAID10 (1+0)?
[EDIT]: mwroobel below is correct and my post above is all wrong; my "Quick Create" option is 1+0.
 
The Areca supports striped mirrors (RAID10), not mirrored stripes (RAID0+1).
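(For an 8-drive set, the difference looks roughly like this:

RAID1+0 (striped mirrors, what the Areca builds): data striped across four mirrored pairs [D1+D2] [D3+D4] [D5+D6] [D7+D8]
RAID0+1 (mirrored stripes): one 4-drive stripe [D1 D2 D3 D4] mirrored against another [D5 D6 D7 D8]

Striped mirrors survive more multi-drive failure combinations, since losing a disk only degrades its own pair rather than an entire stripe set.)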
 
Wow, you are so right, thank you! I seem to be getting dotty in my old age. I didn't remember 0+1, but when I had to build a new array last night I kept looking at the 1+0 option and reading it as 0+1...
 
Hi all,
do you think it makes real sense to upgrade from the 1882 to the 1883 model today?
I use standard SATA 6Gb/s drives, currently 6x 10TB Seagate IronWolf in RAID 6 for home storage: movies, MP3s, etc.

I've had an 1882-24 for two years and it works smoothly.

Thank you
 
With hard drives and your usage, you really won't notice any difference between the 1882 and 1883 series. I personally wouldn't spend the money to upgrade.
 
Was wondering, does anyone know what a safe threshold for the CPU temperature is?
I've got an Areca 1882ix-24, and during the summer, when it's very hot outside, it goes up to 70 degrees Celsius. Should I be worried?

Also, wanted to share some knowledge.
I've had the above-mentioned controller for 2 years now. During those 2 years I had a very annoying issue; I never suspected the controller, but I have since come to the conclusion that it was an incompatibility caused by it.
When I bought the card, I had a very old setup: Asus P5E3 Pro motherboard + Intel Q6600 CPU + Asus STRIX GTX 970 GPU. I have my PC connected to my TV via the GPU's HDMI, and sometimes the display driver would just crash while I watched something, even just YouTube videos. It was all random; I couldn't reproduce it. The PC would keep running apart from that, because I could hear Windows sounds; I just couldn't see a thing and had to restart to get an image back. Of course, I tried multiple display driver versions, reinstalling Windows, etc.

For those 2 years I blamed the GPU, because at the exact same time I got the controller, I also had the GPU RMA'd under warranty due to some other issues, and the same card was returned to me as fixed by the manufacturer. So I thought something had gone wrong in the repair process and they had messed up something with the GPU.
It would make sense, given that it was the display driver that kept crashing, right? Well, turns out no.
A couple of months ago I replaced everything in my PC except the controller and the attached drives: an Asus Prime X399-A motherboard + AMD Ryzen Threadripper 1920X CPU + Gigabyte RTX 2070 Mini ITX GPU. SSD, RAM, and everything else were replaced. Everything brand new.
I also bought 2 more drives to expand my drive pool. A day or so after the migration started, the display driver crashed again. I'm not sure exactly when, as I was not at home when it happened. The migration started on a Friday evening, I was away for the weekend, and on Sunday when I got back it had already crashed.

So I stared at a black monitor for 10 days until the migration ended... a bloody eternity when you have a pool of 10TB drives (expanded from 5x to 7x 10TB). Anyway, that's when I realized the controller was to blame.
This all happened sometime at the end of February. Out of sheer curiosity, I checked the Areca website and noticed that a new 1.56 firmware had been released at the end of January this year. I was running the previous 1.54 version. And yes, I do know the proper order in which to flash the BIOS, BOOT ROM, FIRM, and MBR files onto the controller.

After flashing everything to 1.56, the miracle happened: no more display driver crashes.
Although I was never able to reproduce the crash on demand, it usually occurred when heavily stressing the GPU; it was just a matter of time. So I tried everything: opened both Firefox and Chrome, each with its own instance of YouTube running, opened MPC-HC with the madVR video renderer and all its processing settings maxed out, and waited... Left it running for a good few tens of minutes. Nothing. No crash. And I've never had one since.

It had to be the firmware, because I've changed nothing else since then, neither software- nor hardware-wise.
You can check my signature for my components; the PSU is also a rock-solid 1000W Seasonic Prime Titanium.

So in case anyone else is having this issue, this may be a possible fix.
And yes, I know Areca is picky about hardware compatibility and I don't have a server-grade board. I was initially considering the latest workstation board from Asus (WS Z390 PRO), as I know their WS boards offer great compatibility, but I got fed up with Intel's inability to release a worthy CPU that is not overpriced... so I ditched Intel for AMD.

All is well now; I just had 2 years of hiccups :)
 
The 1882 series can go to 105C, so I would not be concerned about 70C. The card will alert if it gets too close to the threshold.
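(You can also watch it without rebooting: the temperature shows up in the McRAID web GUI under Hardware Monitor, and, if I remember right, Areca's command-line tool reports it as well, something like:

cli64 hw info

which lists the controller/CPU temperatures, fan speeds, and voltages. The exact command is from memory, so double-check it against the CLI manual.)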
 
I've got an Areca ARC-1220 RAID card which has some pin headers on the board for hard drive LED activity. In theory I know how to connect an LED to a header like that, but where do I put the LED? Note that I'm using the card in a regular PC case. So I'm wondering if some kind of cheap/generic "LED module" exists that I can mount in a 5.25" or 3.5" drive bay and then wire to the RAID card's LED headers, so I can see the activity on the front of my PC case. The closest thing I found is this: http://www.frozencpu.com/products/7...play_Fan_Control_Panel_-_Black_w_Red_LED.html

Or I could just drill 8 holes into a plastic 5.25" bay cover and hot-glue 8 LEDs onto it.
 
I retired my 1220 a few years ago, but it has standard 2-pin headers for LED activity for each channel. I think that panel you linked only has one HDD lead, and its ten LEDs just display patterns rather than one per drive. If you're running just one array, that will work, because if one disk has activity they all should. If you're running a bunch of single disks, or multiple arrays, you will want to look elsewhere. If you have eight activity LED leads, then yes, you could modify a panel to display them all.
 
Not sure whether this is the right place to ask, but I've got an Areca 1882ix-16 (the battery doesn't seem to register) that I'm finished using. It has 4 breakout cables to run 16 SATA drives; what might the card and cables together be worth to sell? I've had it for several years now and still have the original box.
 
People on eBay are listing them for $399 to $499, but I don't know if they are getting any bites. How much RAM is on your particular card?
 
Just checked, and it's got just the base 1GB.

Yeah, I looked at those, and thought I might offer it at $400 including the cables, plus shipping.
 
Hello everybody,
how does RAID 6 expansion work on an Areca controller?
Last week I added another disk to the array, and the migration is complete, but the controller still shows me only 30TB of space, both at 5x 10TB and now at 6x 10TB :-( Does anyone know what to do?
I thought the capacity would increase by itself.

01.png


02.png
 
I did this with my 5x 4TB array, going to 8x 4TB, two years ago, but I don't remember off the top of my head. When I get home I'll check my manual; I'm pretty sure I marked the page, as I was quite worried about losing my data.
Edit: It seems that yes, altering the size there is all you need to do. I pulled this page from my manual; it should be the same or very similar for you.
upload_2019-6-4_10-5-9.png
 
Yep, that's what you need to do. Expand the RAID set (which you've done), then the volume, and finally whatever partitioning you might have within your OS.
 
Hello all,
my problem is this: https://www.partitionwizard.com/partitionmagic/the-volume-cant-be-extended-clusters-will-exceed.html

"The volume cannot be extended because the number of clusters will exceed the maximum number of clusters supported by the file system."

Old capacity: 30TB. New capacity with the extended array: +10TB.


Volume size         Default NTFS cluster size (Windows Server 2003 and above)
7 MB – 512 MB       4 KB
512 MB – 1 GB       4 KB
1 GB – 2 GB         4 KB
2 GB – 2 TB         4 KB
2 TB – 16 TB        4 KB
16 TB – 32 TB       8 KB
32 TB – 64 TB       16 KB
64 TB – 128 TB      32 KB
128 TB – 256 TB     64 KB
> 256 TB            Not supported
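(The limits in that table all come from the same arithmetic: the Windows NTFS implementation can address at most 2^32 - 1 clusters per volume, so the ceiling is roughly cluster size x 2^32. With 8 KB clusters that is about 32 TiB; my 30 TB (decimal) volume is roughly 27 TiB and still fits, but the expanded 40 TB is roughly 36 TiB and does not, hence the error above. 16 KB clusters would raise the ceiling to about 64 TiB.)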

I had an array formatted with 8KB clusters :-(

That's why I can't extend it by the full 10TB of added capacity :-(
I need to back up all the data and reformat the entire disk array with a 32KB or 64KB cluster size.

A year ago, when formatting, I left the cluster size at the default value. This didn't occur to me.
the-volume-cant-be-extended-clusters-will-exceed-1.png
 
Yeah, that's something I never thought about! Looks like I lucked out going from 5x 4TB disks to 8x 4TB disks; 20TB to 32TB stays within the 8K cluster size just fine. My 8x 8TB array is reporting 16K, so I have no expansion possibilities there. On the flip side, it sounds like this is a good opportunity to set up a backup plan! For my cloud backup I use Backblaze; it goes quickly if you have fast upload. I've got 30TB backed up with them.
 
Unfortunately, there is no non-destructive way to expand this array. Your options are: back the data off to another box and recreate it (the best choice), or create a new array with the new drive(s) and move the data over one drive-sized block at a time, shrinking the existing array one drive at a time to add to the new one. The latter, while of course possible, is not recommended, as there is a lot that can go wrong, especially if you click the wrong box. The only reason to choose it is if you don't have enough free space to back up your array while you recreate it.
 

I've already backed up...

What happens if I completely reformat the old array with a cluster size of 32 or 64KB? Will I be able to add another drive in the future and then simply expand the existing array formatted at 32 or 64KB?
 
You can't non-destructively change the stripe/block size of an existing array. For all my media arrays, I use 64k blocks. Your ability to add drives will be based on your existing array geometry and how many additional slots/ports you have (in addition to how many drives you plan on adding). If you are creating a new array, I would recommend 64k or 128k, depending on whether you plan on expanding beyond 100TB in the next few years while utilizing the same drives.
 
Yes, that should work, as long as you once again keep the total array size under the maximum supported by the cluster size (32KB should be fine up to 128TB, I would think). That 8KB is what limited you, because you started at 30TB, so going to 40TB wasn't possible. I lucked out because I started with 20TB, so going to 32TB was possible within the same cluster size. I am now fully limited on both my arrays; I cannot expand either because of this. Of course, I don't plan on expanding these arrays further; I will likely retire the 8x 4TB array in favor of an 8x 12TB array a few years from now.
 
Thank you. And is it better to choose a 32KB cluster in Windows, or rather 64KB? (The colleague above is thinking of stripe size.) A 128KB cluster cannot be created under Windows.
 
Pick 32KB if you think that absolutely, without exception, you will never expand the array beyond 128TB (so twelve 10TB disks would be your max); otherwise pick 64KB.
 