ARECA Owner's Thread (SAS/SATA RAID Cards)

lol you sure could have fooled me. Every post you've made has mostly been defending the position of Areca or the OEM, including this post (minus the last small note that you are not arguing on the part of the manufacturer lol)

http://hardforum.com/showpost.php?p=1038879653&postcount=1753

No, I am just giving you the realities of how things do work as opposed to how we would like them to work.
If you don't like how things are, then sell your Areca and buy something else where you feel you will be better treated/supported. By all means vote with your dollars. You will find every HBA manufacturer works the same way. Areca, LSI, etc. all have QVLs, and if you use anything outside of them they won't guarantee compatibility.
It also goes beyond HBAs. Use ink that isn't certified by your printer manufacturer, and they won't guarantee any quality results or that you won't get streaks down the page. (Even worse, with Xerox Solid Ink printers, for example, third-party ink actually destroyed the print heads, and Xerox wouldn't replace them under warranty, which was upheld even in the face of the Magnuson-Moss Warranty Act.) If a manufacturer puts forth a list of compatible items and you choose an item that isn't on the list, you can sic your lawyers on them if you have a problem, but in the end, as I said, this is how things are now and NOT how I (or you) would like them to be.
 
I was wondering if anyone could help me out. I have an Areca ARC-1882IX-24 with one RAID 1 array for a VM and 6 drives in RAID 6. They are all WD RE4 1TB drives. I have already had 2 drives arrive bad from Newegg and am getting replacements. I came into work this morning and one had dropped out of the RAID 6 array. I just got the server built last week and have not put it into production yet. Good thing I have been testing it. I was just wondering if there is some setting I am missing that would make this happen, or if RE4s are just junk anymore. My RAID log just says read error. I then put the drive in my desktop and ran WD's Data Lifeguard tool. It won't pass SMART. Any help would be appreciated.

Regardless, I am running out to Micro Center because I saw they started carrying the same drive. Wondering if maybe Newegg got a bad batch. Hopefully these will work a little better. I was just wondering if someone had insight into a setting I might have wrong that could be making these drives fail.
 
Did you get all the drives from the same reseller at the same time? Are they very close in Serial Number range?
 
Yep, got them all from the same reseller at the same time. They were very close to the same serials. Well, I just got one from microcenter this morning but I haven't installed it.
 
Yeah, we have seen a lot of SIDS ("sudden infant death") failures, and they usually come in bunches. We try not to mix the same lot code or close serial number ranges in the same array, to minimize this particular problem.
 
Well, thanks for the heads up. I thought it might have been something I did. Besides running the WD SMART tester, is there anything else I can do to make sure the drive is OK?
 
You can run the drive(s) through their respective manufacturers' drive fitness tests; run them for 24-48 hours to shake out any early-death drives before putting them into production. As an alternative, you can run DBAN for enough passes to fill 2 or 3 days; that also makes weaker drives drop out.
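
If you'd rather script the burn-in on a Linux box instead of booting DBAN, a rough equivalent is a few zero-fill passes in a loop (a sketch only - it assumes the drive is /dev/sdX, and it destroys everything on that device, so triple-check the name):

# three destructive full-surface write passes
for pass in 1 2 3; do
    echo "zero-fill pass $pass"
    dd if=/dev/zero of=/dev/sdX bs=1M
done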
 
Hi! First post, I've been lurking for a few months... :)

I have an ARC-1882IX-16 with six 3TB HDs in RAID 6. The drives are Seagate ST3000DM001s (can't afford enterprise drives until prices drop :( )

I'm looking for some suggestions on settings, as this is my first RAID controller...

I had the Background Task Priority set to 80% while I initialized the drives. Should I set it to Low (20%) now?

Anything else I should change? :)

I'm going to use this system to replace my existing WHSv1. I have WHS 2011 running in Hyper-V on Server 2008 R2, on an i5 with 16GB of RAM. I created 2TB volumes so I can back them up using the 2008 R2 server backup feature. I currently use my WHSv1 to back up and share media to my 5 computers.

Posted WHSv1 system in: http://hardforum.com/showthread.php?p=1038896915&posted=1#post1038896915 :cool:

Thanks! :)
 
Welcome ipeefreely. For home use I'd leave background task priority at the default 80%; there's no reason not to have the fastest rebuild rate in a home usage scenario. The reasons for setting it lower would be if copying files to/from the array were critical during a rebuild, or a scenario like a multi-user server in a business setting where you don't want productivity affected by a rebuild hogging most of the I/O. At home, in the event of a drive failure, the rebuild is going to tend to be your highest priority.

You can set the staggered spinup more aggressively to reduce the time you're waiting for the pool to wake back up on access; 0.7 and 1.0 are fine with those drives. Those drives, by the way, are working well in my torture testing so far (16 x Seagate ST3000DM001 in RAID6): I've created/scrubbed/destroyed/recreated the raidset over 40 times over the past month with an AutoIt script, with zero errors in the event log and no drops or SMART anomalies. I'll give it 6 more months before I'm satisfied, but they're working out, and damn they're fast for spinners thanks to the 1TB platters.

Lastly, make sure you're on the latest Areca firmware rev.
 
Thanks for the reply odditory! :)

Keeping it at 80% makes sense (I wasn't sure what would be best for home use)! :D

Thanks for the comments, it's good to hear you're having good luck with the 3TB Seagates! :)

I upgraded to V1.50 2012-02-16, which I believe is the latest...
 
Then you're all set; defaults are pretty much fine. The only thing I'd question is enabling the Low Power Idle and Low RPM Mode commands - not sure the Seagates honor those modes, as they're somewhat legacy (a lot if not most desktop-class hard disks don't), but if you haven't seen any Timeouts in the event log then I guess it's okay. Thing is, you've only got 6 drives, so having them constantly switching power modes for purposes of power saving is splitting hairs to the tenth power. The critical one, of course, is Spin Down, which is fine at 40min; I tend toward the highest value, 60min, to avoid unnecessary cycles, since I have quite a few 24-disk raidsets, which means having to wait a while for spinup if a set goes to sleep too quickly.
 
And yeah, about the Seagates: I've deplored them as a brand, since every time I've bought a batch of eight or so for testing over the past 6 years - whenever they came out with a new-gen drive - it's been nothing but disaster.

So taking the plunge and clicking the buy button on these new Seagate 3TBs was painful, but like everything, Hitachi's reign as the go-to brand for home hardware RAID had to end, and now we must pick up the pieces and find alternatives.

And as it turns out, Seagate might have finally NOT blown it with this 1TB/platter generation.
 
Hey everyone
I'm new to this forum and wanted to ask if Areca fixed the multi-array problem with their latest controller, the 1882-ix.
I ask because I want to buy one of these with 24 slots, but I want to build more than one array, each with up to 5 disks in RAID 5 mode.
 
Thanks again odditory! :)

I set the Low Power Idle and Low RPM Mode commands to Disabled. Probably better not to mess with them! ;)

I'll probably start moving stuff over this weekend, hopefully everything goes well! :D


Edit: I forgot to ask... what should I set the Volume Checking to? I had it set to 2 weeks during testing and it takes about 5hrs 45 mins to complete.

Thanks again! :)
 
Well, thanks for the heads up. I thought it might have been something I did. Besides running the WD SMART tester, is there anything else I can do to make sure the drive is OK?

Aside from checking SMART logs for indicators of a failing drive (reallocated sectors, etc.), you can run other diagnostic tools, which are usually the equivalent of a sector scan of the entire device.

If you're running a Linux OS, you can run the "badblocks" command via a terminal session. Be aware that the -w flag will overwrite existing data.

Badblocks man page ("man badblocks" from terminal)
http://linux.die.net/man/8/badblocks

I usually run badblocks followed by a zero fill for 2-3 passes to stress test a drive, and then check the test output and SMART logs. Through this initial test process you will usually identify most faulty drives, or drives which may be prone to infant mortality (failure within the first 6 months). Depending on the size of the drive and the number of passes, this process can take 1-2 days.
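
For anyone who wants concrete commands, here's a minimal sketch (assuming a Linux box with smartmontools installed and that the drive shows up as /dev/sdX - the -w test is destructive, so triple-check the device name first):

# baseline SMART attributes before the test
smartctl -a /dev/sdX
# destructive four-pattern write/read surface test, verbose with progress
badblocks -wsv /dev/sdX
# afterwards, re-check reallocated/pending sector counts
smartctl -a /dev/sdX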
 
Odditory, did you happen to test Seagate ST3000DM001 on 12xx/16xx series cards running firmware 1.49?
 
I have 6 of them running R6 on a 1231ML (1.49) in a box we are using for surveillance camera backup storage. It has been running without incident for about 4 months now.
 
No, I got rid of my 1280- and 1680-gen cards a long time ago. However, I would be very surprised if they didn't work with the 1280, since it's native SATA, and I'd expect them to work with the 1680, excepting the 1680ix models due to the onboard expander, which historically had some problems with Seagates (dropouts); I'd want to do some extensive testing before trusting data to them in that scenario. On the bright side, I think some of the old incompatibilities between certain older controllers and older drives had to do with firmware issues on the controller and/or the drive, and with the newer SATA-III drive controllers/firmware and more recent Areca firmwares, things have evolved far enough that most or all of those old glitches are gone, even on Areca V1.49.

You could always get a few drives from Amazon or wherever and test them, and if there are problems there's always the return policy.
 
Edit: I forgot to ask... what should I set the Volume Checking to? I had it set to 2 weeks during testing and it takes about 5hrs 45 mins to complete.

I'd consider a scrub every 14 days excessive with RAID6, but that's just me; granted, I have a separate backup copy of every raidset and run Hitachis, which have historically gone years without developing bad blocks, so I only bother every 3 months.

But it comes down to personal preference and there's really no one value for everyone.
 
ST3000DM001 lists 1yr warranty. What life span do you hope to get out of them in production?
 
Thanks for the feedback ;)
 
I haven't been paying much attention to Seagate lately, but a few weeks back I was reading that some consumer grade retail boxed Seagate drives are shipping with a 5 year warranty.

I can't confirm this so you'll need to research or maybe someone else with knowledge on the subject can comment.
 
OEM drives are 1 year. Some retail-pack drives show 5 years on the box and supposedly show 5 years on the Seagate warranty-check website. That said, Seagate has cut warranty terms in their system before, and I wouldn't put it past them to do it again.
 
Folks,

I have had my Areca 1260 for a few years, and it has been completely trouble-free. Unfortunately, that trouble-free streak came to an abrupt end a few days ago. My computer started randomly freezing and becoming completely unresponsive to all input. After some troubleshooting, I narrowed the cause down to the Areca card. I yanked the Areca card, rebooted the box, and everything worked great without any lock-ups.

The next step was to get my RAID array back online. I inserted the card back into the mobo and was immediately greeted with the warning beeps. The following message also popped up:

FW detected: **SDRAM 1st 128K error** !!!!!!
No BIOS Disk found! RAID Controller BIOS not installed!
Some volume(s)/RAID controller(s) failed. Press Tab/F6 to enter controller SETUP menu. 30 second(s) left <ESC to quit>...

Pressing either Tab or F6 results in everything freezing. My hypothesis was that the RAM module is faulty, so I replaced the existing RAM with the exact same spec Transcend module (256MB DDR PC3200 400MHz). I still get the same error.

Any ideas?

Thanks in advance
 
Well, if you replaced it with the exact same type DIMM and you are still getting the same result, I would say it is time to replace the card.
 
Looks like I might hold off on Seagate. This is definitely not a quiet drive

http://forums.seagate.com/t5/Barrac...0100-Barracudas-Making-Weird-quot/td-p/161356

Yeah, I've read about that, but I don't think I have the issue; mine were removed from their external enclosures and run a slightly different firmware rev, which may have already had APM/power management disabled and left power functions in the hands of the USB/SATA bridge. I'd previously read the chirping was related to an older firmware rev, but reading more about it now: if it is just a bandaid fix (supposedly the chirping is just barely audible after the "fix" but still there) and unnecessary load/unload cycles are still happening, then I'll have to see how Seagate handles (or mishandles) this and what develops before considering more of them.

Precisely these kinds of issues are why I'll vet a newer make/model of drive I'm not already familiar with for 6-10 months. I'm still buying Hitachis for expansion of more critical storage, given their statistical likelihood of outliving their now-junk warranty anyway (when I say junk, I mean the fact that people are already getting WD Greens :headdesk: as replacements on Hitachi 5K3000 RMAs).
 
@aprikh1: The first thing I would do before going any further is disconnect those drives from the controller; the last thing you want is an unstable controller potentially going spastic on your disks. It's not likely, but you want to minimize risk while troubleshooting the controller.

The second thing I would do is email Areca support just to make them aware of the issue. <support (at) areca.com.tw> It's a standard procedure I always recommend, so they stay aware of what's going on with their cards rather than being off the hook when people find a fix on a forum.

Third, I would remove the RAM, blow any dust out of the RAM slot, make sure there's no dust or residue in the PCI slots, clean the PCI connector on the Areca controller with isopropyl alcohol and a soft cloth, etc. Common sense stuff and not likely the culprit, but some people live in humid environments where moisture and dust combine into a sort of residue that, given enough time, can wreck electronics and specifically their conductive surfaces.

Fourth, do not stress that you might have lost data, as people tend to do in these situations. It is extremely unlikely.

Lastly, how long ago did you buy it? I'm guessing it's probably out of warranty.
 
odditory,

We are exactly on the same page. All of the above have been performed. As for the date of purchase, I think I bought this thing in '09, but I am not 100% positive. I need to go through my email archives and dig up my Newegg receipt. The warranty on the card is 3 years, correct?

I also sent Areca technical support an email, so we'll see what comes of that.

Finally, can someone confirm that my choice of RAM is correct? I am using a Transcend 256MB DDR PC3200 400MHz non-ECC module.

Thanks again!
 
You said you replaced it with the "exact same spec" module, but there is more to the spec than "256MB DDR PC3200 400MHz". Can you take a pic of the old RAM? It's been a while, but I could swear the 12xx's were already using parity RAM if not ECC. A parity chip is most commonly a ninth chip, usually situated in the middle of the chip arrangement, that holds checksum data; so, bottom line, an Areca will not generally accept a plain old RAM stick from an old PC or something.

(example image of a parity memory module)
 
The 11xx/12xx models will take both ECC and non-ECC memory up to 4GB (the SODIMM versions take less). They are picky about ranks, density, and configuration in the larger stick sizes, however.
 
Yeah, I've read about that, but I don't think I have the issue; mine were removed from their external enclosures and run a slightly different firmware rev, which may have already had APM/power management disabled and left power functions in the hands of the USB/SATA bridge. I'd previously read the chirping was related to an older firmware rev, but reading more about it now: if it is just a bandaid fix (supposedly the chirping is just barely audible after the "fix" but still there) and unnecessary load/unload cycles are still happening, then I'll have to see how Seagate handles (or mishandles) this and what develops before considering more of them.

I read the entire thread after I posted the link. I have to agree that the information about APM and load/unload cycles incrementing from head parking was equally concerning. Either this is a bug in Seagate's consumer line or a revenue strategy based on planned obsolescence (increased wear and reduced MTBF == shorter life and higher turnover).

Precisely these kinds of issues are why I'll vet a newer make/model of drive I'm not already familiar with for 6-10 months. I'm still buying Hitachis for expansion of more critical storage, given their statistical likelihood of outliving their now-junk warranty anyway (when I say junk, I mean the fact that people are already getting WD Greens :headdesk: as replacements on Hitachi 5K3000 RMAs).

I managed to pick up a spare 5K3000 new recently, but I'm already dreading a future RMA. HGST might still have some stock for RMAs if you know the right person to contact, but I haven't verified this first hand.
 
Either this is a bug in Seagate's consumer line or a revenue strategy based on planned obsolescence (increased wear and reduced MTBF == shorter life and higher turnover)

I can't buy the revenue strategy as a theory; not that these companies aren't total whores capable of it, but I don't think they're that well thought out, nor would a strategy like this fly up and down the corporate chain. More significantly, premature deaths within warranty due to excessive cycles would only incur additional cost to Seagate and another P.R. fiasco, since some people are estimating a drive could die in as little as a few months given the rate of excessive cycling relative to the spec'd lifetime max. This really smells more like a bug than a desperation move. Remember also that they're enjoying a market with half the competition gone, between them and WD buying everyone else out.

As for RMAs on Hitachis: aside from just selling any WD Green you receive back on a Hitachi RMA and putting the money toward another Hitachi, an argument can be made for buying extra drives - even secondhand/used/expired-warranty ones - and just doing your own RMAs, especially if you have a significant quantity of existing drives. For people with, let's say, 16+ drives, the math is there for buying cheaper drives with shorter warranties plus, say, 10-20% additional in spares, and coming out ahead versus the 5yr-warrantied or enterprise drives.

Example: I bought 16 x external 3TB Seagates, on sale at a price 33% less than the internal/bare/1yr-warranty OEM version. Removed from the external case, the warranty is void. However, with the money I saved I could buy 8 more of the cheaper drives and do my own RMAs, using them as hot or cold spares, scratch space, whatever. It's very unlikely that, after a 30-day torture test (basically the return period), 8 out of the original 16 drives would fail within 1yr, unless it's a bad batch or something (which in itself is rare). Advantage: cheaper drives. A lot also depends on usage scenario - in a business there are other variables to consider, but for home media storage, who cares - especially with systems like FlexRAID that allow an arbitrary number of parity drives (example: 16 disks for data + 4 parity disks). This certainly isn't for everyone, but it's something to consider.
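
To put rough numbers on the 33% figure above (hypothetical prices, purely to illustrate): if the bare 1yr-warranty drive were $180, the external version at 33% less would be $120. Sixteen externals cost 16 x $120 = $1,920 versus 16 x $180 = $2,880 bare - a $960 savings, which buys exactly 8 more $120 spares, i.e. 50% spare coverage for the same total outlay.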

The idea came years ago when I was looking at a corporate Verizon bill with 200 lines on it, with a $6.99 insurance fee per line to cover lost/stolen phones. That was $1,400 a month in insurance, and yet we were only replacing a phone maybe every 3 months. So I stripped the insurance fees off all the lines save for a few problem people's, then simply paid full price for replacements -- significant cost savings. Our corporate VZ sales rep had no counter-argument, just a grin.
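
(The arithmetic: 200 lines x $6.99 = $1,398/month, roughly $16,800/year in premiums, versus about 4 replacement phones a year - even at a hypothetical $600 full retail each, that's only ~$2,400/year out of pocket.)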
 
Just a reminder: when buying breakout cables, make sure you get the right ones, and don't do what I did and order the reverse breakouts by mistake.

I was pulling my hair out trying to figure out why my new 1880ix was not picking up drives until I suspected the cables and checked them.
 
I can't buy the revenue strategy as a theory; not that these companies aren't total whores capable of it, but I don't think they're that well thought out, nor would a strategy like this fly up and down the corporate chain. More significantly, premature deaths within warranty due to excessive cycles would only incur additional cost to Seagate and another P.R. fiasco, since some people are estimating a drive could die in as little as a few months given the rate of excessive cycling relative to the spec'd lifetime max. This really smells more like a bug than a desperation move. Remember also that they're enjoying a market with half the competition gone, between them and WD buying everyone else out.

Bear with my speculation, but I'm a pessimist by nature.

Storage manufacturers have been testing platter-based storage devices for several decades. I'd wager that Seagate and WDC have testing data that allows them to predict reliable MTBF estimates under given conditions or programmed behaviors. (MTBF information is normally published for all devices.) However, Seagate no longer publishes MTBF figures for consumer-grade drives.

Seagate Barracuda
MTBF: not listed
Power on: 2400 hours (total --> it does not list "hours per year")

http://www.seagate.com/files/staticfiles/docs/pdf/datasheet/disc/barracuda-ds1737-1-1111us.pdf


Seagate Constellation ES.2
MTBF: 1.2 million hours
Power on: 8760 hours per year

http://www.seagate.com/files/www-co...cs/constellation-es2-fips-ds1725-5-1207us.pdf
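
(For scale: 8,760 hours per year is simply 24 x 365, i.e. rated for continuous 24/7 duty; if the Barracuda's 2,400-hour figure is annual, it works out to about 6.5 hours of power-on time per day - a desktop duty cycle rather than a server one.)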


In 2011, consolidation within the storage industry resulted in two main players (Seagate and WDC) and one lagging competitor that may or may not decide to ramp up production of consumer-grade platter-based storage devices.

Now consider the timeline of events...

03/07/11 - WDC announced a buyout of HGST
04/19/11 - Seagate announced a buyout of Samsung

12/15/11 - WDC announced a reduction in warranty period from 3 years to 2 years on consumer models
http://www.dailytech.com/Quick+Note...rpio+HDDs+from+3+to+2+Years+/article23529.htm

12/18/11 - Seagate announced a reduction in warranty period from 5 years to 1 year on most consumer models (Barracuda models).
http://www.dailytech.com/Seagate+Jo...s+Down+with+1Year+Warranties/article23545.htm


It's likely that the WDC and Seagate mergers were initiated independently of each other. It's generally assumed that Seagate's buyout of Samsung's storage division was in response to WDC buying Hitachi's storage division. It's a much tougher stretch, though, to believe that the moves to reduce warranty periods occurred without extensive discussion and collaboration between Seagate and WDC. With warranty periods reduced and Seagate and WDC controlling nearly 90% of the platter-based storage market, the likelihood of these players exerting monopoly power is high.

We've already seen some results of monopoly power and collusion in the form of inflated price points for Q1-Q2 2012. Given Seagate's new market position and the number of competitors left, it isn't beyond speculation that Seagate may also be leveraging their position and intentionally shortening the life of products to increase revenue. Seagate has so far more or less refused to release a tool which disables APM (responsible for head parking). Seagate may well have reliable test data accumulated over the years which predicts MTBF based on load/unload cycles. If Seagate can accurately predict failure rates within 5% based on a predetermined number of load/unload cycles, it isn't unimaginable that Seagate could program behavior into their firmware to park heads at a rate which pushes a drive to fail somewhere between 1-2 years, just outside the predetermined warranty period.

Would customers be outraged by this? Sure. What are you going to do, though? Go to WDC? What if WDC jumps on board in a race toward the bottom? Who's left? Toshiba? Maybe in a few years, if they make some progress with the Hitachi production lines acquired as a condition of WDC's Hitachi buyout.


As for RMAs on Hitachis: aside from just selling any WD Green you receive back on a Hitachi RMA and putting the money toward another Hitachi, an argument can be made for buying extra drives - even secondhand/used/expired-warranty ones - and just doing your own RMAs, especially if you have a significant quantity of existing drives. For people with, let's say, 16+ drives, the math is there for buying cheaper drives with shorter warranties plus, say, 10-20% additional in spares, and coming out ahead versus the 5yr-warrantied or enterprise drives.

If Hitachi does have stock of 5K3000/5K4000 or 7K3000/7K4000, I'm sure it's limited, but they may still have some if you know who to contact. The alternative, like you mention, is to scrounge fleabay or wherever else you can find replacements.


Example: I bought 16 x external 3TB Seagates, on sale at a price 33% less than the internal/bare/1yr-warranty OEM version. Removed from the external case, the warranty is void. However, with the money I saved I could buy 8 more of the cheaper drives and do my own RMAs, using them as hot or cold spares, scratch space, whatever. It's very unlikely that, after a 30-day torture test (basically the return period), 8 out of the original 16 drives would fail within 1yr, unless it's a bad batch or something (which in itself is rare). Advantage: cheaper drives. A lot also depends on usage scenario - in a business there are other variables to consider, but for home media storage, who cares - especially with systems like FlexRAID that allow an arbitrary number of parity drives (example: 16 disks for data + 4 parity disks). This certainly isn't for everyone, but it's something to consider.

I'm not beyond deal hunting and I've done it in the past for home use. I regularly scan sites like slickdeals and fatwallet.

The idea came years ago when I was looking at a corporate Verizon bill with 200 lines on it, with a $6.99 insurance fee per line to cover lost/stolen phones. That was $1,400 a month in insurance, and yet we were only replacing a phone maybe every 3 months. So I stripped the insurance fees off all the lines save for a few problem people's, then simply paid full price for replacements -- significant cost savings. Our corporate VZ sales rep had no counter-argument, just a grin.

Hopefully you got a bonus :)
 
Storage manufacturers have been testing platter-based storage devices for several decades. I'd wager that Seagate and WDC have testing data that allows them to predict reliable MTBF estimates under given conditions or programmed behaviors. (MTBF information is normally published for all devices.) However, Seagate no longer publishes MTBF figures for consumer-grade drives.

Re: your previous post - you are very thorough, my man. Impressed. I don't doubt they're using analytical data to calculate and optimize warranty periods to their advantage, but my impression, based on the statements of others, is that the chirping problem was much more severe and crippling than merely a shortened lifespan of 1-2 years. As always, it comes down to one's own testing for more clarity, and when I get a chance I'll do some testing on mine.

However, the lack of an APM mod tool is unforgivable. Presumably they're weighing admitting to a problem that actually needs fixing (and dealing with both the bad p.r. from the blogosphere and the tech-support costs of spoon-feeding the people who just 'heard' or read there was a problem and don't know what to do next) vs. waiting and hoping the outcry doesn't proliferate enough to be embarrassing.

I wonder if this all won't be the stuff of a class action lawsuit somewhere down the line -- because they can deny that the excessive cycling was intentional, but they would have a harder time explaining why they were made aware of an issue and didn't make a fix available to those requesting one, especially when the fix is potentially as simple as disabling APM.
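
For what it's worth, on drives that honor the command, APM can be toggled from a Linux box with hdparm - a sketch only, assuming the drive is /dev/sdX; note these Seagates may ignore it, and the setting may not survive a power cycle:

# 255 disables APM entirely; values 1-254 set the aggressiveness level
hdparm -B 255 /dev/sdX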
 
Just requested beta firmware 1.51 from Areca, which supposedly enhances SMART transparency even further (being able to see the disks "through" the card even better), including full compatibility with smartmontools. I know houkounchi had it working with smartctl to some extent in Linux last year, but so far it's a no-go with the Windows version of smartctl.exe even on passthrough disks, and supposedly firmware 1.51 is the fix.
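
For reference, smartmontools already documents an Areca passthrough mode on Linux; a sketch of the syntax (the disk slot number and the /dev/sg node will vary by system):

# query the SATA disk in slot 3 behind an Areca controller
smartctl -a -d areca,3 /dev/sg2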

In other news if anyone needs an ARC-1880i I need to get rid of my cold spare since I don't need 5 anymore.
 
I really hope they make 1.50 and 1.51 available for all their cards.. including the 12xx series.. :(
 
Well, most of what I saw in the changelog for 1.50 was SAS-specific issues, fixes, or enhancements that aren't relevant to a 1280. It could also be that certain things they've changed or added in 1.50+ aren't supported by the hardware on the 1280s. But you could always email Areca support and express your concern.
 
I recently bought an Areca 1280ML and it's been running for about two weeks without a problem. I'm sure somebody will tell me off for something I've done though :p. Any help or input would be greatly appreciated.

System specs:
OS: Windows 7 Ultimate x64
CPU: Intel E5200
Mobo: Gigabyte G31M-ES2L
RAM: 4 GB DDR2 Corsair XMS2
Card: 24 port Areca 1280ML with 256 MB of RAM
Drives in RAID 6: 8 total (4x SAMSUNG HD203WI and 4x ST2000DL004 HD204UI)
Case: Norco 4224

For the past two years I have been running the same 4x SAMSUNG HD203WI in RAID 5 on a Dell Perc 5/i.

I set up Hdd Power Management the other day and used these settings:

Time To Hdd Low Power Idle: 2
Time To Hdd Low RPM Mode: 10
Time To Spin Down Idle HDD: 20

Since then I hadn't noticed any problems - until today. As everything seemed to be running fine, I moved the OS drive from a motherboard port to one of the 1280's ports, after reading in the manual that I should be able to boot from it. I didn't have any problems for a few hours until, seemingly at random in the middle of playing a video, playback started to get extremely choppy, as if the data wasn't arriving fast enough to display the video. I checked the server, and the Disk tab in Resource Monitor was telling me that the array was maxing out Active Time (%)/Queue Length while giving me less than a megabyte of throughput.

Next I checked the HTTP management software (ArcHttpSrvGUI) and saw that each of the ST2000DL004 HD204UIs had around five Timeout Counts, which were rapidly increasing, while the SAMSUNG HD203WIs had none. I turned off the power management settings because I thought they might have been causing the timeouts; that seemed to stop the choppy video playback for a bit, but the timeouts didn't stop and are around 150 now. The array status is Normal, and System Events Information shows no errors or warnings. My first thought now is some green-drive feature in the Seagates putting them to sleep, but why would that start nearly two weeks after I first set them up?

Also, is it just me or does the ArcHttpProxyServer service on Windows crash a lot? I had to make a batch file that remotely restarts the process and reopens the address in my browser, due to how often it crashed. I read some people saying the card's onboard Ethernet port was more stable; I will try that out once I sort out this current problem.
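
In case it helps anyone, here's a rough sketch of that kind of restart batch file (the host name, exe name/path, and port are assumptions - adjust for your install; psexec is from Sysinternals):

@echo off
rem kill the proxy process on the remote box
taskkill /S myserver /IM ArcHttpProxyServer.exe /F
rem relaunch it without waiting for it to exit (install path is a guess)
psexec \\myserver -d "C:\Program Files\MRAID\ArcHttpProxyServer.exe"
rem reopen the web GUI in the default browser (verify your port)
start http://myserver:81/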
 