Want to upgrade from 5x1TB raid5 areca 1220

Hello everyone,

My signature is a little out of date for my system, but I am still running 5x 1TB (3.6TB usable) Seagate drives (the ones that had all the firmware issues a few years back). I've nearly filled it and I need to upgrade. This setup has lasted me over 5 years and I believe the Areca may still have some life left in it, but I haven't kept up with drive storage technology. I'm here with some questions and would like to hear any comments or suggestions on my proposed build.

First, I'm thinking of 8x 3TB or 4TB drives in RAID 6 for the new array. I know WD has several series of drives now, and I would like to know what to look for in drives for such a setup (it doesn't have to be WD, that's just an example). I know it's hard to guarantee, but reliability is far more important to me than performance. I did have one drive succumb to the firmware bug, and when I received my replacement drive and started rebuilding the array, I wondered what would happen if another drive failed during the rebuild. That's why I'm going RAID 6. The bulk of the data is media written once and then just read occasionally, but there are also a lot of smaller files being constantly changed or added (work files).
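
For a rough sense of the usable space each option gives, here's a quick back-of-the-envelope sketch (just my assumption that RAID 5 costs one drive of capacity and RAID 6 costs two):

Code:
# Rough usable-capacity sketch: decimal TB drives in, binary TiB (what the OS reports) out.
def usable_tib(drives, size_tb, parity_drives):
    return (drives - parity_drives) * size_tb * 1e12 / 2**40

print(round(usable_tib(5, 1, 1), 1))   # current 5x 1TB RAID 5  -> ~3.6 TiB (matches the "3.6TB usable")
print(round(usable_tib(8, 3, 2), 1))   # proposed 8x 3TB RAID 6 -> ~16.4 TiB
print(round(usable_tib(8, 4, 2), 1))   # proposed 8x 4TB RAID 6 -> ~21.8 TiB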

I have a 3TB internal drive that I back my really important files up to, but since the array filled up I have also been using that drive to dump new files that won't fit on the array. I now have about 250GB free on both the array and this drive.

One challenge with this upgrade is the transition of data from old to new. Since I want to reuse the controller card, I think I will be doing a lot of shutdowns and cable swapping to accomplish this. Here's what I think will happen (with a rough time estimate sketched after the list):
1) Disconnect old array drives from controller.
2) Connect the new drives to the controller and set up the new raidset.
3) Transfer contents of 3TB drive to new array.
4) Delete contents of 3TB drive, shut down.
5) Disconnect new array and reconnect old array.
6) Transfer data from old array to 3TB drive, shut down.
7) Disconnect old array and connect new array.
8) Transfer data from 3TB drive to new array
9) Repeat steps 4-8 as required for all data of old array.
10) Transfer important data from new array to 3TB for backup.
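
And here's that rough time estimate, just a back-of-the-envelope sketch; the ~100 MB/s sustained copy speed is my own guess, and the data sizes are only approximate:

Code:
# Rough estimate of the shuffle: ~3.35 TB still on the old array (3.6 TB minus ~250 GB free),
# ~2.75 TB already parked on the 3TB staging drive, and an assumed ~100 MB/s sustained copy speed.
import math

on_old_array_tb = 3.35        # data to shuttle off the old array
staging_tb      = 3.0         # the single 3TB drive, emptied each pass
copy_mb_s       = 100         # assumption; adjust to whatever the copies actually sustain

passes = math.ceil(on_old_array_tb / staging_tb)          # old array -> staging -> new array
total_copied_tb = 2.75 + 2 * on_old_array_tb              # initial dump, then everything moved twice
hours = total_copied_tb * 1e12 / (copy_mb_s * 1e6) / 3600

print(passes, "shuffle passes, roughly", round(hours), "hours of copying")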

So that's the plan right now. I'm open to constructive criticism, so please don't hesitate to comment. Thanks!
 
Go to Best Buy and buy a 4TB external hard drive. Transfer your RAID to the external drive. Build your new RAID with new drives. Copy the data from the external drive to the new RAID. Return to Best Buy and say that your old computer would not see more than 2TB on the external drive, that you read on the Internet that this can happen, and that you just want to return the drive.

You'll just be out the money for the drive until you return it. Make sure the store you buy it from does not have a restocking fee specifically on hard drives.

Now, to be nice, when I did this I went to Fry's and specifically looked for previously returned 4TB drives, so that when I returned it they didn't have to knock anything off a new drive for being an open box.
 
Do you have a spare controller card? If not, that's your single point of failure: if it fails, you'll be up a creek.

Buy your 8 new HDDs and another Areca 1220 (or another controller that's interchangeable). Build your new array on the new Areca. Copy the data across. Remove old Areca and old array. Much simpler.

Something else: how are you backing up all this data?
 
Thanks for the replies, guys. I don't know how I feel about borrowing a drive like that. I never really thought about the RAID card being the single point of failure; I always figured that if the card died, the data was safe and I just had to wait for a replacement. How would I know if another card is interchangeable? The card is also SATA 2; I figured that doesn't matter with mechanical drives? I had mentioned the single 3TB drive is used to back up my critical files; it isn't feasible to try to back everything up, so that's part of why I'm using RAID, to at least avoid loss from a single drive failure. Keep the suggestions coming!
 
You can migrate your array to any Areca card. If your controller fails, just get another and the array will show right back up.
 
There are some cases where arrays created with newer controllers/firmware or by SAS controllers can't be read by SATA controllers, but all the new Areca controllers are SAS/SATA and can read all the old legacy RAID formats, so you can easily move the array to a new Areca controller if this one failed.
 
Thanks for all the supportive comments, guys! It's good to know that if my card bites it, I can just use it as an excuse to upgrade. I suppose it does lock me into Areca, but I think I can live with that. I'd love some discussion on what drive series to consider. I looked at WD Reds, and I know to take them with a grain of salt, but the Newegg reviews were not so good. Also, I do shut the server down fairly regularly; I don't keep it up 24x7 much anymore, if that matters.
 
From personal experience (nearly 100 drives for personal usage) and thousands of drives in the datacenter, I highly recommend Hitachi/HGST drives. They have the best reliability by far. Plus you can use the CoolSpin disks in RAID arrays, which doesn't work well for the other brands' desktop drives; HGST CoolSpin drives work great with Areca controllers. My main arrays are 24x 4TB CoolSpin, 30x 3TB CoolSpin and 24x 2TB 7200 RPM Deskstars. The Deskstars have 4+ years of use on them with one failure and a few bad sectors, but absolutely flawless stats and no failures on the CoolSpins (I also have 8x and 12x 3TB CoolSpin arrays in colo and on another off-site machine). No failures and flawless stats on all my CoolSpins, even under very heavy 24/7 disk I/O.
 
'Just getting' another controller would involve days of downtime. It's better if he has one on hand.
So they should have a spare power supply, motherboard, memory, processor, etc on hand too just in case? Overnight shipping exists if you desperately need something the following day and if your data is that important, one would have it accessible via various means, or at least should. Areca HBAs aren't exactly cheap and not everyone can afford to keep spares. Even then, you might never use it and just wasted a bunch of money (less resale depreciation). :rolleyes:
 
Thanks a ton houkouonchi, glad to hear you have had such good luck with Hitachi drives; I'll check those out. There's no way I can justify the cost of keeping a spare controller around. I had a better job back then and wouldn't be buying one today; as it is, I don't know how I can afford new drives. I don't need uptime like that either: when that Seagate drive died and I was waiting on the RMA, I just kept the array off the whole time.
 
Hi Everyone,

Bringing back my thread here as I'm ready to move forward with a purchase. I still need some discussion on drive choice. Right now I'm looking at the WD Red 3TB; I can get them for $110 each from Newegg (up to five), so that would move me to about 8TB of usable space. Hitachi drives seem very good, as houkouonchi suggested, but they are simply too expensive for my personal use. I'm open to other suggestions, but using Hover Hound's price history it looks like this is a good deal.

Additionally, I would like some thoughts on future upgradability. I've seen here that it should be a simple matter of using RESETCAPACITY once I add the new drives. Is it really that easy?

Thank you for any and all input.
 
Well, the 4TB HGST CoolSpin drives are just about the same price per GB as the 3TB drives you're looking at: 3TB drives at $110 works out to $146.67 for a 4TB drive at the same price per GB, and B&H Photo has the 4TB for $149 right now; if you're not in NY there is no tax or shipping.
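
Spelling that arithmetic out with the prices quoted above (list prices, before any tax or shipping):

Code:
wd_red_3tb_per_tb = 110 / 3        # ~$36.67 per TB
hgst_4tb_per_tb   = 149 / 4        # ~$37.25 per TB
breakeven_4tb     = 110 / 3 * 4    # ~$146.67 -- a 4TB drive at the same $/TB as the 3TB Red

print(round(wd_red_3tb_per_tb, 2), round(hgst_4tb_per_tb, 2), round(breakeven_4tb, 2))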


If all things are somewhat equal on the price front (per GB), then I always try to go with the biggest drives possible, because with RAID you are limited by the smallest drive, so bigger drives make it easier to expand and have a larger raidset down the road.

To answer your question: yes, if you replace the disks one by one until all of them have been replaced with larger-capacity drives, all you need to do on the Areca side is a resetcapacity so the raidset sees the extra space, and then increase the volume set (which will just do an initialization that should be pretty quick, unless you also decide to change from RAID 5 to RAID 6 or something).

After that it's just a matter of increasing the partition table (if the file-system is on a partition) and then growing the file-system on the software end of things.

On Linux I expanded the file-system without even rebooting:

Code:
root@dekabutsu: 03:05 AM :~# dmesg | grep -i sde
sd 1:0:0:3: [sde] 85374958592 512-byte logical blocks: (43.7 TB/39.7 TiB)
sd 1:0:0:3: [sde] Write Protect is off
sd 1:0:0:3: [sde] Mode Sense: cb 00 00 08
sd 1:0:0:3: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sde: sde1
sd 1:0:0:3: [sde] Attached SCSI disk
XFS (sde1): Mounting Filesystem
XFS (sde1): Starting recovery (logdev: internal)
XFS (sde1): Ending recovery (logdev: internal)
sd 1:0:0:3: [sde] 171312463872 512-byte logical blocks: (87.7 TB/79.7 TiB)
sde: detected capacity change from 43711978799104 to 87711981502464
XFS (sde1): Mounting Filesystem
XFS (sde1): Ending clean mount

Code:
root@dekabutsu: 03:05 AM :~# df -H /data
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sde1               44T    42T   2.5T  95% /data

Code:
root@dekabutsu: 03:05 AM :~# xfs_growfs /data
meta-data=/dev/sde1              isize=256    agcount=40, agsize=268435440 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=10671869440, imaxpct=5
         =                       sunit=16     swidth=352 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=16 blks
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 10671869440 to 21414057723
Code:
root@dekabutsu: 03:06 AM :~# df -H /data
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sde1               88T    42T    47T  48% /data

I did have to unmount the file-system in order to resize the partition table, because the Linux kernel won't re-read the partition table while the disk is in use by a mounted file-system (unfortunately).
 
Hi houkouonchi,

Thank you very much for your detailed response and advice. I think I was looking at the enterprise HGST drives, thus my immediate dismissal over price. I've never ordered from B&H before, but I've heard of them and the no tax is nice. I think I may in fact go forward with these HGST drives instead.

With regard to my data transfer, I plan to go ahead with the plan I laid out in my first post. I will be moving to RAID 6, so I believe this is still the best path: it mitigates the strain on the array, and keeps all the data in two places (good in case of early drive failure, which is always possible). I've got Windows 7, so I use Areca's utility to make changes, or I do it right from the controller's boot-up configuration. In this case I just wanted to leave my options open, so later I could add another three 4TB drives to max out my controller if space became an issue again.

It appears I should be all set. Thank you again, houkouonchi and everyone else.
 
Hi everyone, just a quick update: the drives came stupid quick and well packed from B&H. The initialization is taking FOREVER (literally 1% per hour), but with drives this large I figure that's probably standard. Here it is after over 3 days:
[screenshot: 3piU63K.png – initialization progress]

Before I go dumping all my data on it, anyone want me to perform any tests on the fresh array, in the name of science?
 
Background init? Background init is usually slow, especially if the background task priority is only set to 20%, and especially considering it's an ARC-1220, which is pretty dated. The 1220 works, but it's pretty slow at complex parity calculations/rebuilds/initialization/etc. compared to the newer cards.

Actually, 1% per hour wouldn't surprise me at all on an ARC-1220 doing a background init. On new machines, which are much faster, I have seen a large (50TB+) array take ~16 hours on a foreground init, but it would take about 2 days on a background one.

That is on the slow side for a foreground init even for a 1220.

The ARC-1220 chip is 6 generations old (there are 5 generations of cards newer than it).
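
As a rough sanity check on what 1% per hour implies, assuming the init has to walk each 4TB drive more or less end to end (so this is only a ballpark per-drive figure):

Code:
drive_size_bytes = 4e12          # one 4TB member drive
init_hours       = 100           # 1% per hour -> ~100 hours for the whole pass

per_drive_mb_s = drive_size_bytes / (init_hours * 3600) / 1e6
print(round(per_drive_mb_s, 1), "MB/s per drive")   # ~11 MB/s, far below what these disks can stream sequentially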
 
Nope, that's foreground; I tried background but it seemed like it never even moved. I just chalked it up to the drives being 5400 RPM and huge, since my original array took 24 hours to initialize. That seemed okay since these drives are four times as large. Could this be an indication of some problem?
 
It's hard for me to say on a 1220, but it's definitely not because they are 5400 RPM, or because of the speed of the drives in general. I would expect the slowness to be because of the 1220.

With parity RAID, initialization needs to read the data strips off the disks, calculate the parity, and write the parity stripes back out. It requires that consistency so you know when you have errors (if you do a consistency check).

It is both a decently heavy disk operation (because it's reading and writing at the same time in small chunks) and heavy on the CPU as well. About the only controllers I have seen do it at near native disk speeds were the ARC-1882 and above (1882/1883).
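
To make that concrete, here is a toy sketch of the single-parity (RAID 5-style) XOR math; RAID 6 adds a second, heavier Galois-field syndrome on top of this, which is exactly where an older controller's CPU becomes the bottleneck:

Code:
# Toy single-parity example: XOR the data strips to get parity, then rebuild a lost strip.
import os

data_strips = [os.urandom(16) for _ in range(4)]            # pretend strips from 4 data drives

# Parity strip is the byte-wise XOR of all data strips.
parity = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*data_strips))

# Lose one strip, then rebuild it from the survivors plus the parity strip.
lost = data_strips[2]
survivors = [s for i, s in enumerate(data_strips) if i != 2]
rebuilt = bytes(a ^ b ^ c ^ p for a, b, c, p in zip(*(survivors + [parity])))

assert rebuilt == lost
print("lost strip recovered from parity")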
 
Well, it finally finished initializing; here's some proof that it really took that long! I couldn't run the write test: even with no partition created, it said I had to delete partitions to do it. Oh well. Does it look okay?
[screenshot: MIxRtGz.png – completed initialization and array status]
 
Hi everyone!

It's been a year, and the array has been fine. I still have 3.4TB free actually, so the space crunch hasn't been an issue yet. There was, however, a fantastic deal this Black Friday at Fry's: I managed to snag 3 of the 4TB HGST CoolSpin drives for $88 each, a perfect match for my current array! They will be arriving over the next few days, so I want to get myself ready to add them in.
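
Quick math on what the expanded array should give me, assuming the same two-drive RAID 6 overhead as before:

Code:
drives, size_tb = 8, 4
usable_tb  = (drives - 2) * size_tb            # 24 TB decimal, i.e. a 24000GB volume set
usable_tib = usable_tb * 1e12 / 2**40          # ~21.8 TiB, roughly what Windows will report
print(usable_tb, round(usable_tib, 1))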

Now I'd like some help with adding the disks to the raidset, resizing it, and anything that needs to be done in Windows for the partition side of things (it's just a simple NTFS partition). There was some good info mentioned previously, but I'm a little leery of proceeding without revisiting the discussion again. The system is still running Windows 7.
 
Thanks for the update. I cringed when I saw RAID 5 and Seagate drives; you are much better off with RAID 6 for storage needs.

I believe you would just add the new drives to the array, expand it, and then in Windows it should show the new disk space as unused, and you just "extend" the existing partition under Disk Management.
 
Hi everyone,

I finally got around to upgrading my server and putting the new drives in. As a reminder, I was adding 3x 4TB HGST drives to an existing 5x 4TB HGST drive RAID 6 array. Since I needed to use the array while it expanded, I did it as a foreground operation from within Windows. It took 242 hours, 52 minutes and 33 seconds to expand the raidset! Now I've gone on to expand the volume set from 12000GB to 24000GB, and it looks like this will take a while too. It immediately jumped to 50%, and is slowly progressing, but at a faster rate than the raidset expansion. I'm guessing it has to initialize all the new areas before I can extend my Windows partition, and that it will take close to 100 hours, like when I first initialized the array.

I hope I haven't done any part of this wrong; the data is still available currently, so it hasn't ruined anything (yet!). I do have a Backblaze backup, but it would take a long time to download and unzip 9TB...
 
Hi Everyone,

Another 71 hours, and the volume set finished initializing. Now I am unable to expand the volume in Windows Disk Management, and I kind of hope it's something stupid like just having to convert it to a dynamic disk. Here's a screenshot of my issue:
[screenshot: wHkvPiV.png – Disk Management extend volume issue]

Anyone have any ideas?

*edit* I'm an idiot. All I had to do was click Next; it had already selected the unused space. Total partition size is 21.8TB now. Sorry to waste everyone's time.
 