ARECA Owner's Thread (SAS/SATA RAID Cards)

Halp! It's been so long since I configured this ARC-1680 card. I went to pull data off one of our old servers and the machine beeped after the raid card initialized, but proceeded to boot just fine.

After rebooting and entering the raid BIOS I see the raid status as 22/24 disks (Raid 6) degraded. The other two disks are there but marked as "free." What's the safest way to proceed?

Look in the event log to see what happened. If you never see them hit the failed state, then you probably had 'auto activate incomplete raid set' enabled in the raid function settings, which I think is a bad idea to have on: it can make an array immediately go degraded if one of the disks doesn't spin up fast enough and is temporarily missing from the raidset.
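
If the OS still boots, you can also pull the controller log and current state without sitting in the BIOS by using Areca's CLI utility (rough sketch; assumes the cli64 binary from Areca's site is in the current directory and that your firmware uses the same command names):

Code:
# dump the controller event log and the current raid set / volume / disk state
./cli64 event info
./cli64 rsf info
./cli64 vsf info
./cli64 disk info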
 
OK, so the log doesn't indicate any failures. I'll make sure to change that option when I get back in tomorrow. So how do I get those two free disks back in the array? A rescue command first, or do I need to remake the raid set with that "No init" option I see in the manual?
 
Looks like I messed up. Here is my raid status:

[screenshot: raid set status]


and my log

[screenshot: event log]


Looks like in trying to pop and push the drives back in I picked 17 and 23 instead of 16 and 22. My plan is to bring the raid offline, pop out a drive to make sure I have the right one (the identify drive function doesn't seem to work on my norco case), then bring the raid back online, and pop the drive back in and see if things rebuild themselves. If it works I'll do the same for the final disk.

I'm behind on firmware (1.45) vs the latest (1.51). I want to upgrade, but I assume it's best to fix my raid first.
 
For the community's sake: what I did seems to have worked. I brought the machine down, loaded up the BIOS at boot time, popped out the proper drives that displayed as "free" and re-inserted them. They reappeared in the raid set and it began rebuilding itself. It's indicating it will take about 5 hrs to rebuild, which is less than I had anticipated. I'm just going to leave it at the BIOS until it is done.

I'll update the firmware after the set is healthy again.
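
Side note for anyone else in this spot: if the OS is up, the rebuild percentage can apparently also be watched from Areca's CLI instead of camping at the BIOS (untested sketch on my part; assumes the cli64 utility from Areca's site is installed):

Code:
# the volume state should show Rebuilding(xx.x%) while the rebuild runs
watch -n 60 ./cli64 vsf info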
 
Does anyone know if you can move an array from an Areca 1680 to a 1261? I tried to swap controllers and the 1261 shows no array and all drives are in the free state. Can you only swap to the same or newer controllers?
 
I believe that is unfortunately true. At least, I don't believe you can go from the SAS controllers back to the SATA ones.
 
Doh, I didn't even consider one was SAS and one was SATA. Guess it's time to find a bigger 1680.

I should be able to go from a 1680 to a 1680ix? SAS expander shouldn't cause an issue?
 
Swapping it around like that won't cause issues and neither will adding a SAS expander to the mix. My current array has had the following history: 1680ix-24 --> 1680i + SAS expander --> 1880i + SAS expander --> 1882i + SAS expander.
 
Honestly I have never tried going from SAS back to SATA (I have done it the other way dozens of times). If the array is large enough (this is only worth it to save the time of restoring to a new array), AND you are CERTAIN you have a FULL backup just in case:

- Make sure the existing array is in the NORMAL state.
- Make sure the array members are installed in the EXACT SAME physical order on the new card as they were when the original array was created on the original card.
- Recreate the array on the new card with the EXACT SAME specs as the original array, using the No-Init option.
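
Before moving anything it is worth recording the original geometry so the No-Init recreate matches it exactly. A rough sketch using Areca's CLI (however you normally manage the card works too; the point is just to capture raid set membership/slot order, RAID level, capacity and stripe size, and command names may vary slightly by firmware):

Code:
# record these from the ORIGINAL card before pulling any drives
./cli64 rsf info     # raid set: member disks and their slot order
./cli64 vsf info     # volume set: RAID level, capacity, stripe size
./cli64 sys info     # controller/firmware details, for reference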

In the end, you have maybe a 50% chance that this will work, and as with any non-sanctioned operation YMMV (there is always the chance of total failure/data loss even if NOTHING goes wrong), so the risk/reward call is completely up to you.
 
I think whether it is backwards compatible with older Areca controllers depends on whether '128 volume support' was selected when the raidset was created.
 
I built an AV processing rig for one of my customers using:

-Areca ARC-1882IX-24, Firmware v1.52, with an ARC-8018-.01.14.0114 Expander and 4GB Cache
-12 x Seagate ST2000DM001-1CH164 2TB SATA3 (6Gbps) HDDs
-The array was set up as RAID50: 4 x 3-HDD RAID5 raidsets, with one HDD from each controller channel
-The raidsets are combined into a RAID0 to form a 16TB volume
-The volume is formatted as ext4 and exported as an NFS share over 10Gbps Ethernet from a CentOS 6.4 Linux box (a rough sketch of the formatting/export steps follows below)
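
Roughly how the filesystem/export side was set up (the paths, stripe size and subnet below are illustrative rather than the exact production values; the stride/stripe-width math assumes a 64KB controller stripe across the 8 data spindles of the RAID50):

Code:
# ext4 aligned to the array geometry: stride = 64KB / 4KB = 16, stripe-width = 16 * 8 = 128
mkfs.ext4 -b 4096 -E stride=16,stripe-width=128 /dev/sdb1
mkdir -p /srv/av2k
mount /dev/sdb1 /srv/av2k

# /etc/exports entry on the CentOS box, then reload the export table:
#   /srv/av2k  10.10.10.0/24(rw,async,no_root_squash)
exportfs -ra

# on the workstation side, mounted over the 10Gbps link
mount -t nfs -o rsize=1048576,wsize=1048576 server:/srv/av2k /mnt/av2k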

I benchmarked the array from the local machine (the one that has the array controller attached) using frametest (available Here), and I got an average of 552MB/s read performance while working with 2K frames.

What do you think of the numbers below? Are they any good for this hardware configuration?

Code:
Test parameters:      -r -z12512 -n1800
Test duration:        39 secs
Frames transferred:   1771 (21639.406 MB)
Fastest frame:        10.763 ms (1135.30 MB/s)
Slowest frame:        76.189 ms (160.37 MB/s)

Averaged details:
              Open        I/O         Frame      Data rate   Frame rate
  Last 1s:   0.009 ms    36.05 ms    36.07 ms   338.75 MB/s   27.7 fps
       5s:   0.009 ms    28.79 ms    28.81 ms   424.14 MB/s   34.7 fps
      30s:   0.009 ms    21.56 ms    21.58 ms   566.25 MB/s   46.3 fps
  Overall:   0.009 ms    22.08 ms    22.10 ms   552.87 MB/s   45.2 fps

Histogram of frame completion times:
   50% |
       |
       |
       |
       |                                    *
       |                                    *
       |                                   **
       |                                   ***
       |                                  ********
       |                                 **************
       +|----|-----|----|----|-----|----|----|-----|----|----|-----|----|
  ms  <0.1  .2    .5    1    2     5   10   20    50   100  200   500  >1s


  Overall frame rate .... 45.01 fps (576635665 bytes/s)

  Average file time ...... 22.213 ms
  Shortest file time ..... 8.177 ms
  Longest file time ...... 76.189 ms

  Average open time ...... 0.009 ms
  Shortest open time ..... 0.004 ms
  Longest open time ...... 0.021 ms

  Average read time ...... 22.2 ms
  Shortest read time ..... 8.2 ms
  Longest read time ...... 76.2 ms

  Average close time .... 0.007 ms
  Shortest close time ... 0.002 ms
  Longest close time .... 0.025 ms
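
If anyone wants to reproduce the run, the flags are simply the ones frametest reports in the output above; the target path is just an example, and the frame set has to be written out before a read pass can be done:

Code:
# write 1800 frames of ~12.2MB (2K-sized) into the test directory first
frametest -w12512 -n1800 /mnt/av2k/frametest

# then the read pass that produced the numbers above
frametest -r -z12512 -n1800 /mnt/av2k/frametest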
 
After a PSU crash my Areca ARC-1882ix seems to have a HW problem and my PC won't get past "waiting for f/w to become ready". My guess is that it's a RAM problem.

Can I buy a standard 1GB DDR3 ECC DIMM, or does the overpriced "original" RAM offer anything special? Which speed should the RAM run at?

Thanks
 
I'm looking at replacing my 3ware 9650SE-24M8 card (getting 300-350MB/sec across my arrays) with an Areca 1882ix-24 or 1883ix-24. Price difference isn't much between these models judging by the difference between the 1883i and 1882i on the Areca website.

Does anyone know anything about a release date of the 1883ix-24? Waiting looks like the superior option here given the performance of the 1883ix. I've gotten five years out of my 3ware and I'd like to get at least that out of the Areca.
 
The 1883 did not give me any better sequential read/write speeds on a raid6 array compared to an ARC-1882. It appears it's only faster for sequential on raid0 with SSDs and the like.

An ARC-1882 can do about 3 gigabytes/sec read and 2 gigabytes/sec write in raid6, which is what I run (big raid6 arrays).

With the 1883 you will need the newer 12 gig SAS hardware too (or adapter cables), which is more expensive/harder to come by. Unless you have a significant reason, I think I would just stick with the 1882.
 
Seems low... I couldn't run it on my ARC-1880 volume because frametest isn't 64-bit compiled and the 32-bit inode numbers are used up on that volume, so this is my other volume on an old, dated ARC-1280:

Code:
Test parameters:      -w12512 -n1800 -t4 
Test duration:        28 secs
Frames transferred:   1768 (21602.750 MB)
Fastest frame:        8.293 ms (1473.41 MB/s)
Slowest frame:        1390.234 ms (8.79 MB/s)

Averaged details:
              Open        I/O         Frame      Data rate   Frame rate
  Last 1s:   0.088 ms    42.69 ms    10.47 ms  1167.26 MB/s   95.5 fps
       5s:   0.094 ms    79.95 ms    20.03 ms   610.10 MB/s   49.9 fps
      30s:   2.882 ms    58.66 ms    16.31 ms   749.31 MB/s   61.3 fps
  Overall:   2.882 ms    58.66 ms    16.31 ms   749.31 MB/s   61.3 fps

Histogram of frame completion times:
   20% |                                        *                            
       |                                        **                           
       |                                        ***                          
       |                                        ****                         
       |                                        *****                        
       |                                        ******                       
       |                                        *******                      
       |                                        *******                      
       |                                       **********                    
       |                               **  ***************  **** ****** *    
       +|----|-----|----|----|-----|----|----|-----|----|----|-----|----|
  ms  <0.1  .2    .5    1    2     5   10   20    50   100  200   500  >1s


  Overall frame rate .... 61.68 fps (790242061 bytes/s)

  Average file time ...... 61.218 ms
  Shortest file time ..... 8.293 ms
  Longest file time ...... 1390.234 ms

  Average create time .... 2.833 ms
  Shortest create time ... 0.038 ms
  Longest create time .... 1208.779 ms

  Average write time ..... 58.4 ms
  Shortest write time .... 8.0 ms
  Longest write time ..... 1390.1 ms

  Average close time .... 0.009 ms
  Shortest close time ... 0.002 ms
  Longest close time .... 0.677 ms
 
Can you give me specifics about the HDD models and RAID mode you've got set up there? Also, which filesystem are you using?
 
How many disks? As I plan on running this controller for 5-6 years plus, the 12Gbit/sec link rate might be useful with port multipliers down the road if I run them out of the external port on the back, plus the additional cache capacity.

The cables are being ordered with the card and there seems to be very little difference between the two (in the case of the 1882i vs 1883i 8-port models, there is a $20 difference with cables, so I'm assuming this will be a ~$60 difference, which is a very small part of the ~$1300 asking price). So the extra dollars aren't an issue unless I'm going to run into problems by running SATA cables directly into Hitachi 7K3000 and 7K4000 drives.

I can't seem to find much doco on it - does the Ethernet management port support IPv6?
 
quick question....can I migrate from raid 1 to raid 5 with a 1680?

Yep, no problem. From the 1680 manual:

Online RAID Level and Stripe Size Migration

For those who wish to later upgrade to any RAID capabilities, a system with Areca online RAID level/stripe size migration allows a simplified upgrade to any supported RAID level without having to reinstall the operating system. The SAS RAID controllers can migrate both the RAID level and stripe size of an existing volume set, while the server is online and the volume set is in use. Online RAID level/stripe size migration can prove helpful during performance tuning activities as well as when additional physical disks are added to the SAS RAID controller.

For example, in a system using two drives in RAID level 1, it is possible to add a single drive and add capacity and retain fault tolerance. (Normally, expanding a RAID level 1 array would require the addition of two disks.) A third disk can be added to the existing RAID logical drive and the volume set can then be migrated from RAID level 1 to 5. The result would be parity fault tolerance and double the available capacity without taking the system down. A fourth disk could be added to migrate to RAID level 6. It is only possible to migrate to a higher RAID level by adding a disk; disks in an existing array can't be reconfigured for a higher RAID level without adding a disk.

Online migration is only permitted to begin if all volumes to be migrated are in the normal mode. During the migration process, the volume sets being migrated are accessed by the host system. In addition, the volume sets with RAID level 1, 10, 3, 5 or 6 are protected against data loss in the event of disk failure(s). In the case of disk failure, the volume set transitions from the migrating state to the migrating+degraded state. When the migration is completed, the volume set transitions to degraded mode. If a global hot spare is present, then it further transitions to the rebuilding state.
 
This was with 30 disks. I also saw the exact same speeds on an 1882 array with 50 drives or 24 disks.

I also tested this with Areca's 12G expander with the special option to allow 12 gig link speeds even with 6 gig disks. Basically I could not get over 3 gigabytes/sec read and 2 gigabytes/sec write in raid6 on either an ARC-1882 or an ARC-1883. When asking Areca to do testing of their own (to see if it was an issue with the Linux driver), I found that they got the same results (2 gigabytes/sec write limit) on raid6. It seems to just be a sequential write speed limit on their cards with raid6 (raid5 is slightly higher).
 
Generally happy owner of 4x Areca ARC-1882ix-24 cards.

My only complaint is on the very first card I bought, I originally had 12x3TB drives on it. I later expanded that to 24x3TB drives and expanded the volume. The volume expanded to use all 24 disks, but I was never able to expand the available space on the RAID volume. It's been a while since I looked at it, so I might be getting the terms backwards. The other two cards started out life with 24 drives, and they've all been fine.
 
Guess you just missed a step. You first add the additional disks to the raidset; that changes the raid set size and it will show migrating (%). After that is done you have to modify the volume set to use the additional space (or create another volume set), and in that case it will show initializing (%) and start at the percentage corresponding to the space being added. It's a similar process going from 24x2TB -> 24x4TB; in that case my initialization started at 50% because the disks doubled in size.
 
I'm running an ARC-1680, firmware V1.51 2012-07-04, connected via SFF-8088 to a Habey 12 drive enclosure, 10 x 3T RAID 6, single Volume. Per the XFS website, I've set Disk Write Cache Mode to disabled.

I partitioned the volume with parted:
# parted -a optimal /dev/sdc mklabel gpt mkpart primary 0% 100%

And after doing some research about proper alignment, I formatted it with XFS:
# mkfs.xfs -f -d su=64k,sw=8 /dev/sdc1
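
To double-check that the geometry actually took, xfs_info on the mounted filesystem should report matching sunit/swidth values in 4KiB blocks (quick check; the mount point here is just an example):

Code:
# su=64k on 4KiB blocks -> sunit=16 blks; sw=8 -> swidth=128 blks
xfs_info /mnt/array | grep -E 'sunit|swidth'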

Last night I had a timeout error; this morning the RAID was rebuilding normally in the background:
2014-07-16 00:09:29 Enc#2 PHY#9 Time Out Error
2014-07-16 00:09:31 ARC-1680-VOL#010 Volume Degraded
2014-07-16 00:09:33 Raid Set # 010 RaidSet Degraded
2014-07-16 00:09:35 Enc#2 PHY#9 Device Removed

I would have expected the OS (Ubuntu 14.04 LTS Server) to carry on without noticing, or maybe report a write error, but the xfs driver totally barfed. I found this concurrent entry in /var/log/syslog:

Jul 16 00:10:19 srvr42 kernel: [37949.044839] ffff88010d833000: 5c 63 63 9e 9b 21 32 88 05 9d a3 dc 42 9d 7a e6 \cc..!2.....B.z.
Jul 16 00:10:19 srvr42 kernel: [37949.044991] ffff88010d833010: 45 1a f0 65 dc 4d 0e d5 fd 22 f0 fb ad 0b 46 11 E..e.M..."....F.
Jul 16 00:10:19 srvr42 kernel: [37949.045122] ffff88010d833020: 66 c2 9c 91 ad 17 72 45 36 c2 77 e1 6d 4d 66 11 f.....rE6.w.mMf.
Jul 16 00:10:19 srvr42 kernel: [37949.045252] ffff88010d833030: ad 53 5e 9b 5e dd 55 81 7b 3e 21 97 cf 4c c4 3b .S^.^.U.{>!..L.;
Jul 16 00:10:19 srvr42 kernel: [37949.045385] XFS (sdc1): Internal error xfs_allocbt_read_verify at line 362 of file /build/buildd/linux-3.13.0/fs/xfs/xfs_alloc_btree.c. Caller 0xffffffffa015b6c5
Jul 16 00:10:19 srvr42 kernel: [37949.045603] CPU: 0 PID: 390 Comm: kworker/0:1H Not tainted 3.13.0-29-generic #53-Ubuntu
Jul 16 00:10:19 srvr42 kernel: [37949.045605] Hardware name: Supermicro X7SBi/X7SBi, BIOS 1.3a 11/03/2009
Jul 16 00:10:19 srvr42 kernel: [37949.045639] Workqueue: xfslogd xfs_buf_iodone_work [xfs]
Jul 16 00:10:19 srvr42 kernel: [37949.045642] 0000000000000001 ffff8800c8867d78 ffffffff8171a214 ffff8802209b7000
Jul 16 00:10:19 srvr42 kernel: [37949.045646] ffff8800c8867d90 ffffffffa015e53b ffffffffa015b6c5 ffff8800c8867dc8
Jul 16 00:10:19 srvr42 kernel: [37949.045650] ffffffffa015e595 0000016a364f2b20 ffff8800364f2b20 ffff8800364f2a80
Jul 16 00:10:19 srvr42 kernel: [37949.045654] Call Trace:
Jul 16 00:10:19 srvr42 kernel: [37949.045661] [<ffffffff8171a214>] dump_stack+0x45/0x56
Jul 16 00:10:19 srvr42 kernel: [37949.045683] [<ffffffffa015e53b>] xfs_error_report+0x3b/0x40 [xfs]
Jul 16 00:10:19 srvr42 kernel: [37949.045704] [<ffffffffa015b6c5>] ? xfs_buf_iodone_work+0x85/0xf0 [xfs]
Jul 16 00:10:19 srvr42 kernel: [37949.045725] [<ffffffffa015e595>] xfs_corruption_error+0x55/0x80 [xfs]
Jul 16 00:10:19 srvr42 kernel: [37949.045746] [<ffffffffa015b6c5>] ? xfs_buf_iodone_work+0x85/0xf0 [xfs]
Jul 16 00:10:19 srvr42 kernel: [37949.045770] [<ffffffffa0178ba9>] xfs_allocbt_read_verify+0x69/0xd0 [xfs]
Jul 16 00:10:19 srvr42 kernel: [37949.045791] [<ffffffffa015b6c5>] ? xfs_buf_iodone_work+0x85/0xf0 [xfs]
Jul 16 00:10:19 srvr42 kernel: [37949.045812] [<ffffffffa015b6c5>] xfs_buf_iodone_work+0x85/0xf0 [xfs]
Jul 16 00:10:19 srvr42 kernel: [37949.045817] [<ffffffff810838a2>] process_one_work+0x182/0x450
Jul 16 00:10:19 srvr42 kernel: [37949.045820] [<ffffffff81084641>] worker_thread+0x121/0x410
Jul 16 00:10:19 srvr42 kernel: [37949.045823] [<ffffffff81084520>] ? rescuer_thread+0x3e0/0x3e0
Jul 16 00:10:19 srvr42 kernel: [37949.045827] [<ffffffff8108b322>] kthread+0xd2/0xf0
Jul 16 00:10:19 srvr42 kernel: [37949.045830] [<ffffffff8108b250>] ? kthread_create_on_node+0x1d0/0x1d0
Jul 16 00:10:19 srvr42 kernel: [37949.045834] [<ffffffff8172ab3c>] ret_from_fork+0x7c/0xb0
Jul 16 00:10:19 srvr42 kernel: [37949.045837] [<ffffffff8108b250>] ? kthread_create_on_node+0x1d0/0x1d0
Jul 16 00:10:19 srvr42 kernel: [37949.045840] XFS (sdc1): Corruption detected. Unmount and run xfs_repair
Jul 16 00:10:19 srvr42 kernel: [37949.045948] XFS (sdc1): metadata I/O error: block 0x384ae0778 ("xfs_trans_read_buf_map") error 117 numblks 8
Jul 16 00:10:19 srvr42 kernel: [37949.046104] XFS (sdc1): xfs_do_force_shutdown(0x1) called from line 376 of file /build/buildd/linux-3.13.0/fs/xfs/xfs_trans_buf.c. Return address = 0xffffffffa01bcd7d
Jul 16 00:10:19 srvr42 kernel: [37949.046726] XFS (sdc1): I/O Error Detected. Shutting down filesystem
Jul 16 00:10:19 srvr42 kernel: [37949.046832] XFS (sdc1): Please umount the filesystem and rectify the problem(s)

After unmounting the file system, xfs_repair told me to remount it so it could replay the journal. When I did that, it told me that the journal was corrupt and I was going to have to blow it away with the -L option and rebuild from scratch. This isn't acceptable behavior for me. I switched from ext4 to xfs because of size requirements, but with a similar RAID error (my system does this occasionally; the drive that timed out tests OK and I put it back as the new hot spare), ext4 never blinked. Any advice?
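
For reference, the order I understand you're supposed to try before resorting to -L (commands as I understand them; device/mount names as above):

Code:
mount /dev/sdc1 /mnt/array     # mounting attempts the log replay
umount /mnt/array
xfs_repair -n /dev/sdc1        # dry run, report problems only
# absolute last resort, zeroes the log and may lose recent metadata:
# xfs_repair -L /dev/sdc1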
 
Anyone know when the 1883ix-12/16/24 cards are ready for sale?

The LSI SAS3X** SAS Expander silicon is in extremely short supply and is going to high-end boutique manufacturers/large storage contract customers first. I don't expect you will see >8 port cards in any quantity until Q3/Q4 2014 unless things change from what we have been told. The PMC Expanders based upon their own silicon are available ($$$$$).
 
Anyone with some 1280ML knowledge here? :) I'm thinking of buying a 1280ML, but I see all these different revisions on ebay. Ver B, Ver C, Ver 1.0 and 2.0. I have no clue if there are relevant differences between these four.
 
Thinking of buying an ARC-1224-8i for, initially, 4x4TB RAID6 (usable 8TB of course). Does this card have any specific airflow requirements? I remember with the PERC 5/i the gist was "have some air moving over it", which meant I put a fan next to it specifically (even though my case has air moving through it anyway).
 
I'm curious about this, too. I haven't ever seen Areca post release notes about hardware revisions.
 
Will these cables work with the ARC-1680IX line?

http://www.monoprice.com/Product?c_id=102&cp_id=10254&cs_id=1025406&p_id=8187&seq=1&format=2

And does anyone know where to purchase compatible 4GB memory?
I have confirmed 4GBx1 Kingston KVR667D2D8P5 and 4GBx1 Crucial CT51272AA667 work.
However, I can find no one selling them other than Memory4Less, and the reviews there are too poor.

Thanks

Do what I did in a similar situation with memory and contact Kingston sales via email. They will provide you with a direct part-number replacement from currently shipping RAM.
 
Thanks, I will try that. Do you use a 1680 as well? Could you suggest a SAS to 4x SATA cable?
Found these HERE, but not sure if they work.

Also, does anyone know if WD6401AALS hard-drives are compatible with the 1680IX?
 
Alright, one more question... I won an 1882ix on ebay the other day, and it arrived today. Good! What puzzles me is that the auction said that there should be one 4GB stick and one 1GB stick included. The picture showed no 4GB module, only that Unigen 1GB one. I asked to be sure, and he sent me this:

http://i.ebayimg.com/00/s/MTA2MFgxNjAw/z/dmEAAOSwI~VTyX6H/$_4.JPG

It does say "system memory: 4096MB", which makes little sense if there was either a 1GB module or no module at all installed (unless it just shows the maximum when there is no stick). Anyone got any clues here?
 
Well, since you are now in possession of the card, you should easily be able to determine what you received. No need to ask us. Just plug it in and find out.
 
No, I cannot test it myself :p I could ask my mate who received it to give it a shot - but I try to minimize the work for my good friends :) I live in Norway, so I always ship stuff to "middlemen" to save shipping costs and taxes/fees the greedy government would ask for if the box isn't marked as a "gift".

I should've mentioned this, so no wonder it looked like a dumb question...
 
You don't have the card, so you don't know what's included.
We don't have the card, either, so we don't know.
Your friend has the card, but you won't ask him.

I'm not sure how to proceed from here, but maybe I'm also not clearly grasping your question.
 
Now, this is the weirdest thing I've seen for a long time. There IS a 4GB stick installed, which looks like a black backplate. Hynix. I was totally expecting a green PCB on that stick so I didn't even bother taking a closer look :p

*octopus facepalm*
 