The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

I need to update my post... It doesn't include several upgrades (12x3TB drives added to one machine, plus two 24x1TB backup machines), and I just ordered 24x4TB drives to upgrade my main machine.

Sounds like a pretty epic build.

I need to buy more drives for mine soon. Not exactly running low on disk space, but I like to have more than 50% unused. :D

Code:
[root@isengard ~]# df -hl
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_isengard-lv_root
                       50G  4.3G   43G  10% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/sde1             485M   38M  422M   9% /boot
/dev/mapper/vg_isengard-lv_home
                       53G  180M   50G   1% /home
/dev/md0              5.4T  3.7T  1.5T  72% /volumes/raid1
/dev/md1              6.3T  4.2T  1.9T  70% /volumes/raid2
[root@isengard ~]#
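A quick way to flag anything creeping past that 50% mark, as a rough sketch over POSIX df output:

Code:
# Print mount points using more than 50% of their capacity
df -Pl | awk 'NR > 1 { use = $5; sub(/%/, "", use); if (use + 0 > 50) print $6, $5 }'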
 
And it starts:



Code:
root@dekabutsu: 06:14 PM :~# cli64 vsf info
CLI>   # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 DATA 2 VOLUME    90TB RAID SET   Raid6   84000.0GB 00/01/00   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> GuiErrMsg<0x00>: Success.

CLI>   # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 WINDOWS VOLUME   48TB RAID SET   Raid6    129.0GB 00/00/00   Rebuilding(50.5%)
  2 MAC VOLUME       48TB RAID SET   Raid6     30.0GB 00/00/01   Need Rebuild
  3 LINUX VOLUME     48TB RAID SET   Raid6    129.0GB 00/00/02   Need Rebuild
  4 DATA VOLUME      48TB RAID SET   Raid6   43712.0GB 00/00/03   Need Rebuild
===============================================================================
GuiErrMsg<0x00>: Success.

CLI>

Code:
root@dekabutsu: 06:14 PM :~# cli64 disk info
CLI>   # Enc# Slot#   ModelName                        Capacity  Usage
===============================================================================
  1  01  Slot#1  N.A.                                0.0GB  N.A.
  2  01  Slot#2  N.A.                                0.0GB  N.A.
  3  01  Slot#3  N.A.                                0.0GB  N.A.
  4  01  Slot#4  N.A.                                0.0GB  N.A.
  5  01  Slot#5  N.A.                                0.0GB  N.A.
  6  01  Slot#6  N.A.                                0.0GB  N.A.
  7  01  Slot#7  N.A.                                0.0GB  N.A.
  8  01  Slot#8  N.A.                                0.0GB  N.A.
  9  02  Slot#1  N.A.                                0.0GB  N.A.
 10  02  Slot#2  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 11  02  Slot#3  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 12  02  Slot#4  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 13  02  Slot#5  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 14  02  Slot#6  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 15  02  Slot#7  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 16  02  Slot#8  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 17  02  Slot#9  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 18  02  Slot#10 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 19  02  Slot#11 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 20  02  Slot#12 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 21  02  Slot#13 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 22  02  Slot#14 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 23  02  Slot#15 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 24  02  Slot#16 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 25  02  Slot#17 N.A.                                0.0GB  N.A.
 26  02  Slot#18 N.A.                                0.0GB  N.A.
 27  02  Slot#19 N.A.                                0.0GB  N.A.
 28  02  Slot#20 N.A.                                0.0GB  N.A.
 29  02  Slot#21 N.A.                                0.0GB  N.A.
 30  02  Slot#22 N.A.                                0.0GB  N.A.
 31  02  Slot#23 N.A.                                0.0GB  N.A.
 32  02  Slot#24 N.A.                                0.0GB  N.A.
 33  03  Slot#1  N.A.                                0.0GB  N.A.
 34  03  Slot#2  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 35  03  Slot#3  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 36  03  Slot#4  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 37  03  Slot#5  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 38  03  Slot#6  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 39  03  Slot#7  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 40  03  Slot#8  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 41  03  Slot#9  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 42  03  Slot#10 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 43  03  Slot#11 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 44  03  Slot#12 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 45  03  Slot#13 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 46  03  Slot#14 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 47  03  Slot#15 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 48  03  Slot#16 Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 49  03  Slot#17 N.A.                                0.0GB  N.A.
 50  03  Slot#18 N.A.                                0.0GB  N.A.
 51  03  Slot#19 N.A.                                0.0GB  N.A.
 52  03  Slot#20 N.A.                                0.0GB  N.A.
 53  03  Slot#21 N.A.                                0.0GB  N.A.
 54  03  Slot#22 N.A.                                0.0GB  N.A.
 55  03  Slot#23 N.A.                                0.0GB  N.A.
 56  03  Slot#24 N.A.                                0.0GB  N.A.
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> GuiErrMsg<0x00>: Success.

CLI>   # Ch# ModelName                       Capacity  Usage
===============================================================================
  1  1  HGST HMS5C4040BLE640            4000.8GB  48TB RAID SET
  2  2  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
  3  3  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
  4  4  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
  5  5  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
  6  6  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
  7  7  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
  8  8  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
  9  9  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
 10 10  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
 11 11  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
 12 12  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
 13 13  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
 14 14  TOSHIBA DT01ACA200              2000.4GB  48TB RAID SET
 15 15  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
 16 16  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
 17 17  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
 18 18  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
 19 19  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
 20 20  Hitachi HDS722020ALA330         2000.4GB  48TB RAID SET
 21 21  Hitachi HDS723020BLA642         2000.4GB  48TB RAID SET
 22 22  Hitachi HDS723020BLA642         2000.4GB  48TB RAID SET
 23 23  Hitachi HDS723020BLA642         2000.4GB  48TB RAID SET
 24 24  Hitachi HDS723020BLA642         2000.4GB  48TB RAID SET
===============================================================================
GuiErrMsg<0x00>: Success.

CLI>


Rebuild number 1 started... 23 more rebuilds to go...
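To babysit that many rebuilds without sitting at the console, wrapping the same status query in watch works as a simple sketch (the interval is arbitrary):

Code:
# Re-run the Areca CLI volume status every 5 minutes
watch -n 300 'cli64 vsf info'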
 
You should just keep spare chassis and RAID cards like me to save yourself the headache of rebuilding a million times to switch disk sizes. If you were local I'd let you borrow one of my unused Supermicro 847E16 chassis. :p
 
I do, but then I have to rsync everything (especially since several of the other LUNs hold OS stuff), and honestly that is more of a PITA when I am just going to replace the disks anyway. This way is definitely slower, but it doesn't get any simpler: rather than constantly running rsyncs in loops to make sure I got everything, doing the migration this way means I never even have to reboot the machine to make use of the extra storage.
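For reference, the rsync-in-a-loop migration being avoided there looks roughly like this; the paths are illustrative, and the final pass is done with the source quiesced or read-only:

Code:
# First pass copies the bulk of the data while the source stays live
rsync -aHAX --delete /volumes/raid1/ /mnt/newarray/
# Re-run until a pass transfers nothing new, then do one last pass
# with the source mounted read-only to catch any stragglers
rsync -aHAX --delete /volumes/raid1/ /mnt/newarray/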

Although there will be a period of around four days where nothing happens with this: Saturday I am flying out to Georgia to pick up the GT-R I am buying out of state (I'm in California, and local dealers did not want to deal), then driving it back, so I will be gone 3-4 days.
 
Jiminy! 90TB!? Nice.

I still won't take that 1st place spot for most storage in one chassis, as I will be at 96TB of raw disk, but coupled with the 90TB of raw disk in external storage that makes 186TB of disks hooked up to a single machine. I wonder if that is a record somewhere :)
 
I really should update my post. Been at 100TB+ for years now. Not at 200TB though, so you're besting me. Haven't really upgraded much in years, so I'm still sporting a ton of 2TB Hitachis. Shame I can't post much from work: 10000+ drives (a mix of FC and SSD) across our 3PARs.
 
I still won't take that 1st place spot for most storage in one chassis, as I will be at 96TB of raw disk, but coupled with the 90TB of raw disk in external storage that makes 186TB of disks hooked up to a single machine. I wonder if that is a record somewhere :)

It's got to be.
 
I would like to make a long overdue update to the main post and rankings. Won't get into details now, but would you guys prefer a new thread, or just edit this one?
 
Open a new thread.
50TB+ only... too many tiny servers in this thread.
 
I would like to make a long overdue update to the main post and rankings. Won't get into details now, but would you guys prefer a new thread, or just edit this one?

Open the new "Post your 50TB+ systems" thread.
10TB is dated now that 6TB drives are out.
 
ender, could you take me off? I don't have my 20x3TB array anymore.
 
No pics, but I recently upgraded the NAS that holds client data offsite; kept the case & PSU and a few of the existing drives.

Norco 4216 case w/ 120mm fan wall mod
Seasonic 650W PSU
Supermicro X10SLM-F
32GB Kingston KVR16E11K4/32I
E3-1220v3 CPU
LSI 9211 in IT mode for the 3TB drives (except for one that hangs off a motherboard port)
BR10i in IT mode for the 2TB drives

7x 3TB WD Red in RAIDZ3 for main storage
6x 2TB Hitachi in a 3-way mirror setup for VMware backups
2x 3TB WD Red in a mirror for misc storage
2x 500GB 2.5" drives as the rpool mirror

So 40TB raw, a little over 20TB usable. Not a ton, but it gets the job done. Running OmniOS r151008.
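For anyone curious, that layout would be created with something along these lines; the pool and device names are made up (OmniOS uses c#t#d# identifiers), and I'm assuming the six backup disks form two 3-way mirror vdevs:

Code:
# 7-disk RAIDZ3 main pool: any three disks can fail without data loss
zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0
# 6 disks as two 3-way mirrors for the VMware backup pool
zpool create backup mirror c2t0d0 c2t1d0 c2t2d0 mirror c2t3d0 c2t4d0 c2t5d0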
 
My modest home server build:

Fractal Design Define R4 Mini
Xeon E3-1230 V2
SUPERMICRO MBD-X9SCM-IIF
Kingston 16GB (2 x 8GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333

FreeNAS 9.2.1.5

6x Western Digital Red (5400rpm) 3TB in RAIDZ2, leaving 12TB / 10.7 TiB usable.

Getting ~78 MB/sec copy rates (624 Mbps) across LAN (Windows/CIFS). Haven't measured power consumption yet.

Jails for Crashplan, ownCloud, and Plex Media Server. IPMI on motherboard and UPS shutdown via FreeNAS service. Access from outside house only by VPN.

Storing documents, code, home videos, photos, music, TV/movies, ebooks, games, software - all household data. Goal was de-duplication across devices, ease of backup (w/ Crashplan), and protection from bitrot (ZFS). So far, it's fantastic.
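To check whether that ~78 MB/s ceiling is the wire or the disks, an iperf3 (or classic iperf) run between a client and the NAS isolates raw network throughput; the hostname here is hypothetical:

Code:
# On the FreeNAS box:
iperf3 -s
# On a client machine on the same LAN:
iperf3 -c freenas.local
# Gigabit should show ~940 Mbit/s of TCP throughput; much less than
# that points at the network rather than CIFS or the pool.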
 
Nearly finished version; the last things I am waiting for are:
  • M.2 SSD, which will allow me to remove the Samsung 830 from that spot between the motherboard and disk drives.
  • Bigger hole for the outtake fan.

Hardware: i5-4460, ASRock Z97 Extreme 6, 4x8GB DDR3, Corator DS with a Noiseblocker eLoop PWM fan, Samsung 830 256GB SSD + a bunch of hard drives (see below), Seasonic X460, Intel dual gigabit LAN card, two WiFi cards (for 2.4GHz and 5GHz APs), CineCT v6 TV card, IBM M1015 flashed to LSI 9211-8i, Pulse-Eight HDMI-CEC USB adapter, an old fan controller, some fans :).

Hard drives (ignoring the system SSD):
WD20EARS x2
WD20EARX x6
WD30EFRX x6
WD40EFRX x1

Total: 2x2TB + 6x2TB + 6x3TB + 1x4TB = 38TB (34.56 TiB).
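(For reference, the TiB figure is just the decimal total divided by 2^40; quick check with bc:)

Code:
# 38 TB (decimal) expressed in TiB (binary)
echo 'scale=2; 38 * 10^12 / 2^40' | bc
# 34.56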

HDD temps and idle CPU temps are both around 38-40C.

 
Very nice. Very custom. What kind of software do you use on this?
 
Just plain Ubuntu. UFW + dnsmasq + hostapd for the router/AP part, Samba & NFS for file sharing, Apache & co. for the web part, various stuff for backups... I could RAID some drives if there were a need for it, but so far they are simple separate drives, a big part of them unused (for example, the 4TB drive was bought last month because a friend was selling a never-used one for €100; market price is €170).
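In rough shell form, the moving parts look something like this; the interface name, paths, and subnet are made up:

Code:
# UFW: deny inbound by default, but trust the LAN interface
ufw default deny incoming
ufw allow in on eth1
# Publish a read-only NFS export to the LAN
echo '/srv/media 192.168.1.0/24(ro,no_subtree_check)' >> /etc/exports
exportfs -ra
# dnsmasq does DHCP+DNS, hostapd runs the two APs, smbd serves Windows shares
systemctl restart dnsmasq hostapd smbd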

The wood parts and a few L-profiles are the work of my brother; everything else was bought in shops.
The motherboard tray was transplanted from an old unused Lian Li PC-A17. My brother made a custom wood piece which is screwed to the main wooden tray to lift the mobo higher, and the aluminum tray sits on it perfectly. There are four L-profiles around the PSU with a cutout for some of the modular connectors, so the PSU is just dropped in and again fits perfectly. As you can see, the fans are screwed into one huge L-profile, and finally the HDD trays are these products from Nanoxia (AFAIK not sold in the USA): http://www.nanoxia-world.com/products/2/21/hddbays. At 7-8 euros apiece they are a bargain; all the bays for 15 drives cost less than 50 euros.

And why is the server in the "TV stand"? Well, why not :). It is invisible this way while fully accessible, temps are relatively good, and it is quiet...
 
Not that often. Of course it was cleaned before taking the photo, due to the simple fact that I had installed a new mobo back then.
 
This is my media server (update 2)

TOTAL STORAGE: 52 TB (34.7 TB usable)

Chassis: Supermicro CSE-846XE26-R1K28B
Mainboard: Asus M2N32-SLI Dlx/WF
CPU: AMD Athlon64 X2 6000+
RAM: Corsair XMS2 Twin2x2048-6400C4
VGA: Asus EAX1950XTX
HBA: Adaptec RAID 5805ZQ
HDD: 2x Seagate 7200.10 ST3250620AS
HDD: 2x Seagate 7200.11 ST31000340AS
HDD: 2x Western Digital Caviar Black WD1001FALS
HDD: 8x Western Digital RE4 WD2003FYYS
HDD: 8x Western Digital Re WD4000FYYZ
ODD: LiteON iHAS 120
DTV: Hauppauge HVR-2200

[Photos: the assembled media server, plus the CSE-846XE26-R1K28B chassis: front, fan wall, cabling, back, backplane, fully assembled, and in the rack.]
 
Some very nice systems that people are running out there; I need to get pics of my system up. I am running a 24x2TB system in a Backblaze case on NAS4Free with ZFS at the house. Getting ready to add another 6x4TB in RAIDZ2 just so I don't have to do the full rebuild thing.
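Growing a pool by adding a vdev instead of rebuilding is a one-liner in ZFS; a sketch with a hypothetical pool name and made-up FreeBSD device names:

Code:
# Attach a second vdev (6-disk RAIDZ2) alongside the original 24x2TB setup
zpool add tank raidz2 da24 da25 da26 da27 da28 da29
# The extra space shows up immediately; no rebuild or reboot needed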
 
It's taken a long time, especially with power outages and going on vacation, but I finally got all 24 of my 2TB drives replaced. None of my 24x4TB Hitachis from B&H Photo had any issues: the array rebuilt 24 times with no read/write errors, and all drives still have 0 reallocated sectors and 0 pending sectors. I've finally started the online capacity expansion, so it's initializing. I should be starting the filesystem expansion tomorrow:

Code:
dekabutsu ~ # cli64 rsf info
CLI>  #  Name             Disks TotalCap  FreeCap MinDiskCap         State          
===============================================================================
 1  90TB RAID SET       30 90000.0GB    0.0GB   3000.0GB         Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> GuiErrMsg<0x00>: Success.

CLI>  #  Name             Disks TotalCap  FreeCap DiskChannels       State          
===============================================================================
 1  48TB RAID SET       24 96000.0GB    0.0GB 123456789ABCDEFGHIJKLMNO Initializing
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> 
dekabutsu ~ # cli64 vsf info
CLI>   # Name             Raid Name       Level   Capacity Ch/Id/Lun  State         
===============================================================================
  1 DATA 2 VOLUME    90TB RAID SET   Raid6   84000.0GB 00/01/00   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> GuiErrMsg<0x00>: Success.

CLI>   # Name             Raid Name       Level   Capacity Ch/Id/Lun  State         
===============================================================================
  1 WINDOWS VOLUME   48TB RAID SET   Raid6    129.0GB 00/00/00   Normal
  2 MAC VOLUME       48TB RAID SET   Raid6     30.0GB 00/00/01   Normal
  3 LINUX VOLUME     48TB RAID SET   Raid6    129.0GB 00/00/02   Normal
  4 DATA VOLUME      48TB RAID SET   Raid6   43712.0GB 00/00/03   Initializing(52.1%)
===============================================================================
GuiErrMsg<0x00>: Success.

CLI>
 
And it's upgraded!

Code:
root@dekabutsu: 03:05 AM :~# dmesg | grep -i sde
sd 1:0:0:3: [sde] 85374958592 512-byte logical blocks: (43.7 TB/39.7 TiB)
sd 1:0:0:3: [sde] Write Protect is off
sd 1:0:0:3: [sde] Mode Sense: cb 00 00 08
sd 1:0:0:3: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sde: sde1
sd 1:0:0:3: [sde] Attached SCSI disk
XFS (sde1): Mounting Filesystem
XFS (sde1): Starting recovery (logdev: internal)
XFS (sde1): Ending recovery (logdev: internal)
sd 1:0:0:3: [sde] 171312463872 512-byte logical blocks: (87.7 TB/79.7 TiB)
sde: detected capacity change from 43711978799104 to 87711981502464
XFS (sde1): Mounting Filesystem
XFS (sde1): Ending clean mount
root@dekabutsu: 03:05 AM :~# df -H /data
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sde1               44T    42T   2.5T  95% /data
root@dekabutsu: 03:05 AM :~# xfs_growfs /data
meta-data=/dev/sde1              isize=256    agcount=40, agsize=268435440 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=10671869440, imaxpct=5
         =                       sunit=16     swidth=352 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=16 blks
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 10671869440 to 21414057723
root@dekabutsu: 03:06 AM :~# df -H /data
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sde1               88T    42T    47T  48% /data
root@dekabutsu: 03:06 AM :~#
 
So far I have 8 of the 24 disks replaced. I would be on disk 13 or so by now had I not been out of town for several days. Each rebuild is taking ~24 hours.

How are you going about this? Replacing each drive with a 4TB one, and expanding into the extra space at the end?
 
Yes, which I already did.

Very nice. I thought about going that route before I killed my array... I got a lot of negative feedback about expanding that way, but obviously it is possible since you did it, and I know you know what you are doing.
 
It depends on the type of drives you have in it. I was going Hitachi -> Hitachi, so I was not worried at all. No read errors or disk failures in the 24 rebuilds it took to expand my array this way.

I was not so confident when I was coming from an array of Seagate disks. In that case I mounted my main filesystem (20TB at the time) read-only so no new data could be written to the array; that way, if the array died because multiple disks failed at once (say, if any of the new Hitachis failed on me), I could re-use the old disks to recover. I don't even have that worry now that I have more experience with Hitachi and their reliability.

Also, I would only ever consider this with RAID6, and only replacing a single drive at a time. At no point (on this rebuild) was I ever in a double-degraded state. It is definitely safer to create a new array and copy your data over, and if I did not have *a lot* of trust in Hitachi I would have done it that way. None of my CoolSpin Hitachi disks have any reallocated sectors, even the ones powered on for over 900 days:

Code:
root@dekabutsu: 12:35 PM :~# diskinfo.sh --noserial
                        ARC-1280 Enclosure #1
Port     Model Number/Firmware           Re-aloc/Pend/PC/DaysOn  Temp:
1        HMS5C4040BLE640/MPAOA5D0        0/0/9/33                37
2        HMS5C4040BLE640/MPAOA5D0        0/0/6/32                37
3        HMS5C4040BLE640/MPAOA5D0        0/0/6/28                37
4        HMS5C4040BLE640/MPAOA5D0        0/0/6/27                37
5        HMS5C4040BLE640/MPAOA5D0        0/0/6/26                38
6        HMS5C4040BLE640/MPAOA5D0        0/0/6/24                39
7        HMS5C4040BLE640/MPAOA5D0        0/0/6/23                39
8        HMS5C4040BLE640/MPAOA5D0        0/0/6/22                39
9        HMS5C4040BLE640/MPAOA5D0        0/0/6/21                38
10       HMS5C4040BLE640/MPAOA5D0        0/0/6/20                39
11       HMS5C4040BLE640/MPAOA5D0        0/0/6/19                40
12       HMS5C4040BLE640/MPAOA5D0        0/0/6/18                40
13       HMS5C4040BLE640/MPAOA5D0        0/0/6/17                39
14       HMS5C4040BLE640/MPAOA5D0        0/0/6/16                40
15       HMS5C4040BLE640/MPAOA5D0        0/0/6/15                40
16       HMS5C4040BLE640/MPAOA5D0        0/0/6/14                39
17       HMS5C4040BLE640/MPAOA5D0        0/0/6/13                39
18       HMS5C4040BLE640/MPAOA5D0        0/0/4/11                39
19       HMS5C4040BLE640/MPAOA5D0        0/0/3/10                40
20       HMS5C4040BLE640/MPAOA5D0        0/0/3/9                 40
21       HMS5C4040BLE640/MPAOA5D0        0/0/3/8                 39
22       HMS5C4040BLE640/MPAOA5D0        0/0/3/7                 40
23       HMS5C4040BLE640/MPAOA5D0        0/0/3/6                 40
24       HMS5C4040BLE640/MPAOA5D0        0/0/3/5                 40

                        ARC-1880x Enclosure #2
Port     Model Number/Firmware           Re-aloc/Pend/PS/DaysOn  Temp:
1        HDS5C3030ALA630/MEAOA580        0/0/50/939              38
2        HDS5C3030ALA630/MEAOA580        0/0/52/938              39
3        HDS5C3030ALA630/MEAOA580        0/0/50/939              39
4        HDS5C3030ALA630/MEAOA580        0/0/50/939              39
5        HDS5C3030ALA630/MEAOA580        0/0/50/937              39
6        HDS5C3030ALA630/MEAOA580        0/0/50/937              41
7        HDS5C3030ALA630/MEAOA580        0/0/50/938              39
8        HDS5C3030ALA630/MEAOA580        0/0/50/938              39
9        HDS5C3030ALA630/MEAOA580        0/0/50/938              39
10       HDS5C3030ALA630/MEAOA580        0/0/50/937              40
11       HDS5C3030ALA630/MEAOA580        0/0/50/937              41
12       HDS5C3030ALA630/MEAOA580        0/0/50/937              40
13       HDS5C3030ALA630/MEAOA580        0/0/50/937              40
14       HDS5C3030ALA630/MEAOA580        0/0/50/936              39
15       HDS5C3030ALA630/MEAOA580        0/0/51/933              38

                        ARC-1880x Enclosure #3
Port     Model Number/Firmware           Re-aloc/Pend/PS/DaysOn  Temp:
1        HDS5C3030ALA630/MEAOA580        0/0/67/946              38
2        HDS5C3030ALA630/MEAOA580        0/0/63/946              38
3        HDS5C3030ALA630/MEAOA580        0/0/61/946              39
4        HDS5C3030ALA630/MEAOA580        0/0/63/946              39
5        HDS5C3030ALA630/MEAOA580        0/0/61/945              40
6        HDS5C3030ALA630/MEAOA580        0/0/60/944              41
7        HDS5C3030ALA630/MEAOA580        0/0/60/945              39
8        HDS5C3030ALA630/MEAOA580        0/0/60/944              40
9        HDS5C3030ALA630/MEAOA580        0/0/62/944              40
10       HDS5C3030ALA630/MEAOA580        0/0/62/945              39
11       HDS5C3030ALA630/MEAOA580        0/0/60/944              40
12       HDS5C3030ALA630/MEAOA580        0/0/62/945              39
13       HDS5C3030ALA630/MEAOA580        0/0/60/943              39
14       HDS5C3030ALA630/MEAOA580        0/0/60/943              39
15       HDS5C3030ALA630/MEAOA580        0/0/60/943              37

The disks in enclosure two have been seeing > 1 petabyte of read/write I/O per month over the last few months due to a pi calculation.
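The Areca controller does the heavy lifting here, but for anyone on plain Linux md wanting the same one-disk-at-a-time expansion, the analogous sketch (hypothetical device names, RAID6 array at /dev/md0, XFS on top) would be:

Code:
# Repeat once per member disk, waiting for each rebuild to finish:
mdadm /dev/md0 --fail /dev/sdX1 --remove /dev/sdX1   # retire one old disk
mdadm /dev/md0 --add /dev/sdY1                       # add its larger replacement
cat /proc/mdstat                                     # watch the resync complete
# After the final swap, grow the array into the new space, then the filesystem:
mdadm --grow /dev/md0 --size=max
xfs_growfs /data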
 
Very nice... all my disks are Hitachi too, for the same reason. Now in the process of building a new RAID6 array with 4TB Hitachi drives... I just have to get started and pick up another case.
 