[H]ard Forum Storage Showoff Thread

EricThompson guessed correctly. It's a Supermicro 5018A-AR12L.

Total Storage 137.4 TB
Max single system storage 72.6 TB in 1U

Supermicro 5018A-AR12L (1U)
RAW capacity 72.6 TB (HDD/SSD combined)
Atom C2750 (8x 2.4GHz cores)
32GB ECC RAM
Onboard LSI 2116
12x 6TB Hitachi HDD
5x 120GB Intel 530 SSD (2x boot, 3x cache)
10Gb Myricom CX4 NIC
Windows Server 2012 R2 Storage Spaces with 100GB write cache

Supermicro intended this spot for 2x 2.5" drives, with a mount that wasn't listed when I ordered the server. I adapted it to hold 5x 2.5" 7mm drives with a few zip ties. I wouldn't ship it like that, but it holds fine between the workbench and the rack.


ScreenShot152.jpg

ScreenShot153.jpg


Old pic of the rest of the rack

Case - Supermicro SC846E1-R900B
PSU - 900w redundant
Mobo - Supermicro X7DWN+
CPU - 2X Intel Xeon E5420 @ 3GHz
RAM 56GB
RAID controller - Supermicro AOC-USAS-H4iR
26x 2TB Hitachi
8x 450GB 15k Hitachi

Case - Supermicro SC118G-1400B
PSU - 1400w single
Mobo - Supermicro X8DTG-D
CPU - 2x Intel Xeon E5620 @2.4GHz
RAM 48GB
GPU 1x Nvidia 560ti 2GB
2x 600GB WD Velociraptors
4x 1TB WD Velociraptors

P1020201s.jpg
 
I'm finally getting to the point of setting up my Media Server.
I have an old Chenbro cube case I bought from a friend 8 years ago (I used to use it as my desktop PC case). The game plan is to fill it with Icy Dock FatCages and set it up using 8x 4TB WD Red Pro drives in RAID 6 via an LSI MegaRAID 9260, using some leftover PC parts from old builds (dual-core CPU, 4GB RAM, OCZ Vertex SSD for the boot drive).

Hoping to buy the drives in the next month.
Pix for clix

I'm debating buying one more Icy Dock so I can put 2 drives in each of the 3 available slots to help with cooling, plus it'll look badass :D
2015-02-10.jpg




full case:
IMG_20141204_221510.jpg

Don't mind the Dell PowerVault on the desk, that's for work, not mine :(
 
That's a neat little piece, but I can't afford to go full SSD RAID at this point.
Hopefully once drives drop in price over the next 2-4 years I can switch over to SSD.
For the time being I'll stick to good ol' 7200RPM SATA HDDs.
 
I'm debating buying one more Icy Dock so I can put 2 drives in each of the 3 available slots to help with cooling, plus it'll look badass

A suggestion: create two RAID 5 arrays, each comprising one drive from each dock, then stripe them. That way you'll be protected against the failure not only of a single drive but also of an entire Icy Dock.
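To make the layout concrete: with 8 drives spread across 4 docks (2 per dock), that's two 4-drive RAID 5 sets striped together, i.e. RAID 50. On the LSI 9260 you'd define it as a RAID 50 spanning two drive groups; purely as an illustration, here's the same layout built in software RAID (device names are just placeholders):

Code:
# each RAID 5 set takes one drive from each Icy Dock (hypothetical /dev/sdX names)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdd /dev/sdf /dev/sdh
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdc /dev/sde /dev/sdg /dev/sdi
# stripe the two sets together (RAID 50); a dead dock costs each RAID 5 set only one drive
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1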
 
I'll reserve a spot for my build as I'm currently moving everything around while upgrading from my 39 TB server.
I should update my sig too.

Edit: Finally adding pictures

Box 2.0
AMD proc, Gigabyte Mobo
Areca 1880i
HP SAS Expander

14x 2TB RAID 6 = 24TB
7x 3TB RAID 6 = 15TB

Box 1.0 was running with 99.9% uptime (an ice storm last winter ruined 99.99%). I recently replaced the 4-year-old AMD guts with new AMD guts. I use it as just a home file server, nothing fancy. Normally a laser printer sits on top of the Norco case, but since replacing the guts and reformatting I couldn't get the drivers to work again on Windows Server 2008 R2. Lastly, for those who can count, I do have 22 drives in the case. I have the OS SSD mounted where the internal DVD drive goes, and next to it inside is the 21st 3TB drive, which does make hot swapping a chore.
MwF5nyZl.jpg



Tower
I bought this server off a friend who had to move out of the country and didn't want to bring it with him or sell it off.

The guts are unknown, as are the random PCI SATA cards and the 500W PSU. It also has Sans Digital 4-bay HDD enclosures. It's running Unraid 6 b12, with a whole bunch of mix-and-match drives:
1x 500GB (WD 5000AALS Black Edition, Cache drive)
2x 4TB (WD40EZRX, 1 is Parity drive)
3x 3TB (WD30EZRX)
15x 2TB (13x WD20EARS, 2x WD20EADS)
1x 1.5TB (Seagate ST31500341AS)

44.5TB array, with 500GB cache.
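The 44.5TB figure checks out against the drive list (one 4TB drive is parity, and the cache drive isn't counted in the array):

Code:
# data drives only: 1x 4TB + 3x 3TB + 15x 2TB + 1x 1.5TB
echo '4 + 3*3 + 15*2 + 1.5' | bc    # = 44.5 (TB)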

lq6ztSJl.jpg



The plan is to combine both of these; I still don't know if I'm going to keep the H/W RAID or go software RAID. They each have their +/-'s and I've got way more storage than I need :p

Finally got around to updating my original post.
 
Amount of advertised storage: 143TB

Main File Server - Two Norco chassis built into one. (Cut the bottom out of the top 20-bay chassis.)
Amount of storage in the following system: 118TB (Formatted)
Case: NORCO RPC-4220 (Top Case), NORCO RPC-4224 (Bottom Case) + Rackable SE3016 + Rackable SE3016
PSU: Corsair 950w (Top Case), Corsair 850w (Bottom Case)
CPU: Intel Quad Core Xeon X3330
Motherboard: Asus P5BV-M
RAM: 8GB (4x2GB)
SSD: Western Digital 64GB SSD
RAID 1: 20x Hitachi 7K2000 HDS722020ALA330 (Raid 6 + Cold Spare) (32.7TB) (Norco RPC-4220)
RAID 2: 16x Hitachi 5K3000 HDS5C3020ALA632 (Raid 6 + Cold Spare) (25.4TB) (Rackable SE3016)
RAID 3: 24x Toshiba DT01ACA300 (Raid 6 + Cold Spare) (60TB) (Norco RPC-4224)
Raid Card: Areca ARC-1680IX-24-2G w/BBU + HP SAS Expander
Operating System: Windows Server 2008 R2 with Drive Bender to pool all the arrays together.

VM Server
Amount of storage in the following system: 8.86TB (Formatted)
Case: Supermicro 836
PSU: 920w redundant
CPU: Dual Intel Xeon X5670 (24 threads)
Motherboard: Supermicro X8DTH
RAM: 96GB (12x8GB) DDR3-1333 ECC
SSD: OCZ Vertex 3 120GB (OS)
RAID 1: 6x Samsung 830 120GB SSD (Raid 5) (593GB) (VM Array)
RAID 2: 10x Western Digital 1TB RE3 WDCWD1002FBYS0 (Raid 5) (8.18TB) (VM Storage + Backup)
Raid Card: LSI 9260-4i
Operating System: Windows Server 2008 R2


 
Wicked nice setup. I have the same case sans the SATA backplane, did you get them somewhere for a good deal?

Everywhere I found they cost an arm and a leg.

This is where I get mine from. Not horribly expensive and they have a great range.
http://www.moddiy.com/categories/Connectors/SATA-Connectors/

I prefer to use Molex on the other end of the cable as it's a bit more hard-wearing than SATA (they break a bit if you squash the end in behind a case panel).

My Cables
img02331a.jpg


I swapped the end to SATA and have been kicking myself ever since. One day I'll swap them back.
 
24x2TB SAS in each 4U connected to an LSI2308 HBA. Pair of SSDs in each chassis, too. ESXi 6 with the controller passed through to CentOS 7 with ZFS on Linux. It's basically the same setup I had before, only now it's finally rack mounted. It pulls 750W on boot, then settles down to about 260W. With both of them running, it's 60dBA @ 3'.

A warning for anyone who uses the Supermicro SC846 chassis - The rails that come with it will not allow you to pull the chassis out far enough to remove the lid. To get rails that work right, you need to buy the ones that come with the SC826 or SC836. Supermicro is aware of the issue, and has no intention of resolving it.
lRAzO7Dh.jpg

Code:
[root@nas ~]# zpool status pool
  pool: pool
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on software that does not support
        feature flags.
  scan: scrub repaired 0 in 9h53m with 0 errors on Sat Mar 21 09:54:16 2015
config:

        NAME                              STATE     READ WRITE CKSUM
        pool                              ONLINE       0     0     0
          raidz2-0                        ONLINE       0     0     0
            scsi-35000c50034f36cff        ONLINE       0     0     0
            scsi-35000c50034eb58bb        ONLINE       0     0     0
            scsi-35000c50034f44577        ONLINE       0     0     0
            scsi-35000c50034e85e4b        ONLINE       0     0     0
            scsi-35000c50034f422b7        ONLINE       0     0     0
            scsi-35000c50034e85c3f        ONLINE       0     0     0
            scsi-35000c50040cf0c4f        ONLINE       0     0     0
            scsi-35000c500409ae567        ONLINE       0     0     0
            scsi-35000c500409946ff        ONLINE       0     0     0
            scsi-35000c5003c95a907        ONLINE       0     0     0
            scsi-35000c50034fbe17b        ONLINE       0     0     0
            scsi-35000c50034f3dfc7        ONLINE       0     0     0
          raidz2-1                        ONLINE       0     0     0
            scsi-35000c50034f3cc5f        ONLINE       0     0     0
            scsi-35000c50034f3e81f        ONLINE       0     0     0
            scsi-35000c50034ea0857        ONLINE       0     0     0
            scsi-35000c50034ff6167        ONLINE       0     0     0
            scsi-35000c50034f3decf        ONLINE       0     0     0
            scsi-35000c50034f421c7        ONLINE       0     0     0
            scsi-35000c50034f3daeb        ONLINE       0     0     0
            scsi-35000c50034ff1b8b        ONLINE       0     0     0
            scsi-35000c50034f42db7        ONLINE       0     0     0
            scsi-35000c50034f3d3ab        ONLINE       0     0     0
            scsi-35000c50034e011d3-part1  ONLINE       0     0     0
            scsi-35000c5003c95abdf        ONLINE       0     0     0

errors: No known data errors
[root@nas ~]# fdisk -l|grep /dev/sd
Disk /dev/sda: 17.2 GB, 17179869184 bytes, 33554432 sectors
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    33554431    16264192   8e  Linux LVM
Disk /dev/sdj: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdk: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdi: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdp: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdw: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdh: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sde: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdq: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdn: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdg: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdu: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdv: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdl: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdo: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdm: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sds: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdf: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdx: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdr: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdy: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdc: 1024.2 GB, 1024209543168 bytes, 2000409264 sectors
Disk /dev/sdt: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdz: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
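For reference, a pool with the layout above (two 12-disk RAIDZ2 vdevs) gets created along these lines; the short device names below are placeholders (the real pool uses the /dev/disk/by-id names shown), and ashift=12 is just the usual choice for 4K-sector drives:

Code:
# sketch only - substitute the real by-id device paths
zpool create -o ashift=12 pool \
    raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm \
    raidz2 sdn sdo sdp sdq sdr sds sdt sdu sdv sdw sdx sdy
# the status above also suggests 'zpool upgrade pool' to enable feature flags
# (one-way: older software without feature flag support can no longer import it)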
 
A warning for anyone who uses the Supermicro SC846 chassis - The rails that come with it will not allow you to pull the chassis out far enough to remove the lid. To get rails that work right, you need to buy the ones that come with the SC826 or SC836. Supermicro is aware of the issue, and has no intention of resolving it.

I noticed that after getting my two SC846's installed. I'm thankful for tool-less blanking panels.

KSGPqcph.jpg
 

very clean very nice!!!
 
TeeJayHoward: wondering about the flat ethernet cables? I've always been wary of them because of the lack of twist. Is there a reason you chose them, and have you had any trouble? Also, what chassis is the top server in? I don't recognize it with the vertical drives.
 
TeeJayHoward: wondering about the flat ethernet cables? I've always been wary of them because of the lack of twist. Is there a reason you chose them, and have you had any trouble?

They are twisted, just laid side by side instead of bundled in a square.
 
TeeJayHoward: wondering about the flat ethernet cables? I've always been wary of them because of the lack of twist. Is there a reason you chose them, and have you had any trouble? Also, what chassis is the top server in? I don't recognize it with the vertical drives.
The flat ethernet cables were just a "Huh, haven't seen those before. Let's give 'em a shot" purchase. They're working out pretty well. Can't say I prefer one way or the other.

The "top server" is actually eight servers in one 3U chassis. It's a Supermicro MicroCloud. 8xE3v3s, each with two 3.5" bays.
 
Total multiple system storage 128 TB
Total single/internal system storage 96 TB

96 TB (Gross) System:

Case: Ri-vier RV-4324-01A
PSU: Seasonic Platinum 860
Motherboard: Supermicro X9SCM-F
CPU: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz
RAM: 16 GB ECC
Controller Cards: 3 x IBM M1015
Hard Drives: 24 x HGST HDS724040ALE640 4 TB (7200RPM)
Operating System: Linux (Debian Wheezy)
NIC: 2x onboard Gigabit + 4x 1Gbit NIC bonding for 450 MB/s NFS data transfers.
Filesystem: ZFS (ZoL)
Net Capacity: 71 TB usable
Read (2.5 GB/s) / write (1.9 GB/s) (gigabytes, not gigabits)

The system consists of two VDEVs, one of 18 disks and one of 6 disks, both in RAIDZ2.
Boot drives are two 120 GB SSDs.
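The 71 TB net figure follows straight from that layout, since only the non-parity disks count (quick check, before ZFS overhead eats another TiB or so):

Code:
# 18-disk RAIDZ2 -> 16 data disks, 6-disk RAIDZ2 -> 4 data disks, 4 TB each
# 0.9095 is the TB-to-TiB conversion factor
echo '(16 + 4) * 4 * 0.9095' | bc -l    # ~72.8 TiB; ~71 TiB after ZFS overhead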

Chassis fan speed is governed by the temperature of the hottest drive.
Python script can be found here:
https://github.com/louwrentius/storagefancontrol
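The gist of it in a rough bash sketch (the linked Python script is the real thing; the SMART attribute parsing and the hwmon PWM path are assumptions that vary per drive and motherboard):

Code:
#!/bin/bash
# map the hottest drive temperature to a fan PWM value (sketch only)
max=0
for d in /dev/sd?; do
    t=$(smartctl -A "$d" | awk '/Temperature_Celsius/ {print $10}')
    [ -n "$t" ] && [ "$t" -gt "$max" ] && max=$t
done
# scale 30-50 C onto PWM 80-255, clamped; hwmon path differs per board
pwm=$(( (max - 30) * (255 - 80) / 20 + 80 ))
[ "$pwm" -lt 80 ] && pwm=80
[ "$pwm" -gt 255 ] && pwm=255
echo "$pwm" > /sys/class/hwmon/hwmon2/pwm1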

Pictures of the system:
zfsnas01.jpg

topview.jpg

4cards.jpg

backside.jpg



My Lack-Rack setup front:
lackrack01.jpg

Back:
http://louwrentius.com/static/images/lackrack02.jpg
Also seen:
10 TB Download Server
20 TB Old NAS (Linux + 20 x 1 TB in RAID6 MDADM)
HP Microserver N54L as Router/Firewall

Blog post:
http://louwrentius.com/71-tib-diy-nas-based-on-zfs-on-linux.html
 
Q, that looks fantastic. I love the Ikea tables, and what are you using to run your ethernet cables through? That looks pretty brilliant :D
 
Q, that looks fantastic. I love the Ikea tables, and what are you using to run your ethernet cables through? That looks pretty brilliant :D

Thank you. These are just standard cable guides you can buy at your local shop (not sure what those stores are called in English). You just cut them to length and use double-sided adhesive tape to stick them to the table. I used the same stuff to tidy away the cables to/from the electronic equipment in my living room.

Extra pic of the backside:

http://louwrentius.com/static/images/lackrack02.jpg
 

Nice and clean setup. The only thing that would bother me is the Lack-Racks. While they are neat, weight capacity is not their best point.
With the weight of the servers above the bottom one, a bump would likely see it topple. In the past, I have seen right-angle steel or aluminium placed around the legs to help stiffen them up.

Mind you, a rack is the next step maybe.
 
Since there is a new thread I might as well redo it.

IMG_0464.jpg

IMG_0572.jpg

IMG_0461.jpg

yes it has more drives now.

Case: Custom. It consists of 3 separate modules connected internally, for fast assembly and moving to a LAN. Making it move-ready is a 5-minute task.
Mobo: Supermicro X9-SAE-V
CPU: Intel Xeon E3-1265LV2
RAM: 16 GB, 2x 8GB ECC
OS: ZFSguru (FreeBSD 10.1-001)
Controllers: IBM M1015 (IT flashed) with 2x HP SAS Expanders
SSD: 2x Intel Postville 160 GB (OS, ZIL and SLOG)
HDD: 1x RAIDZ2 8x 3TB WD Green
1x RAIDZ2 8x 4TB Hitachi 5K4000
1x RAIDZ2 8x 4TB WD Red
Capacity: around 62 TB usable and 81 TB raw
Network: 2x onboard 1Gb NICs
Power: ~110W idle, ~180W load
 
I have 40 x 2TB + 10 x 4TB in a single ZFS pool. Usable space is 108.8 TiB.

System consists of 3 Supermicro 846 chassis.

Main chassis:

Supermicro X10SRL-F LGA 2011-3 motherboard
Intel Xeon E5-1620 v3 Haswell CPU
22 x Samsung 16GB DDR4 2133 ECC RAM
2 x LSI SAS9200-8e HBA
1 x LSI SAS9211-8i HBA

846main.JPG


Disk only chassis:

This has the brand new Supermicro power controller with IPMI ethernet and 10 or so 4-pin fan connectors. Also monitors power consumption.

846diskonly.JPG


Disk / HTPC chassis:

The 3rd chassis is much like the 2nd chassis in that it is connected to a dedicated LSI controller in the main server. But instead of a power controller I run a little Asus Haswell mobo which runs my HT projector in the next room, as well as hosting my JRiver music library, and connects to my pre-amp via HDMI for whole-house sound. I of course have to be careful never to shut this chassis down without taking down the main FreeNAS server first!

846htpc.JPG


View of the 3 chassis in the rack running a scrub:

50disk-01.JPG


And a view of both racks. Data on the left and audio on the right.

846bothracks.JPG


Volume view from GUI:

50diskvolumeoverview.PNG
 
My basement was unfinished when I moved in, so when I finished it, I basically built that particular wall around the racks.

Here's a view from the back side.

846backside.JPG
 
Thank you. What you've got is essentially my long term plan for when I own a house (several years from now). Very, very nice.
 
Freakin' sweet! Bet it's pretty warm in that room back there.
What's the receptacle with the orange wire? 240?
 
And show us what the Crowns and the rest of the gear are powering! Maybe post in the mancave thread (maybe you already have)?
 
Freakin' sweet! Bet it's pretty warm in that room back there.
What's the receptacle with the orange wire? 240?
With just the computer gear running it doesn't get too hot, but when the power amps are running, it can get pretty toasty.

There are actually 2 receptacles with orange wires. They are 120V 30A outlets (each wired to a dedicated 30A breaker). They used to each feed an APC 3kW SmartUPS. Those have both been retired (but are still sitting in the bottom of the rack) and now I just run a single APC 2.2kW SmartUPS LCD that is plugged into one of those 30A outlets.

On the wall you can also see 3 quad outlets behind the Supermicro 846 cases. Each has a dedicated 20A breaker. Despite that, I have still managed to trip a breaker on several occasions when playing bass-heavy music. :D
 
And show us what the Crowns and the rest of the gear are powering! Maybe post in the mancave thread (maybe you already have)?

I'll post what the amps power in the mancave thread, but provide a couple of teaser pics here. These are TC Sounds PA-5000 18" subs, btw. :D

subs5.jpg


ht1.jpg
 
freaking love that custom chassis setup
 
I have to buy a NAS for our data (160TB max).
This is the shopping list:
Motherboard : dual-socket LGA2011
Processors : 2x Intel Xeon E5-2620 v3
RAM : 256GB DDR4 (16 x 16GB)
HDD OS : 2x 1TB SAS
L2ARC : 2x HGST s842 400GB SSD
ZIL : 2x HGST ZeusRAM 8GB SSD
Network : 1x Intel X520 10Gb + single-mode GBIC
: 1x Intel I350 quad port
Drive interfaces : 1x SAS 6Gb HBA, 8 internal ports
: 1x SAS 6Gb HBA, 8 external ports
HDD : 80x 4TB nearline SAS 6Gb in RAID 10 (quick capacity check below)
Arrays : 2x 45-disk JBOD with dual backplanes
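Rough capacity check for the RAID 10, assuming straight 2-way mirrors:

Code:
# 80 drives x 4 TB, mirrored in pairs -> half the raw capacity is usable
echo $(( 80 * 4 / 2 ))    # = 160 (TB), before filesystem overhead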

For software: OmniOS + napp-it (latest version).

Is there anything wrong with this???
 