The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Won't be long before I do an update now guys =)



I am really impressed with the speed of the array too. Here are the read speeds to the block device:

Code:
root@dekabutsu: 01:01 PM :~# dd bs=1M count=20000 iflag=direct if=/dev/sde of=/dev/null
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 16.4803 s, 1.3 GB/s

and write speed to the file-system:

Code:
root@dekabutsu: 01:01 PM :/data2# dd bs=1M count=20000 oflag=direct if=/dev/zero of=./20gb.bin
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 21.8738 s, 959 MB/s

Keep in mind this is *while* the machine is doing a background init @ 80% priority. I'm already getting significantly better results than my ARC-1280 in its normal state.

Total power usage of both 15-disk enclosures under full load was around 270 watts, so not too bad there either for powering 30 disks.

Some SMART stats of the disks, using a script I wrote to parse just the info I want from smartctl (including temps):

Code:
root@dekabutsu: 01:30 PM :~# /bin/diskinfo.sh --noserial
                        ARC-1280 Enclosure #1
Port     Model Number/Firmware           Re-aloc/Pend/PS/DaysOn  Temp:
1        HDS722020ALA330/JKAOA28A        0/0/31/471              37
2        HDS722020ALA330/JKAOA28A        0/0/29/471              37
3        HDS722020ALA330/JKAOA28A        0/0/29/471              37
4        HDS722020ALA330/JKAOA28A        0/0/29/470              36
5        HDS722020ALA330/JKAOA28A        1/0/30/470              37
6        HDS722020ALA330/JKAOA28A        0/0/29/469              40
7        HDS722020ALA330/JKAOA28A        0/0/29/469              40
8        HDS722020ALA330/JKAOA28A        0/0/29/468              38
9        HDS722020ALA330/JKAOA28A        0/0/30/468              39
10       HDS722020ALA330/JKAOA28A        0/0/29/468              40
11       HDS722020ALA330/JKAOA28A        0/0/29/467              40
12       HDS722020ALA330/JKAOA28A        1/0/29/467              39
13       HDS722020ALA330/JKAOA28A        10/0/33/466             39
14       HDS722020ALA330/JKAOA28A        0/0/32/466              38
15       HDS722020ALA330/JKAOA28A        0/0/32/465              39
16       HDS722020ALA330/JKAOA28A        0/0/33/465              40
17       HDS722020ALA330/JKAOA28A        0/0/34/464              38
18       HDS722020ALA330/JKAOA28A        0/0/33/464              40
19       HDS722020ALA330/JKAOA28A        0/0/33/463              40
20       HDS722020ALA330/JKAOA28A        0/0/33/463              37

                        ARC-1880x Enclosure #2
Port     Model Number/Firmware           Re-aloc/Pend/PS/DaysOn  Temp:
1        HDS5C3030ALA630/MEAOA580        0/0/6/0                 35
2        HDS5C3030ALA630/MEAOA580        0/0/7/0                 35
3        HDS5C3030ALA630/MEAOA580        0/0/6/0                 36
4        HDS5C3030ALA630/MEAOA580        0/0/6/0                 36
5        HDS5C3030ALA630/MEAOA580        0/0/6/0                 36
6        HDS5C3030ALA630/MEAOA580        0/0/6/0                 37
7        HDS5C3030ALA630/MEAOA580        0/0/6/0                 36
8        HDS5C3030ALA630/MEAOA580        0/0/6/0                 36
9        HDS5C3030ALA630/MEAOA580        0/0/6/0                 36
10       HDS5C3030ALA630/MEAOA580        0/0/6/0                 36
11       HDS5C3030ALA630/MEAOA580        0/0/6/0                 37
12       HDS5C3030ALA630/MEAOA580        0/0/6/0                 37
13       HDS5C3030ALA630/MEAOA580        0/0/6/0                 36
14       HDS5C3030ALA630/MEAOA580        0/0/6/0                 36
15       HDS5C3030ALA630/MEAOA580        0/0/6/0                 35

                        ARC-1880x Enclosure #3
Port     Model Number/Firmware           Re-aloc/Pend/PS/DaysOn  Temp:
1        HDS5C3030ALA630/MEAOA580        0/0/19/7                35
2        HDS5C3030ALA630/MEAOA580        0/0/18/7                35
3        HDS5C3030ALA630/MEAOA580        0/0/16/7                36
4        HDS5C3030ALA630/MEAOA580        0/0/18/7                36
5        HDS5C3030ALA630/MEAOA580        0/0/16/7                37
6        HDS5C3030ALA630/MEAOA580        0/0/15/7                37
7        HDS5C3030ALA630/MEAOA580        0/0/15/7                36
8        HDS5C3030ALA630/MEAOA580        0/0/15/7                37
9        HDS5C3030ALA630/MEAOA580        0/0/17/7                36
10       HDS5C3030ALA630/MEAOA580        0/0/17/7                36
11       HDS5C3030ALA630/MEAOA580        0/0/15/7                37
12       HDS5C3030ALA630/MEAOA580        0/0/17/7                36
13       HDS5C3030ALA630/MEAOA580        0/0/15/7                37
14       HDS5C3030ALA630/MEAOA580        0/0/15/7                37
15       HDS5C3030ALA630/MEAOA580        0/0/15/7                35
root@dekabutsu: 01:31 PM :~#
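
The script itself isn't posted here, but a minimal sketch of something similar is below; the column meanings are guesses from the header ("PS" read as power-cycle count, "DaysOn" as power-on hours divided by 24), the plain /dev/sd? names are placeholders, and drives sitting behind the Areca controllers would need smartctl's -d areca,N option rather than being hit directly:

Code:
#!/bin/bash
# Minimal sketch of a smartctl summary like the output above (not the actual diskinfo.sh).
# Assumes directly attached SATA disks at /dev/sd?; column meanings are guesses.
printf "%-8s %-31s %-23s %s\n" "Port" "Model Number/Firmware" "Re-aloc/Pend/PS/DaysOn" "Temp:"
for dev in /dev/sd?; do
    out=$(smartctl -a "$dev" 2>/dev/null)
    model=$(awk -F': *' '/^Device Model/ {print $2}' <<< "$out")
    fw=$(awk -F': *' '/^Firmware Version/ {print $2}' <<< "$out")
    realloc=$(awk '/Reallocated_Sector_Ct/ {print $10}' <<< "$out")
    pending=$(awk '/Current_Pending_Sector/ {print $10}' <<< "$out")
    cycles=$(awk '/Power_Cycle_Count/ {print $10}' <<< "$out")
    hours=$(awk '/Power_On_Hours/ {print $10}' <<< "$out")
    temp=$(awk '/Temperature_Celsius/ {print $10}' <<< "$out")
    printf "%-8s %-31s %-23s %s\n" "$dev" "$model/$fw" \
        "$realloc/$pending/$cycles/$(( ${hours:-0} / 24 ))" "$temp"
done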

The first enclosure (1280) is a Norco and the other two are Supermicro with the 5V mod.

And none of the retail-packaged drives I got from Fry's had any bad sectors or arrived dead out of the box (all 30 worked).
 
@ houkouonchi : What RAID level is that? I tried to discern from your screenshots but was unable to.
 
It's RAID6. The CLI will tell you the RAID level. I thought the web interface did as well, but it doesn't unless you click on the volumes.

CLI:

Code:
root@dekabutsu: 10:18 AM :~# cli64 vsf info
CLI>   # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 WINDOWS VOLUME   40TB RAID SET   Raid6    129.0GB 00/00/00   Normal
  2 MAC VOLUME       40TB RAID SET   Raid6     30.0GB 00/00/01   Normal
  3 LINUX VOLUME     40TB RAID SET   Raid6    129.0GB 00/00/02   Normal
  4 DATA VOLUME      40TB RAID SET   Raid6   35712.0GB 00/00/03   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> GuiErrMsg<0x00>: Success.

CLI>   # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 DATA 2 VOLUME    90TB RAID SET   Raid6   84000.0GB 00/01/00   Initializing(52.4%)
===============================================================================
GuiErrMsg<0x00>: Success.

CLI>

So it looks like the (background) initialization is going to take around 40 hours or so.
 
So it looks like the (background) initialization is going to take around 40 hours or so.

I guess I was a bit off =)

Code:
2011-09-04 05:21:17  DATA 2 VOLUME    Complete Init         040:39:42

Very impressed with the speeds. Raw device reads:

Code:
root@dekabutsu: 05:22 AM :~# dd count=60000 bs=1M iflag=direct if=/dev/sde of=/dev/null
60000+0 records in
60000+0 records out
62914560000 bytes (63 GB) copied, 37.7154 s, 1.7 GB/s

File-system write:

Code:
root@dekabutsu: 05:25 AM :/data2# dd count=60000 bs=1M oflag=direct if=/dev/zero of=./60gb.bin
60000+0 records in
60000+0 records out
62914560000 bytes (63 GB) copied, 62.648 s, 1.0 GB/s

File-system writes honestly aren't as good as I expected. Also, I was only hitting around 80% util, which I found odd. Writing directly to the block device resulted in the same speeds. I am happy with that almost 2 GB/sec read speed though =)
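
For reference, the utilization number typically comes from iostat's extended stats while the dd is running; a minimal example is below, with the device name assumed to be the array device from above:

Code:
iostat -x sde 1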

My ARC-1280 previously gave me the following when doing a read/write over almost the entire array (both were near the same value).

Read:

Code:
17166132+0 records in
17166132+0 records out
17999994028032 bytes (18 TB) copied, 21999.1 s, 818 MB/s

Write:
Code:
dd: writing `/dev/sdd': No space left on device
17166117+34 records in
17166116+34 records out
17999994028032 bytes (18 TB) copied, 22649.6 s, 795 MB/s

The ARC-1280ML was bottlenecked at about the same speed for reads and writes. I wonder why the write speeds are so much slower than the reads on the 1880. I honestly would have expected them to be near the same. Maybe it's the large number of disks (30) that is causing the issue?

Here is the difference in seeks/sec between the 20x2TB 7200 RPM array and the 30x3TB 5600 RPM array (via SAS expanders):

20x2TB:
Code:
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdd [69749987328 blocks, 35711993511936 bytes, 33259 GB, 34057611 MB, 35711 GiB, 35711993 MiB]
[512 logical sector size, 512 physical sector size]
[256 threads]
Wait 30 seconds.............................
Results: 1985 seeks/second, 0.504 ms random access time (129761493 < offsets < 35711759315971)

30x3TB:

Code:
root@dekabutsu: 05:42 AM :~# ./seeker_baryluk /dev/sde 256
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sde [164062474240 blocks, 83999986810880 bytes, 78231 GB, 80108630 MB, 83999 GiB, 83999986 MiB]
[512 logical sector size, 512 physical sector size]
[256 threads]
Wait 30 seconds..............................
Results: 1925 seeks/second, 0.519 ms random access time (346681048 < offsets < 83999843022147)

I expected them to be about the same. I am sure the SAS expanders add latency, and since the drives are lower RPM, it made sense that 50% more spindles (30 vs. 20) would yield about the same number of random 512b seeks per second.
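
Back-of-envelope from those two runs (my arithmetic, not a separate per-disk measurement): 1985 seeks/sec across 20 drives works out to roughly 99 IOPS per 7200 RPM drive, while 1925 across 30 drives is roughly 64 IOPS per CoolSpin drive, and that ~1.5x per-drive gap roughly matches the single-threaded numbers below (75 vs 46 seeks/sec).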

There is a pretty big difference in the single-threaded stats (which don't take advantage of the RAID) due to the difference in disk RPM:

2tb:
Code:
root@dekabutsu: 05:43 AM :~# ./seeker_baryluk /dev/sdd 1
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdd [69749987328 blocks, 35711993511936 bytes, 33259 GB, 34057611 MB, 35711 GiB, 35711993 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 75 seeks/second, 13.286 ms random access time (4907056513 < offsets < 35688543782462)

vs

3tb:
Code:
root@dekabutsu: 05:44 AM :~# ./seeker_baryluk /dev/sde 1
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sde [164062474240 blocks, 83999986810880 bytes, 78231 GB, 80108630 MB, 83999 GiB, 83999986 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 46 seeks/second, 21.307 ms random access time (76251250826 < offsets < 83915357010958)

I would have loved to have more write speed but overall I am still happy. I will probably update my post tomorrow night with new pictures/stats.

I will also run some benchmarks on Windows while I still can (while the volume doesn't have data on it and I can format it as NTFS).

I will see if the write speeds are also bottlenecked at ~1GB/sec on Windows too.
 
OK, so I updated my thread.

Total multiple system storage 197.4TB
Total single/internal system storage 45TB

Newest rack pics. Disk activity isn't captured well with the flash (I took two different shots to try to show the difference). Also included a slightly blurry no-flash pic which shows all the LEDs as active:

New 30x3 TB drives bought from Fry's. Arranged the boxes up on a wall to take a pic =)


No flash pic (shows disk activity but blurry):


Two flash pics (clear but disk activity washed out):



Back of the rack showing cables going to boxes with just a SAS expander in them:



I might take a video or something later. I also updated the pictures I had of my internet setup that my router box (a Zeroshell machine) handles, as I have since upgraded to a faster connection (150/75).
 
Very nice and clean setup!

It looks like you've pulled a few of your power supplies in your redundant PSU, how come?
 
Very nice and clean setup!

It looks like you've pulled a few of your power supplies in your redundant PSU, how come?

It'll cause an alarm if there is no power plugged into one of the redundant PSUs. So if you pull the PSU out entirely, it won't cause any alarms :)
 
It'll cause an alarm if there is no power plugged into one of the redundant PSUs. So if you pull the PSU out entirely, it won't cause any alarms :)

Not only that, but it will also draw power (even with nothing plugged in). If I did plug them in, each additional PSU would use even more power. Just one of the PSUs has no problem with all the disks spinning up at once (as the SAS expander handles that), so there's no reason to have the others plugged in.
 
Just got my new home fileserver online.

Lian-Li PC-Q08B
Seasonic X-400 fanless PSU
Zotac H55 mini ITX
Intel Core i3-540
8GB RAM
6x 3TB Hitachi 5400rpm drives
32GB USB stick for the OS

Running FreeBSD 8.2-RELEASE w/ZFS.

Pretty stoked. It replaces a Shuttle w/ a 2TB striped (meant to mirror, d'oh) ZFS volume.

KjZwU.png


eZGUr.jpg
 
Thanks!

I ran bonnie on it last night, here's the output from using a 10GB file:


Code:
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         10240 274456 74.9 205785 29.3 136330 20.3 251618 80.3 433442 25.0 278.9  0.6
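
The exact command line wasn't posted; for a 10GB file with classic bonnie it would be something along the lines of the sketch below, where the scratch directory is an assumption (wherever the pool is mounted):

Code:
bonnie -d /storage -s 10240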

And the best read performance I saw:

Code:
                    capacity     operations    bandwidth
pool              used  avail   read  write   read  write
---------------  -----  -----  -----  -----  -----  -----
storage          12.7G  16.2T  3.39K      0   432M      0
  raidz2         12.7G  16.2T  3.39K      0   432M      0
    label/disk1      -      -    865      0   108M      0
    label/disk2      -      -    896      0   107M      0
    label/disk3      -      -    881      0   107M      0
    label/disk4      -      -    898      0   108M      0
    label/disk5      -      -      0      0  12.8K      0
    label/disk6      -      -      0      0  12.8K      0
---------------  -----  -----  -----  -----  -----  -----

Writes peaked at around 2.5k/s.

I tried a 12GB file but it started trying to swap. I have no swap, so that didn't go well :D
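
For reference, a pool laid out like the one in that iostat output can be created along these lines on FreeBSD; the glabel names match what's shown above, but the ada device numbers, and doing it by hand at all, are assumptions on my part:

Code:
# label each raw disk once so the pool is device-order independent (ada numbers are placeholders)
glabel label disk1 /dev/ada0
glabel label disk2 /dev/ada1
glabel label disk3 /dev/ada2
glabel label disk4 /dev/ada3
glabel label disk5 /dev/ada4
glabel label disk6 /dev/ada5
# build the 6-disk raidz2 pool from the labels
zpool create storage raidz2 label/disk1 label/disk2 label/disk3 label/disk4 label/disk5 label/disk6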
 
Just finished my first ZFS build today! Thanks to Napp-it and some help from [H] friends and co., everything went rather smoothly for a first experience.

BUILD
MOBO..........................MSI X58M 1366
CPU..............................Intel Xeon E5606
SSD..............................Kingston SSDNow V100 64GB
HDD..............................8x Hitachi Deskstar 5K3000 2TB
PSU..............................Antec QuattroPower 850W Modular
GPU..............................EVGA 8600GT
HS.................................CORSAIR CAFA50
RAM..............................G.SKILL Ripjaws Series 8GB
CASE...........................Antec Three Hundred Mini Tower
NIC................................Intel PRO/1000 Pci-e
SAS..............................LSI SASUC8I (flashed with IT firmware. See this)

OS................................OpenIndiana with Napp-it
Total space..................16TB
Usable space..............10.1TB
Parity............................RAIDZ1

zfs1.jpg


zfs2.jpg


zfs3.jpg


zfs4.jpg


zfs5.jpg


And my main rig, because I never miss an excuse to showcase it...

sp-fin6.jpg


zfs6.jpg


Thanks to [H] for the help!!!

PS. For the purists, I'll work on wire management this week-end.
 
Amount of total storage
{lots 'n' lots}

Amount of storage in the following system
1.14504E-1 Petabytes

Case
1x Lian-Li PC-343B Cube Case (with CCTOPFAN and CC2PSUATX mods)
2x Lian-Li CCEX23REAR HDD cages
6x IcyDock MB455SPF-B 5-in-3 SATA3/64bit-LBA Cages
3x Scythe SQD2.5-1000 QuietDrive SSD Silencers (for the SSDs; panel-epoxy)
4x Scythe Slipstream SY1225SL12L 120mm 800RPM fans
2x Scythe KamaFlow2 SP0825FDB12L 80mm fans

PSU & UPS
2x OCZ ZX-1250W
1x Add2PSU.COM adapter
2x CyberPower OR2200LCDRTXL2U 2190VA 1650W
2x NEMA 5-20R independent circuits @ main panel

Mainboard
ASUS P6T7 WS (size caution: CEB)

CPU
1x Intel Xeon E5620 Gulftown 32nm 2.4GHz 5.68QPI (cheapest AESNI-capable @ LGA1366)
{Zalman CNPS9900NT CPU Cooler; Tuniq TX-4 TIM}

RAM
24GB DDR3-1333 240-pin ECC DIMM CL9 (2x Crucial CT3KIT51272BA1339)

GPU
{slot #2 below}

Controller Cards
Slot01: LSI 20320IE U320 SCSI / PCIe x4 (encrypted tape)
Slot02: Jaton PX628GS-LP1 / PCIe x1 (nv8400gs; 512M, not much local GUI)
Slot03: HighPoint RR-2720 #1 / PCIe x8 (8-port)
Slot04: HighPoint RR-640 / PCIe x4 (4-port)
Slot05: HighPoint RR-2720 #2 / PCIe x8 (8-port)
Slot06: Koutech IO-PEU232 / PCIe x1 (USB3 internal)
Slot07: HighPoint RR-2760 / PCIe x16 (24-port)
Header: Koutech IO-UU220 USB2 / (usb2 internal header adapter for w7/tmp)

Optical Drives
{misc. external USB2 if required locally}

Hard Drives
38x HDS 5K3000 (0S03230) 3TB 5200RPM SATA3/6Gbps (zData)
2x OCZ Agility3 60GB SATA3/6Gbps (mirrored ZIL)
1x OCZ Agility3 240GB SATA3/6Gbps (L2ARC)
2x Patriot PSF32GXPUSB 32GB USB3 (boot/swap)
1x Patriot Magnum PEF64GMNUSB 64GB 210x-read USB2 (tmp)
1x Kingston DTI/16GB USB2 (win7x64 side-boot: for bios/etc updates)
10x HighPoint INT-MS-1M4S fanout cables
= 114.504TB (or, 1.14504E-1 PB)

Other
All internal case surfaces lined with custom-cut AcoustiPack Ultimate
PSUs & all case fans mounted with silica gel sound dampers
All Case & PSUs fan intakes air filtered
IcyDocks and EX23s all frame-mounted with silica gel dampers

Backup
1x HP StorageWorks Ultrium 1840 LTO-4 tape drive

Operating System
FreeBSD 9.0-CURRENT (for zfs-v28, aes-ni@geom, etc.)

Usage
Main ZFS raid-z3 encrypted NAS


***PICS and Build Log to follow (once all components arrive after this week)***
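
A rough sketch of the geli-under-ZFS layering this build describes, assuming the aesni module for hardware AES, placeholder da device names, keyfile paths, and pool name, and showing only a few of the 38 data disks; the real init options and key handling would differ:

Code:
# load AES-NI and the GEOM ELI module so geli's AES runs in hardware
kldload aesni geom_eli
# encrypt each data disk (repeat per disk; keyfile-only, 4k sectors)
geli init -P -K /root/keys/da0.key -s 4096 /dev/da0
geli attach -p -k /root/keys/da0.key /dev/da0
# build the raidz3 pool from the .eli providers (first five shown; continue through da37.eli),
# then add the mirrored SLOG and the L2ARC SSD
zpool create tank raidz3 da0.eli da1.eli da2.eli da3.eli da4.eli
zpool add tank log mirror da38.eli da39.eli
zpool add tank cache da40.eli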
 
Finally decided on pooling software. Ended up sticking with Drive Bender. The professionalism of the product has come a long way in this (hopefully) final beta release. So far so good.

2drwj09.jpg
 
That is why he said "1.14504E-1 Petabytes". Note the -1. Aka 1.14504 x (10^(-1)) = 0.114504PB = 114.504TB. You guys missed his joke :).
 
That is why he said "1.14504E-1 Petabytes". Note the -1. Aka 1.14504 x (10^(-1)) = 0.114504PB = 114.504TB. You guys missed his joke :).


Agree, real men start measuring at 2^50th.

:D

Sadly, the 'giga/tera/peta' prefixes haven't been 1024 multipliers since 1998, when the IEC fsck'd us all with the 60027-2 sissy re-naming conventions. :mad:
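
Case in point: the 114.504 TB figure above only comes to about 104.1 TiB (roughly 0.102 PiB) once you count in proper powers of two, since 114.504e12 bytes / 2^40 ≈ 104.1.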

Pebibyte? Gibibyte? Never. FTW. WTH, it's worse than when we converted to metric in '77.

cheers,

R.
 
It took a bit of messing around, but I finally have my server back online. My biggest issue right now is that I am outta space... again... Ordered a Norco 4224 and a larger power supply.

Using ESXi 5 and OpenSolaris + Napp-it. Thanks Gea!

Hardware:
Supermicro X8SIL-F-O
4x 4GB Kingston DDR3 Unbuffered ECC
Intel X3430
IBM ServeRAID BR10i
1x Intel PRO/1000 PT Dual Port NIC
10x Hitachi 2TB CoolSpin drives
1x Crucial C300 64GB cache drive
1x 640GB WD Blue main storage drive
Supermicro 5x3.5" hard drive chassis
OCZ 600W power supply

Advertised space: 20TB
Usable space after ZFS RAID-Z2: 14TB
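
(That lines up with the math: RAID-Z2 across 10 drives leaves 8 x 2TB = 16TB of data space, which is roughly 14.5TiB before metadata and reservations.)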

Parts waiting to be installed on the new box:
HP Port expander
10 additional 2tb hard drives
2nd 640gb drive for OS mirror

Hopefully I'll end up with closer to 30TB of usable space. This thing is fast for using such cheap drives. Can't wait to get the SSD installed tonight. :)
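
Once the C300 is in, adding it as a read cache should just be a one-liner; a sketch is below, where the pool name and the controller/target device name are placeholders for whatever zpool status and format actually report:

Code:
zpool add tank cache c4t1d0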

Terrible photos, sorry, did not want to bring out the D700.


 
Hi guys

Is longblock still updating the total?

I have sent him 3 emails in the past 2 months but no update yet. Might be me, as nothing is showing up in my sent items.
 
I don't know if this has been asked before, but I've gotta bite.

What are some of you storing on your servers that you need in upwards of 30TB of space? My server has a total of ~4TB with ~2.6TB used, and even if I replaced all my DVDs with Blu-Ray and all of my mp3's with lossless WAV, I'd still top out at about 11TB.

Just what in the world is in these boxes????
 
I don't know if this has been asked before, but I've gotta bite.

What are some of you storing on your servers that you need in upwards of 30TB of space? My server has a total of ~4TB with ~2.6TB used, and even if I replaced all my DVDs with Blu-Ray and all of my mp3's with lossless WAV, I'd still top out at about 11TB.

Just what in the world is in these boxes????

Pictures of your mom....

she is so fat just one picture takes a TB.....;)

Plus back ups, entire media collection, etc.
 
Depends on your household. I know one guy who is running SageTV with a family of, I think, 7. He says that on average they are recording between 15-20 shows in HD per night during the season. That is ~100GB per DAY! Figure most people are packrats who don't like to delete, or want to watch on a more gentle schedule, and it can easily jump to a steady-state value of at least 20-30 TB.

Me personally, I have ~200 episodes of Good Eats in HD... so, um, yeah.
 
Pictures of your mom....

she is so fat just one picture takes a TB.....;)

Plus back ups, entire media collection, etc.

This was funny, and no, I will not post actual pics for you milf lovers out there :p

I always figured that most of it was tv shows, movies, porn, or some combination thereof, and yeah, digital hoarding is certainly a factor, although not for everyone. I know plenty of people who keep around music, movies, video games, and software that they'll never listen to, watch, play, or use. Sometimes, people just like to have it for the sake of it, like saying that they have the whole set even though they really aren't using it *shrug*
 
Not the greatest - not even in a case yet! (waiting for the Lian-Li PC-Q25 to be released)

6x 2TB Samsung HD204UI
1x 120GB HDD
ASUS mini-ITX E-350 board with 6 SATA 3 ports
2-port SATA card for the 120GB disk
4GB RAM


Well, the stuff I posted above now has a home and looks like this;
the only two hardware changes are a 64GB SSD, and the 2-port SATA card is now a 1-port.


2011-09-16143636.jpg
 