The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

And this is my setup at home.

1x D-Link DFL-800
1x D-Link DGS-1248T
1x Brocade 300 SAN fibre switch
1x QNAP TS-212 with 2x 750 GB in RAID 1, dedicated backup target for VMs via Veeam.

1x Fujitsu Primergy RX300 S4 ESXi 5.5 U1
1x Dell PowerEdge R710 ESXi 5.5 U1
2x Dell PowerEdge R610 ESXi 5.5 U1

1x DotHill 2730T dual controllers
1x Fujitsu Eternus DX60 dual controllers

2x APC Smart-UPS 1500VA
1x APC Smart-UPS 1000VA

Total Storage: ~15TB

[Photos: IMG_20140904_210152.jpg, IMG_20140904_210040.jpg, IMG_20140904_210053.jpg, IMG_20140905_184354.jpg]
 
This is sweet, guys! I'm just now creeping into the epic world of enterprise-level performance!

These dual Xeon systems are wicked!

Anyway, do you guys know of a Windows Server release on the 7/8 architecture, like Server 2012, that still has x86 CPU support? I need full use of 4 GB on an x86 dual Xeon system with full PAE support (36-bit address width) and no NX bit support.

On another note: does anyone know of the cheapest case they can think of that can hold an E-ATX board? Otherwise known as a big-a dual Xeon server board.

Thanks!
 

There is Windows Server 2008 - it's Vista-based, but don't let that scare you off - it isn't a pig to use like Vista! :) The only thing to be mindful of is that it doesn't have TRIM support, so if you have an Intel SSD, you'll need to schedule a manual TRIM run once a week or so.
 
Added more disk space to my file server! Not that much compared to some of the other stuff here, but more than enough for me, for now. :D

Code:
[root@isengard volumes]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_isengard-lv_root
                       50G  4.4G   43G  10% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/sdn1             485M   38M  422M   9% /boot
/dev/mapper/vg_isengard-lv_home
                       53G  180M   50G   1% /home
/dev/md0              5.4T  2.3T  2.9T  44% /volumes/raid1
/dev/md1              6.3T  4.1T  2.0T  68% /volumes/raid2
/dev/md2              6.3T  181M  6.0T   1% /volumes/raid3
[root@isengard volumes]#

Total of 18 TB.

One drive is dead though, so I RMAed it. Once the replacement comes in I'll extend the RAID array, which will give me another 1.6 TB or so.

I'll have 3 extra bays left after this. I'll probably extend raid1, which is a RAID 10 with four 3 TB drives, then add another drive to extend raid2, which is a RAID 5 of 1 TB drives. Eventually I should probably convert that RAID 5 to RAID 10 or RAID 6, or replace the drives with 3 TB ones.
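For reference, growing the RAID 5 by one disk with mdadm would look roughly like this. It's a minimal sketch with placeholder device names and counts, not my actual layout:

Code:
# add the new disk as a spare, then reshape onto one more active device
mdadm --add /dev/md1 /dev/sdX1
mdadm --grow /dev/md1 --raid-devices=8   # new total device count (example value)
# watch the reshape progress, then grow the filesystem once it finishes
cat /proc/mdstat
resize2fs /dev/md1                       # assumes an ext3/ext4 filesystem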
 
I need a recommendation on a RAID controller. I have 16 2 TB Hitachi drives. I purchased an Intel SRCSASPH16I, which is really just a glorified LSI MegaRAID 8xxx controller. However, it will not work.

I need a recommendation for a RAID controller that will work with an AMD CPU/motherboard and has 4 SAS connectors I can use with breakout cables. Any recommendations?

Thanks!
 
That's a question for a whole 'nother thread, but if you do some searches on the forums you might find it has already been asked. Are you looking for a simple HBA, like 4 IBM M1015s to use with ZFS, or something high end like an Areca card?
 
Thanks. To do ZFS, I'm looking for a card that works with an AMD board and that I can connect 4 SAS-to-SATA breakout cables to, for 16 2 TB drives.
 
4x IBM M1015s would work...

If you need one card with 4 SAS connectors, then look at LSI controllers. Something like the LSI SAS 9201-16i.

If you need 12Gb/s controllers, look at 2x LSI SAS 9300-8i.

Matej
 
My server has 5x 3.5" slots and 4x 5.25" slots. I have a 3x 5.25" to 4x 3.5" drive cage, so I currently have 9 HDDs in there with a single 5.25" slot free.

I won't need to expand for probably a year, but I'm not sure what to do next. I could obviously get a 1x 5.25" to 1x 3.5" converter for a single drive, but there are also 2 unused FDD slots. Is it safe to put a 3.5" HDD in those slots, provided there are appropriately placed screw holes? The last time I tried, the HDD seemed a really tight fit and I wasn't sure whether that was acceptable or not.
 
Had some trouble with my raid3 array, but I finally managed to rebuild it. Now I'm at full capacity!

Code:
[root@isengard raid3]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_isengard-lv_root
                       50G  4.4G   43G  10% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/sdn1             485M   38M  422M   9% /boot
/dev/mapper/vg_isengard-lv_home
                       53G  180M   50G   1% /home
/dev/md0              5.4T  2.3T  2.9T  45% /volumes/raid1
/dev/md1              6.3T  4.3T  1.8T  71% /volumes/raid2
/dev/sdx1             917G  429G  442G  50% /mnt/rembackupdisk
/dev/md3              7.2T  179M  6.8T   1% /volumes/raid3

Those drives were an impulse purchase; I don't even know what I'll do with 7.2 TB of space yet when I still have several TB free in my other arrays. I'll find a use for it, I'm sure. :D

I'm at 18.9TB total now. Not that impressive compared to other people, but it's still a nice chunk of space.

The next upgrade is probably the RAID 5 array; it's all 1 TB drives. I could either expand it with 3 TB drives or do a whole new array altogether and go RAID 10. Though a very large RAID 5 could be useful for extra backups too.
 

Porn pics or it didn't happen! :D
 
And this is my setup at home.

1x D-Link DFL-800
1x D-Link DGS-1224T
1x Brocade 300 SAN fibre switch
1x D-Link DGS-1248T
[2gbit uplink between the switches]
1x QNAP TS-212 2x 750GB in RAID1, Dedicated backup for VMs via Veeam.

1x Dell PowerEdge R710 ESXi 5.5
1x Fujitsu Primergy RX300 S4 ESXi 5.5
1x Dell PowerEdge 2950 Gen III ESXi 5.5

1x DotHill 2730T dual controllers
1x Fujitsu Eternus DX60 dual controllers

2x APC Smart-UPS 1500VA
1x APC Smart-UPS 1000VA

Old servers that I'm going to take down:

CoolerMaster Stacker
HP SmartMedia Server


[Photo: Img_0063-medium.jpg]

What rack is that, and what's the actual size? I need something that size; it looks perfect for my needs.
 
ALpHaMoNk: See post #2293 above. Danne84 didn't answer my posted query, nor a private message, but bmh.01 pegged it as an APC Netshelter.
 

Thanks for the heads up... definitely sweet. Now I just need to get rid of my monster, lol.
 
Total storage: 101 TiB
This system: 74 TiB

Case: Ri-vier RV-4324-01A
PSU: Seasonic Platinum 860
Motherboard: Supermicro X9SCM-F
CPU: Intel Xeon E3-1230 V2 @ 3.30 GHz
RAM: 16 GB
Controller cards: 3x IBM M1015
Hard drives: 24x HGST HDS724040ALE640 4 TB (7200 RPM)
Battery backup unit: APC Back-UPS RS 1200 LCD
Operating system: Debian Linux

About 6 years ago I started with this system:
http://hardforum.com/showthread.php?p=1034392907&highlight=#post1034392907

Inspired by this forum's 10 TB+ storage showoff thread, I started my own thread on a Dutch forum.
http://gathering.tweakers.net/forum/list_messages/1457031 (unfortunately, Dutch only)

It's now full, so I need something bigger.

This box has 24x 4 TB drives in a single ZFS RAIDZ3 vdev (ashift=9), so it provides me with 74 TiB of usable space.

Code:
root@nano:~# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
storage  4.88T  69.1T  4.88T  /storage
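For anyone curious, the pool layout boils down to a single command. This is just a sketch with placeholder device names; the blog post linked below has the real details:

Code:
# single 24-disk RAIDZ3 vdev, forcing 512-byte sectors (ashift=9)
zpool create -o ashift=9 storage raidz3 /dev/sd{b..y}
zfs set atime=off storage   # optional, saves pointless metadata writes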

I've created a lengthy blogpost about this system with all the details.
http://louwrentius.com/74tb-diy-nas-based-on-zfs-on-linux.html

The disk performance is absolutely amazing: 2.5 GB/s read and 1.1 GB/s write.
No, this is not cached; this is real storage performance.

Some images:

http://louwrentius.com/static/images/zfsnas01.jpg
http://louwrentius.com/static/images/nano/topview.jpg
http://louwrentius.com/static/images/nano/4cards.jpg
http://louwrentius.com/static/images/nano/backside.jpg

Here is some reasoning about the motherboard / CPU / RAM:
http://louwrentius.com/an-affordable-server-platform-based-on-server-grade-hardware.html

Also nice: quad-port gigabit bonded together to get actual 450 MB/s file transfers over NFS with my download box.
http://louwrentius.com/achieving-450-mbs-network-file-transfers-using-linux-bonding.html
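The gist of that setup, as a rough sketch: four gigabit ports in a round-robin bond on Debian. The interface names and address below are placeholders; the article has the exact configuration:

Code:
# /etc/network/interfaces (requires the ifenslave package)
auto bond0
iface bond0 inet static
    address 192.168.1.10        # placeholder address
    netmask 255.255.255.0
    bond-slaves eth0 eth1 eth2 eth3
    bond-mode balance-rr        # round-robin, so a single transfer can exceed 1 Gbit/s
    bond-miimon 100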

I've written some tools to get a better overview of the status of the hardware. See the article for more.
lsidrivemap: https://github.com/louwrentius/lsidrivemap

Code:
root@nano:~# lsidrivemap disk

| sdr | sds | sdt  | sdq |
| sdu | sdv | sdx  | sdw |
| sdi | sdl | sdaa | sdm |
| sdj | sdk | sdn  | sdo |
| sdb | sdc | sde  | sdf |
| sda | sdd | sdh  | sdg |

Code:
root@nano:~# lsidrivemap temp

| 37 | 40 | 40 | 37 |
| 36 | 36 | 37 | 36 |
| 35 | 37 | 36 | 36 |
| 35 | 37 | 36 | 35 |
| 35 | 36 | 37 | 36 |
| 34 | 35 | 36 | 35 |

I hope you like this machine.
 
Tsk, tsk, you're running 74 TB of storage with only 16 GB of RAM?!
 
I have over 100 TB across 2 servers and only 8 GB in each. You don't really need much, depending on what you're doing. One of them even has a Celeron CPU.
 
Why couldn't he run it with 16 GB if he doesn't need more for any software? I think even 8 GB would be just fine without anything extra running.

Edit:

And... Q, nice work!
 
It's ZFS, and for best performance I've read 1 GB of RAM per TB of space as a best practice. But if it works, it works. I only scale it like that because, having used ZFS long enough, I know it performed best with tons of RAM on my end. Do you run compression? Or dedupe?
 
Ohh damn, yeah, sorry, I didn't look at the OS stuff at all :D *stupid me*

I run Windows 7 x64 on both servers, so... totally different thing. But yeah, for his system 16 gigs isn't much. :eek:
 
I can't believe people are still saying that.

1 GB per TB of disk is only for deduplication, not a normal setup.

Matej
 
@WestSidaz thanks!

I'm running ZFS, so RAM is more of a concern than it would be with other RAID configurations.

16 GB is plenty for my purposes. The 1 GB/TB rule is only for heavy-duty production servers.
There you have multiple users and you want lots of RAM as cache, and even L2ARC on SSDs.

If you use dedup, the memory requirements are even higher, if I'm correct:
http://doc.freenas.org/index.php/Hardware_Recommendations
That source states 5 GB/TB as a rule of thumb, which at 74 TB works out to roughly 370 GB. My motherboard does not support 370 GB of RAM :) (max 32 GB).

This box would probably have been fine with 8 GB of RAM, but I have no proof of that.
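If you want to see what the ARC is actually doing on a box like this, something along these lines works on ZFS on Linux. The cap value is just an example, not what I run:

Code:
# current ARC size and hit/miss counters
grep -E '^(size|hits|misses) ' /proc/spl/kstat/zfs/arcstats
# optionally cap the ARC (here at 8 GiB); takes effect after reloading the zfs module or rebooting
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf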

This is a non-typical ZFS setup:

- No SSD cache (doesn't make sense for a single-user file server)
- Not much RAM
- RAIDZ3 with 24 drives (not best practice)
- ashift=9 (brrrrr!)

And it works fine.
 
I was mulling over that eBay listing, but decided I shouldn't be spending more money. Ah well. Had some other decent finds recently at least.
 
The always-running stuff is "only" (from the bottom):

the UPS :p
the Synology RS814 (4x 3TB WD Red, RAID 10)
the first Supermicro, which is my ESXi 5.5 single node (E3 1230V, 32 GB RAM, 4x 1TB WD RE4 in RAID 10, plus some 2.5" 500 GB 7200 RPM drives)
the Cisco SG300-10

The rest is too powerful/noisy/power-hungry/useless as a 24/7 home server, but it's used for my tests.

[Photos: photo_3.jpg, photo_2.jpg, photo_1.jpg, photo.jpg]
 