storage server build

sdadept

I'm helping a small but growing niche social networking site. As part of the site, users can store photos. Currently there are about 14 TB of photos stored, and last month alone 4 TB were added. Considering we only have 26 TB of total storage on a 2U box with SATA drives (and no backups, weeee), we will run out fairly soon. I'm thinking we need at least 100 TB of storage to comfortably make it through the next six months.

So, with a limited budget of around $10k to $15k, what would everyone suggest that could get us to that number?

My early thinking is to get a 4U chassis, drop in a mid-level motherboard/CPU with four NICs teamed and maybe 32 GB of RAM, and add as many 4 TB hard drives as we can afford. I should probably also get the highest-end SATA controller card I can find.

thoughts? recommendations?
 
I would first think about items like backup, redundancy, quality of service, allowed downtime, etc., and how you can achieve at least an acceptable level of each within your budget.

If you look at offers from Dell, EMC, HP, NetApp or Nexenta, you have no chance of getting a solution for that money.

Your only chance is a good storage solution built on an open-source SAN OS. Use one with ZFS software RAID, nothing else.
Think first about a storage case from SuperMicro with 36-72 slots and built-in LSI expanders, plus a mainboard with an LSI 9207 SAS controller and a 10 GbE NIC. Add 4 TB SAS disks (you need at least about 30 x 4 TB disks for 100 TB usable; I would not use SATA disks behind SAS expanders), but you cannot afford two of those systems.
http://www.supermicro.nl/products/nfo/storage.cfm

What I would do
Buy two SuperMicro 24-bay cases without expanders, so you can use SATA disks without potential problems. Add a mainboard like a SuperMicro X9SRH-7TF with 32 GB ECC RAM (max 256 GB), 10 GbE, and one of the best 8-channel SAS controllers onboard. Add two more LSI SAS controllers, like the LSI 9207 or an IBM M1015 flashed with IT firmware.

Add 11 x 4 TB disks to each box (one as a hot spare) in a RAID-Z2 config. This gives you 32 TB usable per box (or 64 TB if you use 20 x 4 TB). Activate LZ4 compression. Wait a few months until 6 TB disks are available and add 11 of them per box later; this increases capacity to about 80 TB. If you then replace the 4 TB disks with 6 TB disks in a year, or whenever needed, you end up with a total of about 100 TB per box.
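For illustration, a minimal sketch of that layout on an OmniOS/Solaris-style box; the pool name "tank" and the c#t#d# device names are placeholders for whatever your 11 disks actually enumerate as:

# 10-disk RAID-Z2 vdev (8 data + 2 parity = 32 TB usable with 4 TB disks) plus 1 hot spare
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
    spare c1t10d0

# enable LZ4 compression on the whole pool
zfs set compression=lz4 tank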

Buy from someone who can help with hardware problems or replace parts on short notice.

Advantage: you have two boxes connected over 10 GbE for backup and redundancy. This should be possible below your budget limit. I would run a Solaris-derived ZFS SAN OS like OmniOS. It is stable with the very newest ZFS bits, one of the fastest options around, and free. Such a solution is used quite often without the danger of serious problems.
 
Gea,

In your opinion, is raidz2 alright with 11 x 4 TB drives, rather than raidz3?
11 disks = 10-disk RAID-Z2 + one hot spare

I would use the hot-spare config if you intend to add more vdevs.
For a single vdev I would prefer RAID-Z3 over Z2 + hot spare.
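For comparison, a sketch of that single-vdev RAID-Z3 option with the same 11 placeholder disks; usable space is still 8 x 4 TB = 32 TB, but you get triple parity instead of double parity plus a spare:

zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0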
 
If you want to use raidz2, then use a 10-disk raidz2: 8 data disks + 2 parity disks = 10 disks.
 
Yikes about the lack of backups. You're in a ship without life vests. You really need to address that first.

A ZFS SAN could work nicely for this, especially if you can get a couple of them and use the replication built into the ZFS file system to send data back and forth.
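As a rough sketch of how that replication looks, assuming a dataset named tank/photos on the first box and a second box reachable over SSH as box2 (both names are made up here):

# one-time full copy
zfs snapshot tank/photos@snap1
zfs send tank/photos@snap1 | ssh box2 zfs receive tank/photos

# later, send only what changed since the last snapshot
zfs snapshot tank/photos@snap2
zfs send -i tank/photos@snap1 tank/photos@snap2 | ssh box2 zfs receive tank/photos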

Something else you should think about is what happens when your storage starts to run out of IOPS serving the photos to your growing user base.

You're going to want a way to shard the data so you can scale out.

Having one giant pool is quickly going to become an issue for backups, IOPS, and scalability, I think.

You need to get your coders working (now) on spreading the photos over multiple systems instead of one monster box; see the sketch below.
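A trivial sketch of the idea, with made-up names (photo_id, NUM_SHARDS, storage01/storage02): hash each photo's ID and use it to pick which box the file lives on, so adding capacity means adding boxes rather than growing one pool forever.

#!/bin/bash
# pick a storage box for a photo by hashing its ID (illustration only)
NUM_SHARDS=2                # e.g. the two ZFS boxes to start with
photo_id="$1"

# first 8 hex chars of the md5, reduced modulo the shard count
prefix=$(printf '%s' "$photo_id" | md5sum | cut -c1-8)
shard=$(( 0x$prefix % NUM_SHARDS ))

echo "photo $photo_id -> storage$(printf '%02d' $((shard + 1)))"

Note that with plain modulo, changing NUM_SHARDS later moves most photos around; consistent hashing or a lookup table avoids that, but that's a detail for the coders.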
 
Really good recommendations. I'm following up by pricing out hardware and researching the SAN recommendation.

And yeah, the no-backups thing blows my mind a little. I just came onto the project recently, and the guy says, 'I never expected this to grow, what can I tell you.'
 
If you're a social networking site, I would guess you already have a lot of bandwidth, so maybe backing up to "the cloud" would make sense. On a high-speed connection, moving 20 TB of data to Amazon or one of the other reliable backup providers wouldn't be too time-consuming.
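For instance, a minimal sketch using the AWS CLI, assuming the photos sit under /tank/photos and the bucket name is a placeholder:

# copy anything new or changed up to S3 (re-runnable; only transfers differences)
aws s3 sync /tank/photos s3://example-photo-backup/photos

Run it from cron and it behaves like an incremental backup of the photo tree.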
 
Your hard drives alone will cost about $5-10K. That does not leave much.

I would look a bit further into the future and arrange funding for that.
 