What should I get for my storage server?

Hello,

At the moment I have a homemade server with a RocketRAID 2680 card and 8x 2TB Hitachi HDS5C3020ALA632 drives.
This setup runs RAID 5 off the HighPoint RocketRAID 2680.


I am however running out of space, so here is my plan:

I have a DL140 G3 (dual quad-core Xeon) with 12GB RAM (upgrading to 16GB).
It currently has a standard SAS controller for 2x 1TB R4 drives in RAID 1.
(One low-profile PCI-X slot, and one high-profile PCIe slot, x16 I think.)


I want to replace the standard controller with a controller that has an internal port for the 2x 1TB and an external port.
The external port will then connect to an expander so I can hook up 24 drives externally.
This way I only need one physical server (the DL140 G3).

I'll probably build the external case myself or find something cheap that can house 24 drives (suggestions?).

So what I need is this:
Which RAID card should I get? (€400 max)
Which expander should I get? (€300 max)
I was looking at the HP SAS Expander in combination with an HP P212.

I also need to be able to simply add drives to the array. The plan: buy 8 new drives, create the array, then copy the old array's data to the new one. Then add the drives from the old array so it's 16 drives, and in a year I'll push it up to 24 drives.


It has to be RAID 5 or 6.
At the moment it's running ESXi with two Server 2008 R2 VMs.
I need at least one Windows platform, but another platform through ESXi is possible if needed.
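
As a quick sanity check on the growth plan above (8 -> 16 -> 24 drives), here is the usable-capacity arithmetic for both RAID levels:

```python
# Usable capacity at each stage of the 8 -> 16 -> 24 drive plan (2TB drives).
# RAID 5 spends one drive on parity, RAID 6 spends two.
for n in (8, 16, 24):
    print(f"{n} drives: RAID 5 = {(n - 1) * 2} TB, RAID 6 = {(n - 2) * 2} TB")
```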



That's all I can think of for now. If you need more info, please ask.
 
Well, for the case, look at SuperMicro's JBOD cases. They make a few cases just for JBOD, which include the expander and usually a redundant PSU. You have your choice of single- or multipath expanders.

These cases are not that cheap, but the case, PSU, and expander are rock solid.
 
I'll second this. You will not find a higher quality case than SuperMicro's 6047R series. (Linky) Of course, you are looking at around the $2,000 mark. For that price, you could build yourself a second computer and put it in a Norco RPC-4224. Heck, you'd probably even have cash left over.

I've never played with extension/expansion devices outside of a professional environment, and with those, it was always something like HP's MSA system connected to an HP DL-series server. It stayed inside a brand and had proprietary written all over it. If you do build it, I'd love to see some pics of everything you bought, and how it all works together!
 
Here is a link to a JBOD case.

http://www.supermicro.com/products/chassis/3U/?chs=837

The single-path expander version goes for ~$1,500; this includes rails, expander, and redundant PSU. They also make a 4U version.

Initially I wanted to go for a Norco 24-drive case (€500) and then an expander (€350?),
so that would be about €1,000 with a power supply.

The $1,500 price would be about €1,180, so it looks like a nice choice, considering it will give me 4 more drives.
So I'll look into places to buy it in my country.



But that still leaves one thing I need recommendations on:

the RAID card for the DL140 G3.

It must have these capabilities:
RAID 5 / 6
online RAID expansion (add drives to the array)
needs to control 2 internal SATA drives in RAID 1
needs to control the SuperMicro case m1abram recommended (single path)



If you do build it, I'd love to see some pics of everything you bought, and how it all works together!

At the moment I'm still in the brainstorming phase; I want to see if I'm on the right track.
So I won't be buying anything any time soon. I just want to be sure, as it's a lot of money for a home data server.
But when I buy/build it, you'll see it in the 10TB+ sticky thread.
 
I decided not to go for the SuperMicro JBOD case.

I'm probably getting the following:
-HP Expander
-Norco RPC-4224


As far as the RAID card goes I'm still not sure; at the moment I'm looking at these:

-HP P212 w/256MB and battery (price is good, should be compatible, but I'm not sure about the speed)
or
-HP P222 w/512MB and battery (price is good, should be compatible, but I'm not sure about the speed or its compatibility with PCIe 2.0)
or
-Adaptec 5445 (not sure about compatibility with the expander, and the price is on the high side)
or
-Adaptec 6445 (not sure about compatibility with the expander, and the price is on the high side)


Of those cards I'm leaning towards the P222.


I was wondering if you know of a card I might have missed in my search, or maybe you have some feedback on my choices.

The card needs the following:
-1x external SFF-8088 (or more)
-1x internal SFF-8087 (or more)
-RAID 5, or preferably 6
-support for OCE
-support for expanders (obviously :p)
-acceptable speeds for my needs: storage of video files that only stream to the home network, 5x 1080p streams maximum
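
For a rough sense of scale, here is a back-of-the-envelope check on that streaming requirement (both figures are assumed ballparks, not measurements):

```python
# Back-of-the-envelope throughput check; both figures are assumptions.
streams = 5
mbit_per_stream = 10    # ~10 Mbit/s per compressed 1080p stream (assumed)
drive_mb_per_s = 100    # rough sequential read of one 5400rpm drive (assumed)

needed_mb_per_s = streams * mbit_per_stream / 8
print(f"Needed: {needed_mb_per_s:.2f} MB/s; one drive alone does ~{drive_mb_per_s} MB/s")
```

In other words, 5x 1080p is only ~6 MB/s, so any of the cards listed above should be far more than fast enough for the streaming side.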

As far as the price range goes, like I said the 5445 is on the expensive side, so if there is something out there cheaper than the 5445, or the same price, please let me know.


PS: I have looked in the expansion card thread, but all the cards there are high-budget ones.
 
I changed my mind: the 2 internal drives will run off the simple PCI-X SAS RAID card that's already in the server.

That leaves one high-profile PCIe x16 slot for the SFF-8088 RAID card, which will connect to the expander.

The card will need the following:
-1x SFF-8088 connection (or more)
-RAID 6
-support for OCE
-support for expanders (obviously :p )
-hardware RAID
-price range up to €400


I'm currently looking at these:
Areca ARC-1213-4x, but I don't know about its expander support.
3ware 9690SA-4I4E, which is supported but might be slow.

So if you know a RAID card, please let me know :)
 
I already know what parts I'll be using, but I would like to know the best way to have 2 systems power on at the same time.

Here is the scenario:
an HP server with a RAID card; this RAID card is connected to an expander, which is in system 2 (Norco 4224).
System 2 has a small Atom-based board powering the expander, and a normal ATX power supply which also powers the HDDs in the Norco.

So when I turn on the HP system, it needs to automatically power on the Norco system, or else it won't find the expander or drives.

The simple solution would be to power on the Norco first and then the HP, but I would like it to be automatic.
Should I just wire the power buttons in parallel? Or is there a better way of doing what I want to do?
 
Look into the M1015; you can get 3 for less than $200, which should cover 24 drives.

Build a Norco 4224 ($500ish), put in a simple Xeon E3 board+CPU ($400ish), making sure it has 3x PCIe x8 slots. Put in all 3 M1015s in IT mode. Get a quiet and decent PSU ($150ish).
In total it should cost about $1,000.

To be honest, for a home environment with a few users, your data (assuming videos, music, photos, files, etc.) should NOT be protected using hardware RAID. There's too much hassle and too many flaws with HW RAID. You should be using software RAID like FlexRAID, SnapRAID, etc., sitting on Server 2008 R2 / Server 2012.
There are plenty of discussions on why, so Google them.
For enterprise, a hardware SAN is required due to speed and simultaneous access; that's unlikely to be significant at home given 1Gb LANs. If you are particularly adventurous, try unRAID in a VM.

For your VM workloads, get 2x 256GB SSDs; they should be faster than any HW RAID solution you will ever get. VM roles should be separate from your file server roles, to allow drives to spin down when not in use.

Also, pre-built JBOD solutions (e.g. SuperMicro's) are extremely LOUD, due to redundant, high-duty fans sized for a 200% cooling requirement.
If you do your setup like I did, only a few drives will be spinning at any given time, so the entire setup has a very low cooling requirement and quiet fans are all you need.
 
@hikarul,

Thank you for the advice. I'll start looking into which of these options is best for me, both practically and financially (remember I already have a DL140 G3).

One question though: I take it something like FlexRAID supports OCE and a RAID 6 equivalent?
And what kind of speeds should I expect (16x 2TB drives @ 5400rpm)?



PS: prebuilt was no longer an option; I was thinking of the following:

LSI 9690SA-4E4I in the DL140
HP Expander
Norcotek 4224
HX850 power supply
and some other stuff to go around it all

Totaling about €1,700 for a total of 14 2TB drives, expandable up to 24.
 
There is no need for OCE with FlexRAID. You just put in drives as you see fit, with or without data, then include them in the pool and run a reconciliation job. NOTHING on these drives will be affected, which is why it is so good for existing drives. ZFS and unRAID need complete reformats because they use a different file system; FlexRAID sits on top of NTFS (for Windows).

The drives are pooled together as one share; you set the SMB shares within the interface.
You can use the real-time feature or just do a recon every night; assuming most of your data doesn't change very frequently, it can be pretty fast.

The error detection and correction features of ZFS can be replicated manually by running a scrub. Errors are detected from per-file (or per-block, I can't be sure) hash codes.

You can assign any number of drives as parity drives, similar to what PAR2 files are for Usenet, or "recovery records" in a RAR file, but for whole drives. I personally use 2 parity drives for a 22x (soon 24x) 3TB setup.
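
For anyone unfamiliar with how a parity drive protects the others, here is a minimal sketch of the idea (not FlexRAID's actual implementation; just single parity via XOR over toy 4-byte "drives"):

```python
# Toy illustration of single-drive parity (not FlexRAID's actual code).
# Three hypothetical 4-byte "drives"; real snapshot RAID does the same
# thing block by block across whole disks.
drives = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]

# The parity "drive" is the byte-wise XOR of all data drives.
parity = bytes(a ^ b ^ c for a, b, c in zip(*drives))

# Lose any one drive: XOR the survivors with the parity to get it back.
lost = drives.pop(1)
recovered = bytes(a ^ b ^ p for a, b, p in zip(*drives, parity))
assert recovered == lost  # contents fully reconstructed
```

A second parity drive (as in the 2-parity setups discussed here) needs a second, independent code, Reed-Solomon style, so any two failures can be recovered, but the principle is the same.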

Parity rebuild runs at about the same speed as ZFS, since all drives are spinning. Drive I/O is the bottleneck, not CPU, so all solutions (HW or SW) are virtually the same (excluding OS and FS overhead).
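
As a rough sanity check on rebuild time (assuming the rebuild streams the whole drive at one disk's sequential rate; ~100 MB/s is an assumed ballpark for a 5400rpm drive):

```python
# Rough rebuild-time estimate; the throughput figure is an assumption.
drive_tb = 3
mb_per_s = 100                               # assumed sequential rate, 5400rpm
hours = drive_tb * 1_000_000 / mb_per_s / 3600
print(f"~{hours:.1f} hours to rebuild one {drive_tb}TB drive")  # ~8.3 hours
```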

If you already have a head server then an expander is the way to go; just get one of those switches (~$50) that turn on both PSUs at the same time while powering up the expander in the JBOD chassis.

As far as I know there isn't a remote-on method for the JBOD PSU, except on some SuperMicro JBODs.
 
I think you changed my mind :)

So FlexRAID is roughly the same as the drive pools in the old WHS then, cool :)


Now, besides the HP server I also have a quad-core AMD home-built server that I wanted to get rid of, but I'll probably buy a new motherboard and 32GB RAM for it and put in 2 IBM cards (I already have an HPT 2680, which should work as the 3rd card).

I'll put that all in an RPC-4224 with an HX850 power supply (is 850 watts enough to start up 24/26 drives?) and I'm done. I'll go with FlexRAID, as that seems like a nice choice, with 2 drives for parity.


Does that sound good to you?
 
Most people would recommend getting a cheap Xeon E3 and ECC memory for a file server.
Both on here and ServeTheHome (which has a brilliant community as well).
If you are gonna buy a new motherboard, go with Xeon, hands down.

For a file server, no more than 16GB of ECC memory is needed. Even 4GB should be fine for most workloads. High memory requirements seem to only apply to ZFS / unRAID.
When I first built my server I had an Atom 330 with 4GB, and it worked fine for 12x 2TB drives, as long as it's only used as a file server.
(Now I have dual Xeon E5s; the file server sits inside a Hyper-V VM with drive pass-through.)

The HX850 is a very good PSU, and 850 watts is more than enough. If your HBA supports it, you can also stagger the hard drive startup to reduce the instantaneous amperage at power-on.
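
Rough numbers behind that claim (every per-drive figure here is an assumed ballpark for 3.5" drives, not from a datasheet):

```python
# Rough power budget; every figure here is an assumed ballpark.
drives = 24
spinup_w = 25      # ~2A on the 12V rail per 3.5" drive at spin-up
idle_w = 8         # per drive once spinning
system_w = 150     # board, CPU, fans, cards

print(f"Worst case, all drives spin up at once: ~{drives * spinup_w + system_w} W")
print(f"Steady state: ~{drives * idle_w + system_w} W")
```

So even the worst case (~750 W) fits inside 850 W, and staggered spin-up removes that peak entirely.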

The FlexRAID pool is very similar to the WHS v1 pool (I would say better). I migrated my files from WHS to the FlexRAID beta and never looked back. I tried unRAID, SnapRAID, ZFS, and Storage Spaces; in the end FlexRAID is still the win for home-oriented data.

Just keep in mind that unless you use FlexRAID real-time (which can get buggy if you don't know what you are doing), it is snapshot RAID, so you need to schedule it to resync. Note though that WHS v1 is also not real-time; the backup was handled by the Drive Extender process, which runs in the background when the system is not busy.
 
OK, thank you for the quick answers :)

But just one more question. I think you already covered it, but I just want to be sure: if I reinstall the OS but leave all the FlexRAID drives untouched, will FlexRAID recognize them again?

Also, I know a Xeon is recommended, but for now I want to go with the AMD (to save costs) as I already have it lying around, including the RAM (I'll stick with the 16GB I already have).
If it doesn't work out I'll upgrade to a Xeon (which is why I asked the question above).

PS: the server will indeed only be for video files, torrents, Usenet, FTP, etc.
I'll run everything else on the DL140 G3.
 
Reinstalling the OS does not affect the data drives. FlexRAID creates a small hidden directory on each drive to tag it, so just delete that and add each drive to the new FlexRAID as if it were a new data drive, then rebuild parity.

Some people on their forum suggest backing up the directory/library and moving it over to the new OS, but it's buggy and not worth it. Just start over; it doesn't take much more time to rebuild the parity drives (you lose protection during this time, but it's the same for HW RAID anyway).

Also, for FlexRAID, make sure you do your torrents, Usenet, FTP, etc. on your head server. The files on your file server should be immutable once they're there (WORM); otherwise you have to rebuild the parity drives all the time.

In uTorrent or your Usenet client, you can set it to move completed files to another directory/drive; just point it at your file server, or run a batch task each day.
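
If you'd rather script that daily move yourself, a minimal sketch in Python could look like this (both paths are hypothetical examples; adjust them to your own shares):

```python
# Minimal nightly "move completed downloads to the file server" sketch.
# Both paths are hypothetical examples; adjust them to your own setup.
import shutil
from pathlib import Path

SRC = Path(r"D:\Downloads\Complete")        # completed downloads on the head server
DST = Path(r"\\fileserver\media\incoming")  # SMB share on the data server

DST.mkdir(parents=True, exist_ok=True)      # the share itself must already exist

for item in SRC.iterdir():
    # shutil.move copies then deletes when crossing drives/UNC paths.
    shutil.move(str(item), str(DST / item.name))
```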

I have a 2TB drive dedicated to torrents/Usenet in the download VM (pass-through). The only other software on there is antivirus. This way I know a virus won't infect all my other VMs even if I download something bad.
 
OK, I'm doing that right now (the head server downloads and puts finished files on the data server). I'll buy an extra 3TB or 4TB drive to put in the data server for active data (up/downloads).

I'll probably buy it within 2 weeks; I just need to look around the forums for nice ways to mount non-hot-swap drives in the 4224 (drives like the SSD for the OS and the 3TB for downloading).
 
That's a very interesting thread, guys!

I've just migrated my stuff from a Debian file server to an HP MicroServer (N40L w/4GB ECC RAM) that runs a DC (I need this for training purposes). However, I'm currently running the onboard RAID 10 over 4x 1.5TB HDs, plus a separate HD for my media, which feels bad, but that is another story.

I just stumbled upon this thread this morning and started looking deeper into this FlexRAID stuff, and damn, I'm nearly sold. I will try it in a VM first. On my HP MicroServer I could add a SAS expander and then add more drives, maybe with a setup like the one described here: http://www.servethehome.com/external-sas-sata-disk-chassis-wiring-part-2/

@morbid_looney Note that MOST AMD CPUs do work fine with ECC RAM. I agree with hikarul that this should be the RAM of choice for a file server.

Finally, a question for the current users of FlexRAID: how does FlexRAID behave when a disk drops and I replace it with another one? Is there a config file or GUI that allows me to say: replace the lost disk with this one?
 
@morbid_looney, when you buy the Norco, two brackets are supplied that you can use to mount TWO drives inside the case. There are absolutely NO instructions for them, so you have to be clever. If you Google you might find pictures of it, though.

@DieTa, FlexRAID comes with a web-based GUI for configuring and replacing dead disks. Not too hard to understand.
Before you really commit, do a test: create 3x 10GB VHDs and mount them in your test VM. Create a pool, copy some files to it and spread them out, then run a sync process to create the parity data.
Then, using the VM manager, unmount a VHD, create and mount a new 10GB VHD, and try to use the GUI to replace the "failed" disk. It should rebuild your new drive.

Do this until you are comfortable with the process.

Also, the STH article was overcomplicated. Just get one of these from Chenbro:

http://usa.chenbro.com/corporatesite/products_detail.php?sku=76

and you are all set for JBOD and even daisy-chaining multiple JBODs in the future.
 
I decided to go for the RPC-4220; I'm hoping to start building the server this weekend.

I have been playing with FlexRAID in a VM; I added 8 DRUs and 2 PPUs in the Cruise Control setup.
Is it correct that it will fill up the drives one by one, while still having parity on the PPU drives?

If so, doesn't this mean I'll miss out on the read speed gain of RAID 6?
Or am I doing something wrong? (I have updated the snapshot.)

I tried expert mode with the T2+ engine (RAID 6), but it did not pool the drives, which was a bit odd...
 
The M1015 is flashed to IT mode, but for some reason I can't access the SMART data.
Shouldn't pass-through allow me to read that data?
 