Looking for opinions on a home server config

bluesdoggy

Limp Gawd
Joined
Jul 14, 2000
Messages
442
This is probably going to be a bit of a longish post, so thanks in advance for taking the time to read.

Background: My wife and I want to consolidate our data / media by implementing a server at home. Both of us are quite technically proficient, so an "idiot proof" solution from a usability standpoint isn't required.

Current available hardware: 24" iMac, self-built C2D-based PC, Mac mini, old 3 GHz P4 mothballed system, 3 or 4 sizable external drives (500 GB+), and a couple of 1 TB WD "green" internal drives.

Serving media / files to: the above-mentioned Macs + 1 MacBook, 1 MacBook Pro. The mini is our living room box and runs Boxee most of the time. Also a PS3 and Xbox 360.

My problem exists more at the hardware level than the software level. I was all set to grab a Drobo, hook it up to one of the above systems, set up some Samba shares + UPnP services for the game consoles, and rock on... but then I got to reading about the (seemingly high) number of catastrophic failures people had experienced with the Drobo. One common theme of these seemed to be high I/O levels, and with the application we are looking at, I could see that being a problem. We almost always have a torrent seeding / downloading, or music or video playing when we are at home, so if the Drobo has a low tolerance for a lot of sustained I/O requests, then it wouldn't be a good solution.

My next thought was to build out an Atom-based system, or take the existing PC / mothballed system, throw in a decent RAID card, and build up a traditional file server. I looked at some software RAID solutions as well (like ReadyNAS). My concern here is data redundancy and the ability to restore after a failure of the array, plus ease of expanding the array. I would imagine we start out with about 4 TB of data and will only grow from there, so expansion is something we have to think about. I'm also not crazy about using that old P4 box, since it's a power-hungry old workhorse.

I've also looked at a few other NAS appliances, and they either seem feature-sparse (no RAID 5 or equivalent stripe + parity solution) or too expensive and aimed more at an enterprise problem set.

What would you folks suggest? Should I bite the bullet and try the Drobo? I could always back up the data that we absolutely did not want to lose, but it's not really cost-effective or plausible to back up EVERYTHING. The attractive thing is the ease of upgrade and the (supposed) ease of recovery from a drive failure, but the horror stories on Newegg and other sites give me pause.

Your suggestions are greatly appreciated.
 
Forget the Drobo

Take the mothballed system; it will be plenty fast for a file server.
I dunno if Macs can access a Windows network share or not... but if they can, just grab WHS.
WHS will provide redundancy and allow you to use all different-sized drives; whether they are internal or USB external drives does not matter.

If not, then look at unRAID or some Linux distro.

Oh yeah, did I mention that the Droblow sucks?
 
Care to expand on why?

My hesitation with the mothballed system is the fact that this thing will be on 24/7, and that system is big, loud, and sucks power.

Regarding WHS, I was under the impression it didn't officially support RAID, and that the redundancy built into the software was essentially a software RAID 1 implementation. That seems pretty inefficient to me.

FYI: the Macs can access Windows shares without issue.
 
The Drobo is slow and expensive, plus you have to buy an add-on to make it a NAS = a waste of money.
If you search for Drobo on this forum... I doubt you will hear much positive about it.

WHS does not *support* RAID... that does not mean it does not work.
RAID is RAID, and the OS cannot tell the difference.
WHS is also simply an application set that runs on top of Server 2003.
If it works under 2003, it will work under WHS.

WHS Duplication feature:
While at first glance it may seem RAID 1-ish, it's really better.
This feature allows you to choose what is important and what is not.
You get to choose what you want replicated; in some cases it could be everything, but in most cases it is not.

Also, with WHS you don't have the overhead you do with a RAID 5 array.
There is no rebuilding of arrays while hoping you don't hit a URE, a power outage, or corruption when a disk dies.
You don't need expensive HW RAID controllers.

If you ask the people on this forum who have some of the largest file servers here... myself included... you will see that at one point we were running RAID 5 or 6, and now we use WHS... there's a lot to be said for people who run 25 TB file servers on WHS instead of RAID 5/6.
 
I'll grab WHS off of MSDN and load it on a system to give it a whirl. I'm still not sold on straight duplication, though, for the simple fact that it effectively doubles the amount of storage I'd need to feel comfortable. I did an audit of the initial data dump that will be going on this thing, and it's going to be roughly 5.76 TB... Even if I were conservative and said I only wanted 4 TB redundant, we are still talking a decent-sized addition.

I looked at unRAID, and that looks interesting. It looks like it is doing a rough approximation of RAID 5's parity protection, without the striping. If I'm reading correctly, it stores across drives sequentially and uses the biggest drive for parity?

If that's the case... then when you go to expand the array size-wise, and insert a new drive that is bigger than any of the previous drives, how does it rebuild?

Do you have experience with unRAID?
 
Even if you need 4 TB of extra storage space for WHS, that's what, $360 for the disks? Compared to a RAID controller card, not that big of a difference. Factor in WHS's ease of use, Server 2k3 reliability, and the doomsday scenario of pulling all the disks out one by one and reading them on another PC (they are standard NTFS disks; good luck doing that with RAID 5). WHS is easy to expand with more disks.

Not meaning to ramble, as I have no experience with Linux servers or the like, but I do know WHS is perfect for what I want, and I'd give it a shot if I were you.
 
There are many threads on this; the general consensus seems to be that it's hard to beat WHS.
 
From your last post, it seems as if you're trying to use RAID as a backup. Don't make that mistake. If you have 5 TB of data, you'll need 5 TB worth of backups (barring any compression the backups may or may not be using). Period. Whether you set up RAID 1, RAID 0, RAID 5, RAID 6, WHS, etc., you're always going to need about twice that amount for backup; most people here recommend three copies, with one being offsite.
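To put rough numbers on the advice above, here's a minimal sketch. The 5 TB figure is from the thread; the copy count is my own illustrative assumption based on the "three copies, one offsite" rule of thumb.

```python
# Rough sizing sketch for the "RAID is not backup" point above.
# The 5 TB figure is from the post; the copy count is an assumption.

data_tb = 5.0        # data living on the server
backup_copies = 2    # copies beyond the live data (3 total, 1 offsite)

backup_tb = data_tb * backup_copies
total_tb = data_tb + backup_tb   # live data + backups, before any RAID overhead

print(f"live: {data_tb} TB, backups: {backup_tb} TB, raw total: {total_tb} TB")
```

Duplication or parity on the server protects the live copy only; every backup copy still needs the full data size again.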


Edit: As far as I understand it, when WHS does duplication, it takes whatever data you specified to duplicate and writes it to 2 separate drives. The data is on 2 spindles, so it would require both disks the data was on to die at once. I am not sure it's RAID 1, but it may be.
 
Last edited:
I love my QNAP NAS, and it has some extra Mac features on it, which of course I have no use for lol

but it does work well; I'm using a TS-209 II
 
WHS is definitely an option, but I'm not that knowledgeable about it, so I'll let others discuss it, except to say that since it sounds like the bulk of your PC usage is Mac-based, and you have some tech proficiency, some of the benefits of WHS (specifically the easy backups and turnkey operation) may not be worth it.

My personal setup (used primarily with my MacBook Pro, although occasionally with WinXP boxes) is a Linux-based box (Ubuntu Server, specifically). The current storage is a 3x1TB software RAID 5 and an old 40GB IDE drive for the OS, running on a dual-core Celeron (E1200, I think) with 1GB of RAM on an ASUS board, using the onboard ICH9 SATA ports.

Its main use is as a media server that I access from my Mac, and as a remotely accessible torrent downloader (Azureus/Vuze with the HTML WebUI plugin). Additionally, I use it for partial backups (including rsync functionality), and I've recently set it up as an iSCSI target; in combination with some (free) software on my Mac, I can use it as a networked Time Machine drive (without the need for any unsupported hacks).

I've found that it works particularly well with my Mac, since OS X's Unix underbelly lets me interface with a Linux server in ways that I couldn't with Windows. Also, I (at times) enjoy tinkering, and Linux provides much more in the way of options there. I also particularly like the capabilities of Linux software RAID. It's completely hardware-independent, so you can yank the drives and attach them to a different controller, or even a different machine (which has saved me from my own foolishness in the past). As long as you have a version of Linux with mdadm installed, it doesn't matter what your disks are connected to. Additionally, the RAID 5 expansions and rebuilds have gone off without a hitch multiple times (provided you pay attention to the details you pass it - I once expanded an array to 5 disks instead of 4, ended up with a degraded array, and had to copy all the data off, rebuild, and copy it back on).

I chose RAID 5 because I'm storing data that I value, but not so much that I need 1:1 backups of it. My important stuff is on other systems and backed up to the server, and the really important stuff (of which there is a very small amount) is encrypted and uploaded elsewhere. For my purposes this is a good balance between no redundancy and mirroring/duplication, and one that WHS can't offer without extra hardware. Of course, you need to figure out what's valuable to you and how you want to keep it safe (as jay mentioned above, RAID isn't backup, but you may have lots of things for which the expense of a full backup isn't worth it).

Hardware-wise, my system is overkill if anything. I started out running it on an 800MHz Duron with 512MB RAM and a 3x500GB RAID5 on a 4 port PCI controller. I got about 60MB/s sustained writes and 80MB/s sustained reads - far from smoking fast, but pretty impressive considering the hardware, and since it only had a 10/100Mbit connection, not a bottleneck. I'm pretty sure the PCI controller was the bottleneck on that system, since AFAIK the CPU usage never seemed to get too close to 100%, even with sustained drive activity. For comparison, my current setup gets about 90MB/s sustained writes and about 180MB/s sustained reads. That's less than my peak speeds when I had 4x and 5x500GB arrays which got about 170MB/s writes and 220MB/s reads (slightly slower disks, but more spindles than my current setup). Depending on the number of disks, you can easily saturate a GigE connection.
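A quick back-of-the-envelope check of that last claim, using the sustained speeds quoted above. This is only a sketch: 125 MB/s is the raw 1 Gbit/s line rate, and real-world GigE throughput lands a bit below it once protocol overhead is counted.

```python
# Compare the array speeds quoted above against the theoretical GigE ceiling.
# Speeds are the post's numbers in MB/s (decimal).

GIGE_CEILING_MB_S = 1000 / 8   # 1 Gbit/s ~= 125 MB/s before protocol overhead

quoted_speeds = {
    "current 3x1TB writes": 90,
    "current 3x1TB reads": 180,
    "old 5x500GB reads": 220,
}

for name, mb_s in quoted_speeds.items():
    verdict = "network-limited" if mb_s > GIGE_CEILING_MB_S else "disk-limited"
    print(f"{name}: {mb_s} MB/s -> {verdict} over GigE")
```

So sustained reads on either array would already saturate a gigabit link; only the write path stays disk-limited.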

For your hardware, I can think of two options - a lightweight Atom-based one, or one based on a cheap low-power Celeron or AMD equivalent. From my experience the most important thing is to get SATA ports on a PCI-E bus. That can be a little problematic on an Atom board, especially if you want GigE, but from the looks of it there are newer ones coming out now with more onboard ports and onboard GigE, in addition to a PCI-E slot. For maximum expandability, I'd go the mainstream CPU route - even the mATX boards have piles of SATA ports now. With an Atom you can get 3-4 onboard tops, plus probably 2 on an expansion card. If that's enough for you, then the Atom would likely be a good choice, and as long as you're not doing anything else too heavy duty (i.e. beyond file serving, some torrents and maybe a personal web server), based on my experience with the Duron, I would think it could hold its own. Keep in mind that if you're doing a RAID5, you'll need a separate boot disk (or you can partition a small slice off one to boot off, but that means you lose that amount on all your disks), but any old disk will do (if the board you get has IDE, you might consider using that to save SATA ports for your array), or even boot off a CF drive.

Software-wise, if you're comfortable tinkering (and it sounds like you are), I highly recommend a standard Linux distro, adding on to it as you like. You'll get a very flexible system, and probably learn a few things in the process. The general setup I'd use would be something along the lines of a RAID 5 using mdadm, possibly with LVM on top (this would let you dynamically carve up and resize partitions on your array, so you could add volumes for iSCSI / Time Machine, or whatever else). Samba for Windows file sharing, or NFS or netatalk (AFP) for sharing with the Macs. Plus whatever other things you want to use it for.
 
Last edited:
I'd hate to hijack, but to bring up WHS and RAID... what about RAID 10/5? Do you not consider that "backup"?

I'd just like to have a server where I can store all my TV and movies. However, I'd hate to have a drive die and have to re-rip/acquire the data I lost... so do I go RAID 10/5? What can WHS offer me for backup / HDD health?

The server would be used primarily for storing multimedia, maybe possibly streaming... I have the hardware; I'm just wondering how to go about the software/RAID/etc...

I'm in the same shoes as the OP, but probably with more in mind of using larger 1/1.5/2 TB drives when I can find a good deal on some soon...
 
I love WHS... but you have to be careful about backup if you use WHS as a file server.

WHS does 2 things (well, lots of things, but for this conversation, we'll concentrate on 2 of them).

It does automated backup of the PCs. This is great. Accidentally delete a local file? No big deal. Have some source code that you fubar'd? No big deal... there's a previous version in the backups. But those are still local files... they aren't visible to anyone else on the network if that computer is in standby...

It also provides file server duties. This is worrisome. If (like me) you jump at the chance to put all of your media on the file server so that it is accessible from everywhere... and then turn on duplication, since it would be hard to re-create... you are still quite exposed to accidental deletion. It isn't backed up, it's just replicated (and the replica gets deleted at the same time as the original). Once you come to grips with that, you have to start thinking about backing up your shared folders. What I did (for photos only, since they are completely irreplaceable) was to put the main storage of them back on my primary PC. Those get backed up via the automated backup... great. However, I also wanted the photos to be available from the rest of the machines when the primary machine is in standby... So, I use syncing software to ALSO sync all of the photos up to a share... quite redundant, and solvable by not having the server auto-back-up those files, but I'm comfortable with the overhead.

-Kevin
 
Somewhat similar to yourself, in that I wanted a server that could be a central store for all my photos... I am a photographer as a second job, so I wanted somewhere separate from my desktop to store them. Also, the wife can help with processing via the network.
Like KevinG above, I use syncing software (SyncBack) to create a copy of files from my desktop on the server.

The server also:
* backs up our computers: one desktop and two laptops
* is a central store for our music collection. I was recently in Paris and was able to stream music from the server to my laptop. Very impressed :cool:
* our personal documents are duplicated to the server, again using SyncBack... gives us access when away from home.

I am very impressed with WHS; it does exactly what it says, with no fuss.

A few observations:
* My server is based on an Intel Atom mobo and an Adaptec 4-port SATA card. With a lot of disk activity, the server can be somewhat unresponsive when logging in.
* There is no hard drive failure protection for the system drive

So, my next server, already in the planning:
* Will use a desktop mobo and a Celeron CPU
* A RAID 10 or 1E array: the mobo if RAID 10, or an Adaptec 3805 SAS card if RAID 1E
* Storage will hang off a SAT2-MV8
 
veritas7

Folder duplication would protect against failure of a data drive.

RAID could be desired for protection against failure of the system drive,
though if the system drive were not protected, the likely outcome would be a server re-install.
It depends on how critical the server is, and the hassle of a re-install vs. the cost of the hardware for RAID.
 
I explored both RAID on Server 2003 or just building a box for WHS. In the end I went with WHS. A good friend of mine went the other way.

At less than $0.90 a gig, hard drives are relatively cheap. My friend dropped a huge amount of money on a nice Areca RAID card. I will have to buy a lot of hard drives before I start getting into the price range of that card (and the more expensive hard drives). As well, we are both limited to the same transfer speeds by gigabit networking.

As well, WHS will back up your systems after waking them out of sleep, and is very much fire-and-forget. I highly recommend it, even if you do have to purchase a few more hard drives.
 
First of all, thanks to everyone who's offered their opinion in this thread. I'd forgotten what it's like to talk about this sort of stuff with a group of knowledgeable people.

I threw WHS on my C2D box, and my immediate impression was that, while it's a nice turnkey solution, I'm essentially just taking 2k3 and throwing a bloat layer on top, with functionality that I'm not really going to need or want. I can create and maintain my own network shares, and I've already got frontends for my media on the devices that will access them... I'd still have to add UPnP / DLNA media serving / iTunes serving... so it gets me easy backup (which I won't use, due to the Macs and scripted backups via Time Machine and SuperDuper) and selective replication... and I'll be replicating everything... so I'm not sure where the benefit is in that.

Also, I realize RAID != a backup strategy... and I'm familiar with the 3-2-1 backup strategy (3 copies, 2 different media types, 1 off-site). There is an amount of pain I'm willing to take on our bulk media (music, movies, etc.), but having that restore layer facilitated by RAID or a RAID equivalent is something I'm willing to pay for. For the data that absolutely can't be lost (work, code, personal documents, pictures), we will be employing (and do employ now) a true backup strategy (on-server, external HD, monthly-updated DVDs and an external HD in a fire-proof lockbox, and cloud storage).

Again, my chief reservation about true hardware-based RAID is the rigid equipment requirements and cost of entry. I haven't explored Linux-based software RAID, which I would assume most of these NAS boxes you see are basing their implementations on.

In the end, I'm not sure I'm much better off than when I started. The most attractive things I've seen so far are these QNAP devices. They are apparently Linux-based appliances that offer most / all of the functionality I am looking for. The downside is cost, and the fact that they are, from a feature perspective, focused more on business / enterprise than home use.

Keep the opinions / comments coming, though. All are appreciated.
 
WHS won't be for everyone;
consider the "home" in WHS.
WHS was intended for people with a lot less experience than yourself.

For me, and many others, the simplicity of WHS is what makes it attractive.
 

Hey bluesdoggy,

I saw your post about unRAID and figured I would chime in as I am currently using it with my MBP.

The way unRAID expands its storage and restores your data if a drive goes down is that it uses the parity drive in concert with all the data drives to rebuild the new/failed drive bit by bit. It reads from all the remaining data drives, XORs those bits together, and compares the result to the parity bit: if they match, it writes a 0 to the new drive; if not, it writes a 1, which makes the XOR match up correctly again.
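For the curious, the rebuild logic described above boils down to the fact that XOR-ing a value with itself cancels out. A minimal Python sketch (illustrative only; unRAID operates on raw disk blocks, and the tiny two-byte "drives" here are made up for the example):

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Parity is the XOR of every data drive, computed when the array is built.
data_drives = [b"\x0f\x10", b"\xf0\x01", b"\x33\x22"]
parity = xor_blocks(data_drives)

# If one data drive dies, XOR-ing the survivors with parity recovers its
# contents exactly, because x ^ x == 0 for every bit.
lost = data_drives.pop(1)
rebuilt = xor_blocks(data_drives + [parity])
assert rebuilt == lost
```

This is also why a single parity drive can only cover one failed drive at a time: with two unknowns, the XOR equation no longer has a unique solution.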

The unRAID forum is one of the friendliest I have been a part of, and they can answer almost any question relating to unRAID. It sounds like you have done some reading on unRAID, but I will give a couple more links just in case you did not find them the first time around. You can check out the Topical Index and the Hardware Compatibility page to get a sense of everything that is in the forum and what hardware is currently being used. You can also check out this thread where I describe some of the stuff I have done to get my MBP to work better with unRAID and my network setup.

Feel free to ask questions if you have any.
 
Thanks for the info prostuff.

My biggest question (and this may be addressed in one of the links you supplied) is regarding expanding the array. If I understand what I've read correctly, the largest drive in the array is used as the parity drive. If I want to expand the array and the disk I put in is larger than the parity drive, does it rebuild the array with the new disk but only use as much of it as the parity drive's size, or does it recognize the bigger drive, copy the parity info over to the new, larger drive, and repurpose the old parity drive as part of the array?

Also, can anyone offer up an opinion on these QNAP devices? Specifically the 400 series. The form factor looks consistent with what I'm looking for, and the embedded Linux + utilities seem to cover the functionality I want.
 

You can insert a new drive that is bigger than the parity drive, assign it to the parity slot, move the current parity drive to a data slot, and then start the array again; it will go about doing its thing, no problem. The only thing you have to be aware of is that the expansion will take a while, and the array will be out of commission during that time.

What a lot of unRAID users do is use a little script called preclear.sh, which one of the users created to clear and set up a drive before adding it to the array; that way you do not have to wait for the lengthy clearing process.

The way I usually do a parity drive upgrade is to completely remove the parity drive, insert the new one, rebuild parity onto the new drive, and do a parity check to make sure all is good; then I run the old parity drive through the preclear.sh script and insert it into the array as a data drive. I do it that way so that if a data drive were to fail during the parity build onto the new drive, I would be covered: I could put the old parity drive back in and find a new data drive to rebuild onto. It's just a little extra insurance in case something happens.
 
Hi bluesdoggy,

After reading your initial post, I noticed that our setups are similar. I have 2 Macs, 2 XP boxes, and a PS3. I ordered parts for my backup server, and I am trying to figure out a way to share files/music/video with all 5 machines. Initially I was thinking of Ubuntu, setting up Samba, and then using MediaTomb to stream files to the PS3. But I am reading that sharing via either NFS or SMB (Samba) can cause access problems for OS X and XP boxes (anyone, please correct me if I am wrong).

How's WHS working out so far between your Macs, XP boxes, and your gaming consoles?
 