Help me not ditch FreeNAS, pleeeease!

The Spyder (2[H]4U, joined Jun 18, 2002, 2,628 messages)
Long story short, I tossed FreeNAS on an older dual-core Xeon 2.8 Supermicro server with two RAID arrays and around 2.2 TB of storage total.

I have had nothing but problems. I finally got it working Friday and left it for the weekend to run rsync backups of a few servers, and today it's dead. It reset back to factory settings and lost both shares. No drives show up anymore, no shares, and no services are running (rsync, FTP, SSH, etc.).

Do you think it's dropping my hardware-based RAID? I have seen a few people report that they have to use software RAID.
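For reference, the backup job is basically a nightly rsync pull over SSH; a minimal sketch of it (hostnames and paths here are made up for illustration):

    # Pull each server's data onto the array; -a preserves permissions
    # and timestamps, --delete removes files on the destination that no
    # longer exist on the source
    rsync -av --delete -e ssh backupuser@server1:/srv/data/ /mnt/array1/server1/
    # Cron runs one line like this per server every night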
 
If that doesn't work out for you, I would suggest you just install a Linux distro and set it up yourself. I gave up on FreeNAS when I could not get it to recognize all my hardware, and just installed a console-only install of Slackware and love it.
 
While I LOVE FreeNAS for its ease of use and setup, I hated it because the transfers are so slow.

I have since switched to Win2k8 Server but will also try out WHS.

I tried a few different Linux and BSD distros, but I couldn't get my hardware RAIDs to work right; I'm just not familiar enough with them.

Zack
 
Just use software RAID. It's more reliable, more robust, not tied to the hardware should your controller fail, and the performance is good. Really the way to go with Linux.
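On Linux that means md/mdadm. A minimal sketch of building a three-disk RAID5, assuming your member disks show up as /dev/sdb1 through /dev/sdd1 (check yours first; this is destructive):

    # Create the array and watch the initial sync
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
    cat /proc/mdstat
    # Put a filesystem on it and record the array so it assembles at boot
    mkfs.ext3 /dev/md0
    mdadm --detail --scan >> /etc/mdadm.conf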
 
Just use software RAID. It's more reliable, more robust, not tied to the hardware should your controller fail, and the performance is good. Really the way to go with Linux.

Good performance :confused: By what measure is software RAID going to do better than a battery-backed-cache PERC or Areca card? More robust - how? How is being tied to software better or worse than being tied to hardware?
 
Gave up on it. Had to. :(
The stable release would not even load.

Running Server 2003 with VMware for a BackTrack/CentOS web server.
 
Good performance :confused: By what measure is software RAID going to do better than a battery-backed-cache PERC or Areca card? More robust - how? How is being tied to software better or worse than being tied to hardware?

I'm not the person who posted it, but I can answer most of your (rhetorical? I can't tell) questions. Performance of software RAID is good enough these days - even a three-disk RAID5 can saturate a gigabit Ethernet connection (assuming your application is network file serving). Battery backup can be provided by a UPS for the entire system. As for robustness, I have no statistics. However, I do know that being tied to software is better than being tied to hardware - it's possible to move the disks to any other system, even a different architecture, without worrying about having a spare controller with the same firmware rev, etcetera. The data is never held hostage by the hardware, in other words. Software RAID is not a total replacement for hardware, but it's a lot better than it used to be, both in terms of performance and manageability. I've found that in the cases where software RAID isn't good enough, except for certain narrow applications, the whole storage architecture might as well be moved to a SAN. That'll solve the other problems you're having with storage, along with the ones you raised. As well as cost a lot more, unfortunately.
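To make the portability point concrete: with Linux md, the array metadata lives in superblocks on the disks themselves, so moving them to a replacement box is roughly this (device names illustrative):

    # md scans the member disks' superblocks; no matching controller
    # or firmware revision required
    mdadm --assemble --scan
    # or name the members explicitly
    mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mount /dev/md0 /mnt/data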

Gave up on it. Had to.
The stable release would not even load.
I'm curious: what stopped you from using a Linux distro, or BSD (yuck)?
 
Ease of setup/availability. I just moved a copy off one of my retired boxes. I sadly fell into the silly Windows-admin portion of my brain. Well, actually, the project had to be up and running ASAP and I was out of time. Silly 30-year data retention policies for aerospace.
 
Can't argue with that reasoning. 30-year retention? Optical media isn't reliable enough to meet that, except under controlled conditions, and hard drives aren't a sure bet either. If you don't mind the question, what's the plan for achieving that longevity? If it's repetitive copying to new media, how are you guaranteeing data integrity?
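For what it's worth, the usual answer to the integrity question is a checksum manifest carried alongside the data; a minimal sketch (paths are placeholders):

    # Record a checksum for every file before copying to new media
    cd /mnt/archive && find . -type f -print0 | xargs -0 sha256sum > /root/archive.sha256
    # After the copy, verify everything landed intact
    cd /mnt/newmedia && sha256sum -c /root/archive.sha256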
 
Performance of software RAID is good enough these days - even a three-disk RAID5 can saturate a gigabit Ethernet connection... Battery backup can be provided by a UPS for the entire system.

Battery-backed cache isn't for UPS-style recovery - it's there to greatly improve performance on hardware RAID cards (and on most large storage arrays too - you'll see somewhere around 50-60% increases from having it enabled). Also works great if you're using local drives with a GOOD hardware RAID card.

You CAN load-balance / combine gigabit paths, you know, especially for things like iSCSI. Microsoft has MPIO, ESX can load-balance per path, etc. - especially if you have multiple targets per array/LUN. I can easily saturate a hardware RAID card, so I ~know~ you can saturate software RAID.
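On the Linux side, the rough equivalent is open-iscsi plus dm-multipath; a sketch, with the portal IP and IQN made up for illustration:

    # Discover targets on the array, then log in
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    iscsiadm -m node -T iqn.2008-01.com.example:storage.lun0 -p 192.168.1.50 --login
    # With two NICs each logged in to the target, dm-multipath
    # presents one device and spreads I/O across both paths
    multipath -ll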

I guess I always think a bit too "big" for this kind of thing - even my small SAN environments (Openfiler, for instance) can saturate a good hardware RAID card :)
 
If you don't mind the question, what's the plan for achieving that longevity?

High-end tape libraries can do it, but we're talking multi-million there...
 
Just set up my first FreeNAS server. I have little to no Linux/BSD experience, but it was extremely easy for a Linux/BSD noob.

As we speak, I'm transferring all my multimedia to it now!

EDIT: using 0.69RC1 Salusa Secundus (revision 3991)
 
The data is never held hostage by the hardware, in other words.

Although I can't argue with the cost factor, your data should never be held hostage by the hardware if you do things properly in the first place. If the data is important enough, you should have some other form of backup of it. If you do, then you can replace any failed hardware with any other make/model that meets your needs and recover the data from the backups.

repeat after me: RAID IS NOT A BACKUP SOLUTION!!

A RAID-1, RAID-5, or RAID-6 setup provides some fault tolerance, which is very nice for making recovery of data more convenient **should a hard drive fail**, but it should NEVER be relied upon as a trusted backup solution. For important data, you can use tape drives and/or spare hard drives as backup media, which should only be connected to the system during the backup/recovery process and stored away safely the rest of the time.
 
Everything is printed on paper, and this is the third backup server the data is mirrored to nightly, including one offsite server with its own tape backup.
 
repeat after me: RAID IS NOT A BACKUP SOLUTION!!

I think that point is well understood by everyone here. But there are backups, and there are backups. Quite a few shops use tape for backups, which is slow, somewhat inconvenient, and definitely not automatic or seamless (not talking about mega$ robotic libraries, guys). For more than a couple of terabytes, the restore time can be huge. Far better to avoid it, if possible.

Battery-backed cache...
There's nothing magical about the battery; it simply enables more aggressive caching. On a software RAID, better caching behavior can be had by tuning pdflush (in the Linux kernel) for the workload. The real reasons for hardware RAID's (potential) performance edge are a dedicated XOR processor and more direct communication with the drives. As I suspect you already know.
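For example, the pdflush knobs are ordinary sysctls; the values below are illustrative, not recommendations:

    # Percentage of memory that can be dirty before pdflush starts
    # writing back in the background
    sysctl -w vm.dirty_background_ratio=10
    # Percentage at which processes doing writes get throttled
    sysctl -w vm.dirty_ratio=40
    # Age (in hundredths of a second) after which dirty data
    # must be flushed to disk
    sysctl -w vm.dirty_expire_centisecs=3000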

My point still stands, I think - software RAID is a viable alternative to hardware in many situations. Maybe even most, when considering smallish setups.
 
There's nothing magical about the battery; it simply enables more aggressive caching... My point still stands, I think - software RAID is a viable alternative to hardware in many situations. Maybe even most, when considering smallish setups.

"small setup" for me is a CX3-80 on 2g fibre ;) Medium is a single DMX-4. Big is SVC with multiple DMX's behind it on 4g fibre or Infiniband/10gig ethernet.

Almost everyone I know either uses a remote SAN or a tape library with off-site replication, but again, I work with real companies in the Fortune 1000, with big sites running their main business off of our hardware/software. If you DON'T have those multi-terabyte backups, your shareholders will eat you alive. Business continuity for the win.

I do know what you're talking about, and a high-end card can outdo software, especially at RAID 1+0 and 0+1. RAID 5 is close, or even faster sometimes, especially with XFS, and 1+0/0+1 are about 10-20% faster on good hardware; when you're trying to run 50 VMs off of a single Openfiler server, every drop of performance matters ;) I'm actually impressed with how the software performs for RAID 5.

edit: Correction - software RAID 5 reads are a bit faster, but it gets stomped on writes.
http://www.linux.com/feature/140734 Given the stuff I normally see, hardware > software by a long shot (185% for block writes). It depends on your app, then - we definitely benefit MASSIVELY from hardware RAID.
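If anyone wants to sanity-check numbers like that article's on their own array, a crude sequential test looks something like this (bonnie++ or iozone give fuller results; the path is a placeholder):

    # Sequential write; conv=fdatasync forces the data to disk before
    # dd reports a rate, so the page cache doesn't inflate the number
    dd if=/dev/zero of=/mnt/array/testfile bs=1M count=4096 conv=fdatasync
    # Drop caches, then test sequential read
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/array/testfile of=/dev/null bs=1M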
 