Nice post dualblade, you seem to have pretty much covered the ground.
I'm currently trying to work myself up to a point where I can de-recommend EVMS. I did some testing, and on my machine a raw block device is several times faster than an EVMS-created array; for sheer convenience of setup, though, EVMS is pretty nifty.
I'm currently investigating Solaris 10 for its awesome-looking ZFS; I'll post another thread when I get some results.
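For anyone who wants to reproduce that comparison, a rough sequential-read test looks something like this (the device names are placeholders for your own md array and the EVMS volume on top of it):

Code:
    # sequential read straight off the md array
    dd if=/dev/md0 of=/dev/null bs=1M count=1024
    # same test through the EVMS-managed volume
    dd if=/dev/evms/myvolume of=/dev/null bs=1M count=1024

Reads are safe to run on a live array; just don't reverse if= and of=.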
Nah, it's that fiber cable running to your house that does that.

Nothing says "I download porn" more than 15 drives in a raid 50.
For the home media server, as Dualblade already pointed out, software raid is actually a better alternative.
Speed isn't a huge issue if you just need it fast enough to stream media over a home network, which software raid handles fine. A hardware raid controller means that if the card fails down the line, you will have to replace it with that exact controller or, if you're lucky, a card from the same manufacturer.
Software raid actually makes a lot of sense. I've used Windows 2003 Server software raid (5) for the past year and a half, but am now thinking about switching to linux just to have the option of raid 5 expansion.
In summary: software raid is actually a very viable solution for a dedicated home storage server.
Also, another point is that with hardware raid you're stuck with the same manufacturer, you can't transition from a small cheap card to a large expensive card, so you're stuck. A proper software raid is always the best solution.

Also, another point is that with software raid you are stuck with your own solution: you can't transfer platforms easily, and in some cases you are stuck. A proper hardware raid solution is always the best solution. In your situation you would be stuck if you wanted to move over to linux while running software raid... most raid cards support online capacity expansion and live swapping, and it's also a much more rugged platform.
If you don't know how to use Linux in the general case (no pun intended), you certainly don't know how to use Linux for a fileserver. But if you've got some expertise with command line stuff, you are a good candidate for Linux software raid. It may take longer to set up, but it's a good deal cheaper. When I built my array, the 8-port controller cost me $130. The cheapest comparable hardware controller was probably $500. I spent a few hours setting it up (even counting the fact that much of that effort would have happened with a hardware card anyway), but I saved $100/hr for my time. It was worth it to me to save a couple hundred bucks, but if your time is too precious to you or you want the best performance possible, hardware raid is the place to look.

I went through the same process. My advice would be: forget linux. Don't even go that way. It will give you software raid, and you could use something like EVMS that will let you dynamically grow the array, but don't bother. It will take you 10X as long to set everything up. You'll deal with hardware incompatibilities, Linux install problems (install Ubuntu Server because you want to use LAMP and, oh, you can't install a GUI that way, because who would want a GUI on a server?), administration issues, and every extra thing you try to do will be a massive ordeal.
Raid controllers don't exactly spring up overnight; they stay on the market for an incredibly long time. Also, I have yet to see raid controllers fail at anything like the rate motherboards do. I also have yet to be unable to find a matching controller years down the road.
Also, another point is that with hardware raid you're stuck with the same manufacturer, you can't transition from a small cheap card to a large expensive card, so you're stuck.
A proper software raid is always the best solution.
What do motherboard failures have to do with anything? If your motherboard fails, all you have to do is replace it. Software raid is not hardware specific. If your OS drive fails, just replace it and reinstall the OS.
cornfield: EVMS uses md arrays, but it's doing some intermediate mapping or something. If you don't need the pretty management interface EVMS gives you, a plain mdadm array is a good deal faster in sequential transfers. If you want to split your array into volumes rather than one big filesystem, EVMS will make setting up and administering that much easier.
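For reference, setting up the equivalent array with plain mdadm is only a couple of commands (the three-disk layout and device names below are just an example):

Code:
    # create a 3-disk raid 5 array out of three partitions
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    # put a filesystem on it and watch the initial sync
    mkfs.ext3 /dev/md0
    cat /proc/mdstat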
It was a logical comparison made to indicate the failure rates of hardware-based controllers... or the lack thereof.
Your logic is that you are using a motherboard's headers for your drive connectivity; therefore, if you upgrade the board, you are potentially out of a drive header or two. This is increasingly important now that things such as IDE headers are disappearing from motherboards.
Arguing "but then you have to find an identical controller" is a weak statement.
O'RLY?
No one said anything about IDE controllers. I am using SATA, like most hardware raid controllers. Your statement, my friend, is illogical. Finding SATA or even IDE controllers, whether onboard the motherboard or on some sort of PCI expansion card, is way easier than finding a manufacturer-specific HW raid controller. To compare the two doesn't even make sense.
I bought a motherboard that's got quite a few ports on it (8 SATA, 3 IDE), so I haven't needed to get a controller card (this is software raid).
Simply put: if a hardware raid card fails and you can't find the exact same card, or a card compatible with your array made by the same manufacturer, then you are screwed. With software raid, if your motherboard fails you buy another one; if your SATA or IDE expansion card fails you buy another one. You will be able to rebuild your array regardless. If you didn't understand what I typed initially, you should have asked. Software raid is in no way hardware specific.
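That portability is easy to demonstrate, because md writes its metadata onto the member disks themselves. After moving the drives to a new board or controller, reassembly is just a scan (a sketch; device names are examples, and older mdadm versions may want ARRAY lines in /etc/mdadm.conf):

Code:
    # inspect the raid superblock that lives on a member disk
    mdadm --examine /dev/sda1
    # reassemble the arrays mdadm can find, wherever the disks ended up
    mdadm --assemble --scan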
Dude, calm down.
You're the one that said hardware was the end-all raid solution.
And I agree; however, the other posters do not agree about the first comment.

I never said software raid was superior. I did say that for a home media solution with not many users, software raid is more than adequate.
This I do not agree with. You might have reduced a hardware point of failure, but you also introduced other areas for potential failure.

By adequate I mean reliable (fewer points of failure) and with bearable read/write speeds.
Speeds fast enough to stream/access your media: music, documents, or even HD video. If you are going to refer to any of my posts, please do not make things up.
Where did I say this? Having a high post count on a forum does not make you right by default.
I provided plenty of stats for you to feast on.

Whether you have a place for your drives, IDE or SATA, onboard or with the aid of a controller card, is up to you. Hardware is not a restriction. If you need some stats to comprehend that, then I'm sorry, I cannot help you.
If you use 2.6.16 or later (which will ship with Ubuntu, I'm pretty sure - Debian has it) you'll be all set to expand.

unhappy_mage: Thanks for the info. Is there a GUI that helps one manage mdadm better? It seems really ground-level. I'm trying to move from Windows 2003 to linux, Ubuntu.
I opted not to use linux a year ago because raid 5 expansion was only possible using raidtools, a utility that was untested and had very little support. Do you know if Ubuntu 6.10 comes with the right kernel to support raid 5 mdadm expansion? I read somewhere that you may have to enable the right kernel?
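If the kernel does support it, the expansion itself is short. Roughly (a sketch with placeholder device names; back up first, since a reshape interrupted by a crash can be hard to recover):

Code:
    # add the new disk to the array as a spare
    mdadm --add /dev/md0 /dev/sdd1
    # reshape from 3 raid devices to 4; this runs for hours in the background
    mdadm --grow /dev/md0 --raid-devices=4
    # once the reshape finishes, grow the filesystem to match
    resize2fs /dev/md0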
Raid 1 can be thought of as raid 5 with two disks: with only one data block per stripe, the XOR parity is just a copy of that block, so a 2-disk raid 5 ends up with the same data on both disks, like a mirror.

I'm about to throw some drives in and start messing around with it. I also read that raid 1 (mirroring) uses the same algorithm as raid 5, and that raid 1 is actually raid 5 with just 2 drives. Is it possible to have a drive with data on it put in a raid 1 array by adding a spare, then add another to up it to raid 5, all without losing the data on the original drive?
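A heavily hedged sketch of that migration (the in-place level change needs a newer mdadm/kernel than most distros shipped at the time of this thread, device names are placeholders, and you should not try it without a backup):

Code:
    # build a degraded mirror on the NEW disk, leaving one slot empty
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
    mkfs.ext3 /dev/md0
    # copy the data from the old drive onto /dev/md0, then add the old drive
    mdadm --add /dev/md0 /dev/sda1
    # later, on a recent mdadm/kernel, reshape the mirror into a raid 5
    mdadm --grow /dev/md0 --level=5
    mdadm --add /dev/md0 /dev/sdc1
    mdadm --grow /dev/md0 --raid-devices=3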
Solaris 10 is neat. When I get some more disks to play with I'll tell you what I think of ZFS.

I'm also about to install Solaris 10 and see what ZFS is all about. If anyone has any experience with Solaris 10 and/or ZFS, let us know what you think.
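For the curious, the ZFS equivalent of a raid 5 setup is about one command (the disk names are Solaris-style placeholders):

Code:
    # create a raidz (raid-5-like) pool from three disks, mounted at /tank
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0
    # check health and capacity
    zpool status tank
    zfs list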
I did, and it worked. I even moved from an IDE controller with internal bridges to SATA, to a native SATA card. Nothing like having your SATA disks start out appearing as IDE and then appearing as SCSI... and the array not changing at all.

Keep in mind, I'm still using my raid controllers from 1998; you can still buy them on ebay, and yet I moved them through many platforms and operating systems... try that with your software raid.
Unless budget is a consideration, or you're using "simple" raid. Buying a $400 raid controller for a 2-drive raid 1 is probably cost-ineffective, even if it's mission-critical (but then, you need two machines, not two disks!).

I can back this up with research upon research proving this claim incorrect. I can back this up with massive vendors and groups. Hardware raid is superior to software raid.
Except Sun. Their X4500 (which is quite the beast!) doesn't have any hardware raid controllers, just six 88SX6081s and ZFS support. (PS: Take a look at the block diagram for that sucker. Makes me salivate every time.)

Makes me wonder why Compaq, HP, Dell, and all the large enterprise environments favor hardware raid each and every time when it comes to a solution... hmmmm
I'll complain about these one link at a time, in the same order as you linked them. Bet I can find something that makes each suspect as evidence of hardware superiority.

Here is some food for your brain from very well-established vendors/sites:
Not true of LSR. A single P3 can do ~2GB/s of raid 5 calculations, or ~800 MB/s of CPU calcs. In most cases, either you're doing less I/O than that, or you can dedicate a processor to parity calcs. Pentium 3 machines are cheaper than hardware raid, for most values of "Pentium 3 machine" and "hardware raid".

StorageReview said:
If you want to use any of the more esoteric RAID levels such as RAID 3 or RAID 1+0, you pretty much require hardware RAID, since support for these levels is usually not offered in software. If you need top performance while using a computation-intensive RAID level such as RAID 5, you also should consider a hardware solution pretty much "mandatory", because software RAID 5 can really hurt performance.
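You can watch the kernel measure exactly this on your own box: md benchmarks its parity routines when the raid module loads and logs the result (the exact wording varies by kernel version):

Code:
    # the parity speed test results show up in the kernel log at module load
    dmesg | grep -iE 'raid5|xor'
    # typical output on a P3: "raid5: using function: pIII_sse (2096.000 MB/sec)"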
Correct. On this point, hardware raid wins.

Also, you seem to forget that most raid controllers support fault tolerance with battery backup; you can also have mirrored controllers.
I agree. Software raid isn't superior, it's just more convenient and cheaper. But for probably 80% of the raid builds that happen on this forum, it's enough, and buying a raid controller designed to handle server-type loads is overkill and unnecessary.

I'm not disputing your claim that you can use a bunch of cheap hardware for a raid solution; I am disputing the claims where you are claiming it to be superior.