RAID Feedback

i'm no expert (not even close), but i can share my experience. i've got an 8x300gb raid5 linux setup. the drives are a mix of ide and sata and of various models (7200.8 and 7200.9). they're all seagate, and all 7200rpm 8mb cache drives. i bought a motherboard that's got quite a few ports on it (8 sata, 3 ide), so i haven't needed to get a controller card (this is software raid). i use the array for storing media, so i don't need much in the way of speed. as long as i can play back a video over the network, it's fine. i haven't done any benchmarking with the system, but network transfers over gigabit are around 20MB/s. i think the array itself is faster than that, but i haven't done any local performance testing or tweaking because 20MB/s is complete overkill for my usage.

i'm using ubuntu server 6.10 64bit as my os, and evms to manage the array. ubuntu server doesn't come with x-windows installed by default, but an awesome meta-package is provided as long as you enable the universe repository in apt. a simple 'apt-get install ubuntu-desktop' installs everything needed to run x (libraries, x itself, fonts, themes...). most of the time i'm using the standard text console, but some configuration changes are easier to do in X, and it's certainly easier to download files from the web in firefox than in a text-based browser. when you install X it starts up by default, but ubuntu has a services configuration section, and if you uncheck gdm there, you boot to a text console instead.
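for reference, here's a rough command-line sketch of those steps (the sed pattern is just an example; double-check your sources.list before running it):

sudo sed -i 's/^# *\(deb .*universe\)/\1/' /etc/apt/sources.list   # uncomment the universe lines
sudo apt-get update
sudo apt-get install ubuntu-desktop     # pulls in x, gnome, fonts, themes, the works
sudo update-rc.d -f gdm remove          # keep gdm from starting at boot, so you land at a text console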

i've got openssh installed, and that's what i use to administer the box most of the time, as it's even faster than kvm'ing over to the server.

samba is installed on the server, so the windows boxes in my house can see the array just fine. it's a central point for my family's movies, music, and tv shows.
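for anyone curious, the samba side is tiny; a share definition like this appended to smb.conf is all it takes (share name and path here are just examples):

cat <<'EOF' | sudo tee -a /etc/samba/smb.conf
[media]
   path = /mnt/array
   read only = no
   guest ok = yes
EOF
sudo /etc/init.d/samba restart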

as far as the software vs. hardware debate goes, i felt that the money spent on a controller card was better spent on more hard drives. i wasn't worried about software raid consuming processing power, as the server is dedicated as a file server. it's an intel 640 (p4 3.2ghz) on a foxconn 925xe desktop motherboard that just happens to have a ton of ports. aside from the 4 sata and 1 ide port provided by the chipset, there's an extra sata controller on the motherboard (bringing the sata port total to 8) and an extra ide controller (adding two more ide ports). the board was ~$75, and i bought the processor for ~$100 shipped on the forums here. the system has 512mb of ddr2 that i picked up pretty cheap, and most of it goes to waste most of the time.

i use a coolermaster stacker as my case. it's gigantic and very well built. there's fiber mesh on all of the intakes to help filter out dust. coolermaster offers these great drive holders: they take up 3 x 5.25" bays and hold 4 drives, with a 120mm fan on the front to help keep the drives cool. the power supply is mounted at the bottom, and where the power supply would usually be are 2 x 120mm fans arranged vertically, which cool nicely. all of the fans are running at slow speeds, so the case isn't too noisy despite having 2 rear 120mm fans, 1 in the power supply, and 3 120mm intake fans (one on each of the drive holders i have). if you're going to hold a bunch of drives, i highly recommend a case like this, as it gives you lots of space and good cooling, both important things in a server.

the nice thing about running software raid is that i can move the drives to another system at any time and the array will still be fine. on the hardware side, there's no standard for how raid metadata is stored, so if your controller dies and you can't find a matching one, you're in a lot of trouble. sometimes different controllers from the same manufacturer will work out, but it's still a gamble. also, evms allows you to expand an existing array if you want to add a disk. some controllers support this, some don't. it's usually referred to as online capacity expansion (or sometimes just capacity expansion), which means you don't have to back up, destroy the array, and recreate it at a larger size when you want to add a drive. i'm going to put a semi-obvious disclaimer here and say that whether you're doing software or hardware raid, you should definitely test expanding/shrinking an array as soon as you get the controller. put data you don't care about on the array, so if a shrink/expand fails, there are no tears. evms recovers gracefully from a failed shrink/expand. let's say you're expanding the array and the computer shuts down; when you go into evms (after the machine boots back up again), evms will bitch a bit and restore the array back to its original state. you can then try again to expand the array. every hardware controller's ui is different, so there's no way for me to compare this, but you'd certainly hope they'd have a method of recovery given a failed operation.
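to give a concrete idea of what an online expansion involves, here's roughly the raw md equivalent (device names are made up; evms does the same sort of thing underneath, just through its own interface, and if the volume is managed by evms you'd do the resize there instead):

mdadm --add /dev/md0 /dev/sdi1           # add the new disk to the array as a spare
mdadm --grow /dev/md0 --raid-devices=9   # reshape the raid5 from 8 members to 9
resize_reiserfs /dev/md0                 # once the reshape finishes, grow the filesystem to match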

in evms, the process that i used for creating the array is as follows (a rough mdadm equivalent is sketched after the list):

- set up the dos segment manager on each disk
- created a raid4/raid5 region with all the disks
- created an evms volume from the region
- formatted the evms volume (reiserfs)
- mounted the volume
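for comparison, here's approximately the same thing done with plain mdadm instead of evms (disk names are examples):

fdisk /dev/sda    # create one full-size partition of type fd (linux raid autodetect); repeat for each disk
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[a-h]1
mkfs.reiserfs /dev/md0
mkdir -p /mnt/array
mount /dev/md0 /mnt/array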

i had some occasional trouble with the ncurses evms utility, and i had no desire to mess around with the evms cli, so i did my array setup in x-windows (evms-gui).

if you use ubuntu, evms is an easy package download, so you don't have to know how to compile and build your own software (although it's not very hard). many linux distros now have evms support built into the kernel. this means that you can run an array as a boot drive, but i'd recommend against it. use a small drive for booting, and let the array be separate. i use an old 20gb ide drive as my boot drive for the system. it's not especially fast, but who cares; once the system boots up, it just sits there.
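if i have the package names right (they may differ slightly depending on your release), installing it is just:

sudo apt-get install evms evms-gui    # evms-ncurses is available too if you prefer the text ui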
 
Nice post dualblade, you seem to have pretty much covered the ground.

I'm currently trying to work myself up to a point where I can de-recommend EVMS. I did some testing, and on my machine using a raw block device is several times faster than an EVMS-created array, but for sheer convenience of setup, evms is pretty nifty.
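(For anyone who wants to reproduce that kind of comparison, a crude sequential-read test shows the gap well enough; the device and volume names below are just examples:)

dd if=/dev/md0 of=/dev/null bs=1M count=4096              # the raw md device
dd if=/dev/evms/media_vol of=/dev/null bs=1M count=4096   # the EVMS volume built on top of it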

I'm currently investigating Solaris 10 for its awesome-looking zfs; I'll post another thread when I get some results.
 
If this is only a backup storage device, then congratulations -- most people don't think about this enough. If this is actually a general purpose file server + local backup storage, then consider -- what backs up the file server? "RAID is not a backup", regardless of how good, and planning ahead for a backup when you're designing a file server is IMO a good tactic and constraint, and can even help you maintain the RAID.

As far as RAID config choices go -- based on my experiences (drive failure, a spare that failed to engage, a consistency failure and data inconsistency in another array), I don't even think that RAID 5 is enough. RAID 6 might be a better choice, for greater defense in depth. There's an obvious drive cost and some performance cost, but performance is improving, and drive and other costs are coming down. The biggest downside is probably lack of availability, which will only continue as long as few people use it.
 
Nice post dualblade, you seem to have pretty much covered the ground.

I'm currently trying to work myself up to a point where I can de-recommend EVMS. I did some testing, and on my machine using a raw block device is several times faster than an EVMS-created array, but for sheer convenience of setup, evms is pretty nifty.

I'm currently investigating Solaris 10 for its awesome-looking zfs; I'll post another thread when I get some results.

thanks. i have no enterprise experience, no real storage experience, and almost no linux/unix/solaris experience. i heard about raid5 and what it could do, and i really liked the idea of expandable, fault-tolerant storage. i couldn't find any specific guides on evms raid5, so the procedure i followed was just trial and error and what seemed to work for me. i read all about lvm containers, and i had no idea how to use them with my setup (or whether they even applied). if anyone has anything to add, please do. i know i'm short on knowledge and experience, but i felt it was better to put up a possibly flawed setup and have people correct it than to offer no help at all. if i had no confidence in my config, that'd be a different story, but i've tested my box and it seems to work ok. it may not be perfect, but it's probably not a total disaster either.

i was surprised not to find a complete evms raid5-focused walkthrough, as i'd imagine it'd be the best kind of guide to write up. i bet that most storage newbies follow my thought process:

- need storage
- read about raid 5
- wanted disks to be one logical unit, and liked the idea of fault tolerance
- wanted capacity expansion
- can windows do software raid 5 with capacity expansion? no, ok linux can so i'll do that
- evms something something, ok sounds good

for me, going to evms/linux was my first linux experience, and i kind of wanted my hand held. the 3 million page guide that evms has was good, but sometimes it's really nice to have focused and precise information. most of us have storage controllers on our motherboards that support raid 0/1, so a guide that just sticks to how to download, compile, install, and configure a raid5 array using a given distribution of linux would be great.

i found ubuntu desktop really easy to use, and i love that its server version has an option to install a LAMP (linux, apache, mysql, php) stack as part of the install. not having to configure the separate applications earns points in my book. i find apt-get easy to use, and i like the feel of the desktop. my question (and this still lingers) is whether ubuntu is the best distro for the purpose. it seems to be working just fine, but i don't know if it's as fast or as stable as the alternatives. it would be nice if a guide suggested a distro, and then tailored its instructions around using that distro (as opposed to more generic instructions meant to cover all distros). hell, maybe a 2-part "how to install linux / how to install evms" guide would be good. perhaps i will write up such a guide to help those who are new to this. linux isn't hard, but the command line is very different and can be (was for me) very, very frustrating. coming from a world where you just download a self-extracting executable and run it to install anything (programs, drivers...), apt-get is a big change. until i understood that packages existed and how to really use apt (configuring sources, searching available packages with apt-cache search), i spent a lot of time banging my head against the wall.
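for anyone else making the jump, the handful of apt commands that cover most of it (mdadm here is just an example package):

apt-cache search raid        # search package names and descriptions for a keyword
apt-cache show mdadm         # read what a package is before installing it
sudo apt-get update          # refresh the package lists after editing /etc/apt/sources.list
sudo apt-get install mdadm   # download and install it, dependencies and all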

i can definitely see raid 5 bringing geeks over to linux. raid 5 is still far too complex for a consumer application (though i bet that will change soon), but it's great for geeks like us. disks are big and cheap now, and backup is not. i know raid5 is not the same thing as backup, but as a way to store large amounts of media, i think it's the most elegant. burning hundreds of dvds and swapping them in when i want files is such a pain as to be unacceptable. tape drives are very expensive, access to data is not direct (it has to be restored), and capacity still isn't enough for a large media collection. i don't see any way to hold dvds, tv shows, and music except on hard drives, and their failure rate makes them a lousy permanent medium. raid5 helps with that, although of course something like a power supply failure could still take out all the drives at once. however, for me at least, the nature of a media server makes its data less than mission-critical. if all my media disappeared, would i be sorry? yes. would it be a heartbreaking loss? no. is it worth giving up one hard drive's worth of capacity to get a bit more reliability? yes. is it worth the cost of an actual backup option for that raid array? no. the pictures i have taken are absolutely irreplaceable, and are therefore on my computer's hard drive, the file server's hard drive, and burned to dvd. i can download all my tv shows again.

almost anything that can be done in linux can be done in windows, and i think windows makes a better desktop power user/gamer os (as long as you don't mind paying for an os). i think raid 5 with expansion is the first app that forces people to look outside of windows. even if there was a 3rd party app that could do software raid 5 in windows, it'd likely be 1) very expensive, and 2) not have the stability/reliability of something built into the os kernel. maybe it's just superstition, but i just don't feel comfortable having an app manage my data in a way that the system does not natively support. evms is an extension of the abilities already found in linux, so using it for configuration doesn't bother me.
 
Nice post dualblade, you seem to have pretty much covered the ground.

I'm currently trying to work myself up to a point where I can de-recommend EVMS. I did some testing, and on my machine using a raw block device is several times faster than an EVMS-created array, but for sheer convenience of setup, evms is pretty nifty.

I'm currently investigating Solaris 10 for its awesome-looking zfs; I'll post another thread when I get some results.

I've been hearing lots of good things about ZFS, so I'll be interested to hear what you find out. Haven't had the time or energy to try it myself yet. :p
 
If you have a machine dedicated to this purpose that won't be doing anything else, I say software RAID. The cool thing about software RAID is that you can actually buy more than one controller card and do nested raid across two or more controllers.
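(With linux software raid, nesting is just building an array out of arrays; here's a sketch of a raid 50 with made-up device names:)

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[a-d]1
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[e-h]1
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1   # stripe the two raid5s together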

Nothing says "I download porn" more than 15 drives in a raid 50.
 
I would honestly recommend a proper raid controller, for the reason that you can offload some potential problems and you can also offload resources.

I guess if this is more of a learning experience then it's fine, but if you want to hold your data on there without having to worry much about it, I would go with a hardware based solution.
 
For the home media server, as Dualblade already pointed out, software raid is actually a better alternative.
Speed isn't a huge issue if you just need it fast enough to stream media over a home network, which software raid handles fine. A hardware raid controller means that if the card fails down the line, you will have to replace it with that exact controller or, if you're lucky, a card from the same manufacturer.
Software raid actually makes a lot of sense. I've used windows 2003 server software raid (5) for the past year and a half but am now thinking about switching to linux just to have the option of raid 5 expansion.

In summary: software raid is actually a very viable solution for a dedicated Home storage server.
 
For the home media server, as Dualblade already pointed out, software raid is actually a better alternative.
Speed isn't a huge issue if you just need it fast enough to stream media over a home network, which software raid handles fine. A hardware raid controller means that if the card fails down the line, you will have to replace it with that exact controller or, if you're lucky, a card from the same manufacturer.
Software raid actually makes a lot of sense. I've used windows 2003 server software raid (5) for the past year and a half but am now thinking about switching to linux just to have the option of raid 5 expansion.

In summary: software raid is actually a very viable solution for a dedicated Home storage server.

Raid controllers don't exactly vanish overnight; they are on the market for an incredibly long time. Also, I have yet to see raid controllers fail at anywhere near the rate motherboards do. I also have yet to be unable to find a matching controller years down the road.

Also, another point is that with software raid you are stuck with your own solution, you can't transfer platforms easily and in some cases you are stuck. A proper hardware raid solution is always the best solution. In your situation you would be stuck if you wanted to move over to linux if you were running software raid... most raid cards support online capacity expansion and live swapping, and it's also a lot more rugged platform.

Perhaps in his case he would be fine since the use would be minimal; however, if he does decide to add in a ton of hard drives, he would eventually need to move to a larger controller with raid functionality.
 
www.google.com





JK.

I went through the same process. My advice would be: forget linux. Don't even go that way. It will give you software raid, and you could use something like EVMS that will let you dynamically grow the array, but don't bother. It will take you 10X as long to set everything up. You'll deal with hardware incompatibilities, Linux install problems (install Ubuntu Server because you want to use LAMP and, oh, you can't install a GUI that way because who would want a GUI on a server), administration issues, and every extra thing you try to do will be a massive ordeal.

In the end, how are your files vulnerable to viruses? Keep your machine behind a router and block all unnecessary ports. Install Hamachi so you can get to it remotely behind the router, and spend the money on a SATA RAID card.

In terms of RAID level, 5 will do. Important files should be stored in multiple locations. RAID-5 doesn't protect you from a fire, flood, electric surge or aliens.
 
Also, another point is that with software raid you are stuck with your own solution, you can't transfer platforms easily and in some cases you are stuck. A proper hardware raid solution is always the best solution. In your situation you would be stuck if you wanted to move over to linux if you were running software raid... most raid cards support online capacity expansion and live swapping, and it's also a lot more rugged platform.
Also, another point is that with hardware raid you're stuck with the same manufacturer, you can't transition from a small cheap card to a large expensive card, so you're stuck. A proper software raid is always the best solution.

Transitioning from Windows to Linux isn't trivial, even if the block device is supported in both OSes and is viewed the same way. NTFS is not (in my ever-so-humble opinion) production-ready on Linux. I think anyone that's used it for a good period of time read/write would agree with me.

LSR (linux software raid) supports OCE, and as I understand it, hot-swapping works if your controller supports it. Mine doesn't, or I'd report on it. What happens if you pull a disk and your controller doesn't support hot-swapping? The kernel reports that the disk is gone, your array goes degraded, and even if you plug another disk in it doesn't get recognized. The next time you re-initialize the card (next reboot?) it's picked up and the rebuild can start (or happens automatically if you've got it set up right).
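(For reference, the manual version of that recovery with mdadm looks something like this; device names are just examples:)

cat /proc/mdstat                    # confirm the array is running degraded
mdadm /dev/md0 --fail /dev/sdc1     # mark the member failed, if md hasn't already done it
mdadm /dev/md0 --remove /dev/sdc1   # drop it from the array
mdadm /dev/md0 --add /dev/sdc1      # once the replacement is visible, add it and the rebuild starts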
I went through the same process. My advice would be: forget linux. Don't even go that way. It will give you software raid, and you could use something like EVMS that will let you dynamically grow the array, but don't bother. It will take you 10X as long to set everything up. You'll deal with hardware incompatibilities, Linux install problems (install Ubuntu Server because you want to use LAMP and, oh, you can't install a GUI that way because who would want a GUI on a server), administration issues, and every extra thing you try to do will be a massive ordeal.
If you don't know how to use Linux in the general case (no pun intended), you certainly don't know how to use Linux for a fileserver. But if you've got some expertise with command line stuff, you are a good candidate for Linux software raid. It may take longer to set up, but it's a good deal cheaper. When I built my array, the 8-port controller cost me $130; the cheapest comparable hardware raid controller was probably $500. I spent a few hours setting it up (even counting the fact that much of that effort would have happened with a hardware card too), so I effectively paid myself about $100/hr for my time. It was worth it to me to save a few hundred bucks, but if your time is too precious or you want the best performance possible, hardware raid is the place to look.

Installing Linux and getting the packages you want should be the least of your worries if you decide to go with an LSR solution. I mean that literally: if installing Linux worries you more than any other item on your list of concerns, try something else. I feel like Slartibartfast (of HHGTTG, not [H]) right now - "The late Dentarthurdent". It's meant as a threat, you see :p

cornfield: EVMS uses md arrays, but it's doing some intermediate mapping or something. If you don't need the pretty management interface EVMS gives you, using mdadm is a good deal faster in sequential transfers. If you want to split your array into volumes rather than one big filesystem, EVMS will make setting up and administering that much easier.
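(You can see the md layer underneath either way, for example:)

cat /proc/mdstat          # lists every md array, its members, and any rebuild/reshape progress
mdadm --detail /dev/md0   # per-array view: level, chunk size, and the state of each member disk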
 
Raid controllers don't exactly vanish overnight; they are on the market for an incredibly long time. Also, I have yet to see raid controllers fail at anywhere near the rate motherboards do. I also have yet to be unable to find a matching controller years down the road.

What do motherboard failures have to do with anything? If your motherboard fails, all you have to do is replace it. Software raid is not hardware specific. If your OS drive fails, just replace it and reinstall the OS.
 
Also, another point is that with hardware raid you're stuck with the same manufacturer, you can't transition from a small cheap card to a large expensive card, so you're stuck.

Just like you can't jump from one platform to the other with software raid, so the point is moot.

A proper software raid is always the best solution.

I can back this up with research upon research proving this claim incorrect. I can back this up with massive vendors and groups. Hardware raid is superior to software raid.

Makes me wonder why Compaq, HP, Dell, and all the large enterprise environments favor hardware raid each and every time when it comes to a solution... hmmmm

Here is some food for your brain from very well established vendors/sites:

http://www.raiddatarecovery.com/raid-data-recovery-library/hard_vs_soft_raid.pdf
http://graphics.adaptec.com/pdfs/raid_soft_v_hard.pdf
http://www.optimumrecovery.com/raid/index.html
http://www.midwestdatarecovery.com/software-vs-hardware-raid.html
http://www.3ware.com/products/pdf/HWvsSW_111804.pdf
http://www.zdnet.com.au/whitepaper/0,2000063328,22089246p-16001438q,00.htm
http://eval.veritas.com/downloads/whitepapers/vxvm_swhwraid_20417%20wp_L.pdf
http://www.storagereview.com/guide2000/ref/hdd/perf/raid/conf/ctrlInterfaces.html
http://www.pcguide.com/ref/hdd/perf/raid/conf/ctrl.htm
http://www.answers.com/topic/raid-2021


Discussion sites dealing with the issues:
http://techrepublic.com.com/5208-11184-0.html?forumID=39&threadID=198882
http://www.hardforum.com/showthread.php?p=1030415836
http://www.vbulletin.com/forum/showthread.php?p=1231794
http://www.sql-server-performance.com/forum/topic.asp?TOPIC_ID=5535
http://lists.debian.org/debian-isp/2002/01/msg00484.html


Basic principle: Hardware > Software > No Raid. It's better to have something than nothing. No single solution is the best solution; however, in general terms, hardware is more prevalent when it comes to protecting your data. For some applications software raid is the best solution, but those tend to be cases of financial constraints.
 
What do motherboard failures have to do with anything? If your motherboard fails, all you have to do is replace it. Software raid is not hardware specific. If your OS drive fails, just replace it and reinstall the OS.

It was a logical comparison made to indicate the failure rates of hardware based controllers... or the lack thereof.

Your logic is that you are using a motherboard's headers for your drive connectivity; therefore, if you upgrade the board, you are potentially out of a drive header or two. This is increasingly important now that things such as IDE headers are disappearing from motherboards.

Using the logic "but then you have to find an identical controller" is a weak statement.


Keep in mind, I'm still using my raid controllers from 1998, you can still buy them on ebay, and yet I moved them through many platforms and operating systems... try that with your software raid. As I said earlier, software raid is fine if the load is going to be low and/or it's more of an experiment, a fun project, or just for personal use; however, it's not a favorable replacement for hardware raid solutions.
 
cornfield: EVMS uses md arrays, but it's doing some intermediate mapping or something. If you don't need the pretty management interface EVMS gives you, using mdadm is a good deal faster in sequential transfers. If you want to split your array into volumes rather than one big filesystem, EVMS will make setting up and administering that much easier.

unhappy_mage: Thanks for the info. Is there a GUI that helps one manage mdadm better? It seems really low-level. I'm trying to move from windows 2003 to linux (Ubuntu).

I opted not to use linux a year ago because raid5 expansion was only possible using raidtools, a utility that was untested and had very little support. Do you know if Ubuntu 6.10 comes with the right kernel to support raid5 expansion with mdadm? I read somewhere that you may have to enable the right kernel options.

I'm about to throw some drives in and start messing around with it. I also read that raid1 (mirroring) uses the same algorithm as raid 5, and that raid1 is actually raid 5 with just 2 drives. Is it possible to take a drive with data on it, put it in a raid 1 array by adding a spare, and then add another drive to bump it up to raid 5, all without losing the data on the original drive?

I'm also about to install solaris 10 and see what zfs is all about. If anyone has any experience with solaris 10 and/or zfs, let us know what you think.
 
It was a logical comparison made to indicate the failure rates of hardware based controllers... or the lack thereof.

Your logic is that you are using a motherboard's headers for your drive connectivity; therefore, if you upgrade the board, you are potentially out of a drive header or two. This is increasingly important now that things such as IDE headers are disappearing from motherboards.

Using the logic "but then you have to find an identical controller" is a weak statement.

No one said anything about IDE controllers. I am using SATA, like most hardware raid controllers. Your statement, my friend, is illogical. Finding SATA or even IDE controllers, whether onboard the motherboard or on some sort of PCI expansion card, is way easier than finding a manufacturer-specific hardware raid controller. Comparing the two doesn't even make sense.

Simply put: if a hardware raid card fails and you can't find the exact same card, or a card compatible with your array made by the same manufacturer, then you are screwed. With software raid, if your motherboard fails you buy another one; if your SATA or IDE expansion card fails you buy another one. You will be able to rebuild your array regardless. If you didn't understand what I typed initially, you should have asked. Software raid is in no way hardware specific.
 
No one said anything about IDE controllers. I am using SATA, like most hardware raid controllers. Your statement, my friend, is illogical. Finding SATA or even IDE controllers, whether onboard the motherboard or on some sort of PCI expansion card, is way easier than finding a manufacturer-specific hardware raid controller. Comparing the two doesn't even make sense.
O'RLY?

i bought a motherboard that's got quite a few ports on it (8 sata, 3 ide), so i haven't needed to get a controller card (this is software raid).

I'd like to see your references to back up your claims. I'm managing approx. 15,000 servers on my end without yet having to go out on a limb to find controllers; that must say something. Interesting.


Simply put: if a hardware raid card fails and you can't find the exact same card, or a card compatible with your array made by the same manufacturer, then you are screwed. With software raid, if your motherboard fails you buy another one; if your SATA or IDE expansion card fails you buy another one. You will be able to rebuild your array regardless. If you didn't understand what I typed initially, you should have asked. Software raid is in no way hardware specific.

Tell me: how many times have you encountered not being able to find a raid controller? Or had one fail, for that matter?

Also, you seem to forget that most raid controllers support fault tolerance with battery backup, you can also have mirrored controllers. So please use some statistics to show me some facts here, because if I'm not mistaken you are just spewing out personal opinions.



I hope you can start using some facts and statistics to back up your claims about how superior software raid is, because large vendors, corporations, software companies, and even enthusiasts do not support your claims. You are going to need to provide some serious statistics.

Like I said, software solutions are not the absolute solution, but they offer a venue for data redundancy; they're also a cheap alternative to hardware raid, and they get the job done. But make no mistake, they are far from superior to hardware solutions. I'm not disputing your claim that you can use a bunch of cheap hardware for a raid solution, I am disputing the claim that it is superior. I do agree that you can make use of existing hardware and take a budgeted approach, and I also agree that you can use a ton of cheap controllers and boards and be set, so don't forget that. I'm simply disputing the hardware vs software approach, and I am disputing the claim that matching hardware controllers can't be found years later.
 
dude calm down. You're the one that said hardware was the end all raid solution. I never said Software raid was superior. I did say that for a home media solution with not many users software raid is more than adequate. By adequate I mean reliable (fewer points of failure) and with bearable read/write speeds. Speeds fast enough to stream/access your media. Music, documents or even HD video. If you are going to refer to any of my posts please do not make things up.

Having a high post count on a forum does not make you right by default. Where you put your drives, IDE or Sata, on board or with the aid of a controller card, is up to you. Hardware is not a restriction. If you need some stats to comprehend that then I'm sorry I cannot help you.
 
dude calm down.

I am calm...

You're the one that said hardware was the end all raid solution.

Please quote me on that? Where did I say that?

I never said Software raid was superior. I did say that for a home media solution with not many users software raid is more than adequate.
And I agree; however, the other posters do not agree with the first comment.

By adequate I mean reliable (fewer points of failure) and with bearable read/write speeds.
This I do not agree with. You might have removed one hardware point of failure, but you have also introduced other potential points of failure.

Speeds fast enough to stream/access your media. Music, documents or even HD video. If you are going to refer to any of my posts please do not make things up.

Never argued with this either.

Having a high post count on a forum does not make you right by default.
Where did I say this?

Where you put your drives, IDE or Sata, on board or with the aid of a controller card, is up to you. Hardware is not a restriction. If you need some stats to comprehend that then I'm sorry I cannot help you.
I provided plenty of stats for you to feast on.
 
unhappy_mage: Thanks for the info. Is there a GUI that helps one manage mdadm better? It seems really low-level. I'm trying to move from windows 2003 to linux (Ubuntu).

I opted not to use linux a year ago because raid5 expansion was only possible using raidtools, a utility that was untested and had very little support. Do you know if Ubuntu 6.10 comes with the right kernel to support raid5 expansion with mdadm? I read somewhere that you may have to enable the right kernel options.
If you use 2.6.16 or later (which will ship with Ubuntu, I'm pretty sure - Debian has it) you'll be all set to expand.
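(A quick way to check what you've got; the config file path below is the usual Debian/Ubuntu location:)

uname -r                                      # kernel version
grep -i 'raid[456]' /boot/config-$(uname -r)  # confirm the raid5/raid456 personality is built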
I'm about to throw some drives in and start messing around with it. I also read that raid1 (mirroring) uses the same algorithm as raid 5, and that raid1 is actually raid 5 with just 2 drives. Is it possible to take a drive with data on it, put it in a raid 1 array by adding a spare, and then add another drive to bump it up to raid 5, all without losing the data on the original drive?
Raid 1 can be thought of as raid 5 with two disks; with only one data block per stripe, the xor parity block is identical to the data block, so both disks end up holding the same data either way.
I'm also about to install solaris 10 and see what zfs is all about. If anyone has any experience with solaris 10 and/or zfs, let us know what you think.
Solaris 10 is neat. When I get some more disks to play with I'll tell you what I think of ZFS ;)

Hoo... kay. Begin period of not flaming, just respectfully disagreeing.
Keep in mind, I'm still using my raid controllers from 1998, you can still buy them on ebay, and yet I moved them through many platforms and operating systems... try that with your software raid.
I did, it worked ;) Even moved from a controller with internal IDE-to-SATA bridges to a native SATA card. Nothing like having your sata disks start out appearing as IDE and then appearing as SCSI... and the array not changing at all :D
I can back this up with research upon research proving this claim incorrect. I can back this up with massive vendors and groups. Hardware raid is superior to software raid.
Unless budget is a consideration, or you're using "simple" raid. Buying a $400 raid controller for a 2-drive raid 1 is probably cost ineffective, even if it's mission-critical (but then, you need two machines, not two disks!).
Makes me wonder why Compaq, HP, Dell, and all the large enterprise environments favor hardware raid each and every time when it comes to a solution... hmmmm
Except Sun. Their X4500 (which is quite the beast!) doesn't have any hardware raid controllers, just six 88SX6081s and ZFS support. (PS: Take a look at the block diagram for that sucker. Makes me salivate every time.)
Here is some food for your brain from very well established vendors/sites:
I'll complain about these one line at a time, in the same order as you linked them. Bet I can find something that makes each suspect as evidence of hardware superiority.

Date: 2001.
By a hardware raid maker, date 2002.
"If you have a server hiccup, you could have some data loss issues." applies to hardware too. Except there's more hardware to hiccup. Not that hardware controllers are terribly unreliable, just that the probabilities are just about equal in both cases.
Marketing brochure from a hardware raid maker.
Required registration, didn't read, sorry.
2002, tests done on 9gb disks. Relevant? I don't think so.
StorageReview said:
If you want to use any of the more esoteric RAID levels such as RAID 3 or RAID 1+0, you pretty much require hardware RAID, since support for these levels is usually not offered in software. If you need top performance while using a computation-intensive RAID level such as RAID 5, you also should consider a hardware solution pretty much "mandatory", because software RAID 5 can really hurt performance.
Not true of LSR. A single P3 can do ~2GB/s of raid 5 calculations, or ~800 MB/s of CPU calcs. In most cases, either you're doing less I/O than that, or you can dedicate a processor to parity calcs. Pentium 3 machines are cheaper than hardware raid, for most values of "Pentium 3 machine" and "hardware raid".
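(For what it's worth, the kernel benchmarks its xor/parity routines at boot, so you can check the numbers on your own box; the exact output varies by kernel version:)

dmesg | grep -i xor    # prints which xor routine the kernel picked and the MB/sec it measured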
2001. Has nothing been posted on this subject in the last 5 years?
Why not cite Wikipedia? That's where it came from... I don't see anything about hardware versus software raid that hasn't been discussed there.

Point taken. They sure recommended him good ;)
uOpt seems to be recommending software raid, doesn't he? Unless I'm drastically misreading...
Assertion without proof.
2004, "I had problems with it so it's EVIL, so now I bought hardware raid. Nobody gets fired for buying hardware raid." And did you read that article? It's from 2003, and not terribly relevant, but the software raid seems to outperform the 3ware card a good deal of the time.
This poster is comparing his experience with FC hardware raid against software IDE raid. Two independent variables, not one.

Also, you seem to forget that most raid controllers support fault tolerance with battery backup, you can also have mirrored controllers.
Correct. On this point, hardware raid wins.
I'm not disputing your claim that you can use a bunch of cheap hardware for a raid solution, I am disputing the claim that it is superior.
I agree. Software raid isn't superior, it's just more convenient and cheaper. But for probably 80% of the raid builds that happen on this forum, it's enough, and buying a raid controller designed to handle server-type loads is overkill and unnecessary.

So. Would I buy a hardware raid card? Sure. But not for pure capacity applications like home media storage, where speed isn't an issue, and most of the data is recoverable or unimportant. Unrecoverables need to be backed up anyways; hardware raid doesn't help there.
 