Best price/performance PCI RAID for linux?

Vyedmic

Hi,
I would like to build a small backup system based on an older ECS 741GX-M mobo with an Athlon XP. I'd like to put a hardware RAID controller in it, and that's where I need help. The most important thing for me is security/reliability; I don't need the greatest speeds.

Is it better to go the IDE or SATA way? Is it secure enough to rely on software RAID?

I found the 3ware 8006-2LP, which I quite like, but there were some reports of data corruption.

Thanks for the help
 
If speed isn't a concern, you can get any SATA or IDE controller and build a RAID array using md software RAID under Linux. md is very robust: you can move from motherboard to motherboard, reinstall, etc., and it can pick up and reassemble the array. It also supports online expansion and the like with a newish kernel.
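
To give you an idea, the whole thing is a couple of commands. A minimal sketch, assuming a two-disk RAID 1 on /dev/sdb and /dev/sdc (device names are just examples, adjust to your setup):
Code:
# create a two-disk RAID 1 out of the example devices
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# after moving the disks to another box (or a reinstall),
# md can find and reassemble the array from the metadata
# stored on the drives themselves:
mdadm --assemble --scan
The array superblock lives on the member disks, which is why it survives a motherboard swap or a reinstall.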
 
As stated, Linux software RAID is very robust, and actually surprisingly fast. It'd be my recommendation, as you can move it to another server easily, etc.
 
Just an FYI, with six 500GB SATA WD RE2s in a RAID 5 via mdadm:
Code:
$ hdparm -Tt /dev/md0

/dev/md0:
 Timing cached reads:   1314 MB in  2.00 seconds = 657.05 MB/sec
 Timing buffered disk reads:  452 MB in  3.01 seconds = 150.33 MB/sec
 
Rakinos, could you shed some light on what controllers you're using, as well as any optimizations you might have done to hit those speeds?
 
Three Western Digital 640s in RAID5:

Code:
# hdparm -t /dev/md0

/dev/md0:
 Timing buffered disk reads:  372 MB in  3.01 seconds = 123.77 MB/sec

Nothing special is required to get reasonable performance from the md driver. These drives are plugged into an ICH9R, for example.

[edit] Here's a sequential write test to the array:
Code:
# mkfs.xfs /dev/md0
# mount /dev/md0 /mnt/tmp
# dd if=/dev/zero of=/mnt/tmp/testfile bs=10M count=1000
1000+0 records in
1000+0 records out
10485760000 bytes (10 GB) copied, 89.1485 s, 118 MB/s
Pretty good performance, I'd say.
 
If you are trying to compare IDE and SATA: SATA has a lot of benefits (hot swap, and it seems to be the direction everyone is going). SATA might not be available on older gear, though, so you would have to drop in a SATA card. A hardware RAID card does not come cheap, unless you mean a card that just has more ports. I've replaced drives in my Areca 1220 and it was pretty straightforward.

My software array:
A8N SLI Premium (8x onboard SATA), with 8x 320GB WDs in RAID 5 on JFS:
Code:
llama raid # dd if=/dev/zero of=10gb bs=10M count=1000
1000+0 records in
1000+0 records out
10485760000 bytes (10 GB) copied, 89.8936 s, 117 MB/s

llama raid # hdparm -tT /dev/md0
/dev/md0:
 Timing cached reads:   1592 MB in  2.00 seconds = 795.43 MB/sec
 Timing buffered disk reads:  396 MB in  3.01 seconds = 131.77 MB/sec

llama raid # df -h /mnt/raid/
Filesystem            Size  Used Avail Use% Mounted on
/dev/md/0             2.1T  1.2T  885G  58% /mnt/raid
 
Rakinos, could you shed some light on what controllers you're using, as well as any optimizations you might have done to hit those speeds?

As I mentioned earlier, I'm just using mdadm. As for controllers, I'm using four SATA connectors on my motherboard, and the cheapest PCI-E SATA card that was available at Newegg. Nothing special as far as optimization.
 
Thanks for the replies. So md is the way to go then. As I said, I'm not that interested in great speeds - I won't be using it for any streaming, purely backup. But it can't hurt to have decent read performance. I was thinking about WD5000AAKB drives. What do you think of them?
 
I don't have good experience with Barracudas. One b*tch died on me. WD is running great so far. As for SATA, I may think about it; PCI SATA controllers are not that expensive, so yeah, either the WD5000AAKS or the KB. Again, speed is not as important as reliability in this case. Because of this I don't want to go with WD RE drives either; their error correction seems somewhat limited. Can someone please explain whether it's better to go with RE and run the array at all times even with possible errors, or get SE and replace the drive when it develops an error? Thinking about RAID 1 or RAID 10.
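
From what I've read, mdadm does RAID 10 natively too, so setup would be something like this (just a sketch, device names are examples):
Code:
# four example drives, mirrored pairs striped together
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde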
 
Is this a DAS backup, or are you putting this on the network? At 100Mbit you'll cap out at ~10 megabytes per second, and on gigabit at about 100ish megabytes per second (assuming the protocol supports it). As the numbers above show, both of the posted RAID configs exceed the capacity of gigabit networking. So if you want to build a configuration like those, you'd probably be better off worrying about other bottlenecks if you are that concerned about read speeds.
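
If you want to sanity-check the network leg itself, iperf is a quick way to measure raw TCP throughput between two boxes (hostname below is made up):
Code:
# on the backup server, run iperf in listen mode:
iperf -s

# on a client, run a TCP throughput test against it
# (defaults to 10 seconds):
iperf -c backupserver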

If you are aiming for a reliable/robust solution, don't forget the other layers of the problem. RAID is just one piece; think about operating system patching, the power supply (both the actual PSU and a UPS), alerting (so you know when a drive has died or is on the fritz, or a scheduled backup failed), networking (bonding, core network), etc.
 
Yes, I'm putting it on the network. And read speed is not important at all. I was thinking about either Gentoo or Ubuntu; which would you recommend? Also, are there any backup solutions for Linux with scheduling and error reporting, or do I have to get dirty with some scripting?
 
Gentoo offers a lot of flexibility but requires a lot more time to stand up. Ubuntu pretty much works "out of the box" but will probably have a lot more bloat. All Linux systems should come with cron for scheduling, and you could easily script something to run every week, pulling your critical data across in one fell swoop; see the sketch below. There may even be tools that automate this. You could also export SMB from the backup server and do the scheduled backups on your workstations.
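
As a starting point, something like this dropped into cron.weekly would do a basic pull with a failure mail (paths and hostname are hypothetical, adjust to taste):
Code:
#!/bin/sh
# e.g. saved as /etc/cron.weekly/backup-pull and made executable.
# Pull critical data from a workstation over ssh; -a preserves
# permissions/times, --delete mirrors deletions on the backup side.
rsync -a --delete workstation:/home/ /backup/workstation-home/ \
  || echo "rsync exited non-zero" | mail -s "backup FAILED" root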
 
Thanks for the info, hokatichenci. Any experience with smartmontools? I would probably use them for scheduled error checking. Or is there something better I can use?
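
From the smartd man page it looks like scheduled self-tests plus mail-on-error is just a couple of lines in /etc/smartd.conf, something like this (device names are examples):
Code:
# -a: monitor all SMART attributes and overall health
# -s: short self-test daily at 02:00, long test Saturdays at 03:00
# -m: mail root when something trips
/dev/sda -a -s (S/../.././02|L/../../6/03) -m root
/dev/sdb -a -s (S/../.././02|L/../../6/03) -m root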
 