
Is there a better option at the same price point when it comes to a RAID controller? I'm not looking for a pure software RAID option as that will require custom kernels.

RAID on Linux is built in, no need for a custom kernel; all you need is mdadm.

(unsure what this site looks like with ads)
https://www.tecmint.com/create-raid-6-in-linux/
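
For what it's worth, that guide boils down to only a handful of commands, roughly like this (the drive names are placeholders, and on Ubuntu the config file lives at /etc/mdadm/mdadm.conf rather than CentOS's /etc/mdadm.conf):
Code:
# create a 4-drive RAID 6 array (adjust device names to your disks)
sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
cat /proc/mdstat                          # watch the initial sync
sudo mkfs.ext4 /dev/md0                   # then format and mount as usual
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # so it assembles at boot
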
there are probably simpler solutions than a bunch of Linux commands, like a GUI way of doing it, or buying an 8-port SAS card (they work with SATA drives, you just need two SAS-to-4x-SATA breakout cables) as they normally have RAID 6 support

the Highpoint RocketRaid 2840A seems good
just make sure you get the right SAS to 4-port SATA cable (avoid the ones that use a 4-pin power connector)

An SFF-8643 SAS-to-SATA breakout cable is what you're looking for (there are 3-4 different SAS plugs; 8643 is the one for that controller)
 
Picking up an inexpensive old rackmount server with a built-in backplane and RAID card is probably not a bad idea. I'm running a Dell PowerEdge R515 with a PERC h700 RAID card. It provides 8x 3.5" bays that support either SAS or SATA drives and it is very solid.
 
that also works

just getting one that is not a leaf blower is harder, as these servers tend to run in racks where sound is not a problem, so they tend to be tuned for higher fan speeds, or sometimes even 100% on HP rack servers if you don't use HP-branded SAS disks
 
That article uses CentOS, as do most articles about Linux RAID, since the CentOS kernel supports RAID by default.

That is unfortunately not the case with the mainline Ubuntu kernel that I use in my system.
Code:
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-51-generic x86_64)

mashie@IONE:~$ cat /proc/mdstat
Personalities :
unused devices: <none>
mashie@IONE:~$

RAID support is compiled as modules.

Code:
sudo modprobe raid456
cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
unused devices: <none>
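
If you want the personality loaded at every boot regardless (mdadm should pull it in automatically when it assembles an array), adding it to /etc/modules is enough; a minimal sketch:
Code:
echo raid456 | sudo tee -a /etc/modules    # load the RAID 4/5/6 personality at boot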
 
Grab a QNAP or Synology and be done with it :D That's what I ended up doing because I got tired of dicking around with OSes and storage crap; then attach another USB drive externally and do nightly backups of important files :D

Ya, I am no fun, just tired of crap breaking and spending hours fixing things when I do it all day at work. I got a Shield and it works great!

Also, Blues were not meant to be RAIDed due to their timeout behavior. And yes, RAID 5 is dead for spinning rust; RAID 10 or RAID 6 for more resiliency.
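
In case anyone wants to check their own drives: assuming the timeouts being referred to are the SCT Error Recovery Control setting (what WD brands as TLER), smartctl can read it, e.g.:
Code:
sudo smartctl -l scterc /dev/sda    # /dev/sda is a placeholder; desktop drives like the Blues typically report it as unsupported or disabled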
 
Is there something wrong with mdadm?

I have used it at work for the better part of two decades, although I am moving (and have already moved most of my 100 TB) to ZFS and raidz3 arrays.
 
Until the RAIDZ2 arrays can be expanded one drive at a time I will not go down that route.

Expansion is an issue, but IMO bitrot is also an issue, so I can deal with the lack of expansion. Everything should be backed up anyway, so expansion is really a non-issue to me; when I need to upgrade I'll just kill the array, buy new drives, then transfer the data back from backups.
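
(For the md side of that comparison: growing a RAID 6 array by a single disk is supported out of the box. A rough sketch, with the device and array names as placeholders:)
Code:
sudo mdadm --add /dev/md0 /dev/sdh            # add the new disk as a spare
sudo mdadm --grow /dev/md0 --raid-devices=8 --backup-file=/root/md0-grow.bak
sudo resize2fs /dev/md0                       # once the reshape finishes, grow the filesystem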
 
From a CPU load point of view I'm impressed: only 30% of a single core is used while 2 x 200MB/s is being copied to the array.

Software RAID on Linux has not been CPU intensive for probably 15 years. I know from experience at work.
 
On Windows, software RAID 5 or RAID 6 used to be very CPU intensive for at least half of that time.
 
Grab a QNAP or Synology and be done with it :D That's what I ended up doing because I got tired of dicking around with OSes and storage crap; then attach another USB drive externally and do nightly backups of important files :D

Ya, I am no fun, just tired of crap breaking and spending hours fixing things when I do it all day at work. I got a Shield and it works great!

Also, Blues were not meant to be RAIDed due to their timeout behavior. And yes, RAID 5 is dead for spinning rust; RAID 10 or RAID 6 for more resiliency.
I like the Buffalo systems. They are pretty solid backup units.
 
I like the Buffalo systems. They are pretty solid backup units.

I had a Buffalo and hated it! Just the cheap-feeling plastic everything, and I was always having to reboot it, and performance was "meh", but that was about 6 years ago that I had one?
 
I had a Buffalo and hated it! Just the cheap-feeling plastic everything, and I was always having to reboot it, and performance was "meh", but that was about 6 years ago that I had one?
I can see that; I was not a fan of their home and small business line, but their enterprise units were and still are beasts. Not cheap though. I wish they offered models with SFP+, as I don't have anything with a 10G copper port.
 
After nearly losing 3.6TB worth of data last night I'm looking to move from MHDDFS to a RAID 6 solution for my storage.

Thankfully I got the drive that took a dump working in read-only mode long enough to copy everything off it, but it was quite a wake-up call. Many years ago I was running RAID 5 in a previous system, but that appears to have gone out of fashion now with larger arrays.

So here I am, now in need of a RAID 6 storage solution that can run on Ubuntu 18.04 while still allowing growth of the array one drive at a time as and when needed. The storage is mainly UHD rips that are streamed to Nvidia Shields around the house.

The current MHDDFS array consists of 2 x 10TB IronWolfs and 5 x 4TB WD Blues; it was one of the WDs that died, so they will all be retired. Instead I plan on getting another 4 x 10TB IronWolfs. The end result is the same ~40TB of usable storage but way more robust in case of drive issues.
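(That works out as 6 x 10TB in RAID 6, i.e. (6 - 2) x 10TB = 40TB usable.)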

I have been eyeing up the Highpoint RocketRaid 2840A which looks decent but there are hardly any reviews to be found.

Is there a better option at the same price point when it comes to a RAID controller? I'm not looking for a pure software RAID option as that will require custom kernels.
RAID 6 isn't too highly recommended either, although it's still better than 5, which is much better than 0... most are using a hybrid nowadays, like RAID 10 (1+0), which gives redundancy and is more performant than RAID 6, but with more than 4 drives RAID 6 nets you more storage. Knowing how many drives, how big they are, and how often you back up would help with suggestions. I personally run RAID 0 and back up the files I don't want to lose.
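(Quick capacity check: with n equal drives, RAID 10 gives you n/2 drives' worth of space and RAID 6 gives n-2, so they tie at 4 drives and RAID 6 pulls ahead for anything bigger.)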
 
Raid 10 can actually be more dangerous than raid6 depending on what drives fail.

The concern with RAID 6 is that after one drive fails, the parity-intensive rebuild process can push other drives to fail. With mirrors, the rebuild can run at drive speed. So they have different use cases.

Could always do three-way mirrors ;)
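
(With md that's just a RAID 1 across three devices; a quick sketch with placeholder device names:)
Code:
sudo mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd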
 
Raid 10 can actually be more dangerous than raid6 depending on what drives fail.

None of them are perfect. RAID 10 tends to rebuild much faster, making the chance of a second failure less likely (though still not impossible).
 
I ended up using software RAID6 (mdadm) with 7x 10TB drives using the SATA ports on the motherboard. There are enough spare ports for a 14 disk array if needed in the future.

RAID 6 is working fine; in case of a rebuild I still have one drive of resiliency for the 18h it will take, as I will simply leave it alone to rebuild. This is, after all, a home server and not production.

Important stuff is backed up to the cloud. The data on the array just needs to survive a dead disk or two, which it now will.
Sounds good. How is the software RAID 6 working? It has to generate parity twice for everything; how does that affect your speeds? It's much safer than no RAID.
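(If you want to keep an eye on it, the kernel reports rebuild/resync progress and per-array state; /dev/md0 below is a placeholder name:)
Code:
cat /proc/mdstat                 # shows sync/reshape progress and an ETA
sudo mdadm --detail /dev/md0     # health, layout and per-disk state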
 
Dual parity calculation is nothing on a modern x86 CPU. I have software raid arrays (dual or even triple parity) that can read or write large files at over 1GB/s with hard drives.
 
Seems a bit slow for 7 drives, but without knowing specs or the test it's hard to really judge. Still much better than a single drive!!
 
Dual parity calculation is nothing on a modern x86 CPU. I have software raid arrays (dual or even triple parity) that can read or write large files at over 1GB/s with hard drives.
Just because it can doesn't mean it isn't working to do so. If you're doing processing of some sort on said data and you're using your cycles for RAID, it is going to be slower to some extent. I have a hardware RAID card that supports RAID 5 (and has a backup battery for power loss), but I'm running it in RAID 0... so no redundancy, but OK for power loss.
 
Just because it can doesn't mean it isn't working to do so. If you're doing processing of some sort on said data and you're using your cycles for RAID, it is going to be slower to some extent.

I get what you're saying, but we're beyond this point now with spinning disks. Parity calculation isn't going to add up to more than a shrug for modern desktop CPUs. Pointedly, most NAS devices that provide single and dual parity and can house more storage than most consumers and small businesses would actually use themselves run off of mid-range tablet SoCs. At best, they have some form of x86 CPU in the form of an Atom or Jaguar, and those devices are usually spec'd as such for the purposes of running other server processes, media transcoding to weaker devices or streaming to the web, and for high-performance disk encryption.

A potato can do parity calculations these days.
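
(If you're curious what your own potato manages, the kernel's raid6 code benchmarks its parity routines when the module loads and logs the results, so something like this shows the measured throughput:)
Code:
dmesg | grep -i raid6    # per-algorithm MB/s figures from when the module was loaded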
 
I get what you're saying, but we're beyond this point now with spinning disks. Parity calculation isn't going to add up to more than a shrug for modern desktop CPUs. Pointedly, most NAS devices that provide single and dual parity and can house more storage than most consumers and small businesses would actually use themselves run off of mid-range tablet SoCs. At best, they have some form of x86 CPU in the form of an Atom or Jaguar, and those devices are usually spec'd as such for the purposes of running other server processes, media transcoding to weaker devices or streaming to the web, and for high-performance disk encryption.

A potato can do parity calculations these days.
That's why I was asking. I assumed it wouldn't be too difficult nowadays, but I haven't had a chance or reason to test it, which is why I was curious. If it's < 5% of a single core, then it's nothing. If it's 50%, well, that's not nothing :). Sounds like it's closer to nothing on newer systems. Last time I ran RAID without hardware was on my Duron 800MHz... so it wasn't nothing.
 
That's why I was asking. I assumed it wouldn't be too difficult nowadays, but I haven't had a chance or reason to test it, which is why I was curious. If it's < 5% of a single core, then it's nothing. If it's 50%, well, that's not nothing :). Sounds like it's closer to nothing on newer systems. Last time I ran RAID without hardware was on my Duron 800MHz... so it wasn't nothing.

It really has come a long way. Higher clock speeds, higher IPC, more SIMD instructions (SSE, AVX, and so on), and higher throughput and lower latency on the bus side all work to reduce the overhead of parity calculation to near zero.

What does eat CPU and memory resources is the overhead from modern filesystems; NTFS to a degree, but also Microsoft's ReFS as well as stuff like BTRFS and ZFS which are designed to ensure data integrity at a software level that hardware controllers cannot match.

And ZFS is the one to watch. The developers are now targeting the latest Linux kernels, and despite a few bumps due to the licensing of the code, which originally came from Sun, they've made great progress integrating all the features you'd ever want. With spinning disks there is a minor performance hit, but the drives themselves are still the limiting factor.
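
(For reference, a raidz2 pool is basically a one-liner to set up these days; the package name is for Ubuntu 18.04, and the pool and device names are placeholders:)
Code:
sudo apt install zfsutils-linux
sudo zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
sudo zpool status tank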
 
I'll have to keep an eye on ZFS. Like I said, my RAID controller has battery backup to ensure it doesn't stop mid-write, plus onboard cache and hardware parity, so I haven't concerned myself too much with a lot of things, and since it's just a home server for play, data integrity isn't the highest concern anyway.
 
Mostly, you'd attach a battery backup to the system and have it shut down if power goes out.

This allows for a 'clean' shutdown, not just of the data in flight to the drives from the controller, but also network-wide; connections are closed properly, caches are flushed properly, data is checked properly and closed up, etc.

But I do agree that it's overkill for a consumer, non-commercial scenario :).
 