Looking for a RAID 6 solution to easily grow under Ubuntu 18.04

Discussion in 'SSDs & Data Storage' started by mashie, Jun 13, 2019.

  1. mashie

    mashie Mawd Gawd

    Messages:
    4,180
    Joined:
    Oct 25, 2000
    After nearly losing 3.6TB worth of data last night I'm looking to move from MHDDFS to a RAID 6 solution for my storage.

    Thankfully I got the drive that took a dump working in read-only long enough to copy everything off it, but it was quite a wake-up call. Many years ago I was running RAID 5 in a previous system, but that appears to have gone out of fashion now with larger arrays.

    So here I am, now in need of a RAID 6 storage solution that can run on Ubuntu 18.04 while still allowing growth of the array one drive at a time as and when needed. The storage is mainly UHD rips that are streamed to Nvidia Shields around the house.

    The current MHDDFS array consists of 2 x 10TB IronWolfs and 5 x 4TB WD Blues; it was one of the WDs that died, so they will all be retired. Instead I plan on getting another 4 x 10TB IronWolfs. The end result is the same ~40TB of usable storage but way more robust in case of drive issues.

    I have been eyeing up the Highpoint RocketRaid 2840A which looks decent but there are hardly any reviews to be found.

    Is there a better option at the same price point when it comes to a RAID controller? I'm not looking for a pure software RAID option as that will require custom kernels.
     
  2. Luke M

    Luke M Limp Gawd

    Messages:
    365
    Joined:
    Apr 20, 2016
    Why would you need a custom kernel?
     
    mwarps likes this.
  3. likeman

    likeman Gawd

    Messages:
    606
    Joined:
    Aug 17, 2011
    RAID on Linux is built in, no need for a custom kernel; all you need is mdadm.

    (unsure what this site looks like with ads)
    https://www.tecmint.com/create-raid-6-in-linux/
    there are probably simpler solutions than a bunch of Linux commands, like a GUI way of doing it, or buy an 8-port SAS card (they work with SATA drives, you just need 2x SAS-to-4x-SATA cables) as they normally have RAID 6 support
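
    If you do go the mdadm route, creating the array is roughly this sketch (device names, the disk count and the ext4 filesystem are just examples, adjust for your drives):
    Code:
    # create a 4-disk RAID 6 array (example device names)
    sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    
    # record the array so it assembles on boot (Ubuntu/Debian config location)
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u
    
    # put a filesystem on it and mount it
    sudo mkfs.ext4 /dev/md0
    sudo mount /dev/md0 /mnt/storage
    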

    the Highpoint RocketRaid 2840A seems good
    just make sure you get the right SAS to 4-port SATA cable (avoid the ones that use a 4-pin power connector)

    a SAS to SATA cable with SFF-8643 connectors is what you're looking for (there are 3-4 different SAS plugs; 8643 is the one for that controller)
     
    Last edited: Jun 13, 2019
  4. mvmiller12

    mvmiller12 Gawd

    Messages:
    681
    Joined:
    Aug 7, 2011
    Picking up an inexpensive old rackmount server with a built-in backplane and RAID card is probably not a bad idea. I'm running a Dell PowerEdge R515 with a PERC H700 RAID card. It provides 8x 3.5" bays that support either SAS or SATA drives and it is very solid.
     
  5. likeman

    likeman Gawd

    Messages:
    606
    Joined:
    Aug 17, 2011
    that also works

    just getting one that is not a leaf blower is harder, as these servers tend to run in racks where sound is not a problem, so they tend to be tuned for higher fan speeds (or sometimes 100% on HP rack servers if you don't use HP-branded SAS disks)
     
  6. mashie

    mashie Mawd Gawd

    Messages:
    4,180
    Joined:
    Oct 25, 2000
    That article uses CentOS, as do most articles about Linux RAID, since the CentOS kernel supports RAID by default.

    That is unfortunately not the case with the mainline Ubuntu kernel that I use in my system.
    Code:
    Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-51-generic x86_64)
    
    mashie@IONE:~$ cat /proc/mdstat
    Personalities : 
    unused devices: <none>
    mashie@IONE:~$ 
    So right now I'm leaning towards the 2840A using these cables:
    https://www.ebuyer.com/790088-startech-internal-mini-sas-to-sata-cable-1m-black-sas43sat1m

    I don't have any plans of getting a dedicated server at this time as my Lian-Li PC-777b case from 2005 is doing just fine as an all-in-one solution for workstation/server duties.
    By default it can hold 6 drives, and for future expansion I will probably just add some 3x5.25" to 5x3.5" bay converters for a total of 16 drives.
     
  7. Luke M

    Luke M Limp Gawd

    Messages:
    365
    Joined:
    Apr 20, 2016
    RAID support is compiled as modules.

    Code:
    sudo modprobe raid456
    cat /proc/mdstat
    
    Personalities : [raid6] [raid5] [raid4]
    unused devices: <none>
    
     
    mashie likes this.
  8. mashie

    mashie Mawd Gawd

    Messages:
    4,180
    Joined:
    Oct 25, 2000
    Good question, I must have read a badly written guide that said a custom kernel was needed. Got the module added (and it survives a reboot :) )
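
    For reference, listing the module in /etc/modules is one way to have it load on every boot (just a sketch, there are other places like /etc/modules-load.d/ that work too):
    Code:
    # load the raid456 module automatically at boot (Ubuntu/Debian)
    echo raid456 | sudo tee -a /etc/modules
    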

    That saves me getting a controller as the motherboard already has 12 SATA ports, thanks.
     
  9. MrGuvernment

    MrGuvernment [H]ard as it Gets

    Messages:
    19,163
    Joined:
    Aug 3, 2004
    Grab a QNAP or Synology and be done with it :D That's what I ended up doing because I got tired of dicking around with OSes and storage crap, then attach another USB drive externally and do nightly backups of important files :D

    Yeah, I am no fun, just tired of crap breaking and spending hours fixing things when I do it all day at work. I got a Shield and it works great!

    Also, Blues were not meant to be RAIDed due to their error recovery timeouts. And yes, RAID 5 is dead for spinning rust; RAID 10 or RAID 6 for more resiliency.
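
    If you want to check whether a drive even supports a short error recovery timeout (the TLER/ERC setting NAS drives expose), smartctl can query and set it, something like this (drive name is just an example):
    Code:
    # query the drive's error recovery control (ERC/TLER) timeouts
    sudo smartctl -l scterc /dev/sda
    
    # try to set read/write recovery timeouts to 7 seconds (units are deciseconds)
    sudo smartctl -l scterc,70,70 /dev/sda
    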
     
    Lakados likes this.
  10. Rifter0876

    Rifter0876 [H]Lite

    Messages:
    108
    Joined:
    Nov 1, 2017
  11. mashie

    mashie Mawd Gawd

    Messages:
    4,180
    Joined:
    Oct 25, 2000
    I never did RAID with the Blues; they were just bundled together using mhddfs to act as a single drive, thankfully, so no data was lost. I did RAID 5 on some old 1TB drives a long time ago though.

    I have no interest in a QNAP or similar; the workstation at home is on 24/7 anyway and has acted as the file server since 2015, with a drive or two added per year as required.
     
  12. mashie

    mashie Mawd Gawd

    Messages:
    4,180
    Joined:
    Oct 25, 2000
    Until RAIDZ2 arrays can be expanded one drive at a time I will not go down that route.
     
  13. Brian_B

    Brian_B 2[H]4U

    Messages:
    2,733
    Joined:
    Mar 23, 2012
    Is there something wrong with mdadm?
     
  14. mashie

    mashie Mawd Gawd

    Messages:
    4,180
    Joined:
    Oct 25, 2000
    Nope, that is the route I will go down next week when I get this all set up.

    I have bought 5 x 10TB IronWolfs and will add them to the two I already have for a 70TB RAID 6 array with ~50TB usable.
     
    Brian_B likes this.
  15. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,371
    Joined:
    Nov 19, 2008
    I have used it at work for the better part of two decades, although I am moving (and have already moved most of my 100TB) to ZFS and RAID-Z3 arrays.
     
  16. Rifter0876

    Rifter0876 [H]Lite

    Messages:
    108
    Joined:
    Nov 1, 2017
    Expansion is an issue, but IMO bitrot is also an issue, so I can deal with the lack of expansion. Everything should be backed up anyway, so really expansion is a non-issue to me; when I need to upgrade I'll just kill the array, buy new drives, then transfer the data back from backups.
     
  17. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,371
    Joined:
    Nov 19, 2008
    BTW, RAID-Z expansion is in the alpha stage in the GitHub sources.

    Matthew Ahrens has a pull request here:

    https://github.com/zfsonlinux/zfs/pull/8853

    Hopefully that means we will have working expansion next year.
     
    mikeo likes this.
  18. Rifter0876

    Rifter0876 [H]Lite

    Messages:
    108
    Joined:
    Nov 1, 2017
    This is great news
     
  19. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,371
    Joined:
    Nov 19, 2008
    mashie likes this.
  20. mashie

    mashie Mawd Gawd

    Messages:
    4,180
    Joined:
    Oct 25, 2000
    Update.

    I'm slowly getting this new RAID6 array up and running.

    Initially I started with a 4x 10TB RAID6 array. Build time 17h.

    Then I expanded it with one more drive as a test. Expansion time 37h.

    Now I'm copying the data across to the array from the two original 10TB drives that were 95% full. ETA 12h.

    With the data migrated, those two drives will be used to do one more expansion to the full 7-disk setup. Hopefully that expansion won't take more than 37h as well, but we shall see.
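
    For anyone curious, the expansion step looks roughly like this with mdadm (device names and the disk count are just examples, and the resize2fs line assumes ext4, which might not match what you use):
    Code:
    # add the new drive as a spare, then grow the array onto it
    sudo mdadm --add /dev/md0 /dev/sdh
    sudo mdadm --grow /dev/md0 --raid-devices=6
    
    # watch the reshape progress
    watch cat /proc/mdstat
    
    # optionally raise the reshape speed floor (KB/s) if the box is otherwise idle
    echo 100000 | sudo tee /proc/sys/dev/raid/speed_limit_min
    
    # once the reshape finishes, grow the filesystem (ext4 example)
    sudo resize2fs /dev/md0
    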


    From a CPU load perspective I'm impressed: only 30% of a single core is used while 2 x 200MB/s is being copied to the array.
     
  21. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,371
    Joined:
    Nov 19, 2008
    Software RAID on Linux has not been CPU intensive for probably 15 years. I know from experience at work.
     
    mashie likes this.
  22. mashie

    mashie Mawd Gawd

    Messages:
    4,180
    Joined:
    Oct 25, 2000
    To be fair, it probably has been that long since I last looked at software RAID.

    I'm glad I didn't spend any money on a RAID controller and put that towards an extra drive instead.
     
  23. drescherjm

    drescherjm [H]ardForum Junkie

    Messages:
    14,371
    Joined:
    Nov 19, 2008
    On Windows, software RAID 5 or RAID 6 used to be very CPU intensive for at least half of that time.
     
  24. mashie

    mashie Mawd Gawd

    Messages:
    4,180
    Joined:
    Oct 25, 2000
    All done and dusted now.

    Copying an 80GB file from NVMe to this array reaches around 600MB/s.
     
    IdiotInCharge likes this.
  25. Lakados

    Lakados [H]ard|Gawd

    Messages:
    1,490
    Joined:
    Feb 3, 2014
    I like the Buffalo systems. They are pretty solid backup units.
     
  26. MrGuvernment

    MrGuvernment [H]ard as it Gets

    Messages:
    19,163
    Joined:
    Aug 3, 2004
    I had a Buffalo and hated it! Just from the cheap-feeling plastic everywhere, and I was always having to reboot it, and performance was "meh", but that was about 6 years ago that I had one.
     
    Lakados likes this.
  27. Lakados

    Lakados [H]ard|Gawd

    Messages:
    1,490
    Joined:
    Feb 3, 2014
    I can see that. I was not a fan of their home and small business line, but their enterprise units were and still are beasts. Not cheap though. I wish they offered models with SFP+; I don't have anything that has a 10G copper port.