Motherboard RAID 1

Discussion in 'SSDs & Data Storage' started by mda, Oct 21, 2019.

  1. mda

    mda [H]ard|Gawd

    Messages:
    1,682
    Joined:
    Mar 23, 2011
    Hi All,

    I'd like to ask if anyone has had experience migrating motherboard RAID 1 between consumer-grade chipsets. No intention of running RAID 0/10 -- just a plain RAID 1 mirror.

    I'm planning to build a SOHO server with motherboard RAID 1 -- either the SATA kind or the M.2 kind.

    Looking at either some X470/570/B450/550 boards or some Intel Z370/Z390 boards for this purpose.

    I'm wondering if anyone has had any experience moving drives to a different platform just in case the board eventually fails.

    I can and probably will be testing the SATA RAID 1 migration options, assuming I can get a few old/new motherboards with RAID 1 to test. I just don't have many Intel motherboards with 2 M.2 slots to test with. I have a few AMD B450/X470s -- though moving from one X470 to another probably doesn't count...

    Thanks!
     
    Last edited: Oct 21, 2019
  2. Brian_B

    Brian_B 2[H]4U

    Messages:
    3,356
    Joined:
    Mar 23, 2012
    I tried it once, like 20 years ago, and ... it failed spectacularly. I've run software RAID ever since.

    That said, I'm sure a lot has changed in 20 years.
     
  3. mda

    mda [H]ard|Gawd

    Messages:
    1,682
    Joined:
    Mar 23, 2011
    Software RAID is my first choice.

    I'm trying to get software RAID running on the OS partition on CentOS 7. Unfortunately, I can't get it to mirror the boot partition -- which I could do with CentOS 5/6.

    The machine will not boot CentOS 7 when the drive with the boot partition dies (or is removed). The data is intact on the 2nd drive, but that's almost worthless for uptime since the computer fails to boot from drive 2 alone.

    Funny thing is that Windows Server 2012 was easier to set up in this respect than CentOS 7.

    I'm only considering motherboard RAID because decent MegaRAID/LSI cards are very hard and expensive to obtain in my country outside of purchasing them with servers.

    Found my old thread on the CentOS forums after a quick Google search on bootable RAID 1. Still no solution after 5 years...
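
    From what I've read, the usual suspect is that the installer only writes GRUB to the first disk. Something like this should in theory make the second disk bootable on its own -- a minimal sketch, assuming BIOS boot and an installer-created RAID1 /boot on /dev/md0 (device names are examples, untested by me):

        cat /proc/mdstat          # confirm the mirror is healthy
        mdadm --detail /dev/md0   # both members should show as active
        grub2-install /dev/sda    # put the bootloader on the first disk...
        grub2-install /dev/sdb    # ...and on the second, so either can boot alone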
     
    Last edited: Oct 21, 2019
  4. Brian_B

    Brian_B 2[H]4U

    Messages:
    3,356
    Joined:
    Mar 23, 2012
    I've always been advised against RAIDing your boot volume, no matter what OS you choose.
     
  5. mda

    mda [H]ard|Gawd

    Messages:
    1,682
    Joined:
    Mar 23, 2011
    That's new to me. What's the rationale behind that, if any? And in that case, what's a good way to keep up the uptime on server OSes?
     
  6. Brian_B

    Brian_B 2[H]4U

    Messages:
    3,356
    Joined:
    Mar 23, 2012
    For both Linux and Windows -- for me -- the boot drive is always a fast single drive.

    If that drive goes bad, yeah, you have downtime. But it's also pretty lean -- just the OS and as few other files as you can get away with. If it dies, just do a clean re-install and then re-point the data directories in fstab to the correct arrays. That also makes it very easy to move between major OS revisions, and even to switch distros entirely.
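
    The fstab bit is one line per mount point. A sketch of what an entry might look like (the UUID is hypothetical -- pull the real one from blkid):

        # /etc/fstab -- point /home at the data array by UUID
        # (hypothetical UUID; find yours with: blkid /dev/md0)
        UUID=f0e1d2c3-4b5a-4978-8675-309a1b2c3d4e  /home  ext4  defaults  0 2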

    All your user files (/home, \Users, etc.) go on your RAID array, or split across multiple arrays or shares, however you see best to distribute your data.

    I'm sure high-availability computing has ways to boot from a mirrored RAID, but in the consumer space it's way more trouble than it's worth. And even in HA computing, that's probably better accomplished with VMs in failover mode.
     
  7. mda

    mda [H]ard|Gawd

    Messages:
    1,682
    Joined:
    Mar 23, 2011
    I'll consider this. I'll need to find ways to quickly reinstall the servers, etc. One downside, though: since we run a lean organization, a bootable RAID would buy me [the only one here who's even half knowledgeable about this] more time to fix the problem without having to be on call.

    Thanks!
     
  8. dbwillis

    dbwillis [H]ardness Supreme

    Messages:
    7,683
    Joined:
    Jul 9, 2002
    Why build a SOHO server with consumer hardware and no real RAID?
    A hardware RAID card is fairly cheap for just a mirror, and you can move the card to another system if need be without having to mess with resyncing/recovering the array or reconfiguring the BIOS across different brands.
     
  9. Denpepe

    Denpepe [H]ard|Gawd

    Messages:
    1,251
    Joined:
    Oct 26, 2015
    I can tell you I've moved my storage RAID 0 drives between multiple systems (all Intel, though) -- I just had to rebuild the RAID on the new motherboard and there were no issues whatsoever. I've never used RAID 1, so no idea if that would be different.

    I did have the data I didn't want to lose backed up, just in case.
     
  10. mda

    mda [H]ard|Gawd

    Messages:
    1,682
    Joined:
    Mar 23, 2011
    Thanks for your reply.

    The reason is budget, plus the fact that hardware RAID cards are hard to obtain on their own without an accompanying server (I'm in a third-world country in Asia).

    The only off-the-shelf parts available are all consumer grade, and server hardware lead times are not very good in case I ever need to rebuild. (It took 3 months for the Lenovo server for our critical apps to be delivered.)

    I also don't need a Xeon and ECC for what I'm doing -- I'm looking for cost-effective ways to maintain uptime, as I make backups anyway.

    If hardware RAID cards were easily available here, I'd have gone for that, but they aren't, and I don't want to go the branded-server route just yet (we've had this SOHO server for 5-10 years doing very menial things). I just want to keep uptime up at minimum cost, and I don't want to migrate this as a VM to our branded machine.

    You make good points, but at this point in time we're willing to take that risk due to budget constraints. (A branded server here is 2-3x the cost of a decent consumer-grade machine.)

    Thanks for the info. I'll try setting something up soon. I have a Z68 and a Z87 here, but no more modern chipset to test RAID 1 with (H310s and B360s don't support it). I may need to try an H370 or a Z390 if I can get one cheap.

    I do have daily onsite/offsite backups, so this project is really about ensuring uptime in case I'm not at the office and a hard drive decides to crap out.
     
    Last edited: Oct 22, 2019
  11. OFaceSIG

    OFaceSIG 2[H]4U

    Messages:
    2,119
    Joined:
    Aug 31, 2009
    I'm assuming you just want a safe place to store your data locally. If that's the case, look into FreeNAS. I run FreeNAS with two 3TB Seagate drives in an encrypted software ZFS mirror, and I can max out my 1Gbps connection with just two HDDs. It's safe, fast, and cheap. Don't do it on the motherboard in your main machine. ZFS was designed to be an awesome storage platform, and it is. I'm a Linux/Unix dummy and I managed it -- just look up YouTube videos. The specs of my server are in my sig. Works flawlessly.
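
    The mirror setup itself is about as simple as storage gets. FreeNAS does it all through the web UI, but a rough command-line equivalent looks like this (pool and disk names are examples):

        # create a two-disk ZFS mirror named "tank" (FreeBSD device names)
        zpool create tank mirror /dev/ada1 /dev/ada2
        zpool status tank     # both disks should show ONLINE
        zfs create tank/data  # a dataset to share out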

    Hardware RAID makes sense in a pre-fabbed server that has a drive cage, hot-swappable disk trays, etc. In a home-built machine it's just a mess. Not impossible, but for his need of simple, safe storage, it's overkill.
     
  12. Legendary Gamer

    Legendary Gamer Gawd

    Messages:
    700
    Joined:
    Jan 14, 2012
    Running a mirror on your boot volume is always a good call if you have the spare money for the second storage device. It's not a performance option unless you're using a hardware RAID controller (if you are, there's a good chance the controller will push your drives to their performance limits, SSD or not), but it's no less effective than running a single standard SATA/SSD device. You'll never notice the performance hit, and in the event of a failure you can disconnect the failed drive and run off your mirror.

    I build these solutions for more bulletproof installations for friends and family who don't demand (or understand) the highest levels of performance.
     
  13. kdh

    kdh Gawd

    Messages:
    767
    Joined:
    Mar 16, 2005
    The advice you got was beyond wrong.
     
    Red Falcon likes this.
  14. Dead Parrot

    Dead Parrot 2[H]4U

    Messages:
    2,592
    Joined:
    Mar 4, 2013
    Never tried moving between different MB/chipsets, and I wouldn't expect much chance of success. By the time your first MB is due for retirement, there will very likely have been enough changes in how the chipset implements RAID to make the move doubtful. Better to have a good backup/recovery process and expect to use that to move between systems.

    I did install a set of Compaq (pre-HP) drives from an ancient server that were in a RAID 5 setup, and I was very surprised when the new server (different Compaq family, CPU, and RAID card) not only recognized the array but booted from it. The old drives weren't even in the same order.

    Before picking a MB, make sure it really does have RAID support in its production-level firmware. I've been burned before by MBs that advertised RAID capability but only had RAID support in beta firmware, if at all. Also verify how many drive ports/slots are supported -- some MBs will only support RAID on a few ports, which can cause issues depending on how many drives you thought you needed.
     
  15. kdh

    kdh Gawd

    Messages:
    767
    Joined:
    Mar 16, 2005
    @OP, it depends.

    I've used AMD MBs with built-in RAID controllers for years. As long as the new controller is in the same family as the one you're currently using, you'll be fine. I've done it easily 7 or 8 times with zero issues.

    If you jump motherboard platforms, like AMD to Intel, things most likely will not work.

    As for hardware vs. software RAID, it's more or less personal preference. I'll pick hardware RAID over software RAID any day. Why? You have a dedicated card with a dedicated processor doing the work for you; set it up once and move on. Software RAID means you're stealing OS resources to keep your volumes alive.

    It just depends. If you're constantly rebuilding your machine on different hardware platforms, software RAID is fine because you can carry it between systems. If you build a machine and then just leave it alone for the next 4 years like I do, hardware RAID all day.
     
  16. kdh

    kdh Gawd

    Messages:
    767
    Joined:
    Mar 16, 2005
    Those 4- and 5-series RAID cards in those old HP servers were some of the best back in the day. Very few vendors at the time could pull that off -- maybe Dell PERCs, not many others.
     
  17. Jandor

    Jandor Limp Gawd

    Messages:
    296
    Joined:
    Dec 30, 2018
    All my computers (home and office) run RAID (except the laptops), and have for more than 20 years. For SSDs, you need to pair two SSDs of completely different brands (different flash and different controller) -- for instance, a Crucial MX and a Samsung EVO are a good choice. Frankly, it's better to stay on SATA for RAID.
     
  18. kdh

    kdh Gawd

    Messages:
    767
    Joined:
    Mar 16, 2005
    While you can mix and match drives in a RAID volume, it's not a great idea. The array will only run as fast as its slowest device, and you can get unreliable performance mixing drive manufacturers, especially if the drives have different cache sizes. If you're doing hardware RAID, don't mix and match drives within the same RAID volume -- use the exact same drives for one common volume.

    Don't mix and match RAID controllers from different manufacturers either; you will have problems. Your idea of using different ports on different controllers is right, but make sure all the controllers are the same model. If you've been successful mixing drives and controllers, been stable, and been happy with the performance? Cool -- sometimes your budget wins over best practices. But if you have the budget, get the same drives and controllers.

    There aren't many options for consumer-level storage other than SATA devices. SAS isn't widely supported in consumer land, and most consumers wouldn't see the benefit of it anyway. SATA is technically just the interface: a SAS controller can take a SATA drive because SAS is backwards compatible and tunnels the ATA protocol, but you can't plug a SAS drive into a SATA controller. So by default, most consumer options will be SATA running in AHCI mode or RAID mode.
     
    Red Falcon likes this.
  19. Stanley Pain

    Stanley Pain 2[H]4U

    Messages:
    2,453
    Joined:
    Apr 5, 2001
    The general rule of thumb with migrating motherboard RAID arrays is that you don't. Heck, they can flake out after a mere BIOS update. Much better to use something like ZFS, MD RAID, or even Windows Storage Spaces.
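
    That's the software RAID advantage in a nutshell: the array metadata lives on the disks themselves, so a new box can just reassemble it. With Linux MD RAID, for example, the move is roughly this (a sketch; the config file path varies by distro):

        mdadm --assemble --scan                    # read superblocks, assemble arrays
        cat /proc/mdstat                           # verify they came up
        mdadm --detail --scan >> /etc/mdadm.conf   # persist for the next boot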
     
  20. Kardonxt

    Kardonxt 2[H]4U

    Messages:
    3,034
    Joined:
    Apr 13, 2009
    Just make sure you have a decent backup. The RAID is only good for preventing downtime in the event of a drive failure.

    As long as you have decent image backups in place, you should be able to restore to similar hardware fairly painlessly regardless of RAID config. If you go the VM route, dissimilar hardware configs should be painless too.
     
    kdh likes this.
  21. Kardonxt

    Kardonxt 2[H]4U

    Messages:
    3,034
    Joined:
    Apr 13, 2009
    I believe the standard is to use RAID 1 for the OS volume and to avoid all other types of RAID for it.
     
  22. Private_Ops

    Private_Ops [H]ard|Gawd

    Messages:
    1,854
    Joined:
    Jun 4, 2007
    Yeah, I can see RAID 1 being of use -- at least it protects you from a single drive failure.

    I use it for a general storage pool (with periodic copies/refreshes to an external drive that stays in my drawer) on my main rig.