Clone CentOS software RAID 1 to a single 500GB drive

So I have CentOS as a VMware server, using two 75GB drives in a software RAID 1.

It was perfect at first, but I added a VM for WSUS and it pretty much stole all of my hard drive space, so I'm wondering what my best option is.

Technically I have two 500GB drives I can use in another RAID 1 if I need to, but I figure if I can clone the OS to a single drive, I should be alright and have things I can do with the other drive.

It's a production machine, so I haven't tried anything yet, but I'm wondering if maybe True Image or Ghost would simply grab the RAID as a single image that I can transplant however I want, onto a single disk or another RAID.

Has anyone tried this before?

Sorry if it's a brutally simple question, long night / early morning :p

Thanks :)
 
RAID 1? That's mirroring. I would imagine all you would need to do is pull one drive, make an image, and restore the image to the new drive. Heck, I bet you don't even need to pull one of the drives. Of course, I've never dealt with software RAID or CentOS, so don't listen to me :)
 

What does your disk layout look like? Do you have partitions on top of the RAID? LVM? Do you boot from it?

It should work fine to just clone the disks, especially if you're not booting from them (you can always just run the RAID in a degraded state if you don't want to fiddle with any of this). However, you may need to modify the partition table and change the filesystem type of the main partition, and you may also need to modify some logical references in configuration files and such. That said, I probably wouldn't do this. Instead, I'd do something along the lines of the following, though it will depend on your disk layout (a rough command sketch follows the list):

1) Fail one of the disks in the array manually with mdadm --fail.
2) Remove the disk with mdadm --remove.
3) Your array is now in a degraded state. Use dd to clone the /dev/mdX device onto the disk you just removed from the array.
4) Edit the partition table of the disk you just cloned onto and reinstall your bootloader. If you were using LVM on the whole RAID, I'd do this a bit differently: a disk with no partition table is not bootable, so you'll need to create a new one, and you may need to ditch LVM as well.

If you were booting from it:
5) Mount the partition on the cloned disk and modify /etc/fstab (if necessary; I think CentOS uses labels or UUIDs by default), then rebuild your initrd. Modify your bootloader configuration as well (if necessary, which it probably isn't). Unmount.
6) Reinstall the bootloader for good measure, just in case.
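Roughly, the commands might look like the following. All the device names below are placeholders (/dev/md0 for the array, /dev/sdb1 for the member being pulled, /dev/sdc for the target disk), so adjust them to your actual layout, and ideally do the copy with the VMs shut down so it comes out consistent:

mdadm /dev/md0 --fail /dev/sdb1                # step 1: mark the member as failed
mdadm /dev/md0 --remove /dev/sdb1              # step 2: pull it; the array keeps running degraded
fdisk /dev/sdc                                 # step 4 prep: create a Linux (type 83) partition, e.g. /dev/sdc1, at least as big as /dev/md0
dd if=/dev/md0 of=/dev/sdc1 bs=4M              # step 3: block-copy the array's contents onto the new partition
mount /dev/sdc1 /mnt                           # step 5: fix /mnt/etc/fstab if it references /dev/mdX, rebuild the initrd
grub-install --root-directory=/mnt /dev/sdc    # steps 4/6: put GRUB on the new disk's MBR (assumes /boot lives on the cloned filesystem)
umount /mnt

If /dev/sdc1 ends up bigger than the old array, resize2fs can grow the filesystem to fill it afterwards.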
 
Qwerty Juan said:

What is wrong with software RAID?

mdadm ROCKS for all kinds of reasons...
 
I don't see what's right with it... get yourself a nice RAID controller card. :)
Usually a "nice" RAID card costs more than the rest of the system, at least in the situations where software RAID might be useful.

I use it myself for my home system: four disks in RAID 5 plus a hot spare. The only RAID card I wanted was close to 800 bucks plus a BBWC module. The entire "server" ran me about 600.
 
I have seen way more performance issues with hardware RAID than I care to mention, and software RAID bugs/features are updated a heck of a lot more often than hardware RAID firmware.

No reason to really use hardware RAID these days, IMHO.

Also: I'd wager that a cheapie "hardware RAID" controller will be eaten alive by (good) soft RAID on a decent processor with SSDs.
 
For a graphical solution, try GParted. If you want to do it from the command line, you can use dd.
 

How about no. Sorry, but any company or person who needs redundancy will choose a high-quality RAID component to do what they need.

Software RAID offers no protection during boot, no real setup/config (except hardware-assisted RAID), no protection from a power loss, and is tied down to one OS at a time (so you can't migrate OSes). Software RAID itself can be corrupted or destroyed by some form of malicious code (a virus or trojan). Software RAID also has trouble dealing with many of the newer journalled filesystems.

Sure, if you are a Windows guy on a home workstation looking to get better performance from a game or looking for simple 1:1 mirroring, software RAID is great. It is way too vulnerable to be used for business/profiting endeavours.





Now, as for the OP's problem: take the new drive and install it, then use dd or cat to copy the RAID to the new drive. You can do this as follows in its simplest form. I choose cat over dd as it is faster, even when dd's block size is set comparable to cat's 4096.

sudo bash -c "cat /dev/raid | cat > /dev/nonraid"

Then you will likely need to rewrite the boot sector for the OS, so just reinstall GRUB or LILO. The Super Grub Disk does this nicely as well.
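If you want to do the GRUB part by hand instead of booting a rescue disc, the GRUB legacy shell version is short. Purely as an example, assuming the clone is the second BIOS disk and its first partition holds /boot:

grub                      # start the GRUB shell as root
grub> root (hd1,0)        # the partition holding /boot (and the GRUB stage files) on the clone
grub> setup (hd1)         # write stage1 to that disk's MBR
grub> quit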


Here it is in more detail: http://knife-bst.com/tech/?p=52
 

Protection "during boot"? What do you mean? Make a RAID 1 and install GRUB on both disks. DONE.
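For anyone curious what "install GRUB on both disks" actually looks like with GRUB legacy, it's roughly this (disk names are just examples; the trick is temporarily remapping the second disk as hd0 so it can boot on its own if the first disk dies):

grub
grub> root (hd0,0)            # /boot partition on the first disk
grub> setup (hd0)
grub> device (hd0) /dev/sdb   # remap the second disk as hd0
grub> root (hd0,0)
grub> setup (hd0)
grub> quit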

Protection from Power loss: Your OS should be flushing disk buffers before your UPS runs out. You do run a UPS on your box, right?

Tell me how, if my box has hardware RAID, the RAID controller protects it from corruption via a virus/malware. That is what backups are for.

RAID != backups. You do have backups, right?

You are so misinformed, it isn't even funny.

Btw, if you want to see real performance issues with hardware RAID controllers, go look at the OCZ forum threads about multiple Vertexes.

On top of that, with hardware RAID you are restricted to a single bus. There is no reason I can't do software RAID across multiple "dumb" controllers, FC, iSCSI, etc.
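To illustrate: md doesn't care where its block devices come from, so something like this works whether the members are local SATA, FC, or iSCSI LUNs (device names are made up):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdc1    # /dev/sdc1 could sit behind a second controller or an iSCSI target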

Also: I don't run Windows. Again, please tell me how "software RAID" (not "the Windows implementation of software RAID") is not better than hardware RAID? It's less complex and more proprietary, a tradeoff I'm not generally willing to make.

Btw, if you want the ULTIMATE in data integrity, you run ZFS. Guess what? It's software volume management plus, *gasp*, RAID.
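Case in point: a mirrored ZFS pool is one command (pool and device names here are just examples):

zpool create tank mirror c1t0d0 c1t1d0    # Solaris-style device names; ZFS handles the mirroring and checksumming
zpool status tank                         # shows the mirror topology and any errors a scrub has found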
 
Protection "during boot"? What do you mean? Make a RAID 1 and install GRUB on both disks. DONE.

It's quite easy for software RAID to be destroyed on boot via some form of malicious injection. It's actually incredibly common if you work in enterprise-level IT. It is impossible for it to happen at the hardware level.

Protection from Power loss: Your OS should be flushing disk buffers before your UPS runs out. You do run a UPS on your box, right?
That is not how modern filesystems work, nor how software RAID works. Maybe you should read some papers on how modern operating systems actually work, rather than talking about those from 20 years ago.

Tell me how, if my box has hardware RAID, the RAID controller protects it from corruption via a virus/malware. That is what backups are for.

RAID != backups. You do have backups, right?

You are so misinformed, it isn't even funny.

People usually use RAID to make backups redundant (NetApp, etc.), so your analogy is crap.

I don't have the time or energy to deal with your ignorance on the subject; this document explains it all very well: http://www.adaptec.com/en-US/resourcecenter/Education/tutorials/4423_SWRAID_WP.htm

Have fun reading and being proved wrong.
 
It's quite easy for software RAID to be destroyed on boot via some form of malicious injection. It's actually incredibly common if you work in enterprise-level IT. It is impossible for it to happen at the hardware level.
Huh? What does this even mean? Booting from a software RAID-1, under Linux anyway, usually means booting from one of the two disks and then loading the RAID later during the boot process. If one disk fails, you have the machine configured to boot from the other one. Yes, a 'malicious injection' can corrupt the boot process, but that can happen just as well on a hardware RAID setup and has nothing to do with RAID whatsoever. Worst case you have to physically reset the machine and possibly set it to manually boot off the second disk if the boot disk fails in the middle of the first stage of the boot process and stalls the machine, but that's rare and not really catastrophic. If you really want to get down to it, a 'malicious injection' could write an invalid firmware to the RAID controller, fubaring the whole system until a replacement can be found or it can be re-flashed. That's not really an issue with software RAID.

That is not how modern filesystems work, nor how software RAID works. Maybe you should read some papers on how modern operating systems actually work, rather than talking about those from 20 years ago.
Again, huh? If you have sufficient UPS battery to safely shut down the machine, there is no issue on a power failure. Other failures can still hose the system of course, e.g. PSU failure, but these can be mitigated as well. What does anything he said have to do with 'how modern operating system actually work' as opposed to those from 20 years ago?

I don't have the time or energy to deal with your ignorance on the subject; this document explains it all very well: http://www.adaptec.com/en-US/resourcecenter/Education/tutorials/4423_SWRAID_WP.htm
Note the adaptec.com in the URL. This is marketing material for a hardware RAID vendor. Of course they're going to say it's better. It also seems very Windows-centric.

There are some definite advantages to hardware RAID, a couple of which are outlined in that brochure (while most are silly FUD), but soft RAID has advantages as well, and the choice needs to be made on a per-application basis. Neither is always the right solution, and both can be used for 'business/profiting endeavours' with just as much success if the choice is considered and made in light of the rest of the system. Your arrogance and dismissiveness are misplaced.
 
Huh? What does this even mean? Booting from a software RAID-1, under Linux anyway, usually means booting from one of the two disks and then loading the RAID later during the boot process. If one disk fails, you have the machine configured to boot from the other one. Yes, a 'malicious injection' can corrupt the boot process, but that can happen just as well on a hardware RAID setup and has nothing to do with RAID whatsoever. Worst case you have to physically reset the machine and possibly set it to manually boot off the second disk if the boot disk fails in the middle of the first stage of the boot process and stalls the machine, but that's rare and not really catastrophic. If you really want to get down to it, a 'malicious injection' could write an invalid firmware to the RAID controller, fubaring the whole system until a replacement can be found or it can be re-flashed. That's not really an issue with software RAID.

Do you even know what injection means? Malicious code could not just simply overwrite the firmware on a hardware RAID controller; first off, it doesn't have that level of access without user consent, and secondly, you have to boot from the controller rather than the disk first. Booting off the controller circumvents the ability for such a thing to happen; for it to happen, a user would need to OK it via the boot config.


Again, huh? If you have sufficient UPS battery to safely shut down the machine, there is no issue on a power failure. Other failures can still hose the system of course, e.g. PSU failure, but these can be mitigated as well. What does anything he said have to do with 'how modern operating system actually work' as opposed to those from 20 years ago?

But why should one have to buy a UPS? I really see no need for it, especially when RAID controllers as well as new filesystems do a very good job of keeping data current. When I was talking about the OSes, I was talking about him saying the "OS should be flushing disk buffers". I was not even talking about the UPS.

Note the adaptec.com in the URL. This is marketing material for a hardware RAID vendor. Of course they're going to say it's better. It also seems very Windows-centric.
If you don't like the article, then show a source to disprove it. That is usually how sources work. So what if it's Windows-centric? Anything bad that can happen with Windows software RAID can happen on Linux, BSD, or Solaris. It is not hard to write malicious code on those operating systems, especially with C or Python.


I was referring to somebody who seemingly didn't think there was any use for hardware RAID. He is wrong; there are many very good reasons to use hardware RAID, and that's the reason why so many people use it. I have been managing RAID systems for a very long time; hardware controllers work great and offer a lot more functionality than software. I will stick with them until there are major advancements in other technologies. Though I will say, at the level of IT that I work, we actually need to prove the products we use to committees, and they certainly weren't allowing software RAID to handle a few of the servers we have at Chase.
 
Do you even know what injection means? Malicious code could not just simply overwrite the firmware on a hardware RAID controller; first off, it doesn't have that level of access without user consent, and secondly, you have to boot from the controller rather than the disk first. Booting off the controller circumvents the ability for such a thing to happen; for it to happen, a user would need to OK it via the boot config.
Of course I do. Please elaborate on why it's relevant. Everything necessary to boot is stored on the disk in either case, and there's no difference between the two. Any change to the RAID configuration or software/firmware requires privilege escalation and specific targeting in either case. What, exactly, is the difference? The only one I can see is that there's an additional layer of software (i.e. the controller firmware) that could be attacked that doesn't exist in software RAID.

But why should one have to buy a UPS? I really see no need for it, especially when RAID controllers as well as new filesystems do a very good job of keeping data current. When I was talking about the OSes, I was talking about him saying the "OS should be flushing disk buffers". I was not even talking about the UPS.
If you want RAID, you want a UPS too. RAID is for availability, and making your disks redundant while ignoring power problems is... disingenuous. Besides, a hardware controller needs a 'UPS' of its own anyway (a battery backup module) to provide this benefit; it's just in a different place. I still don't get your point about disk buffers. Modern filesystems work differently than older ones, sure, but his point is correct in either case.

If you don't like the article, then show a source to disprove it. That is usually how sources work. So what if it's Windows-centric? Anything bad that can happen with Windows software RAID can happen on Linux, BSD, or Solaris. It is not hard to write malicious code on those operating systems, especially with C or Python.
Nobody has written a 'rebuttal to Adaptec's marketing document' paper as far as I know. If you actually understood how these technologies work you'd see where they're making completely overblown claims. My point about it being Windows-centric is that some of the shortcomings are specific to the Windows implementation of software RAID, such as poor performance and the lack of parity RAID. Which makes me assume the rest of their analysis is based on Windows software RAID as well, and maybe the things they cite are actually issues there, but they're not on Linux. It also doesn't take into account the modern RAID-like systems like ZFS, which deal with some of their other legitimate issues. And can we stop talking about 'malicious code'? That's so irrelevant to the issue of RAID technology it's not funny. If you've got privileged mode malicious code running, you're fucked no matter what kind of RAID you're running, and RAID has nothing whatsoever to do with protecting you from it.

There are some good reasons to use hardware RAID, but with things like ZFS I think they're rapidly disappearing. It's already virtually 'free' as far as computation cost is concerned, and with the additional data protection and flexibility offered by giving the OS visibility of the data structure on disk, the minor advantages of hardware RAID are quickly becoming less relevant. They will probably remain useful for heavily loaded DB servers and the like, but for generic mass data storage? If Windows ever gains decent support for something like ZFS, hardware RAID is going to lose a lot of support.

Of course large corporations and old-timers ingrained in their thinking will take longer to migrate to the new technology, but that doesn't make it invalid.
 