You should try an OpenSolaris flavor, or Solaris with napp-it ... everything in the GUI ... So how do I scrub pools and take snapshots?
I'm on Debian with mdadm...
what filesystem are you using?
Some filesystems add padding bytes ... oops ... which can make a file look bad/corrupted during checksumming.
Can you load those files into the program or app that requires them, to check whether they really are bad/corrupted?
get better hardware to minimize your headache...
I run Debian, XFS + MDADM, Raid-6
I want to know how I can scrub pools, how I can snapshot pools, and how to send pool status by email.
echo repair > /sys/block/md_/md/sync_action
As far as the snapshot part, I think you need LVM on top of md for that, no?
sudo apt-get install lvm2
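For reference, here is a sketch of taking a classic (non-thin) LVM snapshot, assuming a volume group `vg0` containing a logical volume `data` (both names are made up for illustration). Note the fixed copy-on-write reservation you have to pick up front:

```shell
# Assumes a volume group "vg0" with a logical volume "data" (hypothetical names).
# Reserve 5 GiB of copy-on-write space; if more than 5 GiB of the origin
# changes while the snapshot exists, the snapshot becomes invalid.
sudo lvcreate --snapshot --size 5G --name data-snap /dev/vg0/data

# List volumes; the snapshot shows its origin and CoW usage.
sudo lvs

# Drop the snapshot when you no longer need it.
sudo lvremove /dev/vg0/data-snap
```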
mdadm can start a check (scrub) by:
Code:
echo repair > /sys/block/md_/md/sync_action
Of course, replace md_ with whatever volume you want to repair.
If you just want a check without repair, replace "repair" with "check".
https://raid.wiki.kernel.org/index.php?title=Scrubbing&oldid=2362
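Putting the above together, a sketch of a scrub plus email notification, assuming an array named `md0` and a placeholder address (both are just examples):

```shell
# Kick off a read-only consistency check on md0 (run as root).
echo check > /sys/block/md0/md/sync_action

# Watch progress; sync_action reads "check" while running, "idle" when done.
cat /proc/mdstat
cat /sys/block/md0/md/sync_action

# Sector count of mismatches found by the last check.
cat /sys/block/md0/md/mismatch_cnt

# For email alerts, set a destination in /etc/mdadm/mdadm.conf:
#   MAILADDR you@example.com
# then run the monitor daemon (Debian's mdadm package normally starts this for you):
mdadm --monitor --scan --daemonise
```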
-bash: /sys/block/md1/md/sync_action: Permission denied
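That `Permission denied` is expected even if you prefix the command with `sudo`: the redirection into `/sys` is done by your unprivileged shell before `echo` ever runs. Either do the whole thing in a root shell, or let a privileged process perform the write:

```shell
# The redirection must happen in a process running as root.
sudo sh -c 'echo check > /sys/block/md1/md/sync_action'

# Equivalent alternative: tee does the writing and runs under sudo.
echo check | sudo tee /sys/block/md1/md/sync_action
```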
My main objection to md+lvm+FS is that everything is separated. For example, if you want to do snapshots with LVM (at least this is how it used to work), you had to guess how much space you'd want for them and keep that aside for LVM to use. If you guessed too small, you don't get the snapshots you want; if you guessed too big, you waste space.
There isn't any real guesswork here though. No more guesswork than deciding how many disks go into a vdev versus what the end user requires.
In terms of LVM being a separate service, it's no more of an issue than on any other OS. SMB and NFS are separate services, and they drive everything from Napp-It to Windows.
I assume you are talking about thin provisioning, and even here you always need to calculate how many drives are required, how much space is expected, and your burn-through rate. All thin provisioning does is change or delay when you need to figure out your usage patterns; it doesn't remove them. Otherwise the file system in question would be called "magic" and we all wouldn't have to address storage ever again.

I don't think we're communicating here. Of course there is guesswork. You put N drives together into a RAID volume. You then have to decide how to slice and dice it to assign space to various partitions on the volume. You also have to take a guess as to how much space you need for snapshots. None of this is true for ZFS, since the snapshots and filesystems all draw free blocks from a common pool.
I'm sorry, but you need to be more detailed here. Rather than jumping to conclusions, please give an example of how this relates to file systems and in what case it would be beneficial to the end user.

I have no idea where you got the 'lvm being a separate service' thing; I never referred to 'services', which is an OS abstraction. I was referring to conceptual layers: if they are too opaque, one layer cannot make intelligent decisions based on what a different layer might want/do.
I guess you could say it's thin provisioning, if you want.
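For what it's worth, newer lvm2 does support thin provisioning, where volumes and their snapshots all allocate from a shared pool instead of per-snapshot reservations. A sketch, with invented volume group and volume names:

```shell
# Create a thin pool inside volume group "vg0" (hypothetical names).
sudo lvcreate --type thin-pool --size 100G --name pool0 vg0

# Thin volumes and their snapshots draw blocks from pool0 on demand,
# so there is no fixed per-snapshot space reservation to guess at.
sudo lvcreate --thin --virtualsize 50G --name data vg0/pool0
sudo lvcreate --snapshot --name data-snap vg0/data
```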
I could go out of my way to help you here, but I'm not going to. What I will say is that your knowledge of what LVM can and can't do is limited, and I'll leave it at that.
Here we go again... As I said, better hardware will not help against data corruption. I told you this several times before: get better hardware to minimize your headache...
Yes, the last sentence is a big, big problem. You need to compute checksums of all files, and update the checksum file whenever you add more files. Then you need to rerun the checksum calculations every month to see if any files have been altered, and fetch any damaged file from backup. So this is a huge pain if you do it manually.

I don't know if it's silent data corruption as in "the file was fine on the drive, then it was corrupted", or rather "the file was never fine because it was corrupted during transfer". I just know that until recently I didn't hash my files, but after I discovered that a simple move had corrupted a file I started doing this, and I find differences between original and backup. What is worse is that I often swap the backup and original drives to even out wear, so I don't even know which copy is good when I find inconsistencies.
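The manual workflow described above can be scripted. A minimal sketch with sha256sum (the `photos` directory is just a stand-in for your real collection):

```shell
# Example tree; stand-in for your real photo/video collection.
mkdir -p photos
echo "frame data" > photos/clip1.mkv

# Build a checksum manifest for everything under photos/.
(cd photos && find . -type f ! -name SHA256SUMS -print0 \
  | xargs -0 sha256sum > SHA256SUMS)

# Later (e.g. monthly from cron): verify. --quiet prints only failures.
(cd photos && sha256sum --check --quiet SHA256SUMS) && echo "all files OK"
```

Rerun the manifest step after adding files, and any bit flip shows up as a `FAILED` line on the next verify.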
Windows/NTFS and most of these files are video so there is no way that I know of to check them. Usually the changed bit(s) will cause a glitch (I use Total Commander to find the differences between two files, then compute at which time in the video it would happen).
I could go out of my way to help you here, but I'm not going to. What I will say is that your knowledge of what LVM can and can't do is limited, and I'll leave it at that.
Nice. I admitted my understanding of LVM was limited; certainly several years ago (my last exposure to it), it had the limitations I mentioned. At no point have you explained otherwise, you just made a snarky comment and ran. Oh, by the way, I wasn't the one asking for help, I was trying to provide it. LOL...
Well, it's hard to explain briefly, since LVM information is dispersed, especially across the move from LVM to LVM2.
My original experience with lvm was as described in my earlier posts. e.g. when you created volumes you needed to decide how much to allot for them and how much for clients (filesystems/partitions). If newer lvm implementations do not share that shortcoming, it should have been simple for kac77 to explain why. Instead he took a cheap shot and left. I'm not sure from your reply if you are agreeing or disagreeing with him. If the former, you are not making things any clearer than he did.
I'm still not clear, I guess. My understanding of LVM: when you set up a pool (or whatever LVM calls it), do you not need to specify how much space to reserve for snapshots? If so, that was my point. If not, a simple explanation should suffice without requiring the other person to go searching for LVM resources online. (I don't think I'm being unreasonable here: I provided a brief explanation of how ZFS works, and didn't say 'there is plenty of ZFS info out there, go read it...')

To elaborate on my original point: with ZFS, free space comes from the pool itself, so you don't need to create fixed-size partitions; whether they can be grown or shrunk is then up to the filesystems in question. Again, I'm not saying this makes md+lvm+FS bad, just less flexible and requiring manual intervention in cases where ZFS 'just works'. That's all I was ever saying to begin with...
Seriously guys, this is the worst case of thread hijack I have ever seen!
I had to go with mdadm + XFS since ext4 doesn't support partitions over 16 TB, but I will keep myself posted on ZoL development and really hope it gets good, because ZFS is what I really want.
Also, I got my 10 GbE switch and 10 GbE card yesterday, just need a cable now and I will test speeds over network :-D
Why is Oracle developing btrfs for Linux? I had actually never heard of it; it seems very interesting. Will it be a RAID filesystem, or do I have to use mdadm + btrfs?
I _do_ use btrfs on mdadm. btrfs does not support a parity based redundancy scheme yet and it will stay that way for a long time probably. While the "raid1" scheme of btrfs may be okay for 2 or 3 disks, it is quite wasteful for 8 or more drives. The main reasons I use it are the snapshotting and checksumming capabilities. If a corrupt file is detected I can always restore it from (daily) backups.
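The two capabilities mentioned above look roughly like this on the command line (the mount point and snapshot name are invented for the example):

```shell
# Scrub: re-read everything on the filesystem and verify btrfs checksums (as root).
btrfs scrub start /mnt/storage
btrfs scrub status /mnt/storage

# Read-only snapshot of a subvolume, e.g. as the basis for a daily backup.
btrfs subvolume snapshot -r /mnt/storage/data /mnt/storage/snapshots/data-daily
```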
My own assumption (just mine): mdadm will see fewer deployments once btrfs is stable and mature, feature-wise.
So you're a kernel patch contributor. Nice! ... BTW, during my mdadm testing a few months back I triggered a kernel bug (scsi_remove_target) that was fixed, and I ended up getting my name in the kernel logs / patch.
With that said, moving on to lvm2: I see some interesting recent developments there. They are adding RAID 5/6 to LVM, although I highly doubt I would ever use that.