ZFS woes

napster0317

Limp Gawd
Joined
Mar 19, 2001
Messages
175
I have been messing around with my server and noticed that every so often the drives seem to change their device node in the /dev directory. I am looking to see what I can do to make them be assigned permanently to a specific /dev/ name so my ZFS install stops saying that the drives have gone bad.

Using Ubuntu Server

I have two 500 GB HDDs,
a 640 GB HDD,
and a 3 TB HDD.

I can get the ZFS pool going, but after a few days it dies out. Any help would be great!
 
Here is what I have figured out so far:
user@SERV:~$ sudo blkid
[sudo] password for user:
/dev/sda2: UUID="4e628d6d-d7e1-4293-96ec-576160400c40" TYPE="ext4"
/dev/sdb1: UUID="d00ae7b7-54b2-4f5a-b7b7-3081eaea8f50" TYPE="ext4"
/dev/sdd1: UUID="db0c4c38-1d5b-4f5e-8bf1-396fd6ef0525" TYPE="ext4"
/dev/sde1: UUID="fae749aa-9895-4a0f-95c9-cf7e7a2a9714" TYPE="ext4"
/dev/sdc1: UUID="67d11a74-63d5-43a5-a08c-86f88a058960" TYPE="ext4"

lsscsi
[1:0:0:0] disk ATA TOSHIBA MK6476GS 1M /dev/sda part of zfs pool
[2:0:0:0] disk ATA SanDisk SDSSDA24 00RL /dev/sdb OS
[3:0:0:0] disk ATA ST3500630AS K /dev/sdc part of zfs pool
[4:0:0:0] disk ATA WDC WD5000AACS-0 4C05 /dev/sdd part of zfs pool
[5:0:0:0] disk ATA Hitachi HUA72303 A580 /dev/sde part of zfs pool
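
For reference, I think the persistent names udev creates can be listed like this (each ata-* entry is a symlink back to whatever sdX node the drive currently has):

user@SERV:~$ ls -l /dev/disk/by-id/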

So when I reboot the system the disks may get a new /dev/sd* assignment, and that is what is breaking my ZFS pool.

I noticed that after rebooting the system with an external hard drive and a thumb drive plugged in, the /dev/sd* assignments changed again, but rebooting without them the /dev/sd* assignments went back to normal.

What I need to figure out is how to lock each drive to a fixed /dev/sd* name so that my ZFS pool will be set to go.
Any help would be greatly appreciated.
 

Do a "zpool export <name"> and "zpool import <name>"

That should (in theory) get the array defined using /dev/disk/by-id names, which won't change no matter what happens to the /dev/sd<x> assignments.
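
A minimal sketch of that, assuming the pool is named stor (as in the create command later in the thread) and using import's -d flag to point it explicitly at the by-id directory:

sudo zpool export stor
sudo zpool import -d /dev/disk/by-id stor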
 

You should read the ZFS on Linux - FAQ.

This is the important part:
/dev/disk/by-id/: Best for small pools (less than 10 disks)
or
/dev/disk/by-path/: Good for large pools (greater than 10 disks)
or
/dev/disk/by-vdev/: Best for large pools (greater than 10 disks)


I suggest you use /dev/disk/by-id :D


The link goes to the ZFS on Linux FAQ -> ZFS on Linux
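
For what it's worth, the by-vdev option above works through /etc/zfs/vdev_id.conf. A rough sketch using the drive IDs from this thread, with made-up alias names d1-d4 (after editing the file, udevadm trigger should create the /dev/disk/by-vdev/ links):

# /etc/zfs/vdev_id.conf
alias d1 /dev/disk/by-id/ata-Hitachi_HUA723030ALA640_MK0351YHG8HMTA
alias d2 /dev/disk/by-id/ata-ST3500630AS_5QG1RKLS
alias d3 /dev/disk/by-id/ata-TOSHIBA_MK6476GSXN_81RHB1Q2B
alias d4 /dev/disk/by-id/ata-WDC_WD5000AACS-00G8B0_WD-WCAUF0995945

sudo udevadm trigger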
 
I will definitely give that a try later tonight and will update the forum here.

Thank you.
 
That worked perfectly. Used the command below and all is well in the world now. Thank you all; as always, this forum never lets me down!!!
sudo zpool create -f stor raidz /dev/disk/by-id/ata-Hitachi_HUA723030ALA640_MK0351YHG8HMTA /dev/disk/by-id/ata-ST3500630AS_5QG1RKLS /dev/disk/by-id/ata-TOSHIBA_MK6476GSXN_81RHB1Q2B /dev/disk/by-id/ata-WDC_WD5000AACS-00G8B0_WD-WCAUF0995945
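
To double-check that the pool is referencing the stable names:

sudo zpool status stor

It should list the drives by their ata-* IDs instead of /dev/sdX.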
 