I humbly approach with a FreeNAS problem

Gents,

My understanding of networking issues is less than basic. Please be patient with me, I'm trying to claw my way up the learning curve.

I have a FreeNAS box running in a Supermicro rig. I set it up and mapped drive S: to the directory I am using to organize my media content (it's a Plex box).

Originally, the FreeNAS box was set up with DHCP, and using the first IP assigned to it everything was cool and I was rocking along... then I had a power outage and it was assigned a different IP, which messed everything up.

So, I configured the box with a static IP, the same one it was originally assigned. I can ping the IP from another machine on the network, but the shared drive is not accessible, and neither is the box when attempting to connect to the IP from a browser.

**I discovered the router had been assigned the IP used by the NAS box; I could ping the router, not the NAS box.**

Is it possible some other device was assigned the IP upon reboot? Not sure exactly how to approach this problem.

Which leads me to my next problem. I was attempting to check out the router (Asus 66u set up as bridge from a firewall box) but couldn't get it to talk to me either.

I used ipconfig and tried to connect to the gateway listed there. I assumed that would be the IP of the router.

If one of you guys would like to take pity on me and offer some guidance, it'd be appreciated.
 
You should use DNS names instead of hard-coding static IPs in things...

And you should probably also have static DHCP reservations in your DHCP server for things on your network that provide a service (mmmm... servers, etc.)

I generally set aside a special DHCP scope for this... so perhaps most of my client devices get .100-.199 on my /24, but I reserve .10-.99 for static DHCP reservations for servers and IP cams and other things I don't want to move around too much...
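For example, if the DHCP server happens to be dnsmasq-based (many router firmwares and small firewall distros are), this is only a couple of config lines; the MAC and addresses below are placeholders, not the OP's actual network:

```
# dnsmasq.conf sketch (MAC/IP values made up for illustration)
# Hand out dynamic leases only from .100-.199
dhcp-range=192.168.1.100,192.168.1.199,12h
# Always give the FreeNAS box the same address
dhcp-host=0c:c4:7a:aa:bb:cc,freenas,192.168.1.50
```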
 
Can you run an IP scanner like Angry IP Scanner and confirm which devices your workstation sees on your network?

Also give us screenshots of both the FreeNAS and your workstation IP settings, in case there is a discrepancy, e.g. a wrong subnet.
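If installing a GUI scanner is a hassle, an nmap ping sweep from the workstation gives the same answer; this assumes the LAN is 192.168.1.0/24, so adjust to whatever ipconfig reports:

```
# List every host that answers on the local /24
nmap -sn 192.168.1.0/24
```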
 
Tons of FreeNAS experience here - My first suggestion is to move to XPEnology. Save yourself a TON of headache, as well as get a bunch more features.
 
I will take some screenshots. In the meantime, I discovered some things. First: when everything got rebooted, the IP that was assigned to my NAS box got assigned to a router used as an AP. The router cannot see the NAS as a connected device (it's wired in). I cannot ping the NAS box from my workstation. And of course, the shared drive remains inaccessible, as is the graphical interface.
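A few quick checks from the Windows workstation can narrow this down; the 192.168.1.50 address below is just a stand-in for whatever static IP the NAS was given:

```
rem Confirm the workstation's own address, mask and gateway
ipconfig /all
rem See whether the NAS answers at all
ping 192.168.1.50
rem Check which MAC address currently claims that IP - if it's the AP's MAC,
rem you have an address conflict rather than a NAS problem
arp -a
```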
 
Tons of FreeNAS experience here - My first suggestion is to move to XPEnology. Save yourself a TON of headache, as well as get a bunch more features.
Is this a joke? A hack of an OS with a buggy, unproven file system vs. a well-supported, rock-solid ZFS solution?
 
Is this a joke? A hack of an OS with a buggy, unproven file system vs. a well-supported, rock-solid ZFS solution?

Depending on his implementation, nope. If you're a power admin who wants to deal with FreeNAS in a corp environment - Go for it.

If you just need reliable network storage at home - why not consider XPE? I have two of these running, for over a year now, with over 50 TB between them. I have no issues with them at all. I get 112 MB/s read/write whenever I need it.

And an unproven filesystem? How is RAID 5/6 or 10 unproven? As long as you back up your data, you never have any worries. I hope you aren't suggesting you never need to back up ZFS...
 
Depending on his implementation, nope. If you're a power admin who wants to deal with FreeNAS in a corp environment - Go for it.

If you just need reliable network storage at home - why not consider XPE? I have two of these running, for over a year now, with over 50 TB between them. I have no issues with them at all. I get 112 MB/s read/write whenever I need it.

And an unproven filesystem? How is RAID 5/6 or 10 unproven? As long as you back up your data, you never have any worries. I hope you aren't suggesting you never need to back up ZFS...
RAID 5/6 is broken on btrfs. ZFS is data integrity/security storage, not just a network share. If all one needs is a network share, that's one thing, but there is more to long-term data reliability, and I wouldn't trust anything besides ZFS for important data.
 
After additional testing, I've concluded the network stack is messed up. It won't talk to my workstation even when directly wired to the NIC. I've configured it a hundred times. I'm going to try a fresh install of the OS. I've just gotta figure out how to do it without data loss.
 
After additional testing, I've concluded the network stack is messed up. It won't talk to my workstation even when directly wired to the NIC. I've configured it a hundred times. I'm going to try a fresh install of the OS. I've just gotta figure out how to do it without data loss.

Should be easy to just reinstall FreeNAS on your OS drive and re-import the storage volume.
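From the console that amounts to roughly this, assuming the pool was named tank (the GUI's Import Volume wizard does the same thing under the hood):

```
# Show pools the freshly installed system can see but hasn't imported yet
zpool import
# Import the existing pool by name; nothing on the data disks is rewritten
zpool import tank
# Confirm the datasets and their data are back
zfs list
```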
 
RAID 5/6 is broken on btrfs. ZFS is data integrity/security storage, not just a network share. If all one needs is a network share, that's one thing, but there is more to long-term data reliability, and I wouldn't trust anything besides ZFS for important data.

Chalk it up to opinion, then? Because I've had no issues; also, if you have maintained backups, you won't have issues with any storage implementation.
 
Word of advice: disconnect your storage drives before you install, you'd hate to erase the wrong partition - care to guess how I know that???
Yessir. I'm planning to do that. I have a significant amount of data I don't want to lose. Not critical data, but voluminous.
 
RAID 5/6 is broken on btrfs. ZFS is data integrity/security storage, not just a network share. If all one needs is a network share, that's one thing, but there is more to long-term data reliability, and I wouldn't trust anything besides ZFS for important data.
Synology does not use btrfs RAID, so that issue doesn't affect any of their products.
 
So. I reinstalled FreeNAS. I imported the old volume, data appears to be there. I installed Plex in a jail. I cannot seem to get the shared drive to be accessible from a Windows machine. Then I need to figure out how to get the data in the legacy volume to be organized into the new shared drive.
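Testing the share from a Windows command prompt often gives a more useful error than Explorer does; the address, share name, and user below are placeholders for whatever the SMB share was actually configured with:

```
rem List the shares the NAS is actually exporting
net view \\192.168.1.50
rem Try mapping the media share to S: as the share's user (prompts for a password)
net use S: \\192.168.1.50\media /user:mediauser
```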
 
So, I've been fiddling with it for about a week now. It's my first adventure into this type of thing.

So far, the major shortcomings of FreeNAS are volume expansion, no global hot spare, and things like that. The lack of hardware raid support slows everything way, way down too.
 
So, I've been fiddling with it for about a week now. It's my first adventure into this type of thing.

So far, the major shortcomings of FreeNAS are volume expansion, no global hot spare, and things like that. The lack of hardware raid support slows everything way, way down too.

I wish freenas had better vm support... Been waiting for ages for a leap forward with bhyve but I get how they might want to keep this out of scope...

Someone else should be working on a plug-in for it

Freenas is really becoming very commercialized I fear
 
So far, the major shortcomings of FreeNAS are volume expansion, no global hot spare, and things like that. The lack of hardware raid support slows everything way, way down too.

Volume expansion I 100% agree with you on. That's a ZFS shortcoming. Changing an existing volume is a problem. No global hot spare is a nice feature I think they should think about for the future. In theory, it shouldn't be difficult to have a hot spare.

However, a lack of hardware RAID support is completely counter to what ZFS is. ZFS is going to be slower on "re-silvering", which to the rest of us means rebuilding a drive. However, as a guy who used to work in a datacenter and loved the easy ability to hot-swap server drives, ZFS's insane resilience, easy import of volumes, and insanely easy encryption with AES-NI still make it a far better solution than hardware RAID.

Hardware RAID IS convenient and fast. But if you really worry about data integrity, resiliency, and ease of recovery if your controller (in this case your motherboard) fails, ZFS has a hardware controller beat, hands down.
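For what it's worth, plain ZFS does let you attach a spare at the pool level even if the FreeNAS UI doesn't make a truly *global* spare easy; a sketch, with tank and da8 as placeholder pool and disk names:

```
# Attach da8 as a hot spare for this pool
zpool add tank spare da8
# The spare shows up in its own "spares" section of the status output
zpool status tank
```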
 
Volume expansion I 100% agree with you on. That's a ZFS shortcoming. Changing an existing volume is a problem. No global hot spare is a nice feature I think they should think about for the future. In theory, it shouldn't be difficult to have a hot spare.

However, a lack of hardware RAID support is completely counter to what ZFS is. ZFS is going to be slower on "re-silvering", which to the rest of us means rebuilding a drive. However, as a guy who used to work in a datacenter and loved the easy ability to hot-swap server drives, ZFS's insane resilience, easy import of volumes, and insanely easy encryption with AES-NI still make it a far better solution than hardware RAID.

Hardware RAID IS convenient and fast. But if you really worry about data integrity, resiliency, and ease of recovery if your controller (in this case your motherboard) fails, ZFS has a hardware controller beat, hands down.

Like anything else, it's got benefits and costs. For me, expansion limits are the biggest problem. It really just means that the array needs to be fully constituted up front. Otherwise, it's a real pain.

I have four 1 TB drives currently working, and I have another four to add. Then I have a whole different set of 4 TB drives to get going - I'm just going to use another volume/share for those.

Like I said, this is new to me. Brand new. I'm learning.
 
Like anything else, it's got benefits and costs. For me, expansion limits are the biggest problem. It really just means that the array needs to be fully constituted up front. Otherwise, it's a real pain.

I have four 1 TB drives currently working, and I have another four to add. Then I have a whole different set of 4 TB drives to get going - I'm just going to use another volume/share for those.

Like I said, this is new to me. Brand new. I'm learning.

I have always thought ZFS's lack of expansion flexibility was to ensure that users had a copy of their data elsewhere.

It is their way of stating "ZFS does *NOT* backup your data!" :)
 
I have always thought ZFS's lack of expansion flexibility was to ensure that users had a copy of their data elsewhere.

It is their way of stating "ZFS does *NOT* backup your data!" :)

They could've just put it in the documentation. Much easier on me.
 
Like anything else, it's got benefits and costs. For me, expansion limits are the biggest problem. It really just means that the array needs to be fully constituted up front. Otherwise, it's a real pain.

I have four 1 TB drives currently working, and I have another four to add. Then I have a whole different set of 4 TB drives to get going - I'm just going to use another volume/share for those.

Like I said, this is new to me. Brand new. I'm learning.
That's why striped mirrors are the way to go. Best performance, quickest resilvering, easiest expandability. With the current price per GB of spinning storage, it's not that cost-prohibitive either.
 
I don't get the ZFS expandability arguments. If you have a zpool and need to expand it, just add a new vdev to it with your new drives. Done. If you must, move your data around to balance the vdevs -- since it's CoW, that likely isn't needed for most people though.
If you can't physically add disks and a vdev, then you set autoexpand on, replace a disk, run zpool replace, and wait for the resilver. As long as you thought your zpool and vdevs through when you built it (i.e. didn't go for large vdevs), it's not too slow nor difficult. Even if you have 8- or 10-wide vdevs, it's still just an exercise in patience, not something "impossible" or even "difficult".
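Roughly, the two expansion paths described above look like this (pool and device names are placeholders):

```
# Path 1: grow the pool by adding a whole new vdev (here a second 4-disk RAIDZ1)
zpool add tank raidz da4 da5 da6 da7

# Path 2: grow in place by swapping in bigger disks one at a time
zpool set autoexpand=on tank
zpool replace tank da0 da8    # wait for the resilver, then repeat for each disk
zpool status tank             # watch resilver progress
```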
 
I don't get the ZFS expandability arguments. If you have a zpool and need to expand it, just add a new vdev to it with your new drives. Done. If you must, move your data around to balance the vdevs -- since it's CoW, that likely isn't needed for most people though.
If you can't physically add disks and a vdev, then you set autoexpand on, replace a disk, run zpool replace, and wait for the resilver. As long as you thought your zpool and vdevs through when you built it (i.e. didn't go for large vdevs), it's not too slow nor difficult. Even if you have 8- or 10-wide vdevs, it's still just an exercise in patience, not something "impossible" or even "difficult".

I think the new vdev has to be of the same character as the existing vdev. That's a pretty stiff limitation.

In my case, that's adding another four drive array, which is fine.

In hindsight, I think a workaround might be to run the whole thing in a VM. It adds another layer and will slow things down a bit.
 
It absolutely does not. You can mix/match vdevs, though it's not a great idea to.
 
It absolutely does not. You can mix/match vdevs, though it's not a great idea to.
That's an understatement. You could add a single striped drive to a pool to increase storage. The problem is that when that drive fails you lose the ENTIRE pool. Honestly, it's better to just say that you can't do it and save people from themselves.
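To be fair, zpool does push back a little here; adding a lone disk to a RAIDZ pool is refused with a mismatched-replication warning unless you force it:

```
# Refused: "mismatched replication level: pool uses raidz and new vdev is disk"
zpool add tank da9
# Forcing it works - and then losing da9 loses the whole pool
zpool add -f tank da9
```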
 
That's an understatement. You could add a single striped drive to a pool to increase storage. The problem is that when that drive fails you lose the ENTIRE pool. Honestly, it's better to just say that you can't do it and save people from themselves.
Indeed. That's why I referred back to where I originally said to think through the entire zpool and vdev layout before building :D. I've seen some absolute trainwrecks of ZFS installs... the guidelines are so clear, but I think a lot of people want to maximize available storage and don't take the time to fully realize the situation they put themselves in.
 
In hindsight, I think a workaround might be to run the whole thing in a VM. It adds another layer and will slow things down a bit.

Not sure if the slowdown is meaningful; generally speaking, your limitation will still be the drives themselves. They can be passed directly to the VM without an OS abstraction layer for direct hardware access, which is what ZFS wants, and go.

Also, the FOSS continuation of FreeNAS is XigmaNAS, which was formerly named NAS4Free. XigmaNAS is more or less at feature parity and FOSS, while FreeNAS is now a commercial product. Both seem to be a bit ahead of the Linux ZFS ports.
 
Not sure if the slowdown is meaningful; generally speaking, your limitation will still be the drives themselves. They can be passed directly to the VM without an OS abstraction layer for direct hardware access, which is what ZFS wants, and go.

Also, the FOSS continuation of FreeNAS is XigmaNAS, which was formerly named NAS4Free. XigmaNAS is more or less at feature parity and FOSS, while FreeNAS is now a commercial product. Both seem to be a bit ahead of the Linux ZFS ports.
Huh? I mostly follow ZFS, but, I just checked and I can still download FreeNAS (11.1 and 11.2) directly from freenas.org. They sell their support contracts and hardware systems, but you can run FreeNAS without paying a penny.
 
Also, the FOSS continuation of FreeNAS is XigmaNAS, which was formerly named NAS4Free. XigmaNAS is more or less at feature parity and FOSS, while FreeNAS is now a commercial product. Both seem to be a bit ahead of the Linux ZFS ports.
Isn't TrueNAS the commercial product from the guys behind FreeNAS which is...free?
 
Isn't TrueNAS the commercial product from the guys behind FreeNAS which is...free?

Probably?

More of a support thing, which is what they can monetize, but yeah. Main point is that it's not FOSS, and thus beholden to commercial interests for development and support. XigmaNAS is FOSS and the successor to the original FOSS FreeNAS project.

Huh? I mostly follow ZFS, but, I just checked and I can still download FreeNAS (11.1 and 11.2) directly from freenas.org. They sell their support contracts and hardware systems, but you can run FreeNAS without paying a penny.

You can! As above, it's 'free', but it's not FOSS.

I'm mentioning the alternatives, I'm not really in a position to make a hard recommendation.
 
So, update:

I burned through storage at an amazing rate. I was running four 1 TB drives in a RAID 5-type setup. Now I've got six 4 TB drives on the way - and I want to migrate the volume exactly as it is in terms of shares/config/jails, etc., but onto the bigger volume (i.e., six 4 TB drives in a RAID 5-type layout). I think the best way to do it is this: https://forums.freenas.org/index.ph...te-data-from-one-pool-to-a-bigger-pool.40519/

Please give any and all advice as appropriate. I am risking a ton of data.

Edit: I can accommodate 12 drives in my chassis. I have seven 1 TB drives and (soon) six 4 TB drives. What is the best recommended setup for network media streaming?
 
So, update:

I burned through storage at an amazing rate. I was running four 1 TB drives in a RAID 5-type setup. Now I've got six 4 TB drives on the way - and I want to migrate the volume exactly as it is in terms of shares/config/jails, etc., but onto the bigger volume (i.e., six 4 TB drives in a RAID 5-type layout). I think the best way to do it is this: https://forums.freenas.org/index.ph...te-data-from-one-pool-to-a-bigger-pool.40519/

Please give any and all advice as appropriate. I am risking a ton of data.

I said it before - "ZFS is *NOT* a backup of your data!".

Before doing *ANYTHING* backup the data to storage that isn't connected to FreeNAS.

Even if you screw up and blow up the pools so you have to redo your shares/permissions/jails/etc. at least the data is intact.

Since you are running 4x 1 TB in a RAID5-style layout (I assume it is RAIDZ1), you have at most 3 TB of data to back up. Order an extra 4 TB hard drive to act as the backup, along with a USB HDD dock. Copy everything over to the 4 TB drive.
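In ZFS terms that backup can be as simple as the sketch below, assuming the existing pool is named tank and the docked 4 TB drive shows up as da8 (both placeholders):

```
# Turn the single backup drive into its own temporary pool
zpool create backup da8
# Snapshot everything recursively and replicate it to the backup pool
zfs snapshot -r tank@pre-migration
zfs send -R tank@pre-migration | zfs receive backup/tank
```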

When you are finished, keep that extra 4 TB drive around as a cold spare.

If you are flush, buy a pair of 8 TB drives; that way they are big enough to back up your new pool (assuming you are going RAIDZ2 this time).

I always have on hand enough storage to back up all my pools. I usually buy the drive size I intend to upgrade the pools to, so that when I'm ready to upgrade I buy the remaining drives I need and then the new backup drives.

I do this because I prefer to spread my pool drive purchases over time - I hate it when all the drives wear out simultaneously.

Are you keeping the old pool?
 
I was running four 1 TB drives in a RAID 5-type setup. Now I've got six 4 TB drives on the way - and I want to migrate the volume exactly as it is in terms of shares/config/jails, etc., but onto the bigger volume (i.e., six 4 TB drives in a RAID 5-type layout). I think the best way to do it is this: https://forums.freenas.org/index.ph...te-data-from-one-pool-to-a-bigger-pool.40519/
That looks like the best method to migrate filesystems, though I haven't tried it exactly. Note that it works at the zfs, not zpool, level. IOW, first you'll need to configure your destination pool manually. This also means that vdev type is irrelevant here.

And that last part's good, because most people will say you want RAID-Z2 at this point. I'd agree.
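In outline, and assuming the old pool is tank and the new RAID-Z2 pool will be tank2 on the six 4 TB disks (all names here are placeholders):

```
# 1. Build the destination pool manually first - the send/receive works at the dataset level
zpool create tank2 raidz2 da4 da5 da6 da7 da8 da9
# 2. Snapshot the old pool recursively and replicate everything (datasets, properties, snapshots)
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F tank2
```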

Please give any and all advice as appropriate. I am risking a ton of data.
At the very least, use a couple of the new 4 TB drives to make a temp backup. It's probably best to commit to using mirrors for the new pool, so you can easily add those drives later.

There's nothing wrong with later using each 1 TB drive to back up a portion of the larger pool, other than inconvenience.

Edit: I can accommodate 12 drives in my chassis. I have seven 1 TB drives and (soon) six 4 TB drives. What is the best recommended setup for network media streaming?
How many simultaneous streams? If it's just you, 6x 4 TB in Z2 gives excellent space, safety, and sequential bandwidth. For multiple streams (or more IOPS for non-media use), multiple vdevs are better, so mirrors here.
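For comparison, the two layouts for the six 4 TB drives would be created roughly like this (device names are placeholders):

```
# One 6-wide RAID-Z2 vdev: ~16 TB usable, any two disks can fail
zpool create media raidz2 da0 da1 da2 da3 da4 da5
# Three striped mirrors: ~12 TB usable, more IOPS, and easy to grow two disks at a time
zpool create media mirror da0 da1 mirror da2 da3 mirror da4 da5
```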
 