Best option for adding more sata ports

Well, I tried following the steps below, which match the steps in a YouTube video by a Microsoft engineer. Of course it didn't work, because why would it? I couldn't even dismount the HBA. It gave me some BS about the OS not being able to manage it and needing to go into the BIOS or UEFI, but I never even configured this card in the BIOS/UEFI, so I don't know what the hell it's talking about.

https://devblogs.microsoft.com/scri...er-v-vms-by-using-discrete-device-assignment/
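For reference, the gist of the commands from that guide, as best I can reconstruct them (the device name filter and the VM name here are just placeholders, not my actual values):

# find the HBA and grab its PCI location path (assumes a single match)
$dev = Get-PnpDevice -FriendlyName "*LSI*"
$locationPath = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]
# disable it on the host, dismount it from the host, then assign it to the VM
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -LocationPath $locationPath -VMName "FreeNAS"

It's the Dismount-VMHostAssignableDevice step that throws the BIOS/UEFI error for me.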

I've about had it with this shit though. If I can't get FreeNAS set up to properly run and protect my drives and their data, what's the point? I might as well just use these in a 36TB non-redundant DrivePool if I'm just going to lose all my data anyway...
The HBA has its own BIOS separate from your motherboard; I'm guessing the configuration you need to change is in there. Check the documentation for your card. If none came with it, it should be on the website.
 
The Avago user manual really doesn't get into the BIOS settings, but there aren't many settings in the first place. The only applicable setting is to change 'Boot Support' to "Enabled BIOS and OS", "Enabled OS Only", "Enabled BIOS Only", or "Disabled". It was initially set to "Enabled BIOS and OS". I switched it to "Disabled" real quick and then it got stuck on that. Tried rebooting, but it was still stuck. Then all of a sudden, on the third try, I was able to change it again. No idea what happened. Anyway, when it's set to "Disabled", Windows cannot see it at all. This is what PS showed:

PS.png


If I set it to "Enabled OS Only", it's still disabled in Windows, and the feedback from PS is identical to the above. Now I've changed it back to "Enabled BIOS and OS", but of course the PS commands to dismount still do not work. They still say the same thing about it being controlled by the BIOS/UEFI. No idea where to go from here.
 
Hmm....

Any options like IOV or I/O Virtualization in your motherboard BIOS? Could be under chipset/southbridge settings, or buried somewhere else.
 
The only mention of virtualization in the BIOS is in the OC settings (took me forever to find them!), but there's nothing specifically about I/O virtualization.

IMG_20200628_232238.jpg

IMG_20200628_232441.jpg
 
I tried posting about this on the FreeNAS board, and they just ridiculed me for trying to virtualize FreeNAS, especially in Hyper-V. Never mind the fact that freeNAS.org says it's OK. Also, a blog post by a FreeNAS senior software engineer, brought to my attention by someone on Reddit, says that the claims that "FreeNAS can't be virtualized" are categorically false and that running FreeNAS in Hyper-V is perfectly acceptable. Confronted with this, the cult board members just tried to say it was an old article and no longer applicable...as if the OS devolved to be less compatible over the couple of years since the article was written. Ironically, they fed me forum stickies that were posted around the same time, trying to claim the stickies were more recent. The people on the board were also trying to claim I need a minimum of 32GB of RAM dedicated to FreeNAS and that the RAM must be non-ECC (not even sure what that is, but OK). You'd have to be insane to think I need 32GB of RAM for this. They're morons over there who don't know how to set up a server if they think those specs are even remotely reasonable for a simple 3-drive RAID 5/Z setup. Just asinine claims being made all around on that board. Anyway, rant over. Just thought I'd share.
 
If you thought the Linux Master Race was bad... now you have met the BSD inferiority race.

Perhaps some discussion of scale might help put their drivel into perspective: ZFS was created by Sun to essentially be an 'open SAN' filesystem. It does everything that is needed on a single node to support a far larger architecture.

In such installations, yes, you want mountains of ECC RAM and various higher-speed caches to back up the pools, but also, you'd be running production containers and VMs with databases of all stripes and various flavors of compute nodes from those pools.


For just raw file storage? You can get away with the very bare minimums; overshooting those just a tad is plenty. The current box running my pools (the OS has changed... almost a dozen times, the hardware a few times, the pools have stayed the same) is a quad-core with 16GB of non-ECC RAM. The only thing truly special about the system is that it has a 10Gbit Aquantia NIC.


[that NIC is also why I'm running Ubuntu 20.04 instead of FreeNAS; FreeBSD 11, on which the current public FreeNAS release is based, doesn't have drivers for that NIC; the current FreeBSD 12 does, so whenever FreeNAS gets ported to that, I'll likely migrate the OS again; also, this is an entirely separate machine from my desktop, but it also ran FreeNAS in a Hyper-V VM on Server 2016 first]
 
Yeah, you're way over my head on that one, man. Maybe you should go tell them, lol. I'd just look like an idiot trying to explain that.

The conversation over there has progressed into a more useful and meaningful debate now at least; however, they are still more or less sticking by some of the claims being made. They've at least admitted that my build scenario is "not ideal" and that the non-ECC memory issue is apparently proven false now. But beyond that, technical discussions are way above my level. One guy, Yorick, has actually been fairly helpful. He did mention though that despite the FREE-ness of FreeNAS, the board is still largely filled with admins or other people using the OS for business at large scales.

It seems like if PCIe passthrough is required, then I'm stuck, because that is not possible in Windows 10 (only Server 2016). So unless you're suggesting that PCIe passthrough is not required, then maybe I should just start moving on.
 
I mentioned it as a means of getting SMART data from the drives into FreeNAS; instead of passing the drives, you pass the whole controller.
I didn't find the lack of SMART data to be that big of a deal, as it's rarely a good predictor of failure. A scrub repairing files is a much better predictor, as that can easily pop before SMART throws errors.

If a drive is corrupting data, you can just remove it from the VM, put it online in Windows, and check the SMART data with CrystalDiskInfo there. That'll likely get you far more detailed results than FreeNAS could give you. And if the drive is repairing data during scrubs, then the SMART data is only there to support your warranty claim ;).
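Getting at that from the host is just a couple of commands in an admin PowerShell (the disk number is whatever Get-Disk reports for the drive you detached from the VM; 3 is only an example):

Get-Disk                               # note the number of the drive in question
Set-Disk -Number 3 -IsOffline $false   # bring it online so CrystalDiskInfo can read SMART from it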

They've at least admitted that my build scenario is "not ideal" and that the non-ECC memory issue is apparently proven false now.
That's on the OpenZFS wiki I pointed you to above. It's one of those things that got passed around the community from a different perspective than would be understood by individuals not involved in enterprise storage. Mostly, it's summarized as 'ZFS does not replace the need for ECC memory in enterprise storage servers'. It got recast as 'ECC is required for ZFS', but that's no more or less the case than any other filesystem on any other server. ECC is meant to protect against bits being flipped in memory. It's important where a single bit being flipped in flight can cause actual business losses, but not so much for personal storage for data mostly at rest.
He did mention though that despite the FREE-ness of FreeNAS, the board is still largely filled with admins or other people using the OS for business at large scales.
This isn't a bad thing, but the elitism involved can foster an atmosphere that seems hostile to newcomers :)

[I do not participate on their board; FreeNAS is just one distribution based on just one operating system that includes ZFS; it just happens to be the one that's the most lightweight, well-supported, and end-user friendly when it comes to interfaces]
 
It's funny, you are basically saying the opposite of most of these guys about S.M.A.R.T. data and non-ECC memory. And me, being the n00b that I am, I can't tell which information is correct and which is not. Though I'm far more inclined to believe you over them, if for no other reason than 16 years of getting pretty solid advice on this forum.

I was DMed by one of the guys, who apologized for coming off abrasive, so he and I are definitely good now. We then got into a technical discussion which was a little over my head. Considering his alleged experience, it's probably advice worth considering. So I'd be interested to hear your response. And as you can see, he is also a [H]ard member, so perhaps he might even join in this conversation! I won't name-drop out of respect, though.

Hey, I've been lurking in [H]ardForum for a long time myself. Small world.

I've been running FreeNAS since 2015, have seen a few things, and had to rebuild my pool completely once. I've been doing IT professionally since 2007, mostly with networking. I started as a Unix admin and had to deal with both BSD and Linux flavors. I've also done some web administration, software development, and security operations, including firewalls. Finally, I've done desktop administration since Windows for Workgroups 3.11. I've also run VMs in VMware, VirtualBox, and Hyper-V. These days, I worry about integrating IT services.

As to your question about FreeNAS support as a Hyper-V guest, FreeNAS is a very stripped-down FreeBSD, with a custom middleware layer that actively configures system files based on GUI settings. See the Microsoft Hyper-V FreeBSD guest support table here: https://docs.microsoft.com/en-us/wi...supported-freebsd-virtual-machines-on-hyper-v. You'll have to scroll down until you find PCI passthrough/DDA. Hyper-V does support FreeBSD versions 12.x and 11.x with passthrough, but only on Server 2019 and 2016.

Now you may wonder if you can just pass drives directly through to the guest. This sounds like a viable option at first blush, but OpenZFS (which is the software FreeNAS uses to run ZFS) assumes that it's talking directly to the drives at a low level. Going through a hypervisor introduces timing delays that can result in OpenZFS thinking a drive has failed, causing the drive to fall out of the pool.

I get that it's frustrating for a veteran to explain things to a newcomer, but as with any forum, online or offline, you either need to have patience or not participate in the conversation at all. I tried explaining this to them; some were receptive, others not so much. But people forget that newcomers don't just magically know which topics are posted about constantly on boards. And in technical subjects like this, there's the issue of not even having the requisite knowledge to study a topic on your own. That's the case I'm in now. Before last week, I'd never once used Hyper-V or any other hypervisor, nor had I used FreeNAS at all. In fact, my experience is almost entirely limited to the Windows environment, and even that is considered pretty novice by the standards of this board and many others. Trying to make sense of a bunch of tutorials and articles where you do not understand 99% of the terminology is very difficult. FreeNAS's forum is hardly the only online forum guilty of nurturing a mob mentality, and honestly I've seen far worse than them (Reddit comes to mind). But in the end it really helps nobody, and I've long used online forums to expand my knowledge on a given topic, so it kills me to have to deal with immature BS like this on top of it.
 
So I'd be interested to hear your response.
Basically, this is the same problem that RAID controllers have: if a drive takes too long to respond, the 'dumb' controller logic marks it failed. This can obviously happen in Hyper-V as well, but you'd just re-add the drive and tell it to scrub; at worst, it would be out of sync to the point that you'd need to resilver (i.e., treat the drive as a new blank replacement drive).
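In ZFS terms that looks roughly like this, with 'tank' and 'da1' standing in for your pool and device names:

zpool online tank da1    # re-add the dropped drive to the pool
zpool scrub tank         # verify and repair anything stale
zpool replace tank da1   # worst case: treat it as a blank replacement and resilver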

The contention is that it's possible for Hyper-V to get 'behind' enough for this to happen; I can't really argue for or against the point because I can't prove a negative. What I can say instead is that the data will still be there, and since Windows can't touch the drives (they're 'offline'), it'll still be there when you get whatever happened fixed.

I'd only be concerned if and when it happens. You're running a 'converged' setup with a significant user-space component, so it's going to be a bit of trial and error.
 
I think I would rather opt out if I'm going to have to do trial and error on any particular solution. I just do not have the background knowledge for that to be a relatively painless process. If it were a dedicated NAS, sure, maybe I'd give it a try. But this server is first and foremost a surveillance server, and temporarily a workstation, so the risk isn't worth the reward.

I started looking into OpenZFS on Windows, but I think that brings me to the same logical conclusion given its current level of development. Perhaps in the future it'll be something to try. So I guess I'm only left with SnapRAID or Storage Spaces for a Windows-based RAID solution. I will likely pursue SnapRAID, but I'll have to refresh myself on the specific issues unique to that configuration, because there were a few. Otherwise I suppose I could just use Storage Spaces and pray it gets me far enough down the line to be able to convert to an alternative solution. May not be the worst idea ever, so long as I turn off Windows Update.
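From the docs I've skimmed, the SnapRAID side is apparently just a plain-text config plus a couple of commands, something along these lines (the drive letters are made up for illustration, not my actual layout):

parity P:\snapraid.parity             # where the parity file lives
content C:\snapraid\snapraid.content  # content file (keep copies on more than one drive)
content D:\snapraid.content
data d1 D:\                           # the data drives being protected
data d2 E:\
exclude *.tmp

Then it's supposedly just 'snapraid sync' after changes and an occasional 'snapraid scrub', but I still need to read up on the gotchas.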
 
If it will no longer be a workstation when you're done, ZFS makes even more sense, using FreeNAS or otherwise, since you can just wipe the OS when you're done and load something appropriate on bare metal. This is what I wound up doing when I realized that I didn't want to be experimenting with a domain controller on the same system that housed my NAS :).
 
Well, there's still the matter of the surveillance aspect of it all. If I end up switching to DW Spectrum, that is available on Linux. So if that makes things any easier, I could do some combination using Linux to get both running.

My only other concern would be the miscellaneous Windows programs I use periodically, which are related to surveillance and Plex. I'm sure programs as simple as metadata writers can easily run in a Win10 VM, but I worry about ripping/writing from my 4K burner, video editing in Movie Studio, and encoding. If PCIe passthrough is a PITA for a VM in Windows, won't it most likely also be an issue on other OSes?
 
Quite likely; that begs the question of whether the machine really won't remain a workstation, right?

At this point I'd almost recommend building a dedicated NAS machine to host the array(s). Any cheap CPU with built-in graphics, a board with a slot for the HBA and perhaps a NIC, and an enclosure with enough bays for the drives... the stuff isn't expensive. You could probably get an mATX setup that fits the bill.

And I say that because the reality is that you simply don't want daily tasks and problems affecting the services provided by the NAS. Anything running on your current workstation is going to be a compromise. I've found this out the hard way, more than once...
 
Not a chance. Dude, my company is so old school, I'm shocked they even took it this far. We just got business casual dress in 2018...2018! We don't even have casual Fridays! That's not coming until like 2050, probably. And I work in an industry where we do field work some days, so half the department is dressed in Carhartts and old t-shirts, dragging mud in on the new carpet. The other half are in khakis and pants. And it's not like clients ever find their way to any of our cubicles anyway. Ugh, anyway, point is that once this pandemic is over, I won't even have the ability to work from home anymore because my company hates it.

Looking at the long term, I might just build a dedicated NAS and keep this machine for surveillance and miscellaneous tasks. Or I'd just go with one of the pre-built NAS boxes, if any are worth a damn.

But I'm looking for a short-term (~1-3 yr) solution right now, as building/buying a dedicated NAS is not in the cards. I've spent too much recently, and we have other issues to worry about with our house and whatnot. That's why I'm now thinking SnapRAID or Windows RAID. If SnapRAID is problematic with changes in files, then perhaps I can just make it a point to only make changes outside the array before copying over. I will need to take a look at these two options in detail again.
 
Or I'd just go with one of the pre-built NAS boxes, if any are worth a damn.
Most are; it's more a question of features, support, and pricing. As basic file servers, they all mostly just work. I'd single out Synology as being the best at everything else, but for a price; they have features that 'just work' for users that others can only hope to replicate.

Really the only reason I didn't go that route is that I thought I could do better for less, and I really kind of failed at both. I also wanted to learn, so I guess I succeeded there, and that knowledge has certainly helped my career. One piece of knowledge is priceless: the concept of 'least functionality' is both a security feature and a headache reducer, rolled into one!
 
Not that I'm in the market now, but are any of the pre-built boxes based on this illustrious Z file system? Might as well get the best reliability if dropping big coin, right?

Even though it's probably the best route, it will pain me to not use this HBA given its astronomical bang-for-buck value. When I looked up the user manual on this particular card, the specs said it can push thousands of drives! That's crazy! Obviously I'd quickly bottleneck in the power and/or cooling department, but that's really cool for a $49 piece of equipment.
 
Not that I'm in the market now, but are any of the pre-built boxes based on this illustrious Z file system? Might as well get the best reliability if dropping big coin, right?
None do; they all carry forward a collection of Linux technologies, similar to Unraid but perhaps more robust. They also lack the CPU and memory capability to do ZFS right if taxed, as they usually roll ARM-based SoCs or at most low-clocked Atoms until you get to the models that run desktop hardware.
 
Gotcha. Well I guess as long as they work, who cares, right? And by work, I also mean work at protecting data lol
 
More or less? They use their own special sauce in their own distros to tie everything together. That's a lot of work that's not done in your average Linux distro, but it's also work that ZFS does automatically.

Note that none of these companies started development when ZFS on Linux was 'a thing'. The port of ZFS to Linux is a relatively new development given that the tech itself has origins from two decades ago at Sun. It's not that they wouldn't use ZFS at some point, but rather that it wasn't available.

And it still might not be: ZFS hasn't been mainlined in the Linux kernel because its license isn't the same, and basically that opens commercial applications to potential legal action from Oracle. Linus Torvalds doesn't trust Oracle, and really, no one should; but since the code is out there, we can use it regardless!
 
I've seen it mentioned in several places that RAID isn't a backup, but based on everything I've read, I think that's pretty misleading. The arguments in favor of that statement usually mention issues unique to parity arrays and striping, rebuilding issues due to the workload placed on the other drives in the array, fire, flooding, theft, etc. But fundamentally, any RAID array that's not pure striping like RAID 0 is in theory providing some amount of security for your data. If it didn't, there'd be no point in using it. If performance were the only factor, everyone would use RAID 0. If server downtime were the only concern, RAID 1 seems like the ideal solution, since the only downtime would be the time required for the system to switch over to the mirror. The fact that parity exists to rebuild the data on a failed drive makes it a backup solution by definition, just perhaps not a complete, ideal, or foolproof one.

And when it comes to software solutions like DrivePool mirroring, you quite literally are creating a complete, independent backup of the data that could be used instantly, on any similar system, without any configuration, in the event of drive failure. Sure, the physical drive may still be stored in the same physical location as the original, so it too would likely get lost in a fire, flood, or theft, but that only means it's not a complete backup solution either.

I get that using RAID alone wouldn't constitute an acceptable level of security for a business environment, but consumers usually can't afford, and really don't need, 100% complete data security. That's a very expensive proposition, so consumers need to weigh the risks and benefits of each option available. Of course, being a consumer myself, I'm doing just that. I'm only looking for a RAID solution for my Plex server. If I were to lose all that data, it's not going to be the end of the world. I won't lose any money (aside from having to replace hardware), I won't lose my job, my wife won't divorce me...I don't think, lol. It'll just be a royal pain in the ass to rebuild it all myself, that's all.


Sorry to turn this into kind of a rant. It just seems like sometimes people (probably those who work in the industry) lose sight of the big picture. I want the best balance of security, flexibility, and cost. That's highly subjective, of course, but I can tell you that options like cloud storage or physical storage at 2+ independent locations are not on the table in my book... far too expensive. Even mirroring isn't ideal for me due to cost, but when comparing traditional RAID 6 vs. mirroring, it's tough not to just pick mirroring due to the benefits gained by using software like DrivePool or Drive Bender. I'm not really concerned with performance, based on my use case and the fact that I bought 7200rpm drives. Being very new to all this, the simpler solutions are more attractive to me: simpler setup, simpler management, simpler expansion, and simpler recovery.

So with that said, I think I've ruled out a dedicated RAID card for a few different reasons, unless you have a really compelling argument that I haven't heard before. I'm still currently looking into DrivePool+SnapRAID, a FreeNAS VM, and an Unraid VM. VMs are completely foreign to me, so I'm still trying to gauge just how much work it's going to be for a newbie like me to get one set up and running. I'm also primarily leaning towards the Unraid VM option because of the flexibility with expansion. I'm not getting a rosy feeling about Windows RAID solutions, so I think I'd rather not risk it with that option.

RAID is for redundancy and for real-time detection and correction of storage errors, with high uptime (it's not a backup; it's there to keep the data consistent and available with no downtime). FreeNAS ZFS with RAID-Z2 is comparable to a RAID card but can be complicated to set up (it should be on a dedicated system).

DrivePool seems like a good solution for you since everything stays NTFS, it seems to be what you really want, and you already have an HBA card, so you have everything you need to make it work. Do note that it does not offer real-time data protection, just duplication onto two other disks; if the data it copies is corrupted, it gets copied to the other disks in that state. Unlikely, but it can happen.

What you're attempting with hypervisors and running FreeNAS in a VM is not really something I would do (especially if you don't know what you're doing, it's just asking to blow up in your face).

Not sure why a hardware RAID card is not a simple solution (if you're not using hot-swap bays, just label each disk with its ID number so you know which one to unplug when you want to replace it). You don't have to use Linux or anything else, apart from another PC if you want an actual backup.

I just think you're overthinking what you need. Staying with Windows (no Linux), your options below:
DrivePool looks very interesting, as you can always just mount a disk from a broken pool, give it a drive letter, and open it (the files are in a hidden folder, so you just have to turn on "show hidden files").
Or, for actual reliable data redundancy, an LSI hardware RAID card (ideally with a BBU "backup battery" so you can enable write-back cache) set to RAID 6.
Or something plug-and-play on your network like a Synology in SHR-2 mode (two-disk fault tolerance, like RAID 6); it manages everything automatically, and if you get a second Synology box you can have it back up to that as well, very easy to do.

I would not bother messing with a FreeNAS VM or the like; that's just asking for total loss of data.
 
My point about RAID as a backup was merely that you can restore data in the event of a drive loss, not that you should rely on it or that it's even designed for that purpose, just that it can be done. I know that you risk killing another drive in the rebuilding process with RAID 5, and that RAID 5 doesn't necessarily protect against corruption or other sources of data loss, but you do gain at least a slight edge over just running straight disks because of the ability to recover from a disk loss. That edge comes at the expense of potentially losing ALL the data, so there are obviously tradeoffs. That's all I'm saying. It was really more a comment about the wording I've seen used than anything. I'm not going to build a RAID 5 array and assume all my data is protected or anything.

I don't necessarily "want" NTFS or Windows in general, I just don't have a choice. Given the tasks this server is responsible for, building the server on any other OS may not even be possible. But more importantly, even if it is possible to switch OSes, there's no way I could conceivably do it considering this is my workstation at the moment. It can't be down, because I've got bills to pay! To say it's wishful thinking that I could migrate to a new, foreign OS and set up a VM in a matter of hours or even days is the understatement of a lifetime. I can barely even speak Microsoft lingo. Perhaps this could be an option down the line, as IIC & I were just discussing. But in the meantime I need a Windows solution that'll get me by for at least a couple of years or so. Then maybe I'll revisit the idea of a new OS or an external NAS.

I did not consider RAID cards due to the additional risks associated with running them over software solutions. I've got plenty of overhead to handle software RAID, and I don't even need the added speed from striping, so I saw no reason to even risk it with a hardware card. I already have a UPS, so my system should be fine to shut down on its own after a power outage without any worry of data loss. These drives are going to be used almost entirely for reads, so I'm not all that worried about write issues.

But you can pretty much disregard everything else that's been said anyway. Buried in all these posts, I've basically determined that it's impossible or nearly impossible to pass through a PCIe card on Win10 Hyper-V. So I've scrapped the entire idea of FreeNAS in a VM. I'm most likely going to go with SnapRAID+DrivePool now. I just want to look into it again before committing. Haven't had the time to do that yet.
 
Near as I can tell, the only thing you were missing was SMART, right?
 
You're asking the wrong person, lol. This is all new to me, so I'm not even able to determine what I am or am not missing. All I know is that I couldn't start the S.M.A.R.T. service in FreeNAS.
 
That's what I meant, though: you've got the shares working, and transfers to and from the pools/datasets to the Windows host (and thus applications)?

There really isn't more needed; ZFS will keep your data as secure as your hardware is stable. Run regular scrubs and come back to it in a few years when you can put the pools on dedicated hardware.
 
Yeah, but I don't even know if it's working to secure my data. Not being able to turn on the S.M.A.R.T. service concerns me. It should have run twice at this point, one long and one short. But how can I even confirm it? Same thing goes for scrubs. How do I know it's going to perform as expected when it really matters?

That reminds me, I keep forgetting about the option of switching drives around to run these off the board. I can do that with my current configuration. I can put the WD Blue, Seagate Barracuda, and perhaps my one Samsung EVO (the storage SSD, not the VM/FreeNAS SSD) on the HBA. Then I'll have 3 SATA ports for the 3x WD Golds. Does that get me around this passthrough issue?
 
Without more research, I'd expect that SMART won't work regardless of where you plug them in: that functionality is simply not being passed through the software layers. Could be on the hypervisor side, could be whatever driver FreeBSD has for storage inside the VM.

For scrubs, the easy way to see it is directly with ZFS commands in a terminal, with 'zpool status' spitting out all the info, and 'zpool scrub' being the command to initiate a scrub. The status command will let you know how the scrub is going, if it's still going, or how the last scrub went.
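For example, with 'tank' standing in for your pool name:

zpool scrub tank       # kick off a scrub
zpool status -v tank   # shows progress while it runs, and results / any errors afterward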

I'm sure there's a way to do scrub reviews in the FreeNAS UI; I've just been using a collection of different operating systems recently (CentOS 7 and then 8, various Ubuntu spins and derivatives, mostly just waiting for FreeNAS to port to FreeBSD 12 to have full support for my hardware). That's the beauty of ZFS: I've changed OSes countless times, run them in VMs from Windows, and the data is still there. If the current OS got torched and I needed the data, say the OS drive died, then I'd just boot something off a USB stick.
 
I just ran a zpool scrub, and this is the output. Does this look like it's working properly?

Screenshot_20200630-191500.png


Screenshot_20200630-191516.png


Sorry for the ghetto screenshot. I'm still at the office, so I had to remote in from my phone.

I also am not able to configure scheduled scrubs of the boot drive, because the boot drive doesn't come up as a drive for selection during scheduling. Not sure what's going on there...
 
I just ran a zpool scrub, and this is the output. Does this look like it's working properly?
Zero errors; that's what you'll be looking at.
I also am not able to configure scheduled scrubs of the boot drive, because the boot drive doesn't come up as a drive for selection during scheduling.
Meant to respond to this previously. Basically, the boot drive isn't a pool. It's also not that important since it's actually just a file on your desktop drive, and its purpose is to boot FreeNAS to share the pools back to the host.
 
The tutorials I was reading said to scrub the boot drive and keep it on a dedicated drive. I have it on an SSD separate from Windows. If it were to crash, would the pool still be OK? How would I get it configured in that case?

Also, I was just thinking... can I just dual-boot FreeNAS, just to get it on the bare metal? Then slowly work my way through configuring it and setting up a Win10 VM in FreeNAS. Sort of a roundabout way of getting it configured the "right" way without having downtime on my workstation. I could work on the dual-booted FreeNAS in my spare time. Good idea or no?
 
The tutorials I was reading said to scrub the boot drive and keep it on a dedicated drive. I have it on an SSD separate from Windows. If it were to crash, would the pool still be OK? How would I get it configured in that case?
I'm actually not sure what you mean by this. Did you pass another drive through to install FreeNAS on?

But the pool should be okay either way. At worst, you'd just stand up another OS instance with ZFS and import the pool.
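Importing is about as simple as it gets ('tank' again being a placeholder for your pool name):

zpool import           # with no arguments, lists any pools the OS can see
zpool import -f tank   # import it; -f if it complains the pool was last used by another system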
Also, I was just thinking... can I just dual-boot FreeNAS, just to get it on the bare metal? Then slowly work my way through configuring it and setting up a Win10 VM in FreeNAS. Sort of a roundabout way of getting it configured the "right" way without having downtime on my workstation. I could work on the dual-booted FreeNAS in my spare time. Good idea or no?
It's possible; you'd just have to import the pool each time. As to whether it's a good idea, I can't really say.
 
I mean, I didn't pass the boot device through like the HDDs, but that's because it needs to be configured differently in Hyper-V. I mean, maybe there's a way to do it, but I was just following the standard setup. If there's no risk of losing the data on the HDDs in the event the boot SSD goes down, then I won't worry about it.
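For reference, the HDDs themselves went in the standard way from the tutorial, which as I understand it boils down to something like this per disk (the disk number and VM name are just examples):

Set-Disk -Number 4 -IsOffline $true                                        # offline the physical disk on the host first
Add-VMHardDiskDrive -VMName "FreeNAS" -ControllerType SCSI -DiskNumber 4   # attach it to the VM as a physical disk

The boot device is just a virtual disk file instead, which is why it's handled differently.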

If I were to dual-boot FreeNAS, I wouldn't keep the VM too. Honestly, getting FreeNAS set up and a basic pool created wouldn't be the hard part. The hard part would be setting up the Windows VM in FreeNAS, passing through the equipment, and making sure the surveillance and workstation aspects function properly. But I figure I could do that without disrupting the current Windows installation.
 
I mean, I didn't pass the boot device through like the HDDs, but that's because it needs to be configured differently in Hyper-V. I mean, maybe there's a way to do it, but I was just following the standard setup. If there's no risk of losing the data on the HDDs in the event the boot SSD goes down, then I won't worry about it.
You could take just the data drives out, hook them up to another machine using USB adapters, boot the machine using a Ubuntu ISO freshly loaded onto a USB stick (for example, say if the machine were a laptop or something with new hardware), and get your data.
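From a Ubuntu live session that's roughly three commands (pool name is a placeholder):

sudo apt install zfsutils-linux   # ZFS userland tools (may need an 'apt update' first)
sudo zpool import -f tank         # import the pool from the USB-attached disks
sudo zfs mount -a                 # mount the datasets and copy your data off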

If I were to dual-boot FreeNAS, I wouldn't keep the VM too. Honestly, getting FreeNAS set up and a basic pool created wouldn't be the hard part. The hard part would be setting up the Windows VM in FreeNAS, passing through the equipment, and making sure the surveillance and workstation aspects function properly. But I figure I could do that without disrupting the current Windows installation.
It's not a terrible idea, though at that point you might look at hosts other than FreeNAS. FreeNAS is just easy when it comes to setting up, accessing, and managing ZFS pools. As a host operating system, it's rock-solid itself (Apple used the BSD kernel instead of the Linux kernel for OS X all those years ago...), but hardware support isn't the best.

I'd say overall it's not that big of a deal; you can always wipe the second OS and start over and so on.
 
I've since been told that FreeNAS VMs aren't the greatest and that what I'm trying to do won't work. I asked about this on the FreeNAS subreddit too. Honestly, at this point I think I'm just gonna go with SnapRAID or Windows RAID 5. I'm just getting tired of all this, lol.
 

The FreeNAS forums claim another scalp and continue to live up to their reputation. Congrats though for surviving long enough to have some productive chats with people there.
 
It's as if there's only one reason to run a filesystem like ZFS, and thus only one use case for FreeNAS...

They really should just set aside a space with appropriate moderation for stuff like small, low-intensity deployments. If people like us can go from 'WTF is Hyper-V / VirtualBox / VMware' to setting up a virtual instance of FreeNAS, sharing drives or whole controllers into it, and sharing the pools built on those drives to other nodes, then they really should account for that footprint.

Also: when using a system with a hypervisor OS like Proxmox or ESXi, ZFS pools are regularly managed by one of the VMs, not the host... these people have real issues :).
 
Jesus, they've got quite the reputation, don't they? Yeah, they could at least create like a beginners section or something, so it's obvious that you're probably going to be asking dumb and/or annoying questions. I do that all the time here, and everyone's nice about it :)
 
DrivePool does look like a nice solution for what you want it to do (if it fails, you can just mount the drives to get the data), and you already own an HBA card to set it up. Use SnapRAID for data consistency and recovery, but note that it does require you to rerun it every so often to make sure everything is OK.

You don't restore data when a drive fails in RAID; the array just stays online at reduced performance until you replace the disk (but a hardware RAID card's CPU is normally fast enough). It uses the other disks' parity data to regenerate the missing data in real time, and when you replace the disk it rebuilds the data and parity back onto the new one.

RAID 6 with a hardware RAID card is simple enough and assures that the data stays intact if a disk fails: you replace it and it will usually rebuild automatically (if not, you just set the new disk as the replacement and then it does it). You don't really need to do anything apart from making sure it's rebuilding in the LSI MegaRAID software and configuring it to email you when any problem is detected. (I have a Dell server right next to me right now; I pulled two disks and it just kept on going, put the disks back in, and they auto-rebuilt, no questions asked. My RAID card is an old H700 with a write-back backup battery, i.e. BBU.)
 
This is exactly what I just did tonight. I wiped out my VM and reformatted the drives, then switched over to DrivePool+SnapRAID. It was very easy to set up. Did a sync and a scrub on some files as a test. Then I created a registry key to hide the original drives in Explorer. Haven't automated syncs or scrubs yet, but I plan to.
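For the automation, I'm thinking a scheduled task that just runs something like this nightly (the scrub options are the ones the SnapRAID manual seems to suggest, if I'm reading it right):

snapraid sync                # update parity after file changes
snapraid -p 12 -o 10 scrub   # check about 12% of the array, only data older than 10 days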
 