OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

For basic alert and status mails you must set the email method globally (menu Jobs > TLS email > enable/disable TLS); the current default is TLS (e.g. for Gmail).

That was it, thank you!
The TLS email jobs were the default without the necessary libs installed; reverting to regular SMTP on port 25 sorted it out. Since this is all on the local net I'll leave it at that for now. :)

Regards,
Wish
 
Hi again :)
New project, new questions:
1. HGST Ultrastar He12 12TB HUH721212ALE604: 512e or 4Kn?
2. They'll be 12 brand-new drives, no mixing, attached to Intel's C622 on a SuperMicro X11SPM-F/-TF/-TPF. Solaris/Solarish - that will be a later decision. No ZoL, very little chance of ZoF.

I need advice on #1 and general thoughts about #2.

Thanks in advance.
P.S.
The main "user" filling the pool is begging me for a single raidz2/3; my plan is 2 x 6 raidz2 in a stripe (the RAID-60 analog). I'll not surrender :).

P.S.2
Large media files:
40% still-image sequences, where the smallest chunks are 30 MB minimum,
60% "1 GB+" files.
 
512e or 4kn
It does not matter for ZFS; both are 4k physical and ZFS will use either with ashift=12 (4k mode).
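A quick way to double-check after the pool is built (just a sketch; "tank" is a placeholder pool name):

# every top-level vdev should report ashift: 12 on these disks
zdb -C tank | grep ashift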

Dual vdev (e.g. 2 x 6 Z2) vs one vdev (Z2/3)
Sequential performance is quite similar. With dual vdevs you lose some capacity, but your pool has twice the iops. That is important for small random io (this includes access to metadata) and for resilvering time.
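Just to illustrate the planned layout, a minimal sketch with placeholder Illumos device names (adjust cXtYdZ and the pool name to your system):

# two raidz2 vdevs of six disks each, striped into one pool
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

# both raidz2 vdevs show up as top-level vdevs of the same pool
zpool status tank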
 
Yep _Gea
I'm grateful, maybe for the double-digit time in a row. The time to move a step forward has come.
OFF TOPIC :
A month ago I built a media storage box - LSI 9631-8i, RAID 10, SATA multipath, 8 used 4TB HGST HUS724040ALE640, Asus X79 motherboard, Win 2012 R2 Essentials. An emergency build, and the users are satisfied, but I'm not impressed with the result at all. Not to bash LSI of course - best in its class - but the lack of setup/tuning options and especially the inadequate documentation are annoying.

Will report the final outcome of the present effort - a week to wait for the motherboard's delivery though.

Regards!
 
Do you mean LSI 9361-8i ?
This is a RAID 5/6 controller, not good for ZFS. Use a pure HBA (LSI 2008, 9207 or 3008 chipsets), best with IT-mode firmware; IR firmware is ok - nothing else.
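If you are not sure which firmware a LSI SAS2 card is running, the vendor's sas2flash utility can report it (a sketch, assuming the tool is installed; controller index 0 is a placeholder):

# list all LSI SAS2 controllers with firmware version and product id (IT vs IR)
sas2flash -listall
# more detail for the first controller
sas2flash -c 0 -list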
 
Do you mean LSI 9361-8i ?
This is a RAID 5/6 controller, not good for ZFS. Use a pure HBA (LSI 2008, 9207 or 3008 chipsets), best with IT-mode firmware; IR firmware is ok - nothing else.

It's running its own RAID firmware, managed with LSI Storage Authority under Windows, a pretty "default" config - excuse me for the misunderstanding.
 
Anyone with experience installing the latest LTS OmniOSce on a UEFI mobo? I have a new Supermicro A2SDi-2C-HLN4F; BIOS and IPMI firmware are all up to date. I've been at it for 2 days and noted that the mobo is fine when booting a UEFI bootable USB stick or a virtual CDROM mounted in IPMI (tested with Ubuntu).

Tried many methods with the OmniOSce ISO and DD image on a USB stick and CDROM, and the mobo would never recognize it, even in Legacy or Dual boot mode. I also looked at launching the boot by copying the ISO contents and using grub and the EFI shell.

Any suggestions would be appreciated at this point.
 
I am playing around with a 32 x 2TB drive chassis (actually, the terrible 3ware 9750 controller can only export 32 units, and I am playing with single disks). But napp-it / OpenIndiana is only seeing 16 drives? Is there a fix, something I need to change to get it to detect all drives?

Thanks!

Depending on the disk driver there is a 16-disk limit - besides that, your RAID controller is quite a bad choice for ZFS. You may want to use an expander, or the LSI 9305-24 (supported by newer Illumos systems via the mpt_sas driver), or simply several HBAs with fewer ports.
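To see what the OS itself enumerates, independent of what napp-it displays, the standard Illumos tools are enough (sketch):

# list every disk the kernel currently sees
echo | format
# per-device details and error counters
iostat -En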
 
Anyone with experience installing the latest LTS OmniOSce on a UEFI mobo? I have a new Supermicro A2SDi-2C-HLN4F; BIOS and IPMI firmware are all up to date. I've been at it for 2 days and noted that the mobo is fine when booting a UEFI bootable USB stick or a virtual CDROM mounted in IPMI (tested with Ubuntu).

Tried many methods with the OmniOSce ISO and DD image on a USB stick and CDROM, and the mobo would never recognize it, even in Legacy or Dual boot mode. I also looked at launching the boot by copying the ISO contents and using grub and the EFI shell.

Any suggestions would be appreciated at this point.

You may ask at illumos-discuss, where the devs are around:
https://illumos.topicbox.com/groups/discuss
 
Ha, yesterday I played with UEFI vs BIOS installs, but with Solaris 11.4 on an Asus X79 Sabertooth. It worked in both cases.
USB stick prepared with Rufus under Win7: MBR scheme for the BIOS install, GPT partition scheme for UEFI.
BIOS settings at defaults except the EFI/legacy boot setting.
In UEFI mode, when replacing a boot mirror drive, you need to reinstall grub manually - I read about it, have not tested it myself.
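On Solaris 11.4 that manual reinstall is normally done with bootadm rather than calling grub directly; a minimal sketch (also untested here; "rpool" is the usual root pool name):

# reinstall the boot loader on all disks of the root pool
bootadm install-bootloader
# or name the pool explicitly
bootadm install-bootloader -P rpool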
Hope this can help.

Edit:
Managed to install OmniOS on the same mobo - GPT.
Legacy USB3 support OFF and nothing plugged into the USB3 ports; the latter causes OmniOS to crash between the first screen and the language selection.
 
_Gea is that client only, or has SMB 2 server protocol support also been added to the kernel? BTW do you know if it is included in the latest OmniOS stable [what was it, ver. 151028?]?

_Gea For some time I've been meaning to report snapshotting problems with napp-it [cannot log in to my old account here @H]
v 18.12 [I believe it was q1] - keeps creating new snapshots without deleting old ones; in a few weeks there are literally hundreds of them. This was only solved when I reverted back to v 18.01

v 18.01 [and probably earlier versions] - always creates ZERO snaps; also doesn't delete older ZERO snaps

napp-it [in general, as far back as I can remember] seems to ignore the number of snaps to keep; for example, if I have the number of snaps to keep set to 10, there are always 11 snaps for that job!

@everyone / anyone
Now for the main issue: how do I migrate users/groups and napp-it settings to a new OmniOS install? Currently I am using SOLARIS 11.x, but as this is unsupported & quite old, I would like to move to a supported OS, like OmniOS/SmartOS.

Using icacls I can see that SOLARIS users/groups have UUID/GUID, just like Windows. So I'm guessing I'd have to backup, then restore users & groups. But no matter where I look I can't seem to find any info or guide online. Also which users & groups should be backed up & restored?

I'm not sure if SOLARIS can be upgraded in place with OmniOS, but I'd like to avoid that if at all possible. With a clean install at least I don't have to worry about something going wrong & hosing the install, or something breaking during the upgrade.

I still remember going through a lot of difficulties when compiling & installing SSL/TLS, UPS drivers and setting up email notifications on SOLARIS 11.0. Is this still the case with OmniOS, or is the whole process a lot more improved now? Anything I need to know/should be wary of before installing OmniOS [especially regarding installing SSL/TLS, the UPS driver & setting up email notifications]?
 
I still remember going through a lot of difficulties

Switch to Linux. Might as well do it now with perhaps a live installation and get experimenting as OpenZFS development has switched to ZFS on Linux (ZoL) as the primary branch, with changes being backported to BSD and related distros.

[off-topic, random, and not helpful to the issue I admit, however, it seems we're at a turning point where Linux is going to overtake BSD in the storage space among others that have still been bastions of Unix interest]
 
_Gea is that client only, or has SMB 2 server protocol support also been added to the kernel? BTW do you know if it is included in the latest OmniOS stable [what was it, ver. 151028?]?

SMB 2.1 client support (for the kernel-based SMB service) is in 151030 LTS (long term stable), available next week.
https://github.com/omniosorg/omnios-build/blob/r151030/doc/ReleaseNotes.md You need it mainly to join an AD domain where SMB1 is disabled.
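A minimal sketch of the AD join on the Solarish side (the domain and the admin account are placeholders):

# join the kernel SMB service to an Active Directory domain
smbadm join -u Administrator mydomain.example.com
# show the current workgroup/domain membership
smbadm list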

btw
Current napp-it supports 151029 bloody, so it should work with the new LTS.

For some time I've been meaning to report snapshotting problems with napp-it [cannot log in to my old account here @H]
v 18.12 [I believe it was q1] - keeps creating new snapshots without deleting old ones; in a few weeks there are literally hundreds of them. This was only solved when I reverted back to v 18.01

v 18.01 [and probably earlier versions] - always creates ZERO snaps; also doesn't delete older ZERO snaps

napp-it [in general, as far back as I can remember] seems to ignore the number of snaps to keep; for example, if I have the number of snaps to keep set to 10, there are always 11 snaps for that job!

This should be fixed in current napp-it:
https://napp-it.org/downloads/changelog_en.html


@everyone / anyone
Now for the main issue: how do I migrate users/groups and napp-it settings to a new OmniOS install? Currently I am using SOLARIS 11.x, but as this is unsupported & quite old, I would like to move to a supported OS, like OmniOS/SmartOS.

Solaris 11.4 is quite new and still the fastest ZFS server, and unique with SMB 3.11 (kernel-based SMB), NFS 4.1, ZFS encryption, dedup2, sequential resilvering, and removing a disk from all sorts of vdevs, among other things. But there is no support without a subscription. You can just install 11.4 and import the pool.

Using icacls I can see that SOLARIS users/groups have UUID/GUID, just like Windows. So I'm guessing I'd have to backup, then restore users & groups. But no matter where I look I can't seem to find any info or guide online. Also which users & groups should be backed up & restored?

The kernel-based SMB server on all Solarish variants uses the Windows SID as the permission reference, stored as a ZFS extended attribute. This allows a backup and restore to another server with all permissions intact. Unlike other solutions, Solarish additionally uses Windows ntfs-alike ACLs.

So in AD mode, your users remain valid. In workgroup mode, you must recreate all users with the same Unix uid/gid; permissions then remain intact. Otherwise you must reset the ACLs and set them anew, just like you would for a disk that you connect to a new Windows machine.
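A minimal sketch of recreating a workgroup user on the new server with the uid/gid it had on the old one (all names and ids are placeholders):

# recreate group and user with the identical gid/uid from the old server
groupadd -g 1000 media
useradd -u 1001 -g 1000 -d /export/home/paul -m -s /bin/bash paul
# setting the password also creates the SMB password hash,
# provided the pam_smb_passwd module is enabled in the PAM configuration
passwd paul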


I'm not sure if SOLARIS can be upgraded in place with OmniOS, but I'd like to avoid that if at all possible. With a clean install at least I don't have to worry about something going wrong & hosing the install, or something breaking during the upgrade.

Not possible, you must reinstall.
An import of a Solaris 11 pool is only possible when the pool is v28/5. You cannot import a newer Solaris pool with Open-ZFS. You must then create a new v5000 pool and copy the files over, e.g. via robocopy or rsync. Unlike robocopy, rsync cannot preserve the ntfs4 (NFSv4-style) ACL permissions.
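If the new pool must stay importable by both Solaris 11 and Open-ZFS, a hedged sketch with placeholder pool and disk names:

# create a legacy-version pool: pool version 28, filesystem version 5
zpool create -o version=28 -O version=5 backup raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
# verify
zpool get version backup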

I still remember going through a lot of difficulties when compiling & installing SSL/TLS, UPS drivers and setting up email notifications on SOLARIS 11.0. Is this still the case with OmniOS, or is the whole process a lot more improved now? Anything I need to know/should be wary of before installing OmniOS [especially regarding installing SSL/TLS, the UPS driver & setting up email notifications]?

TLS has a lot of dependencies: OpenSSL, Sun SSH (in the case of Solaris) and Perl.
Usually you find hints at https://napp-it.org/downloads/tls.html

If your TLS mail account is at Google, you can also use their open SMTP relay. This allows unencrypted mail over port 25:
https://support.google.com/a/answer/176600?hl=en

For UPS support, or in general if you need an app (apcupsd etc.), you can use pkgin;
for setup see https://napp-it.org/downloads/binaries_en.html

For available apps for Illumos/OmniOS, see Joyent: http://pkgsrc.joyent.com/packages/SmartOS/2019Q1/x86_64/All/
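Once pkgin is bootstrapped per the guide above, pulling in the UPS daemon is roughly this (a sketch; the exact SMF service name depends on the package):

# refresh the repository index and install apcupsd from the Joyent repo
pkgin update
pkgin install apcupsd
# enable it via SMF (check svcs -a for the exact service name)
svcadm enable apcupsd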
 
IdiotInCharge Thanks for that little tidbit; didn't know OpenZFS dev had moved to ZoL as the primary branch.

LINUX? Feels way too hacky/stabby and bolted together. Compared to that, UNIX, or for that matter any *BSD, feels a lot more stable and streamlined. Not to mention SOLARIS or its derivatives [OmniOS, OpenIndiana, SmartOS, etc.] feel like the Cadillac versions of *NIXes. I cannot emphasize enough just how much I've been spoiled by SOLARIS/derivatives, and just how much I like them.

I mean, where else am I going to get proper/stable BEs [Boot Environments] or reboot -f [fast reboot; on SOLARIS just typing 'reboot' also does the same thing]? reboot -f loads a new kernel and kills the old one, but never initiates a cold/hardware reboot, so it bypasses the firmware [BIOS] POST and the OS boot loader*. This is just to name a few things, but there are lots more.
*Have you ever seen how long it takes for server-class motherboards to go through initialization and start the OS boot loader? Hell would freeze over ...OK, I exaggerate [but only a bit].

Even the install media [well, mostly text/AI] of around 300 MB +/- reminds me of the old NT 4.0 CD from back in the mid 90's!

I once clocked almost 2 years of non-stop server uptime; the only reason this streak didn't continue was that they were replacing a pole in the neighborhood and the UPS doesn't last several hours. This server is quite old* too, but SOLARIS has been running stable without a single hiccup, ever [I've never had a single error on the pool and I use the pool every single day, literally treating it like a DAS]. More than half a dozen drives have failed and been replaced in this pool, too.
*Chassis, backplane and HBA are commercial-grade hardware, but the rest were scrounged-up commodity parts. The motherboard/RAM in this server is almost 13 years old, the CPU more than 11 [only dual core, also the cheapest/lowest rung performance-wise] and the 2 sticks of RAM are not even ECC!

BTW, did they ever resolve the licensing issue around including ZFS in the LINUX kernel [CDDL vs. GNU/GPL]? I bet it's still tacked on as a hack job [ok, a module/driver].

Don't get me wrong, LINUX is fine for some things and I do run quite a lot of things with it: hardware-based firewalls/routers/WAPs (mostly SoCs built around MIPS, sometimes other archs too), but software-based stuff (running on generic hardware/in a VM) and critical stuff is always some kind of *BSD. Most smartphones/TVs/IoT devices and even most cheapo/reasonably priced tablets/notebooks also run some kind of LINUX. Believe it or not, I even run BASH on a Windows machine! And last but not least, I always run some sort of SOLARIS [or one of its derivatives] as a 'proper' ZFS server.

The only way I'm abandoning SOLARIS or one of its derivatives for LINUX is if they are no longer maintained and the other *BSDs are also abandoned. Both of which seem quite unlikely at this point, and I'm quite thankful for that little fact.

Please don't take this as criticism of your post, or even of LINUX for that matter; it's merely my opinion. I'm also thankful that you took the time to inform/update me on what's happening with OpenZFS [yup, that news came as a total shock].
 
There's still a conflict with where the Linux kernel is going at 5.0+ and the license ZFS uses, and I'm not sure how that's going to be resolved.

However, Red Hat (and thus CentOS) are stepping up to 4.18 for version 8.0, so it won't be an issue for 'commercial' Linux.

Main point is the progress of the Linux ecosystem, in terms of application development, hardware support, and now, ZFS.

In response to your purpose, I'd say that I understand- I moved off of FreeNAS due to a lack of drivers, but if the hardware is static and remains dedicated, there's not much reason to adjust the software.
 
It is not so that "Open-ZFS switched to Linux".
Open-ZFS is the platform that coordinates ZFS development for FreeBSD, Illumos and Linux (and OSX, and upcoming Windows). In the past, 100% of all Open-ZFS developments were done on OpenSolaris, then Illumos, and then adapted to FreeBSD and Linux.

This has changed with Linux due to the huge number of Linux developers compared to FreeBSD and Illumos. Now you see more and more features coming first on Linux that must then be adapted to FreeBSD and Illumos. Real ZFS encryption, similar to the one found in Solaris, is an example, see https://github.com/openzfs/openzfs/pull/489 It is based on the OpenSolaris/Illumos bits but was finished by Datto, a Linux company. Even FreeBSD now adds features, like the development of the "add a disk to a raid-z" project.

But this is why Open-ZFS was created, to coordinate these things. This is not bad, this is good for everyone who wants a free ZFS filesystem that is compatible on all platforms.
 
IdiotInCharge that was what I heard about the licensing, years ago.

Yes, "hardware support" [a.k.a. driver availability] is the main reason I still recommend LINUX distros [mainly Ubuntu, though I personally prefer Debian/CentOS] to anyone who wants to try another OS other than Windows. Even FreeBSD [forget about the other *BSD's] is miles behind LINUX in this case, and doubly/triply so when it comes to SOLARIS.

With Windows/LINUX you just install on a PC and voila, most of the hardware just works. For anything else that doesn't work right out of the box, most likely there's a driver available for it. This is less so with any of the *BSD's, and for SOLARIS you always want to pick the hardware by making sure it's on the HCL [Hardware Compatibility List].

As for FreeNAS, well it's just a GUI frontend for a lot of the stuff usually done only on a terminal in FreeBSD.

Back in the mid 90's, I couldn't even get FreeBSD to install on a system with an Intel motherboard [and CPU], because it wouldn't recognize the IDE controller and so couldn't find any HDD! After trying for a few days, I just gave up and installed LINUX [RedHat v1.x, I believe it was, and later SLACKWARE 2.x]; it would install and run just fine. But I had to reinstall Windows 95 several times a day, just so I could get on the internet*.
*You see, LINUX back then didn't have any softmodem [winmodem?] drivers [those ISA/PCI modems that didn't have any onboard DSP, so relied on drivers & the system CPU to do most of the signal processing].

Someone did write a softmodem driver, but it was source code only and I could never get it to compile properly [probably was for a completely different model too].​

Yeah, so for hardware that usually doesn't go through a few cycles/generations of technology upgrades, UNIX's [*BSD] are just fine; even SOLARIS! The way I see it, for hardware support/driver availability, it's:
  1. Windows
  2. LINUX
  3. FreeBSD
  4. Non-Free *BSD + MacOS
  5. SOLARIS
  6. Name your own obscure OS; ok I'll start, MINIX*
    • Few more honorable mentions: AIX, HP-UX, [and the defunct] QNX, IRIX and Xenix!


_Gea thanks for the heads up regarding new OmniOS LTS release.

Yeah, while trying to catch up on what IdiotInCharge had said several posts up [ZoL being the main dev branch for ZFS], I had another shock: ZoW [ZFS on Windows]!

"[T]he huge amount of Linux developers compared to Free-BSD and Illumos", actually that's one of two reasons I've always wanted MS to fully adopt ZFS [natively] on Windows [and slowly discard Fat32, NTFS, exFAT (aka. Fat64), just like they did with the older Fat12/Fat16]. MS could literally inject both, [huge amount of] funds and [lots of] developers, into the project.

The other reason being all the things you would be able to do in Windows with proper [and native] support for Boot Environments, Snapshots, datasets and volumes/zvols. Not to mention other cool SOLARIS tech [if adopted] like network virtualization [CrossBow], Zones and last but not least reboot -f; that last one I hope, one day, every other non-SOLARIS OS out there would adopt.


IdiotInCharge / _Gea
BTW, several MS news:
  1. MS is porting DTrace to Windows
  2. WSL 2 [Windows Subsystem for Linux] will have a customized LINUX kernel running natively on Windows
    • So no more performance degrading emulation of WSL/1
And I was excited when BASH could be run on windows; interesting times ahead indeed! [Finally, I think I can get over the melancholy brought about by how CRAPor-acle swiftly strangled to death many of SUN's innovative techs soon after getting their grubby little hands on it all.]
 
Started on FreeNAS, virtualized in Hyper-V, and moved to CentOS after passing the pools through a few other distros. BSD still doesn't support the Aquantia NIC I'm using, and since I'm using CentOS / RHEL at work, it made sense to try it.

For my purposes it's also a better fit, but in this case, that's because I'd prefer the software availability. If it were a pure appliance I'd just make sure the hardware was there.

[I'd liked the idea of using FreeNAS in Hyper-V, but quickly realized that I'd prefer something more static than Windows Server]

[with respect to MS, they're still slowly developing ReFS; along with Storage Spaces, it's feature-competitive with ZFS, but it's likely to be too late to market to gain any traction]
 
Yeah, so for hardware that usually doesn't go through a few cycles/generations of technology upgrades, UNIX's [*BSD] are just fine; even SOLARIS! The way I see it, for hardware support/driver availability, it's:
  1. Windows
  2. LINUX
  3. FreeBSD
  4. Non-Free *BSD + MacOS
  5. SOLARIS
  6. Name your own obscure OS; ok I'll start, MINIX*
    • Few more honorable mentions: AIX, HP-UX, [and the defunct] QNX, IRIX and Xenix!

This is correct for multimedia or desktop use (and you will have trouble finding any driver support for OSX outside Apple's own offerings).
When you restrict this to mainstream server use, it is easy to find hardware that is supported on all platforms besides OSX.

If you use the hardware best suggested for a ZFS system under FreeBSD or Solarish, you will see that this is quite often also the best bet on Linux and Windows (assuming software raid).
 
If you use the hardware best suggested for a ZFS system under FreeBSD or Solarish, you will see that this is quite often also the best bet on Linux and Windows (assuming software raid).

Agreed. In hindsight, I should have shelled out for an Intel adapter for the server. I just took issue with the available server pulls, as they're all x8 PCIe cards, while the newer releases are so expensive yet offer basically nothing over the Aquantia chipset in a home-server environment.
 
OpenIndiana Hipster 2019.05 is out
Release notes: 2019.05 Release notes - OpenIndiana - OpenIndiana Wiki

Editions:
GUI/live for server and desktop use with a Mate 1.22 desktop
Text (prefer this for server use)
Minimal (minimal Illumos)

About OpenIndiana:
OpenIndiana is always based on the most current Illumos (the common development platform for all Illumos-based distributions), with quite a huge repository including many server and desktop apps. This is different from the new OmniOS 151030 LTS (long term stable) from May 2019, which is a freeze of the current Illumos state for the next 2 years with only security and bug fixes. Aside from extras like LX zones and Bhyve, OmniOS 151030 is quite similar to OpenIndiana 2019.05 text.

www.napp-it.org
 
The way I see it, for hardware support/driver availability, it's:
  1. Windows
  2. LINUX
  3. FreeBSD
  4. Non-Free *BSD + MacOS
  5. SOLARIS
  6. Name your own obscure OS; ok I'll start, MINIX*
    • Few more honorable mentions: AIX, HP-UX, [and the defunct] QNX, IRIX and Xenix!

Deeply concur. In terms of server hardware support I find Solaris right behind Windows and on par with Linux, Solarish/BSD behind that, and of course MacOS at the bottom.
 
I have not found the time to try it, but the SMB 2.1 client in OmniOS 151030 is new, which means that you can access an SMB share on a Windows machine from OmniOS, or join an AD domain where SMB1 is disabled.

On the server side, the kernel-based SMB server in OmniOS or Solaris has been SMB 2.1 for a long time (the Solaris SMB server is 3.11).
 
Yeah, while trying to catch up on what IdiotInCharge had said several posts up [ZoL being the main dev branch for ZFS], I had another shock: ZoW [ZFS on Windows]!

https://openzfsonwindows.org/

Anyone here tried it?
 
Let us dream..
Windows + ZFS + Solarish-like boot environments (bootable ZFS snaps) to boot into a former system state from before one of the last bad updates...

Sadly only a dream!
 
Anyone doing an AIO setup with a Threadripper 2? I'm thinking I'll get a Threadripper 3 when it comes out, and it would be nice to combine two servers into an AIO setup with PCIe passthrough.
 
I don't really find an 'AIO' setup to be desirable, depending on what you mean by that. I have no problem with running VMs for other purposes (and do so myself), depending on what those purposes are, but it sounds like you're trying to go beyond 'file server' to 'processing station'. Virtualized or not, trying to do performance-bound work on a file server introduces variables that can affect availability, and that means more attention to management and maintenance.

This is really a primary downside to ZFS as a NAS filesystem*: you want a separate system to run it, and it's difficult to pare that down to an 'appliance'. For most people that have real work to do, a DAS or NAS just makes more sense.


[*obviously ZFS makes sense as a server filesystem or workstation filesystem where supported; I'm speaking specifically with respect to the desire to separate mass storage from processing, primarily to maintain stability and availability]
 
I don't really find an 'AIO' setup to be desirable, depending on what you mean by that. I have no problem with running VMs for other purposes (and do so myself), depending on what those purposes are, but it sounds like you're trying to go beyond 'file server' to 'processing station'. Virtualized or not, trying to do performance-bound work on a file server introduces variables that can affect availability, and that means more attention to management and maintenance.

This is really a primary downside to ZFS as a NAS filesystem*: you want a separate system to run it, and it's difficult to pare that down to an 'appliance'. For most people that have real work to do, a DAS or NAS just makes more sense.


[*obviously ZFS makes sense as a server filesystem or workstation filesystem where supported; I'm speaking specifically with respect to the desire to separate mass storage from processing, primarily to maintain stability and availability]
Are you sure you understand what an AIO setup is? It uses PCI passthrough for the HBA, so there is no performance impact on physical drive access. It then shares the ZFS storage through an internal 10Gb virtual network to the other guests on the same box, which performs far better than any physical medium you'd use to connect to your NAS.
 
Are you sure you understand what an AIO setup is? It uses PCI passthrough for the HBA, so there is no performance impact on physical drive access. It then shares the ZFS storage through an internal 10Gb virtual network to the other guests on the same box, which performs far better than any physical medium you'd use to connect to your NAS.

I do- I understand that it works, and in limited scenarios it can work very well. I did exactly this with FreeNAS in Hyper-V on Server 2016, which worked great until an update borked the host. Have no idea what happened there as it was an ongoing learning project, but it did push me to separate the two. CentOS 7 now runs on the hardware, and Server 2016 is going in another box.

The further challenge is that doing this and doing a significant amount of work in other VMs or directly on the host can affect stability and availability of the pools in a variety of ways. I don't see as much of an issue with lightweight VMs, but doing significant processing is where I'd want to get the functions physically separated.
 
I do- I understand that it works, and in limited scenarios it can work very well. I did exactly this with FreeNAS in Hyper-V on Server 2016, which worked great until an update borked the host. Have no idea what happened there as it was an ongoing learning project, but it did push me to separate the two. CentOS 7 now runs on the hardware, and Server 2016 is going in another box.

The further challenge is that doing this and doing a significant amount of work in other VMs or directly on the host can affect stability and availability of the pools in a variety of ways. I don't see as much of an issue with lightweight VMs, but doing significant processing is where I'd want to get the functions physically separated.
Oh, never mind - I assumed you were using a real hypervisor. VMware does a much better job at this; you may want to give it a try before you write off the AIO setup as a whole.
 
Oh, never mind - I assumed you were using a real hypervisor. VMware does a much better job at this; you may want to give it a try before you write off the AIO setup as a whole.

I have :).

I find VMware to be a bit limiting - I like it overall, but I also prefer to have something more flexible on the host. And I take exception to the implication that CentOS 7 is not 'a real hypervisor'; KVM is integral to the Linux kernel and is most certainly a Type-1 hypervisor. It's just less purpose-built and more flexible.

But I get your meaning and don't materially disagree :D
 
I have :).

I find VMware to be a bit limiting - I like it overall, but I also prefer to have something more flexible on the host. And I take exception to the implication that CentOS 7 is not 'a real hypervisor'; KVM is integral to the Linux kernel and is most certainly a Type-1 hypervisor. It's just less purpose-built and more flexible.

But I get your meaning and don't materially disagree :D

What? You said you were running FreeNAS on Hyper-V and a Windows update borked the host. Hyper-V is not a real hypervisor. Anyway, enough thread-jacking here - as if _Gea needs any more pages in this thread.
 
What? You said you were running FreeNAS on Hyper-V and a Windows update borked the host. Hyper-V is not a real hypervisor. Anyway, enough thread-jacking here - as if _Gea needs any more pages in this thread.

Hyper-V is also a Type-1 hypervisor... like I said, I get what you're saying about 'real', but I could just as easily toss OpenIndiana in KVM or Hyper-V, pass my drives through, add the pools and start sharing that back out to the host and the network. VMWare provides both a leaner host and an excellent management suite, so I get the draw, but I also get the limitations.
 
Yeah,

I was thinking something along the lines of the newest Ryzen / Threadripper 3 that is supposed to come out later this year booting ESXi (assuming it is compatible), with a 16-32 core CPU and Win 10 / Linux running in guest VMs:

PCIE1 - 2080ti - Windows 10
PCIE2 - SAS card - Linux ZFS array
PCIE3 - SAS card - Linux ZFS array
PCIE4 - 10g net card - Linux ZFS array
CPU - 4-6 cores to ZFS, the rest to Win 10.

For a home setup where you don't want to run two servers (to save on your power bill), it would be really nice if it worked.
In the past AMD and ESXi never seemed to play well together, which is why I was wondering if anyone has tried this with a Threadripper 2.
 
So, three questions mikeo:
  • If you're going to pass your drives to a VM, why not use one of the Solaris-derived Unix distros for best performance?
  • If you're going to use Linux, why not put the pools on the host, and use a more modern but server-based distro (ESXi is some borged 2.6.x kernel)?
    • CentOS 8 may be a good candidate with the 4.18 Kernel, assuming everything else lines up
  • Why put the 10Gbit card on the VM? This adds another layer of abstraction between your Windows 10 guest and the networking stack, does it not?
If you're going to be gaming, Threadripper seems a bit out of place with limited clockspeeds; even Zen 2 / Ryzen 3 might still be sub-par in terms of per-core performance relative to current Skylake-based releases, but Ryzen 3 would likely be the best compromise. I assume that you have other uses for the extra cores, as games need six at most, and possibly also for the 2080Ti.
 
So, three questions mikeo:
  • If you're going to pass your drives to a VM, why not use one of the Solaris-derived Unix distros for best performance?
  • If you're going to use Linux, why not put the pools on the host, and use a more modern but server-based distro (ESXi is some borged 2.6.x kernel)?
    • CentOS 8 may be a good candidate with the 4.18 Kernel, assuming everything else lines up
  • Why put the 10Gbit card on the VM? This adds another layer of abstraction between your Windows 10 guest and the networking stack, does it not?
If you're going to be gaming, Threadripper seems a bit out of place with limited clockspeeds; even Zen 2 / Ryzen 3 might still be sub-par in terms of per-core performance relative to current Skylake-based releases, but Ryzen 3 would likely be the best compromise. I assume that you have other uses for the extra cores, as games need six at most, and possibly also for the 2080Ti.

  • I migrated from OpenIndiana to Linux when ZFS on Linux became stable a while ago; I just prefer Linux to Solaris (personal preference).
  • I figure ESXi is a more stable host than running bleeding-edge stuff, but maybe the non-VMware options have improved since I last messed with them years ago?
  • The 10G card in the VM would be there to connect to another computer on the network down the line - maybe my laptop, if USB-C to 10GbE adapters ever get as cheap as those $100 PCIe 10G cards I have right now going from Windows to the current Linux ZFS box. The Win 10 VM would be connected to the Linux VM with the pools by a fast virtual network.
Yeah, the lower Threadripper clock speeds are why I skipped this generation, but rumors are the next Ryzen 7 could clock up to 5 GHz with 16 cores, so I might go that route, though the extra PCIe lanes on TR would be nice. As far as use cases go, I do just as much video editing with 4K H.265 as I do gaming, so I'm willing to give up some gaming performance for a huge increase in render speeds.

Update: it seems TR3 was pulled from a 2019 release, and rumors are the 16-core Ryzen 7 (or 9?) being announced soon will have more PCIe lanes.
 