Perl modules seem to have installed successfully.
However, I still cannot enable TLS email using the web GUI.
Unencrypted email has also stopped working with Google's SMTP server after working well for a while.
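In case it helps isolate whether the problem is on the Perl side or in the connection itself, a quick TLS handshake test from the OmniOS shell (assuming the standard smtp.gmail.com submission port here; adjust host and port to whatever the web GUI is actually configured with):

openssl s_client -starttls smtp -connect smtp.gmail.com:587 -crlf

If the handshake succeeds and the certificate chain prints, the connection side is fine, which would point back at the Perl modules.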
It is quite an old install; I think it began with r151014.
If a scrub reported an error and corrected 256KB of data, and shows 1 checksum error on each of 3 disks, does that mean there was a checksum error on all 3 disks, or that 3 disks were involved in correcting an error which resided on one of them?
If the record size is 128KB, how could there be...
I am running a RAID-Z2 array with 4 x 8TB drives on OmniOS 151030, hosted by ESXi 6.7.
During the last 3 months I have encountered 2 checksum errors that were repaired.
The pool view shows 2 checksum errors, with no other errors present.
The checksum errors show up on 3 of the 4 disks...
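For what it's worth, I've been reading the per-device CKSUM column to see which disks were involved, with something like this (the pool name tank is just a placeholder for mine):

zpool status -v tank
zpool clear tank    # only after investigating, to reset the error counters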
I still had noticeable performance variation even when NFS had sync=disabled.
One of the jobs of the guest VM is to download some multi-part files, repair them, and move them to an SMB share on the storage VM (which is on the same pool as the guest VM).
Doing this on NFS with sync disabled took...
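For reference, this is how I checked and toggled the property (the filesystem name is just an example, not my actual dataset):

zfs get sync tank/nfs_datastore
zfs set sync=disabled tank/nfs_datastore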
When I built the system several years ago, I initially tried NFS with both the sync and async settings.
Sync was terribly slow, and async iSCSI still fared way better on my system than async NFS.
Furthermore, if I am not mistaken, when using iSCSI you can place the VM metadata on a sync NFS datastore...
I am positive that I have created a target and target group in the COMSTAR menu after creating the zvol.
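To double-check on my side, these should be the commands corresponding to what the menu does (I haven't confirmed napp-it runs exactly these under the hood):

itadm list-target -v
stmfadm list-tg -v
sbdadm list-lu
stmfadm list-view -l <GUID>    # GUID taken from the sbdadm output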
By the way, is there any way nowadays to have ESXi wait for OmniOS to load, then rescan the HBAs and then launch the guest OS?
I really love my setup; it has been as stable as enterprise systems...
OK, looks like it's shared from the filesystem, so I'll need to import it.
I just need to record the path and, after importing the pool, re-import the LU as follows?
sbdadm import-lu /dev/zvol/rdsk/[Volumename]/[sharename]
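Before exporting, I'd record the GUID as well as the path, since (as far as I understand) a re-imported LU keeps its GUID, so the existing views should still match:

sbdadm list-lu    # lists GUID, data size and the /dev/zvol/rdsk source path per LU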
I noticed the backup job supposedly backs up users and COMSTAR settings, so presumably...
I'll try an in-place upgrade first, then, and fall back on a full re-install if something doesn't work well.
I have the pro version, so I will try the built-in mechanisms if a re-install is needed.
I think I created the LU using the COMSTAR menu. How can I tell?
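If I understand the tooling right, this should show the provider and the backing data file for each LU, which ought to reveal how it was created:

stmfadm list-lu -v    # look at the Provider Name and Data File lines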
Currently running an all-in-one system with ESXi 6.7u1 and OmniOS r151028 with one RAID-Z2 array. The data is shared via SMB; another Win10 VM handles all the tasks I need and is stored on an iSCSI LUN which is passed as an RDM.
I wanted to update to r151030. My current system is a little long in...
Your bottleneck is probably elsewhere, as both the HDD and the NVMe SSD should save it in less than a second; their transfer rates are much higher than 80MB/sec.
I am not familiar with Photoshop, but I would guess some conversion is going on internally.
I was already on 18.06 pro, which was the latest pro. I still encountered all the errors until I moved to the latest dev.
The TLS is no big deal.
I am using the restricted Gmail SMTP server to send the messages; it seems to be working well.
By the way, I've been meaning to ask, what is the napp-it pro...
FYI for those upgrading from r151026 to r151028, I encountered the following errors:
A tty and PAM access error, solved by upgrading napp-it from 18.06 pro to 18.12 dev.
SSH was down due to "/etc/ssh/sshd_config: line 103: Bad configuration option: MaxAuthTriesLog".
I just sold 4 of these.
I dismantled my 2 RAID-Z1 arrays, one with these Samsungs and one with 3TB Seagate NAS HDDs.
The Samsung ones were brilliant. Not enough for statistics, but they were working 24x7 for more than 6 years. One of the Seagates, which worked less, failed. One of a 2TB WD Green...
Yes, it's the 32GB Optane, running ESXi 6.5. Right now it is running on a virtual SAS controller instead of NVMe.
The LSI SAS HBA passes through without any issue.
I moved the Windows VM files to the Optane, so the RAID-Z2 is only being used as an iSCSI LUN for the Windows VM. Something is still...
I tried passing the Optane directly to OmniOS; it didn't work. Afterwards I saw you mention in the manual that it wouldn't work.
I then tried creating vdisks and attaching them to an NVMe controller. They show up with the correct capacity, but as "removed".
Any idea what's wrong?
I have consolidated my 2 RAID-Z1 arrays on 8 small disks into one RAID-Z2 on 4 x 8TB drives.
It was easy as pie: all the SMB and NFS shares were working once it booted up, and the iSCSI LUN remained identical.
The ZFS system never ceases to impress me.
Anyway, the 8TB drives are much louder than...
I've had 3 Deathstars that were running without any issue until I tossed them because of old age.
One of 4 Seagate 3TB NAS drives died, and I was biting my nails until the resilver went through.
Most of the drives I had over the years were WD. One Raptor died on me and was replaced under...
Thanks. I ordered 4 x 8TB drives, then, to replace the 4 x 2TB and 4 x 3TB RAID-Z1 arrays.
Important data was stored on both arrays.
Is the current recommendation to go with 2 mirror vdevs of 8TB drives instead of RAID-Z2?
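If I go the mirror route, my understanding is that the pool would be built as two mirror vdevs striped together, roughly like this (pool and device names are placeholders, not my actual layout):

zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

as opposed to the RAID-Z2 equivalent:

zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0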
Also, I currently have 2 VMs: one for OmniOS (napp-it), and one for a Win10 OS...
I'm out of free ports on the HBA, so I thought of doing the following (see the command sketch after the list):
1. Export one of the ZFS pools/arrays.
2. Disconnect those disks.
3. Connect new disks in their place, and replicate the remaining pool to the new disks.
4. Disconnect the replicated array, and re-connect the disks...
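Roughly, in commands, this is what I have in mind (pool and snapshot names are placeholders, and this is untested, so corrections welcome):

zpool export oldpool1
# ...swap in the new disks and create the new pool...
zfs snapshot -r oldpool2@migrate
zfs send -R oldpool2@migrate | zfs receive -F newpool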
I currently run an AIO host with ESXi, OmniOS, and napp-it 17.06 pro.
The storage is 2 RAID-Z1 arrays, 4 x 2TB and 4 x 3TB, one with 1 pool and the other with 2 pools.
ESXi runs on a separate SSD.
There is one Win10 OS running on iSCSI from one of the pools.
Is there a way to migrate the pools as is...
Unfortunately it didn't work.
The moment I remove hostname.vmxnet3s0, which contains the new hostname, it defaults back to the old hostname.
On the following boot, napp-it adds an entry with the old hostname to the hosts file, although /etc/nodename and the system identity remain with...
Another question about networking configuration.
Running OmniOS AIO.
I am trying to change the hostname.
I have set the nodename and changed the hosts files under /etc and /etc/inet.
I have changed config/nodename under the svc:/system/identity:node service; however, every time I restart...
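For reference, these are the exact commands I used for the identity service step (NEWNAME being a placeholder for the hostname I want):

svccfg -s svc:/system/identity:node setprop config/nodename = astring: NEWNAME
svcadm refresh svc:/system/identity:node
svcadm restart svc:/system/identity:node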
I have a very curious issue accessing the napp-it GUI via hostname.
I have an AIO ESXi/ZFS setup with 2 network adapters. One of them is in the 192.168.1.x range for general networking, and one is 172.16.1.2, used for iSCSI. On the ESXi side I have a VMkernel port for iSCSI which is addressed...
I have lost a disk in a RAID-Z1 array. While I'm waiting for a replacement, is it possible to pass a USB drive through to the VM and temporarily replace the bad disk with it, and then replace again when the real replacement arrives?
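If it is possible, I assume the sequence would be the usual replace dance (device names are hypothetical; I haven't tried this with a USB disk):

zpool replace tank c2t3d0 c5t0d0    # failed disk -> temporary USB disk
# later, once the real replacement is installed:
zpool replace tank c5t0d0 c2t4d0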
I already performed an update of ESXi 5.5U1 to U2 in the past; I didn't touch anything else.
I initially used iSCSI for the host due to horrible NFS performance with U1 even without sync, and stayed with e1000 due to inconsistent performance with vmxnet3.
I have napp-it 15b, if I am not mistaken, running on ESXi 5.5 (U1).
On top of it I'm running one Win 8.1 guest OS using iSCSI, and 2 x RAID-Z2 pools.
In the past I tried to tweak the vmxnet3 adapters, but quickly ran back to e1000 because of inconsistent performance. Even after moving back to...
Just to update,
I've updated ESXi to 5.5u2, fixed the bug in the tools, and installed the new tools.
I've moved the storage to block iSCSI instead of NFS. The performance is much more consistent. ATTO shows about 10Gbit to the storage. But most importantly, there are no more hiccups.
For some reason napp-it keeps changing my hostname from san to ZFSSAN. I created /etc/host.nic and added the correct name. That solved the problem while I was using NWAM. Now, without NWAM, it has started renaming my host again.
Any idea how I can get rid of this, or why it is doing it?