OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

It's hard to compare a snapshot size with a filesystem size without knowing all the details.

What should happen:
If you transfer a filesystem without snaps, compression or dedup from one server
to another with the same disks and pool layout, the sizes of the source and the newly
created target filesystem should be nearly identical.

If any of these differ, the result will differ.
If you are in doubt that everything was transferred, create a file list on both sides and compare them.
But usually a zfs send that finishes without an error can be trusted.
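
For example (pool names and paths here are only placeholders):
Code:
# create a sorted file list below each filesystem's mountpoint
find /sourcepool/data -type f | sort > /tmp/filelist_source.txt
find /backuppool/data -type f | sort > /tmp/filelist_target.txt
# copy one list to the other host, then compare
diff /tmp/filelist_source.txt /tmp/filelist_target.txt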

Is there a log where I can check if there was an error?

Is that readonly on by default for Napp-it Async? If not, maybe that's the cause of the error.

Can I send a new separate copy without deleting the existing one on the backup server? Meaning, not an incremental backup but another full backup.
 

A successful zfs send replication ends with a new target snap.
If something happens during a transfer, the new target snap is missing (resulting in an error message).
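
You can check this on the target, for example (filesystem name is only a placeholder):
Code:
# a snapshot created by the replication job should appear after a successful run
zfs list -t snapshot -r backuppool/data -o name,creation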

Napp-it sets the target filesystem to read-only after a successful replication.
If you want to use it read/write, you must disable the replication and set the filesystem back to rw (see the example below).

This is because
- during a replication the filesystem may not be accessible
- all modifications on a target filesystem are lost on the next replication, as
the target is reset to the last snap.
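
After disabling the replication job, setting the target back to read/write is a single property change (filesystem name is only a placeholder):
Code:
zfs set readonly=off backuppool/data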

If you want to do a new initial replication without deleting the former:
rename the target filesystem. The replication will then do a new initial transfer.
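
For example (filesystem names are only placeholders):
Code:
# keep the old copy under a new name; the next replication run starts a fresh full transfer
zfs rename backuppool/data backuppool/data_old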
 
I replaced my server with a new one, updated all the firmware on the server, installed OmniOS r14 and updated the firmware of the SAS HBA from P15 to P19.

So far, no errors from the SAS expander (that was target 68 lun 0) and everything is working great. I hope it stays that way.

I already have 2 new nodes, 3 JBOD chassis, HBAs, RSF-1 software, ... I'm only waiting for 50 SAS drives and then I will finally build an HA failover cluster.

Matej
 
Hey...

We are currently thinking of rebuilding our SAN, since we made some mistakes on the first build. But before we begin, we would like to plan accordingly, so I'm wondering how to measure some data (L2ARC and ZIL usage, current IOPS, ...) that will help us build the right thing.

We currently have a single raidz2 pool built out of 50 SATA drives (Seagate Constellation), 2x Intel S3700 100GB as ZIL and 2x Intel S3700 100GB as L2ARC.

For the new system, we plan to use an IBM 3550 M4 server with 256GB of memory and an LSI SAS 9207-8e HBA. We will have around 70-80 SAS 4TB drives in JBOD cases and, if needed, some SSDs for ZIL and L2ARC.

Questions:

1.)
How do I measure the average IOPS the current system encounters? 'zpool iostat poolname 1' gives me weird numbers saying the current drives perform around 300 read ops and 100 write ops per second. The drives are 7200rpm SATA drives, so I know they can't deliver that many IOPS.
Output from iostat -vx (only some drives are pasted):
Code:
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
data   36621,9 25740,2 19288,6 66191,0 197,6 25,9    3,6  40  77 
sd18    276,3  104,8  145,2   83,3  0,0  0,6    1,5   0  36 
sd19    283,3  106,7  152,1   83,3  0,0  0,6    1,5   0  24 
sd20    281,3  101,8  146,7   79,8  0,0  0,5    1,4   0  35 
sd21    286,3  117,7  146,7   84,3  0,0  0,3    0,7   0  21 
sd22    283,3   85,8  144,2   81,3  0,0  0,5    1,3   0  32 
sd23    275,3  116,7  139,7   82,8  0,0  0,3    0,8   0  21 
sd24    280,3  106,7  155,6   84,3  0,0  0,6    1,6   0  25 
sd25    288,3  106,7  148,6   86,3  0,0  0,4    1,0   0  24 
sd26    269,4  110,7  137,2   91,8  0,0  0,5    1,3   0  24 
sd27    272,4   87,8  141,7   78,3  0,0  0,7    1,8   0  34 
sd28    236,4  115,7  219,0   84,8  0,0  0,9    2,5   0  26 
sd29    235,4  108,7  228,5   83,8  0,0  0,9    2,7   0  33
Output of 'zpool iostat -v data 1 | grep drive_id'
Code:
                              capacity     operations    bandwidth
                pool                       alloc   free   read  write   read  write
    c8t5000C5004FD18DE9d0      -      -    573    220   663K   607K
    c8t5000C5004FD18DE9d0      -      -    563      0   318K      0
    c8t5000C5004FD18DE9d0      -      -    586    314   361K   806K
    c8t5000C5004FD18DE9d0      -      -    567    445   373K  1,02M
    c8t5000C5004FD18DE9d0      -      -    464     25   299K  17,9K
    c8t5000C5004FD18DE9d0      -      -    552      2   326K  3,68K
    c8t5000C5004FD18DE9d0      -      -    421     41   249K  31,3K
    c8t5000C5004FD18DE9d0      -      -    492    400   391K   944K
    c8t5000C5004FD18DE9d0      -      -    313    148   242K   337K
    c8t5000C5004FD18DE9d0      -      -    330    163   360K   390K
    c8t5000C5004FD18DE9d0      -      -    655     23   577K  21,5K
Is it just me, or are these too many IOPS for this drive to handle even in theory, let alone in practice? How do I get the right measurement?

2.)
Current ARC utilization on our system:
Code:
ARC Efficency:
         Cache Access Total:             2134027465
         Cache Hit Ratio:      64%       1381755042     [Defined State for buffer]
         Cache Miss Ratio:     35%       752272423      [Undefined State for Buffer]
         REAL Hit Ratio:       56%       1199175179     [MRU/MFU Hits Only]
Code:
./arcstat.pl -f read,hits,miss,hit%,l2read,l2hits,l2miss,l2hit%,arcsz,l2size 1 2>/dev/null
read  hits  miss  hit%  l2read  l2hits  l2miss  l2hit%  arcsz  l2size  
   1     1     0   100       0       0       0       0   213G    235G  
4.8K  3.0K  1.9K    61    1.9K      40    1.8K       2   213G    235G  
4.3K  2.7K  1.6K    62    1.6K      35    1.5K       2   213G    235G  
2.5K   853  1.6K    34    1.6K      45    1.6K       2   213G    235G  
5.1K  3.0K  2.2K    57    2.2K      49    2.1K       2   213G    235G  
6.5K  4.4K  2.1K    68    2.1K      30    2.0K       1   213G    235G  
5.0K  2.5K  2.5K    49    2.5K      44    2.5K       1   213G    235G  
 11K  8.5K  2.8K    75    2.8K      13    2.8K       0   213G    235G  
6.4K  4.8K  1.6K    74    1.6K      57    1.6K       3   213G    235G  
2.3K  1.1K  1.2K    46    1.2K      88    1.1K       7   213G    235G  
1.9K   532  1.3K    28    1.3K      83    1.2K       6   213G    235G
As we can see, there are almost no L2ARC cache hits. What can be the reason for that? Is our L2ARC cache too small, or is the data on our storage just too random to be cached? I don't know what is on our iSCSI shares, since they are for outside customers, but as far as I know it's mostly backups and some live data.

3.)
As far as the ZIL goes, do we need it? I think I read somewhere that the ZIL can only store 8k blocks and that you have to 'format' iSCSI drives accordingly. Is that still the case? Output from 'zilstat':
Code:
   N-Bytes  N-Bytes/s N-Max-Rate    B-Bytes  B-Bytes/s B-Max-Rate    ops  <=4kB 4-32kB >=32kB
         0          0          0          0          0          0      0      0      0      0
         0          0          0          0          0          0      0      0      0      0
    178352     178352     178352     262144     262144     262144      2      0      0      2
 134823992  134823992  134823992  221380608  221380608  221380608   1689      0      0   1689
 102893848  102893848  102893848  168427520  168427520  168427520   1285      0      0   1285
         0          0          0          0          0          0      0      0      0      0
      4472       4472       4472     131072     131072     131072      1      0      0      1
         0          0          0          0          0          0      0      0      0      0
     41904      41904      41904     262144     262144     262144      2      0      0      2
 134963824  134963824  134963824  221511680  221511680  221511680   1690      0      0   1690
         0          0          0          0          0          0      0      0      0      0
         0          0          0          0          0          0      0      0      0      0
         0          0          0          0          0          0      0      0      0      0
         0          0          0          0          0          0      0      0      0      0
  32789896   32789896   32789896   53346304   53346304   53346304    407      0      0    407
  25467912   25467912   25467912   41811968   41811968   41811968    319      0      0    319
Given the stats, is ZIL even necessary?

4.)
How should we put the drives together to get the best IOPS/capacity ratio out of them? We were thinking of 7 RAIDZ2 vdevs with 10 drives each. That way we would get around a 224TB pool.

5.)
In case we decide to go with 4 JBOD cases, would it be better to build 2 pools, so that if the first one has a hiccup, we don't lose everything?

What else am I not considering?

Thanks, Matej
 
Hi Gea,

I just saw that the latest version of OmniOS was released on April 3rd, 2015:
http://omnios.omniti.com/wiki.php/ReleaseNotes/r151014

I'm running r151010 right now..
(omnios-8c08411 from uname -a)

Have you upgraded any systems yet to this new release?

Just wondering if there are any other things to check that would affect Napp-it, before upgrading via the upgrade notes ..
http://omnios.omniti.com/wiki.php/Upgrade_to_r151014


There are some improvements (from r151014 changelog):
ZFS
- Write throttle fixes.
- zpool list -v shows disk capacities
- Reduction in RAM usage of ZFS cache devices.

VM:
- open-vm-tools (9.4.0)


+ plus previous release (r151012) ZFS changes:
ZFS filesystem and snapshot limits
ZFS metadata performance improvement (tunable, see zfs(1M) for details).
zdb(1M) can now dump all metadata
Embedded-data block pointers ("zero block compression")
Better behavior in the face of full or nearly-full pools.
Better-behaved zfs rename and zfs create when not sharing the datasets.
Improved DKIOCFREE (used by iSCSI UNMAP) performance for large datasets.


Cheers
 
I have already updated some test machines and a heavily used SMB filer.
Up to now, no problems besides:

Check the number of BEs prior to an update

There are a few reports that NFS + ESXi can give problems (see OmniOS discuss mailings).
At the moment I would not update production machines that provide NFS storage for ESXi
without prior tests (you should never update without some tests).

About open-vm-tools:
they are currently not the newest (ESXi 5.5 only) and lack the vmxnet3 driver.
I currently do my tests with the tools that are part of ESXi 6 - they work fine.

Besides that:
OmniOS 151014 seems to be a great new Long Term Stable edition.

Success and problem reports are welcome!
 
I have a question on updates. Since I installed AMP through the napp-it installer and plan on using ownCloud, how do I perform updates when security fixes or bug fixes come out for, say, MySQL?
 

AMP is a community project managed by zos, see
http://napp-it.org/extensions/amp_en.html

There are updates of the AMP installer from time to time.
If you need more up-to-date releases, you can edit the installer
or update modules like MySQL via pkgin (this is used by AMP as well, example below),
see http://napp-it.org/downloads/binaries_en.html
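
A typical pkgin update would look like this (a sketch, assuming the standard pkgin repository used by the AMP installer):
Code:
pkgin update          # refresh the package database
pkgin full-upgrade    # upgrade all installed packages, including e.g. MySQL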
 
Hello, I'm new to this forum and hoping to get some help with debugging slow CIFS/SMB performance on my new home NAS setup.

Regarding my setup: I have ESXi 6.0/OmniOS/napp-it with PCI passthrough of an LSI 2008 card and 8x2TB in RAIDZ2. I'm currently using a consumer-grade Gigabyte GA-990FXA-UD3 motherboard and an 8-core processor.

I currently have things installed and all seems to be working OK, but I'm having problems with poor copy speed from an SMB share to a physical Windows machine on my network. I'm only getting 10MB/s max copy speed when copying a file to the Windows box. For comparison, I have a Windows RAID5 hardware box that gets 5x or faster copies to the same machine.

I apologize, I'm a hobbyist and new to filesystem benchmarks. I ran the bonnie benchmark built into napp-it and the numbers seemed good compared to some I have seen posted, so I think the filesystem itself is working OK.

I suspect something is wrong with the NIC setup in the virtual machine or the SMB config.
I followed the napp-it guide for setting up the NIC inside OmniOS, configured one of the GbE NICs and left the other unconfigured. It reports a connection at 1Gbps. Pings to my home router, however, show about 0.350ms latency, which seemed a little high; not sure if that is related.

I'm trying to figure out how to move forward with debugging the slow copy. Any tips on how to proceed appreciated.

Thanks in advance,

Kuma
 

I would first do some local benchmarks to check if the storage is OK (dd, bonnie, etc.).
This should give you a few hundred MB/s sequential read performance.
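
A simple local dd test could look like this (path and size are only examples; disable compression on the test filesystem or the numbers are meaningless):
Code:
# write a ~10 GB test file, then read it back
dd if=/dev/zero of=/pool/filesystem/dd.tst bs=1024k count=10000
dd if=/pool/filesystem/dd.tst of=/dev/null bs=1024k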

If this is OK, you should next check your ESXi vnic.
I would switch to vmxnet3 as it is much faster than e1000 (though even e1000 should not be as slow as your results).

Next, I would create a virtual machine (Windows or other) to check transfers over the virtual ESXi switch.
The physical NIC is not involved in this case.

Notice: Do not use any Windows speed enhancer like TeraCopy or similar, as
they slow down performance with ZFS or other non-Windows servers.
For basic tests, copy a large file and check performance with the Windows 8 copy progress bar, or use "NAS Performance Tester".

If performance is OK, I would replace the Realtek NIC with a faster one (i.e. Intel),
or optionally pass the NIC through to rule out ESXi NIC effects.

Notice: Do not use any special settings like LACP or JumboFrames for these tests.
 
Thanks for the suggestions, trying the VM copy was a good idea.

Tried the file copy to a virtual machine and the speed was much better (55MB/sec), so it looks like it is something with the ESXi physical hardware adapter or internal switch.

The physical NIC is already upgraded to an Intel 82574L PCI Express x1, not Realtek.

Looks like the copy from my other server to the VM on the ESXi host is slow as well, so something is not right with the ESXi NIC. Will try another adapter in a different PCI slot.

Any ideas how to narrow down/debug further?
 
Did you
- install the ESXi 6 VMware tools on every guest
- use a vmxnet3 vnic
- assign 4-8 GB RAM to OmniOS

Other options:

Check iostat to see whether all disks have the same load or if one is weak.

Check the firmware of your LSI 2008.
Firmware P20 can give problems (use P19).

Check the sync settings, try with sync=disabled.

Disable the atime setting.

Check without compress, try LZ4 compress.
Avoid dedup.
(Example commands for these ZFS properties are shown below.)
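
The corresponding properties can be set like this (filesystem name is only a placeholder; sync=disabled is for testing only, as it risks losing in-flight writes on a crash):
Code:
zfs set sync=disabled pool/filesystem      # testing only
zfs set atime=off pool/filesystem
zfs set compression=lz4 pool/filesystem    # or compression=off to test without compression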
 
I'm setting up a new server with OmniOS + napp-it, latest updates for both.

enabled the SMB server
set up a user group in napp-it
added users to the group
imported old disks with ACLs from the old system.

I'm not able to set ACLs from Windows 7. Connecting as root works fine. When I press apply, I get the error message: No mapping between account names and security IDs was done.

Using the ACL on folders extension doesn't work either. I can set a local user group for the top dir, however the children of that dir don't get the ACL passthrough!
aclinherit = passthrough. aclmode = passthrough.
I set a local user group and root for the dir, then I reset ACLs: current folder with recursive enabled.

When I execute "set property" it just sets everything back to owner@ / group@ / everyone@...,
which are not the current folder settings.

I want to avoid using the napp-it ACL extensions. Setting ACLs from other servers with older napp-it versions just worked fine.

Is there an option in napp-it to "replace all child object permissions with inheritable permissions from this object"?
 


Some remarks:

First you should check that there are no id mappings (Windows -> Unix),
as they are only useful for an Active Directory (AD user -> Unix user).
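
On OmniOS you can review such rules with idmap, for example:
Code:
idmap list        # show all name-based mapping rules
idmap remove -a   # remove all rules (only if you are sure none are needed)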

Then you should be able to set ACLs from Windows when connected as root.
(Remote ACL control is not possible with every Windows edition; it works with Windows Server and most Pro editions.)
You may need to take ownership of the files recursively prior to setting new ACLs.

The ZFS property aclinherit=passthrough is OK,
but it does not modify any current ACLs. It controls whether ACLs are inherited when you create new files and folders.

The same applies when you set ACLs on a folder.
This does not affect the ACLs of the files and folders below; they keep their individual ACLs.
The inheritance settings control whether this ACL is added as an inherited ACL.

If you reset ACLs to default, this gives you very restricted ACLs
(your owner@ / group@ / everyone@ ...).

What you can do with the current napp-it free:
- Reset ACL to everyone@=modify recursively

If you want to reset ACLs to those of the current folder, you need either the ACL extension,
or you must do this via the CLI or from Windows.
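
A CLI sketch of a recursive reset (user name and path here are only placeholders):
Code:
# replace all ACLs below the path with full access for one user, inherited by new files and folders
/usr/bin/chmod -R A=user:someuser:full_set:file_inherit/dir_inherit:allow /pool/filesystem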
 
Hi Gea,
it is just a fresh install. No AD active, just a workgroup.

Current mappings:
add -d "wingroup:power Users@BUILTIN" unixgroup:staff
add -d "wingroup:trd@BUILTIN" unixgroup:trd
They are automatically created during group and user setup.

The users have a Windows SID in napp-it.

When I map a network drive in Windows as root and try to set ACLs, I still get the same error message:
"No mapping between account names and security IDs was done"

I have set up other napp-it servers and had no problems so far :( (been using napp-it for some years now; with Windows 7 Home it was not possible to set ACLs from Windows, with Win 8.1 Pro I have no problems finding and adding users).

I'm in the napp-it evaluation period for this server. Reset ACL to current folder didn't work, see my first post; it reverts everything to owner@ / group@ / everyone@.

/usr/bin/chmod -Rf A=user:leo:full_set:file_inherit/dir_inherit:allow '/pool...' saved the day for now. Will try ACLs from Windows later. Time for a beer :)
 
Mapping SMB groups to Unix groups is OK.
A mapping between local Unix users is not allowed
(as a local Windows user is a Unix user).

Windows Home cannot set ACLs remotely.

Depending on the ACL settings, you may try to set permissions in a two-step way within napp-it:
1. reset all ACLs recursively to everyone@=modify
2. set ACLs recursively to those of the current folder
 
I have some problems with copying files over the network between an OpenIndiana VM and Windows 7; hopefully someone can help me debug it.

When I copy big files from Windows to a folder on OpenIndiana, the speed is good, over 100MB/sec, but if I copy from OpenIndiana back to a Windows 7 folder, the speed is extremely slow, 1-2MB/sec.

I tested between an OmniOS VM and Windows, and none of the problems existed, so I'm not sure what's going on with OpenIndiana.
 
Some problems are related to certain NICs on Windows (e.g. some Realteks) or copy tools that are optimized for Windows (e.g. TeraCopy).

Some problems are a combination of causes and are sometimes fixed or improved with newer Illumos releases. While OpenIndiana is not dead, the newest OpenIndiana release is quite old. Another problem seems to be that development is split between "old" OpenIndiana and OpenIndiana Hipster.

For my own setups, I have completely moved to OmniOS, as it is up to date with Illumos and stable, with a commercial effort behind it.
 
I do have Realteks, but that doesn't seem to be the cause, since when I do the same with OmniOS and Windows the speed is as good as it should be. So it must be a problem with OpenIndiana. One thing though: under OpenIndiana I can view all of my shares, but with OmniOS only a few folders are available, even though all of them are set to readonly=off.
 
The readonly property does not matter here.
Check the sharesmb property or disable/enable sharing.
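
For example (filesystem name is only a placeholder):
Code:
zfs get sharesmb pool/filesystem      # check the current share setting
zfs set sharesmb=off pool/filesystem  # disable ...
zfs set sharesmb=on pool/filesystem   # ... and re-enable the share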
 
(screenshot of the napp-it ZFS filesystem / SMB share overview attached)


In this screenshot, I can view 'VM' but not 'torrent' on Windows. What attributes should I change so I can view both on Windows?

Thanks
 
I would
- restart the SMB server service (see the command below)

If this does not help:
- restart Windows
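
From the CLI, restarting the SMB service looks like this:
Code:
svcadm restart network/smb/server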
 
I had to set SMB off and on again, so everything is good now. I have one more question regarding OmniOS: I notice that the time is never synced even though I installed the NTP server. Every time OmniOS starts, I have to manually restart the ntp service to get the correct time. Is there a way to make it work permanently?
 
On bootup you must execute:
Code:
ntpdate pool.ntp.org
either via autostart (Menu Service >> Bonjour and autostart)
or via menu Jobs >> other job >> create

Run it daily or once at bootup
To execute once, use autostart or autojob with hour = once
 
Hi
Having trouble getting the TLS email function working. I have followed the instructions here - http://napp-it.org/downloads/tls_en.html and even tried the bug fix at the bottom of that web page.
Napp-it ver. 0.9f5 Apr.22.2015 OmniOS ver. r151014

Getting this error:
Software error:
Couldn't start TLS: SSL Version SSLv3 not supported
at /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS Email/09_TLS-test/action.pl line 72.

Any ideas?
 
I have tried the same and can confirm the problem with TLS-encrypted mails on OmniOS 151014 -
without having a solution at the moment.

PS:
napp-it uses the Tls.pm module in the /var/web-gui/data/CGI folder, where the last fix is included.

The problem is probably around the SSL module and the SSLeay module
(found some info at https://www.perlmodules.net/viewfeed/distro/IO-Socket-SSL )
 
Hey Gea, having a strange error.

info: 651: incremental zfs receive error after 2 s cannot receive: most recent snapshot of backup/nas does not match incremental source

We do not have "delete empty snapshots" enabled on either the source or the destination.

Any thoughts?
 
You should have a newest and a previous snap-pair.
If something happens, destroy the newest target snap (the one with the highest number) and retry (the previous one is then used as the base).

Enter the job id in menu Snapshots to filter the snaps for a job.
 
Hello,
I installed the AMP stack and got ownCloud installed and it runs. I have integrated it with AD and now I'm looking to add external SMB storage, and I can, except I can't use the AD integration. What I had to do was install Samba from pkgsrc and link to /opt/local/bin/smbclient. However, I can't use AD authentication to map to the external mount. Is there anything else I need to install?
 
Hello,

I'm searching, but can one of you remind me what the best IT firmware is for an M1015/9240 on OpenIndiana?

Thanks
 
You should have a newest and a previous snap-pair.
If something happens, destroy the newest target snap (the one with the highest number) and retry (the previous one is then used as the base).

Enter the job id in menu Snapshots to filter the snaps for a job.

So if I don't replicate for a few days, but take snaps daily on the source, the job will not work?
 

It does not matter how often you run the replication.
You always have a newest snap-pair on source and target, e.g. ...._repli..nr_150, and a previous snap-pair .._repli....nr_149.

If something happens with the newest snap-pair (for example if source and target are not consistent for whatever reason),
you can delete the newest target snap, e.g. .._nr_150 (see the sketch below). The next replication run will then use the previous snap-pair ..nr_149.

Keep in mind:
A replication job creates its own snaps with _repli in their name. If you create additional snaps via an autosnap job, they are
created additionally and are not used but skipped by replications.
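
A sketch of the manual cleanup (the destroy line is commented out because the snapshot name is hypothetical; use the newest _repli_ snap from the listing):
Code:
# list the replication snaps on the target filesystem
zfs list -t snapshot -r backup/nas -o name,creation
# then destroy the newest target snap, e.g.:
# zfs destroy backup/nas@jobid_repli_..._nr_150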
 
Some time ago I created a raidz1 pool with 3 disks, ashift = 9.
Last week 1 drive was kicked out of the pool, which left the pool degraded.

I got the same disk new from Seagate and am trying to replace the failed one with the new disk. First I initialized the disk; everything went well.

When I open the menu "replace", it shows me the crashed disk twice:

14482685595707703595 mediavaultv2 raidz1-0 UNAVAIL -
14482685595707703595 mediavaultv2 raidz1-0 UNAVAIL -

I select one of the two old disks and the disk to replace it with. Then I get this error:

Could not proceed due to an error. Please try again later or ask your sysadmin.
Maybe a reboot after power-off may help.

cannot replace 14482685595707703595 with c1t3d0: devices have different sector alignment

The disks are all the same: (3x) ST2000DM001-1CH1.
How can I force the disk to use ashift=9? Can I initialize the disk as ashift=9?

(napp-it version 0.93f3 - OmniOS latest version, HP MicroServer.)
 
It seems that your new disk is a 4K disk.

You simply cannot replace a 512B disk with a 4K disk in a vdev with an ashift=9 setting.
(This would only have been possible if you had created ashift=12 vdevs from your 512B disks.)

Your only options now are:
- search for a 512B disk for replacement
- back up and destroy your pool, and re-create it with ashift=12
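
You can verify the ashift of the existing pool with zdb; it should print the ashift per vdev (ashift: 9 means 512B alignment, ashift: 12 means 4K):
Code:
zdb -C mediavaultv2 | grep ashift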
 
Hmm... I thought the HDDs have 4K sectors physically but present 512-byte sectors to the host, and are thus compatible with 512-byte disks/pools.
 
Hi Gea, I have another problem with the OmniOS VM. Today I shut down my ESXi box and restarted it, and then restarted OmniOS; OmniOS stopped recognizing all three of my M1015 cards (flashed to LSI 9211-IT). The error for all three cards is "mptsas failed, mptsas bad flash signature". Strangely, when I start OpenIndiana and the other VMs, they recognize those 3 cards fine. I also tried not passing the PCI cards through, and ESXi recognizes those cards too. Not sure what's going on with OmniOS.
 