OpenSolaris derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Hello,

I have a rather stupid question. I have set up OpenIndiana and napp-it in a VM for testing.
I have added four HDDs and set up a striped mirror pool.
I created a ZFS filesystem and now I'm trying to mount it as a datastore on my ESX host (where the VM is running).

I can't connect to the folder.
How can I turn on NFS sharing? I see in the services that NFS is disabled, but I can't figure out how to turn it on; switching it to enabled doesn't help, it just flips back to disabled.

This is the output of zfsinfo:

Test/Test1_sharenfs off
Test/Test1_sharesmb name=Test1,guestok=true

Sorry for the stupid question.

edit:
Figured it out: went to ZFS Filesystems -> reload and turned NFS on there.
I had also forgotten to add a VMkernel interface in the right subnet.
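For anyone else who hits this: the share is a ZFS property and the NFS server is an SMF service, so the command-line equivalent looks roughly like this (a sketch; Test/Test1 is the filesystem from the output above, and the subnet in the comment is a placeholder):

Code:
zfs set sharenfs=on Test/Test1           # or e.g. sharenfs=rw=@192.168.0.0/24,root=@192.168.0.0/24 so ESXi gets root access
svcadm enable -r svc:/network/nfs/server:default
svcs nfs/server                          # should show "online"
showmount -e localhost                   # list what is actually exported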
 
I'm sure the buffer has some effect, but my VM has 8GB of RAM assigned and the test only goes up to 4GB.
 
My VM has 2GB of memory. When running with a 1GB test file I get the speeds I posted, but when I raise it to 4GB, speeds drop to about 120 read and 50 write...

Matej
 
Well, I finally took the plunge and set up ZFS (OI & napp-it) on my H36L, just for a play. Current config: H36L Microserver, 8GB RAM (non-ECC), 3x 3TB Seagates in RAIDZ1, and a Samsung 840 (non-Pro) 120GB partitioned with 16GB for OI, 40GB for read cache and the rest for the log/ZIL cache.

I'm a Linux/ZFS noob, but when I start dumping files onto this ZFS box it initially maxes the gigabit network connection; however, it gradually winds down to 10MB/s. I can see in napp-it that disk max busy time is 100, so I'm guessing that is the bottleneck? This value is next to the SSD, so I guess it's a caching issue?

I am copying files from my Win7 PC (8120 CPU, 16GB RAM, files are on a 1TB WD Black) and was hoping for more speed than the initial burst rate, but this is my first time playing with ZFS. I suspect the Microserver may need the hacked BIOS since the Samsung SSD uses the ODD port, but I was wondering what else I could change.
 
First of all, you should check local disk performance, then performance over the network.

Some suggestions:
- for a pure filer, a ZIL is not used at all (do not enable the ZFS sync property)
- using one SSD for boot, read cache and ZIL is a bad idea
I would boot the Microserver from a pair of USB sticks (see napp-it to Go)
and use the whole SSD as read cache (L2ARC), if at all (check the ARC statistics in menu System - Statistics)

Next:
- avoid copy tools like TeraCopy
- if you have a cheap Realtek NIC in your PC, look for the newest driver or compare with a PC that has an Intel NIC
- read the tuning options at http://napp-it.org/manuals/tuning.html
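For reference, undoing that SSD layout from the command line would look roughly like this (a sketch; the pool and device names are placeholders for the H36L setup):

Code:
zpool status tank                        # shows which SSD partitions are attached as "cache" and "logs"
zpool remove tank c3t1d0s3               # drop the L2ARC (cache) partition  - placeholder device name
zpool remove tank c3t1d0s4               # drop the ZIL (log) partition      - placeholder device name
kstat -n arcstats | egrep 'hits|misses'  # quick look at the ARC hit/miss counters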
 
Thanks Gea. After removing the read cache (SSD partition) from the pool, transfer speeds picked back up to above 80MB/s. When I have a bit more time I will fine-tune it some more, but I will build another Microserver with the specs you mention. More than happy with its current performance.
 
Hey Guys,

I apologize if this has been covered in this thread. I searched and read all I could but only ended up slightly more confused.

I have a small all-in-one server box running OI and napp-it. It has six 2TB Hitachi drives as a RAID-Z2. I also have a SATA drive sled with four individual 2TB Hitachi drives. Right now the server only has 1.7TB used, but of course that will only continue to grow.

What I am looking to do is use the four sled drives as off-site backup for the server. The idea I had was to run two sets of drives that get swapped out every month with incrementals. To explain, I'll call the drives A1, A2 and B1, B2.

A1 and B1 get a full backup of the 1.7TB initially. A1 is placed off-site and B1 remains. B1 continues to have daily or weekly backups made to it, then after a month, B1 gets swapped with A1 off-site. A1 is then brought up to date with the new material in the last month, and continues to get the weekly backups until it gets swapped with B1 again. Once A1 and B1 are full, the incrementals would switch over to A2 and B2. Once those are full, new drives and sleds would be purchased as A3/B3 etc.

So, what is the best way to accomplish this task? ZFS Send? RSYNC? Third-Party program? If it helps, the data to backup is 3 filesystems shared on SMB, 1 shared on NFS for ESXi and one iSCSI target for WHS2011 backups.

Thanks for the help! I'm very new to this whole concept and maybe I am asking too much, but it's all a learning experience.

I would never do backups with incrementals spread across different disks.
Either create a removable backup pool (e.g. some disks in Z1) and/or use different backup disks for different filesystems or folders.

Then you can sync your data to the backup pool via ZFS send, rsync, or robocopy if you want to do it from Windows. Think about keeping a snap history on the backup disks, so you are prepared for situations where damaged or wrong data is on both your pool and your backup.

If your data is really important, build a second box for near-realtime replication of the whole system with free bays for additional offline backups.
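As a sketch of the ZFS send variant (pool/filesystem names here are made up; napp-it's replication and backup jobs wrap the same mechanism):

Code:
# initial full copy onto the removable backup pool
zfs snapshot tank/media@backup-2013-04-01
zfs send tank/media@backup-2013-04-01 | zfs receive backup/media

# a month later: incremental from the last common snapshot
zfs snapshot tank/media@backup-2013-05-01
zfs send -i backup-2013-04-01 tank/media@backup-2013-05-01 | zfs receive backup/media

# export before pulling the disks, import again when the set comes back on-site
zpool export backup
zpool import backup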
 
I just tested my setup: I have seven Hitachi 3TB green drives in a RAID-Z2 connected to an M1015 passed through to Solaris 11. The test was run from a Win7 VM on the same box, with a 1TB iSCSI partition on the pool. Both VMs use the VMXNET3 NIC driver.

c0EnIO1.png

Can you please run the same test and take a screenshot of the vSphere graph showing the NIC performance on the Windows 7 VM? Can you also elaborate on your setup? Also, what storage is your Windows 7 VM sitting on?

Thanks!
 
I have a question: I've been running my server since about April/May last year and it seems that Napp-it's snaps never get deleted. Is there a way to make it auto destroy after X days/weeks/months?

Ninja Edit: Gea - This has gotta be one of the best filers I've ever used at home. The power of ZFS and what you've managed to achieve with Napp-it is second-to-none. Keep up the great work!
 
I am glad you like it.
The easiest way to achieve this is to create several autosnap jobs, e.g.:
- create a snap every hour during the day, keep 12
- create a snap every day, keep 7
- create a snap every week, keep 4
- create a snap every month, keep 12
...

Older snaps in each job are deleted automatically.
You can also delete them manually, e.g.
- delete all snaps older than 30 days, optionally with other parameters like filesystem or any substring in the snap name.
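Under the hood these jobs are ordinary ZFS snapshots, so the manual equivalent looks like this (a sketch with hypothetical names):

Code:
zfs snapshot tank/data@daily-2013-04-09    # create a snap
zfs list -t snapshot -r tank/data          # review what exists
zfs destroy tank/data@daily-2013-03-01     # remove an old one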
 
Gotcha. It looks like at this point it's doing system-based automatic snapshots - likely due to Time Slider being enabled on the server. The one thing I do like about that is that it gives me more frequent snapshots when I'm heavily using the file system. I suppose I should just disable the Time Slider service though, as I really like your idea since it'll allow me to drill down to what I want to keep. My plan is hourlies for 24 hours, dailies for one week, weeklies for one month, and monthlies for one year (depending on space).
 
napp-it autosnap is not based on Time Slider. It's a napp-it feature with more options.
 
OK, so I hope other people can jump in on this one. It's been reported by many users that you can't access CIFS shares on OmniOS/OI from an Android app if the share is password protected (the password is always refused).

I've decided to spend some time looking into it; I've sniffed the traffic and examined it.

I've found a single Android app that was able to access a CIFS share on OI with its "compatibility mode": Total Commander.

By comparing the working traffic (compatibility mode) with the standard traffic (which is what other apps send), it comes down to two differences.

Compatibility mode sets SMB Extended Security Negotiation to false. I do not think that this is it, since my Win8 machine does the same and it works.

That leaves only the hash type.
There are two types of hash supported for SMBv1: the LM hash and the NT hash.

Wireshark identifies the LM hash as "ANSI Password" and the NT hash as "Unicode Password".

Code:
Observations: (in wireshark)

Standard Android app sends:
ANSI Password XXXXXXXXXXXXXXXXXXXXXXXXX (24 bytes)

Working android (compatibility) App:
ANSI Password XXXXXXXXXXXXXXXXXXXXXXXXX (24 bytes)
Unicode Password XXXXXXXXXXXXXXXXXXXXXXXXX (24 bytes)

My Win8 machine shows:
ANSI Password 000000000000000000000000000000000 (24 NULLs)
Unicode Password XXXXXXXXXXXXXXXXXXXXXXXXXXXX (24 bytes)

So it would seem that my OI box (same on OmniOS) requires the NT hash (Unicode).

I have dug into man smb and found:

lmauth_level

Specifies the LAN Manager (LM) authentication level. The LM compatibility level controls the type of user authentication to use in workgroup mode or domain mode. The default value is 4.

4 is defined as:

In Windows workgroup mode, the Solaris CIFS server accepts NTLM, LMv2, and NTLMv2 requests. In domain mode, the SMB redirector on the Solaris CIFS server sends LMv2 and NTLMv2 requests.

I've tried setting this lower (more compatible) without success.
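For anyone who wants to reproduce this, changing the level on the OI/OmniOS side looks roughly like the following (a sketch using the stock sharectl/SMF tools):

Code:
sharectl get smb                                  # show current smb properties, incl. lmauth_level
sharectl set -p lmauth_level=2 smb                # try a lower (more compatible) level
svcadm restart svc:/network/smb/server:default    # restart the CIFS service afterwards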

If this is any help to anyone, I have tracked down the two things that the Android compatibility mode does (it's actually using the JCIFS lib):

Code:
jcifs.Config.setProperty("jcifs.smb.client.useExtendedSecurity", false);
jcifs.Config.setProperty("jcifs.smb.lmCompatibility", 0);

    0,1 -- Sends LM and NTLM responses.
    2 -- Sends only the NTLM response. This is more secure than Levels 0 and 1, because it eliminates the cryptographically-weak LM response.
    3,4,5 -- Sends LMv2 and NTLMv2 data. NTLMv2 session security is also negotiated if the server supports it. This is the default behavior (in 1.3.0 or later).


Anyone have some ideas?
 
I've spent the whole day fighting with COMSTAR and ESXi 5.1. I'm really hoping someone can help.

Here's what I've done:

Created a zfs volume
Created a COMSTAR lu from said volume
Created a COMSTAR target
Created a COMSTAR view for said lu (visible to everyone on all interfaces)
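In command-line terms those four steps look roughly like this (a sketch; "tank/esxlun" and the size are placeholders, and napp-it drives the same stmfadm/itadm tools underneath):

Code:
zfs create -V 500G tank/esxlun                 # the zfs volume
stmfadm create-lu /dev/zvol/rdsk/tank/esxlun   # the COMSTAR lu
itadm create-target                            # the COMSTAR target (default iqn)
stmfadm list-lu -v                             # note the LU GUID
stmfadm add-view <lu-guid>                     # the view, visible to everyone (no host/target group)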

And here's the problem:

No matter what I try I can't seem to get my ESXi 5.1 server to recognize the shared iSCSI LUN. The ESXi 5.1 server will discover the COMSTAR target (the target shows up under "Static Discovery") but it will not find the shared LUN. The same ESXi 5.1 server successfully finds LUNs shared by a Windows 2008 Storage Server. Additionally, the same COMSTAR LUN I'm having trouble with in ESXi 5.1 can be successfully mounted using the Windows iSCSI initiator or ESXi 4.1.

Any help or suggestions would be greatly appreciated, this one really has me stumped.

Some additional details:

OpenIndiana (151a7)
Napp-It (v0.9a9 nightly Mar.04.2013)
COMSTAR (11.11,REV=2009.11.11 installed by Napp-It)
All iSCSI traffic on an independent VLAN
All iSCSI traffic on the same subnet

comstartarget.png

comstarlu.png

comstarview.png

vmware41.png

vmware51.png

vmware51hba.png
 
Your ESXi 4.1 is using the software iSCSI initiator.

But your ESXi 5.1 is using the Broadcom TOE iSCSI initiator.

I have never used the Broadcom one before, but that is my guess, based on the supplied info.
 
Super observant of you, and I think you may be on the right track. I tried booting that ESXi 4.1 box with ESXi 5.1 and it happily finds the COMSTAR LUN using the VMware iSCSI Software Adapter.

Is anyone aware of issues between the Broadcom NetXtreme II BCM5708 and COMSTAR? The same HBAs seem to have no trouble connecting to my Windows 2008 Storage Server iSCSI targets.

UPDATE:

I found out that I did have a problem with my VMware iSCSI network config. Reading VMware's storage guide I stumbled onto this tiny little note regarding Dependent Hardware iSCSI adapters: "If you use separate vSphere switches, you must connect them to different IP subnets. Otherwise, VMkernel adapters might experience connectivity problems and the host will fail to discover iSCSI LUNs." As it turns out, I had made exactly that mistake.

However, after I fixed the network misconfiguration I was disappointed to find that I was still not able to find the COMSTAR LUNs utilizing Dependent Hardware iSCSI Adapters. So I called it quits on the iSCSI offloading and added the VMware iSCSI Software Adapter. Sure enough, now all the COMSTAR LUNs show up as expected.
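For anyone else who lands here, enabling the software adapter and pointing it at the target can also be done from the ESXi shell; roughly like this (the adapter name and portal address are placeholders):

Code:
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.5:3260
esxcli storage core adapter rescan --adapter=vmhba33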

Big thanks to patrickdk for the assist!
 
Hoping the hive mind could help me do a sanity check on the disk performance numbers I'm seeing from my new build before I begin migrating data.

Here are the build specs:

System Components:

System Disks:

VM Pool [3TB Usable]:

General Storage Pool [32TB Usable]:

Code:
ryan@NAS:~$ zpool list -v
NAME                        SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
perfpool                   2.72T   196G  2.53T         -     7%  1.00x  ONLINE  -
  mirror                    928G  65.4G   863G         -
    c4t5000C5005614F5ABd0      -      -      -         -
    c4t5000C500561512B3d0      -      -      -         -
  mirror                    928G  65.4G   863G         -
    c4t5000C50056148913d0      -      -      -         -
    c4t5000C5005614FA83d0      -      -      -         -
  mirror                    928G  65.4G   863G         -
    c4t5000C5005614EDB7d0      -      -      -         -
    c4t5000C5005614F757d0      -      -      -         -
  c4t5001517BB2AD2BF4d0    18.6G   140K  18.6G         -
rpool                      55.5G  5.64G  49.9G         -    10%  1.00x  ONLINE  -
  mirror                   55.5G  5.64G  49.9G         -
    c6t0d0s0                   -      -      -         -
    c6t1d0s0                   -      -      -         -
storagepool                43.5T  1.69M  43.5T         -     0%  1.00x  ONLINE  -
  raidz2                   21.8T  1.11M  21.7T         -
    c4t5000C50050309249d0      -      -      -         -
    c4t5000C50050310475d0      -      -      -         -
    c4t5000C50050312055d0      -      -      -         -
    c4t5000C5005031E6B3d0      -      -      -         -
    c4t5000C5005031FEF0d0      -      -      -         -
    c4t5000C50050338ECDd0      -      -      -         -
  raidz2                   21.8T   588K  21.7T         -
    c4t5000C5005030476Ed0      -      -      -         -
    c4t5000C50050310031d0      -      -      -         -
    c4t5000C50050311915d0      -      -      -         -
    c4t5000C5005032F4FFd0      -      -      -         -
    c4t5000C500503334B4d0      -      -      -         -
    c4t5000C500503347C7d0      -      -      -         -

And finally the stats from Napp-It:

Bonnie++
Code:
NAME         SIZE   Bonnie  Date(y.m.d)  File  Seq-Wr-Chr  %CPU  Seq-Write  %CPU  Seq-Rewr  %CPU  Seq-Rd-Chr  %CPU  Seq-Read   %CPU  Rnd Seeks  %CPU  Files  Seq-Create  Rnd-Create
perfpool     2.72T  start   2013.04.09   128G  91 MB/s     98    740 MB/s   83    580 MB/s  84    92 MB/s     99    1668 MB/s  74    +++++/s    +++   16     +++++/s     19239/s
storagepool  43.5T  start   2013.04.09   128G  93 MB/s     97    757 MB/s   80    544 MB/s  81    92 MB/s     99    1558 MB/s  71    11232.9/s  19    16     +++++/s     29805/s

Iozone
Code:
NAME         SIZE   Iozone  Initial Write  Re-write     Read         Re-read      Reverse Read  Stride read  Random read  Mixed workload  Random Write  Fwrite       Fread
perfpool     2.72T  start   1544.7 MB/s    1653.9 MB/s  3951.3 MB/s  3954.4 MB/s  3342.7 MB/s   3264.5 MB/s  2835.5 MB/s  2312.7 MB/s     1463.0 MB/s   1757.9 MB/s  4016.7 MB/s
storagepool  43.5T  start   1442.5 MB/s    1586.8 MB/s  3820.8 MB/s  4464.1 MB/s  3249.4 MB/s   2913.2 MB/s  2900.7 MB/s  2065.6 MB/s     1394.2 MB/s   1625.2 MB/s  3886.2 MB/s
 
If I understand it correctly, the Broadcom iSCSI is for offloading only; it's not a real hardware iSCSI HBA. You have to configure the VMware software iSCSI initiator and set the same IQN on both the software and Broadcom iSCSI initiators, then VMware will offload to Broadcom.
 
Hi all,

I've been trying various NAS solutions for a few weeks now and am finally realising that napp-it comes closest to having all the features I want.
However, I can't get to a share which is nested in another share unless I map directly to that child share.

Is there a limitation preventing this, or am I doing it wrong?

I am running:
-dedicated hardware
-Solaris 11

I want to be able to:
-map my zpool to a windows client and then see/access all subdirectories and all shares from the one mapped drive.
-map individual shares with read-only access on non-windows appliances.

Is this possible?
 
The limitation is the Solaris CIFS server.
While it provides the best Windows compatibility and the best performance of all Linux/Solaris SMB servers, an SMB share is a property of a ZFS filesystem and not of a folder in a filesystem. Each filesystem can have totally different ZFS properties.

You cannot add an SMB share to a regular folder, and if you mount the filesystems as nested shares, you cannot traverse from one share to a share below it by clicking on the folder, only by mounting the lower share itself.

At the moment you have to either live with this limitation or use SAMBA, where you can share any regular folder.
Bad thing: not as easy, not as fast and not as Windows compatible, i.e. no Windows SID/ACL support and no Previous Versions.
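To illustrate the per-filesystem nature of it: the workaround is to give each nested filesystem its own share and map them separately on the client (a sketch with made-up names):

Code:
zfs create tank/data
zfs set sharesmb=name=data tank/data            # share on the parent filesystem
zfs create tank/data/photos
zfs set sharesmb=name=photos tank/data/photos   # the child filesystem needs its own share...
# ...and its own mapping on the Windows client:
#   net use X: \\server\data
#   net use Y: \\server\photos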
 
Thanks for explaining that.
I think in this case I will map separate drives to each share on the windows client, and perhaps consolidate a couple of the shares.
 
I've finally got enough parts to start my OpenIndiana file server. The OS is installed on a fairly old 160GB drive though, to be replaced at a later point. Is it possible to enable copies=2 on the rpool just in case a sector goes bad?
 
Possible: yes
Helpful: I doubt it (only newly written or modified files are stored twice)

Better:
use a second disk (or two fast 16GB+ USB sticks with an OmniOS/OI server) and ZFS-mirror them
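Both options look roughly like this from the command line (a sketch; the device names are placeholders, and after attaching a second rpool disk the boot loader has to be installed on it as well):

Code:
zfs set copies=2 rpool                              # only helps data written from now on
zpool attach rpool c1t0d0s0 c1t1d0s0                # better: mirror the root pool onto a second disk
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0   # make the new disk bootable (OI, legacy grub)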
 
So....

While I was consolidating shares I found that copying between datasets in the same pool is INCREDIBLY slow.

I'm not hugely experienced with performance tuning so am looking for a few tips from people.


SYSTEM:
===========
AMD Opteron 170
Solaris 11
4GB DDR (napp-it installed with 2GB, then 2GB added after)
root installed to a single IDE drive (simple zpool)
datapool made up of 4x 3TB Seagate Constellation drives in RAIDZ1 (4K sectors)
4 onboard SATA2 ports from the mobo (Nforce4 chipset); the 4 onboard SATA1 ports from the Marvell chipset are disabled in the BIOS


EXPERIENCE:
============
I don't have logs available (posting this from work), but as an idea:
if sharebox is 100GB and from Solaris I run
#cp -rp /datapool/sharebox /datapool/DATA/

It takes hours!!

Also, if I run another similar copy simultaneously and download from the internet (via windows client) to another share on datapool, it seems to slow down even more.

If I copy from a local Windows drive to the NAS it seems to go as quickly as the 1Gb network will allow, so I think the issue only appears when the pool is reading and writing simultaneously.


It would be great if anyone could help with tuning suggestions or even some good commands to monitor performance. Currently I only really use:
#zpool iostat datapool 1 100
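A few other stock Solaris/illumos commands that should help narrow down where the time goes (a sketch, nothing specific to this box):

Code:
zpool iostat -v datapool 1      # per-vdev breakdown, shows an unbalanced or slow disk
iostat -xn 1                    # per-device %busy and service times
fsstat zfs 1                    # ZFS-level read/write operations per second
prstat -mL 5                    # per-thread CPU use, catches a single-core bottleneck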

Thanks
 
Also, you will not be able to use jumbo frames at the same time as iSCSI offloading. It's one or the other.
 
Hi -
I'm looking at using a Supermicro server to build an all-in-one ESXi box, with 10-12 internal drives and passthrough storage to FreeNAS/FreeBSD/Solaris or Nexenta. Ideally I would like it to also be capable of managing an external SAS JBOD.

I already have the X9DRi with ESXi 5.1 installed to USB and found I could not pass through the SCU SATA ports, but could pass through the onboard AHCI SATA.

I was successfully able to install an LSI 9207-8i, connect it to the backplane and pass through SAS drives to the most recent version of FreeNAS (installed on a local drive made available as a datastore to ESXi).

I'd rather be able to make better use of the onboard controllers for the storage server, so I'm wondering whether anyone has successfully used the Supermicro X9DRF (which has an onboard LSI 2308) in passthrough mode with ESXi - and if so, what OS are you using for the storage server?

It looks like if you are able to successfully pass through the onboard controller, FreeBSD/FreeNAS and Nexenta should be able to handle the LSI 2308. Not so sure about Solaris - 11.11 and 10_U11 seem to have problems with it: https://forums.oracle.com/forums/thread.jspa?messageID=10904694
 
I have a SUPERMICRO MBD-X9DR7-LN4F-O with the LSI 2308 in passthrough with ESXi 5.1 going to an OI + napp-it guest.

This was a pain in the arse to set up. The first issue you will have if you use the onboard LSI 2308 together with the LSI 9207-8i (which I also have): the machine will not boot until you disable the boot ROM for the onboard controller; however, you need to remove the LSI 9207-8i to do this. So just be warned. The LSI 9207-8i will handle the boot ROM for both controllers.

The second issue, which is the trickier one, is that you need to flash the onboard controller with the IT ROM; it comes with the IR ROM. This is tricky because the board is a UEFI board, which means you need to find the flash binary that works from UEFI. When I did this a while ago I had issues finding the right one, and I can not find it now :(. Now BIG WARNING - if you screw this up, the best-case scenario is that the onboard controller is bricked and no longer usable; worst case, the whole motherboard is bricked. So be careful, have a UPS attached to the machine, etc.
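For the record, the flash itself is the usual LSI sas2flash procedure run from a UEFI shell; very roughly like below (the firmware/boot-ROM file names are placeholders - take them from the correct LSI/Supermicro package and double-check their instructions before touching anything):

Code:
sas2flash.efi -listall                          # confirm the controller and the current firmware
sas2flash.efi -o -e 6                           # erase the flash (point of no return)
sas2flash.efi -o -f 2308IT.bin -b mptsas2.rom   # write the IT firmware and (optionally) the boot ROM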

If I were building a new machine I would get the board without the onboard controller, unless I had a real need not to have external controllers. My other complaint about the onboard controller, besides the pain of making it work, is that the location of the ports sucks in most cases. I have a Supermicro 846 chassis and I can just barely make the SAS cable fit.
 
Typo in my post. Looks tricky :( I had heard that Supermicro provides a version of the board with the 2308 already in IT mode...
 
Hi folks,

I need your help here, I'm afraid.
My all-in-one gave me a PSOD due to an RDIMM gone bad.
Now one of my pools shows:
Code:
root@tank:~# zpool status -v tank
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.

[...]

errors: Permanent errors have been detected in the following files:

        tank/[email protected]:<0xfffffffffffffffe>
        tank/[email protected]:<0xffffffffffffffff>

...it looks like a data snap in the media ZFS filesystem is affected.
How do I go about clearing that error?
Can I safely delete the snap?
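If destroying the snap turns out to be the way to go, the sequence would presumably be something like this (the snapshot name below is a placeholder for whatever zpool status lists):

Code:
zfs destroy tank/<fs>@<affected-snap>   # remove the snapshot holding the damaged blocks
zpool clear tank                        # reset the error counters
zpool scrub tank                        # re-check; the errors should clear if the snap was the only reference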

TIA,
Hominidae
 
Hi _Gea,

I think I've come across a bug in the latest Napp-It (v.0.9a9 nightly Mar.04.2013).

It looks like when importing a zvol LU via COMSTAR the system cannot handle an LU with an underscore ( _ ) in the name. The GUI errors out with a "metadata" error and the LU import fails. The command shown in the live log window at the bottom is cut off at the underscore in the zvol name.

Running the command as shown, but with the rest of the zvol name added after the underscore, succeeds and the LU is imported.

Thanks!!
Riley
 