OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Does anyone here know why Open Indiana VMs cannot boot with more than 2 virtual cpus? Is this a bug that will be fixed soon, or some inherent limitation?

It is not unique to OI. All of the Solaris variants appear to have this problem under ESXi. This is the primary reason that I abandoned the "all-in-one" design and went back to a bare-metal install of Solaris for napp-it (which I absolutely love using - thank you _Gea!).
 
Not sure if it helps, but there are a lot of comments about using one vCPU and the ESXi CPU scheduler:
http://www.google.de/search?q=vcpu+esxi+only+one

It seems that with several VMs it is better to use only one vCPU for each and
allow ESXi to assign CPU power dynamically,

but I have not made any benchmarks to compare.
 
Thanks for the prompt feedback. I am running a 5-disk (all-in-one) raidz array. No compression, 3.2GHz quad-core Sandy Bridge, 16GB ECC 1333 - basically a build following _Gea's suggestions - and it's freakin awesome. However, I've noticed that the CPU load is quite high when transferring (although with only 2 physical CPU cores, I can still sustain almost 100Mbyte/sec over gig-e).

Setting the VM to use 4 vCPUs moves the max MHz allowed from 7000 to 12000, and then the CPU is no longer slowing down gig-e transfers. Windows reports ~115-120Mbyte/sec sustained transfers, and the network graph is hovering in the high 90s.

So far this change has not caused any visible instability, but I'm going to investigate other ways for handling CPU distribution (thanks _Gea)

And also, thanks AGAIN for such a good tool (napp-it), such thorough hardware suggestions, and killer support on this forum and many others! You're doing us all a real service. I appreciate it! :)
 
Hi,

After several weeks of HW testing/burn-in, I'm getting closer to having my all-in-one server ready for production use. I did some benchmarking from within the Windows 7 (64-bit) VM and was quite surprised how low the random IO performance (e.g. @ 4k) is:
CrystalDiskMark 2.2:
mailattachment.gif


ATTO 2.46:
mailattachment5.gif

:confused::confused::confused:

Here are the Bonnie results (I use the VMstorage pool for the above mentioned Windows 7 test):
mailattachment1.gif

(the SAS15k pool is a mirror of two old 36 GB 15k SAS drives that I have not removed from the system yet)

Host system specs:
Code:
ESXi 4.1u1
Supermicro X8DT3, 2x XEON L5518, 4x8GB regECC
5x WD VelociRaptor 600GB connected to the on-board LSI 1068E,
passed through to the storage OS (OpenIndiana b_148, 6 GB RAM) and then used as a striped mirror (“RAID10”) + spare

OI is using the "vmxnet3" adapter to connect to the internal virtual network that connects the OI SAN with ESXi.
The "VMstorage" pool is shared to ESXi via NSF.

I have no previous experience running ESXi on a SAN, so maybe this is expected? I would expect much better performance out of the 10k drives in a RAID10 ... :confused:
What kind of results do you get with these two benchmarks?
Anything else that could be wrong with my setup?

Any feedback is much appreciated!
-TLB
 
I would suggest running the e1000 NIC in the OI VM - I and others have seen glitchy behavior from the vmxnet3 NIC.
 
I would like to upgrade my NAS from OpenSolaris b129 to OI. Which build of OI do you prefer? Is oi_151 stable enough? Oh, some details of my home NAS: 2 ZFS RAIDZ1 vdevs (4x 2TB Hitachi 5K3000, 4x 1TB WD), 4GB RAM, Intel 1Gb network adapter,... and I would also like to use an AOC-USAS2-L8i in the updated NAS. Where is it best to flash the IT firmware: Win 7 or DOS? Are there any obstacles to importing a ZFS pool from OpenSolaris into OI? Thank you.
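For the pool itself, the move should normally just be an export on the old system and an import on the new one. A rough sketch, assuming a pool named "tank" (and keeping in mind that running "zpool upgrade" afterwards is one-way):

Code:
# on the old OpenSolaris install, before swapping the OS drive
zpool export tank

# on OpenIndiana, after the new install
zpool import tank
zpool status tank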
 
THANK YOU 10000 TIMES!

I was trying to install vanilla Solaris Express 11 and couldn't get it to work. I ended up using OpenIndiana and your guide... superb.

I would serve you with a BJ if possible, but alas, a simple thank you it will be :D
 
Thanks for the walkthrough! After some time fighting with it, I have ESXi, OpenIndiana, and napp-it (updated to 0.600) running.

So I just got this whole ZFS thing set up, but I can't access it.

I have both SMB and NFS enabled. If I entirely disable SMB, I cannot find the share at all. I believe there are known issues with Win 7 Ultimate x64.

What's the best (and easiest) way to access my pool from windows 7? :confused:
(I can't back up anything till I have access :( )

Edit:

If I right-click on the share, select properties and click the security tab, I do not have rights to make changes. Is there a fix for this? SMB is fine with me if it'll work. :D
 
Yes, you have discovered a sucky Win7 feature. If it is Ultimate, doesn't it have an NFS client available? I think so, but it isn't installed by default. Failing that, create an iSCSI target?
 
Yes, you have discovered a sucky Win7 feature. If it is Ultimate, doesn't it have an NFS client available? I think so, but it isn't installed by default. Failing that, create an iSCSI target?

Thanks. I didn't even realize 7 Ultimate was the only version with NFS.

Well, that's installed. I disabled SMB; NFS is still online. If I point to my share from Run ("//servername/folder"), it can't connect or even see the share. Do I need to leave SMB enabled anyway?

The explanation in Windows for using cmd to mount the NFS share is:

mount [-o Options] \\ComputerName\ShareName {DeviceName | *}

It's not working. I left out the device name portion as I don't know what that refers to.

I think my brain has shut down at this point. So much new information.
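For reference, DeviceName is just the Windows drive letter to map (or * for the next free one). A minimal sketch, assuming the pool/folder names used later in this thread and that NFS sharing is actually enabled on the server side:

Code:
mount \\servername\Hitachi_9TB_array\Storage Z: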
 
More specifically, what do you get from the command:

sharemgr show -vp

Here's what I get:

Code:
default nfs=()
smb smb=()
    * /var/smb/cvol
      c$=/var/smb/cvol   smb=(abe="false" guestok="false")   "Default Share"
zfs
    zfs/Hitachi_9TB_array/Storage smb=()
      Storage=/Hitachi_9TB_array/Storage

Pool = Hitachi_9TB_array
ZFS folder = Storage


SMB values make sense (it's disabled), but NFS, I don't know.
 
Storage is the name of the ZFS folder? What does this show?

zfs get sharenfs Hitachi_9TB_array/Storage
 
That gives the following values:

Name: Hitachi_9TB_array/Storage
Property: sharenfs
Value: off <------------------- this is the problem?:confused:
Source: Default
 
If I go under ZFS folder, zfsinfo, it also says the same thing: sharing is off.

So can you share within napp-it, or do you have to share via OI? Not sure how. Still new to this stuff.
 
Yes, on the ZFS Folders page. Each folder has a number of columns. One of them is 'nfs'. It should say off for your folder. If you click on it, you can change it to on.
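The same change can also be made from a shell if you prefer; a minimal sketch, using the pool/folder names from this thread:

Code:
# enable NFS sharing on the ZFS folder
zfs set sharenfs=on Hitachi_9TB_array/Storage

# confirm it now reports 'on'
zfs get sharenfs Hitachi_9TB_array/Storage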
 
As if that wasn't obvious! Yes, it's now changed to on and I can see it.

When I go to map the drive, Windows requests a login/password. I can only seem to set accounts for SMB in napp-it.


Under the user tab:
User and Group-management. Unix-System-user ex. root,admin,avahi,mysql,nfs,napp-it are not shown

Sorry for all the questions. I'm just not sure how to add this stuff. I am learning though! Almost there, just have to set an account associated with NFS and not SMB.
 
What level of security do you need? If none (trusted network), you can go to a shell and do:

chmod 777 /pool/folder (change to the real name). I think that should do it. If you need more restrictions, it's a little trickier...
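A minimal sketch of that command with the folder name from this thread; -R applies the mode to everything already inside the folder as well:

Code:
chmod -R 777 /Hitachi_9TB_array/Storage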
 
I'm telling you, as easy as this is, I'm hard headed today, I guess.

Security doesn't matter so I tried what you said:
chmod 777 /Hitachi_9TB_array/Storage

Nothing happened. If I type in chmod 777, I get a list of commands, but I obviously don't know how to apply it.

Does the command look correct (assuming the path is right, which it is)?

Edit: I have it working.

I set ShareSMB to "Storage,guestok". Not secure as I don't need a password, but it's fine. I don't even see how SMB affects NFS, but it works. Thanks!
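For the record, the equivalent from a shell would look roughly like this; a sketch assuming the sharesmb property on this OI build accepts the guestok option (which is what napp-it appears to set behind the scenes):

Code:
zfs set sharesmb=name=Storage,guestok=true Hitachi_9TB_array/Storage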
 
Oh, I bet I know. Set sync=disabled on the pool itself. ESXi does NFS in sync mode, so writes will be dog-slow unless you set sync=disabled. Not a good idea with a standalone box but for an all-in-one it should not be a hazard.
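A minimal sketch of the commands, assuming the pool name "VMstorage" from the earlier post:

Code:
# disable synchronous write semantics on the pool (inherited by its datasets)
zfs set sync=disabled VMstorage

# verify (possible values: standard | always | disabled)
zfs get sync VMstorage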
 
The alternative is to shell out for an SSD for a ZIL that you can put on the passed-thru HBA, but that is silly for an all in one.
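If you did go that route, adding a dedicated log device to the pool would look roughly like this (hypothetical device name; check the real one with format first):

Code:
zpool add VMstorage log c4t1d0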
 
I disabled sync on the pool/dataset and rebooted the SAN. I do get better write numbers now:
win764e1000gnosync.png


Looks like you were right on with your guess. Thanks a bunch!

However ... is this the maximum I can expect from this setup? I did some additional CrystalDiskMark runs with my existing VM hosts:

VMsystem #1:
ESXi 4.0 on a X8SIL-F with XEON 3460, 2x 4GB regECC
LSI 84016E (with BBU): 4x WD Raptor 150GB as RAID10
crystalmark22esxi40.png


VMsystem #2: (this is going to be replaced with my all-in-one)
VMware Server 2.0.x using CentOS 5.3
Tyan Tempest XT5000i with 2x XEON 5420, 8x 2GB FB-RAM (ECC)
Promise SuperTrak EX12350 (with BBU): 5x WD Raptor 300GB as RAID10 + spare
crystalmark22centos.png


Even though these are 'older systems' and their HW is not quite on par with the new all-in-one server, their numbers are still better (4x on the system with the LSI controller and the 150 GB Raptors). I expected somewhat better results from the all-in-one setup; the system it is supposed to replace has slightly better numbers already. :(
Is this maybe the effect of the BBUs? Would a ZIL on my all-in-one get me to the numbers that I see on VMsystem #1?

What numbers do you guys get?

-TLB

-------------------------------------------------------------------------------------------
I added some standard desktop/notebook systems for comparison:

My main rig (see sig):
mainrig.png


A 320 GB notebook drive:
win764320gbnotebook.png
 
sync=disabled should eliminate the need for a ZIL (in almost any case...). The variations may also depend on the guest OS storage drivers?
 
Not that I know of. I will say that an SSD for the OS drive for OI is gross overkill. Like many Unix systems, OI does not do very much reading/writing with the boot drive. That said, if you have the SSD and are going to use it, I am not aware of anything to worry about.
 
Has anyone had a chance to try SmartOS yet? Considering a rebuild of my home server, might be able to integrate my separate PFsense router box into the same hardware as my file server.

Proposed build:

Total cost of upgrade is $660, but it'd give me the ability to ditch the extra Atom board currently attached to the top of my case, and maybe improve over the performance of my S3210SHLC/Q6600/8GB setup. Plus it gives me room for 64GB of ram now, or 128GB when 16GB sticks get reasonable :D

Any comments on this build? I know it seems KVM isn't working on AMD yet, but that might happen at some point, and I already have hardware that works, so there's no loss of functionality.
 
Gea

Anything to look out for when installing an SSD as the OS drive?
Thanks

For speed reasons, you do not need an SSD when you compare it
with a cheap 24/7 laptop disk.

For reliability, my experience with desktop SSDs is similar to
desktop disks, so an OS Raid is mostly suggested if you need really good uptime.

But in theory an SSD should have much lower failure rates than spindle disks.
I hope that the new 20GB SLC Intel 311 SSD may be an ideal write cache and boot disk for
non-Raid configs (I will try it with my next server in a non-Raid boot config).

http://download.intel.com/design/flash/nand/325502.pdf
 
Hi Gea,


Anyway, I plugged in an Intel G2 80GB which I got cheap... but I forgot to export the settings from the previous OS drive.

Now, after importing the two RAID1 sets and joining my AD,

I can't access the SMB shares via my domain administrator account.

Any idea?
 
Hi Gea,


Anyway, I plugged in an Intel G2 80GB which I got cheap... but I forgot to export the settings from the previous OS drive.

Now, after importing the two RAID1 sets and joining my AD,

I can't access the SMB shares via my domain administrator account.

Any idea?

- check ACL (SMB login as root), or
- add a mapping domainadmin=root (idmap add winuser:administrator unixuser:root)

- rejoin domain
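A rough sketch of those last two steps from a shell (the domain name is a placeholder, adjust to yours):

Code:
# map the domain administrator to the local root account
idmap add winuser:administrator unixuser:root

# rejoin the AD domain
smbadm join -u administrator yourdomain.local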
 
When you guys run CrystalDiskMark, you really need to pick a test size larger than 100MB. Otherwise you're testing your controller/RAM cache, not your hard drives.
 