First of all, I want to thank _Gea for all this.
I am in the process of building a file server (it has to be very stable).
I have an LSI 9750-8i controller at hand, but I know about the problems with RAID cards.
I asked LSI if there is an IT firmware for it, but they said there is not.
So my question is: what exactly is the problem with using a RAID card for ZFS? (I can create single disks with the controller, but they are somehow different.)
Would you recommend using it in a stable environment?
What else should I do with it? (It was used for a 4-SSD RAID 0, but today a single SSD is faster than that.) Are the chances good for selling it on eBay?
Thanks for your answers.
And another question regarding my current NAS:
Is it possible to reset all permissions on all files? (I messed up.)
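For reference, a minimal sketch of a recursive reset to a simple, uniform ACL using Solaris ACL sets (the dataset path /tank/data is a placeholder); the 0.9 changelog below also lists a recursive ACL reset in the acl extension:

  # reset every file and folder below /tank/data to a plain owner/group/everyone ACL
  /usr/bin/chmod -R A=owner@:full_set:allow,group@:read_set:allow,everyone@:read_set:allow /tank/data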
New question about Solaris 11.1 / napp-it
I just added a new ZFS folder. I set NFS sharing to true, but the folder was not actually exported. I tried to change NFS to "off" in the ZFS Folder menu of napp-it, but nothing changed.
If I share it manually using the shell "share" command, it gets shared perfectly.
Did Oracle do something that breaks your scripts for NFS?
Thanks in advance for any help. I know you are not a real fan of Oracle's version of Solaris and I do appreciate any time you spend on it.
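In case it helps: Solaris 11.1 reworked share administration and moved NFS sharing to the share.nfs dataset property (replacing the 11.0-style "zfs set share=..." syntax), which is exactly the kind of change that can break older scripts. A sketch, with tank/folder as a placeholder:

  # Solaris 11.1 property-based NFS sharing
  zfs set share.nfs=on tank/folder

  # legacy manual share, as used above
  share -F nfs -o rw /tank/folder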
changelog: 0.9 pre1
- improved disk detection (hot plug)
- improved performance for disk details and smart infos
- improved smart detection; smart: start short/long checks
- support for disk partitions
- disk detection with disks like c3d1 fixed
- UNAVAIL disk: replace with GUID (disk not working)
- job management: edit timetable and parameter; code and logs optimized; all actions in separate files
- monitor extension: topmenu edit = display CLI commands, return values and hash values without reload; log actions, realtime mini log, realtime page update; disable monitor via topmenu "Mon" for best performance
- replication extension: code and logs optimized, speed improvements with buffered transfer
- acl extension: new feature: reset ACLs recursively to current folder settings; bugfix with resetting files
- manual network settings: bugfix
- Solaris 11.1: NFS sharing and PAM settings fixed
- OmniOS: supported
- NexentaCore: no longer supported
I'm looking forward to testing the disk partition piece. That's the one thing that forced me to drop to CLI calls for ZFS. We're going to be slicing up some new SSDs for ZIL, and this should save me some time and fat fingers.
I believe Gea in the past has recommended the E1000 adapter.
I did ask some questions about multiple adapters in the past, and it seems you can go with one adapter unless you are running across different networks. My case was setting up storage on an isolated network. If you go with the all-in-one solution, you do get some networking advantages by setting VMware to use multiple ports in an active/failover scenario.
We did purchase another LSI 9211-8i and had a dog of a time cabling it to the Dell R720xd SAS backplane.
Can you expand on this problem?
Oh, so this is an HBA problem when presenting disks to the OS as JBOD (passthrough), and not a Dell MD JBOD array problem.
Is there a good doc for network setup for the all-in-one solution? I have the all-in-one server with 4 NICs, and 2 other servers with dual NICs that just vMotion the VMs off the NFS datastore.
Just trying to maximize my I/O with the VMs.
The key component is the ESXi virtual network switch.
Your VMs (storage or others) only need one virtual NIC that is connected to that switch. Even if you use the e1000 ESXi virtual NIC, you can get several Gbit/s because it is software, not physical cabling. With the vmxnet3 ESXi driver, you can get about twice the performance. To separate networks, you can use VLANs to your network switch.
Within ESXi you can set up link aggregation over your physical NICs, but be aware that the other side of the connection (switch or server) must have the same settings.
The other option is to use one physical NIC per server or IP connection and avoid link aggregation.
Depending on settings, a single transfer will not be faster with link aggregation, only concurrent connections. Mostly you should think about 10 GbE when 1 Gbit/s is not fast enough. That gives you the performance without tweaking and problems.
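As a concrete sketch of that setup (ESXi 5.x esxcli; vSwitch1, vmnic1 and VLAN 10 are placeholder names), a separate storage vSwitch with a tagged port group could be created like this:

  # create a vSwitch, uplink one physical nic, add a storage port group on VLAN 10
  esxcli network vswitch standard add --vswitch-name=vSwitch1
  esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
  esxcli network vswitch standard portgroup add --portgroup-name=Storage --vswitch-name=vSwitch1
  esxcli network vswitch standard portgroup set --portgroup-name=Storage --vlan-id=10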
If I'm setting up two vSwitches, 1 for vMotion and 1 for everything else, do I want to set up LACP on the physical switch? 4 ports for the server with the quad NIC (which contains the OI VM and other vMotioned VMs), and 2 ports for the servers with dual NICs (which just run vMotioned VMs).
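One caveat worth verifying: ESXi standard vSwitches do not support LACP; they pair the "route based on IP hash" policy with a static EtherChannel on the physical switch, while true LACP requires a distributed switch. A sketch of the ESXi side (vSwitch0 is a placeholder):

  # match a static EtherChannel on the physical switch with IP-hash load balancing
  esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash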
I'm eager to test 0.9, but can you confirm whether I need to rerun the AFP installer after installing napp-it 0.9?
Also, by doing so, will I lose any AFP/SMB share information or anything else from napp-it 0.8x?
Gea, if you were starting a new config, would you go with OmniOS (not sure how close to ready it is) or stick with OI? I've just downloaded OmniOS and I am going to give it a try, but I wanted to see if you had a good feel for its overall stability.
I may have come across an issue with the dev version. I partitioned a drive into 25% slices (a 10 GB VirtualBox drive) to simulate splitting out my ZIL SSDs. I have two of the drives mirrored for ZIL, and I tried to replace one partition with a remaining one, but each attempt to replace c3t2d0p1 with any of the remaining slices from either 10 GB disk results in an error that the drives are not the same size.
The actual error is:
Could not proceed due to an error. Please try again later or ask your sysadmin.
Maybee a reboot after power-off may help.
cannot replace c3t2d0p1 with c3t2d0p3: device is too small
It looks like the actual partitions are not the same size; the mini log shows the partition sizes as:
In this case I would not be able to replace my ZIL partitions unless I repartition the disk into unequal sizes such as 26/26/26/22, which would at least let me carve out three equal ZIL partitions and keep some free space at the end for wear.
Is there a better way to handle this scenario?
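One way to see the discrepancy from the CLI (device names are from the post; the pool name tank is a placeholder): dump the fdisk tables and compare exact sector counts before attempting the replace:

  # print the partition tables (sizes in sectors) of both disks
  fdisk -W - /dev/rdsk/c3t2d0p0
  fdisk -W - /dev/rdsk/c3t3d0p0

  # the replace only succeeds if the new partition has at least as many sectors
  zpool replace tank c3t2d0p1 c3t3d0p1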
I know it's a lot to ask, but can you do a tutorial on how to set up an all-in-one with OmniOS & napp-it, like the current one for OpenIndiana? I just built a box on ESXi & OI, but I would love to make it future-proof, and since I don't need a full desktop OS like OI, OmniOS sounds perfect.

You may answer it yourself:
- both rely on Illumos
- OI is a general purpose OS incl. desktop use, while Omni is a minimal server OS focused on NAS/SAN
- Omni sells commercial support, and the whole company relies on a stable OS
- Omni is much closer to Illumos development than OI
- Omni has a stable release and offers betas every few weeks
Yes, I have a good feeling about OmniOS for a NAS or SAN, but I hope the general approach of OI as a general purpose OS has a future - that may need more developers, or a major enterprise behind it sponsoring development to speed it up.
All of my servers currently run OI, but for newer setups I will move to/try OmniOS, and that's the reason napp-it 0.9 supports Omni.
You can always replace a disk with another that is larger, so replacing 3 with 1 is possible. Replacing 1 with 3 is not possible because 3 is smaller than 1.
So it's only a matter of using the smaller partitions first and adding mirrors/replacing with the larger ones, just as with regular disks. The basic problem is that you are partitioning based on percent values, partition by partition. Using the exact value gives you the problem that usually some bytes are missing for the last partition.
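Concretely, with the partition names from the post (the pool name tank is a placeholder), the size rule plays out like this:

  # works: p3 (smaller, in the pool) is replaced by the larger p1
  zpool replace tank c3t2d0p3 c3t2d0p1
  # fails with "device is too small": p1 cannot be replaced by the smaller p3
  zpool replace tank c3t2d0p1 c3t2d0p3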
I may check if it is possible to set 4 x 24% where the last partition is also exactly 24%; currently the last one takes the remaining space.
I agree with what you are saying, but from within the interface there is no way to tell that one partition was/is larger than another. They all read 2.1 GB in size, so I had no other indication of the discrepancy.
Maybe another option on the partition screen would be to input the four partition sizes and then just leave the remaining space unpartitioned?