OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Thank you very much :)

Just one more thing: on my Mac I have to add a network location manually to see my shares with the default SMB service from OpenIndiana, and on my XBMC I also have to add the location manually since it doesn't show up under the Windows SMB shares option. Is this normal, or is there some way to make it broadcast to my Mac without AFP, and to my Linux HTPCs?

Cheers
 

I suppose you need to use IPs, unless you set up a commonly used WINS or DNS server for SMB, to have SMB browsing between Mac/Windows/Linux.
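If you do end up running a WINS server, the OpenIndiana kernel CIFS service can be pointed at it; a minimal sketch (the IP address is just an example):

Code:
sharectl set -p wins_server_1=192.168.1.10 smb
sharectl get smb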
 
Thanks for the info. I'm just trying to add it through the XBMC Windows share option; it searches, and a snoop on the OpenIndiana server shows success, yet after thinking for a bit XBMC will say "share not available".

If I go and add it manually with "add network location" and enter 192.168.1.95 (in my case), it mounts fine.

I need to look into the WINS and DNS server options to see if they will help.

Thank you very much for the help. Either way it's no biggie; I just wanted to make sure I didn't mess up something in the config files or so.

I added another PC to the network and set up an SMB share there, then did sudo pfexec smbadm join -w WORKGROUP, and now it works by default on XBMC. Any special reason why this works? I'm going to set up an SMB share on the HTPC itself so OpenIndiana can piggyback on it all the time; while not perfect, it will do until we can figure out why :eek:.
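For reference, the workgroup join can also be done and verified on the OpenIndiana box itself; something along these lines (assuming the kernel CIFS/SMB service is used):

Code:
smbadm join -w WORKGROUP
smbadm list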

Also, the first time I connect to the server via SSH after a reboot or install, it takes around 6 seconds to show the password prompt. Is this normal? After that first time it is instant until the next reboot.
 
Don't wanna be a pain, but I'm still desperately stuck with my aggregate not coming up after a reboot. I did all the config according to the documentation from Oracle:
Code:
ipadm create-ip aggr0
ipadm create-addr -T static -a 192.168.1.224/24 aggr0/v4
but after a reboot, it still looks like
Code:
root@nas1:~# ipadm show-addr
ADDROBJ           TYPE     STATE      ADDR
lo0/v4            static   ok         127.0.0.1/8
net3/_a           static   ok         192.168.1.222/24
lo0/v6            static   ok         ::1/128
aggr0/v4          static   disabled   192.168.1.224/24

I can then enable it using ipadm enable-if -t aggr0, but as "-t" indicates, this is only temporary.

If nobody sees an error in what I've configured, does anyone know if I can script the "ipadm enable-if" command so it is run like a system service or something? I'm really stuck here; even though I've read tons of documentation and blogs, I can't find any more clues...

Thanks a lot!
Cap'
 

Can't help with aggregation (I skipped it in favour of faster and less problematic 10 Gb), but you may enable it at boot time, for example via an init script like I do with napp-it.

see http://discuss.joyent.com/viewtopic.php?id=12802
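A minimal legacy rc script along those lines might look like this (the path, interface name and shell are only examples, and the script must be executable):

Code:
root@nas1:~# cat /etc/rc3.d/S99aggr
#!/sbin/sh
# re-enable the aggregate interface at boot
ipadm enable-if -t aggr0
root@nas1:~# chmod +x /etc/rc3.d/S99aggr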
 
Is there some way to remove the spam messages on the console? I get a whole bunch of "tty=unknown ... napp-it" entries and the like. I know it's harmless, but somehow I would rather have it all tidy. Is there some entry I can make in /etc/sudoers or so to make it more zen-like?
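One commonly suggested sudoers tweak for this kind of console noise is to stop sudo from logging via syslog altogether; edit with visudo, and note this silences all sudo syslog messages, not just napp-it's:

Code:
Defaults !syslog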

Cheers :)

By the way, did anyone successfully compile the par2 tab version on any of the Solaris systems?
 
I am building my first ever home server. Wanted to know which is the most appropriate OS for me if I am going to use Napp-It.

OpenIndiana
Or
Solaris 11 Express.

It of course has to be free for home usage and well integrated with Gea's NappIt. I noticed some posts above about spin-down issues with Solaris 11, hence the question.

BTW, due to the hard-drive shortage I am going to be using the Seagate Barracuda Green 2TB 5900 rpm drives since that is the only one I could find at a reasonable price during a Best Buy Sale. I do not think they are Gea's favorite, so any settings which will help me work with them will be appreciated.

I am going to be setting up a 5-disk array with 1 disk for parity and 1 disk as a spare (leaving about 6TB of total storage). This is going on a Z68 motherboard running a Pentium G620 (Sandy Bridge) with 8GB of RAM. I have two HighPoint 620 controllers which add 2 SATA ports each, which I might use for future expansion. I will be using an Intel dual NIC card.
 
I would go with OpenIndiana. I don't think Solaris Express is available anymore; I think you can only get Solaris 11 now, which doesn't have full support in napp-it yet.

As for the HighPoint 620, check if it's supported in OI.
 
Gea

Any thoughts on the ZFSonLinux project? With all the uncertainty associated with Solaris it is perhaps time to review the future trajectory. The biggest challenge is perhaps going to be the HCL. As Solaris becomes more Oracle-proprietary, the number of vendors willing to support it will go down. Support for consumer-grade hardware is going to be significantly affected. The HighPoint 620 is perhaps a good example of a controller card which works on Linux/BSD but not on Solaris. It offers a very convenient low-cost way to add some more ports for folks who use the motherboard ports primarily, which is the typical home server setup.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Anyway, I downloaded the live USB version of OI. It booted past the motherboard BIOS and the GRUB screen, then displayed the "oi_151a 64 bit....Solaris.. Oracle" boot ROM type message and just hung there. Note that I do not have any hard disk attached, since I presume this can run directly from RAM (8GB). The motherboard is a Gigabyte Z68, the CPU a Pentium G620 (Sandy Bridge). Tried both the normal mode and the text mode (server) but no luck either way...
^^^^^^^^^^^^^^^^^^^
Had to disable the onboard USB 3.0 controller; also disabled the serial port controller to be safe, and I got into OI. There is one Intel Cougar Point "HECI" device for which it cannot find a driver. Found some info here: http://software.intel.com/en-us/blo...ication-and-intel-me-module-fw-update-client/ and http://software.intel.com/en-us/forums/showthread.php?t=82888 and some downloads here http://software.intel.com/en-us/articles/download-the-latest-intel-amt-open-source-drivers/
^^^^^^^^^^^
Found this package http://pkg.openindiana.org/dev/en/search.shtml?token=HECI&action=Search in the OI repository, but I'm not sure how to install it on a Live USB disk?
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The HighPoint 620 uses a Marvell 88SE91xx series adapter (88SE9125), and according to the OI Community HCL there are issues and only 32-bit mode seems to work :(
HighPoint claims support for Linux kernel 2.6.19 and FreeBSD; kind of sad to see no support in OI.

http://www.highpoint-tech.com/USA_new/cs-series_r600.htm
 
It has probably been asked before, but anyway.
Question:
Is it possible to change the block size of a LU through napp-it, or is it possible to choose a block size when creating a LU? I'm not seeing the option, but maybe I'm overlooking it?

By default the LU is created with 512kb; let's assume I'd want to change it to 64kb or even 8MB. Is that possible with the GUI? Same question for creating a LU with a different block size in the first place.

Or is it command line only?
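If it does turn out to be command-line only: as far as I know the block size is a COMSTAR LU property that has to be set when the LU is created, roughly like this (the zvol path and size are just examples):

Code:
stmfadm create-lu -p blk=4096 /dev/zvol/rdsk/tank/lu1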

Thank you!
 
Thanks, _Gea, for the tip.
I have now created
Code:
vi /etc/rc3.d/S99aggr
ipadm enable-if -t aggr1
however, I am not sure whether I will see some side effects later, i.e. I don't know if the interface comes up early enough. We'll see.

So I have my aggregate coming up automatically now; I have also mirrored my boot drive and snapshotted my BE. It's time to install napp-it, I guess.

Thanks to all for helping me - I'll post my experiences here once I am a step further.

Best,
Cap'
 
Gea

Any thoughts on the ZFSonLinux project? With all the uncertainty associated with Solaris it is perhaps time to review the future trajectory. The biggest challenge is perhaps going to be the HCL. As Solaris becomes more Oracle-proprietary, the number of vendors willing to support it will go down. Support for consumer-grade hardware is going to be significantly affected. The HighPoint 620 is perhaps a good example of a controller card which works on Linux/BSD but not on Solaris. It offers a very convenient low-cost way to add some more ports for folks who use the motherboard ports primarily, which is the typical home server setup.

I can't say anything about the stability of ZFSonLinux, but I hope they will succeed like ZFS on FreeBSD did. ZFS is the most advanced available filesystem. If you need to use apps only available on FreeBSD or Linux, it would be a dream to use ZFS underneath.

But I would always prefer an OS (kernel + base tools + distribution) completely developed and distributed by one organisation, just like Apple OS X, Windows and Solaris. I know the Solaris family is divided into two development forks due to the ignorance of Oracle (I still hope they may find a common base with Illumos). But even if that doesn't happen, there is a future for a free and open Solaris clone. Indeed, besides encryption (which was nearly ready in the old OpenSolaris), there is also serious development outside Oracle. Look at KVM: it's only available in Illumos, not Solaris 11. There are also a lot of major ZFS developers leaving Oracle and now working at Illumos-oriented companies.

The main issue is usability. Although it is behind Windows and far behind OS X, I hope they (mainly Illumos) will improve. Even simple tasks like setting a fixed IP can be a mess in OI and a nightmare in Solaris 11. I encourage everyone to ask at illumos.org for better usability in such base settings.

Beside that, I consider the usability of Solaris in basic SAN or NAS use cases superior to Mac OS X, Windows or Linux, especially due to the integration of NFS, SMB (with Windows-compatible snapshot and ACL features) and iSCSI sharing into ZFS, other features like DTrace, Crossbow or encryption, and the whole from-one-hand experience.

Regarding hardware compatibility, you must treat it like OS X on a PC. While it may work more or less on most hardware, it doesn't work at all on some hardware, and it's really trouble-free only on some hardware. Whenever possible, use only "best to use" hardware with Solaris, even if it's a little more expensive.

Just my thoughts
 
One of my SMB groups just disappeared in napp-it. On the user page it says: "failed to find An error occurred while retrieving group data. (invalid name)". Where did it go and how can I get it back? It's still listed under Unix groups.
 
Folks:
I keep on hearing about "mirroring" the boot disk. What exactly do you mean by that?

While I am experimenting I am running it off a Live USB OpenIndiana 151 flash drive.
 
It's about a software-based RAID 1, using ZFS and Solaris means to mirror it, as opposed to hardware RAID controllers doing it. There's a lot of controversy about hardware vs. software RAID, and Solaris ZFS is often mentioned as a better alternative to hardware RAID. Actually _Gea, who developed napp-it, recommends flashing RAID controllers to be plain HBAs (IT firmware) and letting Solaris do the RAID. And of course it's meant to give your installation more resilience: in case your boot disk fails, you can still boot from the mirror, repair the boot disk ("replace" in most cases), mirror back, and you're all set to continue working. Of course, this only makes sense if you do a permanent installation. As long as you're experimenting with a Live USB stick, don't worry about this.
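For a permanent install, the actual mirroring step is small; a sketch (the disk names are examples, and on a GRUB-based OI you also need to put the boot loader on the second disk):

Code:
zpool attach rpool c2t0d0s0 c2t1d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0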
 
One of my SMB groups just disappeared in napp-it. On the user page it says: "failed to find An error occurred while retrieving group data. (invalid name)". Where did it go and how can I get it back? It's still listed under Unix groups.


Either go back to the last boot environment (system snap)
or try to re-create the SMB group.

Attention:
SMB groups are Windows-compatible groups.
They are independent from Unix groups.
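If you go the re-create route, a rough sketch of the commands (group and user names are just examples):

Code:
smbadm create mysmbgroup
smbadm add-member -m someuser mysmbgroup
smbadm show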
 
Thanks for the explanation. This actually brought up an interesting point about the portability of a ZFS RAIDZ1 across boxes. The Intel RAID I have worked with in the past is supposedly pretty good about recovering arrays after certain failures. I don't know how it works, whether it saves the metadata in the BIOS, on the boot disk or on the disk array itself.

In the case of OI/ZFS, does all the information reside on the boot disk, and will the loss of the boot disk result in the loss of the ZFS array?

How easy or difficult is it to create a "backup" type image on an alternative disk/machine on the network and restore from that instead of using a local mirror with OpenIndiana? I ask this because a mirror of the boot disk on the same box implies some kind of hardware redundancy of the boot disk and fault tolerance.

My goal is to have a low-cost solution without unnecessary HBAs. In the future I would also like to virtualize the box and use it for a WHS/ZFS combo NAS.


It's about a software-based RAID 1, using ZFS and Solaris means to mirror it, as opposed to hardware RAID controllers doing it. There's a lot of controversy about hardware vs. software RAID, and Solaris ZFS is often mentioned as a better alternative to hardware RAID. Actually _Gea, who developed napp-it, recommends flashing RAID controllers to be plain HBAs (IT firmware) and letting Solaris do the RAID. And of course it's meant to give your installation more resilience: in case your boot disk fails, you can still boot from the mirror, repair the boot disk ("replace" in most cases), mirror back, and you're all set to continue working. Of course, this only makes sense if you do a permanent installation. As long as you're experimenting with a Live USB stick, don't worry about this.
 
I can't say anything about the stability of ZFSonLinux, but I hope they will succeed like ZFS on FreeBSD did. ZFS is the most advanced available filesystem. If you need to use apps only available on FreeBSD or Linux, it would be a dream to use ZFS underneath.

Regarding hardware compatibility, you must treat it like OS X on a PC. While it may work more or less on most hardware, it doesn't work at all on some hardware, and it's really trouble-free only on some hardware. Whenever possible, use only "best to use" hardware with Solaris, even if it's a little more expensive.

_Gea, thanks for the information and the thought process behind it, especially about the different efforts to support Solaris.

As you mentioned, FreeBSD also has a ZFS port. How easy or hard would it be to support napp-it on FreeBSD? My motivation is primarily HCL driven, since most hardware vendors do support FreeBSD if they support Linux.
===============
Realized that sub.mesa also has the ZFS Guru project, which creates a similar web GUI for FreeBSD
===============


To give an example, right now I can get an i5 2400 for $130 at MC, a Q67 board for around $130, and some inexpensive Marvell-based HBAs for $20-30. This CPU/MB combo supports VT-d and other virtualization stuff. The Marvell chipset is used by a huge number of motherboard vendors for SATA 3.0 support, so it's not just a cheap knock-off.

Equivalent server-class stuff costs 50-100% more (an E-series Xeon, a C-series chipset board, an LSI SAS HBA, etc.).

For a home/SOHO user the server-class stuff becomes a bit of overkill, especially with the current hard-drive prices precluding most people from building those 10-disk arrays "just for fun".
 
Thanks for the explanation. This actually brought up an interesting point about the portability of a ZFS RAIDZ1 across boxes. The Intel RAID I have worked with in the past is supposedly pretty good about recovering arrays after certain failures. I don't know how it works, whether it saves the metadata in the BIOS, on the boot disk or on the disk array itself.

In the case of OI/ZFS, does all the information reside on the boot disk, and will the loss of the boot disk result in the loss of the ZFS array?

How easy or difficult is it to create a "backup" type image on an alternative disk/machine on the network and restore from that instead of using a local mirror with OpenIndiana? I ask this because a mirror of the boot disk on the same box implies some kind of hardware redundancy of the boot disk and fault tolerance.

My goal is to have a low-cost solution without unnecessary HBAs. In the future I would also like to virtualize the box and use it for a WHS/ZFS combo NAS.
Disks with ZFS pools on them are pretty much completely portable between systems running ZFS as long as you meet a couple of conditions:

- The system you move it to supports the rev level of ZFS you are using
- The system you move it to supports the underlying partition table format on the drives.
- The drives were written natively and did not have any additional RAID headers added by the HBA.

This last condition is why _Gea recommends that you always use bare HBAs or RAID controllers flashed to act as HBAs.

The second condition (partition table formats) has caused some trouble moving between FreeBSD based systems like ZFSguru and the various Solaris derivatives (Solaris, Solaris Express, OpenSolaris and OpenIndiana).

As long as you meet these conditions your arrays should be portable between systems. ZFS even includes commands to ensure that the pools are closed, stable and ready to port (zpool export) and commands to bring a pool online using the data on the pool drives themselves (zpool import).
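In practice that boils down to (the pool name is just an example):

Code:
zpool export tank      # on the old box, before pulling the disks
zpool import           # on the new box, lists pools found on attached disks
zpool import tank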
 
Good post, piglover. To expand on one point: there are actually two rev levels, the fs and the pool. Both of those need (AFAIK) to be compatible.
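Both are easy to check before moving disks around, e.g. (pool/filesystem names are examples):

Code:
zpool get version tank
zfs get version tank
zpool upgrade -v       # shows the pool versions this release supports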
 
_Gea, thanks for the information and the thought process behind it, especially about the different efforts to support Solaris.

As you mentioned, FreeBSD also has a ZFS port. How easy or hard would it be to support napp-it on FreeBSD? My motivation is primarily HCL driven, since most hardware vendors do support FreeBSD if they support Linux.
===============
Realized that sub.mesa also has the ZFS Guru project, which creates a similar web GUI for FreeBSD
===============

napp-it as a tool to create a web UI will run on FreeBSD, but all system settings, share settings (only on Solaris do you have the kernel CIFS server) and iSCSI handling are completely different, so no chance of napp-it on FreeBSD from my side.

With the hardware, use Solaris-compatible parts. Mainboards are mostly not a problem, but NICs and disk controllers are. The always-suggested LSI and Intel parts are the best choice, even on Windows/Linux, and not too expensive if you care about quality;
otherwise use onboard SATA (AHCI) with up to 6 ports, mostly enough at home.

For VT-d I would use a cheap uATX Intel server-chipset board.
Even if the mainboard/CPU supports VT-d, there are often stability/BIOS problems with desktop chipsets. The premium of 50 Euro is well spent.
 
Good post, piglover. To expand on one point: there are actually two rev levels, the fs and the pool. Both of those need (AFAIK) to be compatible.

Second that. Thanks piglover.

======================

_Gea: Thanks again.

I will be using a dedicated Intel dual-port card, so that should not be an issue. The disk controllers certainly can be. I would love to just stick with the 6 onboard ports and be done with it ;) Each extra card is more power and another point of failure.

I appreciate your logic. It's just that when you move to server-grade stuff the pricing not only moves up but the discounts disappear. ECC memory is about 2x the price of non-ECC memory. I know that if you are using ZFS you should probably use ECC, but for my usage the reliability of ZFS is a free bonus while ECC is an extra cost. And in general RAM failures are likely to be orders of magnitude rarer than hard-disk related failures.

I am also curious whether these OSes can log how often ECC corrections kick in, and whether the errors are typically single-bit or hidden even-bit errors which cannot be detected by single-bit correction schemes.
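On the Solaris side, corrected memory events do end up in the fault management logs, so something like this should give an idea of how often they occur (assuming the platform's memory controller is supported by FMA):

Code:
fmstat
fmdump -eV | grep -i mem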
 
ECC is somewhat more expensive, but it's not as bad as you make it sound. Unless you are getting a crap ton of it? According to Newegg, 4GB of desktop DDR3/1333 is $30 and 4GB of server ECC DDR3/1333 is $40.
 

Well, I bought tons of DDR3: 2x4GB = 8GB for $30-$35.
Very likely a virtualized box running napp-it + WHS will want at least 8GB if not 16GB.

Even now it is available for $35 at Newegg; the ECC price is 2x. And on 16GB that translates to another $80-$100.

http://www.newegg.com/Product/Produ...69&IsNodeId=1&bop=And&Order=PRICE&PageSize=50
 
I guess it depends on the brand. I always get Crucial; that is the brand that had the $40 vs. $30, not 2x.
 
_Gea:

I have ordered a C206-based P8B WS motherboard. It supports VT-d, does not have any PCIe lane switching, and is a good complete board with ample PCIe slots and very flexible support for 1155 CPUs and both ECC/non-ECC RAM.

I have also ordered a pair of used IBM BR10i SAS3082E-R RAID 44E8690 controller cards.

I have the option of picking up an i5 2400 for about $140 delivered. It supports VT-d and almost every other technology except Hyper-Threading. The Xeons will be about 50% more expensive for the base model; the E3 models with Hyper-Threading are almost 150% more. Is this setup sufficient, or would you still prefer a Xeon over an i5?
 

More RAM, e.g. 16 GB instead of 8 GB, is nearly always more important than slightly better CPU performance.
 

_Gea:
Yes, I have enough RAM to put in here. The only question right now is whether the i5 is good enough or whether it HAS to be a Xeon. This i5 supports every Intel feature but Hyper-Threading, and an HT-supporting E3 Xeon is 2x the cost!
 
I'm running my SAN with an i3 (cheapest possible) without problems... although I'm running OI only, no ESXi, since the i3 doesn't support VT-d.

Matej
 

The issue is the potential for future virtualization. I have a G620 sitting here which will work very well at very low power.
 
_Gea:
Yes, I have enough RAM to put in here. The only question right now is whether the i5 is good enough or whether it HAS to be a Xeon. This i5 supports every Intel feature but Hyper-Threading, and an HT-supporting E3 Xeon is 2x the cost!

Good enough is relative; you must decide yourself.
The features it lacks compared to a Xeon are Hyper-Threading and ECC RAM.
The first may improve performance of some apps a little, while the second helps with problems caused by RAM errors (crashes or data errors).
 

My concern is compatibility.

I am aware of the issues related to Hyper-Threading and ECC, but they are not really significant in my calculus.
 

There is no other relevant difference between the two.
 
Hello, I have a question. I had a single raidz1 vdev consisting of five 2TB hard drives. I recently added a 9211-8i card and five more hard drives for another raidz1 vdev. The original vdev (or so I thought, the pool) was configured with ashift=12. I added the 5-disk vdev through OI this time, as I thought it would automatically keep ashift=12, and that didn't work. So now I am trying to figure out how I can remove those five hard drives I just added (or rather the incorrectly configured vdev) and properly add them to the pool.

any help?

EDIT: I guess I will go ahead and transfer the data on my pool to some other 2TB hard drives I have lying around, and use one from the vdevs to get enough space to transfer temporarily. Then I need to destroy the pool and create the two vdevs one by one. Great.
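For anyone hitting the same thing: the ashift of each vdev can be checked with zdb before (and after) rebuilding, e.g.:

Code:
zdb -C tank | grep ashift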
 
Yo,

Well, I have been running napp-it with Solaris 11 for more than a month now on my new server.
Very stable, although it's still a test server. The only thing I'm still missing is working power management with disk spin-down, as this server doesn't have to be on all the time.
Anyway, @gea: great job! Glad I found this, as I was looking for an alternative to a NAS.
I had tried unRAID and FreeNAS but they couldn't satisfy my needs (too slow)! I will stick with ZFS as it is a fast and cheap alternative to buying a decent NAS. I have three IBM M1015 cards in my server which I bought cheap on eBay and reflashed with LSI IT firmware. Probably the best alternative to an expensive hardware RAID card.
I'm running eight 2TB drives now in one raidz1 pool for testing, and when HDD prices stabilize again I will add a second raidz1 pool to my Norco 4224.
I just hope I get that HDD spin-down working by then...

gr33tz & thanks
 
Too bad you didn't go with OpenIndiana; power management is working there, and I don't think Solaris 11 is better in any way. It only has additional ZFS encryption support; everything else should be the same.
 
OK. Dumb question: why is my pool so much bigger than the filesystem?


Code:
root@nas5:~# zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  9.06T  4.56T  4.51T    50%  1.00x  ONLINE  -

root@nas5:~# zfs list -r tank
NAME             USED  AVAIL  REFER  MOUNTPOINT
tank            2.70T  2.58T   334K  /tank
tank/rootsnaps  11.8G  2.58T  11.8G  /tank/rootsnaps
tank/snapshots   356G  2.58T   356G  /tank/snapshots
tank/work       2.34T  2.58T  2.34T  /tank/work


I got this alert message from napp-it, and I'm not really sure why:

Code:
Alert/ Error on nas5 from  :

-disk errors: none

------------------------------
zpool list
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool1  29.8G  4.10G  25.6G    13%  1.00x  ONLINE  -
tank    9.06T  4.56T  4.51T    50%  1.00x  ONLINE  -

Pool capacity from zfs list
NAME	USED	AVAIL	MOUNTPOINT	%
rpool1	6.22G	23.1G	/rpool1	79%
rpool1@0102	0	-	-	%!
tank	3.23T	2.58T	/tank	44%
 
I've tried searching but couldn't find anything that specifically addresses my problem.

I have my folder shared with Samba. How do I enable a read-only guest and a read/write user? (I have done both, but not together.) Can I block read/write for certain subdirectories to guests?
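In plain Samba terms, the usual way to get read-only guests plus a read/write user on the same share is a share definition roughly like this (share name, path and user name are just examples):

Code:
[media]
   path = /tank/media
   guest ok = yes
   read only = yes
   write list = myuser

Blocking guests from certain subdirectories is usually easier done with filesystem permissions or a separate non-guest share.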

Thanks!
 
Having a very strange issue. I've been using OpenIndiana on the hardware below for just a few days now:

  • SuperMicro X9SCL+-F
  • Intel i3 2100
  • IBM BR10i (flashed to LSI IT Firmware)

I installed napp-it after completing the live CD installation and everything went smoothly at first. I created pools and folders and transferred ~6TB of data and all was well. Last night however, the system hard locked and I was unable to determine any cause (though I am not experienced with Solaris, I simply checked a few obvious places: /var/adm/messages, dmesg, etc.). After a cold boot, everything appeared normal at first, but very shortly after gaining access to the system again it hard locked.

Subsequently, booting into OpenIndiana completely halts after gdm fails to start, and it drops into maintenance mode. The message in the gdm error log is as below:

Code:
/bin/sh: bad interpreter - no such file or directory

The gdm startup script in question is simply setting the shell interpreter, but upon looking, the entire /bin/ directory is gone. How this could have happened, I have absolutely no idea. I haven't even su'd in to do anything that I recall. I should also add that I've done around 10 hours of memtest86+ without any errors (and I'm using ECC memory) and that I was not getting any sort of drive errors before this happened.

Now, the pre-napp-it snap that's available still boots fine. Is there any way I can recover gracefully from this? If I pave over the OI install on my system drive, will all my pools and folders remain intact? Anyone have any inkling what may have caused this?

------
Update:

Decided to try symlinking /bin to /usr/bin and this appears to have initially worked. I'll update later if I encounter any issues.
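For the record, on Solaris-based systems /bin is normally just a symlink to /usr/bin, so the recovery step amounts to recreating that link from the maintenance shell:

Code:
ln -s /usr/bin /bin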
 