OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

As I understand the problem:

All write operations confirmed by the disk subsystem should really be on disk.
ZFS is a Copy-on-Write system. That means a data block is either written to disk successfully
or the write does not happen at all. This is needed to keep your data in a defined state at all times.

If you have a power failure under such a condition, your currently written file is probably not lost (the
filesystem knows the former state and keeps the former file version).

If ZFS assumes the file was written to disk correctly and you then have a power failure, it can happen that
this file is damaged. Thanks to checksums ZFS may discover the problem, but the file is damaged.

If you disable sync entirely, you have the additional problem that all write operations of the last up to about 30 s may be lost.

The secure way is to enable sync writes and disable the write-back cache.
But with ZFS, all data not affected by a current write is always safe, and even in the case
of damaged files, ZFS knows about them due to end-to-end checksums.

The last point is the most important thing: your data must always be OK, or the OS must report an error.
That's why ZFS is unique.
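
For reference, this behavior maps to ZFS's per-filesystem sync property; a minimal sketch (the filesystem name tank/data is a placeholder):

Code:
zfs get sync tank/data            # default is "standard": honor application sync requests
zfs set sync=always tank/data     # commit every write to stable storage (safest, slowest)
zfs set sync=disabled tank/data   # ignore sync requests; up to ~30s of writes can be lost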


Gea
Is there a lesser of two evils in this case: Sync = Disabled or Writeback Cache = On, or does the use of either make your setup insecure? I will be using this as a backup storage device for my backup software, which does some file checking, so if something is missing it should find out and ask to have it sent again.

I am thinking of keeping it set to Sync = Standard and enabling the Writeback Cache, because I have an SSD log device, and if I were to set Sync = Disabled it would stop using the SSD, if I am reading the manuals correctly.

Thanks for your input and fast replies.
 
I was able to build transmission 2.22 from source on S11E relatively painlessly.

There were a few dependencies:

I installed developer/gnome/gettext and text/gnu-gettext from the Oracle repos (for pkg-config and GNU gettext) and then built libevent from source (http://monkey.org/~provos/libevent/). Seems to work. There is also a Solaris SMF script here: http://www.4amlunch.net/SMF/transmission-daemon/.
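
For anyone following along, libevent builds with the usual autotools sequence; a sketch assuming the /usr/local prefix (which matches the -L/usr/local/lib path in the configure line below):

Code:
# in the unpacked libevent source directory:
./configure --prefix=/usr/local
make
sudo make install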

-s0rce

Finally, success! I had to use:

./configure --disable-nls --disable-gtk --disable-cli --enable-daemon LIBEVENT_LIBS="-L/usr/local/lib -levent" LIBEVENT_CFLAGS=-I/usr/local/include

to make it work :)
My NAS is almost complete. One thing left: get OpenVPN working.
 
Is there a lesser of two evils in this case: Sync = Disabled or Writeback Cache = On, or does the use of either make your setup insecure? I will be using this as a backup storage device for my backup software, which does some file checking, so if something is missing it should find out and ask to have it sent again.

I am thinking of keeping it set to Sync = Standard and enabling the Writeback Cache, because I have an SSD log device, and if I were to set Sync = Disabled it would stop using the SSD, if I am reading the manuals correctly.

Thanks for your input and fast replies.

The negative:
even with ZFS you could get damaged files
(but for this case you can have snaps).

The positive:
even with the most insecure settings, ZFS will tell you if a file is damaged, and tries to fix the error if possible without any user action.

Gea
 
Hello,

Still trying to figure out which way to go with raidz. I'll have 10x Samsung F2E 1.5 TB drives. The first option would be a pool with a single 10-drive raidz3 vdev. I could also go with 2 vdevs in the pool, a 6x raidz2 + 4x raidz1 or a 7x raidz2 + 3x raidz1, but to me that seems less secure. I need a minimum of 10 TB available.

What do you think I should do? I'm unsure about multiple vdevs: if a vdev is lost, will the pool still run fine (but degraded) with only the files on the remaining vdev, or is the pool lost, and the data with it?
 
The ZFS best practice guide suggests that the number of data drives in a RaidZ vdev (not counting parity) should be 4 or 8. For 10 drives, that means you get maximum performance with a 10-drive RaidZ2 (8 data + 2 parity). RaidZ2 provides perfectly adequate protection for a 10-drive pool. (As for the multiple-vdev question: data is striped across all vdevs, so if any one vdev fails completely, the whole pool and all its data are lost.)
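
As a concrete sketch, a 10-drive raidz2 vdev is created in one command (the pool name and device IDs are placeholders; the format command lists your actual disk IDs):

Code:
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
    c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0
zpool status tank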
 
announcement:

napp-it 0.500 is available

changelog:
new feature: manual replication job between pools (free)
non-free extensions allowed in folder "extension"
first extension: timer-controlled replication between appliances, first beta,
to allow snap-based near-realtime backups even with multi-TB pools


new feature: rapid switch to English if another language is selected (top menu)
new feature: menu disk-array shows vdev type
new feature: folder overview - AVAILABLE in Capacity + %
new feature: autojob alert on used capacity >=85%
new feature: status email with zfs list
new feature: alert email when used > 85%
new feature: disk replace shows already removed disks as a replace-source
bugfix: force en if no language is preselected on new installs
bugfix: hidden clones overview % available
option: send alert via TLS, e.g. for Googlemail (TLS module must be installed manually)
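
Under the hood, this kind of replication job is snapshot-based zfs send/receive; a minimal sketch of the mechanism (pool, filesystem and snapshot names are placeholders):

Code:
zfs snapshot tank/data@repl-1
zfs send tank/data@repl-1 | zfs receive backup/data
# later runs only send the delta between two snapshots:
zfs snapshot tank/data@repl-2
zfs send -i tank/data@repl-1 tank/data@repl-2 | zfs receive backup/data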

From this version on, I will try to establish napp-it as a platform for independent developers as well.
They are allowed to add non-free extensions under a different licence within the menu "extensions".
These extensions can be delivered with napp-it (updated with the regular updater) or they can be distributed
independently of napp-it. I hope that this will help to secure and speed up napp-it development.


distributing appliances:
if you want to bundle napp-it with your own appliances, the header name "napp-it" can be replaced by your own brand
(agreement required)


Gea
 
Is the process/form in place to make a donation and receive a registration code?

Also, can a single registration code be used with multiple appliances, such as a replication pair?

Lastly, over the last few revisions I see this at Services > SMB > Active Directory:

Code:
Status: 500 Content-type: text/html
Software error:

syntax error at /var/web-gui/data/napp-it/zfsos/02_services/02_SMB/01_Active Directory/action.pl line 212, near "print"
Compilation failed in require at admin.pl line 722.
For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.

[Thu May 19 11:54:33 2011] admin.pl: syntax error at /var/web-gui/data/napp-it/zfsos/02_services/02_SMB/01_Active Directory/action.pl line 212, near "print" [Thu May 19 11:54:33 2011] admin.pl: Compilation failed in require at admin.pl line 722.


 
I've got a problem: my Solaris 11 server just freezes and needs to be rebooted to work. I tried to reinstall it and install napp-it, and it froze again. Can it be napp-it, or something else? I can't see when it happens; it just happens. So before I try to reinstall the OS for the third time, I'd like to know what you guys think.
 
Is the process/form in place to make a donation and receive a registration code?

Also, can a single registration code be used with multiple appliances, such as a replication pair?

Lastly, over the last few revisions I see this at Services > SMB > Active Directory:

Status: 500 Content-type: text/html
Software error:

syntax error at /var/web-gui/data/napp-it/zfsos/02_services/02_SMB/01_Active Directory/action.pl line 212, near "print"

1. About the error:
there is a semicolon missing at line 212 of the menu script.
Edit the script or wait until tomorrow. I will fix it with the next nightly, 0.500b,
together with some other bugs.

2. About replication and donations:
you can donate at napp-it.org via PayPal.

The registration code is per location (e.g. a server room or a department).
You only need one key for as many appliances as you have there.
(Minimum donation for a key: 100 Euro per year; if you pay more, you get
a key for several years.)


Gea
 
I've got a problem: my Solaris 11 server just freezes and needs to be rebooted to work. I tried to reinstall it and install napp-it, and it froze again. Can it be napp-it, or something else? I can't see when it happens; it just happens. So before I try to reinstall the OS for the third time, I'd like to know what you guys think.

Try to log in at the console and enter:
sudo zpool status
sudo zfs list
sudo format

to check basic disk and pool state.
(napp-it also calls these commands after login; if they block, napp-it cannot continue.)


Gea
 
Correct - with a caveat. Your statement requires integrity of the ZFS log (ZIL) to remain true. As long as the ZIL survives the power outage, you are correct. However, Oracle and Nexenta have both reported issues when using an SSD for the ZIL drive, because some SSDs appear to be subject to data integrity problems after an uncontrolled power loss - and if you lose integrity of the ZIL, then you lose integrity of the ZFS filesystems. For this reason, both Oracle and Nexenta recommend that you take steps to protect the SSD from uncontrolled power failure: either use a UPS, or only use newer enterprise-class SSDs that include 'super capacitor' protection for their internal cache.

I'd like to know if the Intel SSD 320, with its reasonable capacitors, is considered good enough under the circumstances. It is not the fastest SSD, but given that it is an evolution of the X25-M, built entirely on a process shrink of proven Intel technology, it bodes well.

Has anyone used them yet in ZFS builds? I'm thinking of replacing my current ZIL with a 40-80 GB 320-series one.
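
For reference, swapping a log device in and out is straightforward (pool and device names are placeholders), and mirroring the slog guards against exactly the single-SSD failure mode discussed above:

Code:
zpool remove tank c1t4d0                  # detach the old log device
zpool add tank log mirror c1t5d0 c1t6d0   # add a mirrored slog
zpool status tank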
 
Try to log in at the console and enter:
sudo zpool status
sudo zfs list
sudo format

to check basic disk and pool state.
(napp-it also calls these commands after login; if they block, napp-it cannot continue.)


Gea

Those commands work fine, but the problem is that my Solaris just gets stuck and needs a reboot. Mouse and keyboard don't work; the whole OS just gets stuck and needs to be rebooted.
 
Is this a problem on a machine that had worked well,
or a problem after a first-time install?


Gea
 
I am attempting to migrate from ZFSguru to napp-it, since Jason seems to have disappeared. I have a ZFS v13 pool on my disks (on two controllers), created within FreeBSD.

When I connect the controllers to my OpenIndiana installation, the disks appear in the napp-it "Disks" tab; however, the ZFS menu and "zpool list" report no usable pools found on the disks. Is there something I need to do to the pool before attempting an import in OI? I was under the impression that ZFS was fairly portable between systems. Is this because of the pool version, or is it some other limitation I am unaware of?

Please help :(
 
It's a disk format problem; read about it here:
http://hardforum.com/showthread.php?t=1575034
I suppose you have to back up and destroy the pool.
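
Before destroying anything, it may be worth checking what OpenIndiana can actually see of the pool; a standard sequence (the pool name is a placeholder):

Code:
zpool export tank    # run this on FreeBSD first, for a clean export
zpool import         # on OpenIndiana: scan disks for importable pools
zpool import -f tank # -f forces the import if the pool was not exported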

Gea
 
1. Sweet

2. More specifically: I assume that with the current donation system, registration delivery isn't automated? If true, what's your current estimated turnaround?

For multi-year, how much more?

1. About the error:
there is a semicolon missing at line 212 of the menu script.
Edit the script or wait until tomorrow. I will fix it with the next nightly, 0.500b,
together with some other bugs.

2. About replication and donations:
you can donate at napp-it.org via PayPal.

The registration code is per location (e.g. a server room or a department).
You only need one key for as many appliances as you have there.
(Minimum donation for a key: 100 Euro per year; if you pay more, you get
a key for several years.)


Gea
 
It's always 100 Euro per year minimum.

And yes, there is no automated reply. It's just me; this is napp-it, not NetApp.
You do not pay for a ready product. You donate because you like napp-it
and have maybe saved a lot of money, or because you want to use special
features that require a lot of work to develop - in the past and in the future
- like appliance replication and other extensions, hopefully to come from a lot of developers
at much lower prices than from the big players.

Gea
 
Ugh, yeah, and I don't suppose there is any way of removing vdevs from a pool is there?

A pool can only grow, not shrink.
The best option is to buy some 2 TB disks. Even better, you can also use the extra disks later for backups.
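
A minimal sketch of the two ways a pool can grow (device names are placeholders): add another vdev, or replace every disk in a vdev with a larger one with autoexpand enabled:

Code:
# option 1: add a new vdev (it can never be removed again):
zpool add tank mirror c3t0d0 c3t1d0
# option 2: swap each disk for a bigger one, one at a time:
zpool set autoexpand=on tank
zpool replace tank c2t0d0 c4t0d0   # repeat for every disk in the vdev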

Gea
 
Gea, I'm trying to set up TLS emails with napp-it. I just went through the Perl install, then went to TLS > status and got this error:

syntax error at /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS/01_status/action.pl line 120, near "->new Net::SMTP::TLS"
Compilation failed in require at admin.pl line 722.

For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.
[Wed May 18 20:52:43 2011] admin.pl: syntax error at /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS/01_status/action.pl line 120, near "->new Net::SMTP::TLS" [Wed May 18 20:52:43 2011] admin.pl: Compilation failed in require at admin.pl line 722.
 
The TLS module can't be loaded; try to install it again.
(I have not tested it myself - I have no Googlemail. This item was added due to post 626:
http://hardforum.com/showpost.php?p=1037244417&postcount=626)

Gea
 
Quick question about the all-in-one setup: I got it all working fine, until I had to reboot the host. I had the OI SAN set to boot first, but the others did not boot, and all the other VMs were grayed out with "Unknown (inaccessible)". I don't remember this being mentioned one way or the other, but when I set this up I put everything on the OI SAN, including the virtual machine files. Should it have been just the VHDs for the VMs?
 
It reports it's installed, but I'm still getting an error:

[Thu May 19 19:52:54 2011] admin.pl: Bareword found where operator expected at /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS/01_status/action.pl line 120, near "->new Net::SMTP::TLS" [Thu May 19 19:52:54 2011] admin.pl: (Missing operator before Net::SMTP::TLS?) Status: 500 Content-type: text/html
Software error:

syntax error at /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS/01_status/action.pl line 120, near "->new Net::SMTP::TLS"
Compilation failed in require at admin.pl line 722.

For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.
[Thu May 19 19:52:54 2011] admin.pl: syntax error at /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS/01_status/action.pl line 120, near "->new Net::SMTP::TLS" [Thu May 19 19:52:54 2011] admin.pl: Compilation failed in require at admin.pl line 722.
 
Yeah. I tried downloading the new napp-it build over my existing install and it errored as well.

Notes:

1) TLS had worked with a different syntax. What was weird (to me) is that the first argument had to be placed on the same line as the function, otherwise it errored. (See Gea's suggested format vs. what I reported to work on page 32.)

2) You will need the Net-SSLeay module added prior to building TLS, as noted in an earlier post (unless Gea is adding it for us in the napp-it install).

I have since deleted my VM and re-installed from scratch to be sure that I have what anyone else would have after a new install. Time and a lack of Perl programming skills limit my ability to help. I will post results if I figure anything out.
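
For what it's worth, the conventional arrow syntax for that module keeps the host as the first argument of the constructor call, which matches the behavior noted in 1). A sketch (host, account and recipient are placeholders):

Code:
use Net::SMTP::TLS;

my $mailer = Net::SMTP::TLS->new(
    'smtp.googlemail.com',            # host must be the first argument
    Port     => 587,
    User     => 'user@gmail.com',
    Password => 'secret',
);
$mailer->mail('user@gmail.com');
$mailer->to('admin@example.com');
$mailer->data();
$mailer->datasend("Subject: napp-it test\n\nalert test");
$mailer->dataend();
$mailer->quit;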
 
I did install Net-SSLeay before running the Perl commands.
 
Hello Rhunt,

if you (or anyone else) have a working syntax, please report it.
I will then update the TLS mail part.

Gea
 
Is this a problem on a machine that had worked well,
or a problem after a first-time install?


Gea

It's hard to say; it just happens. I made a new install, and at first I thought the problem was gone, but it came back after I installed napp-it. But I don't know whether it can be napp-it or not. Can it be a problem if the OS drives are on a controller (not RAID)? It's a Supermicro X8ST3-F.
But if that were true, my raidz would get a lot of problems too, yet it works just fine. Maybe I should just try a fresh new OS install.
 
- shut down
- unplug your data pool
- restart

If all is OK then, one of your disks is blocking the system on mounting;
otherwise, try a complete reinstall.

Gea
 
I got my first build up and running. It was relatively easy and I'm pretty happy with it and the napp-it interface (great work). Here is the link to my build thread (for which parts I used):

http://hardforum.com/showthread.php?t=1586873

I did run into a problem while testing. I am using it to present storage to an ESXi host. I went with creating a volume and using COMSTAR to create the iSCSI target. Afterwards my performance was bad, and I realized I had used the wrong block size (incidentally, the ability to change the block size would be a great thing to add to the napp-it GUI when creating a volume). I then tried to delete the volume... of course it wouldn't let me, because there was an iSCSI target attached to it. So then I tried to figure out what all I needed to detach to get the volume removed. Well, after going through and deleting most of the COMSTAR configuration (through the web GUI), I still couldn't delete the volume, and both the web GUI and the Solaris GUI became unusable. I left it overnight but it was still doing whatever it was doing. I hard-booted it, and then it sat trying to boot forever. I then left it for another day and a half, and finally it is now responsive.

This was just a test, so I don't mind wiping it out. However, could you tell me the proper procedure for destroying a volume that is being used through COMSTAR?
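
In case it helps others, the usual COMSTAR teardown order is views first, then the logical unit, then the target, and the volume last; a sketch with placeholder names (the LU GUID comes from list-lu):

Code:
stmfadm list-lu -v                   # find the GUID of the LU
stmfadm remove-view -l <GUID> -a     # drop all views of the LU
stmfadm delete-lu <GUID>             # delete the logical unit
itadm delete-target -f <iqn-name>    # delete the iSCSI target
zfs destroy tank/myvolume            # now the volume can be destroyed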

So a co-worker of mine actually had a very similar issue to mine. He deleted an NFS-shared folder and the system became unusable, so he reverted to the pre-napp-it snapshot. I don't know if it is a napp-it issue or a Solaris 11 Express one... It doesn't seem to happen all the time. We played around with creating and deleting folders afterwards, but it didn't happen again. Could it be a combination of compression or dedup that only shows up once you have a significant amount of data on it? Is anyone else seeing this, or does anyone know what could be happening here?
:confused:
 
Gea, did you see my post about rebooting ESXi and having the virtual machines inaccessible? My guess is that I didn't wait long enough for OI to come online, and the ESXi initiator times out and does not recover without manual intervention. I notice in your docs that you have all the other VMs set to 'manual startup'? That is not a useful option for me - I really need everything to come up automatically...
 
So a co-worker of mine actually had a very similar issue to mine. He deleted an NFS-shared folder and the system became unusable, so he reverted to the pre-napp-it snapshot. I don't know if it is a napp-it issue or a Solaris 11 Express one... It doesn't seem to happen all the time. We played around with creating and deleting folders afterwards, but it didn't happen again. Could it be a combination of compression or dedup that only shows up once you have a significant amount of data on it? Is anyone else seeing this, or does anyone know what could be happening here?
:confused:

Well, I guess I found some more information here: http://opensolaris.org/jive/thread.jspa?threadID=137772. So even using Solaris 11 Express, this is still a major problem. I see there were some suggestions in that thread; some say it helps to have more RAM, but my setup has 24 GB of RAM with no SSD cache, so it should have plenty of RAM to fit my DDT for the ~700 GB of data I was testing with. Does anyone have more knowledge or experience with this, and some advice?
 
Gea, did you see my post about rebooting ESXi and having the virtual machines inaccessible? My guess is that I didn't wait long enough for OI to come online, and the ESXi initiator times out and does not recover without manual intervention. I notice in your docs that you have all the other VMs set to 'manual startup'? That is not a useful option for me - I really need everything to come up automatically...

My VMs are always online.
You can do nothing but set OI to autostart in first position, activate "continue when tools are loaded", and set the other VMs to autostart with a long enough delay.

If your NFS datastore goes offline and does not reconnect automatically after some time,
you have to click reload manually. I do not know if you can automate that.

Gea
 
Well, I guess I found some more information here: http://opensolaris.org/jive/thread.jspa?threadID=137772. So even using Solaris 11 Express, this is still a major problem. I see there were some suggestions in that thread; some say it helps to have more RAM, but my setup has 24 GB of RAM with no SSD cache, so it should have plenty of RAM to fit my DDT for the ~700 GB of data I was testing with. Does anyone have more knowledge or experience with this, and some advice?

I suppose there are three rules with dedup:
- if you must use it, you need 2 GB+ RAM per TB of data, plus an SSD read cache
- use the newest available ZFS (e.g. SE11)
- disks are cheap: avoid dedup whenever possible
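
A quick way to sanity-check those numbers on a real pool (the pool name is a placeholder): zdb can simulate the achievable dedup ratio before you enable anything, and report the dedup table (DDT) size once it is on:

Code:
zdb -S tank      # simulate: what ratio would dedup achieve on this pool?
zdb -DD tank     # with dedup on: DDT entry counts, in-core size, histogram
zpool list tank  # the DEDUP column shows the current ratio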

Gea
 
My VMs are always online.
You can do nothing but set OI to autostart in first position, activate "continue when tools are loaded", and set the other VMs to autostart with a long enough delay.

If your NFS datastore goes offline and does not reconnect automatically after some time,
you have to click reload manually. I do not know if you can automate that.

Gea

The issue is booting an all-in-one: by definition, the SAN VM is offline to start with. The problem I see is that the inventory shows the other VMs as inaccessible until 15 minutes or so later, when the HBA re-polls. Obviously this does not happen for you, so maybe it's just iSCSI?
 
I suppose it's just that I never power off at all.

The problem:
NFS and iSCSI SAN shares are not available until OI has started.
The only question is if, and when, ESXi looks for them again
if they were not available at boot time.

Gea
 
Right, but how does it work for you? Obviously your machine must have been powered on at some point :) Are you saying this never happened for you? I am trying to move my VMs from iSCSI to NFS and see...
 
In .500e, when trying to change admin pw, this message is presented:

admin passwort is not identical

A) bit of a typo in there
B) I've tried this a few times, and I'm pretty sure one of those times I got the passwords the same.

The password I was attempting has special characters in it. Using a password without special characters works.
 

Look at the allowed characters.
(I was lazy; some characters cause problems with Perl.)

Gea
 
Right, but how does it work for you? Obviously your machine must have been powered on at some point :) Are you saying this never happened for you? I am trying to move my VMs from iSCSI to NFS and see...

I have a UPS for short power outages (we do not have many in Germany).
Otherwise, I have to be there anyway because of the other problems.

Gea
 
I think I am not being clear. The very first time you set up the all-in-one, it obviously needs to be powered on. Looking at your how-to, I see the other VMs are all showing as manual start, so I am guessing that you started them manually? I.e., you have never seen this issue, since you never reboot it?
 