OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

So my PCIe x16-to-x1 adapter didn't work on my old Asus P5K-E board with the video card... and since it won't POST without a video card in one of the x16 slots, I can't use my PCIe x8 SAS card and PCIe x8 10GbE card at the same time. Most of the server boards I have seen from Supermicro do not have a lot of PCIe slots... I was thinking maybe the X8SIA-F, but I would be using 3 of the PCIe slots from the get-go with only one left for expansion... Are there any other server boards with an abundance of PCIe slots?

The other option I was thinking of is getting something like this:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813157269

with an i5 2500K and 16GB of cheap non-ECC RAM.
 
I'm running napp-it 0.500s. This morning I received this:

Code:
  NAME          STATE     READ WRITE CKSUM
  megadrive     DEGRADED     0     0     0
    raidz2-0    DEGRADED     0     0     0
      c4t0d0p0  FAULTED      0     0     0  too many errors
      c4t1d0p0  ONLINE       0     0     0
      c4t2d0p0  ONLINE       0     0     0
      c4t3d0p0  ONLINE       0     0     0
      c4t4d0p0  ONLINE       0     0     0
      c4t5d0p0  ONLINE       0     0     0

The box is only around 2 months old, with all new components. I checked SMART on the faulted hard drive, which showed a pass. I also checked it with Samsung's diagnostic tool, which also passed. It appears the drive is fine?

What might cause this to occur?

I have been searching on how to recover and am getting a huge variety of different options.
Can someone please let me know the quickest and most reliable way to resilver the drive that is currently showing FAULTED. I can't add another hard drive to the controller, and I don't have another hard drive to use.

What I was thinking was to unconfigure the drive and then reconfigure it? Would this then automatically start resilvering?
 

I'm having an issue (I think) with several of my drives reporting hard errors in the napp-it console. I'm currently running an all-in-one with ESXi and OpenIndiana (v151) for the SAN. OI has 10 GB RAM assigned to it and napp-it version 0.6i is installed.

The system is housed in a Norco 4220 case, which has been functioning fine for my old Windows Home Server (I recently swapped system cases to migrate away from my old box). I just replaced the power supply with a SeaSonic 750W to handle the additional drives (the ESXi box was running off an older Corsair 520HX).

The only drives exhibiting the issue are my WD20EADS and Hitachi 2TB drives. I thought it could be a controller issue, but they exhibit the same problems whether they are all on a separate LSI 3081E-R card, or split between the onboard 1068e and M1015 controllers that the other drives are on. I've also tried changing drive bays on the Norco just to make sure it isn't an issue with the backplanes.

I'm having a hard time believing that all these drives could actually be going bad. These drives were pulled from a retired DroboPro at work, if that makes any difference. Do I need to format the drives in some way before creating a pool from them?

If there is a real problem, you will get ZFS errors immediately (disk faulted or too many errors).
The soft or hard errors are not so relevant, and identical numbers indicate a common, disk-independent cause.
I would remove the other pool to check whether it's a power problem, or check whether the counter goes up
on specific actions (for example, the soft-error counter can go up on current napp-it on every SMART check).
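The per-disk error counters Gea refers to can also be compared from the shell on Solaris/OpenIndiana; a minimal sketch (output format varies by driver, and this needs to be run on the affected box):

```shell
# Show cumulative soft/hard/transport error counters per device
iostat -En

# Pull out just the error summary lines for a quick side-by-side
# comparison across all disks
iostat -En | grep -i errors
```

If every disk shows the same non-zero soft-error count, that points at a shared cause (power, controller, or the SMART polling Gea mentions) rather than a failing drive.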
 
I'm running napp-it 0.500s. This morning I received this:

Code:
  NAME          STATE     READ WRITE CKSUM
  megadrive     DEGRADED     0     0     0
    raidz2-0    DEGRADED     0     0     0
      c4t0d0p0  FAULTED      0     0     0  too many errors
      c4t1d0p0  ONLINE       0     0     0
      c4t2d0p0  ONLINE       0     0     0
      c4t3d0p0  ONLINE       0     0     0
      c4t4d0p0  ONLINE       0     0     0
      c4t5d0p0  ONLINE       0     0     0

The box is only around 2 months old, with all new components. I checked SMART on the faulted hard drive, which showed a pass. I also checked it with Samsung's diagnostic tool, which also passed. It appears the drive is fine?

What might cause this to occur?

I have been searching on how to recover and am getting a huge variety of different options.
Can someone please let me know the quickest and most reliable way to resilver the drive that is currently showing FAULTED. I can't add another hard drive to the controller, and I don't have another hard drive to use.

What I was thinking was to unconfigure the drive and then reconfigure it? Would this then automatically start resilvering?

If the drive is OK, you can clear the error and/or do a disk replace, selecting only this disk as source.
(On some controllers a disk replace in the same slot does not work; a reboot or pool export/import may help then.)
Start a scrub afterwards to check the whole pool.
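At the command line, the sequence Gea describes would look roughly like this; pool and device names are taken from the status output earlier in the thread, so treat this as a sketch rather than a tested recipe:

```shell
# Clear the error counters and the FAULTED state for the disk
zpool clear megadrive c4t0d0p0

# If a clear alone is not enough, replace the disk with itself
# (same slot; on some controllers this needs a reboot or
# a pool export/import first, as noted above)
zpool replace megadrive c4t0d0p0

# Verify the whole pool afterwards
zpool scrub megadrive
zpool status -v megadrive
```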
 
So my PCIe x16-to-x1 adapter didn't work on my old Asus P5K-E board with the video card... and since it won't POST without a video card in one of the x16 slots, I can't use my PCIe x8 SAS card and PCIe x8 10GbE card at the same time. Most of the server boards I have seen from Supermicro do not have a lot of PCIe slots... I was thinking maybe the X8SIA-F, but I would be using 3 of the PCIe slots from the get-go with only one left for expansion... Are there any other server boards with an abundance of PCIe slots?

The other option I was thinking of is getting something like this:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813157269

with an i5 2500K and 16GB of cheap non-ECC RAM.

Server boards with up to 7 PCIe slots are based on Intel's 5520 chipset, like the
http://www.supermicro.nl/products/motherboard/QPI/5500/X8DTH-6F.cfm

But they are quite expensive, even if you count the onboard SAS2 controller.
They are used for high-end machines.

If you use a desktop chipset like the Z68, you get more slots but often no ECC,
less max RAM, no Intel NICs, and sometimes instability on features like VT-d.

You may also think about an LSI SAS2 controller + LSI SAS2 expander.
In such a combo you do not need as many slots.
 
I have a bit different question.

I will build a HP Microserver NAS with an attached external hard drive. The problem is that the external drive will sometimes be used as a "travelling hard drive", so it will be plugged in and out.

Since most of the computers run Windows, it will have to have a filesystem that Windows can read. As far as I have searched, Solaris and OI support FAT32, so that's a good sign. The other question is: can OI automount hard drives? So when I plug in the drive, it mounts it, and then I can create a cron job that checks whether the drive is mounted and, if it is, performs an rsync sync.

Is that doable?

lp, Matej
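The cron-driven sync Matej describes could be sketched like this; the mountpoint and paths are hypothetical examples (on OpenIndiana, removable media are typically mounted under /media by the volume manager):

```shell
#!/bin/sh
# sync-to-usb.sh -- run from cron every few minutes.
# Copies the NAS share to the removable drive, but only when the
# drive is actually mounted. All paths here are examples.

MOUNTPOINT=/media/travel-disk
SOURCE=/tank/data/

# Solaris `mount` prints lines like "/media/travel-disk on /dev/dsk/...";
# proceed only when something is mounted at $MOUNTPOINT
if /usr/sbin/mount | grep -q "^$MOUNTPOINT on "; then
    # -rt instead of -a because FAT32 cannot store owners/permissions;
    # --delete keeps the drive an exact mirror of the share
    rsync -rt --delete "$SOURCE" "$MOUNTPOINT/"
fi
```

A classic Solaris crontab entry such as `0,15,30,45 * * * * /root/sync-to-usb.sh` would then run the check four times an hour.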
 
I have a bit different question.

I will build a HP Microserver NAS with an attached external hard drive. The problem is that the external drive will sometimes be used as a "travelling hard drive", so it will be plugged in and out.

Since most of the computers run Windows, it will have to have a filesystem that Windows can read. As far as I have searched, Solaris and OI support FAT32, so that's a good sign. The other question is: can OI automount hard drives? So when I plug in the drive, it mounts it, and then I can create a cron job that checks whether the drive is mounted and, if it is, performs an rsync sync.

Is that doable?

lp, Matej

I would attach these removable disks to a PC via USB3 and copy/backup files over the network with
PC sync tools like robocopy. I would also avoid FAT.
NTFS is not as good as ZFS, but it is much better than FAT when it comes to data security.
 
If the drive is OK, you can clear the error and/or do a disk replace, selecting only this disk as source.
(On some controllers a disk replace in the same slot does not work; a reboot or pool export/import may help then.)
Start a scrub afterwards to check the whole pool.

Thanks Gea for your response and efforts!

I hot-unplugged and hot-plugged the drive. Napp-it (zpool status) then showed the drive as unavailable. After reconfiguring the drive via
Code:
cfgadm -c configure sata0/0
and then clearing the error, the pool is back to healthy.

Will do a scrub now to check the pool.

Still wondering, though: why/how would this problem occur if nothing seems to be wrong with the hardware?
 
I would attach these removable disks to a PC via USB3 and copy/backup files over the network with
PC sync tools like robocopy. I would also avoid FAT.
NTFS is not as good as ZFS, but it is much better than FAT when it comes to data security.

The problem here is that it won't always be the same person taking the drive, so I would have to teach the whole company how to use sync programs, and I want to automate that process. I will think about other ways of syncing, but so far this is the best solution I can think of.

As far as FAT goes, I don't really need data security; I just need all the data from the NAS on the removable media, since there are Word files that might sometimes be used at another location.
 
Hi all,

I'm just going through the initial steps of setting up napp-it with OI 151 and don't seem to be able to get the vmxnet3 driver running.

I'm on an X9SCL-F and so far have installed OI with vmware-tools. The tools installation seemed to go fine and I accepted all the defaults. However, having shut down, removed the e1000 NIC and added the vmxnet3 one before restarting, no network interface came up.

This should work out of the box with zero config shouldn't it?

Thanks.
 
Another question regarding networking. I've set up a test pool with AFP and NFS enabled and I've just tried to copy some folders from my Mac across to the pool. I have Cat6 cable connected to a Netgear GS108 gigabit switch, but my transfers are capping out at 100Mbit speeds of 11.4MB/s.

How do I debug where the bottleneck is?
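A common way to split this problem is to measure the raw network path and the local disks separately; a sketch (the client address is an example, and iperf may need to be installed from the package repository first):

```shell
# 1. Raw TCP throughput, independent of any file-sharing protocol.
#    On the OI server:
iperf -s
#    On the Mac/client (server address is an example):
iperf -c 192.168.1.113

# 2. Local pool write throughput, independent of the network
#    (writes a ~1 GB test file; delete it afterwards; note that
#    ZFS compression, if enabled, makes /dev/zero results optimistic)
dd if=/dev/zero of=/tank/testfile bs=1024k count=1000
```

If iperf shows ~940 Mbit/s but file copies stay at ~11 MB/s, the bottleneck is the protocol or the client, not the cable or switch; if iperf itself caps at ~94 Mbit/s, a link has negotiated to 100Mbit.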
 
The problem here is, that not the same person will always take the drive, so I would have to make and teach the whole company how to use sync programs and I want to automate that process. I will think about other type of syncing, but so far, that is the best solution I think there is.

As far as FAT is going, I don't really need data security, I just need all the data from NAS on the removable media, since there will be word files that might sometimes be used on another location.

You should be able to use NTFS on Solaris via FUSE to read an NTFS drive.

Can't you just access the Word files over the network (internet if remote)?
 
Hello all,

I've installed OpenIndiana 151a Server on my HP Microserver recently, without any problems. Activating CIFS and configuring ACL hasn't been an issue either. But I realised that my MacBook Air (Lion 10.7.2) and CIFS on OpenIndiana are not really good friends, so I installed AFP via Napp-it (Napp-it is already installed, very nice tool btw).
Unfortunately I still can't access my files via AFP.

Here's what's in the AFP config according to Napp-it (netatalk 2.2.1) (Services/AFP/Volumes):
- -tcp -noddp -uamlist uams_randnum.so,uams_dhx.so,uams_dhx2.so -nosavepassword
/tank/incoming incoming allow:chris,root rwlist:chris,root

User "chris" has (full) access to the directory /tank/incoming (ACL entry), and root:root is the owner of the directory. I can access the directory/files through CIFS with the user (Windows Notebook).
I haven't changed anything else in the configs.

Whenever I click on the server in Finder (Lion 10.7.2, see above), Finder says "Not connected". I click on "Connect as...", enter the credentials for "chris", and then Finder says "Connection failed." dmesg on the file server says:
Dec 4 19:38:32 server afpd[21589]: [ID 702911 daemon.notice] AFP logout by chris
Dec 4 19:38:32 server afpd[21589]: [ID 702911 daemon.error] dsi_stream_read: len:0, unexpected EOF
Dec 4 19:38:32 server afpd[21589]: [ID 702911 daemon.notice] afp_over_dsi: client logged out, terminating DSI session
Dec 4 19:38:32 server afpd[21589]: [ID 702911 daemon.notice] AFP statistics: 0.60 KB read, 0.44 KB written

Anyone got a clue how I can get access via AFP to my files? Or at least a hint where I could look?

Thanks a lot in advance.

Chris
 
Sorry, forgot the following line in the AFP config (again according to napp-it):
:DEFAULT: options:acl,upriv

Again, thanks for your help.

Chris
 
Sorry, forgot the following line in the AFP config (again according to napp-it):
:DEFAULT: options:acl,upriv

Again, thanks for your help.

Chris

I would try setting 777 on the folder /tank/incoming.

About Lion and SMB:
this problem is known and fixed in Illumos:
https://www.illumos.org/issues/1718

(Let's hope for a fixed OI version soon.)
Solaris 11 is already working with Lion.
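The quick permission test Gea suggests could look like this on Solaris/OI; the path comes from the post above, and the ACL form is shown as an alternative to plain 777:

```shell
# Wide-open classic permissions, just for testing
chmod 777 /tank/incoming

# Or, using a ZFS/NFSv4 ACL: full access for everyone,
# inherited by new files (f) and new directories (d)
/usr/bin/chmod A=everyone@:full_set:fd:allow /tank/incoming

# Inspect the resulting ACL
ls -V /tank/incoming
```

If AFP works with 777 but not with the original settings, the problem is the ACL/ownership on the share root rather than netatalk itself.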
 
Hello all,

I've installed OpenIndiana 151a Server on my HP Microserver recently, without any problems. Activating CIFS and configuring ACL hasn't been an issue either. But I realised that my MacBook Air (Lion 10.7.2) and CIFS on OpenIndiana are not really good friends, so I installed AFP via Napp-it (Napp-it is already installed, very nice tool btw).
Unfortunately I still can't access my files via AFP.

Here's what's in the AFP config according to Napp-it (netatalk 2.2.1) (Services/AFP/Volumes):
- -tcp -noddp -uamlist uams_randnum.so,uams_dhx.so,uams_dhx2.so -nosavepassword
/tank/incoming incoming allow:chris,root rwlist:chris,root

User "chris" has (full) access to the directory /tank/incoming (ACL entry), and root:root is the owner of the directory. I can access the directory/files through CIFS with the user (Windows Notebook).
I haven't changed anything else in the configs.

Whenever I click on the server in Finder (Lion 10.7.2, see above), Finder says "Not connected". I click on "Connect as...", enter the credentials for "chris", and then Finder says "Connection failed." dmesg on the file server says:
Dec 4 19:38:32 server afpd[21589]: [ID 702911 daemon.notice] AFP logout by chris
Dec 4 19:38:32 server afpd[21589]: [ID 702911 daemon.error] dsi_stream_read: len:0, unexpected EOF
Dec 4 19:38:32 server afpd[21589]: [ID 702911 daemon.notice] afp_over_dsi: client logged out, terminating DSI session
Dec 4 19:38:32 server afpd[21589]: [ID 702911 daemon.notice] AFP statistics: 0.60 KB read, 0.44 KB written

Anyone got a clue how I can get access via AFP to my files? Or at least a hint where I could look?

Thanks a lot in advance.

Chris

I found the best way to access files from my girlfriend's MacBook (Lion) to my Solaris server was to use NFS. From my Windows desktop I use SMB, and from my Ubuntu laptop I use NFS.
 
After more research, I think I found the setup for me:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131725
4 PCIe 2.0 x16 (x16, x4, x4 or x8, x8, x4, x4)
ECC memory (16gb) - http://www.newegg.com/Product/Product.aspx?Item=N82E16820139262
along with an E3-1220 Xeon processor.

The rest of the parts I have are reusable from the current setup (older q6600 desktop).

I looked into the SAS expanders, but they seem to cost more than the LSI SAS2 PCIe card; having two of those cards for extra SATA ports seems to be the cheaper option for me if I need to add more drives in the future.

The supermicro remote console thing looks cool, but I really have no need for it.
 
After more research, I think I found the setup for me:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131725
4 PCIe 2.0 x16 (x16, x4, x4 or x8, x8, x4, x4)
ECC memory (16gb) - http://www.newegg.com/Product/Product.aspx?Item=N82E16820139262
along with an E3-1220 Xeon processor.

A workstation board? Why?
What about one from SM, with IPMI of course, like this: http://www.newegg.com/Product/Product.aspx?Item=N82E16813182253

edit: Oh. I see you don't see a use for IPMI...have you tried it?...I got to love it as soon as I first tried it.
 
Hmmm, I missed that one when I was looking... Too bad it doesn't have an x16 slot, so I can't use my Nvidia card with the wobbly windows in Compiz (not necessary though, as it is primarily a home server).

Are there any other advantages of using that board other than IPMI?
 
Q: Is it possible to configure storage disks to spin down with COMSTAR services enabled ?

I have a Solaris Express 11 server running as my home file server / SAN. The issue I'm running into is that when I have COMSTAR enabled, the disks seem to spin down after the specified interval (currently set at 60s) and then immediately spin back up again. This is confirmed by the readings I'm getting from a Kill-A-Watt. The readings fluctuate between 70~90 watts after the 60s period before settling at 90 W. When I disable the COMSTAR target services, the disks do spin down and stay down until there is some activity on the storage disks; Kill-A-Watt readings drop to ~56 watts and stay there until I access the storage disks.

Is it possible to configure the disks to spin down if COMSTAR services are enabled ?
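For reference, the spin-down threshold described here lives in /etc/power.conf; a sketch of the relevant entry (the device path is an example — list yours with `ls -l /dev/dsk`):

```shell
# /etc/power.conf excerpt -- spin a data disk down after 60 s idle
device-thresholds   /dev/dsk/c4t0d0   60s

# After editing, activate the new settings:
pmconfig
```

This only tells power management when a disk *may* spin down; it does not stop COMSTAR (or any other service) from immediately touching the device again, which matches the spin-down/spin-up cycle described above.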
 
Quick update on my AFP issue:
- Thanks for the NFS hint, but I'm not a big fan of NFS when it comes to normal file sharing...
- I kind of followed Gea's recommendation and created a new share/ZFS dataset through napp-it (access 777, everyone). Interestingly, after creating that share and also enabling it for AFP, I could see all my shares through AFP on the server. Any explanation for that?

Thanks for your help.

Chris
 
Thanks Gea for your response and efforts!

I hot-unplugged and hot-plugged the drive. Napp-it (zpool status) then showed the drive as unavailable. After reconfiguring the drive via
Code:
cfgadm -c configure sata0/0
and then clearing the error, the pool is back to healthy.

Will do a scrub now to check the pool.

Still wondering, though: why/how would this problem occur if nothing seems to be wrong with the hardware?

Something odd has happened since this event: the SMART info for the affected drive is now displayed in napp-it! (It wasn't before, and the other 5 drives still aren't.) The only thing I can put this down to is having reconfigured the drive via 'cfgadm'.

If I were to unconfigure and reconfigure each of the other drives in succession, would this have a negative impact? I would try with just one first and see if this enables the SMART info, but I want to check with others first to see if I might be causing problems by doing this.
 
Quick update on my AFP issue:
- Thanks for the NFS hint, but I'm not a big fan of NFS when it comes to normal file sharing...
- I kind of followed Gea's recommendation and created a new share/ZFS dataset through napp-it (access 777, everyone). Interestingly, after creating that share and also enabling it for AFP, I could see all my shares through AFP on the server. Any explanation for that?

Thanks for your help.

Chris

That's the way it works.
You can restrict access on created files and folders, but not on the shared folder itself.
 
Something odd has happened since this event: the SMART info for the affected drive is now displayed in napp-it! (It wasn't before, and the other 5 drives still aren't.) The only thing I can put this down to is having reconfigured the drive via 'cfgadm'.

If I were to unconfigure and reconfigure each of the other drives in succession, would this have a negative impact? I would try with just one first and see if this enables the SMART info, but I want to check with others first to see if I might be causing problems by doing this.

Should not be a problem.
You may also try napp-it 0.6m; SMART checks now work on more types of disks/controllers.
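The drive-by-drive reconfigure discussed above could be done like this; port names are examples — check yours with `cfgadm` first, and as the poster suggests, do one ONLINE disk at a time and confirm pool health in between:

```shell
# List SATA attachment points and their current state
cfgadm

# For one port at a time (example port sata0/1):
cfgadm -c unconfigure sata0/1
cfgadm -c configure sata0/1

# Confirm the pool is still healthy before moving to the next disk
zpool status megadrive
```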
 
I just installed NexentaStor on my test system, the latest build.
It was the only one that installed on the hardware RAID-1 (a basic mirror, not 10, like I first wrote — too much going on in my head) that I had spare and wanted to use for this test system. The volumes for storage are on another controller and aren't configured yet.

The other systems, like Oracle Solaris Express and OpenIndiana, froze once they booted up; is there something I can do about that? Or is NexentaStor the best alternative/solution for this test system?

Besides installing Nexenta, I only did basic configuration, so no volumes etc. ... I'd like to test napp-it, so I thought I'd just leave Nexenta basic — or are there other things I need to do?

I don't mind getting pointed to interesting documentation; I've read the napp-it document, but maybe I missed a more advanced one?

Thanks
 
Hmmm, I missed that one when I was looking... Too bad it doesn't have an x16 slot, so I can't use my Nvidia card with the wobbly windows in Compiz (not necessary though, as it is primarily a home server).

Are there any other advantages of using that board other than IPMI?


..it will work with ESXi and vt-d for an all-in-one.

edit: with ESXi only one of the Intel NICs (82574L) will work until a driver is out/updated (for 82579LM).
The third NIC is a Realtek...used for IPMI only, if you wish to do so (IPMI will also work with the other Intel NICs alternatively).
An alternative based in an older XEON platform is this: http://www.supermicro.nl/products/motherboard/Xeon3000/3400/X8SIA.cfm?IPMI=Y (http://www.newegg.com/Product/Product.aspx?Item=N82E16813182235&Tpk=X8SIA-F)
 
Gea, thanks for the comment. The interesting thing is:
- I couldn't connect to AFP at all according to the finder.
- I haven't changed anything on my existing shares.
- I've created a new share with access for everyone.
- And now I see all my shares, the new one and the existing ones, and I have access on both...

So this one new share with access for everyone has somehow also opened access to the other shares, which are restricted in access. Perhaps I need to take a closer look into the AFP access options...
Anyone got an idea?

Thanks.

Regards,

Chris
 
I just installed NexentaStor on my test system, the latest build.
It was the only one that installed on the hardware RAID-1 (a basic mirror, not 10, like I first wrote — too much going on in my head) that I had spare and wanted to use for this test system. The volumes for storage are on another controller and aren't configured yet.

The other systems, like Oracle Solaris Express and OpenIndiana, froze once they booted up; is there something I can do about that? Or is NexentaStor the best alternative/solution for this test system?

Besides installing Nexenta, I only did basic configuration, so no volumes etc. ... I'd like to test napp-it, so I thought I'd just leave Nexenta basic — or are there other things I need to do?

I don't mind getting pointed to interesting documentation; I've read the napp-it document, but maybe I missed a more advanced one?

Thanks

The current NexentaStor 3, based on OpenSolaris build 134, is end of life.
The next NexentaStor 4 will be based on Illumos, just like OpenIndiana, and should come out soon.
So you must rethink your boot environment in any case.

You said that you use a hardware RAID-1.
Why? Nexenta and Solaris support software ZFS RAID-1 based on standard mainboard SATA.

There are also driverless SATA RAID-1 enclosures if you want an all-in-one based on ESXi.
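The software ZFS mirror Gea recommends instead of hardware RAID-1 is a one-liner; device names are examples (find yours with `format`):

```shell
# Create a mirrored pool from two plain SATA disks
zpool create testpool mirror c0t0d0 c0t1d0

# Check redundancy state of the new pool
zpool status testpool
```

Unlike a hardware mirror, ZFS then handles checksumming and self-healing itself, which is exactly what a controller-level RAID hides from it.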
 
Gea, thanks for the comment. The interesting thing is:
- I couldn't connect to AFP at all according to the finder.
- I haven't changed anything on my existing shares.
- I've created a new share with access for everyone.
- And now I see all my shares, the new one and the existing ones, and I have access on both...

So this one new share with access for everyone has somehow also opened access to the other shares, which are restricted in access. Perhaps I need to take a closer look into the AFP access options...
Anyone got an idea?

Thanks.

Regards,

Chris

You can restrict access via the allow option in the share settings;
that's the default way. If you set ACLs on folders within such a share in Solaris, AFP must honour them.
(You cannot change these settings in OS X, but you can use not only users but also owner/creator.)

All in all, though, the usability of AFP is not the best compared with SMB in a mixed environment.
I would use AFP only for Time Machine, or for as long as the SMB bug with OS X persists.
 
Can I use the 0.6 version without a license key?

napp-it 0.6 is free for end users, even if used commercially.
A license key is needed only for comfort extensions like async replication between appliances or monitoring.
These extensions help ensure further development and enhance functionality in enterprise environments.
 
Can someone please post a step-by-step guide to sharing a folder via NFS. I've followed what I think are the instructions in this thread and cannot for the life of me get it working. I can only ever see any folders if I enable AFP.

The flow I've tried is: create a new folder from the Create tab, modifying only the folder name and keeping everything else default. I then click on 'off' under the NFS column and set the property to 'on'.

My pool is called tank. My folder is called nfs.

Code:
sjalloq@openindiana:~$ showmount -e
export list for openindiana:
/tank/nfs (everyone)

I even tried following the manual steps on the OI wiki and I can see my share as follows:

Code:
sjalloq@openindiana:~$ sudo zfs get sharenfs tank/nfs
NAME      PROPERTY  VALUE     SOURCE
tank/nfs  sharenfs  rw        local

But whatever I do, if I try to connect from my Ubuntu machine I get an error:

Code:
sjalloq@sjalloq-T42p:~$ sudo mount 192.168.1.113:/tank/nfs /mnt/test
mount: wrong fs type, bad option, bad superblock on 192.168.1.113:/tank/nfs,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
 
Silly question: are you sure the NFS service is running?

You mean under the Services->NFS tab? Yes, it says 'online' and I've tried restarting it a number of times. The text on the page also says that if you nfs-share a folder the nfs-service is started automatically.

As an aside, and to rule out a few things, should it be possible to mount the NFS share from within OI?
 
On the ubuntu client, are you sure you have nfs-common installed?

:rolleyes: I thought it was installed by default, thanks. So I can mount the folder now, and having changed the permissions I can access the folder.

What sort of performance should I expect when using NFS? Should I be able to max out a Gb ethernet link? What about AFP - is that much slower?
 
It depends on your hardware, but going between two quad core intel chips I have no problem maxing gigabit using smb / cifs.
 
Heh, still can't create an NFS datastore in ESXi though. :(

Getting this error from within ESXi:

Code:
Call "HostDatastoreSystem.CreateNasDatastore" for object "ha-datastoresystem" on ESXi "192.168.1.144" failed.
Operation failed, diagnostics report: Sysinfo error on operation returned status : Unable to query remote mount point's attributes. Please see the VMkernel log for detailed error information
 
Hello,

Would anyone know if it's possible to spin down disk in Solaris Express 11 with COMSTAR iSCSI active ?

My home server is a Supermicro X9SCM with six 1 TB drives connected to the motherboard's SATA ports. The server runs SE11 natively (no VMware). I've enabled device-thresholds settings in /etc/power.conf to spin down my storage drives after 60s of inactivity (I plan to increase that after I figure out why it's not working).

The behaviour I've noticed is that when COMSTAR iSCSI is active, I hear quite a few clicks after ~60s of inactivity. During that time I even see the wattage drop from about 90-100 W to about 60 W, and then it jumps back to about 90 W before creeping back to about 100 W within a few seconds.

But if I disable the COMSTAR iSCSI services, the disks do seem to spin down after 60 seconds of inactivity, and the wattage of the server also drops to about 55 W and stays there until I start accessing the shares exported from the storage disks.

Can someone please help me resolve this issue? Or does iSCSI also assume that the block devices need to always be kept alive? My iSCSI is configured using volume-based LUNs.

The other behaviour I've noticed is that every time I try to export my pool, I get 'zpool busy' as long as the iSCSI target services are active (even if the clients that use the targets are shut down completely). I'm able to successfully export my storage pool as soon as I reboot after disabling the iSCSI target services.

Can someone please help and let me know what else needs to be done to get iSCSI to play nicely and not keep the 'pool alive' ?

Thanks in advance,
Groove
 
Is there a way to get better network performance between guest systems? 100MB/sec sucks.
 