OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Hey,

Found an update for my Samsung F3 HD103SJ drives here. I know it is to fix a different problem, but my desperation level is incredibly high, so I'll try it anyway. I was too upset yesterday to try this, so I'll give it a shot this weekend. Also, I have not yet checked if my drives already have this FW level.

However, for the life of me I could not find an update for my Samsung F1 HD103UJ drives. So this whole thing might not really accomplish anything, but it will calm my mind knowing that I tried everything possible.

Unfortunately, I never paid attention to which of the drives is failing when it does - the F1 or the F3s. I will note that in future events, which by now I'm sure will recur.
What also strikes me by now is that I never had *any* problems whatsoever on my rpool (both SATA Seagate and now SAS Fujitsu drives) or the VM pool (also both SATA Seagate and now SAS Fujitsu drives). It's always my datapool with the Samsung drives.

So how are my drives connected? At the moment, they're connected onboard (LSI SAS2008 controller) with forward breakout cables, 50 cm long, SFF-8087 to 4 x SATA. It's the second set I am trying; they're from Supermicro and should therefore fit the Supermicro mainboard with the onboard LSI controller perfectly. I have the same cables, but 70 cm in length, going to the SAS drives on my 9211-8i controller, and they work perfectly there, even though I use two Raidsonic backplanes for SAS.

If I see an error (meaning, if I can still connect to the console), it's simply "too many errors" on one or several disks. In the past, I often could not connect at all anymore, which has not happened since I turned off multipath. When I hit "clear errors", it resets everything, so it will show me S:0 H:0 T:0 afterwards... I checked the log yesterday, and it began the same way as last time, but this time for target 13, so I guess it's not really following a pattern. Then it later said "too many errors" on a device and turned it off, pressing the spare into duty.
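For the record, here's where I look when this happens (a minimal sketch - the pool name 'datapool' is a placeholder). The S:/H:/T: values napp-it shows are the per-device soft/hard/transport error counters, and the "too many errors" events land in the FMA log:

Code:
# per-device Soft/Hard/Transport error counters (the S:/H:/T: values)
iostat -En

# fault management events recorded around the time of a dropout
fmdump -eV | less

# what FMA currently considers faulted
fmadm faulty

# reset the pool's error counters after a transient fault
zpool clear datapool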

Best regards,

Cap'

Captain,

I sooo understand the level of frustration you're feeling. I'll have a pint in your honour! Okay, maybe three.

I have seen the firmware update you found. As all my drives are on the excluded firmware, I've yet to attempt a flash - the documentation says the flash will fail on drives with that firmware.

I've got 1 m cables due to the size of my case. I thought, perhaps, the cable length might be causing some of the problems.

When the spare is activated, see what errors are shown. All that I see are transport errors. You're able to clear the faults for a certain time. Once the drive goes offline, only a hard reboot will clear it. Even going to init 6 doesn't clear the counters. It may be a function of the quick reboot.

Looking through the mptsas source, the timeout error is thrown when a SCSI reset times out. So one wonders whether the controller hangs when executing the reset, or the drive isn't responding fast enough. Some data from SMART indicates when a drive comes out of sleep - I can't recall if those are errors or just counters. I tried disabling power management a while back, thinking the drives might be going to sleep and then not coming back fast enough. I didn't really notice a difference. Before I migrated other services to this box, it would sit idle most of the night. I noticed then that drives would go offline once the load increased on the pool.
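(For the record, disabling device power management on Solaris/OI goes roughly like this - a minimal sketch:)

Code:
# /etc/power.conf: turn off automatic device power management
autopm disable

# then apply the new settings without a reboot
pmconfig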

I've moved my pool of Samsungs to the onboard SATA and left one drive on the SAS controller with the system disk. I'll see how this goes.

When I imported the pool it couldn't locate the hot spare. So I removed it and went to add it with the SATA name and *bam*, kernel panic. I've yet to go through the dump with mdb, but after a restart I was able to add the drive no problem. It's also one of the drives that gives the most problems.
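For anyone hitting the same thing, the remove/re-add dance was roughly this (a sketch - the pool and device names here are made up):

Code:
# drop the hot spare under its old name, then re-add it under the SATA name
zpool remove tank c3t5d0
zpool add tank spare c0t5d0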

I'm going to fire off some emails to LSI and Seagate and see if I can find any more info. I'm not holding my breath with Seagate. I am hoping, though, that some dev at LSI goes, "wait, I know what this is!" One can only hope.

I'll let you know how those pints taste ;)

Cheers,
Liam
 
How many drives do you have altogether, what type are they, how are they all connected and how are your zpools configured?

I have 11 drives overall:
- 2 drives in a mirror for OS. Those are connected to the LSI 9211-8i Controller and are SAS drives (36 GB).
- 2 drives in a mirror for testing as a VMWare pool. Those are also connected to the 9211-8i Controller, and those are also SAS drives.

--- I NEVER had a single problem with one of those drives or pools! ---

- 7 drives = 3 x mirrored 1 TB Samsung SATA drives, plus one as spare. Those are connected to the onboard LSI SAS2008 Controller, which of course also supports SATA.

All drives are connected with forward breakout cables (SFF-8087 to 4 x SATA), the SATA drives directly, the SAS drives are in a Raidsonic Backplane for 2.5" SAS.

Here are my pools:
[screenshot: pools9WANN.png]


Cheers,
Cap'
 
Does anyone have an idea why SMB file transfer would slowly slow down on a big file transfer?

While transferring a 30 GB file over SMB from a Win7 machine with an SSD to my ZFS pool (4x 2 TB WD Black in 2x mirror - benchmarks around 300 MB/s write), the file transfer starts around 100 MB/s, but after around 10 GB it stabilizes around 30 MB/s.

Any idea? ZFS is provided from an OI VM, running on ESXi 5 with an E1000 NIC on fast hardware...

Thanks!

EDIT: The problem is limited to a single ZFS "folder" on the pool. I have a pool with multiple folders, with different SMB names. They all have sync=disabled. On all of them I get 80 MB/s, except one that starts at 80, quickly slows down to 27 MB/s and stays there.
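For anyone wanting to compare, the per-folder settings can also be checked and changed from the shell - a sketch, with placeholder pool/folder names:

Code:
# check the properties most likely to differ on the slow folder
zfs get sync,compression,recordsize,copies tank/badfolder

# confirm sync really is disabled
zfs set sync=disabled tank/badfolder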
 
I have 11 drives overall:
- 2 drives in a mirror for OS. Those are connected to the LSI 9211-8i Controller and are SAS drives (36 GB).
- 2 drives in a mirror for testing as a VMWare pool. Those are also connected to the 9211-8i Controller, and those are also SAS drives.

--- I NEVER had a single problem with one of those drives or pools! ---

- 7 drives = 3 x mirrored 1 TB Samsung SATA drives, plus one as spare. Those are connected to the onboard LSI SAS2008 Controller, which of course also supports SATA.

All drives are connected with forward breakout cables (SFF-8087 to 4 x SATA), the SATA drives directly, the SAS drives are in a Raidsonic Backplane for 2.5" SAS.


OK - I might be inclined to export the datapool, and move the 3 mirrors (ie 6 drives) to the motherboard's SATA ports. The spare could either move to the 9211-8i or stay where it is (it will only be used in the event of a disk failure anyway - and iirc, the 9211-8i is the same controller as the onboard SAS-2008 anyway).
Then re-import the datapool and see how you get on - I can't help feeling that your issue may be something to do with the SAS2008 controller/driver and/or the drives' interoperability with it!

I might also move the vm pool to the SAS2008 controller - though I'd take this all one step at a time - if you make too many changes at once, it's easy to end up either with the issue disappearing and you don't know what the cause was, or else you might end up with multiple issues, and not be sure what step introduced them.
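In command terms it's just this (a sketch, assuming the pool is called 'datapool') - ZFS finds the disks by their labels, not their ports, so recabling between controllers is safe:

Code:
zpool export datapool
# ...power down, move the six mirror disks to the motherboard SATA ports...
zpool import datapool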
 
EDIT: The problem is limited to a single ZFS "folder" on the pool. I have a pool with multiple folders, with different SMB names. They all have sync=disabled. On all of them I get 80 MB/s, except one that starts at 80, quickly slows down to 27 MB/s and stays there.

I have created a new folder and copied all my data from the "bad" one to the new one, and the new one seems to behave properly...

If anyone has an idea or something to try on the bad one, let me know, as I fear this problem could reappear if I don't understand what's going on there... :(
 
Hi!

At home I am running a little server, based on desktop hardware with an Areca RAID controller
card and 8 drives, providing some services for me. Because I want to test some things and
extend my home services, I am thinking of building a new home server.

After doing some research it was clear that the new server should run ESXi 5 and virtualize the
machines and services I need. I also found napp-it, this beautiful piece of software, which I
tested in VMware Fusion under OpenIndiana. Even if I am not a daily user yet, I want to say
many thanks to the author.

Now at this point I need some advice on what options I have to get the optimum out of my
resources, and I have some questions:

1. Hardware choice:

I.
Mainboard: ASUS P8B WS
CPU: Intel Xeon E3-1235
RAM: GeIL EVO Corsa DIMM Kit 32GB PC3-10667U CL9-9-9-24 (DDR3-1333) (GOC332GB1333C9QC)

II.
Mainboard: SUPERMICRO X9SCM-F
CPU: Intel Xeon E3-1230
RAM: 2 x KINGSTON 8GB DDR3 1333MHz ECC CL9 KVR1333D3E9S/8G

Which one would you prefer? My budget is limited, so it's about having server hardware vs.
the amount of RAM in the first step. Another two Kingston RAM modules could follow...

2. Storage:

Right now I have the following config:

Areca RAID controller ARC-1221 running RAID 6 on
8 x 1 TB Seagate SATA drives (ST31000340NS)
Netstor NA760B Tower 8-bay SATA enclosure with I2C for Areca HBA,
connected with 2 multilane cables to the RAID controller.

Disk space 5.85 TB / 1.78 TB free
The speed over the network to different machines using NFS or Netatalk is about
60 MB/s.

Would it make sense to copy the data over to temp storage and build a ZFS pool with the drives
(disable the hardware RAID and provide the single drives to ZFS) to be more
flexible? What would be the best option for building a pool with that setup?

Thanks!

Schleicher
 
Most people will tell you to go ECC.

Personally I didn't, and went for an i7 3930K (newest stepping for VT-x), which supports 64 GB of non-ECC RAM. The reason was cost, as I wanted 32 GB right away and many PCIe slots.

You need to move the data off; I would play with ESXi+OI, and then copy the data back once you are happy.

As for disk configuration, it depends on your needs. I run my VMs off the ZFS pool, so I wanted a good mix of IOPS and redundancy. The only way to climb in IOPS with a few disks is to mirror them.

I would create 5 vdevs of 2 mirrored 1 TB drives, which will give you about 4 TB and IOPS from all 8 drives.

However, it seems that your array is pretty full... you would need to delete some stuff or add more drives, or start replacing some 1 TB drives with 2 TB or more...

A nice thing about 2-disk mirrors is that you only need to replace or add 2 disks to grow your pool.

Anyway, this is what I'm doing here.
 
@x-cimo:
You probably meant: 4 vdevs of 2 mirrored 1 TB drives.
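For reference, that layout in zpool terms would be something like this (a sketch - 'tank' and the device names are placeholders):

Code:
# four 2-way mirrors striped together: ~4 TB usable, IOPS from all 8 spindles
zpool create tank \
  mirror c2t0d0 c2t1d0 \
  mirror c2t2d0 c2t3d0 \
  mirror c2t4d0 c2t5d0 \
  mirror c2t6d0 c2t7d0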

I like the idea of 2 mirrored drives, because when I give up my hardware RAID I can replace just some
disks and will not have to buy 8 HDs at once to get more space. I didn't think about that myself,
so many thanks for the hint. Maybe I will get two 3 TB drives and replace two of the current 1 TB drives with
them - plenty of space left then...
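If I understand it right, growing one mirror vdev in place would go roughly like this (a sketch; pool and device names are placeholders):

Code:
# let vdevs grow once all of their disks are bigger
zpool set autoexpand=on tank

# swap in the 3 TB disks one at a time, waiting for each resilver to finish
zpool replace tank c2t0d0 c3t0d0
zpool replace tank c2t1d0 c3t1d0   # the vdev expands after the second resilver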

Does anyone know if I could run into trouble with the Areca 1221 if I break the RAID and JBOD the single drives
via the controller and the multilane cable to the mainboard?
 
UPDATE:
I already found some threads on the net about problems with the Areca, Solaris and JBOD mode with ZFS,
so I can forget that option.
New plan: turn my real machine into a VM, pass the Areca RAID controller through to the VM with VT-d, and leave all as it is...
Buy another two 3 TB hard drives and connect them to the mainboard, install OI and use them as
additional disk space in mirrored mode. Someday get an LSI-based controller with external multilane
support, and copy the data over to a temp space. Replace the Areca with the LSI controller and replace two of
the 1 TB drives in the external enclosure with the 3 TB drives already holding data. Add the other six 1 TB
drives to the pool and copy the data back.

Any comments on the plan are welcome. I am also looking for a recommendation for a nice, cheap LSI-based
controller with 2 external multilanes so that I can use my enclosure.
 
OK - I might be inclined to export the datapool, and move the 3 mirrors (ie 6 drives) to the motherboard's SATA ports. The spare could either move to the 9211-8i or stay where it is (it will only be used in the event of a disk failure anyway - and iirc, the 9211-8i is the same controller as the onboard SAS-2008 anyway).
Then re-import the datapool and see how you get on - I can't help feeling that your issue may be something to do with the SAS2008 controller/driver and/or the drives' interoperability with it!

I might also move the vm pool to the SAS2008 controller - though I'd take this all one step at a time - if you make too many changes at once, it's easy to end up either with the issue disappearing and you don't know what the cause was, or else you might end up with multiple issues, and not be sure what step introduced them.

Billy,
Thanks for your suggestions.
Honestly, I was so frustrated last weekend, I ripped out the Samsung drives and put in 6 x 750 GB SATA drives from Seagate that I found in my basement. They're certified for 24x7, so I assume they *should* work better than the Samsungs. I created a RAIDZ1, so I have about 3.4 TB of disk space, which is OK for me right now.

I began copying back the data from my USB disk, so this might take a while, as I have the same phenomenon as x-cimo with transfer speeds slowing down. On my pool, it even drops to 10 MB/s... I have to admit, though, that I don't have a ZIL in the system.

Any ideas on the slowing down issue would be very welcome!

Thanks,
Cap'
 
Only one ZFS filesystem (napp-it folder) was affected by the slowdown; the others were only slowing down a little, which is normal because of burst speed (starting at 100 MB/s and slowing down to around 90 MB/s), while that one was starting around 75 and slowing to 27...

I created a new folder and copied all the data over, then deleted my bad one, and the problem seems fixed, for now.

I also found that during each large file transfer (~20 GB), the network transfer would slow down to nothing for a little while (around 30 seconds), then resume. It was resolved by setting ESXi power management to disabled.
 
Hey guys, I've been running ZFS solidly for a while now and have run into a snag.

I cannot log into OpenIndiana. There are a couple of different passwords I've used, and none works with the admin login. I know this is a known problem, but there doesn't seem to be a workaround. :mad:

Is it best to just transfer all the data to another couple of drives and rebuild from scratch, or what? I have time to think this over, as I have 2.5 TB free of 4.5 TB of usable space.
 
Do you have napp-it installed? If so, I had the same issue: I used napp-it to set the password for root, and then I was able to log in again in the GUI.

God I love napp-it
 
Do you have napp-it installed? If so, I had the same issue: I used napp-it to set the password for root, and then I was able to log in again in the GUI.

God I love napp-it

Yeah, I followed the original 2 PDFs for walkthroughs in the OP.


How can you use napp-it if you can't log in? Maybe there's something simple I don't understand? :confused:
 
Only one ZFS filesystem (napp-it folder) was affected by the slowdown; the others were only slowing down a little, which is normal because of burst speed (starting at 100 MB/s and slowing down to around 90 MB/s), while that one was starting around 75 and slowing to 27...

I created a new folder and copied all the data over, then deleted my bad one, and the problem seems fixed, for now.

I also found that during each large file transfer (~20 GB), the network transfer would slow down to nothing for a little while (around 30 seconds), then resume. It was resolved by setting ESXi power management to disabled.

Yep, I have the same thing (with stalling transfers, then resuming). I am copying from my Windows PC though. Following your advice, I have disabled Power Mgmt, but it did not help. Do you know if there is such a thing on Solaris that I could disable?

Thanks,
Cap
 
Yep, I have the same thing (with stalling transfers, then resuming). I am copying from my Windows PC though. Following your advice, I have disabled Power Mgmt, but it did not help. Do you know if there is such a thing on Solaris that I could disable?

Thanks,
Cap

I'm running ESXi 5 with the highest amount of power saving on (can't remember the name), and I don't have these issues at all. But I did run into some speed issues a couple of months ago and noticed "compression" on my ZFS folders was activated. I deactivated this for the ZFS folder and speeds were at maximum again.

Compression uses A LOT of CPU when reading; even my Xeon 1230 was struggling to keep up.

Try it out.
Jim
 
I have 11 drives overall:
- 2 drives in a mirror for OS. Those are connected to the LSI 9211-8i Controller and are SAS drives (36 GB).
- 2 drives in a mirror for testing as a VMWare pool. Those are also connected to the 9211-8i Controller, and those are also SAS drives.

--- I NEVER had a single problem with one of those drives or pools! ---

- 7 drives = 3 x mirrored 1 TB Samsung SATA drives, plus one as spare. Those are connected to the onboard LSI SAS2008 Controller, which of course also supports SATA.

All drives are connected with forward breakout cables (SFF-8087 to 4 x SATA), the SATA drives directly, the SAS drives are in a Raidsonic Backplane for 2.5" SAS.

Here are my pools:
[screenshot: pools9WANN.png]


Cheers,
Cap'

I had a similar issue as you and Liam recently. I had been running a couple of arrays off of a pair of AOC-SAT2-MV8's under Solaris 11, using 10 Samsung drives: F1's and F3's mixed across both cards in one RAIDZ2 array, and 6 Samsung 2.5" 1 TB drives in a separate RAIDZ2 array.

I was finding that every 1-3 days, 1 or 2 drives would suddenly drop out of the array and stop responding. After a cold reboot of the server, all drives would be available and pass a scrub. The dropouts were spread across both cards, but it was always the HE103UJ's, never the HE103SJ's or the 2.5" ones. This indicated to me that there is something about that setup that doesn't play well with that particular drive model or firmware version, and I had resigned myself to an expensive replacement/rebuild cycle of disks that aren't even actually failing.

Problem resolved itself though. I decided that I would rather run a more open server OS with a brighter future, so I tore down my arrays and installed Openindiana. I recreated the arrays, copied my data back, and am now on 2 weeks uptime with zero issues. There must be something in the Solaris 11 system (driver? PM issue?) that has a weird obscure bug or something with these drives.

Maybe switching is an option for you, if it really is the same cause. Less expensive than replacing the disks, and Openindiana has been super stable for me. Kinda boring, actually... No issues to troubleshoot.
 
Some new stuff about performance.

A little while ago, I tried the vmxnet3 NICs, but found they were unreliable - fast, but with lots of speed drops to 0, then bursts again.

I swapped the E1000 NIC for VMXNET3 on my OI VM, and they are now VERY fast and reliable.

With the E1000 I was getting around 90 MB/s SMB write to the pool, with a 75% VMware host CPU load.

Now with vmxnet3, I am getting a constant 114 MB/s SMB write with about half the load, so around 40% CPU load (shown in the VMware CPU performance graph).

Since nothing else changed, I am putting my previous vmxnet3 issue down to CPU power saving...
 
I had a similar issue as you and Liam recently. I had been running a couple of arrays off of a pair of AOC-SAT2-MV8's under Solaris 11, using 10 Samsung drives: F1's and F3's mixed across both cards in one RAIDZ2 array, and 6 Samsung 2.5" 1 TB drives in a separate RAIDZ2 array.

I was finding that every 1-3 days, 1 or 2 drives would suddenly drop out of the array and stop responding. After a cold reboot of the server, all drives would be available and pass a scrub. The dropouts were spread across both cards, but it was always the HE103UJ's, never the HE103SJ's or the 2.5" ones. This indicated to me that there is something about that setup that doesn't play well with that particular drive model or firmware version, and I had resigned myself to an expensive replacement/rebuild cycle of disks that aren't even actually failing.

Problem resolved itself though. I decided that I would rather run a more open server OS with a brighter future, so I tore down my arrays and installed Openindiana. I recreated the arrays, copied my data back, and am now on 2 weeks uptime with zero issues. There must be something in the Solaris 11 system (driver? PM issue?) that has a weird obscure bug or something with these drives.

Maybe switching is an option for you, if it really is the same cause. Less expensive than replacing the disks, and Openindiana has been super stable for me. Kinda boring, actually... No issues to troubleshoot.

A couple of users on OCAU have had some funny problems with the Supermicro AOC-USASLP-L8i cards:

drives dropping out / channels not detected, etc.

Some problems were fixed by swapping PCIe slots;
other problems were RESOLVED (not fixed) by replacing 1 x Supermicro AOC-USASLP-L8i with a BR10i card...

Makes me kind of wonder if the AOC-SAT2-MV8's might throw similar problems?

Can you borrow another SAS card from someone and run it alongside one of your AOC-SAT2-MV8's... i.e. a BR10i or such?

See if the problem either disappears or lessens...

The above assumes you have a PCIe slot as well as PCI-X slots.
If no PCIe slots, maybe try an LSI 3080X-R flashed to IT mode... or an HP 8-port SAS controller... (Yes, that's what they are really called.) Search eBay for HP part number 347786-B21 or 435709-001 or 435234-001.

.
 
A couple of users on OCAU have had some funny problems with the Supermicro AOC-USASLP-L8i cards:

drives dropping out / channels not detected, etc.

Some problems were fixed by swapping PCIe slots;
other problems were RESOLVED (not fixed) by replacing 1 x Supermicro AOC-USASLP-L8i with a BR10i card...

Makes me kind of wonder if the AOC-SAT2-MV8's might throw similar problems?

Can you borrow another SAS card from someone and run it alongside one of your AOC-SAT2-MV8's... i.e. a BR10i or such?

See if the problem either disappears or lessens...

The above assumes you have a PCIe slot as well as PCI-X slots.
If no PCIe slots, maybe try an LSI 3080X-R flashed to IT mode... or an HP 8-port SAS controller... (Yes, that's what they are really called.) Search eBay for HP part number 347786-B21 or 435709-001 or 435234-001.

.

I have only one PCIe slot, my AOC cards are PCI-X, and the chipset is supposed to be well supported in Solaris. I would need to use an expander if I went PCIe, which does not seem to be a good idea either.

The problem really seems to be isolated to that one model of drive or firmware. The card may be a factor in the equation... but only with that specific drive, and only under Solaris, not OpenIndiana.

A PCI-X SAS card may be worth looking into if I start having troubles again. Thanks for the info.
 
So I have destroyed my Samsung-based pool and created a new RAIDZ1 pool with 6 Seagate 24x7 drives (5+1, 750 GB each). Yesterday, I copied 1.5 TB of data onto it. At first, I had the problem that it would copy pretty fast, and every 30 seconds or so the transfer stalled completely for about 20 seconds, then resumed. After a while, performance came down to less than 10 MB/s!
I solved it by rebooting both the NAS and my office PC, and then everything was back to normal (on the office PC, I also disabled power management on the NIC). It then occurred to me that I had hot-plugged the SATA drives one by one so I could note their Solaris identifiers (c27t...), and did not reboot afterwards. It seems it didn't like that.
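For next time: I gather hot-plugged SATA disks need an explicit cfgadm step before Solaris fully picks them up - a minimal sketch (the port name is a placeholder):

Code:
# list SATA ports and their occupant state
cfgadm -al | grep sata

# bring a freshly plugged disk online without a reboot
cfgadm -c configure sata1/3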

The data copy went pretty well at about 50 MB/s sustained; is this about what I can expect from it? I have "only" 12 GB of RAM in the box - would it help to put in more RAM, or is that only for reads?

I also ran two scrubs, and they completed fast with zero errors found. I have to keep my fingers crossed that it will stay that way... If it does, then I think colbyu is probably right in saying that the F1s and F3s in a mixed environment are simply a bad setup...

Cheers
Cap'
 
Guys,
Just trying to bring my (newly assembled) system back to the state it was in before. I had no trouble installing net-ssleay to use TLS email, but when I send a test message, I get

Code:
invalid SSL_version specified at /usr/perl5/site_perl/5.12/IO/Socket/SSL.pm line 308

For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.
[Sun May 20 17:26:09 2012] admin.pl: invalid SSL_version specified at /usr/perl5/site_perl/5.12/IO/Socket/SSL.pm line 308

I have to admit I first installed net-ssleay 512 and 584, then installed the TLS packages. I saw my mistake later and uninstalled 584, as the installed Perl version is 5.12, but it did not solve the problem... any ideas?
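One thing I can still do is check which module versions the running Perl actually loads - a minimal sketch:

Code:
# print the versions Perl actually picks up
perl -MNet::SSLeay -e 'print $Net::SSLeay::VERSION, "\n"'
perl -MIO::Socket::SSL -e 'print $IO::Socket::SSL::VERSION, "\n"'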

Thanks!

Cap'

Since my system has begun to work stably and reliably, this is the one thing left that I need to find a fix for... anyone? Is it possible to uninstall the TLS component and reinstall it? How?

Thanks!

Cap'
 
So I have destroyed my Samsung-based pool and created a new RAIDZ1 pool with 6 Seagate 24x7 drives (5+1, 750 GB each). Yesterday, I copied 1.5 TB of data onto it. At first, I had the problem that it would copy pretty fast, and every 30 seconds or so the transfer stalled completely for about 20 seconds, then resumed. After a while, performance came down to less than 10 MB/s!
I solved it by rebooting both the NAS and my office PC, and then everything was back to normal (on the office PC, I also disabled power management on the NIC). It then occurred to me that I had hot-plugged the SATA drives one by one so I could note their Solaris identifiers (c27t...), and did not reboot afterwards. It seems it didn't like that.

The data copy went pretty well at about 50 MB/s sustained; is this about what I can expect from it? I have "only" 12 GB of RAM in the box - would it help to put in more RAM, or is that only for reads?

I also ran two scrubs, and they completed fast with zero errors found. I have to keep my fingers crossed that it will stay that way... If it does, then I think colbyu is probably right in saying that the F1s and F3s in a mixed environment are simply a bad setup...

Cheers
Cap'

Just to ask the simplest question: is the source you are copying from capable of more than 50 MB/s on those same files?
 
The data copy went pretty well at about 50 MB/s sustained; is this about what I can expect from it? I have "only" 12 GB of RAM in the box - would it help to put in more RAM, or is that only for reads?

Cheers
Cap'

It depends on the following:

What you are copying - e.g. lots of little files, or one big 10 GB file?
Where you are copying from - Windows? A slow laptop drive? A fast (>= 50 MB/s) drive?
What you are using to copy - Windows copy, or a copy program like TeraCopy? SMB or FTP, etc.?
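Also worth a try: a local write test on the pool itself, to take the network out of the picture entirely (a rough sketch - 'tank' is a placeholder, and compression needs to be off, since /dev/zero compresses to nothing):

Code:
# write ~8 GB locally and time it
time dd if=/dev/zero of=/tank/ddtest bs=1024k count=8192
rm /tank/ddtest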

.
 
Hey,
Good question...
I am copying from 2 sources: a 2 TB SATA-2 conventional hard drive, and a 120 GB SATA-2 SSD. Of course, I only copy from the SSD for test (comparison) reasons.

So I copied 1.5 TB of data back from my 2 TB SATA-2 disk, about 1.2 TB of it being pretty large files of 1.8-2 GB each. I use SMB only; I never tried FTP or another protocol. I copy from a Windows 7 Ultimate PC with an i7 CPU and 8 GB RAM. For such large copy jobs, I use ViceVersa Pro. To be able to compare speeds, I also copied from Windows Explorer directly, which is about 10 MB/s faster than VVPro on average.

To compare further, I also copied one large 10 GB file from the SSD to the pool. I got about the same average, more or less 50 MB/s, and the SSD is certainly capable of delivering more.

So with my profile of mostly copying rather large files, if I understand the whole thing correctly, it's bandwidth I need, not IOPS. The theoretical bandwidth of SATA-2 is 300 MB/s. The theoretical bandwidth of a Gigabit connection is 125 MB/s. I would be super happy with 80-90 MB/s... that should be possible, or did I miscalculate?

Cheers,

Cap'
 
I installed OI today (latest version) and updated it, installed napp-it, rebooted, installed AFP with "wget -O - www.napp-it.org/afp | perl", and I'm unable to connect to the share from a Lion computer with AFP.

I added "- -tcp -noddp -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword" to /etc/afpd.conf and restarted the service and restarted the machine - still no progress.

The error when connecting is: "The version of the server you are trying to connect to is not supported." Does anyone have any ideas what could be wrong? I've searched for a few hours now and I don't believe I've done anything wrong.

*edit*

I decided to run the installer again, figuring I had nothing to lose. Everything works now - very strange. Looking through the install log, it doesn't look like it made any obvious changes except for compiling the binary again; almost everything else was skipped.
 
Try updating your client NIC drivers. And swap the e1000 NIC to vmxnet3.

I updated my Windows PC's NIC drivers and disabled power management and Green Ethernet on it. I can't update NIC drivers on Solaris - I don't know how to do that... I am not running an all-in-one box! So no VMware involved here...
Cheers,
Cap'
 
What compression? Maybe gzip-N? The default compression should be insignificant, I think.


Napp-it->ZFS Folder->COMPR

I had that set to "on" by default, which you all possibly have. If you are streaming video or care about speed from that folder, then you need to disable it, unless you have a VERY powerful CPU (I'm using a Xeon E1230).


/Jim
 
One small question - after installing an AFP/netatalk server onto napp-it, how do I get my Apple TV to see it?
 
Napp-it->ZFS Folder->COMPR

I had that set to "on" by default, which you all possibly have. If you are streaming video or care about speed from that folder, then you need to disable it, unless you have a VERY powerful CPU (I'm using a Xeon E1230).


/Jim

I have the same processor and can stream over gigabit with vanilla compression with no issues. 1gb/sec is only 100MB/sec or so - that should be easily manageable by the default compression. Something else sounds wrong.
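Easy enough to check what the folder is actually set to, and whether compression is even buying anything (a sketch - the dataset name is a placeholder):

Code:
# lzjb (compression=on) should barely dent a modern CPU
zfs get compression,compressratio tank/media

# and if it really is the culprit:
zfs set compression=off tank/media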
 
I have the same processor and can stream over gigabit with vanilla compression with no issues. 1gb/sec is only 100MB/sec or so - that should be easily manageable by the default compression. Something else sounds wrong.

Great to hear! I found it very odd as well, but streaming a BD ISO (~50 GB) to my Dune HD Max gave minor lag spikes while watching. Turning COMPR in napp-it to "off" solved my problem instantly and lowered CPU load by a whole lot.

EDIT: I should mention that a speed test from the Dune player to the NAS maxes out the 100 Mbit/s the Dune player can handle. The only other thing I think it could be is the ESXi power-usage setting, which I have set to save the MOST power possible.
 
Oh, okay. That's different. I have to admit I don't do that - just speaking from the PoV of copying stuff from the share, where little glitches are not going to be perceptible. On the other hand, I'm surprised to hear about CPU load - what were you seeing when copying from a compressed share? On a 3rd note, I think it probably doesn't make much sense to have compression on for a dataset that has things like MPEG and such, since they are already compressed?
 
Oh, okay. That's different. I have to admit I don't do that - just speaking from the PoV of copying stuff from the share, where little glitches are not going to be perceptible. On the other hand, I'm surprised to hear about CPU load - what were you seeing when copying from a compressed share? On a 3rd note, I think it probably doesn't make much sense to have compression on for a dataset that has things like MPEG and such, since they are already compressed?

Indeed, just copying maxes out the connection on the Dune player (only 100 Mbit/s) just fine.

When copying from a compressed share to my PC, I never noticed anything wrong, albeit it didn't max out my 1 Gbit connection completely (~70 MiB/s). But after turning off compression and enabling VMXNET3 on the OI VM, I'm seeing 100-110 MiB/s just copying.

And finally, using compression on a video share is pretty much pointless, hence I turned it off :)

EDIT: Whether it was VMXNET3 or COMPR that led to the higher throughput is questionable, as VMXNET3 does provide more bandwidth in theory.
 
I forgot this was virtualized. I stopped using vmxnet3 due to anomalies (spikes and lags and such).
 
I installed OI today (latest version) and updated it, installed napp-it, rebooted, installed AFP with "wget -O - www.napp-it.org/afp | perl", and I'm unable to connect to the share from a Lion computer with AFP.

...
I decided to run the installer again, figuring I had nothing to lose. Everything works now - very strange. Looking through the install log, it doesn't look like it made any obvious changes except for compiling the binary again; almost everything else was skipped.

There was a bug in OI 151a1 (=prestable0) with multiple pkg installs without a reboot after each.
It is suggested to update to OI 151a4 (=prestable3) prior to running installers:

http://wiki.openindiana.org/oi/oi_151a_prestable3+Release+Notes
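Roughly (a minimal sketch - the exact steps are in the release notes above):

Code:
# update the image and reboot into the new boot environment
pkg image-update
init 6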
 
Last edited:
There was a bug in OI 151a1 (=prestable0) with multiple pkg installs without a reboot after each.
It is suggested to update to OI 151a4 (=prestable3) prior to running installers:

http://wiki.openindiana.org/oi/oi_151a_prestable3+Release+Notes

Thanks for the help, Gea. One thing I'm confused about: I started with a base ISO of OI 151a3, ran pkg update and got all the updates, rebooted, installed napp-it, rebooted again, then tried the AFP installer. I should have been on the latest release of OI at that time and unaffected by that bug?
 