OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

So there needs to be a snapshot created on both appliances?

The transfer begins: a snapshot is created on the source filesystem and a ZFS filesystem is created on the target, but there is no snapshot on the target appliance. The transfer runs for about 1800 seconds and then errors out with the above error.

When I first got the error, I deleted all snapshots on both appliances manually. Ever since then, every new initial sync has failed.

I've also recreated the appliance groups, double-checked my network settings, and tried different NICs on both servers. It still fails every time.
 
- A target snap is only created after a successful initial replication, as it is the base for the next incremental replication.

- If the initial replication fails for whatever reason, you do not need to care about the snaps; this is handled by napp-it. But if a target filesystem is left over from an unsuccessful initial replication, you must delete or rename this filesystem manually, or napp-it will try an incremental transfer afterwards that must fail.
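If you have to clean up by hand, the leftover target filesystem and its snapshots can be handled with the standard ZFS commands; a minimal sketch with placeholder pool/filesystem names:

Code:
# see what is left on the target box
zfs list -r -t all backup/replica_fs

# either rename it out of the way ...
zfs rename backup/replica_fs backup/replica_fs.old

# ... or destroy it (and its snapshots) so the next initial sync starts clean
zfs destroy -r backup/replica_fs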
 
Thanks for the replies! I'm not sure what happened; the initial sync failed over 20 times, but this morning it finished without a problem. Let's just hope it lasts.
 
When creating a snapshot of my ZFS filesystem, it takes a snapshot of the entire NFS share. Is this working as intended, or is there a way to get individual snapshots of each subdirectory inside the share?

Also, what's the difference between the auto services "enable auto 1min" and "enable auto 15min"? Just the interval at which cron checks the job list?
 
When creating a snapshot of my ZFS filesystem, it takes a snapshot of the entire NFS share. Is this working as intended, or is there a way to get individual snapshots of each subdirectory inside the share?

Snapshots are done at the filesystem level, not at the "regular folder" level.
As shares are also a filesystem property, you cannot split this into separate settings.

Also, what's the difference between the auto services "enable auto 1min" and "enable auto 15min"? Just the interval at which cron checks the job list?

Yes
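If you want per-folder snapshots, the usual approach is to make each subdirectory its own (nested) ZFS filesystem, since a snapshot always covers exactly one filesystem. A minimal sketch with placeholder names:

Code:
# create child filesystems instead of plain folders inside the share
zfs create tank/share/projects
zfs create tank/share/media

# snapshot a single child ...
zfs snapshot tank/share/projects@before-cleanup

# ... or all of them at once, recursively
zfs snapshot -r tank/share@daily-2014-08-18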
 
Thanks _Gea, top-notch work on napp-it :cool: Really enjoying line-speed centralized disk access at home.
 
One of my drives in a two-drive basic pool configuration has failed its SMART status. I tried pulling as much data off as possible, but now it constantly times out and I can't copy any more.

What's the best way to retrieve data from this?

Currently on OI + the latest version of napp-it (ESXi AIO) setup.
 
Why, for heaven's sake, have you built a RAID-0 when the data is in any way relevant?
If one disk fails, your pool is lost. You can hope that a power-off/on or a read-only pool export/import may help.

Otherwise you need the help of a commercial data rescue service.
They can copy your data over to a new disk, but this can be quite expensive.
 
Hi, can anyone recommend a SAS card that can take 4 TB HDDs and works with Solaris? I have an AOC-SASLP-MV8 now but can't get it to work.
I have read that it works for some people with Windows; is that a driver problem?
I can't find any driver for Solaris and only get 2.2 TB.
 
Use LSI HBA adapters, e.g.
IBM M1015 (reflash it to raidless LSI 9211 IT mode, the cheapest)
LSI 9211 (flash it to IT mode)

one of the best:
LSI 9207 (IT mode per default)
 
Thanks for the quick response, one LSI 9207 ordered :)
 
Why, for heaven's sake, have you built a RAID-0 when the data is in any way relevant?
If one disk fails, your pool is lost. You can hope that a power-off/on or a read-only pool export/import may help.

Otherwise you need the help of a commercial data rescue service.
They can copy your data over to a new disk, but this can be quite expensive.

Thanks Gea, I realize my mistake now. Still pretty new to this.

So I should export the pool first and then import it read-only? Or are there steps I should be taking before importing? Thanks!
 
If a disk is dead, you can do nothing.
If the disk is semi-dead, you can try a power off/on or a pool export/import
with the optional read-only option; that may help (or not).
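In commands, that suggestion is roughly the following (the pool name is a placeholder; readonly is an import-time option, so it has to be given on import):

Code:
# export the pool, then try a read-only import to copy data off the semi-dead disk
zpool export tank
zpool import -o readonly=on tank

# check the state afterwards
zpool status tank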
 
Hi _Gea!

I came across an interesting thing. I monitor my disk I/O with Zabbix, for which I created a script that uses iostat to get statistics. I want to monitor per-device speed and per-pool speed as well.

But today I noticed improbably high speeds on one of my pools, so I went ahead and checked my script in case there was a bug in there giving me the wrong statistics.

Here is the output of my script:
Code:
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
23.8 22.5 1517.9 275.6 0.2 0.7 4.5 14.5 2 11 pool0
24.4 854.4 3109.0 94983.3 213.2 1.8 242.6 2.1 49 66 pool1
1.3 6.7 20.1 36.6 0.0 0.0 2.3 0.3 0 0 rpool
1.4 7.1 20.1 36.6 0.0 0.0 0.0 0.2 0 0 c1t0d0
6.3 5.4 387.7 69.6 0.0 0.1 4.2 6.1 2 5 c1t4d0
6.2 6.1 385.3 68.9 0.1 0.1 7.3 7.6 3 6 c1t5d0
6.1 6.0 372.7 68.1 0.1 0.1 8.5 5.7 4 6 c1t6d0
6.1 5.9 375.0 69.0 0.1 0.1 9.6 6.6 5 7 c1t7d0
12.2 428.4 1514.9 47470.5 0.5 0.4 1.1 1.0 26 44 c1t2d0
12.7 429.1 1595.6 47512.9 0.4 0.4 0.9 1.0 20 43 c1t3d0

I noticed that the statistics for pool1 (the mirror pool) are "wrong". I always thought iostat would tell me the speed at which the system is writing data to the pool1 pool, not the combined speed of all drives. In my case, the pool1 write speed should be 47 MB/s, but iostat reports 94 MB/s, which is the combined speed of both drives. The read speed in this case is right, since the system can read from both drives at the same time.

How do you monitor per-pool speed in napp-it?
Do you even monitor per pool, or only per drive?
I would love to have some way of monitoring pool read/write speeds so I can quickly notice high usage. I can't do that with iostat alone; I could do it with iostat and some scripting that figures out the RAID type and calculates speeds accordingly.

Do you have any idea how I could get per-pool I/O stats? Mainly speed and IOPS.

Matej
 
I use iostat and zpool iostat to monitor per device and per pool.
Besides that, iostat is not wrong. It shows disk I/O - it does not matter
that in the case of a mirror this is 2x the same data - not the effective data rate.

With zpool iostat -v you can monitor per vdev (this may show what you want):
http://docs.oracle.com/cd/E19253-01/819-5461/gammt/index.html

or
you must take the vdev type into account in your statistics script.
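A rough way to get a per-pool number for Zabbix is to sample zpool iostat and keep only the second interval (the first line is an average since boot); a sketch, assuming the human-readable bandwidth values are acceptable or converted elsewhere:

Code:
#!/bin/sh
# print read and write bandwidth for one pool, sampled over 5 seconds
POOL=pool1
zpool iostat "$POOL" 5 2 | awk -v p="$POOL" '$1 == p { r=$6; w=$7 } END { print r, w }'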
 
I set up my ZFS NAS as a torrent download target, but uTorrent randomly returns some "write to disk" and "hash" errors. Is there something I can do to fix this, or is it not recommended to use things like torrents directly on a ZFS volume?
 
This can be the case, as ZFS has two special behaviours: CopyOnWrite and a 5 s write cache where it collects small random writes and transforms them into a single large sequential write. This improves performance but can be a problem with some applications that use special performance functions while targeting a generic Windows setup with a traditional filesystem.

This is a problem, for example, with robocopy (a sync tool from Microsoft), where you need to set the /b parameter or it may produce empty files on ZFS.

If the problem occurs randomly, it may be an issue with small random writes that only appears when the share does not commit fast enough. You may try an SSD for the torrent share or check whether sync=always helps.
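If you want to test that last suggestion: sync is a per-filesystem property, so you can set it only on the torrent target and revert it if it does not help (filesystem name is a placeholder):

Code:
# force synchronous commits for the torrent filesystem only
zfs set sync=always tank/torrents
zfs get sync tank/torrents

# revert to the default if it makes no difference
zfs set sync=standard tank/torrents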
 
It looks like zpool iostat might give me better results. I'm aware that the output of iostat is not wrong per se, I just expected different results.
Can you interpret something for me, since I'm not sure I understand it correctly:

Code:
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool0       4.92T  2.33T      8      8  1.01M  83.5K
  raidz2    4.92T  2.33T      8      8  1.01M  83.5K
    c1t4d0      -      -      4      4   261K  50.8K
    c1t5d0      -      -      4      5   263K  49.6K
    c1t6d0      -      -      3      5   254K  50.1K
    c1t7d0      -      -      3      5   253K  48.9K
----------  -----  -----  -----  -----  -----  -----
pool1        127G   801G     15    248  1.82M  20.4M
  mirror     127G   801G     15    248  1.82M  20.4M
    c1t2d0      -      -      7    189   960K  20.4M
    c1t3d0      -      -      7    190   904K  20.4M

In the case of pool0, which is a RAIDZ2, when I read from that pool it says it read 1 MB.
How come it read the same amount of data from all drives?
Am I lucky and all my data was evenly distributed, or does it have to read data + parity for an integrity check?
If it needs to check integrity, it didn't actually read 1 MB of "usable" data, but only 500 K; the rest was just for checking...
In the case of writing, the data makes sense. It wrote 83 K of data, half to each data drive and 2x parity to the other two.

Now pool1, which is a mirror pool:
Reads and writes make sense now, but write ops don't. It says it needed 250 write ops to write the data, but every hard drive made 190 write ops. Shouldn't the combined write ops be more like 400?
iostat makes more sense in this regard:
Code:
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   16.3   19.6 1037.0  201.0  0.0  0.3    1.2    9.4   1   6 pool0
   15.4  383.0 1877.7 42126.2 118.9  1.0  298.4    2.5  25  36 pool1

What don't I understand?

Matej
 
It depends on what you are looking at.

If you write a datablock to a mirror and you check the pool/vdev,
you write this single block, resulting in the I/O to write that one block.

If you look at the disks, you need to write this datablock on both disks,
resulting in the same or a similar I/O load on both disks.

On reads, the I/O load on the disks can be about half of the vdev/pool I/O,
as ZFS reads from both disks simultaneously.
 
I'm aware of that, but there seems to be some inconsistency in the output that I don't understand.

Code:
  capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool1        127G   801G     32    253  4.07M  21.1M
  mirror     127G   801G     32    253  4.07M  21.1M
    c1t2d0      -      -     16    193  2.04M  21.1M
    c1t3d0      -      -     16    194  2.03M  21.1M

What you are saying is true for the read IOPS. Pool1 made 32 read ops, 16 per drive. That is OK.

But if you look at the write ops, things don't add up. Pool1 made 250 write ops, but there were 190 write ops per drive. Given what you said, there should be 250 write ops per drive, since the system writes the same data to both drives. At least to my understanding of RAID1. But ZFS might do something else when writing data that I don't know about.

Matej
 
I would say the problem is that you cannot measure the real value at the disk level but at the ZFS or controller level at a certain point.
Between that point and the disk there is room for some optimisations.

ex
If ZFS did the 194 operations per disk independently on both disks, it should show 388, but it shows 253.

If the mirroring were done completely at the driver level, the pool value should be 194 as well.
The truth seems to be somewhere in between, but this needs a ZFS developer to explain correctly.
 
Hey guys,

I'm having a strange issue with the napp-it GUI, and I was wondering if it's just me, or if others are experiencing the same...

I'm running napp-it 0.9f1 on OmniOS v11 r151010, and no matter what I do in the GUI, when I create a view for an LU, it always assigns any new view a LUN of 0. I'm not sure if this is a bug ("feature", as Microsoft would call it :D), or if I've b0rked something in OmniOS or napp-it. It's been this way as long as I can remember (since the early 0.9 days), but if I create a view from the CLI, I can assign any available LUN without issue.

Thanks for any input! :)

-Ben
 
A GUI always forces a consistent workflow. At the CLI you always have some more options.
But I do not really understand your problem, as you assign a view not to a LU but for a LU to a target group (or all).

The idea behind Comstar:
- Create one or more LUs (logical units: a file, disk or ZFS volume)
- Create one or more targets (that you can connect to from a client)
- Create one or more target groups with targets as members (to manage targets with LUNs)

- Up to this point, you can connect to the target(s) from a host, but there are no LUNs within
- To make one or more LUNs visible in a target, you need to add views to your LUs

Now you need to know that you do not assign a view to a target but to a target group,
with the effect that all targets that are members of the target group show this LU as a LUN.
The LUNs are numbered starting with 0 per target group.

This seems quite complicated at first, but it offers enormous flexibility in larger environments.
For quick and easy assignments, you can create iSCSI shares in the menu ZFS filesystems, where
a filesystem can be shared via iSCSI in an on/off manner. In the background this creates a 1:1 assignment
of one zvol per filesystem = LU = target = target group = view.

http://www.c0t0d0s0.org/archives/6140-Less-known-Solaris-Features-iSCSI-with-COMSTAR.html
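For reference, the same chain on the CLI looks roughly like this (the IQN, zvol path and GUID below are placeholders); the -n option of stmfadm add-view is where a specific LUN can be chosen, which is the part the GUI currently fixes at 0:

Code:
# create a LU from a zvol
stmfadm create-lu /dev/zvol/rdsk/tank/vol1

# create a target, a target group, and add the target to the group
itadm create-target
stmfadm create-tg tg1
stmfadm offline-target iqn.2010-09.org.example:target0    # member must be offline while added
stmfadm add-tg-member -g tg1 iqn.2010-09.org.example:target0
stmfadm online-target iqn.2010-09.org.example:target0

# add a view: the LU becomes LUN 2 in every target of group tg1
stmfadm add-view -t tg1 -n 2 600144F0XXXXXXXXXXXXXXXXXXXX
stmfadm list-view -l 600144F0XXXXXXXXXXXXXXXXXXXX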
 
OK, I think I maybe messed up. I have backups, so it's not that bad, but: I tried to upgrade my 2 TB ZFS vdev with 4 TB drives, and after the first 4 TB disk was resilvering I got an unavailable disk and my new 4 TB drive shows DEGRADED. Can I save it? What do I have to do to fix the DEGRADED state?


pool: tank
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function in a degraded state.
action: Wait for the resilver to complete.
Run 'zpool status -v' to see device specific details.
scan: resilver in progress since Mon Aug 18 07:42:24 2014
169G scanned out of 19.3T at 334M/s, 16h41m to go
13.7G resilvered, 0.85% done
config:

NAME STATE READ WRITE CKSUM CAP Product
tank DEGRADED 3 0 0
raidz2-0 DEGRADED 0 0 0
c6d0 ONLINE 0 0 0 2 TB
c4t13d0 ONLINE 0 0 0 2 TB WDC WD20EARS-00M
c4t1d0 DEGRADED 0 0 0 2 TB WDC WD20EADS-00R
c4t2d0 ONLINE 0 0 0 2 TB WDC WD20EADS-00R
c4t3d0 ONLINE 0 0 0 2 TB WDC WD20EADS-00R
c7d1 ONLINE 0 0 0 2 TB


raidz2-1 DEGRADED 0 0 0
c0t50014EE2AF1C4945d0 ONLINE 0 0 0 2 TB WDC WD20EARS-00S
replacing-1 DEGRADED 0 0 0
c0t50014EE0AD12C436d0 UNAVAIL 0 0 0 2 TB WDC WD20EARS-00M
c0t50014EE20A711435d0 DEGRADED 0 0 0 (resilvering) 4 TB WDC WD40EFRX-68W
c0t50014EE057BD2769d0 ONLINE 0 0 0 2 TB WDC WD20EARS-00M
c0t50014EE2039D6941d0 ONLINE 0 0 0 2 TB WDC WD20EVDS-63T
c0t50014EE2AD7583D2d0 ONLINE 0 0 0 2 TB WDC WD20EADS-00R
c0t50014EE25F5FA45Ad0 DEGRADED 0 0 0 4 TB WDC WD40EFRX-68W
 
I would say the problem is that you cannot measure the real value at the disk level but at the ZFS or controller level at a certain point.
Between that point and the disk there is room for some optimisations.

ex
If ZFS did the 194 operations per disk independently on both disks, it should show 388, but it shows 253.

If the mirroring were done completely at the driver level, the pool value should be 194 as well.
The truth seems to be somewhere in between, but this needs a ZFS developer to explain correctly.

Maybe the ZFS overhead (writing checksums) is not taken into account at the pool level but is at the drive level.
 
OK, I think I maybe messed up. I have backups, so it's not that bad, but: I tried to upgrade my 2 TB ZFS vdev with 4 TB drives, and after the first 4 TB disk was resilvering I got an unavailable disk and my new 4 TB drive shows DEGRADED. Can I save it? What do I have to do to fix the DEGRADED state?


pool: tank
state: DEGRADED
..
replacing-1 DEGRADED 0 0 0
c0t50014EE0AD12C436d0 UNAVAIL 0 0 0 2 TB WDC WD20EARS-00M
c0t50014EE20A711435d0 DEGRADED 0 0 0 (resilvering) 4 TB WDC WD40EFRX-68W

I suppose you just removed the old 2 TB disk, as it is UNAVAIL.
You should insert a 4 TB disk and start a disk replace, as this will not affect redundancy.

Now you must wait until the resilver is finished, then do a
clear error and/or a zpool detach tank c0t50014EE0AD12C436d0
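In commands, the sequence is roughly the following (device name taken from the zpool status above; only run the detach once the resilver has finished and the old disk is still shown under "replacing"):

Code:
# watch the resilver
zpool status tank

# clear the old error counters
zpool clear tank

# detach the unavailable old disk if it is still listed
zpool detach tank c0t50014EE0AD12C436d0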
 
Thanks, I think the disk had just gone bad. I tried replacing it in napp-it, and after a clear error it now works :D Now just 4 more 4 TB disks to go ;)
 
I am still on OI 151_a9 and, as documented, napp-it realtime monitoring from 0.9f1+ does not work. I have messed up pkg versions attempting to get this to work.

I want to move to OmniOS but am not comfortable with CLI only :mad:

I have attempted to install GNOME and KDE based on the link below, but I have not gotten either to work.
http://www.perkin.org.uk/posts/whats-new-in-pkgsrc-2013Q2.html

Has anyone attempted and successfully implemented GNOME, KDE or similar on OmniOS, or does anyone know where to find clear documentation on how to do so?
 
I have not tried to add a GUI, but a GUI howto for OmniOS would be nice.
Besides that, you do not need that much CLI knowledge with napp-it.

Other option: look at OpenIndiana Hipster.
It comes with a newer Perl, so napp-it monitoring should work.
http://wiki.openindiana.org/oi/oi_hipster
 
Hi, I have just upgraded the HDDs in one of my vdevs from 2 TB to 4 TB, but I don't get any extra space. Do both vdevs need to be the same HDD size?
 
Hi.

I have been using OpenIndiana + napp-it since 2012, and I would like to try OmniOS + the new napp-it version.

So, I set up a ZFS file shared storage for video editing
-> with Adobe Premiere and Avid Media Composer
-> to handle ~ 16TB / project
-> of original rushes: 4k/UHD compressed footage, h264 100 Mb/s 25p
... + converted rushes for draft editing: 4k/UHD ProRes Proxy, 151 Mb/s
.. and at the end: color-graded rushes at full quality, 4k/UHD ProRes 422, 492 Mb/s

  • Intel Xeon E3-1220V2
  • Supermicro X9SCM-IIF
  • 32GB : 4x 8GB Ram ECC
  • 3x m1015 flashed on LSI with IT Firmware
  • 1x 10 GbE Dual SFP+ port
  • Entropia Ananas 4U 24 bay
  • 4x vDev of RaidZ1 with 3x 2TB WD SE for Rushes (4k/UHD compressed footage : Proxy)
  • 2x vDev of RaidZ1 with 4x 1TB WD Black for additional HD media + render (HD footage at 185 Mb/s)
  • OmniOS + latest napp-it
  • Switch 8x 1 Gb ports Cisco SMB

Pools Rushes and Media are shared with SMB to Windows 7 Pro clients :
  • 1x client accesses these 2 pools over 1x 1 GbE copper: the encoding workstation (NIC buffer: TX 512 / RX 512, MTU 1500)
  • 1x client accesses these 2 pools over 1x 10 GbE SR fiber: the editing workstation (NIC buffer: TX 2048 / RX 2048, MTU 1500)

With converted rushes for draft editing 4k/UHD ProRes Proxy: 151 Mb/s on Rushes pool
-> No files are less than 760 Mb (95 MB)
-> Biggest files are 34 GB
Once converted from original quality 4k/UHD h264 to draft 4k/UHD ProRes Proxy for smooth editing, the Rushes pool is only read from.
The Media pool often reads video files at 185 Mb/s and sometimes writes at this bitrate too.

The Rushes pool is constantly accessed to open, read and edit video: play, accelerated play forward, play backward, accelerated play backward ...

The thing is: with this kind of files, from 95 MB to 34 GB,
-> once loaded, reading video files in the editing software is OK
BUT
-->> loading / opening one file is laggy
-->> opening a timeline with 10 minutes of edited footage (about 120 to 200 clips) takes a while !!

I tried with MTU > 1500 but it's worse :/

So I'm wondering: could tuning ZFS prefetch and adding a 256 GB SSD for L2ARC with 128 GB over-provisioning (set via hdparm) reduce the latency of accessing these video files?

What would be the best solution for video editing as described?

Cheers.

ST3F.
 
You say performance is OK once the files are loaded, which means the RAM-based read cache is large enough; a larger cache (RAM-based or via SSD) can help if you do not power off the server. As your files can exceed 32 GB, more RAM would be better than an SSD cache, but this would require a socket 2011 board.

If you open a project, this is slow:
The reason may be that you do not read one large file sequentially. Your workload with video editing is reading/writing in parallel, which means that I/O performance becomes relevant. A pool built from 4 RAID-Z1 vdevs has the I/O of four disks (around 400 IOPS). If you build your pool from 6 x mirror vdevs, you have around the same sequential performance but 50% better I/O performance.

Using jumbo frames may help with sequential performance, especially on 10 GbE, not with I/O.

Using iSCSI with a large blocksize (64/128 KB) may also improve performance, as may NFS (you need a good Windows client).

Best would be using SSD-only pools, as they offer a ~1000x better IOPS rate.
I just ordered some SanDisk Extreme Pro 960 GB for my next storage for video editing.
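For comparison, a pool of striped mirrors is simply a list of two-disk mirror vdevs; a sketch with placeholder pool and disk names (twelve disks give six mirrors, i.e. roughly six disks' worth of IOPS):

Code:
zpool create rushes \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0 \
  mirror c1t6d0 c1t7d0 \
  mirror c2t0d0 c2t1d0 \
  mirror c2t2d0 c2t3d0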
 
Indeed, an SSD pool would be the best, but 16 TB / project... 12 projects in 2015... too expensive :(

RAID-Z was planned to match the best-known shared storage system: an Avid ISIS 5000 is in fact 3x RAID-5 with 4 drives, so we decided to match that same specification with ZFS...

Today the server is in production.
The next will be on socket 2011 (Xeon 2643 + Supermicro X9SRL + 256 GB RAM with HGST 4 TB SAS).

For now, I'm looking for a way to improve performance with the Xeon 1230v2 + 32 GB RAM.

When a 10 min sequence is opened, about 150 clips from 95 MB to 34 GB are loaded: each entire clip is loaded, not only the part between the IN / OUT points... so when this kind of sequence is opened, the amount of loaded data is more than 32 GB.

iSCSI is not a shared storage solution without a metadata controller (such as Quantum StorNext, EditShare, Tiger metaSAN...).

Looking into prefetch, I was thinking it could be worth digging into, together with a decent L2ARC... don't you think?

What editing software do you use?

Cheers.

ST3F
 
There is nothing wrong with RAID-5/Z1, as it offers good sequential performance and more usable capacity than mirrors. But if your workload is I/O sensitive, a pool built from more mirror vdevs than Z1 vdevs is faster, as I/O performance scales with the number of vdevs.

You can extend your RAM read cache (ARC) with an L2ARC SSD and check if it helps. (It does not improve initial loads.)

ps
We use Final Cut, Adobe CC/CS6 and Cinema. I just ordered several Mac Pros (10 GbE) for our students, together with an OmniOS/NFS shared storage (10 TB SSD). This is a new setup for next semester.
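Adding (and later removing) an L2ARC device is non-destructive, so it is easy to try; a sketch with placeholder pool and device names:

Code:
# add the SSD as a cache (L2ARC) device to the existing pool
zpool add rushes cache c3t0d0

# check usage after a while, and take it out again if it does not help
zpool iostat -v rushes
zpool remove rushes c3t0d0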
 
OK, I will try with L2ARC.

Have you never tweaked the ZFS prefetch function for video workloads?

The next system will have OS X 10.9 clients with Premiere CC 2014, but over SMB: it seems AFP and NFS are having issues with Mavericks...

Cheers.

ST3F.
 
That is my usual strategy too with my filer and backup systems.
I started with a pool built from 2 x RAID-Z2 of 1 TB disks.
After some time, when I needed more space, I replaced all 1 TB disks in one vdev with 2 TB disks.
The next step is to replace the other 1 TB disks in the second vdev with 3 or 4 TB disks.

No problem, besides that the pool is unbalanced, so sometimes your performance is only the same as with one RAID-Z vdev, but mostly it is better and mostly not a limitation.

How do you get your new space? I just upgraded one of my two RAID-Z2 vdevs to 4 TB. Does it need to be taken offline and online, or have I misunderstood you?
 
OK, I will try with L2ARC.

Have you never tweaked the ZFS prefetch function for video workloads?

The next system will have OS X 10.9 clients with Premiere CC 2014, but over SMB: it seems AFP and NFS are having issues with Mavericks...

Cheers.

ST3F.

I expect prefetch can help, but only a little and with smaller files.

About SMB on Macs:
The SMB implementation on current Macs is lousy with SMB1, as it is up to 40% slower than NFS on Macs or SMB1 on Windows. SMB2/3 is much faster on Macs, but currently you need either the newest SAMBA or NexentaStor (based on Illumos like OmniOS, but they updated SMB to 2.1; this is currently not included in Illumos).

Therefore I will try NFS first. There are some hints around for 10.9, and hopefully 10.10 gets better.
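For reference, both protocols are just ZFS share properties on OmniOS, so switching a filesystem between NFS and SMB for testing is a one-liner each way (filesystem name is a placeholder):

Code:
# share over NFS
zfs set sharenfs=on tank/video

# or over the kernel SMB server (SMB1 on current Illumos)
zfs set sharesmb=on tank/video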
 