OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

It depends on the ZFS version.

If you want to import a pool,
its version must be supported by the target system.

Solaris Express 11: ZFS v31
OpenIndiana: ZFS max v28
ZFSguru beta: ZFS max v28
Nexenta: ZFS max v26
FreeBSD: ZFS max v14/15

You can import a pool up to the version supported by the target.
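
If you are unsure what the target system supports, a quick check before moving the disks might look like this (the pool name 'tank' is just an example):

zpool upgrade -v       # lists the pool versions this system supports
zpool import           # lists pools that are visible but not yet imported
zpool import tank      # works only if the pool version is <= the supported version
zpool get version tank # shows the version of the imported pool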



Gea
 
I am playing around with a 32 x 2 TB drive chassis (actually a terrible 3ware 9750 controller that can only export 32 units, and I am playing with single disks). But napp-it / OpenIndiana is only seeing 16 drives? Is there a fix, something I need to change to get it to detect all drives?
Thanks!

Did you use an expander?
In any case, you should not use a hardware RAID controller with ZFS.
I would suggest trying another controller;
best are HBA types in IT mode with LSI 1068 or 2008 chipsets.

Gea
 
Did you use an expander?
In any case, you should not use a hardware RAID controller with ZFS.
I would suggest trying another controller;
best are HBA types in IT mode with LSI 1068 or 2008 chipsets.

Gea

Oh, I know all of this, but this is what I had on hand to experiment with :).

Yes, I used an expander, but I don't think that is the issue. There is a proper driver for the 9750 that exports the units you create in the card's BIOS/web interface. I've made 32 units of single disks, just so I could have some drives to experiment with. But for some reason, napp-it only sees the first 16?

Another question: a lot of people here talk about the 4K drive issues with ZFS, and that there is a setting that needs to be changed. Will it default to the proper setting if one creates vdevs with 4K drives in napp-it?

And do the rules for/against 4K drives apply per pool or per vdev?

Thanks a lot!
 
Yes, I used an expander, but I don't think that is the issue. There is a proper driver for the 9750 that exports the units you create in the card's BIOS/web interface. I've made 32 units of single disks, just so I could have some drives to experiment with. But for some reason, napp-it only sees the first 16?

napp-it shows the drives reported by the OS.
I suppose it is either a driver or a cabling problem.
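
To see what the OS itself detects, independent of napp-it, the standard Solaris/OpenIndiana tools can be used (just a sketch of the usual checks):

format       # interactive; lists every disk the OS sees, quit without selecting one
iostat -En   # prints one identity/error block per attached device
cfgadm -al   # shows controller attachment points and what is connected to them

If only 16 of the 32 units show up there as well, the limit is below napp-it, i.e. in the driver, expander or cabling.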

Another question: a lot of people here talk about the 4K drive issues with ZFS, and that there is a setting that needs to be changed. Will it default to the proper setting if one creates vdevs with 4K drives in napp-it?

And do the rules for/against 4K drives apply per pool or per vdev?

Thanks a lot!

About 4K drives on *Solaris*:

I expect that in the future all large disks will be 4K.
They make better use of space on large disks with large files.

Currently, 4K disks are only used in low-power, low-priced disks,
not in high-speed, 24/7 or enterprise-ready disks.

So if you use them now, you do it for high capacity at a low price, not
for performance reasons. Currently, creating properly aligned pools is not
officially supported on Solaris-based systems. You can expect it in the next
major updates as a ZFS property.

If you want to try it now, you could either import such a pool from FreeBSD
or use the modified zpool command from:
http://digitaldj.net/2010/11/03/zfs-zpool-v28-openindiana-b147-4k-drives-and-you/
http://wiki.openindiana.org/oi/SATA+-+Advanced+Format+and+4K+Sector+drives

(I have not tested it myself; do it at your own risk. It may be dangerous like any new code,
but only if someone tries it can it become stable. Do not try it with critical data.)

You can expect about 10% better sequential write values with about the same
read performance, according to the benchmark on that page. That is far less than
what you can get with faster drives, a RAID-10 instead of a RAID-Z,
or a proper number of disks in a RAID-Z config.
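
For reference, the usual way to do this on the FreeBSD side (the "import such a pool from FreeBSD" route above) is the gnop workaround. This is only a sketch from memory, device names are examples, and I have not verified it myself:

gnop create -S 4096 /dev/ada0                # creates /dev/ada0.nop reporting 4K sectors
gnop create -S 4096 /dev/ada1
zpool create tank mirror ada0.nop ada1.nop   # the pool is created with ashift=12
zpool export tank
gnop destroy ada0.nop ada1.nop
zpool import tank                            # the ashift stays at 12; the pool can later be imported
                                             # on OpenIndiana if the pool version allows it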

Usually I would say:
you can use 4K drives without problems,
but avoid them in high-performance use cases.

My general opinion:
- if you want to win a car race, 1% better performance can do it
- if you use a computer, you need about 30% better values to say:
ah, I can feel it


Gea
 
Also, re: the 4K drives (this is based on my understanding of some information sub.mesa posted in another thread, so hopefully someone will correct me if I am wrong):

There is an ashift stored with each vdev, but also a global ashift for the pool. If your pool was created with an ashift of 9, then you will typically have issues adding a vdev with an ashift of 12. If your pool has an ashift of 12, you can add vdevs with ashifts of both 9 and 12. So if you plan on expanding your pool in the future with 4K-sector drives, you really need to create it with an ashift of 12 to start with, or plan on a full migration.

(at least that's how the situation currently looks to me)
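
If you want to see what a pool actually got, zdb dumps the cached pool configuration, which includes one ashift line per top-level vdev (the exact output layout may differ between releases, so treat this as a sketch):

zdb | grep ashift
# ashift: 9    -> 512-byte aligned vdev
# ashift: 12   -> 4K aligned vdev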
 
Now that the cheap 512-byte-sector 5K3000s are out, there is really no reason to use drives with 4K sectors emulating 512-byte ones. There are ways, as you point out, to make the 4K emulation work, but why bother with the F4s or the EARS drives if you can get a cheap, cool 2 TB drive like the 5K3000? It's just not worth the hassle.
 
And since I'm a GUI man I like GUIs. I'm not totally lost when it comes to the terminal, but I'm not a genius either. So as a GUI man, which operating system does it best?

MenloWare provides a (non-free) multi-system GUI for NexentaStor and OpenSolaris.
 
Gea, I know you recommend putting napp-it+OS/OS/SE on top of ESXi for an all-in-one. Unfortunately, the AMD processor/motherboard I have does not support PCI passthrough, and performance is abysmal without it :( I am trying to go the other way, i.e. install OI with a GUI, install napp-it on that, and install VirtualBox on the OI+GUI. From a quick test, the VM performance is pretty good. What do you think?
 
A little confused about setting up an iSCSI target using napp-it. I have a target for my Win7 box and another for my Ubuntu server. I created target groups and host groups as indicated. I created views too. Unfortunately, my Win7 box can still see (and connect to) the Ubuntu target. I thought the whole point of the views/host groups was to restrict who could connect to what? I assume I can use CHAP secrets to protect the two targets, but I didn't think I needed to?
 
A little confused about setting up an iSCSI target using napp-it. I have a target for my Win7 box and another for my Ubuntu server. I created target groups and host groups as indicated. I created views too. Unfortunately, my Win7 box can still see (and connect to) the Ubuntu target. I thought the whole point of the views/host groups was to restrict who could connect to what? I assume I can use CHAP secrets to protect the two targets, but I didn't think I needed to?

I have not done it for some time, but you could:

restrict by IP (two NICs needed) and two target portal groups, or
restrict by initiator name (iqn.1991-05.com.microsoft:vm4.mydomain.de) and host groups

Google for "iscsi comstar" to get more information.
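
My guess is that there is still a view on the Ubuntu LU without a host group attached; a view without a host group means "all hosts", which would explain why the Win7 box can still connect. The COMSTAR commands for the initiator-name route are roughly as follows (a sketch from memory; group names, IQNs and LU GUIDs are placeholders):

stmfadm create-hg win-hosts
stmfadm add-hg-member -g win-hosts iqn.1991-05.com.microsoft:win7box
stmfadm create-hg ubuntu-hosts
stmfadm add-hg-member -g ubuntu-hosts iqn.1993-08.org.debian:01:ubuntuserver
stmfadm list-view -l <lu-guid>        # check for a view entry with host group "All"
stmfadm remove-view -l <lu-guid> 0    # remove the unrestricted default view entry
stmfadm add-view -h win-hosts <win7-lu-guid>
stmfadm add-view -h ubuntu-hosts <ubuntu-lu-guid>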

Gea
 
Gea, I know you recommend putting napp-it+OS/OS/SE on top of ESXi for an all-in-one. Unfortunately, the AMD processor/motherboard I have does not support PCI passthrough, and performance is abysmal without it :( I am trying to go the other way, i.e. install OI with a GUI, install napp-it on that, and install VirtualBox on the OI+GUI. From a quick test, the VM performance is pretty good. What do you think?

It depends.

1.
If you virtualize with a type-2 hypervisor like VirtualBox, VMware Workstation or Fusion on
top of a full-featured OS, it's OK for smaller or test installations, or if you mainly need the
base OS, e.g. you have a Mac and need a Windows app from time to time.

Problem: large overhead for the full-featured base OS, plus reduced stability, reduced features
or reduced overall performance compared with 2.

2.
Advantages of a type-1 bare-metal hypervisor like Xen or ESXi:
- minimal CPU/RAM overhead, < 100 MB in size; it's more like an extended BIOS or firmware,
so nearly all CPU and RAM is available to guests
- better stability due to less code, optimized drivers, better guest support
- CPU and RAM assignment to guests, overcommitment of RAM
- direct hardware access (pass-through of USB, NICs, storage adapters)
- much better management, move or clone capabilities

If you are looking for an enterprise-ready pro solution, it's always 2.

Gea
 
I have not done it for some time, but you could:

restrict by IP (two NICs needed) and two target portal groups, or
restrict by initiator name (iqn.1991-05.com.microsoft:vm4.mydomain.de) and host groups

Google for "iscsi comstar" to get more information.

Gea

I specifically DID put the initiator name in, but it still allowed me to connect. oh well...
 
Hi,

I have an all-in-one ESXi / OpenIndiana test server, but I am not happy with the speed.


ESXi / OpenIndiana server:

Supermicro X8DTH-6F mobo, 2 x Intel Xeon E5520, 24 GB ECC memory
Supermicro SC846E26-R1200B chassis with dual-port 6 Gb/s SAS expander backplane
24 x ST32000444SS 2 TB SAS drives
1 x LSI 9211-8i 6 Gb/s SAS HBA
2 x Intel X25-E 32 GB SSDs, mirrored, for the ZIL (write log)
1 x 120 GB 6 Gb/s SSD (read cache)
1 x QLE2562 8 Gb/s Fibre Channel HBA

ESXi 4.1 and OpenIndiana are installed on an external RAID1 eSATA array, connected to the motherboard on an AHCI SATA port.

OpenIndiana has 3 pass-through devices: the LSI 9211-8i, the onboard LSI SAS2008, and the QLE2562 8 Gb/s Fibre Channel HBA:


a) LSI 9211-8i (flashed with the latest IT firmware from LSI), connecting to the dual SAS expander containing the 24 x 2 TB ST32000444SS 6 Gb/s SAS drives, with 2 internal mini-SAS cables.

Currently I have two zfs pools:

tank ONLINE
mirror-0 ONLINE
c6t5000C5002624816Bd0
c6t5000C50026248D73d0
mirror-2 ONLINE
c6t5000C50026092163d0
c6t5000C50026093A3Fd0
mirror-3 ONLINE
c6t5000C50026097EF3d0
c6t5000C500260A1343d0
mirror-4 ONLINE
c6t5000C500260E6FEBd0
c6t5000C50026105CBBd0
mirror-5 ONLINE
c6t5000C500261D3BBFd0
c6t5000C5002622589Fd0
mirror-6 ONLINE
c6t5000C50026225967d0
c6t5000C500262276F7d0
mirror-7 ONLINE
c6t5000C5002622795Bd0
c6t5000C50026227D57d0


tank2 ONLINE
mirror-0 ONLINE
c6t5000C50025E9630Fd0
c6t5000C5002609145Fd0
mirror-1 ONLINE
c6t5000C5002619E457d0
c6t5000C500261BFDC3d0
mirror-2 ONLINE
c6t5000C500261C738Fd0
c6t5000C500261D040Bd0
mirror-3 ONLINE
c6t5000C500261D190Bd0
c6t5000C500261D3A0Bd0
logs
mirror-4 ONLINE
c6t500151795946773Ad0
c6t500151795946FA36d0
cache
c6t500A075102FEE8A4d0


b) SAS HBA on the mobo (LSI SAS2008), connecting the mirrored ZIL Intel X25-E SSDs and the single 120 GB SSD cache drive of the tank2 pool.

The SSDs are on a separate SAS2008 controller: the Intel SSDs are SATA drives, and only the cache drive is 6 Gb/s while the Intel SSDs are 3 Gb/s. Keeping them on a separate controller lets me avoid mixing SAS and SATA and different-speed drives in the expander.

A 1 TB slice of tank is mounted as NFS storage under ESXi, using the 10 Gb/s vmxnet3 NICs connecting to a virtual switch.


c) The QLE2562 8 Gb/s Fibre Channel HBA is also connected to OpenIndiana with DirectPath. Using COMSTAR, two 1 TB slices are set up and assigned to the fibre card, one from the tank pool and one from the tank2 pool.

A slice from each pool is also shared via SMB for testing.

TEST1

Two COMSTAR ZFS LUNs mounted as volumes under a separate, standalone Windows 2008 box with an 8 Gb/s fiber card.

CrystalDiskMark (Read / Write)

1000MB F drive tank2 SAN

549.4 270.8 seq
485.3 230.7 512k
49.59 13.25 4K
248.7 12.94 4K QD32


4000MB F drive tank2 SAN

556.9 303.5 seq
516.1 107.7 512k
54.54 12.98 4K
296.8 19.53 4K QD32


IOMeter 16 workers

Mix of 4K, 8K, 16K, 32K random read/write, sequential

LUN on tank

Total I/O p/s: 1376.04
Total MBs p/s: 43.00
Aver. I/O resp. time: 23.9749 ms
Max. I/O resp. time: 471.2918 ms
CPU util. total: 4.36%
Error: 0


These numbers don't seem very good.

Copying an 8 GB file from the standalone Win2008 server to the LUN mounted over the 8 Gb/s fiber connection starts at 1200 MB/s, then drops continuously to 150 MB/s by the time the file is copied over.

Very similar results when copying from tank or tank2 to Win2008.

There is very little difference between copying to tank or tank2, although tank2 has an SSD ZIL and read cache.

In comparison, the Win2008 box has another volume, mounted on a different SAN LUN (not the ZFS server) over a 4 Gb/s fiber link, and copying the same file there gives a steady 450 MB/s.

Here is the CrystalDiskMark test for that other SAN LUN volume:

1000MB X: drive, 4 Gb/s FC SAN (Read / Write)

149.2 250.6 seq
48.88 32.49 512k
0.628 0.287 4K
4.445 0.116 4K QD32

My first dilemma: why do I have continuously declining speed on the ZFS server LUN, while the speed remains constant on a different SAN?



TEST2 over GbE copper, SMB

Benchmarks:

1000MB W:\\san1 SMB tank, 2 Gb/s connection

34.95 95.62 seq
37.62 84.92 512k
5.142 4.038 4K
61.82 27.21 4K QD32

Copying the same 8 GB file over SMB to tank shows a steady 105-110 MB/s.
This speed did not change after aggregating two GbE network cards on the Windows 2008 server: static EtherChannel, no LACP, load balancing with IP hash on the ESXi side.


I am not happy with the speed, especially over the 8 Gb/s fiber, and I cannot explain why the initially good speed keeps declining during copying, while over SMB the speed is maintained.

Thanks!
 
Copying the same 8 GB file over SMB to tank shows a steady 105-110 MB/s. This speed did not change after aggregating two GbE network cards on the Windows 2008 server: static EtherChannel, no LACP, load balancing with IP hash on the ESXi side.

Your TCP packets have to stay in order, so for a single stream you aren't going to get a bandwidth increase from any of the 802.3ad aggregation techniques. Theoretically you could break them down and reassemble them (this is how 10Gb works on a protocol level), but I don't know of any solution that does that over multiple 1Gb links. Multiple operations from the same server won't increase bandwidth either, unless you have multiple IPs (with IP-hash-based balancing) and the streams originate from different IPs. What you do get is increased bandwidth when multiple clients connect.


What do you get from

iostat -xcn 5

when copying a big file over the FC link? Also, if you read back that large file you copied over the FC link, do you see the same tailing-off in performance? (If you can, copy it to a RAM drive locally so you aren't bottlenecking there.)

Also, is that 120 GB drive (your ZIL) a 510 or an X25? You may be capped by the write speed of your SSD.
 
a) LSI 9211-8i (flashed with the latest IT firmware from LSI), connecting to the dual SAS expander containing the 24 x 2 TB ST32000444SS 6 Gb/s SAS drives, with 2 internal mini-SAS cables.


Which ports on the expander are you connecting your HBA cables to?
 
Random.nick: how many threads are you going to cross post this question to? You do realize that this is considered rather anti-social behavior, right?
 
Random.nick: how many threads are you going to cross post this question to? You do realize that this is considered rather anti-social behavior, right?

Your question seems to assume he knows this is anti-social. It might have been less confrontational to just say "it is generally considered antisocial to cross-post..."
 
Thank you for your quick reply.

Here is iostat while copying a 3 GB file to a RAM disk on the Win2008 box over the 8 Gb/s fiber, from each pool (I hope I will figure out how to post an attached file here next time):


cpu
us sy wt id
2 5 0 94
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.1 1.0 7.6 7.8 0.0 0.0 0.0 3.8 0 0 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.4 2.1 44.2 246.9 0.0 0.0 0.0 10.8 0 0 c6t5000C500261BFDC3d0
25.7 59.0 3278.8 6591.5 0.0 1.1 0.0 12.8 0 21 c6t5000C50026097EF3d0
25.9 59.1 3304.5 6591.5 0.0 1.1 0.0 12.7 0 21 c6t5000C500260A1343d0
25.4 57.0 3241.0 6477.2 0.0 1.2 0.0 14.9 0 23 c6t5000C50026248D73d0
25.9 58.9 3309.0 6580.8 0.0 1.1 0.0 12.4 0 21 c6t5000C50026092163d0
26.4 62.7 3371.7 7037.4 0.0 1.1 0.0 12.6 0 21 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
26.5 62.6 3386.7 7037.5 0.0 1.1 0.0 12.5 0 21 c6t5000C50026225967d0
0.4 2.1 44.4 246.9 0.0 0.0 0.0 10.9 0 0 c6t5000C5002619E457d0
26.4 62.0 3369.9 7001.9 0.0 1.1 0.0 12.7 0 22 c6t5000C50026227D57d0
26.3 62.0 3350.9 7001.9 0.0 1.1 0.0 12.8 0 22 c6t5000C5002622795Bd0
25.9 59.3 3300.8 6626.7 0.0 1.2 0.0 14.7 0 23 c6t5000C50026105CBBd0
25.3 57.0 3235.6 6477.2 0.0 1.2 0.0 15.1 0 23 c6t5000C5002624816Bd0
0.4 2.1 44.6 248.5 0.0 0.0 0.0 10.9 0 0 c6t5000C500261D040Bd0
0.4 2.1 44.3 246.6 0.0 0.0 0.0 10.5 0 0 c6t5000C500261D190Bd0
0.4 2.1 44.1 246.6 0.0 0.0 0.0 10.3 0 0 c6t5000C500261D3A0Bd0
26.0 59.5 3322.5 6626.7 0.0 1.2 0.0 13.6 0 23 c6t5000C500260E6FEBd0
0.4 2.1 43.8 247.0 0.0 0.0 0.0 10.7 0 0 c6t5000C5002609145Fd0
0.3 2.1 43.7 247.0 0.0 0.0 0.0 10.6 0 0 c6t5000C50025E9630Fd0
0.4 2.1 44.7 248.5 0.0 0.0 0.0 10.9 0 0 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
26.6 63.1 3393.3 7078.5 0.0 1.1 0.0 12.4 0 21 c6t5000C5002622589Fd0
26.5 63.0 3379.6 7078.5 0.0 1.1 0.0 12.8 0 22 c6t5000C500261D3BBFd0
25.9 58.9 3307.1 6581.0 0.0 1.1 0.0 13.0 0 21 c6t5000C50026093A3Fd0
0.0 1.2 0.0 81.8 0.0 0.0 0.0 3.7 0 0 c6t500151795946773Ad0
0.0 1.2 0.0 81.8 0.0 0.0 0.0 3.8 0 0 c6t500151795946FA36d0
0.2 1.9 18.4 242.8 0.0 0.0 0.0 7.0 0 0 c6t500A075102FEE8A4d0


us sy wt id
2 0 0 98
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.4 29.6 25.6 247.0 0.0 0.1 0.0 1.8 0 1 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261BFDC3d0
0.0 14.4 0.0 99.2 0.0 0.0 0.0 0.7 0 0 c6t5000C50026097EF3d0
0.0 14.0 0.0 99.2 0.0 0.0 0.0 0.7 0 0 c6t5000C500260A1343d0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026248D73d0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026092163d0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026225967d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002619E457d0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026227D57d0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002622795Bd0
0.0 14.0 0.0 99.1 0.0 0.0 0.0 0.7 0 0 c6t5000C50026105CBBd0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002624816Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D040Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D190Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D3A0Bd0
0.0 14.8 0.0 99.1 0.0 0.0 0.0 0.7 0 0 c6t5000C500260E6FEBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002609145Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50025E9630Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
0.0 12.4 0.0 96.9 0.0 0.0 0.0 0.7 0 0 c6t5000C5002622589Fd0
0.0 12.6 0.0 96.9 0.0 0.0 0.0 0.7 0 0 c6t5000C500261D3BBFd0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026093A3Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946773Ad0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946FA36d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500A075102FEE8A4d0


extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.0 13.2 0.0 35.7 0.0 0.0 0.0 1.0 0 0 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
11.0 0.0 1395.2 0.0 0.0 0.2 0.0 21.2 0 5 c6t5000C500261BFDC3d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026097EF3d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500260A1343d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026248D73d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026092163d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026225967d0
13.0 0.0 1664.0 0.0 0.0 0.3 0.0 19.5 0 5 c6t5000C5002619E457d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026227D57d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002622795Bd0
0.0 0.4 0.0 0.8 0.0 0.0 0.0 0.1 0 0 c6t5000C50026105CBBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002624816Bd0
13.6 0.0 1728.0 0.0 0.0 0.3 0.0 22.3 0 6 c6t5000C500261D040Bd0
13.6 0.0 1740.8 0.0 0.0 0.3 0.0 23.9 0 5 c6t5000C500261D190Bd0
15.0 0.0 1920.0 0.0 0.0 0.3 0.0 19.7 0 5 c6t5000C500261D3A0Bd0
0.0 0.4 0.0 0.8 0.0 0.0 0.0 0.1 0 0 c6t5000C500260E6FEBd0
10.6 0.0 1332.4 0.0 0.0 0.3 0.0 28.0 0 6 c6t5000C5002609145Fd0
13.0 0.0 1664.0 0.0 0.0 0.3 0.0 23.0 0 6 c6t5000C50025E9630Fd0
16.4 0.0 2099.3 0.0 0.0 0.4 0.0 26.3 0 7 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002622589Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D3BBFd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026093A3Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946773Ad0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946FA36d0
6.4 0.6 102.4 54.4 0.0 0.0 0.0 0.2 0 0 c6t500A075102FEE8A4d0
cpu


extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.0 25.0 0.0 237.8 0.0 0.0 0.0 1.9 0 1 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
13.8 0.0 1766.4 0.0 0.0 0.2 0.0 16.8 0 4 c6t5000C500261BFDC3d0
0.0 5.2 0.0 12.7 0.0 0.0 0.0 0.5 0 0 c6t5000C50026097EF3d0
0.0 5.2 0.0 12.7 0.0 0.0 0.0 0.4 0 0 c6t5000C500260A1343d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026248D73d0
0.0 1.2 0.0 0.8 0.0 0.0 0.0 0.4 0 0 c6t5000C50026092163d0
0.0 10.2 0.0 86.4 0.0 0.0 0.0 0.7 0 0 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
0.0 10.0 0.0 86.4 0.0 0.0 0.0 0.7 0 0 c6t5000C50026225967d0
15.2 0.0 1945.6 0.0 0.0 0.2 0.0 14.8 0 5 c6t5000C5002619E457d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026227D57d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002622795Bd0
0.0 14.4 0.0 99.0 0.0 0.0 0.0 0.7 0 0 c6t5000C50026105CBBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002624816Bd0
13.4 0.0 1702.4 0.0 0.0 0.2 0.0 14.4 0 4 c6t5000C500261D040Bd0
14.6 0.0 1868.8 0.0 0.0 0.2 0.0 14.0 0 4 c6t5000C500261D190Bd0
15.0 0.0 1844.4 0.0 0.0 0.3 0.0 18.5 0 5 c6t5000C500261D3A0Bd0
0.0 14.0 0.0 99.0 0.0 0.0 0.0 0.7 0 0 c6t5000C500260E6FEBd0
11.6 0.0 1484.8 0.0 0.0 0.2 0.0 17.0 0 4 c6t5000C5002609145Fd0
14.2 0.0 1817.6 0.0 0.0 0.3 0.0 19.3 0 5 c6t5000C50025E9630Fd0
16.0 0.0 2035.2 0.0 0.0 0.3 0.0 16.9 0 5 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
0.0 11.2 0.0 96.0 0.0 0.0 0.0 0.7 0 0 c6t5000C5002622589Fd0
0.0 11.4 0.0 96.0 0.0 0.0 0.0 0.8 0 0 c6t5000C500261D3BBFd0
0.0 1.2 0.0 0.8 0.0 0.0 0.0 0.4 0 0 c6t5000C50026093A3Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946773Ad0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946FA36d0
4.8 27.2 76.8 3379.2 0.0 0.2 0.0 6.3 0 2 c6t500A075102FEE8A4d0
cpu


us sy wt id
2 3 0 95
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.0 25.0 0.0 237.8 0.0 0.0 0.0 1.9 0 1 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
13.8 0.0 1766.4 0.0 0.0 0.2 0.0 16.8 0 4 c6t5000C500261BFDC3d0
0.0 5.2 0.0 12.7 0.0 0.0 0.0 0.5 0 0 c6t5000C50026097EF3d0
0.0 5.2 0.0 12.7 0.0 0.0 0.0 0.4 0 0 c6t5000C500260A1343d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026248D73d0
0.0 1.2 0.0 0.8 0.0 0.0 0.0 0.4 0 0 c6t5000C50026092163d0
0.0 10.2 0.0 86.4 0.0 0.0 0.0 0.7 0 0 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
0.0 10.0 0.0 86.4 0.0 0.0 0.0 0.7 0 0 c6t5000C50026225967d0
15.2 0.0 1945.6 0.0 0.0 0.2 0.0 14.8 0 5 c6t5000C5002619E457d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026227D57d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002622795Bd0
0.0 14.4 0.0 99.0 0.0 0.0 0.0 0.7 0 0 c6t5000C50026105CBBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002624816Bd0
13.4 0.0 1702.4 0.0 0.0 0.2 0.0 14.4 0 4 c6t5000C500261D040Bd0
14.6 0.0 1868.8 0.0 0.0 0.2 0.0 14.0 0 4 c6t5000C500261D190Bd0
15.0 0.0 1844.4 0.0 0.0 0.3 0.0 18.5 0 5 c6t5000C500261D3A0Bd0
0.0 14.0 0.0 99.0 0.0 0.0 0.0 0.7 0 0 c6t5000C500260E6FEBd0
11.6 0.0 1484.8 0.0 0.0 0.2 0.0 17.0 0 4 c6t5000C5002609145Fd0
14.2 0.0 1817.6 0.0 0.0 0.3 0.0 19.3 0 5 c6t5000C50025E9630Fd0
16.0 0.0 2035.2 0.0 0.0 0.3 0.0 16.9 0 5 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
0.0 11.2 0.0 96.0 0.0 0.0 0.0 0.7 0 0 c6t5000C5002622589Fd0
0.0 11.4 0.0 96.0 0.0 0.0 0.0 0.8 0 0 c6t5000C500261D3BBFd0
0.0 1.2 0.0 0.8 0.0 0.0 0.0 0.4 0 0 c6t5000C50026093A3Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946773Ad0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946FA36d0
4.8 27.2 76.8 3379.2 0.0 0.2 0.0 6.3 0 2 c6t500A075102FEE8A4d0
cpu

us sy wt id
2 0 0 98
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.0 13.6 0.0 36.7 0.0 0.0 0.0 0.7 0 0 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261BFDC3d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026097EF3d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500260A1343d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026248D73d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026092163d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026225967d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002619E457d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026227D57d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002622795Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026105CBBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002624816Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D040Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D190Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D3A0Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500260E6FEBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002609145Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50025E9630Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
0.0 0.4 0.0 0.8 0.0 0.0 0.0 0.1 0 0 c6t5000C5002622589Fd0
0.0 0.4 0.0 0.8 0.0 0.0 0.0 0.1 0 0 c6t5000C500261D3BBFd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026093A3Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946773Ad0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946FA36d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500A075102FEE8A4d0
cpu
us sy wt id
1 0 0 98

0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.0 13.4 0.0 37.2 0.0 0.0 0.0 0.7 0 0 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261BFDC3d0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026097EF3d0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500260A1343d0
0.0 1.0 0.0 0.8 0.0 0.0 0.0 0.5 0 0 c6t5000C50026248D73d0
0.0 1.2 0.0 0.8 0.0 0.0 0.0 0.4 0 0 c6t5000C50026092163d0
0.0 4.6 0.0 14.7 0.0 0.0 0.0 0.6 0 0 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
0.0 4.6 0.0 14.7 0.0 0.0 0.0 0.6 0 0 c6t5000C50026225967d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002619E457d0
0.0 1.0 0.0 0.8 0.0 0.0 0.0 0.5 0 0 c6t5000C50026227D57d0
0.0 1.0 0.0 0.8 0.0 0.0 0.0 0.5 0 0 c6t5000C5002622795Bd0
0.0 6.6 0.0 17.0 0.0 0.0 0.0 0.6 0 0 c6t5000C50026105CBBd0
0.0 1.0 0.0 0.8 0.0 0.0 0.0 0.5 0 0 c6t5000C5002624816Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D040Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D190Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D3A0Bd0
0.0 6.6 0.0 17.0 0.0 0.0 0.0 0.6 0 0 c6t5000C500260E6FEBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002609145Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50025E9630Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
0.0 6.4 0.0 16.9 0.0 0.0 0.0 0.6 0 0 c6t5000C5002622589Fd0
0.0 6.4 0.0 16.9 0.0 0.0 0.0 0.6 0 0 c6t5000C500261D3BBFd0
0.0 1.2 0.0 0.8 0.0 0.0 0.0 0.4 0 0 c6t5000C50026093A3Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946773Ad0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946FA36d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500A075102FEE8A4d0
cpu
us sy wt id

1 1 0 98
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.0 27.6 0.0 274.2 0.0 0.0 0.0 1.7 0 1 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261BFDC3d0
0.0 1.2 0.0 0.8 0.0 0.0 0.0 0.4 0 0 c6t5000C50026097EF3d0
0.0 1.2 0.0 0.8 0.0 0.0 0.0 0.4 0 0 c6t5000C500260A1343d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026248D73d0
0.0 1.2 0.0 0.8 0.0 0.0 0.0 0.4 0 0 c6t5000C50026092163d0
0.0 13.2 0.0 109.7 0.0 0.0 0.0 0.8 0 0 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
0.0 12.8 0.0 109.7 0.0 0.0 0.0 0.8 0 0 c6t5000C50026225967d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002619E457d0
0.0 3.2 0.0 34.8 0.0 0.0 0.0 0.5 0 0 c6t5000C50026227D57d0
0.0 3.2 0.0 34.8 0.0 0.0 0.0 0.5 0 0 c6t5000C5002622795Bd0
0.0 14.8 0.0 78.0 0.0 0.0 0.0 0.8 0 0 c6t5000C50026105CBBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002624816Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D040Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D190Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D3A0Bd0
0.0 14.4 0.0 78.0 0.0 0.0 0.0 0.7 0 0 c6t5000C500260E6FEBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002609145Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50025E9630Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
0.0 14.6 0.0 111.9 0.0 0.0 0.0 0.8 0 0 c6t5000C5002622589Fd0
0.0 14.4 0.0 111.9 0.0 0.0 0.0 0.7 0 0 c6t5000C500261D3BBFd0
0.0 1.2 0.0 0.8 0.0 0.0 0.0 0.4 0 0 c6t5000C50026093A3Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946773Ad0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946FA36d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500A075102FEE8A4d0
cpu


us sy wt id
1 2 0 96
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.0 24.2 0.0 240.6 0.0 0.0 0.0 1.8 0 1 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261BFDC3d0
4.2 0.0 489.0 0.0 0.0 0.1 0.0 12.0 0 2 c6t5000C50026097EF3d0
2.4 0.0 307.2 0.0 0.0 0.0 0.0 7.8 0 1 c6t5000C500260A1343d0
2.2 0.0 281.6 0.0 0.0 0.0 0.0 9.2 0 1 c6t5000C50026248D73d0
3.4 0.0 422.4 0.0 0.0 0.0 0.0 6.2 0 1 c6t5000C50026092163d0
4.8 0.0 601.6 0.0 0.0 0.0 0.0 9.4 0 1 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
2.8 0.0 358.4 0.0 0.0 0.0 0.0 10.3 0 1 c6t5000C50026225967d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002619E457d0
4.2 0.4 537.6 0.8 0.0 0.1 0.0 11.9 0 2 c6t5000C50026227D57d0
4.4 0.4 550.4 0.8 0.0 0.0 0.0 10.0 0 2 c6t5000C5002622795Bd0
3.8 0.0 486.4 0.0 0.0 0.0 0.0 12.7 0 1 c6t5000C50026105CBBd0
5.0 0.0 640.0 0.0 0.0 0.1 0.0 16.0 0 2 c6t5000C5002624816Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D040Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D190Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D3A0Bd0
2.6 0.0 332.8 0.0 0.0 0.0 0.0 13.3 0 1 c6t5000C500260E6FEBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002609145Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50025E9630Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
3.8 0.0 473.6 0.0 0.0 0.0 0.0 12.8 0 2 c6t5000C5002622589Fd0
4.2 0.0 524.8 0.0 0.0 0.0 0.0 8.5 0 2 c6t5000C500261D3BBFd0
3.8 0.0 486.4 0.0 0.0 0.0 0.0 12.4 0 2 c6t5000C50026093A3Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946773Ad0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946FA36d0
0.0 37.0 0.0 4691.1 0.0 0.3 0.0 7.6 0 3 c6t500A075102FEE8A4d0
cpu
us sy wt id


0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.0 25.0 0.0 240.2 0.0 0.0 0.0 1.0 0 1 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261BFDC3d0
7.4 1.2 947.0 0.8 0.0 0.2 0.0 18.5 0 3 c6t5000C50026097EF3d0
7.4 1.2 947.0 0.8 0.0 0.2 0.0 20.3 0 3 c6t5000C500260A1343d0
4.8 0.2 614.3 0.0 0.0 0.1 0.0 23.9 0 2 c6t5000C50026248D73d0
5.6 0.2 716.7 0.0 0.0 0.1 0.0 12.7 0 2 c6t5000C50026092163d0
7.2 16.0 921.4 138.8 0.0 0.2 0.0 7.1 0 3 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
4.2 16.4 537.5 138.8 0.0 0.1 0.0 3.5 0 2 c6t5000C50026225967d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002619E457d0
7.2 14.8 921.4 136.6 0.0 0.1 0.0 5.6 0 3 c6t5000C50026227D57d0
7.0 14.8 895.8 136.6 0.0 0.2 0.0 8.2 0 3 c6t5000C5002622795Bd0
6.6 1.2 844.7 0.8 0.0 0.1 0.0 18.5 0 3 c6t5000C50026105CBBd0
5.6 0.2 716.7 0.0 0.0 0.1 0.0 21.8 0 2 c6t5000C5002624816Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D040Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D190Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D3A0Bd0
7.6 1.2 972.6 0.8 0.0 0.2 0.0 18.4 0 3 c6t5000C500260E6FEBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002609145Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50025E9630Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
6.0 18.0 767.9 139.7 0.0 0.1 0.0 4.9 0 3 c6t5000C5002622589Fd0
6.6 17.2 820.4 139.7 0.0 0.1 0.0 5.9 0 3 c6t5000C500261D3BBFd0
7.4 0.2 947.0 0.0 0.0 0.2 0.0 20.8 0 3 c6t5000C50026093A3Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946773Ad0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946FA36d0
0.0 38.4 0.0 4914.4 0.0 0.3 0.0 8.1 0 3 c6t500A075102FEE8A4d0
cpu
us sy wt id


1 0 0 98
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.0 28.4 0.0 274.7 0.0 0.0 0.0 1.3 0 1 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261BFDC3d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026097EF3d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500260A1343d0
0.0 0.4 0.0 0.8 0.0 0.0 0.0 0.1 0 0 c6t5000C50026248D73d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026092163d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026225967d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002619E457d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026227D57d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002622795Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026105CBBd0
0.0 0.4 0.0 0.8 0.0 0.0 0.0 0.1 0 0 c6t5000C5002624816Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D040Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D190Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D3A0Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500260E6FEBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002609145Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50025E9630Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002622589Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D3BBFd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026093A3Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946773Ad0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946FA36d0
0.0 0.6 0.0 32.0 0.0 0.0 0.0 0.2 0 0 c6t500A075102FEE8A4d0
cpu
us sy wt id

us sy wt id
1 0 0 98
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.0 16.8 0.0 74.5 0.0 0.0 0.0 0.8 0 0 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261BFDC3d0
0.0 8.8 0.0 112.1 0.0 0.0 0.0 0.7 0 0 c6t5000C50026097EF3d0
0.0 8.6 0.0 112.1 0.0 0.0 0.0 0.7 0 0 c6t5000C500260A1343d0
0.0 8.4 0.0 112.1 0.0 0.0 0.0 0.6 0 0 c6t5000C50026248D73d0
0.0 9.2 0.0 112.1 0.0 0.0 0.0 0.7 0 0 c6t5000C50026092163d0
0.0 8.8 0.0 22.6 0.0 0.0 0.0 0.6 0 0 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
0.0 8.8 0.0 22.6 0.0 0.0 0.0 0.6 0 0 c6t5000C50026225967d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002619E457d0
0.0 6.8 0.0 20.4 0.0 0.0 0.0 0.6 0 0 c6t5000C50026227D57d0
0.0 6.8 0.0 20.4 0.0 0.0 0.0 0.6 0 0 c6t5000C5002622795Bd0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026105CBBd0
0.0 8.4 0.0 112.1 0.0 0.0 0.0 0.6 0 0 c6t5000C5002624816Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D040Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D190Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D3A0Bd0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500260E6FEBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002609145Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50025E9630Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
0.0 8.8 0.0 22.7 0.0 0.0 0.0 0.6 0 0 c6t5000C5002622589Fd0
0.0 9.0 0.0 22.7 0.0 0.0 0.0 0.6 0 0 c6t5000C500261D3BBFd0
0.0 8.8 0.0 112.1 0.0 0.0 0.0 0.7 0 0 c6t5000C50026093A3Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946773Ad0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946FA36d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500A075102FEE8A4d0
cpu
us sy wt id
 
Astronot, the LSI 9211-8i is connected to the PRI_J0 and SEC_J0 connectors on the expander (items 8 and 11 in the documentation, Appendix D, page D-3). Thanks!
 
ChrisBenn, thanks... copying from the ZFS LUN to a RAM drive seems to give a sustained ~400 MB/s.
I might have been bottlenecking on the Win2008 box... I will get another 8 Gb/s fiber card soon that I can install as a storage HBA in an ESXi 4.1 server, and I will run more tests on that.

However, copying from the RAM drive to tank2 (ZIL + cache) still gives declining speed, ending around 110 MB/s; copying to tank (no ZIL + cache) declines to 150 MB/s.

The 120 GB SSD is the cache (L2ARC) device; the write log (ZIL) is the two mirrored Intel X25-Es (32 GB).
 
Astronot, the LSI 9211-8i is connected to the PRI_J0 and SEC_J0 connectors on the expander (items 8 and 11 in the documentation, Appendix D, page D-3). Thanks!


I saw strange performance issues with the ST32000444SS drive and this setup when benchmarking. I'd recommend that you connect the cables to PRI_J0 and PRI_J1 and run the same tests, ignoring the failover (SEC_0x) expander for now and see how it looks.
 
I changed the cables; here are the new benchmarks (and some new ones).

9 4000MB F drive tank2 SAN with ZIL and cache 4 mirrored drives

572.1 305.4 seq vs previous test 556.9 303.5
515.5 250.8 512k 516.1 107.7
35.09 11.65 4K 54.54 12.98
300.3 10.71 4K QD32 296.8 19.53

Please note that I get somewhat different results when running the same test again:

576.4 306.6 seq
524.2 229.8 512k
42.39 5.451 4K
303.2 11.09 4K QD32



5 100MB F drive tank2 SAN with ZIL and cache 4 mirrored drives

587.1 281.7 seq
524.5 267.4 512k
46.10 25.78 4K
296.9 180.4 4K QD32


9 4000MB D drive tank SAN no ZIL no cache 7 mirrored drives

581.5 341.3 seq
525.7 266.5 512k
44.24 21.54 4K
283.7 19.58 4K QD32


5 100MB D drive tank SAN no ZIL no cache 7 mirrored drives

572.1 364.1 seq
518.3 333.4 512k
51.45 26.22 4K
306.5 206.3 4K QD32
 
Try the same thing, but use a single pool, either filled with 2-drive mirrors or with multiple 8- or 10-drive RAIDZs. Not sure why you want more than one pool: you can organize below the pool level for multiple uses, and (especially with a RAID-10-like mirror layout) performance scales with how many vdevs the pool can stripe across. With a single pool, you can also use the same ZIL and L2ARC cache devices for the entire thing.
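
A sketch of the create command for such a pool, with the log, cache and spare devices attached to the same pool (device names are shortened placeholders, one "mirror diskA diskB" group per pair of drives):

zpool create tank \
  mirror disk01 disk02 \
  mirror disk03 disk04 \
  mirror disk05 disk06 \
  log mirror slog01 slog02 \
  cache l2arc01 \
  spare spare01 spare02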
 
I was just thinking of testing the same thing :)

Here is the new pool:
pool: tank
state: ONLINE
scan: scrub repaired 0 in 5h0m with 0 errors on Wed Mar 16 00:45:21 2011
config:

NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c6t5000C5002624816Bd0 ONLINE 0 0 0
c6t5000C50026248D73d0 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
c6t5000C50025E9630Fd0 ONLINE 0 0 0
c6t5000C5002609145Fd0 ONLINE 0 0 0
mirror-2 ONLINE 0 0 0
c6t5000C50026092163d0 ONLINE 0 0 0
c6t5000C50026093A3Fd0 ONLINE 0 0 0
mirror-3 ONLINE 0 0 0
c6t5000C50026097EF3d0 ONLINE 0 0 0
c6t5000C500260A1343d0 ONLINE 0 0 0
mirror-4 ONLINE 0 0 0
c6t5000C500260E6FEBd0 ONLINE 0 0 0
c6t5000C50026105CBBd0 ONLINE 0 0 0
mirror-5 ONLINE 0 0 0
c6t5000C500261D3BBFd0 ONLINE 0 0 0
c6t5000C5002622589Fd0 ONLINE 0 0 0
mirror-6 ONLINE 0 0 0
c6t5000C50026225967d0 ONLINE 0 0 0
c6t5000C500262276F7d0 ONLINE 0 0 0
mirror-7 ONLINE 0 0 0
c6t5000C5002622795Bd0 ONLINE 0 0 0
c6t5000C50026227D57d0 ONLINE 0 0 0
mirror-8 ONLINE 0 0 0
c6t5000C5002619E457d0 ONLINE 0 0 0
c6t5000C500261BFDC3d0 ONLINE 0 0 0
mirror-9 ONLINE 0 0 0
c6t5000C500261C738Fd0 ONLINE 0 0 0
c6t5000C500261D040Bd0 ONLINE 0 0 0
mirror-10 ONLINE 0 0 0
c6t5000C500261D190Bd0 ONLINE 0 0 0
c6t5000C500261D3A0Bd0 ONLINE 0 0 0
logs
mirror-11 ONLINE 0 0 0
c6t500151795946773Ad0 ONLINE 0 0 0
c6t500151795946FA36d0 ONLINE 0 0 0
cache
c6t500A075102FEE8A4d0 ONLINE 0 0 0
spares
c6t5000C50026228C07d0 AVAIL
c6t5000C5002623690Fd0 AVAIL


And here is the new CrystalDiskMark result:

9 4000MB pool: tank via 8 Gb/s FC

Read Write
567.5 265.4 seq
526.8 255.4 512k
45.55 27.07 4K
304.6 34.58 4K QD32

Running IOMeter now; I will post it later.
 
While running IOMeter, I am noticing something strange: a bunch of drives in the pool don't seem to be doing anything.

iostat -xcn 5

extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261BFDC3d0
139.6 0.0 17869.3 0.0 0.0 3.7 0.0 26.6 0 61 c6t5000C50026097EF3d0
131.4 0.0 16819.7 0.0 0.0 3.6 0.0 27.3 0 61 c6t5000C500260A1343d0
140.6 0.0 17997.3 0.0 0.0 4.5 0.0 32.4 0 73 c6t5000C50026248D73d0
125.8 0.0 16102.9 0.0 0.0 3.4 0.0 27.3 0 59 c6t5000C50026092163d0
134.6 0.0 17229.3 0.0 0.0 3.8 0.0 28.4 0 63 c6t5000C500262276F7d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50026228C07d0
123.8 0.0 15846.9 0.0 0.0 3.4 0.0 27.1 0 57 c6t5000C50026225967d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002619E457d0
129.8 0.0 16614.9 0.0 0.0 3.7 0.0 28.4 0 62 c6t5000C50026227D57d0
125.4 0.0 16051.7 0.0 0.0 3.6 0.0 28.4 0 61 c6t5000C5002622795Bd0
138.2 0.0 17690.1 0.0 0.0 5.0 0.0 36.3 0 76 c6t5000C50026105CBBd0
121.0 0.0 15488.5 0.0 0.0 3.9 0.0 32.0 0 64 c6t5000C5002624816Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D040Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D190Bd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261D3A0Bd0
141.2 0.0 18074.1 0.0 0.0 4.7 0.0 33.1 0 71 c6t5000C500260E6FEBd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002609145Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C50025E9630Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C500261C738Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t5000C5002623690Fd0
133.6 0.0 17101.3 0.0 0.0 3.7 0.0 27.8 0 62 c6t5000C5002622589Fd0
131.6 0.0 16845.3 0.0 0.0 3.9 0.0 29.7 0 64 c6t5000C500261D3BBFd0
138.2 0.0 17690.1 0.0 0.0 3.9 0.0 28.2 0 63 c6t5000C50026093A3Fd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946773Ad0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c6t500151795946FA36d0
0.0 199.8 0.0 25575.2 0.0 1.6 0.0 8.0 0 18 c6t500A075102FEE8A4d0
cpu
us sy wt id
 
Not familiar with OpenSolaris, or Linux for that matter, but I am looking for new NAS software since my WHS bit the dust. I want some sort of fault tolerance and I am considering a ZFS solution. I have two questions:

1. Are port multipliers supported? I know _Gea has mentioned not to use any expansion cards, but I am thinking about buying an HP MicroServer, which only has 5 bays. So if I want to add more drives, it would have to be through a port multiplier.

2. Can you install UPnP servers like the PS3 Media Server in OpenSolaris? How about Squeezebox?
http://code.google.com/p/ps3mediaserver/
http://www.mysqueezebox.com/download
 
Well, here's an idea (speaking only to point #2): if you install OpenIndiana with a GUI (this is basically OpenSolaris), you can install VirtualBox on it and run virtual appliances for things like Squeezebox and PS3 Media Server and such.
 
You can install Solaris 11 with a GUI as well. You can install Squeezebox Server on Solaris, but you will have to build it from source and resolve the dependencies yourself.
 
You can install Solaris 11 with a GUI as well. You can install Squeezebox Server on Solaris, but you will have to build it from source and resolve the dependencies yourself.

Have you got Squeezebox working in Solaris Express 11?

I can install it and it runs; however, it always logs lots of errors and has trouble streaming to devices.
 
Well, here's an idea (speaking only to point #2): if you install OpenIndiana with a GUI (this is basically OpenSolaris), you can install VirtualBox on it and run virtual appliances for things like Squeezebox and PS3 Media Server and such.

I will have to read the MicroServer thread, but the processor might not be strong enough for virtual machines. I will load it up with 8 GB of RAM, which might help, but I don't want read or write performance to be bogged down by the load of virtual machines on the processor. Nevertheless, thanks for the suggestion. I will have to try it out if I go that route.
 
_Gea,

Got a bug for ya: the SMTP username field under the napp-it setup menu needs to be longer. I can't authenticate to my SMTP server because I need more characters to type in the entire email address.

Using smtp-test under jobs, the email field appears to have no character limit (or at least a really long one). If you could replicate those field settings in the one under the napp-it setup menu, that would be fantastic. Thanks in advance.
 
A few questions regarding using SE11 with napp-it in a workgroup:

1. How do I allow Windows clients to connect without having to enter login information, assuming the Windows user has rights to the share?

2. When in domain mode, if you check the permissions tab from a Windows client, you can add domain users. In workgroup mode, I can only see the ZFS server as a location, and the ZFS SMB user can't be found (though the SMB groups can).

3. If I add an SMB user to the SMB administrators group and give them permissions on a share, it's not effective?

Thanks!
 
_Gea,

Got a bug for ya: the SMTP username field under the napp-it setup menu needs to be longer. I can't authenticate to my SMTP server because I need more characters to type in the entire email address.

Using smtp-test under jobs, the email field appears to have no character limit (or at least a really long one). If you could replicate those field settings in the one under the napp-it setup menu, that would be fantastic. Thanks in advance.

OK, fixed in the current 0.415l nightly:
http://napp-it.org/downloads/changelog_en.html

Update via wget... or
update/downgrade from the menu napp-it - updates (available from 0.415j and up).
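
For the wget route, if I remember the installer correctly it is the same one-liner as the initial online install (please check napp-it.org for the current syntax before running it):

wget -O - www.napp-it.org/nappit | perl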

Gea
 
A few questions regarding using SE11 with napp-it in a workgroup:

1. How do I allow Windows clients to connect without having to enter login information, assuming the Windows user has rights to the share?

That's a Windows question, not specific to Solaris.
You can do it by:

- enabling guest access
- using a Windows AD domain
- mapping the share to a drive letter and storing the login information (reconnect at startup); see the sketch below
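
For the third option, the Windows side would look something like this (server, share and user names are placeholders):

net use Z: \\zfsserver\share /user:zfsserver\alice /persistent:yes

or, to let Windows store the credentials itself:

net use Z: \\zfsserver\share /savecred /persistent:yes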

2. When in domain mode, if you check the permissions tab from a Windows client, you can add domain users. In workgroup mode, I can only see the ZFS server as a location, and the ZFS SMB user can't be found (though the SMB groups can).

In workgroup mode, you can only use users created on the server side (Solaris);
you cannot use users created on your Windows machine.
There is no "AD domain light" functionality without creating a domain in Windows,
and there is no "network password".

-> Create your local Windows users also on the server side with the same password,
or use a domain with a common and centralized user database.

3. If I add an SMB user to the SMB administrators group and give them permissions on a share, it's not effective?

Thanks!

If you modify share-level ACLs, you have to restart the SMB service for the change to take effect.
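
Concretely, something like this on the Solaris side (the user name is an example; the pam_smb_passwd step is from memory, please check the Solaris CIFS documentation):

useradd alice
passwd alice     # also generates the SMB password hash, once pam_smb_passwd is enabled in /etc/pam.conf

svcadm restart svc:/network/smb/server    # restart the kernel CIFS server after changing share-level ACLs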


Gea
 
All-In-One performance

In the current napp-it I have added a dd performance test bench
to the bonnie++ option, with a variable block-size option, to compare
values with others made with dd.


Example (done with my All-In-One: OpenIndiana on a SuperMicro X8DTH-6F,
6 VMs running, 4 of 8 cores and 20 of 48 GB RAM assigned to the OpenIndiana VM,
datapool made of 3 vdevs, each a 3-way mirror of 120 GB SandForce SSDs, via pass-through on LSI 1068E/2008)

write 10.24 GB via dd, please wait...
time dd if=/dev/zero of=/pool1/dd.tst bs=1024000 count=10000

10000+0 records in
10000+0 records out

real 19.7
user 0.0
sys 5.5

10.24 GB in 19.7s = 519.80 MB/s Write

read 10.24 GB via dd, please wait...
time dd if=/pool1/dd.tst of=/dev/null bs=1024000

10000+0 records in
10000+0 records out

real 8.7
user 0.0
sys 4.2

10.24 GB in 8.7s = 1177.01 MB/s Read


bonnie++ values:
seq. write: 367 MB/s
seq. read: 1372 MB/s


Both benchmarks were made under medium load from the other VMs.
 
That's a Windows question, not specific to Solaris.
You can do it by:

- enabling guest access
- using a Windows AD domain
- mapping the share to a drive letter and storing the login information (reconnect at startup)



In workgroup mode, you can only use users created on the server side (Solaris);
you cannot use users created on your Windows machine.
There is no "AD domain light" functionality without creating a domain in Windows,
and there is no "network password".

-> Create your local Windows users also on the server side with the same password,
or use a domain with a common and centralized user database.



If you modify share-level ACLs, you have to restart the SMB service for the change to take effect.


Gea

Thanks. I guess my confusion comes from seeing other solutions (like unRAID) that support Windows clients as users without the need to enter a username and password. I could never tell if unRAID was using Samba or CIFS though, so it's probably a Samba feature?
 