OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

It was from within napp-it - Services - XAMPP

It said:

wget -O - www.napp-it.org/xamp | perl

But now that XAMPP is installed, it has been replaced with:

Status:

default Apache service: not running (ok)

XAMPP installed : yes
 
_Gea -

I'm running 0.7b and used the "wget -O - www.napp-it.org/afp | perl" command to install AFP. Everything runs its course and at the end tells me it's installed. I try starting the service in napp-it and then I get the error that netatalk is not installed... Any ideas?


On a side note... is anyone using a WDTV Live to access their NFS/SMB shares and stream content to the media player? For whatever reason, I can see the server from the Live device and can select the ZFS folder; however, it freezes up... Anyone using this setup have issues?
 
The WDTV Live issue is the same as the usual Samba issue with Solaris.

You need to make the name of the machine findable via NetBIOS or DNS. I just added it to my DNS. The WDTV is doing a lookup on the hostname and failing to locate an IP for it.
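
If your router runs dnsmasq (a lot of consumer firmware does), a single static entry is enough. This is just a sketch; the hostname and address are made-up examples:

Code:
# static DNS entry in /etc/dnsmasq.conf on the router (example name/IP)
address=/oiserver/192.168.1.50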
 
Gea,

REF XAMPP

Why is it that when I edit the httpd.conf file within napp-it the changes are not saved? I have tried editing the file directly with Midnight Commander and it still does not save the changes.
 
The WDTV Live issue is the same as the usual Samba issue with Solaris.

You need to make the name of the machine findable via NetBIOS or DNS. I just added it to my DNS. The WDTV is doing a lookup on the hostname and failing to locate an IP for it.

Forgive me, but networking is not my forte =/ Can you point me in the right direction on how I go about doing that? I'll search in the meantime. I'm sure this is somehow related to, or the reason, why I cannot just assign a static IP in the OI VM... however, DHCP works.

EDIT:
Should I follow this:
http://www.cyberciti.biz/tips/linux-how-to-setup-as-dns-client.html

EDIT 2:
When I open my resolv.conf, it's empty...?
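
For comparison, I'd expect something like this in there (the nameserver IP is just an example, normally your router), plus dns on the hosts line of /etc/nsswitch.conf:

Code:
# /etc/resolv.conf
nameserver 192.168.1.1

# /etc/nsswitch.conf (hosts line)
hosts: files dns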
 
Gea,

REF XAMPP

Why is it that when I edit the httpd.conf file within napp-it the changes are not saved? I have tried editing the file directly with Midnight Commander and it still does not save the changes.

If you want to edit manually, you must look for the files under /opt/xampp/etc, or you can update to today's napp-it 0.7c nightly, where this is fixed together with a backup option and virtual server editing.
 
Thank you Gea,

I have updated to 0.7c and tried to edit the httpd.conf file; when I saved it I got this error message:

1489 sub edit_file: file not found
/var/web-gui/data/napp-it/zfsos/02_services/15_XAMPP=-nex/02_Xampp-Configs/
 
Gea,
I know we go round and round on ACLs. It must be the second most discussed topic about ZFS, short of performance. ;) I am wondering if it's possible with Windows Home Premium to use Windows credentials to log into my server instead of it prompting me on the share and typing the same password that I log into Windows with. I can set ACL permissions on the shares with no problems. I just would like to easily click on a folder and not have to retype my password. I'm in a workgroup and I removed the user mapping. I tried mapping my group the way I thought it would work, but no luck. I tried idmap add 'winuser:chad@chad-pc' 'unixuser:root'. My ZFS server name is ZFS-01.
Thanks.
Also, I noticed that the new Solaris says ZFS version 33 has "Improved share support". Any idea what this is supposed to mean? Any better luck using Win Enterprise and setting ACLs?
 
I am wondering if it's possible with Windows Home Premium to use Windows credentials to log into my server instead of it prompting me on the share and typing the same password that I log into Windows with.

Create an account (on the OI box) that has the same username/password as your Windows account. Windows tries to log on with your Windows credentials by default, so this will generally "just work".
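
A minimal sketch of that on OpenIndiana (the username is an example; the pam.conf line is what makes the CIFS service store an SMB password hash when you set the password):

Code:
useradd -m chad
# make sure /etc/pam.conf contains this line BEFORE running passwd:
#   other password required pam_smb_passwd.so.1 nowarn
passwd chad    # use the exact same password as the Windows account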
 
So with SAS2, what is the best way to identify drives? What if I got a Backblaze storage pod? It seems like a mess to identify drives. Seems like you'd have to set up the server, then add drives one at a time and record UUID/location?
 
Create an account (on the OI box) that has the same username/password as your Windows account. Windows tries to log on with your Windows credentials by default, so this will generally "just work".

Figured it out. The usernames are case-sensitive also! WOW.
Thanks guys!
 
So with SAS2, what is the best way to identify drives? What if I got a Backblaze storage pod? It seems like a mess to identify drives. Seems like you'd have to set up the server, then add drives one at a time and record UUID/location?

Gea's latest changes (last few months) work great for this:

* If you have something like a Norco setup with an HD activity light, you can use the identify-via-dd method on the disks page.

* If not, his latest changes to the smartctl configuration now have it reliably identifying drive serial numbers. You can run the SMART page once (it causes transfer errors for me, but no problems otherwise; typically only one TRN/hardware error while invoking), and it now seems to cache the serial number on the disks page. From there you can map each physical disk to its Solaris ID. At that point I just used a Brother P-touch and added some labels.
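
If you'd rather pull the serial numbers straight from the command line, iostat can list them too (output format varies a bit by release):

Code:
iostat -En | egrep '^c[0-9]|Serial No'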
 
So with SAS2, what is the best way to identify drives? What if I got a Backblaze storage pod? It seems like a mess to identify drives. Seems like you'd have to set up the server, then add drives one at a time and record UUID/location?

1. Insert disk by disk and write down the GUID and serial.
2. Use dd detection (works with a disk activity LED; example below).
3. On professional SES backplanes (those with a red alert LED), you may try my monitor extension. Part of it is a SES backplane/slot detection (it shows the physical slot number, e.g. id 500000e010731c30 = controller 2, enclosure 3, slot 18, with the option to switch the red alert LED on for a SAS2 disk).
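
dd detection simply reads one disk flat out so its activity LED stays lit; by hand it looks like this (device path is an example):

Code:
dd if=/dev/rdsk/c3t2d0s0 of=/dev/null bs=1024k count=10000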
 
Yeah, not sure how to do the NetBIOS thing either; the resolv.conf on my OpenIndiana box points at the internet router that both my XBMC machine and the other one are connected to.

Anyway, not sure if the problem is the same. To add shares in XBMC I must enter the location manually, as it won't auto-detect the SMB shares (I'm using NFS now, which it detects fine, plus it's faster, but yeah).

Anyway, the way I managed to make it auto-detect was to run another machine on the network with SMB shares, and then I think this is what I did: "smbd -j WORKGROUP". After that I could browse the SMB share without "add location" (and without AFP on my macOS machine), and XBMC could auto-detect it.

The problem was that if I rebooted etc. I would have to join it again manually. Is there no way for the Solaris SMB server to host its own workgroup so it's automatically visible?
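
For what it's worth, I think the command I was actually using is the OpenIndiana smbadm one; as far as I know that setting persists across reboots, but check on your release (workgroup name is an example):

Code:
smbadm join -w WORKGROUP
svcadm restart smb/server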

Also, I really need some help with hard drives as I'm about to decide: should I get the Hitachi 7K3000, WD EARS or Samsung F4? (The Samsung is about 30 euros cheaper.) I would like decent ZFS speeds (400ish).

Would the Hitachis require a much more powerful power supply? I will be running 30 drives. Also, would they heat up too much, or is there not much difference between the 7200rpm and 5400rpm Hitachi models?

Cheers

Also, XAMPP is a very nice addition :)
 
Got a bit of a weird problem on full Solaris 11 that I didn't get on Express.

When mirroring my rpool with identical disks, I get the following warning:

root@server01:~# prtvtoc /dev/rdsk/c4t0d0s0 | fmthard -s - /dev/rdsk/c5t0d0s0
fmthard: Partition 2 specifies the full disk and is not equal
full size of disk. The full disk capacity is 312528510 sectors.
fmthard: Partition 2 specified as 312560640 sectors starting at 0
does not fit. The full disk contains 312528510 sectors.
fmthard: New volume table of contents now in place.

root@server01:~# prtvtoc /dev/rdsk/c4t0d0s0
* /dev/rdsk/c4t0d0s0 partition map
*
* Dimensions:
* 512 bytes/sector
* 63 sectors/track
* 255 tracks/cylinder
* 16065 sectors/cylinder
* 19456 cylinders
* 19454 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 16065 312512445 312528509
2 5 01 0 312560640 312560639
8 1 01 0 16065 16064

--------------------------------------------------------------------------------------------------

root@server01:~# prtvtoc /dev/rdsk/c5t0d0s0
* /dev/rdsk/c5t0d0s0 partition map
*
* Dimensions:
* 512 bytes/sector
* 63 sectors/track
* 255 tracks/cylinder
* 16065 sectors/cylinder
* 19456 cylinders
* 19454 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 16065 312512445 312528509
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
2 5 01 0 312528510 312528509
8 1 01 0 16065 16064
 
I finally got down to assembling the machine this weekend :). I am not virtualizing anything yet; I just want to set up the home server. Some questions:

(1) How do I make a disk attached to the onboard controller visible?
I have attached four 2TB disks to the IBM BR10i flashed with the LSI IT firmware. I have a fifth disk which I attached to the onboard controller, alongside the boot disk (a 16GB Mtron SLC SSD). When I explore napp-it, the four disks attached to the controller appear in the disk menu, but the fifth disk attached to the onboard controller does not. The disks have not been touched (formatted/partitioned) and are in the state they were in when they came out of the shrink-wrap.

I can see the disk in diskinfo, appearing under a different controller.

I want this to be the spare disk for this pool. I also want to do some benchmarking, so an independent disk will be useful. Is there a way I can enable it so it can be made part of the pool while it is connected to the onboard controller?

[Attached screenshot: NappItDisk.jpg]
 
I finally got down to assembling the machine this weekend :). I am not virtualizing anything yet; I just want to set up the home server. Some questions:

(1) How do I make a disk attached to the onboard controller visible?
I have attached four 2TB disks to the IBM BR10i flashed with the LSI IT firmware. I have a fifth disk which I attached to the onboard controller, alongside the boot disk (a 16GB Mtron SLC SSD). When I explore napp-it, the four disks attached to the controller appear in the disk menu, but the fifth disk attached to the onboard controller does not. The disks have not been touched (formatted/partitioned) and are in the state they were in when they came out of the shrink-wrap.

I can see the disk in diskinfo, appearing under a different controller.

I want this to be the spare disk for this pool. I also want to do some benchmarking, so an independent disk will be useful. Is there a way I can enable it so it can be made part of the pool while it is connected to the onboard controller?

First, I would go into the BIOS and set your onboard SATA controller to AHCI mode, as this may fix the problem.

Also, if you're on OpenIndiana, I found the following commands may be helpful:

In OpenIndiana you can list the newly plugged-in SATA devices with the command

cfgadm
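
The listing looks something like this (illustrative output only; your controller and target numbers will differ):

Code:
Ap_Id                 Type   Receptacle   Occupant      Condition
sata0/0::dsk/c3t0d0   disk   connected    configured    ok
sata0/2               disk   connected    unconfigured  unknown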

They show up as connected but unconfigured, and you run the following command to activate, for example, port sata0/2:

cfgadm -c configure sata0/2

Also, if you want cfgadm to do this automatically, edit your /etc/system and add the following line at the end:

set sata:sata_auto_online=1

Then SATA hot-plug works the same as SAS and you are good to go.


Also, you could try the latest 0.7 preview version of napp-it, as he added new SATA drive handling support.
 
I was trying out AHCI mode, but for some reason OI does not boot with it. I get to GRUB and the OI splash screen comes up, and then the system hangs, requiring a hard power-down. I have turned it back to IDE and can boot.

=============================
I think the AHCI problem may have something to do with the boot drive I am using. It is a 3.5" first-generation SLC drive. Very likely it is not a native SATA implementation but uses some adapter which does not support AHCI properly.
=============================
The logs do not show anything between the two successful boots (I am using the System Log Viewer). I did notice the following error in the logs, though I have no idea what it means.

Code:
Jan 30 17:32:46 OIServer acpica: [ID 190582 kern.notice] ACPI: Executed 1 blocks of module-level executable AML code
Jan 30 17:32:46 OIServer acpica: [ID 191476 kern.notice] ACPI Error (psargs-0464): [RAMB] Namespace lookup failure, AE_NOT_FOUND
Jan 30 17:32:46 OIServer acpica: [ID 912048 kern.notice] Executing subtree for Buffer/Package/Region
Jan 30 17:32:46 OIServer acpica: [ID 334967 kern.notice] ACPI Exception: AE_NOT_FOUND, Could not execute arguments for [RAMW] (Region) (20091112/nsinit-440)
Jan 30 17:32:46 OIServer unix: [ID 190185 kern.info] SMBIOS v2.6 loaded (3553 bytes)
==================

A search on this error revealed this Linux fix
http://thread.gmane.org/gmane.linux.acpi.devel/51405/focus=51407
with some more comments here
http://comments.gmane.org/gmane.linux.kernel/1233771
It seems to be broken "Suspend to RAM" functionality which is being fixed by the ACPICA folks.

Now how do we get this fix into OpenIndiana?
 
Last edited:
A search on this error revealed this Linux fix
http://thread.gmane.org/gmane.linux.acpi.devel/51405/focus=51407
with some more comments here
http://comments.gmane.org/gmane.linux.kernel/1233771
It seems to be broken "Suspend to RAM" functionality which is being fixed by the ACPICA folks.

Now how do we get this fix into OpenIndiana?

Looks to me like this is probably the bug you found, and it will just be happening with your motherboard/BIOS. This error is only likely to disable some of the ACPI power/sleep options for you. The bug is tagged: Fixed in ACPICA version 20111123.
http://www.acpica.org/bugzilla/show_bug.cgi?id=937

So with this version of ACPICA or newer installed you would be fine, but it would need to be merged into the OpenIndiana ACPI kernel package, which I would leave to an OpenIndiana developer.

I found this link http://www.mail-archive.com/[email protected]/msg00951.html with someone doing just this, but it's only a test build, and it was done just before the 20111123 version was released, so it may not include this fix at all yet...

Also note that you may have problems booting if you switch between AHCI and IDE mode without doing a full reinstall from scratch.

Edit: Another idea to try is installing a hypervisor OS like VMware/Xen/KVM; that OS may not have the ACPI problem with your board, and then you can load OpenIndiana as a guest and see if you still get the ACPI bugs. I'm not sure whether some of these VM platforms emulate a standard ACPI BIOS for their VMs or pass through the host BIOS ACPI settings (in which case it will still have the same error).
 
Latent:
1. Very interesting note about switching to AHCI requiring a reinstall. I will try that.
2. Right now, the server hangs when I try to put it in suspend mode. I have all C-states reporting enabled in the BIOS (not Auto). So I think it will never go down to the lower power mode, which is bad.
3. Yes, the ACPICA link is the bug fix; it is the same bug. If I understand the link correctly, I can use the command listed in that post and then I will be able to run the newer ACPI code; I will try it out.
4. I have never done virtualization before, so I am going to keep it simple for the time being.
 
Yo,

So I added 8GB of RAM to my server and have a total of 16GB now, but the read speed is still slow (42MB/s) compared to the write speed (100+MB/s). So I'm still stuck with the same problem!
I would be grateful for some help, as I really want to solve this speed problem!

gr33tz

Supermicro X9SCM-F, 16GB DDR3, Xeon E3-1230, 3x IBM M1015 flashed to LSI IT mode, OpenIndiana latest + napp-it, Win7-64 PC Core i7, Cat-6 cables, Netgear ProSafe gigabit switch
[Attached screenshot: screenshot025tw.png]
 
Yo,

So I added 8GB of RAM to my server and have a total of 16GB now, but the read speed is still slow (42MB/s) compared to the write speed (100+MB/s). So I'm still stuck with the same problem!
I would be grateful for some help, as I really want to solve this speed problem!

gr33tz

Supermicro X9SCM-F, 16GB DDR3, Xeon E3-1230, 3x IBM M1015 flashed to LSI IT mode, OpenIndiana latest + napp-it, Win7-64 PC Core i7, Cat-6 cables, Netgear ProSafe gigabit switch

Adding more memory for a bigger ARC may have little effect on many benchmarks like this, as things like sequential transfers will be limited by network and protocol rather than by whether data comes from disk or from the ARC. You may notice some gains in random reads if the data is cached in the ARC, but it's hard to say how this will work with artificial benchmark workloads. Also note that you're benchmarking with a 2GB dataset, and your ARC before the 8GB memory upgrade would already have been far bigger than that, so the added RAM will mainly help real-world performance when doing small random reads against larger data sets.

If you're using SMB networking there will always be a bit of overhead, but I'm not sure why it's only fast one way like that. You can try testing from other machines if possible. Also, as a way to confirm that it's Windows SMB file sharing you need to look at, you can do a quick test with iSCSI instead.


http://www.windowsnetworking.com/articles_tutorials/Connecting-Windows-7-iSCSI-SAN.html

In napp-it, export an iSCSI LUN and install the above software iSCSI initiator on your Windows machine. Note that you may not be able to use iSCSI for anything other than testing, as it works differently from a normal file share and just gives you access to a virtual hard disk instead.

Also, Windows 7 now has NFS support, so you can test with this as well:

http://sagehacks.wordpress.com/2009/01/21/howto-mount-nfs-shares-under-windows-7/
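
Once the NFS client feature is enabled, mounting is a one-liner from a command prompt (server name and path are examples):

Code:
mount -o anon \\oiserver\tank\share Z: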

Note that NFS may default to turning all writes into sync writes, which may be slow without a high-speed ZFS log device installed, so you may want to test with sync writes off as well for comparison.
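
For that comparison you can flip the sync property on the dataset for the duration of the test (dataset name is an example; don't leave it disabled on data you care about):

Code:
zfs set sync=disabled tank/share   # test run with sync writes off
zfs set sync=standard tank/share   # restore the default afterwards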
 
Yo,

So I added 8GB of RAM to my server and have a total of 16GB now, but the read speed is still slow (42MB/s) compared to the write speed (100+MB/s). So I'm still stuck with the same problem!
I would be grateful for some help, as I really want to solve this speed problem!

gr33tz

Supermicro X9SCM-F, 16GB DDR3, Xeon E3-1230, 3x IBM M1015 flashed to LSI IT mode, OpenIndiana latest + napp-it, Win7-64 PC Core i7, Cat-6 cables, Netgear ProSafe gigabit switch

Try using IOmeter with a 2MB transfer size and a queue depth of 4.

From memory, I only saw 40MB/s with a single IO queue but could max out a gigabit link easily with 4 IOs queued.
 
Thanks for the advice, but I'd like to mention that I get 100MB/s from my QNAP NAS using Samba to the same Win7 machine!
Also, could it be that my test machine slows down because I have one pool consisting of one vdev of 3 SATA-2 7200rpm 1TB drives and one vdev of 3 SATA-3 5400rpm 2TB drives? Both vdevs are RAIDZ.

Thanks
 
Folks:
Is there a way to save the ZFS settings created by napp-it and then port them to a new machine? I want to reinstall OI but was hoping I would not have to recreate all the napp-it stuff for ZFS.
 
Folks:
Is there a way to save the ZFS settings created by napp-it and then port them to a new machine? I want to reinstall OI but was hoping I would not have to recreate all the napp-it stuff for ZFS.

I don't think there is a save/load option in napp-it. However, there is a great feature built into ZFS that makes this easier. Once you reinstall OI, you can connect the same disks from your previous setup and import the existing pool. A lot of the settings are stored in the zpool itself, and they will keep working. All the non-pool-related settings will be lost, though, and you will have to set those up again.

Also, before you do the move, you can export the pool, which disconnects you from it and makes sure it is in a good state to be imported again later. I think that even if you forget to do this and reinstall OI, it will still let you import the existing pool. The export/import feature also allows you to move pools between machines or change to a different ZFS-based OS, as long as it supports the version of the pool (i.e. a pool created with version 28 will work with any OS supporting version 28 or newer).
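
The whole dance is only a couple of commands (pool name is an example):

Code:
zpool export tank    # before the reinstall
zpool import         # after the reinstall: lists importable pools
zpool import tank    # pull the pool back in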
 
Thanks a lot, Latent. Right now the pool is empty, so there is nothing to import/export :). It took me some time to get the SMB shares to work, so I just do not want to mess with something that is working.

I also logged into my CustoMac. I can see the OI server, but I cannot connect to it. I enter the username/password (root), but the login dialog just shakes (which I presume means authentication failed). This is on Lion 10.7.2; the netatalk version is 2.2.1. I know that there were issues with AFP and Lion, and netatalk 2.2.3 (which is out) talks about support for AFP 3.3, but I thought that was just for the Time Machine stuff and not for regular fileserver use.

I then tried to connect to it via Finder. When I try afp://OIServer I get the popup about it not being supported, as seen in this post.
When I try smb://OIServer and then authenticate, it adds the ZFS folder but then says that I cannot view the files because I do not have permissions. Not sure why that happens.
============
^^ Looks like a known issue, as revealed by this post on the OI update.
============
 
OpenIndiana prestable Release
http://wiki.openindiana.org/oi/oi_151a_prestable0+Release+Notes

OS X Lion + SMB is working with this release

Updated miniHowto
http://napp-it.org/doc/downloads/napp-it.pdf

Thanks.
I was able to install it using the commands in the comment section of the wiki.
I am now able to access the SMB share from my Mac.

Just to educate this noob: what is the difference between accessing the folders via AFP vs. SMB?

Also, is there a way I can update the netatalk version to 2.2.2, which seems to fix the Lion AFP issue?
========================================================================
Also, in general, I was wondering how much compression affects performance or results in space savings. If I have pictures and videos, does compression even help at all, unless of course it is a pure raw image?
 
Thanks.
I was able to install it using the commands in the comment section of the wiki.
I am now able to access the SMB share from my Mac.

Just to educate this noob: what is the difference between accessing the folders via AFP vs. SMB?

CIFS/SMB is the default way Windows PCs share files among themselves and with a Windows server. It is quite fast, offers the best support for Windows ACLs, and was usually the way with the fewest problems for file sharing between Macs and a non-Apple server (true until Lion broke this rule).

AFP (Apple Filing Protocol) is the default way Macs share files among themselves and with an Apple server. It's usually a little faster than SMB, has better Finder integration, and is the only way to get proper Time Machine support. Main problem: it's an Apple-only protocol and they change the basics quite often, so you must always hope that the free netatalk implementation gets updated. Another problem: it's quite a complex thing, with a database and a lot of extra files in a share; not fun if you want to share the same folder in parallel over another protocol. Another missing piece is AD integration: while it's click-and-ready with SMB, I know of nobody who has it working with AFP.

That's the reason I use SMB only, although I support more than 60% Macs at work.

Also, is there a way I can update the netatalk version to 2.2.2, which seems to fix the Lion AFP issue?

wget -O - www.napp-it.org/afp | perl
updates to netatalk 2.2.2

see http://netatalk.sourceforge.net/2.2/ReleaseNotes2.2.2.html
http://napp-it.org/downloads/changelog_en.html

Also, in general, I was wondering how much compression affects performance or results in space savings. If I have pictures and videos, does compression even help at all, unless of course it is a pure raw image?

If you have mainly text files or uncompressed images, it can save a lot of space and improve performance.
With already-compressed files like binaries, JPEGs or movies, it does not improve anything.
With mixed files it may help.
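
Compression is a per-dataset property, and ZFS reports what it actually achieves, so it is easy to test on your own data (dataset name is an example):

Code:
zfs set compression=on tank/data   # enables the default lzjb algorithm
zfs get compressratio tank/data    # shows the achieved ratio, e.g. 1.35x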

About napp-it settings:
Most settings are ZFS properties. They are part of a pool; if you import a pool, these settings are used.
Other settings that are part of napp-it, like keys, jobs or logs, can be saved with current napp-it in menu extension - register - backup napp-it. (Copy the complete napp-it folder to a datapool; to restore, copy it back and optionally set permissions to 777.)

If you use napp-it as a webserver (www, MySQL, PHP, FTP) via XAMPP, you can save all XAMPP settings with menu service - XAMPP - Backup cfg.
 
Adding more memory for a bigger ARC may have little effect on many benchmarks like this, as things like sequential transfers will be limited by network and protocol rather than by whether data comes from disk or from the ARC. You may notice some gains in random reads if the data is cached in the ARC, but it's hard to say how this will work with artificial benchmark workloads. Also note that you're benchmarking with a 2GB dataset, and your ARC before the 8GB memory upgrade would already have been far bigger than that, so the added RAM will mainly help real-world performance when doing small random reads against larger data sets.

If you're using SMB networking there will always be a bit of overhead, but I'm not sure why it's only fast one way like that. You can try testing from other machines if possible. Also, as a way to confirm that it's Windows SMB file sharing you need to look at, you can do a quick test with iSCSI instead.


http://www.windowsnetworking.com/articles_tutorials/Connecting-Windows-7-iSCSI-SAN.html

In napp-it, export an iSCSI LUN and install the above software iSCSI initiator on your Windows machine. Note that you may not be able to use iSCSI for anything other than testing, as it works differently from a normal file share and just gives you access to a virtual hard disk instead.

Also, Windows 7 now has NFS support, so you can test with this as well:

http://sagehacks.wordpress.com/2009/01/21/howto-mount-nfs-shares-under-windows-7/

Note that NFS may default to turning all writes into sync writes, which may be slow without a high-speed ZFS log device installed, so you may want to test with sync writes off as well for comparison.

I have a SATA-3 128GB SSD write cache and a 60GB SATA-3 read cache; I forgot to mention that.
Also, only Win7 Ultimate supports NFS!
I will keep fiddling some more and try to speed up the reads of the system!

EDIT 1: I tried FTP and now get 75MB/s from the server to my Win7 PC, so it must be an SMB problem? What I don't understand is that when you Google speed problems with ZFS, you always get complaints about poor write speeds... in my case it's the other way round: my read speeds suck, and I would really like to solve this, as I would like to start using the server instead of testing it!
 
Hi!

I've two zpools:
- medias: 4x 1TB WD Greens on an AOC-USAS2-L8i
- tank: 4x 2TB Hitachi 5K3000s on onboard SATA

The problem is in the tank zpool (RAIDZ), where one of the HDDs (not always the same one) drops out (faults). There is a periodic clicking sound; I assume this is an HDD power problem. Anyway, the zpool goes into degraded status and I have to shut down, restart the server, and clear the zpool errors to get it out of degraded status. Yes, there is always some resilvering. After the resilver I always run a scrub on this zpool.
Power management for the CPU and HDDs is enabled via napp-it, but both parameters are blank.
So, any ideas what is going on?
 
Hi!

I've two zpools:
- medias: 4x 1TB WD Greens on an AOC-USAS2-L8i
- tank: 4x 2TB Hitachi 5K3000s on onboard SATA

The problem is in the tank zpool (RAIDZ), where one of the HDDs (not always the same one) drops out (faults). There is a periodic clicking sound; I assume this is an HDD power problem. Anyway, the zpool goes into degraded status and I have to shut down, restart the server, and clear the zpool errors to get it out of degraded status. Yes, there is always some resilvering. After the resilver I always run a scrub on this zpool.
Power management for the CPU and HDDs is enabled via napp-it, but both parameters are blank.
So, any ideas what is going on?
@Lui -- a clicking disk is symptomatic of a failing/failed drive. Can you display the SMART info? Hopefully you have backed up the tank zpool. What's a bit strange is that you say the drive that gets dropped differs (as opposed to the same drive each time). You might want to disable power management for a while to see if the problem goes away.
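
With smartmontools installed you can dump it from the CLI; the device path is an example and the -d option depends on your controller:

Code:
smartctl -a -d sat,12 /dev/rdsk/c5t0d0s0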
 
@Lui

Be careful to use a sufficient power supply, as this is also a symptom of drives not getting enough power!
 
Thanks for the advice, but I'd like to mention that I get 100MB/s from my QNAP NAS using Samba to the same Win7 machine!
Also, could it be that my test machine slows down because I have one pool consisting of one vdev of 3 SATA-2 7200rpm 1TB drives and one vdev of 3 SATA-3 5400rpm 2TB drives? Both vdevs are RAIDZ.

Thanks

I am not suggesting the client end is at fault, just trying to see where the problem is. That was my experience while testing ZFS. Whether the CIFS server built into Solaris can be tuned to perform better at low queue depths, I don't know.
 
Is anyone here currently using this as a second/third-tier production setup? We currently have our MD3000i/MD1000 configuration for first/second-tier storage, with 15k and 7.2k drives. I was looking at setting this up as a third-tier vSphere server to handle our PHD backup devices, non-critical file shares, and additional development and test VMs, to remove those storage requirements from our primary array.

The cost of hard drives right now makes this approach an easy-to-justify solution for third-tier data. I just didn't know if anyone has trusted their live systems to this yet.
 
I am not suggesting the client end is at fault, just trying to see where the problem is. That was my experience while testing ZFS. Whether the CIFS server built into Solaris can be tuned to perform better at low queue depths, I don't know.

Well, I use this server at home for movie and BD Blu-ray backups and have two media players connected to it via NFS. Sometimes I need to move a huge amount of data from one server to another, so it isn't the speed with simultaneous users that interests me, but rather the speed with one user moving a whole bunch of data, mostly files from 8-45TB!
I'm really trying to tackle this problem, as it's the only major obstruction my server has!
My read speeds should be raised to more or less match my write speed!

ty
 