Search results

  1. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Hi J-san, Thanks for your help. I think I've narrowed it down to actual disk/ZFS performance. Even though CrystalDiskMark displays wonderful results, I came to realize that it creates a 1GB file, which ZFS then holds in ARC. This effectively means I'm running the disk test from the ARC cache. I...
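    One way around the ARC skew described above is to benchmark with a file larger than the ARC and with a cold cache. A minimal sketch; the pool/dataset names ("tank/bench") and the 64 GiB size are assumptions, pick a size well above your ARC limit:

    ```shell
    # Benchmark with a file larger than ARC so reads actually hit disk.
    zfs create tank/bench
    dd if=/dev/zero of=/tank/bench/testfile bs=1M count=65536   # 64 GiB write pass
    # Export/import (or reboot) to empty the ARC before the read pass:
    zpool export tank && zpool import tank
    dd if=/tank/bench/testfile of=/dev/null bs=1M               # cold-cache read pass
    ```

    Alternatively, `zfs set primarycache=metadata tank/bench` keeps file data out of the ARC for that dataset while testing.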
  2. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Hi J-san, Thanks for your help. I tried adding the disk as SCSI 2:0 - paravirtual. Same results, unfortunately. Each disk is its own VMDK. I have not partitioned the drives added from ESXi. VMware Tools is installed and updated to the latest version. So weird that the Ubuntu VM runs @ ~200MiB/s...
  3. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    ...performing the EXACT same operation in Ubuntu yielded a much better result.
  4. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    The D-drive is a hard drive (VMDK) file mounted in ESXi, just like regular ESXi drives. I'm trying to copy an MKV file about 5GB in size. See below; the green arrow is the SSD and the red arrow is the ZFS datastore via NFS: BR Jim
  5. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Hi J-San, Thanks for your reply. Unfortunately, disabling atime didn't help. It was actually enabled on my pool, so thanks for the tip. I'm copying from a volume that resides on the NFS datastore presented by OmniOS, to the SSD. I'm copying a 5GB file. See below: Copy from->to...
  6. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Hi J-San, Thanks for taking the time to reply :) The direct-attached SSD is a VM datastore attached to the VM as SCSI 0:2. The SSD is connected directly to the motherboard, with the same cable. Never touched it. I'm talking about an internal VM-to-VM copy, from the ZFS datastore to the SSD...
  7. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Not sure if this is the right forum to post in, but I'll try anyway. Running my ESXi 5.5 U2 (latest build) all-in-one, Gea's concept basically. All my VMs use VMXNET3 NICs. One OmniOS VM w/ 32GB RAM, 4 vCPU, LSI 9211-8i (P19 firmware). Napp-it runs MTU 9000 against the vSwitch, presenting...
  8. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Thanks for the hint about the P20 firmware. I had a ton of hard errors and transport errors on my 6x3TB mirror pool on a new build. Reverted to P19 and all is well again :)
  9. J

    LSI 9260-8i - Weird RAID10 speeds

    Solved by recreating the RAID10 array from LSI's Windows Storage Manager. Read speeds are now 6 times better, ~300MB/s
  10. J

    LSI 9260-8i - Weird RAID10 speeds

    Dear community, I'm currently running ESXi 5.5 U1 on a Supermicro X9-SRL. I use an LSI 9260-8i RAID controller for all my datastores. My main datastore is a 4 x 3TB RAID10 consisting of WD RED series drives. For some odd reason I can't understand, I'm experiencing 3 times faster WRITE...
  11. J

    Napp-it - NFS & ESXi 5.5

    I ended up using NexentaStor v3.15 with a 5.5 TB striped mirror (RAID10). Performance was slightly better with Napp-it, but NexentaStor has the better web interface and easier config. Napp-it is a powerful tool, if one has the time to learn it. I use it for my much bigger standalone fileserver...
  12. J

    Napp-it - NFS & ESXi 5.5

    Thanks for all your replies, I'm still not sure whether I should move to ZFS+NFS yet. But I guess it's worth a try, considering I will also get a bit more disk space (RAID10 vs RAIDZ1) out of my 4 drives. At least ZFS is more flexible and scales better than a local RAID controller. Input is still...
  13. J

    Napp-it - NFS & ESXi 5.5

    Thanks for your replies, I'll try to post some more info here: Currently my performance is "good" when only using one or two VMs to access the RAID10, but when I start using other VMs, the disk performance is terrible. It seems like the RAID10+BBU+spindles+VMFS doesn't really scale all...
  14. J

    Napp-it - NFS & ESXi 5.5

    Dear community, Before I repurpose my all-in-one fileserver, I have a few questions for the experts. I'm currently running hardware RAID10 VMFS with ESXi 5.5. Multiple VMs are used for fileserver, streaming server, etc. I'm finding performance to be very limited on the IO side, even though...
  15. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Hi all, Can someone shed light on how to handle permissions when mounting a CIFS share from an Ubuntu server on my OmniOS Napp-it server? Every time I try to mount the CIFS share with an SMB user created in Napp-it, I'm not getting the same access rights as I do on, e.g., a Windows Server. It...
  16. J

    Server motherboard: Asus vs SuperMicro

    Dear community, For a new file server build, I've been looking at the Supermicro X9-SRL-F, but they're hard to come by in my region. The Asus Z9PA-U8 seems to be very similar and is easily available. I own a Supermicro board, and I'm quite pleased with it, so I wanted to hear what you...
  17. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    You are forgetting your "dpool" folder. Even though it's not a share, initial authentication polls that folder (at least that's what I'm seeing). Try this from an SSH session/console. Clean the current ACL: /usr/bin/chmod -R A- /dpool/ Allow everyone to access dpool and dpool/datatank and set...
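    The reset sequence above can be sketched out like this, using the illumos `/usr/bin/chmod` ACL syntax. The dataset paths match the post; the exact permission sets granted are assumptions, adjust to your security needs:

    ```shell
    # Strip all non-trivial ACL entries recursively:
    /usr/bin/chmod -R A- /dpool
    # Re-grant everyone@ read/execute on the pool root so share traversal works
    # (fd = inherit to new files and directories):
    /usr/bin/chmod A=everyone@:read_set/execute:fd:allow /dpool
    # Grant full access on the share itself, inherited downward:
    /usr/bin/chmod -R A=everyone@:full_set:fd:allow /dpool/datatank
    ```

    `read_set` and `full_set` are the Solaris/illumos permission-set aliases; `ls -V` shows the resulting ACL entries.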
  18. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    As a temporary solution I've created this script; it basically does the job. /usr/bin/chmod -R...
  19. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    @_Gea: Is it possible to add the option of defining your own ACL Reset parameters? I would like to reset all my ACL's with the following parameters(just example): 0:group:vh31777:list_directory/read_data/add_file/write_data...
  20. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    ESXi 5.1 and OI 151a7, same as you.
  21. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Nevertheless: VMXNET3 transfers = 60-70MB/s, E1000 transfers = 100-110MB/s
  22. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    With VMXNET3, I'm getting between 600-800Mbps when copying to another random PC on my network. With E1000, the speed is what one should expect from a 1Gbit network. Very weird, but that's how it is here. Thanks Jim
  23. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I'm aware of this, but nevertheless the amount of bandwidth it delivers is exactly 1Gbit. Hence the need for aggregation. VMXNET3 runs slower than E1000 on OI, so that's not an option. Thanks so far Jim
  24. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Because I'm using E1000 vNICs, which only allow throughput of ~1Gbit/s. Aggregation on the ESXi is already set up using IP Hash to my Cisco SG200 switch, using 3 x pNICs. But for the guest VMs to utilize this, they need to be able to Rx/Tx at more than 1Gbit/s, which the E1000 is not...
  25. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I have now successfully aggregated 2 x E1000 NICs (virtual) in OI. This works and connectivity is there, BUT with the aggregation I'm getting speeds of around 50MiB/s. Without the aggregation (single NIC), I'm getting 100-110MiB/s. Can anyone tell me what's wrong? My setup is an All In One...
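    For reference, aggregating two NICs on OpenIndiana is done with `dladm`. A rough sketch; the link names (e1000g0/e1000g1) and addressing are assumptions, and the exact `dladm`/`ipadm` syntax varies a little between illumos releases, so check `dladm show-link` first:

    ```shell
    dladm show-link                                  # list available datalinks
    ipadm delete-if e1000g0                          # member links must be unplumbed
    dladm create-aggr -l e1000g0 -l e1000g1 aggr0    # build the aggregation
    ipadm create-if aggr0                            # plumb the aggregated link
    ipadm create-addr -T dhcp aggr0/v4               # or static: -T static -a <addr>
    ```

    Note that a trunk-mode aggregation still hashes each flow to a single member link, so one TCP stream between two hosts will not exceed a single NIC's speed.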
  26. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Nice to know it's working. I'm not sure what you mean by port group. I have 4 x NICs on a single vSwitch in ESXi. They're load balancing via IP Hash to a Cisco SG200 switch. So AFAIK, ESXi should have ~4Gbit/s throughput from the vSwitch out to my network. I just tried adding 2 x Intel E1000...
  27. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Has anyone messed with link aggregation on OI? I'm running mine as an all-in-one on ESXi, but had poor network performance using the VMXNET3 adapter. I then switched to E1000, but because of the lower bandwidth properties of the E1000, I'm not able to utilize the 2 x dual-port NICs I have in...
  28. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Damn WD, I can't replace any drives at all, since all new drives (i.e. WD REDs) report 4K sectors :( EDIT: Can I recreate the zpool with forced 4K sectors if I offload my data?
  29. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Can someone enlighten me as to why my ZFS pool parameter "ashift", concerning disk sector sizes, reports as "ashift=9", when ALL my drives are Advanced Format drives that should report a 4K sector size and result in "ashift=12"?
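    A quick way to confirm what the question above describes; the pool name ("tank") is an assumption:

    ```shell
    # Inspect the pool configuration; ashift appears per vdev:
    zdb -C tank | grep ashift        # ashift=9 -> 512B, ashift=12 -> 4K
    ```

    The likely cause: many Advanced Format drives are 512e, i.e. they have 4K physical sectors but still *report* a 512-byte logical sector size, so ZFS picks ashift=9 at pool creation. ashift is fixed per vdev once set, so changing it means recreating the pool (or, on illumos, overriding the reported sector size, e.g. via sd.conf, before creating it).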
  30. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Many thanks for your assistance, _Gea. Best regards Jimmy
  31. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    During the last week, I've been replacing 3 x WD 2TB Greens due to reported errors in Napp-it. Yesterday I finished a resilver with the disk "c6t9d0", a brand new drive. Today I started another resilver, replacing yet another faulty WD Green. Just an hour into the resilver, I'm again getting...
  32. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Thanks all for your answers, all things considered I'm gonna go with the 3TB WD REDs. The lower power usage means a lot. /Jim
  33. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I need guidance on drives to purchase, can anyone please assist? Criteria for drives: 1) Energy efficient 2) Performance is not an issue (1Gbit network only) 3) Must not run too hot; the room with the server in it is approx. 28 degrees Celsius. 4) CAN'T be WD Greens (have 16 already and I'm running back...
  34. J

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    One WD Green 2TB drive after another is failing on me ATM :( What are the general recommendations for 2TB / 3TB drives these days for ZFS? I would like to avoid the special RAID edition drives if possible; they're super expensive here. WD RED series perhaps? Input appreciated /Jim
  35. J

    Cisco SG200 switch keeps polling

    For some reason, my NAS is now hibernating as it should, even with the factory settings on the Cisco SG-200. No idea, what caused it to work again. Thanks for your kind assistance. /Jim
  36. J

    Cisco SG200 switch keeps polling

    I think mine is in port 1 or 2. Should that have any impact? There's no mention of 25/26 being defined as uplinks. Thanks /Jim
  37. J

    Cisco SG200 switch keeps polling

    Thanks for your reply, can you please elaborate? /Jim
  38. J

    Cisco SG200 switch keeps polling

    Thanks for your reply, I'm not sure what you mean by "MAIN in". Is it the port that has my router connected? /Jim
  39. J

    Cisco SG200 switch keeps polling

    Thanks for your reply. I've disabled both CDP and LLDP. Currently looking through the other settings. /Jim
  40. J

    Cisco SG200 switch keeps polling

    Hello all, Just purchased a Cisco SG200-26 (non-PoE version). It works great, but since hooking it up to my network, my Synology NAS won't hibernate. So I went and checked if the switch was polling, and indeed it is. Every 1-2 seconds, the activity LED on all connected ports...