Recent content by levak

  1. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I think OmniOS is the way to go if you want a stable server OS. Regarding my post about the iSCSI target dying: I guess I spoke too soon. It crashed four times today. I'll report when I have some more info. Matej
  2. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I didn't change any settings on the server (target), but I had to set higher timeouts on the clients (initiators) because of the way we use iSCSI, which is not ideal (remote clients, sometimes high latency, ...); the lower timeouts led to iSCSI drives dropping. I know iSCSI wasn't made for that kind of...
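On Linux open-iscsi initiators, the timeout in question is usually raised in /etc/iscsi/iscsid.conf; the post doesn't say which initiator OS was tuned, so this is a sketch for that common case:

```
# /etc/iscsi/iscsid.conf -- how long the initiator waits for a session
# to re-establish before failing I/O back to the layers above.
# Default is 120 seconds; raise it for high-latency or flaky links.
node.session.timeo.replacement_timeout = 300
```

Existing sessions keep their old value; the setting applies to sessions logged in after the change.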
  3. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Interesting limits. I wonder why there isn't a bigger default limit on systems with more memory. In my case, the time needed to free RAM cost us a service crash. Unfortunately, my graphs are already aggregated and I can't look back at them to see if there was a momentary lack of free...
  4. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Glad I can be of help. For now I set a rough, back-of-the-envelope limit and just capped my ARC at 200GB. Looking at my graphs, I usually had around 10GB of free memory, and now I have around 18GB after the ARC fills up. Matej
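On OmniOS/illumos, an ARC cap like the 200GB one described above is typically set in /etc/system and takes effect after a reboot; a minimal sketch (the value is in bytes):

```
* /etc/system on OmniOS/illumos -- cap the ZFS ARC at 200 GiB.
* 200 * 2^30 bytes = 214748364800 = 0x3200000000
set zfs:zfs_arc_max = 0x3200000000
```

The current ARC size and target can then be checked at runtime with `kstat -n arcstats`.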
  5. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    About a year ago, I wrote about my problems with the ZFS pool freezing and the iSCSI target dying. At first at random, and towards the end at the same time every week, our iSCSI target died (svcs reported the service as running, but nothing was listening on port 3260) and we were unable to write to the pools...
  6. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    So you can't delete Volume Shadow Copies, and with them the snapshots? Only the local root user can delete snapshots? Matej
  7. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I guess I will have to run some tests of my own. I want to test it fully before deploying it to a production SAN. I don't want any trouble with 400TB of data; it takes ages to restore :) Matej
  8. ZFS. Lost a drive and pool goes offline?

    Get us the output of 'zpool status', or, if for some reason you don't see the pool there, the output of 'zpool import'. Matej
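The two diagnostic steps suggested above can be sketched as a shell session (the pool name at the end is a placeholder, not from the original post):

```
# Show all imported pools, with health and per-device state:
zpool status

# If the pool isn't listed, scan attached devices for pools
# that are exported or were never imported on this system:
zpool import

# If the scan finds the pool, import it by name ("tank" is an example):
zpool import tank
```

`zpool status` only reports pools the system currently has imported, which is why a pool that dropped out after a drive failure may only show up in the `zpool import` scan.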
  9. Seagate Drives

    I have around 100 4TB SATA Constellation drives and around 200 4TB & 6TB SAS Constellation drives running for over 4 years, and I think 2-5 have failed in that time. Matej
  10. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Hey there! I was wondering what the currently recommended LSI SAS2 firmware version is. Some say P15, others P18, and some P19. What do you use? Has anyone updated to the updated (??) P20? :) Is anyone having problems with iSCSI being randomly disconnected (I notice this mostly with Windows...
  11. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    You set it when you create the LU:

    $ stmfadm create-lu -p blk=512

    Check stmfadm create-lu -? for the options. You can see the current size with:

    $ stmfadm list-lu -v

    lp, Matej
  12. Seagate SAS in SM JBOD - lots of read errors (anyone else?)

    So far no problems. In the meantime, one drive did fail, but it was easy to find:
    - the drive was 100% busy even when there was almost no traffic
    - SMART showed 'Drive failing'
    The other drives are humming along nicely, and they have taken some beating by now with various test scenarios... Matej
  13. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I've been running one server with 256GB of memory for about 2 years. We have had some troubles, but they are not related to memory. I'll start a new cluster in a month, with both nodes having 256GB of memory. Ping me after half a year and I can report :) Matej
  14. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    We have a server with 256GB that has been running for 2 years without a problem. I also talked about this with OmniTI 2 days ago, and they said there should be no problem with that much memory. They probably run their production servers with even more than 256GB of memory. Matej
  15. OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I did enable the ARC for the tests, and it might be that the data was not in the ARC. I should run the same test twice or more, to eliminate reads (I have enough memory to cache everything). On the other hand, I had recordsize set to 4k, so there shouldn't be any RMW. It could be that blocks...