Search results

  1.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    The 3rd disk finished after about 3 days, whereas the first 2 took about 12 hours. I'm on to the 4th disk and it is going at almost the same slow speed as the 3rd disk. The drives all came in identical packaging and have the same model number. One thing I notice is the slower drives have an SN...
  2.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I am trying to replace my six 3TB drives in a RAIDZ1 pool with 8TB drives. I was able to replace 2 drives and am working on the 3rd. The 3rd is taking substantially longer. I've tuned all the resilver properties the same way as for the other drives. I've looked at the iostat output and it...
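
    A minimal sketch of the kind of live resilver tuning being described, assuming illumos-era tunables (parameter names and values are assumptions, not quoted from the thread):

        # remove the artificial delay between resilver I/Os
        echo zfs_resilver_delay/W0t0 | mdb -kw
        # allow more resilver time per txg (value in ms)
        echo zfs_resilver_min_time_ms/W0t3000 | mdb -kw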
  3.

    Dell MD1200 and Dell R720 - Home Data Storage

    The H710P is fine for controlling the local 2.5" drives in the R720. So, for instance, you could create a simple RAID 1 on two 2.5" drives. Install ESXi on them. Use the remaining space for an OpenSolaris VM (or a FreeNAS VM if you are more comfortable with that). From there you add an NFS...
  4.

    Dell MD1200 and Dell R720 - Home Data Storage

    Based on the specs listed, you will need an external SAS card. If you plan to use the MD1200 with hardware RAID you would need something like the Dell H800 card. If you want to use it as a JBOD for ZFS you would need a card like the LSI 9207-8e. If it were me I would do a napp-it...
  5.

    Got 160 TB ZFS FreeNAS. Thinking of converting to ReFS Storage Spaces on Server 2016

    Have you considered running a Solaris-based OS with the napp-it GUI on top? It is actually pretty simple to install, and I believe it is a lot more robust than FreeNAS. I haven't noticed any performance issues with my setup, granted I only have a 1Gb network at home. At work I have similar setups that...
  6.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I have a question. I created a LUN in OpenIndiana with napp-it. I then backed up the LUN, reformatted the box with Nexenta, and now can't figure out how to enable the LUN in Nexenta. It appears Nexenta uses zvols for LUNs rather than files. I tried dd'ing the file LUN to a blank zvol LUN...
  7.

    Any Compellent Resellers or Dell Guys Here?

    Then go with Nexenta for the support, or a premade Nexenta solution from one of their partners. That is what I did at my work; it saved the client over a million dollars, plus $100K in annual support for NetApp.
  8.

    Any Compellent Resellers or Dell Guys Here?

    Just build your own system with MD1200/MD1220s. Get a server like the R720 with lots of RAM for the head unit and install a version of Solaris with the napp-it web GUI for management. Get LSI 9207-8e cards to connect the MD12XX JBODs. No vendor lock-in. Dell will replace failed drives if you...
  9.

    Thoughts on company NAS build?

    Yes, they are called HBAs (host bus adapters): http://www.lsi.com/products/storagecomponents/Pages/HBAs.aspx
  10.

    Thoughts on company NAS build?

    I don't think that card supports IT mode: http://www.lsi.com/products/storagecomponents/Pages/MegaRAIDSAS9261-8i.aspx
  11.

    Thoughts on company NAS build?

    Why would you choose an LSI RAID controller? ZFS works best with an HBA card like the newest 9207-8i.
  12.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Quick question. How do I configure NFS so only certain IPs on my local subnet can access NFS shares?
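
    One hedged way to do this on a Solaris-family system, assuming the shares are ZFS datasets (pool/dataset name and subnet are placeholders):

        # read-write only for hosts in 192.168.1.0/24; everyone else is refused
        zfs set sharenfs="rw=@192.168.1.0/24" tank/share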
  13.

    All SSD ZFS Pools

    Madrebel, I know OCZ has a crap reputation. I didn't have input into their procurement. I'm 99% sure we are stuck with them. But to be fair, the OCZs thus far have benchmarked slightly better than the high-end STEC ZeusIOPS demo drives we got in. As for the block size we are pretty much...
  14.

    All SSD ZFS Pools

    msitpro, Not sure; I just hit the up arrow to a previous command I had run and modified it with the other setting, so I'm 99% sure the command was correct.
  15.

    All SSD ZFS Pools

    paret0, I tried using this command, which is what I use to tune other settings on the live system that get reset after a reboot: "echo zfs_unmap_ignore_size/W0t0 | mdb -kw". Normally it would confirm that it has changed the setting; instead, for this setting it says "mdb: failed to...
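
    For comparison, a sketch of the same pattern with a tunable that does exist, plus the persistent /etc/system form (zfs_txg_timeout is only an illustrative parameter):

        # live change via mdb: write decimal 5 into the kernel variable
        echo zfs_txg_timeout/W0t5 | mdb -kw
        # persistent equivalent, added to /etc/system:
        set zfs:zfs_txg_timeout = 5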
  16.

    All SSD ZFS Pools

    paret0, I tried to tune that setting but it doesn't seem to be in Nexenta. Must be Solaris-specific.
  17.

    All SSD ZFS Pools

    I have used all block sizes from 4K to 128K. Oracle uses 8K blocks. Also, as I've stated, I've tried with and without a ZIL. I read Nex7's blog post (and spoke with him by phone and email), and his conclusion was that you won't see a performance increase for 1 thread. My tests are...
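
    If the dataset backs an 8K-block Oracle database, the usual matching step is a sketch like this (dataset name is a placeholder; recordsize only affects newly written files):

        zfs set recordsize=8k tank/oradata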
  18.

    All SSD ZFS Pools

    ddrdrive, I have tried all different configurations, such as having the ZeusRAM on its own dedicated 9207. Currently everything is going through 2 LSI SAS switches, so every drive has 4 paths to the 2 HBAs.
  19.

    All SSD ZFS Pools

    Gea, I have been working with the Nexenta engineer who writes that blog, and he is the one who said I was saturating the ZeusRAM and needed about 8 to keep up. Yes, I am doing sync writes in the benchmark because the system is for a massive Oracle DB. The numbers do look a lot better with...
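
    Adding several log devices stripes sync writes across them; a sketch with placeholder device names:

        # four striped (non-mirrored) SLOG devices
        zpool add tank log c1t0d0 c1t1d0 c1t2d0 c1t3d0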
  20.

    All SSD ZFS Pools

    Gea, dd benchmarks max out at 2.2GB/sec writes and 3GB/sec reads. We don't have any other hardware on hand, such as JBODs or other HBAs. We could try OmniOS; I didn't realize it had the newest drivers. We have tried direct connect, daisy-chained, LSI SAS switches, multi-path, single-path, etc... all...
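
    For reference, a dd sequential test along these lines is typical (paths and sizes are placeholders; compression must be off or the zeros will inflate the numbers):

        zfs set compression=off tank/bench
        # ~32GB sequential write, then read it back
        dd if=/dev/zero of=/tank/bench/testfile bs=1024k count=32768
        dd if=/tank/bench/testfile of=/dev/null bs=1024k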
  21.

    All SSD ZFS Pools

    My client has purchased over 100 OCZ Talos 2 drives. Dell MD1220 JBODs. Dell R710 servers with 144GB of RAM. STEC ZeusRAM drives. LSI 9207-8e HBA cards. I have set up Nexenta for them and I have been very underwhelmed by the benchmarks thus far, specifically the writes. No matter what...
  22.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    I have read that. Those instructions are for a previous camera. I have the new 1080P camera. I assume the issue is Solaris CIFS. It is strange, though, because when I look in the log files I can see errors if I put in a non-existent share name. So that should mean that when I don't see a log...
  23.

    OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Has anyone been able to connect a Y-cam IP camera to their SAN? It says it supports SMB/CIFS shares. It asks for the server IP/domain name and the share name. I have tried it a million different ways but can't get it to connect.
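
    On the server side, a kernel-CIFS share for the camera would be defined roughly like this (dataset and share names are placeholders):

        # expose tank/camera as \\server\camera
        zfs set sharesmb=name=camera tank/camera
        # workgroup mode, if the box is not joined to a domain
        smbadm join -w WORKGROUP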
  24.

    60 Disk 4U Dell JBOD

    It has eight 6Gb/s SAS ports.
  25.

    60 Disk 4U Dell JBOD

    $77K online quote filled with 3TB SAS drives.
  26.

    60 Disk 4U Dell JBOD

    Need to get my work to buy one of these. :eek: Wish I could cancel my puny 12-disk MD1200 shelves on order. http://www.dell.com/us/enterprise/p/powervault-md3200/pd
  27.

    Wondering about ZFS hand built vs. Thecus

    Same thing as the R510 but with AMD instead of Intel CPUs. The 510 also has the 2 internal 2.5" drives. I use them for booting ESXi as a RAID 1. Then I pass through the HBA card with the 12 drives to an OpenIndiana VM.
  28.

    Wondering about ZFS hand built vs. Thecus

    Instead of a Thecus, why not just get a Dell R510? I got one with 12 bays last year for $1,600; you could get it cheaper now, I assume. The H200 card is a rebranded LSI HBA.
  29.

    OCZ SSD's

    There is a newer firmware, v2.22 I believe. Also, you may want to run the temperature fix that comes on the Linux CD. Basically the drives report the wrong temperature, causing RAID cards to drop them.
  30.

    ZFS SSD Performance Issue

    Money is not really an issue. What are the best-performing HBA cards? We can't connect the hard drives to the motherboard easily. There are only two onboard ports; one is used for the CD-ROM. There are no Molex or SATA power cables in Dell servers, plus there is no place to put a drive, all...
  31.

    ZFS SSD Performance Issue

    I think I have figured out the problem. The Dell H200 HBA card (a rebranded LSI 9211) is simply not strong enough. It only supports 350MB/sec per port and seems to have a max of 600MB/sec, so it makes sense that when testing individual drives, even the ZeusRAM, we are only seeing 350MB/sec writes.
  32.

    ZFS SSD Performance Issue

    I initially tried Solaris 11 Express. I then switched to OpenIndiana with no noticeable difference. Right now I'm installing ESXi 5 and passing through the HBA cards to see if that makes a difference.
  33.

    ZFS SSD Performance Issue

    Update: I connected the 14 SSDs to the internal H700 RAID card. I created two 7-disk RAID 0 stripes. In Solaris I created a RAID 0 of the two. The benchmarks were the same as with the H200.
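
    In ZFS terms that top layer is just a striped pool over the two virtual disks; a sketch with placeholder device names:

        # each cXtYdZ is one 7-disk hardware RAID 0 volume
        zpool create ssdpool c2t0d0 c2t1d0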
  34.

    ZFS SSD Performance Issue

    The H200 was pulled from another system. It is not the integrated version. OpenIndiana is running on bare metal. The only other HBAs I have are the Dell PERC 6/i cards, which are much older. The system did come with an H700; I guess I can test with that, but I would prefer to avoid hardware...
  35.

    ZFS SSD Performance Issue

    Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        random...
  36.

    ZFS SSD Performance Issue

    Here is the output of iozone; not sure how to read it:

        Children see throughput for 32 random writers = 1400575.40 KB/sec
        Parent sees throughput for 32 random writers  =  950491.35 KB/sec
        Min throughput per thread                     =   29922.35 KB/sec...
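
    For context, a throughput run like the one above is typically invoked along these lines (the flags are an assumption about the test, not quoted from the thread):

        # 32 threads, 1GB file each, 8K records; -i 0 write, -i 2 random, -e includes flush time
        iozone -i 0 -i 2 -t 32 -s 1g -r 8k -e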
  37.

    ZFS SSD Performance Issue

    Updated the Dell H200 firmware to the newer LSI 9211 firmware and the results are the same. Switching the OS to OpenIndiana now.
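
    The usual sas2flash sequence for putting LSI IT-mode firmware on such a card looks roughly like this (image file names are placeholders; flashing the wrong adapter can brick it):

        sas2flash -listall                          # identify the adapter
        sas2flash -o -f 2118it.bin -b mptsas2.rom   # IT firmware plus boot ROM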
  38.

    ZFS SSD Performance Issue

    Any other suggestions? Today I'm going to try flashing with LSI firmware. Maybe switching the OS to the full Oracle Solaris 11 and/or OpenIndiana.
  39.

    ZFS SSD Performance Issue

    It is a SAS backplane.
  40.

    ZFS SSD Performance Issue

    zpool list:

        NAME     SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
        rpool    136G   20.0G  116G   14%  1.00x  ONLINE  -
        ssdpool  2.82T  122K   2.82T  0%   1.00x  ONLINE  -

    zfs list:

        rpool  53.9G  80.0G  94K  /rpool
        rpool/ROOT...