The 3rd disk finished after about 3 days, whereas the first 2 took about 12 hours. I'm on to the 4th disk and it is going at almost the same slow speed as the 3rd. The drives all came in identical packaging and have the same model number. One thing I notice is the slower drives have a SN...
I am trying to replace the six 3TB drives in my raidz1 pool with 8TB drives. I was able to replace 2 drives and am working on the 3rd, which is taking substantially longer. I've tuned all the resilver properties the same way as for the other drives. I've looked at the iostat output and it...
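For reference, the swaps themselves are plain zpool replace operations; a minimal sketch, where the pool name "tank" and the device names are illustrative placeholders rather than anything from the original post:

```shell
# Replace one member of the raidz1 vdev; ZFS resilvers onto the new disk.
# "tank" and the cXtYdZ device names are placeholders.
zpool replace tank c0t3d0 c0t9d0

# Watch resilver progress and the estimated time remaining.
zpool status tank

# Once every member has been replaced, the pool can grow into the larger
# disks if autoexpand is enabled.
zpool set autoexpand=on tank
```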
The H710p is fine for controlling the local 2.5" drives in the R720. For instance, you could create a simple RAID 1 on two 2.5" drives, install ESXi on them, and use the remaining space for an OpenSolaris VM (or a FreeNAS VM if you are more comfortable with that). From there you add an NFS...
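Inside the storage VM, exporting a dataset back to ESXi over NFS is only a couple of commands; a sketch, with made-up pool and dataset names:

```shell
# "tank/vmstore" is an illustrative dataset name, not from the post.
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore

# Verify the export is active; ESXi then mounts <VM IP>:/tank/vmstore
# as an NFS datastore.
share
```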
Based on the specs listed you will need an external SAS card. If you plan to use the MD1200 with hardware RAID, you would need something like the Dell H800. If you want to use it as a JBOD for ZFS, you would need a card like the LSI 9207-8e.
If it were me I would do a napp-it...
Have you considered running a Solaris-based OS with the napp-it GUI on top? It is actually pretty simple to install and I believe it is a lot more robust than FreeNAS. I haven't noticed any performance issues with my setup, granted I only have a 1Gb network at home. At work I have similar setups that...
I have a question. I created a LUN in OpenIndiana with napp-it. I then backed up the LUN, reformatted the box with Nexenta, and now can't figure out how to enable the LUN in Nexenta. It appears Nexenta uses zvols for LUNs rather than files.
I tried dd'ing the file LUN to a blank zvol LUN...
Then go with Nexenta for the support, or a premade Nexenta solution from one of their partners. That is what I did at my work; it saved the client over a million dollars up front plus $100K in annual support compared to NetApp.
Just build your own system with MD1200/MD1220s. Get a server like the R720 with lots of RAM for the head unit and install a version of Solaris with the napp-it web GUI for management. Get LSI 9207-8e cards to connect the MD12xx JBODs.
No vendor lock-in. Dell will replace failed drives if you...
Madrebel,
I know OCZ has a crap reputation. I didn't have input into their procurement, and I'm 99% sure we are stuck with them. But to be fair, the OCZs thus far have benchmarked slightly better than the high-end STEC ZeusIOPS demo drives we got in.
As for the block size we are pretty much...
msitpro,
Not sure. I just hit the up arrow to the previous commands I had run and modified them with the other setting, so I'm 99% sure the command was correct.
paret0,
I tried this command, which is what I use to tune other settings on the live system that get reset after a reboot:
"echo zfs_unmap_ignore_size /W0t0 |mdb -kw"
Normally it confirms that it has changed the setting; for this setting it instead says
"mdb: failed to...
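For comparison, the usual pattern for poking a live kernel tunable with mdb looks like the following. zfs_txg_timeout is just a commonly cited example symbol here; mdb reports a failure if the named symbol does not exist in the running kernel, which may be what is happening with zfs_unmap_ignore_size on this build:

```shell
# Write the decimal value 5 (0t = decimal) into a 32-bit kernel variable.
# The symbol must exist in the running kernel, or mdb prints an error.
echo "zfs_txg_timeout/W0t5" | mdb -kw

# Read it back to confirm the change (D = display as decimal).
echo "zfs_txg_timeout/D" | mdb -k
```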
I have used all block sizes from 4K to 128K. Oracle uses 8K blocks. Also, as I've stated, I've tried with and without a ZIL.
I read Nex7's blog post (and have also spoken with him by phone and email), and his conclusion was that you won't see a performance increase for 1 thread. My tests are...
ddrdrive,
I have tried all different configurations, such as having the ZeusRAM on its own dedicated 9207. Currently everything is going through 2 LSI SAS switches, so every drive has 4 paths to the 2 HBAs.
Gea,
I have been working with the Nexenta engineer who writes that blog, and he is the one who said I was saturating the ZeusRAM and needed about 8 of them to keep up.
Yes, I am doing sync writes in the benchmark because the system is for a massive Oracle DB. The numbers do look a lot better with...
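A quick way to sanity-check how much the sync semantics cost in such a benchmark is to toggle the dataset's sync property; a sketch, with an illustrative dataset name:

```shell
# "tank/oradata" is a placeholder dataset name.
# Default behavior: honor the application's sync requests
# (e.g. Oracle's O_DSYNC writes go through the ZIL/SLOG).
zfs get sync tank/oradata

# For comparison only: disable sync semantics to see the ZIL cost.
# Unsafe for a production database; strictly a benchmarking step.
zfs set sync=disabled tank/oradata

# Restore the default afterwards.
zfs set sync=standard tank/oradata
```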
Gea,
dd benchmarks max out at 2.2GB/sec writes and 3GB/sec reads.
We don't have any other hardware on hand such as jbods or other hbas.
We could try OmniOS; I didn't realize it had the newest drivers.
We have tried direct connect, daisy chained, LSI SAS switches, multi-path, single-path, etc... all...
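For context, dd numbers like those are typically produced with something along these lines; the file path and sizes are illustrative, and without care the read pass can be served largely from the ARC:

```shell
# Sequential write: 10GB of zeros in 1MB records.
# /tank/ddtest is a placeholder path.
dd if=/dev/zero of=/tank/ddtest bs=1048576 count=10240

# Sequential read of the same file. ZFS may satisfy much of this from
# the ARC unless the file is substantially larger than RAM.
dd if=/tank/ddtest of=/dev/null bs=1048576
```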
My client has purchased over 100 OCZ Talos 2 drives, Dell MD1220 JBODs, Dell R710 servers with 144GB of RAM, STEC ZeusRAM drives, and LSI 9207-8e HBA cards.
I have set up Nexenta for them and I have been very underwhelmed by the benchmarks thus far, specifically the writes. No matter what...
I have read that. Those instructions are for a previous camera; I have the new 1080p camera. I assume the issue is Solaris CIFS. It is strange, though, because when I look in the log files I can see errors if I put in a non-existent share name. So that should mean that when I don't see a log...
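On the Solaris side it may be worth confirming the kernel CIFS service is running and checking the exact share name the camera has to supply; a sketch, assuming illustrative dataset and share names:

```shell
# Confirm the in-kernel SMB server is online.
svcs network/smb/server

# "tank/cameras" and the share name "cams" are placeholders.
# The name= value is exactly what the camera must enter as the share name.
zfs set sharesmb=name=cams tank/cameras

# List active SMB shares to verify the name actually being exported.
sharemgr show -vp
```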
Has anyone been able to connect a Y-cam IP camera to their SAN? It says it supports SMB/CIFS shares. It asks for the server IP/domain name and the share name. I have tried it a million different ways but can't get it to connect.
Need to get my work to buy one of these. :eek:
Wish I could cancel my puny 12 disk MD1200 shelves on order.
http://www.dell.com/us/enterprise/p/powervault-md3200/pd
Same thing as the R510 but with AMD instead of Intel CPUs. The 510 also has the 2 internal 2.5" drives; I use them for booting ESXi as a RAID 1. Then I pass through the HBA card with the 12 drives to an OpenIndiana VM.
Instead of a Thecus, why not just get a Dell R510?
I got one with 12 bays last year for $1600; I assume you could get it cheaper now. The H200 card is a rebranded LSI HBA.
There is newer firmware, v2.22 I believe. Also, you may want to run the temperature fix that comes on the Linux CD. Basically the drives report the wrong temperature, causing RAID cards to drop them.
Money is not really an issue. What are the best performing HBA cards?
We can't connect the hard drives to the motherboard easily. There are only two onboard ports, and one is used for the CD-ROM. There are no Molex or SATA power cables in Dell servers, plus there is no place to put a drive; all...
I think I have figured out the problem. The Dell H200 HBA (a rebranded LSI 9211) is simply not fast enough. It only supports 350MB/sec per port and seems to have a maximum of about 600MB/sec total, so it makes sense that when testing individual drives, even the ZeusRAM, we are only seeing 350MB/sec writes.
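The back-of-envelope math behind that conclusion can be sketched as a tiny shell calculation; the 350MB/sec per-port and 600MB/sec controller figures are the ones quoted above, not official LSI specs:

```shell
# Aggregate throughput is the per-port limit times the number of active
# devices, capped by the controller-wide ceiling quoted above.
per_port_mb=350       # MB/sec per port (figure quoted in the post)
controller_max_mb=600 # MB/sec controller ceiling (figure quoted in the post)
drives=2

uncapped=$((per_port_mb * drives))
if [ "$uncapped" -gt "$controller_max_mb" ]; then
  aggregate=$controller_max_mb
else
  aggregate=$uncapped
fi
echo "$aggregate"   # two drives already hit the 600MB/sec ceiling
```

So on these quoted numbers, just two fast devices saturate the card, which would explain why adding more SSDs did not help the benchmarks.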
I initially tried Solaris 11 Express. I then switched to OpenIndiana with no noticeable difference. Right now I'm installing ESXi 5 and passing through the HBA cards to see if that makes a difference.
Update:
I connected the 14 SSDs to the internal H700 RAID card and created two 7-disk RAID 0 stripes. In Solaris I created a RAID 0 of the two. The benchmarks were the same as with the H200.
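Striping the two hardware volumes in ZFS is just a pool with two top-level vdevs; a sketch with placeholder device names:

```shell
# c1t0d0 and c1t1d0 are placeholders for the two H700 RAID 0 virtual disks.
# Listing two bare devices creates a striped (RAID 0) pool across them.
zpool create fastpool c1t0d0 c1t1d0
zpool status fastpool
```

Worth noting that layering ZFS on hardware RAID hides individual disk errors from ZFS, so this is a benchmarking configuration rather than a recommended deployment.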
The H200 was pulled from another system; it is not the integrated version. OpenIndiana is running on bare metal. The only other HBAs I have are the Dell PERC 6/i cards, which are much older. The system did come with an H700; I guess I can test with that, but I would prefer to avoid hardware...
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
random...
Here is the output of iozone; I'm not sure how to read it:
Children see throughput for 32 random writers = 1400575.40 KB/sec
Parent sees throughput for 32 random writers = 950491.35 KB/sec
Min throughput per thread = 29922.35 KB/sec...
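For reference, throughput-mode output like that comes from an invocation along these lines; the exact flags of the original run aren't shown, so these are illustrative:

```shell
# Throughput mode: 32 parallel workers (-t 32), random read/write test
# (-i 2 requires the base write test -i 0), 8KB records, 1GB per worker.
# -e includes flush time in the timing; -o forces O_SYNC writes.
iozone -i 0 -i 2 -t 32 -r 8k -s 1g -e -o
```

As for reading it: "Children see" is the sum of the rates each worker measured over its own run time, while "Parent sees" is total data moved divided by overall wall-clock time, so the parent figure is the more conservative aggregate number.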