Ah, thanks! That's exactly what I wanted to know, and PLP is something I had failed to consider. The cost is ~$150 on eBay, so that's acceptable.
Just wondering, has anyone used it as a ZFS SLOG? It seems too good to be true...it would end my decade-long search for the perfect (and cheap) ZFS SLOG device @_@
Hi all,
I recently came across these devices called the Radian RMS-200 and Radian RMS-300. Looking at the description and specs, they seem to be a "spiritual successor" to the ZeusRAM and a perfect fit for a ZFS SLOG device. However, Radian does not specify their latency.
On the other hand I already...
Yes, you can. Connectivity-wise you can use iSCSI over GbE/10GbE, FCoE over 10GbE, FC, SRP/iSER over IB, etc.
For an Ethernet-based solution you can use a virtual switch (optionally with VLANs) or pass through the NICs. For FC- and IB-based solutions you will probably need to pass through the HBAs.
For home use, the VMware vSphere Hypervisor (ESXi) is good enough, and it is also free, to some degree.
For compute power you probably want to look at the Xeon E5 series CPUs, based on your current and future needs.
I was in the exact same situation, but I decided to get an S3500 80GB instead of a larger S3500 or an S3700, because per Intel's data sheets, all the S3500s and S3700s have the same latency figure for 4KB QD1 writes: 65 µs.
I have since explored the virtualization world much more deeply, and thanks to this recent bump of the thread, I can finally come back to this with some useful information.
My original question 1.5 years ago was not very well posed. As @patrickdk mentioned, CIFS/NFS/iSCSI protocols are not of the...
Wouldn't this cause vibration issues? I feel it is bad for spindles. High temperature is bad too, though, as it affects the stability of the magnetic layers and all the components to some extent.
I'm using napp-it on Solaris under ESXi now. It is not exactly AIO per _Gea's concept, as my VM storage sits on an SSD directly (and thus lacks the protection offered by ZFS), but I'm quite happy with it. Currently I'm looking into moving to a complete AIO setup.
Well, I have to say you made a good point...it's all about cost. My intention is that by using a HW RAID card, which is in a similar price range to a consumer SSD, I can achieve better performance than a consumer SSD.
Intel DC S3500 80GB might be another choice for SLOG if it's better than a...
Yes that's why I'm not too concerned with HA for now.
I was just reading the Intel DC S3700/S3500 specs and found that they both have a latency of 50 µs for reads and 65 µs for writes, where the latency is defined as "Device measured using Iometer. Latency measured using 4KB (4,096 bytes) transfer size...
I think a HW RAID card's cache has really low latency and is thus good for ZIL usage? 2GB would be plenty for my use as the load is very light most of the time. Even 512MB would be fine, as I currently use a consumer SSD that doesn't even have capacitors.
I can't try this idea on my own as I've maxed out my PCI-E...
I suppose when QD=1 we have latency=1/IOPS?
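To put rough numbers on that (a back-of-the-envelope sketch, assuming the QD1 service time is the whole story and ignoring protocol/stack overhead):

```python
# Back-of-the-envelope: at QD=1 each I/O must complete before the next is
# issued, so IOPS ~= 1 / latency and throughput ~= block_size / latency.
# The 65 us figure is the Intel 4KB QD1 write latency quoted above; the
# rest is just arithmetic, not a measured result.

latency_s = 65e-6          # 65 microseconds per 4 KiB write
block_size = 4096          # bytes

iops = 1 / latency_s                       # ~15k IOPS
throughput_mb_s = iops * block_size / 1e6  # ~63 MB/s of sync writes

print(f"QD1 IOPS      : {iops:,.0f}")
print(f"QD1 throughput: {throughput_mb_s:.0f} MB/s")
```

So at 65 µs the ceiling for a single outstanding sync-write stream is roughly 15k IOPS, or about 63 MB/s, regardless of the drive's headline numbers.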
Emmm...it seems this will not work with HA. My setup is kind of all-in-one, so I didn't notice this. @danswartz do you have a SLOG then? If yes, is it located in your JBOD?
I just stumbled upon this article while searching for ways to improve my ZFS performance by improving the SLOG device:
http://forums.servethehome.com/index.php?threads/poor-mans-diy-zeusram.2712/
The main idea here is to use a hardware RAID card's write cache (in the form of DDR) as ZeusRAM's...
Small update, just for the sake of comparison.
Non-destructive benchmark of a 10-HDD RAIDZ2 pool (performed on my old 10x Seagate ST3000DM001-1CH166 drives):
ZFSguru 0.2.0-beta8 (9.1-004) pool benchmark
Pool : data_temp (27.2T, 0% full)
Test size : 256 GiB
normal read : 644 MB/s...
I asked the same question here but no one gave a good answer to the last calculation. It is reflected in the code here:
RAIDZ4 would be slow, according to this:
That's my feeling as well. So instead of performance I'm more focused on space efficiency, for which there is proof here. I will add last year's 10x ST3000DM001 to the pool so it can saturate my GbE. (Well, at least until I get my 40Gbps IB fully working :))
Selling some PC games via internet download:
Assassin's Creed 4 Black Flag
Splinter Cell Blacklist
Batman Arkham Origins
Prices:
Assassin's Creed 4 Black Flag + Splinter Cell Blacklist = $20
Batman Arkham Origins = $10
Or combo deal:
Assassin's Creed 4 Black Flag + Splinter...
You are right. I think they are more about space efficiency but are somewhat related to performance. I specifically selected a large test size and 5 passes to rule out idiosyncratic differences. Not sure what's going on here.
Or run VMware Server/Workstation/Player on Windows with a ZFS-capable VM and pass the storage back to the Windows host using iSCSI or CIFS.
The setup will be more complicated than hardware RAID, and you may give up some performance to CPU and RAM overhead. However, you then get both a rock-solid...
Seems to be a good opportunity for ZFS? Either native ZFS or ZFS on Linux.
If you want to stick with Windows then you can take a look at Storage Spaces. Not as good as ZFS, but better than what came before.
OK, actually the test conditions are not the same...I was using GEOM labels a year ago but used GPT partitioning this time. Probably the sectors are not aligned this way (quick sanity check below).
Anyway, here are the new performance figures, and they seem normal now:
http://hardforum.com/showthread.php?t=1795121
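On the alignment point, here is a tiny sketch of what "4K-aligned" means for a GPT partition start (the offsets below are made-up examples in the style of gpart output, not my actual layout):

```python
# Quick 4K-alignment check for a partition start offset.
# start_lba is the partition's first LBA in 512-byte sectors; a partition
# is 4K-aligned when its starting byte offset is a multiple of 4096,
# i.e. its starting LBA is a multiple of 8.

def is_4k_aligned(start_lba: int, sector_size: int = 512) -> bool:
    """True if the partition's byte offset is a multiple of 4096."""
    return (start_lba * sector_size) % 4096 == 0

print(is_4k_aligned(34))    # False: classic misaligned GPT start
print(is_4k_aligned(2048))  # True: 1 MiB-aligned start
```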
So I am upgrading my ZFS rig, and here are the performance figures (after some trouble), compared to one year ago:
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 5
Cooldown period: 2 seconds
Sector size override: 4096 bytes
Number of disks: 10 disks
disk 1...
So I just got 10x ST4000DM000 to do some testing, but found that the write performance is ridiculously low, as shown below. I terminated the test as I figured it was not going anywhere.
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 5
Cooldown period: 2...
So I have 10 HDDs in the case on 3 backplanes and it is working great. I just ordered 10 more HDDs to fill the case, but found that I cannot power up the server whenever more than 3 backplanes are connected. It is not specific to any one backplane, as I tried many combinations. I don't even...
The enclosure you are looking at exposes 3x SFF-8088 ports directly; that's why.
For comparison, see these two as examples:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816133046
vs
http://www.newegg.com/Product/Product.aspx?Item=N82E16816133047
Both are 24-bay enclosures but DS-24D...
The LSI SAS 9201-16e HBA will give you 4 SFF-8088 ports, more than enough for your enclosure. However, it is an HBA only, so you will need software RAID (e.g. ZFS).
If you want hardware RAID then look here:
http://www.lsi.com/products/raid-controllers/pages/default.aspx#tab/product-family-tab-1...
The power-of-2 recommendation is easy to understand. But this is the part I'm still confused about, even after I looked at the vdev_raidz.c code. Do you have a clear explanation of the reason?
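For what it's worth, here is my reading of the allocation-size math in vdev_raidz.c, transliterated to Python as a sketch (the function name and example numbers are mine, so take them with a grain of salt):

```python
# Rough transliteration of the asize calculation in vdev_raidz.c, as I
# read it -- a sketch, not gospel.  For each logical block, ZFS adds one
# parity sector per "row" of data sectors, then rounds the total up to a
# multiple of (nparity + 1) so that freed space can never leave a gap too
# small to reuse.  That final round-up is where the odd padding comes from.

def raidz_asize(psize, ndisks, nparity, ashift=12):
    """Sectors actually allocated for a block of psize bytes."""
    sector = 1 << ashift
    data = (psize + sector - 1) // sector                # data sectors
    rows = -(-data // (ndisks - nparity))                # ceiling division
    total = data + nparity * rows                        # + parity sectors
    total = -(-total // (nparity + 1)) * (nparity + 1)   # round up (padding)
    return total

# Example: 128 KiB record on a 10-disk RAIDZ2 with 4K sectors
psize = 128 * 1024
alloc = raidz_asize(psize, ndisks=10, nparity=2, ashift=12)
print(alloc, "sectors allocated for", psize // 4096, "data sectors")
```

If I'm reading it right, a 128 KiB record on a 10-disk RAIDZ2 with ashift=12 allocates 42 sectors (32 data + 8 parity + 2 padding), i.e. about 76% usable instead of the naive 8/10 = 80%, and that final round-up to a multiple of (nparity + 1) is where the confusing part of the space-efficiency tables comes from.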
Simple answer: It means you need 3 such cards in the host to fully connect the 3 SFF-8088 ports on the enclosure.
I would recommend a more advanced card, although it will be more expensive.
I am currently using a 900W PSU from a SuperMicro SuperWorkstation 5037A-i in a NORCO RPC-4224 Case. The specs are as follows:
AC Voltage 100-240 V, 50-60 Hz, 10-6 Amp
+5V Standby 3 Amp
+12V1 25 Amp
+12V2 25 Amp
+12V3 25 Amp
+12V4 25 Amp
+5V 25 Amp
+3.3V...
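As a back-of-the-envelope check on the power-up problem (the ~2 A spin-up current per drive is my assumption, not a figure from the drive or PSU datasheets):

```python
# Rough 12V spin-up budget.  The ~2 A per-drive spin-up current is an
# assumed typical value for 3.5" drives, not taken from the ST4000DM000
# or PSU datasheets; adjust for your actual drives.

drives = 20
spinup_amps_per_drive = 2.0     # assumption
rail_limit_amps = 25.0          # one +12V rail on this PSU

total_spinup = drives * spinup_amps_per_drive      # 40 A
print(f"Spin-up draw : {total_spinup:.0f} A @ 12V")
print(f"vs one rail  : {rail_limit_amps:.0f} A "
      f"({'over' if total_spinup > rail_limit_amps else 'within'} budget)")
```

If several backplanes end up on the same +12V rail, simultaneous spin-up of 20 drives could trip it even though the steady-state load is fine; staggered spin-up (if the HBA and drives support it) or splitting the backplanes across rails would be the usual workarounds, though that's just a guess at the cause.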