Need ZIL (ZFS) advice please

shnelson

I'm in the final stages of setting up my storage server. Its primary purpose is to serve up iSCSI for my home ESXi lab, but I'd also like to use it as a reliable CIFS share and store all of my photos and videos on it.

I will have 6 x 1TB drives in RAIDZ and am planning on an SSD for the ZIL. Thinking about system integrity, I'm going to follow best practice and mirror the ZIL. I have a hard time justifying a pair of SLC SSDs for home storage; is it really necessary? How badly does an MLC drive get thrashed in a minimal-use home storage system?

If you guys do recommend the SLC, would there be any problem in using an Intel 311 20GB and a secondary MLC drive of similar capacity, or is there no way to designate an active drive in the mirror? I've settled on FreeNAS for an OS but am open to other suggestions if that's relevant.


The *cheap* side of me wants to get a pair of Kingston 16GB MLC drives for the ZIL, but I don't want to be repairing a catastrophe 3 months down the road because of premature failure.
 
This is Hardforum. People will tell you to buy a gold-plated, ruthenium-insulated Intel SLC drive.

But in my opinion two cheap Kingston drives will do it, as long as your ZFS pool version is high enough (v19 for log device removal, I think?). If you are unlucky, one drive will fail; so what? Replace it.
If you are VERY VERY unlucky the two drives will fail at the same time; no big deal, the pool will not die as long as your pool version is 19 or higher.
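If you want to check where you stand, the pool version stuff is easy to query; something like this (the pool name "tank" is just an example):

Code:
# list the pool versions your OS supports and what each adds (v19 = log device removal)
zpool upgrade -v

# show what version a given pool is currently at
zpool get version tank

# upgrade the pool to the newest supported version (one-way; older OSes can't import it afterwards)
zpool upgrade tank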
 
I've attempted this with OCZ Vertex 2 drives for a mirrored ZIL. They both failed in under 7 days.

I replaced them with Intel SLCs and have had no issues. Those failed OCZs came back to life, so I added them as more L2ARC space, and that has been working well.
 
Thanks for the insight Darknight - looks like FreeNAS 8.0.1 is at ZFS v15. It's not scheduled to get ZFS v28 until later this year, but there are some custom builds available with it (not stable?).

I'm probably going to pull the trigger on the Kingstons and see how they treat me. I do like the idea of getting ZFS to v19 or higher somehow.
 
Right now I'm using the 20GB 311. I don't have huge amounts of data to write, just random writes.
 
So yeah -- when playing around with log devices you know might die, you most definitely want to be at a high enough revision to support not only log removal, but "zpool import -m". If you do not support -m, you're just asking for a world of hurt. That's the flag that allows ZFS to import a pool with a missing ZIL; prior to the inclusion of that flag, if you lost the ZIL, you were in big trouble if the pool had to be imported (which happened, more than once).
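For reference, on a version that supports it, recovering from a dead slog is roughly this (pool and device names are just examples):

Code:
# import the pool even though its separate log device is missing/dead
zpool import -m tank

# then drop the dead log device from the pool config (needs log removal support, pool v19+)
zpool remove tank c4t2d0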

Also, on SSDs -- how fast we're going to destroy your SSD has much more to do with your workload, and how much of that workload actually utilizes the ZIL (async writes often may not). In a very heavy environment, considering the low 'maximum writes' threshold that MLC cells have, I wouldn't be surprised to hear of death in mere weeks. Then again, depending on what you do at your home, it might last years? It really depends on average writes/s. If memory serves, every cell in an MLC SSD is only good for a few thousand program/erase cycles (it's SLC that's closer to 100K).. assuming it's a 16+ GB MLC SSD and you're using it as a ZIL, chances are good we never actually need more than a GB or two of it, so as it dies we'll probably be fine for a while.

Honestly, SSD death and how it occurs and what it feels like when it happens remains something of an enigma to me - I suppose their general 'newness' means it isn't common wisdom, I don't know. I do know at my home, on my little 'SAN' offering up storage to a handful of home desktops that at best see me play a few games, I have never felt the need for a ZIL (I do sync=disabled on filesystem datasets, and of course on iSCSI zvols COMSTAR's wce setting is basically doing that for me in a home use-case). But I just use them as 'hard disks' for a handful of desktop machines, nothing remotely major goes on at the house (I leave all that for the labs).
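For anyone following along, that per-dataset knob is just the sync property; a rough example with made-up dataset names:

Code:
# treat all writes to this dataset as async -- the ZIL is bypassed entirely
zfs set sync=disabled tank/desktops

# check it, and put it back to default behaviour later if you change your mind
zfs get sync tank/desktops
zfs set sync=standard tank/desktops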

Oh, and bear in mind -- a cheap MLC SSD may very well end up bottlenecking you worse than just leaving it on the spinning disks! See this post (which I just made tonight in fact): http://www.nex7.com/node/12 -- essentially if the average write latency on the log device is not great, it'll quickly be your bottleneck due to how ZFS uses it. Getting more than one in a stripe won't help.
 
Thanks for that detail Nex7, I have enjoyed catching up on your recent posts here.


I had a really simple test to determine whether I wanted a ZIL device in my home lab - I first tried iSCSI without it, then with it. I saw noticeable improvements with an SSD attached as the ZIL.

Where I might have shot myself in the foot is using an OCZ 60GB Agility 3 for my testing, then ordering a pair of 16GB Kingstons for my actual mirror implementation. There is a good chance that they will not perform anywhere near the OCZ that was in there. Drives will be at my doorstep tomorrow so we will find out either way!
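If it helps anyone later, my plan for swapping the test drive for the mirrored pair is roughly this (device names are made up, and log removal needs a newer pool version than stock FreeNAS 8.0.1 has):

Code:
# pull the OCZ test drive out of the log role (needs pool v19+ for log removal)
zpool remove tank c2t4d0

# add the two Kingstons as a mirrored log device
zpool add tank log mirror c2t5d0 c2t6d0

# confirm the new layout
zpool status tank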

I also have to wonder: with the ZIL being essentially a mirror, both SSDs will theoretically see the same writes? Thinking it through, I'd suspect that if and when one drive failed, the other would follow suit shortly after.
 
Drives will be at my doorstep tomorrow so we will find out either way!

Post what you find out! I would definitely suspect that the OCZ Agility 3 was probably more performant, but I am not personally familiar with either it or the Kingstons. What it really boils down to is the controller and design.

I also have to wonder: with the ZIL being essentially a mirror, both SSDs will theoretically see the same writes? Thinking it through, I'd suspect that if and when one drive failed, the other would follow suit shortly after.

This is going to depend on the nature of their death. If they are both well manufactured, built the same day, and all else is equal, and the cause of death ends up being write wear, then I would suggest that yes, they will indeed die at almost exactly the same time, because they have seen exactly the same I/O pattern their entire lives.
 
Hey Nex7....

Thanks for your posts here, good info.

This is regarding your blog post and the issues with that system.

Question about the SSD locking.
If the SSD gets locked until the write is complete.....

Why then, if you add a second SSD into the mix, do both not get locked? i.e. both are now striped, so one would presume the writes are spread across the drives, not written in a round-robin fashion (one drive, then the next, etc.), yes?

Or does the ZIL work slightly differently to a typical ZFS stripe? (That is, adding more than one SSD to the ZIL is not like adding another drive to a typical vdev.)

.
 
@Stanza33

Mmm, there's a very big difference between what happens with a write WITHIN a vdev (like, say, on the 5 disks making up one of your raidz1 vdevs, or your 2-disk mirror vdev) and what happens with an incoming write at its initial ingestion into the pool. There is a difference between how writes received by the pool and writes received into the ZIL are handled in terms of passing them out to vdevs, but the way the pool handles it is not, I think, the way you're imagining.

Your comment:
Why then, if you add a second SSD into the mix, do both not get locked? i.e. both are now striped, so one would presume the writes are spread across the drives, not written in a round-robin fashion (one drive, then the next, etc.), yes?

That implies that when writes come into ZFS the individual writes get broken down and then pieces of them are written across top-level vdevs. This is not the case. A lot of this is covered here, if you're interested: http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/fs/zfs/metaslab.c#1282.

For a simple example, umm.. ok, try this, off the cuff here so forgive me.. You have a guy holding 5 sheets of paper. In front of him are 2 guys, behind each of those 2 guys are 2 more guys, and behind each of THOSE guys is a big stack of paper. The main guy looks at his 5 sheets, looks at the 2 people in front of him, they have a quick discussion, then he hands the first guy 3 sheets of paper and the other guy 2.

Those 2 guys immediately photocopy the paper they received, then both turn around to the two guys behind them and hand each one a copy. Those guys then turn and file them in the big pile behind them.

I just described a pool with 2 mirrored 2-disk vdevs. Basically. Kinda. Probably badly. If it was raidz, those middle guys would have ripped the pieces of paper up into bits, created some extra pieces of paper to allow rebuilding any missing bits, and then handed them to a bunch of guys sitting behind them. But the duplication for a mirror, or the splitting of a single piece of paper into pieces for redundancy, is handled by that middle set of guys. Those are your top-level vdevs, not your pool. Your pool, that first guy, only sends entire pieces of paper to each of the guys under him, and tries to lay out his pieces of paper in a fair way (going off the sizes of the stacks of paper behind his guys).

Speaking only to the ZIL, it is a simple for loop across the top-level log vdevs, a bit different from the situation above: instead of taking in a bunch of writes and then spreading them out evenly, I take in some, write them to the first vdev while receiving some more, take what I received in the meantime and write it to the second while receiving more, then go back to the first to write those, repeat, repeat. It is held hostage until it gets back a response from the vdev saying the write/flush action has completed, and it doesn't take the writes it received in the interim and 'spread them out'; it just goes one by one through the top-level vdevs. Then it goes on to the next vdev in the ZIL, repeat, repeat. This means it is held hostage by the write latency of whichever top-level vdev it is speaking to at the moment. Now, it is queuing up writes in the interim between that one and the next -- and to be clear, for those queued writes it is NOT responding to the clients saying 'done'; it is holding them, awaiting a log vdev to put them on, and only responding once that's complete -- which is where the write latency comes from.

Since we're talking about a loop whose per-iteration time, on SSDs or RAM-based devices, is measured in low milliseconds or preferably microseconds, it helps to put this in perspective. Even on a fairly slow SSD as described, with a 0.35 ms response time, that's still 0.35 MILLISECONDS. While held hostage by that SSD until each write/sync commit is complete, ZFS was still able to do roughly 3,000 operations/s to the device (1 second / 0.35 ms ≈ 2,857).

Because I'm held hostage by each one and then on to the next, I'm basically as slow as my slowest drive. Adding one slow drive and a bunch of fast drives isn't going to improve the situation much. I'll get to do a bunch of writes real fast as I iterate through the fast ones, but then I hit the slow one and I'm stuck until it comes back, leaving those other fast ones sitting idle. An improvement, but not nearly as good as pulling the slow one out. Note that as the latency drops, you can get to a point where the average latency of your log devices is so low that it is NOT the bottleneck, and instead the IOPS or throughput potential of the devices themselves become the bottleneck.
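If you want to watch this happen, keeping an eye on the log vdev(s) while pushing sync writes makes the bottleneck fairly obvious; something along these lines (pool name is an example):

Code:
# per-vdev ops and bandwidth every second -- log devices show up under their own heading
zpool iostat -v tank 1

# on illumos/Solaris, per-device average service time (asvc_t) is a decent proxy
# for the write latency the ZIL is being held hostage by
iostat -xn 1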

I think there's room for improvement in this code in ZFS, but it isn't simple. You can't just introduce a dumb queue that takes in writes and sends them to the 'first free log vdev' -- then your writes get out of sequence, and with the ZIL I'm pretty sure that's bad. Some work would be needed, but still, maybe long-term it can be improved.
 
@Stanza33

I think there's room for improvement in this code in ZFS, but it isn't simple. You can't just introduce a dumb queue that takes in writes and sends them to the 'first free log vdev' -- then your writes get out of sequence, and with the ZIL I'm pretty sure that's bad. Some work would be needed, but still, maybe long-term it can be improved.

Thanks for the reply.... answered a few questions I had.

Now I am off to find out if the ZIL can have its record size changed, as can be done for the L2ARC.... if for no other reason than to know whether it can, and the possible benefits / downsides of doing so... e.g. going higher to increase IOPS, etc.

ie reading this old blog
https://blogs.oracle.com/brendan/entry/l2arc_screenshots

He talks of changing the record size for the L2ARC.

quote
•The L2ARC is currently suited for 8 Kbyte I/Os. By default, ZFS picks a record size (also called "database size") of 128 Kbytes - so if you are using the L2ARC, you want to set that down to 8 Kbytes before creating your files. You may already be doing this to improve your random read performance from disk - 128 Kbytes is best for streaming workloads instead (or small files, where it shouldn't matter.) You could try 4 or 16 Kbytes, if it matched the application I/O size, but I wouldn't go further without testing. Higher will reduce the IOPS, smaller will eat more DRAM for metadata.
/quote

.
 
Thanks for the reply.... answered a few questions I had.

Now I am off to find out if the ZIL can have its record size changed, as can be done for the L2ARC.... if for no other reason than to know whether it can, and the possible benefits / downsides of doing so... e.g. going higher to increase IOPS, etc.

ie reading this old blog
https://blogs.oracle.com/brendan/entry/l2arc_screenshots

He talks of changing the record size for the L2ARC.

quote
•The L2ARC is currently suited for 8 Kbyte I/Os. By default, ZFS picks a record size (also called "database size") of 128 Kbytes - so if you are using the L2ARC, you want to set that down to 8 Kbytes before creating your files. You may already be doing this to improve your random read performance from disk - 128 Kbytes is best for streaming workloads instead (or small files, where it shouldn't matter.) You could try 4 or 16 Kbytes, if it matched the application I/O size, but I wouldn't go further without testing. Higher will reduce the IOPS, smaller will eat more DRAM for metadata.
/quote

.

I've heard about this too, so I'm curious to see your findings.
 
I think you're misreading that comment, but I'm also not sure what he means by 'the L2ARC is currently suited for 8 Kbyte I/Os'. The size of the L2ARC's entries is determined by your dataset; they are not always 128K. That is why, with a typical record size of 128K, the RAM requirement for, say, 100 GB of L2ARC is significantly lower than it is for 100 GB of entries with an 8K average record size (you'll need 16x more RAM to address that 100 GB than you did at a 128K average).

As far as I know, you cannot alter the ZIL record size; it is determined by the incoming data.
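To be clear, the knob Brendan's post is talking about is the per-dataset recordsize property (it only applies to files written after the change); something like this, with a made-up dataset name:

Code:
# set an 8K record size before creating/copying the files you care about
zfs set recordsize=8K tank/db

# verify it
zfs get recordsize tank/db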
 
I think you're misreading that comment, but I'm also not sure what he means by 'the L2ARC is currently suited for 8 Kbyte I/Os'. The size of the L2ARC's entries is determined by your dataset; they are not always 128K. That is why, with a typical record size of 128K, the RAM requirement for, say, 100 GB of L2ARC is significantly lower than it is for 100 GB of entries with an 8K average record size (you'll need 16x more RAM to address that 100 GB than you did at a 128K average).

As far as I know, you cannot alter the ZIL record size; it is determined by the incoming data.

I agree with what you're saying, but I too have read about changing the ZFS default of a 128K record size to something smaller to better suit the SSDs, and I wasn't sure whether that was really recommended or not. I understand it would take up more RAM, but I don't know the tangible benefits, so it's hard for me personally to compare.
 
Ugh, this gets into a lot of areas -- prefetch settings, block alignment, over/under-utilization, etc. I'm not the best equipped to deal with these having not spent years in performance labs or doing real-world performance tuning (we do have people at Nexenta who have!), but..

In general, I tend to suggest sticking to the defaults unless they just do not perform well enough for you. When they don't, start by looking at the easiest performance wins, and only really dig into things like block size when you've exhausted the easy stuff. Easy (note I don't say cheap) stuff is things like: a better pool configuration, adding a log device, splitting load amongst multiple machines, changing application settings, upgrading from 1GbE to 10GbE, etc.

When talking block size, I tend to suggest that if you are going to modify it from the default (which is rarely a huge boon to performance if you've exhausted the other tunables first), you line everything up. If you have a database that talks in 4K chunks to a filesystem that uses 4K blocks, and that filesystem actually sits on an iSCSI zvol from Nexenta, make the zvol's block size 4K as well. Again, that advice is mostly applicable to specific situations -- usually involving tons of small-block I/O (sometimes Exchange or other MTAs, especially their spool area, often databases, and in some cases virtual machine environments). Sometimes the answer, by the way, is not just to line ZFS up with the application, but to modify the application as well -- 4K is in many cases just a silly low default size -- 8K, 16K, even 32K often don't lead to any appreciable under-utilization of space on disk, and when everything is aligned they get you a good performance benefit over a straight 4K solution. Other times, 4K IS the answer (dedup, for example: when the dataset in question can have a DDT that fits in RAM, you've got the RAM for it, and the dataset dedupes well -- when those are all true, and I caution you they RARELY are, dedup can do wonders -- I've seen a paste showing a 400:1 dedupe ratio, for example).
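As a concrete example of 'lining things up': a zvol's block size can only be chosen at creation time, so roughly (names and sizes are made up):

Code:
# create an iSCSI-backing zvol whose block size matches the 4K I/O the application will issue
# (volblocksize cannot be changed after creation)
zfs create -V 100G -o volblocksize=4K tank/vm01

# check it
zfs get volblocksize tank/vm01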
 
Why do you need ZIL? You are a home user. Sure, if you move big files, then your VMs in ESXi might lock up briefly. But you can live with that. I suggest you try without ZIL first. Maybe you don't need ZIL. It is very easy to add ZIL to your zpool later. Just one command, I think...
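For what it's worth, it really is one command to add a dedicated log device, roughly (pool and device names made up):

Code:
# attach an SSD as a dedicated log device to an existing pool
zpool add tank log c4t0d0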
 
You will always have a ZIL unless you set sync=disabled. You are confusing a separate ZIL device with the ZIL itself. With no dedicated ZIL device, the pool itself is used. Note that if this is an ESXi NFS datastore, writes will be agonizingly slow with sync on and no separate ZIL device.
 
Hi,

what about the Intel 520 SSD?
I read some benchmarks here and I think they look very good!?
Intel 520 review
and here an enterprise review:
Intel 520 enterprise review

The write IOPS look very good, especially if you enlarge the over-provisioning.

In the first review, they also tested the write latency: 0.05 ms, with a max of 38 ms.

EDIT: I forgot one question. I think that an SSD with a SandForce controller doesn't have a cache, correct? Without a cache, no backup battery should be necessary, should it?

ghandalf
 
Hi,

what about the Intel 520 SSD?
I read some benchmarks here and I think they look very good!?
Intel 520 review
and here an enterprise review:
Intel 520 enterprise review

The write IOPS look very good, especially if you enlarge the over-provisioning.

In the first review, they also tested the write latency: 0.05 ms, with a max of 38 ms.

EDIT: I forgot one question. I think that an SSD with a SandForce controller doesn't have a cache, correct? Without a cache, no backup battery should be necessary, should it?

ghandalf

Unfortunately, there are manufacturers making SandForce drives with supercaps or capacitor arrays. See here on SandForce's site as well: http://www.sandforce.com/index.php?id=21&parentId=2

So it must still be necessary. :(
 
OCZ have a full range of SandForce Enterprise drives:

Deneva 2 C: SandForce 2281, Async, Sync and Toggle MLC, eMLC or SLC NAND, 3.5" and 2.5", no supercap, SATA

Deneva 2 R: SandForce 2581, Sync NAND, 3.5" and 2.5", supercap, SATA


Talos 2 C: Sync NAND, 3.5" and 2.5", no supercap, SAS

Talos 2 R: Sync NAND, 3.5" and 2.5", supercap, SAS


There's also the Z Drives. C series again with no power protection, R series has it.
 
I agree that a dedicated log device may be more than any home system needs, and I fully agree that dropping cash on an SLC SSD is overkill for a home system.

What I can tell you is that my performance gain was rather significant when I put in a pair of 16GB (mirrored) Kingston SSDs as my log device. I am doing a lot of lab work where I need to provision a Windows system, blow it away, then provision a new one, so it helps a lot. I am trying to find a worthy benchmark to run against my storage system with and without dedicated log devices. I also want to test the benefit of a cache device.
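Since log and cache devices can be added and removed on a live pool (removal needs a new enough pool version), the with/without comparison should be easy to script; a rough sketch with made-up device names:

Code:
# baseline: strip the dedicated log and cache
zpool remove tank mirror-1      # the mirrored log shows up as 'mirror-N' in zpool status (pool v19+)
zpool remove tank c5t2d0        # the cache device
# ...run the benchmark...

# add them back and run it again
zpool add tank log mirror c5t0d0 c5t1d0
zpool add tank cache c5t2d0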
 
You will always have a ZIL unless you set sync=disabled. You are confusing a separate ZIL device with the ZIL itself. With no dedicated ZIL device, the pool itself is used. Note that if this is an ESXi NFS datastore, writes will be agonizingly slow with sync on and no separate ZIL device.

VMware forces O_SYNC; it doesn't matter what the server settings are. Disabling ZFS sync does speed things up a bit, but VMware is still using O_SYNC.
 
Well, I have just set up (well, am still setting up) a Microserver SAN.

It's going to be a test rig for ZIL, L2ARC, and possibly FC as well.

Setup is:
N36L Microserver
8GB ECC RAM
120GB 2.5-inch Hitachi SATA drive <<< OS
2 x 1TB 2.5-inch drives
1 x STEC Mach16IOPS 50GB SSD for ZIL
1 x STEC Mach16IOPS 50GB SSD for L2ARC
1 x Intel CT Gigabit PCI-E x1 LAN card in the x1 slot
1 x NC360T dual-port Gigabit LAN card in the x16 slot

Nice and low-powered, so I don't worry about energy bills.

The 2 x 1TB 2.5-inch drives will be mirrored and shared over NFS to another 2 Microservers running ESXi.
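For the NFS side of that, sharing a dataset off the ZFS box is just a property; a minimal sketch (dataset name is made up, and ESXi may also want root access granted in the share options):

Code:
# create a dataset for the ESXi datastore and share it over NFS
zfs create tank/esxi
zfs set sharenfs=on tank/esxi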

I tried the above config on my ML350 G5 with 16GB RAM and 4 x 72GB SAS drives and it seemed to purr along very nicely, so it will be interesting to see how 8GB less memory and half the CPU cores affect the overall snappy feel of it all.

Here's hoping.

.
 
SandForce drives do have a cache of sorts -- it's just internal, not external DRAM, and it's probably quite small. The Intel 320 series uses some smaller capacitors instead of super-capacitors (which are expensive and a single point of failure). Only the 320 and 720 series have power-loss protection (PLP) at this point (as far as Intel drives go). The successor to the X25-E, the 711 series, may or may not have it.

But very few consumer drives have power-loss protection (almost none save the Intels), so if that is a concern, the 320 series is your best bet. They don't write fast enough to wear out in just a few weeks, or even a few months. Even the 40GB 320 would take the better part of a year to kill if it were writing as fast as it could 24/7. Even assuming the workload was full-span 4K random writes at QD32, where write amplification would be well above 10, it would still take that long to kill. Additionally, the 320 does have extra flash on board (for instance, the 40GB 320 actually has 48GB, with the extra 8GB for its "RAID 4-like" redundancy scheme). I have seen one 320 lose a die and keep ticking, so it does work.

The JEDEC tests behind P/E cycle ratings are by their nature quite conservative. Enterprise drives have the benefit of only having to retain data for 3 months vs. 1 year for consumer drives. This, combined with over-provisioning, allows for much more total bytes written (TBW).

That doesn't mean that some MLC drives won't flake out under this workload, but some are better than others when faced with severe conditions. On the other hand, there is no substitute for SLC. Single-level-cell latencies are much, much better than MLC's. Even if endurance weren't an issue, the performance from SLC is spectacular.
 
OK, more play tonight.

Simple setup:

Microserver
8GB RAM
120GB 2.5-inch drive for OS
2 x 1TB WD EAVS Green drives for the pool (mirrored)
1 x STEC 50GB Mach16IOPS for ZIL
1 x STEC 50GB Mach16IOPS for L2ARC

First test:

Code:
  pool: tank
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM     CAP            Product
	tank        ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    c3t0d0  ONLINE       0     0     0     1000.20 GB     WDC WD10EAVS-00D
	    c3t1d0  ONLINE       0     0     0     1000.20 GB     WDC WD10EAVS-00D

errors: No known data errors

Shared via CIFS to Windows 7 64-bit on another Microserver with 4GB RAM.

Run the SQLIO benchmark (a good test of random I/O) on a file size over 4 times the installed RAM... I used 30GB, as the Microserver has 8GB of RAM, to avoid RAM caching.

i.e. put a 30GB file on the shared drive and hit it hard with 8K random I/O for 600 seconds:

Code:
C:\Program Files (x86)\SQLIO>sqlio -kW -s600 -frandom -o8 -b8 -LS -FSQLmon3.txt
sqlio v1.5.SG
using system counter for latency timings, 1267441 counts per second
parameter file used: SQLmon3.txt
        file z:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
2 threads writing for 600 secs to file z:\testfile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 8 outstanding
size of file z:\testfile.dat needs to be: 31457280000 bytes
current file size:      0 bytes
need to expand by:      31457280000 bytes
expanding z:\testfile.dat ... done.
using specified size: 30000 MB for file: z:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:   219.71
MBs/sec:     1.71
latency metrics:
Min_Latency(ms): 8
Avg_Latency(ms): 72
Max_Latency(ms): 8727
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  2 98

Yes yes, lots of numbers and stuff....
But the interesting part (latency) is the histogram at the end.

That is, 2% of the requests took 23 ms to complete, and 98% of them took 24 ms or more.... pretty damn sad.

Now let's pop a ZIL SSD and an L2ARC SSD into the pool:

Code:
  pool: tank
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM     CAP            Product
	tank        ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    c3t0d0  ONLINE       0     0     0     1000.20 GB     WDC WD10EAVS-00D
	    c3t1d0  ONLINE       0     0     0     1000.20 GB     WDC WD10EAVS-00D
	logs
	  c3t2d0    ONLINE       0     0     0     50.02 GB       STEC MACH16 M
	cache
	  c3t3d0    ONLINE       0     0     0     50.02 GB       STEC MACH16 M

errors: No known data errors

Erase the SQLIO test file and try again:

Code:
C:\Program Files (x86)\SQLIO>sqlio -kW -s600 -frandom -o8 -b8 -LS -FSQLmon3.txt
sqlio v1.5.SG
using system counter for latency timings, 1267441 counts per second
parameter file used: SQLmon3.txt
        file z:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
2 threads writing for 600 secs to file z:\testfile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 8 outstanding
size of file z:\testfile.dat needs to be: 31457280000 bytes
current file size:      0 bytes
need to expand by:      31457280000 bytes
expanding z:\testfile.dat ... done.
using specified size: 30000 MB for file: z:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:   350.07
MBs/sec:     2.73
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 45
Max_Latency(ms): 4235
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0 26 26  9  7  3  1  1  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 25

Much better this time:
We now see 26% of requests took 1 ms to complete,
another 26% of requests took 2 ms,
another 9% took 3 ms,
another 7% took 4 ms,
another 3% took 5 ms,
and 1% each at 6 ms and 7 ms,
with only 25% of our requests taking 24 ms or more.

.
 
Stanza33, have you heard back from anyone at STEC regarding availability of FW updates?
 
Why not use a Raptor for the ZIL? AFAIK the ZIL is pretty much entirely sequential, so one of those would work well, and you wouldn't need to worry about it wearing out.
 
Why not use a Raptor for the ZIL? AFAIK the ZIL is pretty much entirely sequential, so one of those would work well, and you wouldn't need to worry about it wearing out.

A Raptor just wouldn't have enough IOPS to handle the load.
 