Software RAID6 vs Hardware

ok, but in response to that I say that my Intel software RAID 5 on my i7 920 server gets a staggering 35MB/s and takes days to rebuild.

That might be; a couple dozen things could be at play there. Point is, software RAID can be extremely fast, even on cheap shit, like I posted.
 
My point is, hardware RAID is just a tiny computer on a card, running software. It has a CPU, RAM, it runs a BIOS, etc. - it is a tiny computer (that is why they cost so much). What is the difference whether you run the software on the card's CPU or on the server's CPU?

Sure, hardware RAID will offload some bus traffic and latency, but is that a problem today? ZFS does checksum calculations on every block of data read (similar to doing an MD5 checksum on every data block, so ZFS can detect data corruption), and this takes CPU power, yes. But it is something on the order of 3-5% of one core on a quad-core CPU. Surely you can trade 3-5% of CPU power in exchange for not needing to buy another piece of hardware. I doubt there are many users who cannot afford 3-5% of one core. Servers have 6-12 core CPUs, and some have dual or even more CPUs.
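If you want a rough feel for that cost on your own box, here is a quick sketch (my own, not from ZFS) that measures single-core MD5 throughput over 128 KiB blocks; ZFS actually defaults to the much cheaper fletcher4 checksum, so if anything this overstates the work per block. Divide your array's read throughput by the number it prints and you get a rough upper bound on the fraction of one core spent checksumming.

Code:
import hashlib
import os
import time

block = os.urandom(128 * 1024)  # one 128 KiB record, roughly ZFS's default recordsize

total = 0
start = time.perf_counter()
while time.perf_counter() - start < 2.0:
    hashlib.md5(block).digest()   # checksum one block, as would happen on every read
    total += len(block)
elapsed = time.perf_counter() - start

print(f"MD5 over 128 KiB blocks: ~{total / elapsed / 1e6:.0f} MB/s on one core")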

Hardware RAID was once useful and had its place, a long time ago when CPUs were weak. But today you can do it all in software running on the host CPU. I would sell my hardware RAID cards today, because there is still a market for them. But I predict that market will diminish.

Yet, cheap ($300-$500) dedicated controllers seem to beat the crap out of a fairly modern CPU when running RAID6 in terms of raw throughput. However, as you implied, storage systems seem to be moving towards providing more features through more complex file systems. Since the file system is already doing a lot of work, we can move things from the underlying RAID layer directly into the file system without much of a performance penalty.
 
Yet, cheap ($300-$500) dedicated controllers seem to beat the crap out of a fairly modern CPU when running RAID6 in terms of raw throughput.

Do they? I am not so sure of that. I mean, the LSI and PERC5 mentioned on the last page gave very slow performance compared to what I see with software RAID. I achieved that level of performance (350 MB/s reads and writes) nearly a decade ago under Linux and mdadm.
 
I don't understand the numbers; can you explain them to me?

volume, milliseconds in spa_sync (time to flush to disk), write megabytes, read megabytes, write IOPS, read IOPS

actually just read this

Code:
/* Description: This script measures the ZFS transaction group commit time, and tracks
 * several variables that affect it for each individual zpool. One of the key things it
 * also looks at is the amount of throttling and delaying that happens in each individual
 * spa_sync().
 * Some important concepts:
 * 1. Delaying (dly) means injecting a one-tick delay into the TXG coalescing
 *    process,  effectively slowing down the rate at which the transaction group
 *    fills. Throttling (thr), on the other hand means closing this TXG entirely, sending
 *    it off to quiesce and then  flush to disk, and pushing all new incoming data
 *    into the next TXG that is now "filling".
 * 2. The feedback loop which determines when to stop filling the current TXG and
 *    start a new one depends on a few kernel variables. The cutoff trigger (size)
 *    is calculated from dp_tempreserved and dp_space_towrite, which this script
 *    combines into a value of reserved_max (res_max), duplicating the calculation
 *    that happens in the kernel. When res_max reaches 7/8 of current dp_write_limit,
 *    system starts delaying writes. When res_max reaches current dp_write_limit,
 *    system attempts a throttle, which has higher impact on performance. It is not
 *    normal for a system to be constantly throttling/delaying, but if this happens
 *    from time to time it's okay - the feedback loop likely set dp_write_limit too
 *    low because there was no need for it to be high, and when write pattern changes,
 *    the adjustment happens due to dp_throughput rising.
 * 3. dp_write_limit is calculated as dp_throughput (dp_thr) multiplied by
 *    zfs_txg_synctime_ms, with certain thresholds applied if necessary. NOTE: It
 *    accounts for write inflation, so it does not actually represent the amount of
 *    data that goes into any given TXG. The output of this script shows a spread of
 *    minimum and maximum of dp_write_limit recorded during each TXG, as well as the
 *    maximum of the reserve, and the current dp_throughput, which is calculated at
 *    the end of each TXG commit.
 * 4. Some comments on other output values:
 *    The X ms value at the beginning of each line is the length of the spa_sync() call
 *        in milliseconds. As a general rule, we should strive for it to be less than
 *        zfs_txg_synctime_ms, but that is not the only condition. When this number is
 *        pathologically high, this might indicate either a hardware issue or a code
 *        bottleneck; an example of such code bottleneck might be a metaslab allocator
 *        issue when pool space utilization reaches 75%-80% (sometimes even earlier),
 *        also known as free space fragmentation issue. Other causes of slowdowns may
 *        include checksumming bottleneck on a system with dedup enabled, ongoing ZFS
 *        operations such as a ZFS destroy, or an ongoing scrub/resilver, which by design
 *        will borrow time from each TXG commit to do its business.
 *    wMB and rMB is the amount of data written and read in MB's during the spa_sync()
 *        call. They are the total data written by the system, not just for the specific
 *        zpool.
 *    wIops and rIops are the I/O operations that happened during spa_sync(), also global
 *        unfortunately. They are already adjusted per second.
 *    dly+thr are the delays and throttles. Those, normally 0+0, are for the individual
 *        zpool.
 *    dp_wrl, res_max and dp_thr are covered above. Also for the individual pool. */

/* Author: [email protected] */
/* Copyright 2013, Nexenta Systems, Inc. All rights reserved. */
/* Version: 3.0 */
 
Yeah, this is just my backup array for media and web files/DBs (pics, movies, PHP, HTML, MySQL), and thus I want the most redundancy for the $.
RAID is not a back-up, never has been, never will be.

It's intended for disk failure in 24/7 uptime situations and not for backup.

Do a little research and see what I mean.
 
I'm not sure how to parse what he said. If it was the only copy of his data, I'd agree with you. It could be read to mean it is a backup array of production data, and he wants raid to minimize the chances of losing data off the backup?
 
I'm not sure how to parse what he said. If it was the only copy of his data, I'd agree with you. It could be read to mean it is a backup array of production data, and he wants raid to minimize the chances of losing data off the backup?
So why not just buy a drive and do back-ups?

Maybe read this and see what I mean.

Even in the professional situations where RAID is used they have separate backups of the data.
 
My point is, hardware RAID is just a tiny computer on a card, running software. It has a CPU, RAM, it runs a BIOS, etc. - it is a tiny computer (that is why they cost so much). What is the difference whether you run the software on the card's CPU or on the server's CPU?

This is actually incorrect. These "tiny computers" are optimized for certain tasks. They excel over your typical CPU in these specialized tasks because that is what they were designed for.

For example, take a Cisco VPN ASA. It's a hardware device that does firewall/vpn/etc. Now, that Cisco device can process AES encryption (VPN) at an insane rate. It was designed for it, and these Cisco devices can hands down BRUTALIZE any typical Intel Xeon/Core CPU. It's not even close. Intel has made steps toward leveling the playing field with AES hardware acceleration, but it still can't touch a device built for AES encryption/decryption.

RAID parity works the same way. I'm not saying MDADM sucks, I'm merely saying a device built for RAID will beat a device built for general usage computing every damn day.

I could use bitcoin mining as another example (the PS3 DESTROYS typical processors at it), or graphics processors, but I believe you get the picture.
 
I'm merely saying a device built for RAID will beat a device built for general usage computing every damn day.

I disagree in this case. A modern CPU has a lot more processing power than even the specialized processors found on these cards, including for parity calculations.

In other areas, like compression (I can think of LTO tape drives) or encryption, there are cases where a modern CPU is outperformed by a special-purpose processor.
 
This is actually incorrect. These "tiny computers" are optimized for certain tasks. They excel over your typical CPU in these specialized tasks because that is what they were designed for.

For example, take a Cisco VPN ASA. It's a hardware device that does firewall/vpn/etc. Now, that Cisco device can process AES encryption (VPN) at an insane rate. It was designed for it, and these Cisco devices can hands down BRUTALIZE any typical Intel Xeon/Core CPU. It's not even close. Intel has made steps toward leveling the playing field with AES hardware acceleration, but it still can't touch a device built for AES encryption/decryption.

RAID parity works the same way. I'm not saying MDADM sucks, I'm merely saying a device built for RAID will beat a device built for general usage computing every damn day.

I could use bitcoin mining as another example (the PS3 DESTROYS typical processors at it), or graphics processors, but I believe you get the picture.
You are actually supporting my claim, that "earlier we needed hardware, but today we can do it in software".

These hardware RAID cards, with an 800 MHz PowerPC and 512MB RAM, are optimized for a certain task, yes. But I suspect a decent (non-high-end) CPU will beat the PowerPC hands down at parity calculations. Doing XOR is not costly, and many CPUs have vector instructions that can help. If you pit a multi-core 2.4GHz CPU against an 800 MHz PowerPC, both doing the same task in software (XOR calculations), I am pretty sure the server CPU is way faster. It has more IPC, is 3x higher clocked, and its cache is huge in comparison.
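To put a rough number on how cheap plain XOR is, here is a small sketch (mine, assuming the third-party numpy package is installed) that computes RAID5-style parity over a four-disk stripe on one core; RAID6's second (Q) parity is Reed-Solomon and heavier, but Linux md accelerates that with SSE/AVX as well.

Code:
import time
import numpy as np

# Four 1 MiB data chunks, as if they were one stripe across four data disks.
chunks = [np.frombuffer(np.random.bytes(1 << 20), dtype=np.uint64) for _ in range(4)]

iterations = 1000
start = time.perf_counter()
for _ in range(iterations):
    # RAID5-style parity: XOR of all data chunks in the stripe.
    parity = chunks[0] ^ chunks[1] ^ chunks[2] ^ chunks[3]
elapsed = time.perf_counter() - start

data_mb = iterations * 4 * (1 << 20) / 1e6
print(f"XORed ~{data_mb:.0f} MB of stripe data in {elapsed:.2f}s "
      f"(~{data_mb / elapsed:.0f} MB/s on one core)")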

Regarding the Cisco VPN ASA, it might be faster than a server CPU, sure. But in time, ordinary CPUs will be much faster, and there will come a time when a CPU is faster. Then you won't need specialized Cisco hardware. Just like hardware RAID: back in the day a hardware RAID card was way faster than a server, but not today. Besides, your comparison is not really accurate, because hardware RAID typically stays at weak 800MHz CPUs (more is not needed), whereas the Cisco has much more horsepower. If you upped the hardware RAID card to 2.4GHz and 64GB RAM, then a general-purpose server would lag behind. But that horsepower is not needed in a hardware RAID card doing XOR parity calculations; 800MHz is enough. Servers today are much more powerful than tiny hardware RAID cards, so there is a big difference. Servers are not much more powerful than a Cisco VPN ASA, but that time will come too.

Have you heard about a company called Effnet, back in... 2001? They had researched an algorithm allowing you to use ordinary PCs instead of a huge Cisco router. They failed, I don't know why, so they could not replace Cisco. But I would not be surprised if someone else succeeds by researching further in the same direction in the future. In that case, we could use an ordinary server with specialized algorithms. These normal, ordinary servers may have 100s of cores and TBs of RAM caching everything, running algorithms in parallel. Or maybe they will have quantum CPUs running everything in parallel. The Cisco routers, then, might stay at weak hardware in comparison - because more is not needed. Servers will be more powerful than Cisco routers, and then you don't need specialized hardware routers anymore.

Eventually, normal servers will surpass specialized hardware (because you don't need anything faster), and that is the time when you can move the functionality into software. Say a task requires X amount of resources, which can only be provided by specialized hardware. When servers reach X in the future, the specialized hardware is not needed anymore. This time will come. Not yet for Cisco routers, but for hardware RAID it came a long time ago. Eventually you can run everything on CPUs.

Back in the day, you needed different specialized hardware cartridges to play Atari games. When you wanted to play a different game, you changed the cartridge - that was a huge thing. Because earlier, an arcade machine was built to play only one game, so if you wanted to play another game, you needed another arcade machine. Then, with the Atari, you could just change the cartridge, so you did not need different machines to play different games. Flexibility. Later came computers, and when you wanted to play a different game, you LOADED a different game into the computer; you did not need specialized hardware. A computer lets you do different tasks merely by loading different software - it is REPROGRAMMABLE. You don't need different pieces of hardware anymore; you can do it in software. That is the point of computers: they can change functionality merely by being told to do so (by loading software). Imagine you had a machine that could turn into a car, or a washing machine, or a TV, or whatever, merely by telling it to do so - it would be the equivalent of a computer; it would be reprogrammable. It could change functionality. Computers do this - that is why they are successful. Today, I load an emulator that lets me play Atari, SEGA, Nintendo, Amiga, etc. - I don't need different hardware machines; everything can be replaced and done in software today. One PC can replace many, many different hardware devices: video player, music studio, word processor, arcade machine, fax, etc.

My mobile phone can act as a game machine, GPS, camera, calendar, video player, MP3 player, telephone, etc. - just by starting different programs. Earlier, you needed lots of different pieces of hardware to do that; you needed tens of devices, but today I only need one device. That device can change functionality; it is programmable. So all of those hardware devices can be done in software today, on one device. Every piece of hardware is moving into software. Earlier I needed to buy lots of different devices to create music: guitar effect pedals (one for echo, another for reverb, etc.), music recorders, and so on. I had to buy tons of things, costing a lot of money. Today I create music on a PC, and whatever functionality I need, I download it. Everything is done on the PC today: guitar effect boxes, drum machines, keyboards, the whole music studio. I don't need tons of devices; I just need one PC and download the functionality I want. I can even play piano on the keyboard! New pianos have arrived that are just a cheap keyboard with a good piano feel, which you connect to a PC, and the PC produces the piano sound using software. If you want another piano, just change the software sample. No need to buy a Bösendorfer or Yamaha C90 for $100,000s; no need to buy different expensive pianos. Many CD recordings today actually use sampled drums - no need for real drums anymore. PCs are replacing hardware, even people! Today PCs trade on the stock exchange; earlier it was human traders.

Do you see the trend? Do you agree that every piece of hardware or functionality will eventually be turned into software that you can download at will? So your PC is a chameleon, changing functionality into whatever you want it to be. This is a bit of a lengthy post, but I hope I explained why I believe this. If you don't agree, please explain why - it would be interesting to see if I am wrong! :)

PS. I don't have time to write short, therefore I write long. Churchill said as much. I could have shortened this, removed redundancy, corrected the spelling, and made it clearer, but that would take time. Writing a short text takes much effort; writing a long text saying the same thing is easily done. That is maybe why politicians talk for so long. ;)
 
volume, milliseconds in spa_sync (time to flush to disk), write megabytes, read megabytes, write IOPS, read IOPS

actually just read this
Too much work to figure out what your point is. Next time I would appreciate it if you made your point more explicit, so someone like me can understand. :)

It is like you are showing us an equation and asking us to solve it before we understand your point. Some of us cannot do that. I don't know as much as you about storage (I doubt many do), so what is clear to you might not be clear to me. So please, next time, be more explicit so even noobs like us can understand you? :)
 
Too much work to figure out what your point is. Next time I would appreciate it if you made your point more explicit, so someone like me can understand.

Says the guy with the giant wall-of-text post that can't get to the point. But by all means, sit here attacking everyone else; you'll go far.
 
Too much work to figure out what your point is. Next time I would appreciate it if you made your point more explicit, so someone like me can understand. :)

It is like you are showing us an equation and asking us to solve it before we understand your point. Some of us cannot do that. I don't know as much as you about storage (I doubt many do), so what is clear to you might not be clear to me. So please, next time, be more explicit so even noobs like us can understand you? :)
the point was ZFS software RAID isn't slow, with numbers to back that up. 50k write IOPS for a volume that isn't laid out to be extremely fast at writing is quite good.

also, the Cisco 5550 lists 425Mbps for VPN throughput. You can go all the way up to the 5585-X with SSP-60 and get 20Gbps. That's a $300K box, btw.

in comparison, Sandy Bridge Xeons with AES-NI can do about 1.5GB/s. I wouldn't say they're getting crushed.
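if anyone wants a ballpark figure for their own CPU, here is a small sketch (mine, assuming the third-party Python cryptography package, which goes through OpenSSL and uses AES-NI when available) that measures single-core AES-128-GCM throughput:

Code:
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = os.urandom(12)
buf = os.urandom(1 << 20)  # encrypt 1 MiB per call

total = 0
start = time.perf_counter()
while time.perf_counter() - start < 2.0:
    aead.encrypt(nonce, buf, None)  # throughput test only; never reuse a nonce in real use
    total += len(buf)
elapsed = time.perf_counter() - start

print(f"AES-128-GCM: ~{total / elapsed / 1e6:.0f} MB/s on one core")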
 
Too much work to figure out what your point is. Next time I would appreciate it if you made your point more explicit, so someone like me can understand. :)

About 700 MBs written in a little over a half-second w/ ~49k write IOPS for his array is how I understood it.
 
RAID is not a back-up, never has been, never will be.

It's intended for disk failure in 24/7 uptime situations and not for backup.

Do a little research and see what I mean.

*Yawn* I said this is my backup array. Not that this IS my backup. :rolleyes:

What that means for my data is that it's stored on the systems it originated on, where it's often used; then it's copied to this backup array for rapid recovery, as you said; and then it's moved off-site on other media for 'real' backup.

I didn't think a thread discussing RAID6 hardware vs. software warranted my complete backup plan or usage plan, but since you found it your job to poke your nose into it, there you go.
 
*Yawn* I said this is my backup array. Not that this IS my backup. :rolleyes:

Haha, oh, web forums... they start innocent and by the third page are a pissing contest.

Hopefully by now you are thoroughly confused on what to do.
 
^-- LOL. Ya.

Sticking with my hardware RAID for now :D There won't be so much data on it that it would be restrictive to migrate in the next few months if need be. And since I do have BACKUPS I could recreate it over a longer period :p For now, I just want it up and usable :D :D
 
Haha, oh, web forums... they start innocent and by the third page are a pissing contest.

Hopefully by now you are thoroughly confused on what to do.
hmm I see a stereotype X_x If the keyword 'RAID' pops up, then insert "RAID is not a back-up, never has been, never will be. It's intended for disk failure in 24/7 uptime situations and not for backup." for giggles and watch to see if a volcano erupts. :D
 
hmm I see a stereotype X_x If the keyword 'RAID' pops up, then insert "RAID is not a back-up, never has been, never will be. It's intended for disk failure in 24/7 uptime situations and not for backup." for giggles and watch to see if a volcano erupts. :D

What's funny is you'll see I've said the same thing to at least one other person on this forum... oh, the irony :cool:
 
Do they? I am not so sure of that. I mean, the LSI and PERC5 mentioned on the last page gave very slow performance compared to what I see with software RAID. I achieved that level of performance (350 MB/s reads and writes) nearly a decade ago under Linux and mdadm.

My experience is with Areca and Adaptec. Areca, at least, has dedicated chip support for RAID6 computations.
 
My mobile phone can act as a game machine, GPS, camera, calendar, video player, MP3 player, telephone, etc. - just by starting different programs. Earlier, you needed lots of different pieces of hardware to do that; you needed tens of devices, but today I only need one device.

Without wasting my time arguing your long-standing software-beats-hardware manifesto (which you push ad nauseam in ZFS vs. hardware RAID discussions), you are still incorrect that there is no reason to innovate because the "800MHz PowerPC is enough and eventually it will all be software anyway". First of all, if you can create an ARM core at 10mm2 instead of a PPC core at 40mm2 that also uses 1/2 the traces and 1/2 the TDP while delivering double the performance, then it is a win-win. If every CPU were "more than enough", then they would sell you a simple HBA without the processor for twice the margin and 1/4 the price.

As to the "software does it all", it is not quite that cut and dry. Your mobile phone has specialized hardware which is required for almost all of those functions. I'll use an iPhone 5 as an example. You have a GPS controller chip and separate antenna. You have a camera sensor and a number of better phones also have a separate processor for the camera. Your phone/data functionality is provided by no fewer than 7 separate chips. Your MP3 player uses a dedicated MP3 codec chip (Cirrus) This is not a function of just having a CPU and running software alone gives you this magical panacea of functionality. Like all technology, more and more will eventually be integrated. You spend an inordinate amount of time and effort positing what "might be" while trying to apply it to what is. While the future will likely bring wonders, we aren't there yet.
 
Without wasting my time arguing your long-standing software-beats-hardware manifesto (which you push ad nauseam in ZFS vs. hardware RAID discussions), you are still incorrect that there is no reason to innovate because the "800MHz PowerPC is enough and eventually it will all be software anyway". First of all, if you can create an ARM core at 10mm2 instead of a PPC core at 40mm2 that also uses 1/2 the traces and 1/2 the TDP while delivering double the performance, then it is a win-win. If every CPU were "more than enough", then they would sell you a simple HBA without the processor for twice the margin and 1/4 the price.
I did not succeed in explaining myself clearly. But I am not suggesting we stop innovating, neither on the PowerPC CPU side nor in software. I am trying to say that if the PowerPC CPU is fast enough for a RAID card, then you don't need to optimize for speed anymore. If something is fast enough, why spend more resources? Why put a CPU faster than 800 MHz in a RAID card when it is not needed? That would only be a waste.


As to the "software does it all", it is not quite that cut and dry. Your mobile phone has specialized hardware which is required for almost all of those functions. I'll use an iPhone 5 as an example. You have a GPS controller chip and separate antenna. You have a camera sensor and a number of better phones also have a separate processor for the camera. Your phone/data functionality is provided by no fewer than 7 separate chips. Your MP3 player uses a dedicated MP3 codec chip (Cirrus) This is not a function of just having a CPU and running software alone gives you this magical panacea of functionality. Like all technology, more and more will eventually be integrated. You spend an inordinate amount of time and effort positing what "might be" while trying to apply it to what is. While the future will likely bring wonders, we aren't there yet.
I am trying to say that the point of using a computer is that it can emulate other machines. The computer is reprogrammable to different tasks. It is a chameleon, so it can replace different devices. For instance, I don't need an Amiga or a Sega Megadrive anymore; I can emulate them. My computer becomes an Amiga, or becomes a Megadrive. That is the whole point of using a computer; that is why they are successful.

Regarding the iPhone, sure, it needs some additional hardware for some of the devices it replaces. But the trend is clear: from one big, dedicated, expensive device to being slimmed down to a tiny chip, or replaced entirely in software, such as the calendar.

To me, the trend is clear: the computer is replacing more and more people and devices, and everything runs in software. I don't need to buy a dedicated chess machine anymore; I can turn my computer into a chess machine via downloaded software. Same with hardware RAID: it has clearly become outdated. Running RAID software on an 800 MHz PowerPC computer on a separate card, or running the software on the server - what is the difference, except that the server has vastly more resources and power?

It seems that you disagree? That computers are not replacing people or devices? I claim that the trend is that the computer replaces people and hardware devices, including hardware raid. You seem to claim the opposite? Can you elaborate and give some examples?
 
the point was ZFS software RAID isn't slow, with numbers to back that up. 50k write IOPS for a volume that isn't laid out to be extremely fast at writing is quite good.

also, the Cisco 5550 lists 425Mbps for VPN throughput. You can go all the way up to the 5585-X with SSP-60 and get 20Gbps. That's a $300K box, btw.

in comparison, Sandy Bridge Xeons with AES-NI can do about 1.5GB/s. I wouldn't say they're getting crushed.
Ah, OK. Thank you for being explicit. Actually, I don't dabble with bigger servers than my home PC, so I don't really have a feel for when you need 50K IOPS or so. Probably it is quite fast. Yes, I know the definition of IOPS, but I have never sized a server, so I don't have a feel for how fast that is. I have just used servers that others sized.
 
Here are some numbers from my mdadm RAID6 setup:

Code:
11xWD RE4 2TB mdadm RAID6, stripe_cache_size=8192

CPU: Intel Xeon X3430 @ 2.4GHz
RAM: 16GB ECC DDR3 1333MHz
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
merkur       32152M   339  98 692066  94 446460  50  3136  95 1312282  42 693.6  30
Latency             89501us     168ms     398ms    9361us   58212us   62504us

Version  1.96       ------Sequential Create------ --------Random Create--------
merkur              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 58628  81 758519 100  7300   9 59094  80 +++++ +++  4522   6
Latency               396ms    1063us    1852ms     284ms     143us    2655ms

Seagate 2TB LP vs WD RE4 2TB

Code:
8xSeagate 2TB LP in mdadm RAID6, stripe_cache_size=8192

CPU: Intel Xeon X3430 @ 2.4GHz
RAM: 8GB ECC DDR3 1333MHz
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
merkur          16G   860  93 387501  29 206814  18  4823  94 704565  25 669.0  12
Latency             11594us     521ms     301ms   40611us     152ms     206ms

Version  1.96       ------Sequential Create------ --------Random Create--------
merkur              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 50578  54 700192  99  1424   1 46576  49 1011904  98   848   1
Latency              1046ms     965us   12143ms    1171ms      48us   12321ms

Code:
8xWD RE4 2TB in mdadm RAID6, stripe_cache_size=8192

CPU: Intel Xeon X3430 @ 2.4GHz
RAM: 8GB ECC DDR3 1333MHz
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
merkur          16G   909  98 554914  39 290949  33  4622  95 845933  43 748.6  22
Latency             13845us     226ms     481ms   20591us   47392us   56955us

Version  1.96       ------Sequential Create------ --------Random Create--------
merkur              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 56440  80 688331  99  3847   4 60200  83 987006  99  2214  3
Latency               360ms     947us    3766ms     221ms      20us    4140ms

RAID10, near layout (the far layout is needed for RAID0-like read speeds, but its write performance is lower), for comparison, since RAID10 does not have the parity overhead that RAID6 has and is often as fast as or even faster than HW RAID10, as long as your application is not doing mainly sync writes (where the controller cache is far superior).
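For anyone who has not met the near/far terminology before, here is a toy sketch (my own simplification, ignoring chunk size and offset details) of where md puts the two copies of each chunk on a 4-disk RAID10:

Code:
# Toy model of md RAID10 layouts with 4 disks and 2 copies of each chunk.
DISKS, CHUNKS = 4, 8

def near(chunks=CHUNKS, disks=DISKS):
    # "near": both copies of a chunk sit next to each other in the same stripe row,
    # so reads behave roughly like RAID0 over half the spindles.
    layout = {d: [] for d in range(disks)}
    slot = 0
    for c in range(chunks):
        for _ in range(2):
            layout[slot % disks].append(c)
            slot += 1
    return layout

def far(chunks=CHUNKS, disks=DISKS):
    # "far": the first copy is striped like plain RAID0; the second copy lives in the
    # far half of each disk, shifted by one device, so sequential reads can hit all disks.
    layout = {d: [] for d in range(disks)}
    for c in range(chunks):
        layout[c % disks].append(c)        # first copy (near half of each disk)
    for c in range(chunks):
        layout[(c + 1) % disks].append(c)  # second copy (far half, rotated by one disk)
    return layout

print("near:", near())
print("far: ", far())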

Code:
8xWD RE4 2TB in mdadm RAID10, near

CPU: Intel Xeon X3430 @ 2.4GHz
RAM: 16GB ECC RAM DDR3 1333MHz
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
merkur       32152M   771  97 327565  29 176701  15  3944  92 476405  17  1063  18
Latency             16994us     275ms     853ms   16291us   53837us   74728us

Version  1.96       ------Sequential Create------ --------Random Create--------
merkur              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 50672  80 697910  99  6908  11 60476  85 +++++ +++  4452   8
Latency               505ms    1103us    2049ms     211ms     143us    1908ms

I have been collecting bonnie++ results from several of my systems with different software and hardware RAID configurations over several years; you can find them here if you are interested:
https://wiki.proikt.com/wiki/Bonnie++_benchmark_results

The Cisco ASA VPN was probably the worst example of hardware > software. A modern Xeon with AES-NI running Linux and IPsec with one or more 10 Gbit/s interfaces is way faster than almost any Cisco ASA for both firewall and VPN tasks, for a lot less money. You buy ASAs for their "set-and-forget" configuration and maintenance and their relative ease of management over a full-blown server with a custom operating system and hand-rolled configuration, not for their absolute performance/cost ratio. Last time I checked, one of our lower-end ASAs actually ran a low-MHz Intel Celeron CPU from the P4 era.
 
Quick question that the ZFS wiki is not answering, though it's probably an obvious answer.

Do you still lose drives to parity based on the type of raid configuration?
 
Second silly question.

Does ZFS stripe data across drives like regular old hardware raid?

Short answers preferred.

Yes. In addition to parity in RAIDZx configurations, it also consumes some disk space for running checksums on files.
 
Yes. In addition to parity in RAIDZx configurations, it also consumes some disk space for running checksums on files.

very little, and until someone creates bp_rewrite you shouldn't use more than ~70% of usable space anyway.
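As a rough worked example of what that advice means for capacity (my own back-of-the-envelope math, ignoring the small metadata and checksum overhead):

Code:
def raidz2_usable_tb(disks: int, disk_tb: float, fill_target: float = 0.7) -> float:
    # Two disks' worth of space goes to the double parity; stay around 70% full
    # per the bp_rewrite advice above.
    return (disks - 2) * disk_tb * fill_target

# e.g. 8 x 2 TB drives in RAIDZ2 -> about 8.4 TB comfortably usable
print(f"{raidz2_usable_tb(8, 2.0):.1f} TB")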
 
Ok great.

Yeah, this is just my backup array for media and web files/DBs (pics, movies, PHP, HTML, MySQL), and thus I want the most redundancy for the $.

The 'live' versions will be on my desktop, and I have SSD on this server for faster access to the DBs, etc...

Thanks again for the help/info.

unRAID


/thread
 