NetApp Flash Cache vs. PernixData or other caching technology?

You'll need to talk to your rep about the limitations of mixing SSD and SAS in the same shelf. I think the 15k drives are only available in the DS4243, and the DS2246 is used for SSD and 10k SAS.

You also need to check on the Flash Pool capacity of the 3240; it's a bit different from what the newer 3220/3250 support. The 3220 is capped at 1.6 TB of SSD in a Flash Pool, the 3250 at 4 TB.
 
Yes and no.

FlashCache is globally deduped; if you have multiple aggregates, it works better and is faster than Flash Pool. If you only have one aggregate, it doesn't matter much.

One good thing about Flash Pool is that it is more granular than FlashCache: you can turn it on/off per volume.

Now here are the limitations.
1. Once it's added, you can't remove it, unlike ZFS ZIL/L2ARC.
2. If you put 3 SSDs in RAID-DP, your cache size is only 1 SSD. So it makes more sense to use more SSDs, e.g. 5 in the group plus 1 hot spare.
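The RAID-DP parity math above can be sketched quickly (a rough illustration only; the function name is mine, and it ignores drive right-sizing overhead, which real ONTAP sizing does not):

```python
def flash_pool_usable(ssd_count, ssd_size_gb, hot_spares=0):
    """Rough usable Flash Pool capacity for SSDs in one RAID-DP group.

    RAID-DP reserves two drives per group for parity, so a 3-SSD
    group yields only one SSD's worth of cache. Illustrative math
    only; real sizing also subtracts right-sizing overhead.
    """
    in_group = ssd_count - hot_spares
    data_drives = max(in_group - 2, 0)  # double parity costs 2 drives
    return data_drives * ssd_size_gb

# 3 SSDs in RAID-DP: cache equals just 1 drive
print(flash_pool_usable(3, 200))                 # 200
# 5 in the group plus 1 hot spare: 3 drives of cache
print(flash_pool_usable(6, 200, hot_spares=1))   # 600
```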

Our main goal is to increase the performance of the Tier 1 aggregate. But I guess we could technically buy another shelf with SATA drives and SSDs and then ditch our SAS shelf altogether.

Since I don't know much about NetApp: can you use different shelves together? I was reading a NetApp PDF and they mentioned the following shelves for Flash Pool.

DS2246
DS4243
DS4246

We are currently using DS4243 shelves for all our storage. Also, unless this PDF is really old, the SSD sizes they show are quite small: either 100 GB or 200 GB depending on the shelf you get. There is an 800 GB SSD, but it only works with a particular controller. So if we had to buy a shelf with twelve 100 GB SSDs just to get about 1 TB, God only knows what that costs. We got a quote, again over a year ago, for an entire shelf filled with SSDs, and it was $140k.
 
You'll need to talk to your rep about the limitations of mixing SSD and SAS in the same shelf. I think the 15k drives are only available in the DS4243, and the DS2246 is used for SSD and 10k SAS.

You also need to check on the Flash Pool capacity of the 3240; it's a bit different from what the newer 3220/3250 support. The 3220 is capped at 1.6 TB of SSD in a Flash Pool, the 3250 at 4 TB.

Ha, that is exactly what I was looking at. Unfortunately, we don't have a NetApp rep; apparently we go through a reseller.

It is starting to sound like we may not be able to use Flash Pool with our current setup. If you shouldn't mix different drives in an aggregate, and our shelves don't support SSDs, then we would need something with 10k SAS drives and SSDs, and it would be an all-new aggregate. I guess if it were cost effective, we could possibly go with 1.2 TB 10k drives and SSDs. It would have to be a new aggregate, but it could be our Tier 0 storage.
 
Our main goal is to increase the performance of the Tier 1 aggregate. But I guess we could technically buy another shelf with SATA drives and SSDs and then ditch our SAS shelf altogether.

Since I don't know much about NetApp: can you use different shelves together? I was reading a NetApp PDF and they mentioned the following shelves for Flash Pool.

DS2246
DS4243
DS4246

We are currently using DS4243 shelves for all our storage. Also, unless this PDF is really old, the SSD sizes they show are quite small: either 100 GB or 200 GB depending on the shelf you get. There is an 800 GB SSD, but it only works with a particular controller. So if we had to buy a shelf with twelve 100 GB SSDs just to get about 1 TB, God only knows what that costs. We got a quote, again over a year ago, for an entire shelf filled with SSDs, and it was $140k.

You'd better ask your rep about the limitations on mixing and matching disks and shelves.

I do have a full shelf of 100 GB SSDs used for Flash Pool with SATA, but on the 3240, 3270, and 6280 I only use FlashCache.

I still suggest you try Infinio if you have spare CPU cycles and memory in your ESX boxes. They have a 30-day trial, and it may turn out to solve your performance concerns without buying new disks/SSDs.
 
You'd better ask your rep about the limitations on mixing and matching disks and shelves.

I do have a full shelf of 100 GB SSDs used for Flash Pool with SATA, but on the 3240, 3270, and 6280 I only use FlashCache.

I still suggest you try Infinio if you have spare CPU cycles and memory in your ESX boxes. They have a 30-day trial, and it may turn out to solve your performance concerns without buying new disks/SSDs.

We eventually need additional capacity anyway. One possibility, if it can be done, would be to get the DS2246 shelf and put eighteen 1.2 TB SAS drives in there. That is about 16 TB of capacity with DP, and we only have 8.65 TB of Tier 1 currently. Then add six 400 GB SSDs: three of them would be about 1.2 TB of Flash Pool, which appears to be the limit for our NetApp, two for the DP, and one hot spare. This would become our new Tier 1 storage, and we could then ship our current shelf to another datacenter where we also need Tier 1 storage but don't need caching.
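A quick back-of-envelope check of that layout (the parity/spare split is my assumption for illustration, and the figures ignore drive right-sizing and the WAFL reserve, which cut real usable capacity further):

```python
def usable_tb(drives, size_tb, parity=2, spares=0):
    """Raw usable capacity for a RAID-DP group, before right-sizing
    and WAFL reserve, which shave off a further chunk in practice."""
    return (drives - parity - spares) * size_tb

# 18 x 1.2 TB SAS in RAID-DP: roughly 19 TB before overhead,
# landing near the "about 16 TB" above once overhead is subtracted.
print(usable_tb(18, 1.2))
# 6 x 400 GB SSDs: 3 data + 2 parity + 1 hot spare = 1.2 TB of cache
print(usable_tb(6, 0.4, spares=1))
```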
 
Are you going through a NetApp VAR? They should be able to help you.

http://www.netapp.com/us/products/s...nd-storage-media/disk-shelves-tech-specs.aspx

Here are the shelf specs. If you need more space on the aggregate, then your best bet is probably to just replace it all with 1.2 TB 10k drives and the Flash Pool. I'm still a little worried that the 3240 may only support a 512 GB Flash Pool, though, but I haven't dug through the latest documents on it.
 
Are you going through a NetApp VAR? They should be able to help you.

http://www.netapp.com/us/products/s...nd-storage-media/disk-shelves-tech-specs.aspx

Here are the shelf specs. If you need more space on the aggregate, then your best bet is probably to just replace it all with 1.2 TB 10k drives and the Flash Pool. I'm still a little worried that the 3240 may only support a 512 GB Flash Pool, though, but I haven't dug through the latest documents on it.


I am not sure who we are using for purchasing NetApp.

What you suggested is exactly what I suggested. :) The Flash Cache is limited to 512 GB. According to a PDF I found from NetApp, the 3240 supports 1.2 TB of Flash Pool.
 
I still suggest you try Infinio if you have spare CPU cycles and memory in your ESX boxes. They have a 30-day trial, and it may turn out to solve your performance concerns without buying new disks/SSDs.

I was looking at the Infinio website, and it mentions only 8 GB of memory. Is that the most you can use, or can you use more? Our new servers have quite a bit more memory than we need right now. If we could allocate at least 32 GB of memory, then I could see a potential benefit.
 
We eventually need additional capacity anyway. One possibility, if it can be done, would be to get the DS2246 shelf and put eighteen 1.2 TB SAS drives in there. That is about 16 TB of capacity with DP, and we only have 8.65 TB of Tier 1 currently. Then add six 400 GB SSDs: three of them would be about 1.2 TB of Flash Pool, which appears to be the limit for our NetApp, two for the DP, and one hot spare. This would become our new Tier 1 storage, and we could then ship our current shelf to another datacenter where we also need Tier 1 storage but don't need caching.

If you are going to create multiple aggregates, I personally would choose FlashCache. You don't really need a write cache, since your NetApp has NVRAM for that. You don't lose capacity to parity on SSDs, the whole capacity of FlashCache is used for read caching, it's global regardless of how many aggregates you have, and it does the job really well.
 
I am not sure who we are using for purchasing NetApp.

What you suggested is exactly what I suggested. :) The Flash Cache is limited to 512 GB. According to a PDF I found from NetApp, the 3240 supports 1.2 TB of Flash Pool.

I think it's 1 TB of FlashCache for the 3240, if I'm not mistaken.

Infinio is, again, globally deduped and distributed. You deploy it on each ESX box in the cluster.
 
I am not sure who we are using for purchasing NetApp.

What you suggested is exactly what I suggested. :) The Flash Cache is limited to 512 GB. According to a PDF I found from NetApp, the 3240 supports 1.2 TB of Flash Pool.

Yeah, I was agreeing with your suggestion. ;)

Good to know that it's at 1.2 TB now; the old doc had it at 512 GB. I wonder if there's a newer guide out there.

https://communities.netapp.com/serv...lash_Pool_Design_and_Implementation_Guide.pdf

Please note: the FAS3240 and FAS3160 support only 512GB maximum cache size per node of either Flash Pool or Flash Cache. Due to the small cache size supported it is not recommended to mix Flash Pool and Flash Cache in the same HA pair on these platforms.
 
If you are going to create multiple aggregates, I personally would choose FlashCache. You don't really need a write cache, since your NetApp has NVRAM for that. You don't lose capacity to parity on SSDs, the whole capacity of FlashCache is used for read caching, it's global regardless of how many aggregates you have, and it does the job really well.

We only have multiple aggregates to separate SAS from SATA. All our SATA storage is for storing data and is not intended to be fast.

From a VM level, this is the kind of performance we are getting on our Tier 1 storage right now.



It doesn't happen often, but we did have a client that specifically needed 100 MBps read and write. As you can see, our write performance isn't that great. Then again, neither is our read. Being able to increase read and write performance while also doubling our Tier 1 capacity would be awesome, especially if we could move our current SAS shelf to another datacenter instead of buying two brand-new shelves, both filled with 24 SAS drives, and buying two FlashCache cards.
 
I think it's 1 TB of FlashCache for the 3240, if I'm not mistaken.

Infinio is, again, globally deduped and distributed. You deploy it on each ESX box in the cluster.

But can it go above 8 GB? I am about to redo our clusters so they don't have a mix of different servers. The most we would have in a single cluster is 10 servers, so that is only 80 GB.
 
We ended up ordering a new shelf with eighteen 900 GB SAS drives and six 200 GB SSDs. This will give us 12.6 TB of storage and 600 GB of Flash Pool. Plus, we are going to keep our current Tier 1 shelf that has twenty-four 15k drives; we will just reduce the load on it, especially all the SQL DBs, to reclaim some IOPS.

We will also keep maintaining 240 GB of Flash Cache directly in our ESXi hosts for additional read caching.
 
Guys. I get it. I do storage for a living. :) I deal in arrays that do both front-end read/write cache with SSDs as well as auto-tiered pools of storage that have flash/SAS/SATA in them. I get it.

What you're missing is the cost per IOPS. You can't beat server-side cache for cost per IOPS. You just can't. It's just cheaper to buy standard good SSDs and put them in your vSphere hosts than it is to buy SSDs from your Tier 1 storage vendor...and that's with licensing included for the caching software.

You also get the benefit of the cache RIGHT THERE ON THE SERVER. No network, no fabric. It's right there giving you sub-ms response times. If you need more cache, you just throw more cheap SSDs in the servers and go. No additional shelves, no changing pools, etc. It works really, really well.

I totally agree with this, but it doesn't completely work in my situation. I have way too many vSphere hosts; the cost to add SSDs to each host would be about the same as adding a tray of SSDs to one of my VNXs. On top of that, I boot from SAN with our UCS platform. This gives us the ability to move a profile (OS) from a blade that is failing to a known-good blade within minutes, meaning less downtime.

This is just my opinion, but if you need sub-millisecond response times, you shouldn't be running your app within VMware at that point. You should move it back to a physical machine.
 
I totally agree with this, but it doesn't completely work in my situation. I have way too many vSphere hosts; the cost to add SSDs to each host would be about the same as adding a tray of SSDs to one of my VNXs. On top of that, I boot from SAN with our UCS platform. This gives us the ability to move a profile (OS) from a blade that is failing to a known-good blade within minutes, meaning less downtime.

This is just my opinion, but if you need sub-millisecond response times, you shouldn't be running your app within VMware at that point. You should move it back to a physical machine.

What does a tray of SSDs cost on a VNX? Compare that to around $600/host. How many blades do you have? Licensing comes into play too. It's a sizing/pricing exercise to see what fits.

It doesn't matter if you do BFS, as long as the Server Profile exposes the SSD in the blade.

Move the ESXi boot LUN to a new blade and it boots, sees the SSD, and will use it.
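Using the figures floating around this thread (~$140k quoted for a shelf of SSDs, ~$600/host for a local SSD), the break-even point is easy to sketch. These numbers come from earlier posts, not a real quote, and the function name is mine:

```python
def break_even_hosts(array_ssd_cost, per_host_ssd_cost):
    """Host count at which per-host SSDs cost as much as one
    array-side SSD shelf. Illustrative only; a real comparison
    must also add caching-software licensing and support costs."""
    return array_ssd_cost / per_host_ssd_cost

# ~$140k shelf vs ~$600 per host: server-side stays cheaper
# until well past 200 hosts.
print(break_even_hosts(140_000, 600))
```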
 
In our case, we are doing more writes than reads, so Flash Pool should be the best option.
 
What does a tray of SSDs cost on a VNX? Compare that to around $600/host. How many blades do you have? Licensing comes into play too. It's a sizing/pricing exercise to see what fits.

It doesn't matter if you do BFS, as long as the Server Profile exposes the SSD in the blade.

Move the ESXi boot LUN to a new blade and it boots, sees the SSD, and will use it.

How many blades? A lot, more than I can talk about, with way more coming. Hence why a tray of 15 SSDs with licenses is cheaper than SSDs in each local machine. We skipped the SSDs on the UCS blades to save cash on local disk. I guess technically I could just attach some SAN-based flash/EFD space and get the same thing, but then again, if I do VMAX FAST VP or FAST on a VNX, with tiers of disks in either FAST profiles or FAST pools, I get the same result with less effort. Well, less effort on my part. *wink*

It works for me because the vast majority of what I'm doing in VMware is big on CPU and memory with low IOPS for the most part. I've got a few folks pushing the envelope of what VMware can do IOPS-wise, but not many. Anything Oracle/OLTP/data warehouse, however, is on a physical box. This is why I'm futzing with XtremSF cache cards in our Dell servers at this point in time. UCS blades do not support cache cards because you can't just plug them in. As for UCS rack-mount boxes, we DO plan to get physical drives in them, so down the road I can utilize cache cards.
 
Anyone use Nimble Storage? Just wondering what are some good alternatives to NetApp?
 
Anyone use Nimble Storage? Just wondering what are some good alternatives to NetApp?

I'd love to try one, but it's iSCSI-only with no plans for NFS support. That just doesn't work in my environment. Check out the ZS3 if you need NFS.
 
I should clarify: a good alternative to NetApp that is much less expensive. I don't know pricing on the Oracle, but I am guessing it is not cheap. I didn't realize Nimble didn't support NFS, although that shouldn't matter too much for us.
 
Looks interesting. But depending on what site it goes in, we would definitely need iSCSI.

I told you: Tegile. The closest thing to NetApp is ZFS-based storage. Tegile is cheap and works very well, and dedup actually works on Tegile.

I have both Nimble and Tegile for VDI, and I like Tegile as it supports all the protocols: NFS, CIFS, iSCSI, and FC.
 
Well, at the moment we are looking at Nimble. Nimble makes it sound like their product is much better than NetApp. One claim is that CASL performs better than WAFL, and since CASL writes sequentially, there is no need to use SSDs for write cache; they only use SSDs for read cache.

Should we just go ahead and dump our NetApp for Nimble?
 
Well, at the moment we are looking at Nimble. Nimble makes it sound like their product is much better than NetApp. One claim is that CASL performs better than WAFL, and since CASL writes sequentially, there is no need to use SSDs for write cache; they only use SSDs for read cache.

Should we just go ahead and dump our NetApp for Nimble?

I like Nimble's products, but what I don't like is their very limited install base and their questionable finances. You know for a fact NetApp will be here next year, and 5 years from now. One thing I want with my SAN is knowing I can depend on it and the company behind it.
 
I like Nimble's products, but what I don't like is their very limited install base and their questionable finances. You know for a fact NetApp will be here next year, and 5 years from now. One thing I want with my SAN is knowing I can depend on it and the company behind it.

Nimble is the fastest-growing non-big-guy storage company right now. I don't think they are profitable yet, but they're heading that way.
 
Nimble is the fastest-growing non-big-guy storage company right now. I don't think they are profitable yet, but they're heading that way.

That is also what worries me. Although they are less expensive now, in 5 years they may end up costing just as much as NetApp.
 
That is also what worries me. Although they are less expensive now, in 5 years they may end up costing just as much as NetApp.

Who cares? That's 5 years away. The storage world will change in 5 years. Your next array will be all-flash and totally different.
 
Who cares? That's 5 years away. The storage world will change in 5 years. Your next array will be all-flash and totally different.

Yeah, but there are still a lot of "what ifs" with switching to Nimble. If we stay with NetApp, our next purchase would be the FAS8040 with one mixed shelf and two SATA shelves. The amount of memory, Flash Cache, Flash Pool, and NVRAM available for that model is pretty good. We have kept with two heads in an active/active setup, which gives us pretty good flexibility with separate aggregates. Nimble has a different philosophy in which performance comes not only from CASL but also from the processor performance of the head. That is also a concern for me: I have no clue how that performance will scale, and buying additional spindles doesn't increase performance with Nimble. So if we spent probably $200k+ on a CS240 with a couple of extra shelves for around 60 TB of storage, then ran over 200 VMs and started losing performance, our only real option would be to upgrade to a more expensive head.

They also appear to be a bit limited in capacity per head; I believe no more than three shelves per head. You can't mirror a shelf, so your only choice is to purchase an additional head and enable replication. And of course they don't support NFS.

Our FAS3240 has been pretty good so far. We are seeing a drop in performance, given that there are probably at least 100 SQL VMs running on a single shelf filled with 15k RPM SAS drives. That issue will be resolved once the new shelf is installed with 600 GB of Flash Pool. Even adding an identical shelf to increase the spindle count would have sufficed, but hey, we might as well start taking advantage of SSDs. And we did consider buying a shelf filled with twenty-four 200 GB SSDs; it's not actually that expensive. I don't really consider the FAS3240 to be a huge contender in the hybrid market. Its Flash Cache and Flash Pool are limited, but this is not a new NetApp. The 8040, on the other hand, supports up to 4 TB of Flash Cache and 12 TB of Flash Pool, and that's not even including all the shelves of SSDs you can purchase. The maximum storage is well beyond Nimble. With the Nimble head we are looking at, you can only have two 10 GbE NICs active. Not that we are maxing out that kind of capacity, but the 8040 supports thirty-two 10 GbE ports.

To me, it seems the room for growth is much higher with NetApp. Nimble is great when it comes to simplicity, and I wouldn't doubt that is one reason why they are so popular: businesses are looking not only to save money on storage, but also to save money on supporting that storage.
 
If you switch from NetApp to Nimble you'll regret it. Your storage will be less capable, less powerful, and your skills as an administrator will be less marketable. Nimble has a nice thing going with IOPS/$, but when you have data you need to manage, especially in a true multi-tenant fashion, NetApp is tops.
 
You can do cache cards in B-Series blades. They just have to be M3 or M4.

The EMC rep said it's actually an LSI-branded card that will run EMC's special-sauce software. Not something I'm ready to dive into this year. Gotta make sure the baby cache cards we have (340 GB) actually work for our environment before we branch out.
 
Does anyone know if NetApp or EMC is more expensive?

Agreed with what NetJunkie said.

If you want NFS, go with Isilon. I'm pretty happy with the 15 nodes I've installed thus far. Isilon is the only thing out there that I think can go head-to-head with NetApp in that department. Almost everything else is fairly chumpy. If it's worth its salt, it'll get bought out by one of the major players in the future anyway.
 
Well, we got our new NetApp shelf with 600 GB of FlashPool installed last Friday. I am not really sure how to test the FlashPool performance, but so far it seems to be doing a good job.
 
Congrats! As long as you are happy with it and it's meeting your expectations, that's all that really matters.
 
Well, we are mostly happy. It seems to be working great in the first location where we set up FlashPool. In our other datacenter we installed the same shelf and were going to set up FlashPool, but our ONTAP version was 8.1 and needed to be upgraded first. We upgraded to the latest version on Sunday and are now able to use FlashPool, but we are having a major issue with a Microsoft server running SQL on an iSCSI LUN. Ever since the upgrade, when there is a lot of IO, the NICs on the NetApp disconnect. Needless to say, this is very bad, especially since our client is the deciding factor on whether or not we get two additional NetApps, FAS8040s, in different datacenters. After this ordeal, we may just end up purchasing Nimble. It has already been two days and NetApp hasn't been able to fix the issue. We applied the Microsoft hotfixes they recommend, and they even helped change the load-balancing algorithm because apparently it is different in this version. Our guy is on the phone with them now, so I guess I will find out tomorrow whether it is resolved.
 