Tintri on its way!

I agree, but it beats our NetApp SnapMirror as well. I didn't believe it myself when it said the replica was up to date, so I cloned it in the recovery site just to be sure. I still have more to replicate, so I will do a test and pay closer attention to how long it takes.

And although Zerto vs Tintri may not be apples to apples, based on my experience with Zerto, what would have taken a week took less than a day with Tintri.

Oh and with SnapMirror we have to play around with the TCP window size to try and get the best performance. With Tintri, I just enable it and pick a schedule.

To be fair, SnapMirror is shit comparatively speaking.
 
To be fair, SnapMirror is shit comparatively speaking.

LOL. Another good reason for me to try and get NetApp out of our datacenters. We currently SnapMirror about 60 TB from coast to coast. It works OK after we tweaked the window size.
 
Let's think about this. The theoretical throughput of a 1 Gb link is 125 MB/s, or 450 GB/hr. During initial replication it's a pure copy, since no data is present yet at the replicated site.

So seeding 270 GB in 45 minutes gives us a throughput of 360 GB/hr. I'm not convinced "insane" is the word I'd use to describe copying data at a rate of 360 GB/hr, or 80% of line speed.

Now, if that VM wasn't the first to go over the wire and there is already data at the replication site, then de-dupe comes into play, which is likely rate-limited by CPU more than by network or disk. It would be interesting to know how much data was actually transferred to the replication site. It's safe to say that compressed and de-duped replication is slower than 360 GB/hr, meaning that if less than 270 GB was pushed, then the throughput was lower. (It may still be faster than other vendors'.)
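The arithmetic above can be sketched quickly. A minimal check, using the figures from the thread and assuming the stated 1 Gb/s link:

```python
# Back-of-the-envelope check of the replication throughput discussed above.
# Assumes a dedicated 1 Gb/s link and the 270 GB / 45 min seeding figures.

LINK_GBPS = 1        # link speed, gigabits per second (stated assumption)
seeded_gb = 270      # data seeded, in GB
minutes = 45         # time the seeding took

# 1 Gb/s = 1000 Mb/s; divide by 8 bits per byte -> 125 MB/s
theoretical_mb_per_s = LINK_GBPS * 1000 / 8

# 125 MB/s * 3600 s/hr / 1000 MB per GB -> 450 GB/hr
theoretical_gb_per_hr = theoretical_mb_per_s * 3600 / 1000

# 270 GB / 0.75 hr -> 360 GB/hr observed
observed_gb_per_hr = seeded_gb / (minutes / 60)

# 360 / 450 -> 0.8, i.e. 80% of line speed
line_utilization = observed_gb_per_hr / theoretical_gb_per_hr

print(theoretical_gb_per_hr)  # 450.0
print(observed_gb_per_hr)     # 360.0
print(line_utilization)       # 0.8
```

Anything shy of 270 GB actually crossing the wire (thanks to compression/de-dupe) would pull the effective network throughput below that 360 GB/hr figure.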

From what I have seen, most of the replication has been about 50% of the actual logical size. Again, my real-world comparisons are NetApp SnapMirror, Veeam Replication, and Zerto. I'm sure other SANs have great replication too, but after all my research, Tintri is the SAN I wanted in our datacenter to replace NetApp. The main reason I compare Zerto and Tintri for replication is that Zerto couldn't do what I needed. Technically it is not 100% Zerto's fault: our NetApp already had too much read latency, and adding the Zerto initial seed on top of that for these VMs caused the environment to slow to a crawl.
 
Let's think about this. The theoretical throughput of a 1 Gb link is 125 MB/s, or 450 GB/hr. During initial replication it's a pure copy, since no data is present yet at the replicated site.

So seeding 270 GB in 45 minutes gives us a throughput of 360 GB/hr. I'm not convinced "insane" is the word I'd use to describe copying data at a rate of 360 GB/hr, or 80% of line speed.

Now, if that VM wasn't the first to go over the wire and there is already data at the replication site, then de-dupe comes into play, which is likely rate-limited by CPU more than by network or disk. It would be interesting to know how much data was actually transferred to the replication site. It's safe to say that compressed and de-duped replication is slower than 360 GB/hr, meaning that if less than 270 GB was pushed, then the throughput was lower. (It may still be faster than other vendors'.)

That's fair. I guess I was assuming it behaves like our circuits, where there is rarely, if ever, no competition for throughput. That said, 80% of line speed is probably the realistic throughput, and the math pans out: a raw copy would complete, or come VERY close, within 45 minutes, as you showed.
 
So far our Tintri is performing much better than our NetApp ever did. The NetApp has 600 GB of SSD Flash Pool, and we were even using VMware Flash Read Cache. Now I am using just the Tintri, with no FRC, and getting better performance.
 
Using Tintris in a VDI environment. They simply kick ass. We chose them over 3PAR systems.
 
Over the weekend I moved 17 VMs that make up one of our clients' virtual infrastructure from one datacenter to another using Tintri replication. It worked great. Total time was 1 hour for everything, and that included moving the public networking to the new datacenter, reconfiguring the VMs' networking (which could be automated, but I did it manually), upgrading VMware Tools on all VMs, rebooting them, testing, etc. I was surprised there were no Active Directory issues. Last time I did a DR test using Zerto for replication, the Domain Controller required dcdiag /fix to get it working again. In this case the only notification I got on a few VMs was that the hardware had changed, and Windows/Office re-activated.

The performance of this RDS Farm has never been so fast. It may actually be more responsive than my laptop, which has a Samsung 840 EVO. The performance is much better than our FAS8040, which uses 46x 10k RPM drives with 5x SSD drives for Flash Pool for Tier 1. Oh, and it's substantially cheaper than the NetApp. For the price we paid for the NetApp, we could have easily purchased the Tintri T880 and had four times or more the Tier 1 capacity.
 
Keep us updated. We are looking at possibly moving our ESXi workload off of our NetApps to a Tintri. We are still evaluating multiple vendors to do our due diligence, but so far I'm digging the Tintri solution.
 
We only have about 40 VMs running on our Tintri, but so far performance and latency have been great.

 
We're migrating everything over to a Tintri now. So far the performance is mind-bogglingly fast. On databases that used to see average latency of 8-15ms, we're seeing 1-2ms. These are high-workload databases with large data sets, so we weren't expecting sub-millisecond latency. That being said, the reduced latency has translated into fewer performance complaints. Goal = met.
 
Funny you mention latency. I was just looking at that today.

This is an image of our NetApp latency for the past three months.

You will see how high the latency used to be. Hard to guess an average, but it was frequently above 20ms. Now the NetApp has only about 1ms of latency. Why? Because all those VMs were migrated to our Tintri. And as you can see from my previous post, our Tintri is doing 10k IOPS with less than 1ms latency whereas the NetApp was MUCH higher. At this point I have more VMs running on our Tintri than I ever did on that NetApp and it is still way faster.
 
Thanks for that. We are meeting reps Wednesday before our pitch to management. All the info I can get is great. I'm personally really fond of their solution so far.
 
Just curious if you have any other thoughts on the Tintri now that you've been using it for a while?
 
Just curious if you have any other thoughts on the Tintri now that you've been using it for a while?

Sorry, was out of the country for a bit.

Still really the same thoughts. It has been awesome and has well outperformed our NetApp. I have friends at other companies trying it out too. One is a 3PAR shop, the other a NetApp shop; the latter was demoing Pure Storage and is now going to check out Tintri.

The monitoring built into Tintri is very useful. We have an application that uses MySQL, and people always complained about performance. Being that it is Linux and MySQL, I have nothing to do with those servers, but I did see latency from those VMs while they were on the NetApp. I offered up the Tintri to fix the latency problem, but it actually didn't solve it. The performance metrics from Tintri were showing extremely high latency for these servers. Tintri looked at it and saw large commands/queries, much larger than MySQL can really handle. I went back to the person who supports the app and told him what the issue was, and then, with some DBA magic, it was finally fixed. Granted, this is something that should have been fixed no matter what storage it was running on, but Tintri was great at pointing out the issue so I could get the correct people to fix it.
 
May as well use this thread to ask: where is Tintri at on building resiliency via some sort of clustering or other distributed approach?
 
Our Tintri is humming along. ~25 TB of production databases, 1-2 ms database latency. Fucking. Awesome.

Mostly sub-millisecond latency on the vDisk, 1-2 with SQL overhead.
 
We got another Tintri, this time the T850, in a different datacenter. I migrated 385 VMs, about 16 TB, and the performance is great. Storage latency is still sub-millisecond. Prior to the T850 the VMs were spread across two NetApps with five disk shelves, and the latency was horrible.
 
Is that deduped storage or total non-deduped? We have 70 VMs that take up more space than that.
 
We had Tintri come in and do a pitch recently. It looks good, and it's good to see it's well liked and has performed well.
 
We got another Tintri, this time the T850, in a different datacenter. I migrated 385 VMs, about 16 TB, and the performance is great. Storage latency is still sub-millisecond. Prior to the T850 the VMs were spread across two NetApps with five disk shelves, and the latency was horrible.

Here is a write-up of the datacenter migration I performed with the new T850 we purchased: Tintri ReplicateVM – Datacenter Migration
 
Nice write-up. Did you change the IP space in the new datacenter, or did you use the existing IP space of the replicated VMs?
 
Nice write-up. Did you change the IP space in the new datacenter, or did you use the existing IP space of the replicated VMs?

Both, actually. We are a cloud service provider, and customers all stayed the same. Our management network got new IP space, since I built out many new servers as part of the upgrade/move.
 
As much as I love our Tintri storage, sadly we are migrating away from it, going to Kaminario all-flash arrays. The Tintri has definitely held its own since we started using it, and continues to work flawlessly. We're just getting an awesome deal on 60 TB of K2 storage.

Dollar for dollar, when performance is needed, I will still recommend Tintri.
 
Speaking of Tintri, I keep getting pinged by an old coworker who is hiring for a DC-area sales engineer... If anyone is interested, I can put you in touch with them.

-- Dave
 
I was really pumped about the Tintri models for our SAN/NAS refresh, but their sales people totally dropped the ball on the presentation and on staying in contact with us. I even pre-gamed them on how to present to our CTO, and they completely blew it by 100 miles. They didn't stand a chance. I was not impressed. Even though lopoeteve really helped hook me up with their guys and some demos, their sales folks nuked their chances from the get-go. They may not have been the winner in the end either way, but they were eliminated quite early on, based purely on impressions.
 
What you guys need is some Infiniflash from yours truly =)

Who doesn't want 1/2 petabyte of usable flash storage in 3U?
 
Both, actually. We are a cloud service provider, and customers all stayed the same. Our management network got new IP space, since I built out many new servers as part of the upgrade/move.

Thanks, that's what I meant. So the replicated VMs kept the same IPs as in the old environment.
 