Tintri. What is it?

SpeedyVV

Supreme [H]ardness · Joined Sep 14, 2007 · Messages: 4,210
Can someone give me a Tintri for dummies?

I went to their website but really not sure what it is all about.
 
I passed. Got a big demo, went elsewhere. Too much magic sauce integration into vSphere. KISS.
Just give me all-flash block horsepower.
 
I passed. Got a big demo, went elsewhere. Too much magic sauce integration into vSphere. KISS.
Just give me all-flash block horsepower.

Good way to waste money by just going with all flash horsepower. You should have done your research first. I hope you at least went with Solidfire and not a company like Pure Storage.

The secret sauce is awesome especially considering the file system was built from the ground up years before VMware introduced VVOLs. This is not a WAFL, CASL, ZFS file system. If you do some research about who wrote RAID, you will find the guy that wrote the file system for Tintri.

That is not to say there isn't other storage out there that will also get the job done, but when it comes to VM-aware storage and VM performance, Tintri is definitely one of the best.
 
Good way to waste money by just going with all flash horsepower.

Easy now, there are plenty of use cases where a hybrid array isn't the best choice, and there are also use cases where Tintri in particular doesn't fit.

What's important is that buyers do their due diligence to match array offerings to functional requirements.
 
Easy now, there are plenty of use cases where a hybrid array isn't the best choice, and there are also use cases where Tintri in particular doesn't fit.

What's important is that buyers do their due diligence to match array offerings to functional requirements.

True, but based on that person's comment, it doesn't sound like the due diligence was done, especially saying they passed because of too much "magic sauce," as they put it. Just like if I were to talk to NetApp about VDI, they would tell me to buy an all-flash array, because their solution to the problem is throwing horsepower at it, while Tintri worked hard to build an excellent filesystem that works just as well or better without going all-flash. That is all I am pointing out.
 
Good way to waste money by just going with all flash horsepower. You should have done your research first. I hope you at least went with Solidfire and not a company like Pure Storage.

The secret sauce is awesome especially considering the file system was built from the ground up years before VMware introduced VVOLs. This is not a WAFL, CASL, ZFS file system. If you do some research about who wrote RAID, you will find the guy that wrote the file system for Tintri.

That is not to say there isn't other storage out there that will also get the job done, but when it comes to VM-aware storage and VM performance, Tintri is definitely one of the best.

:D :D :D
 
I passed. Got a big demo, went elsewhere. Too much magic sauce integration into vSphere. KISS.
Just give me all-flash block horsepower.

Magic sauce integration into vSphere, RHEV-M, OpenStack (cinder), SCVMM, and soon Xen...

Add intelligence - the APIs exist for a reason. That being said, sure, there are places for all-flash arrays and all-flash performance, but plain dumb block storage for VMs? That's just a bad idea these days. Too many things go wrong that way, and I know ~all~ of them by heart and by hand.
 
Honestly, I haven't spent much time googling for this:

Let's say I have 5k active, non-persistent VDI sessions. While those are non-persistent, they all have a thick-client app open which communicates with database servers, and if a VDI session is terminated, then at least some of the data entered into the app will be lost.

How do I avoid VDI downtime when the Tintri goes up in flames, or can I?
 
Why would data be lost? Assuming the client application doesn't consider a command acknowledged until the database has actually handled that data in some form, then terminating the application or session shouldn't matter - what's sent, is sent. Now, if we're talking form details that haven't been saved, then yes, stuff is lost - but it would be anyway.
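A minimal sketch of that point, with hypothetical stand-in classes (nothing here is a real client or database API): once the database has acked a write, killing the session loses nothing; only the unsaved client-side edits go away.

```python
# Sketch: data acknowledged by the database survives a session kill;
# anything still client-side (unsent form fields) does not.
# FakeDatabase and ThickClient are illustrative stand-ins only.

class FakeDatabase:
    def __init__(self):
        self.committed = []

    def write(self, record):
        # The database only acks after the record is durably handled.
        self.committed.append(record)
        return "ack"

class ThickClient:
    def __init__(self, db):
        self.db = db
        self.unsaved_form = []  # local edits, never sent to the database

    def save(self, record):
        return self.db.write(record) == "ack"

db = FakeDatabase()
client = ThickClient(db)
client.save("order #1")                  # acked -> durable
client.unsaved_form.append("order #2")   # typed but never saved

# Simulate the VDI session being terminated: the client state goes away,
# but the database state is unaffected.
del client
print(db.committed)  # ['order #1'] -- only the acknowledged write survives
```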

As for avoiding downtime should you run the array over with a steamroller, you can't - but a stretched cluster doesn't avoid downtime for that either (there's still a failover). Nor can any other platform in the world, currently - the apps must restart. Zerto will get you to about a 4-10 second RPO if you want, though, or Tintri replication combined with a scripted failover will get you to 1 minute.
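The "replication combined with a scripted failover" approach can be sketched as a bare orchestration skeleton. Everything here is hypothetical - the stub functions stand in for whatever your array and hypervisor tooling actually expose; the point is only the sequencing: break replication, promote the DR copy, then restart the VMs against it.

```python
# Hypothetical failover orchestration. Each step is a stub standing in
# for real array/hypervisor calls; the order of operations is the point.

def stop_replication(log):
    log.append("replication stopped")

def promote_replica(log):
    log.append("DR copy promoted read-write")

def restart_vms(log, vms):
    for vm in vms:
        log.append(f"restarted {vm}")

def failover(vms):
    log = []
    stop_replication(log)   # 1. break the replication relationship
    promote_replica(log)    # 2. make the DR copy writable
    restart_vms(log, vms)   # 3. bring the VMs back up on the DR side
    return log

steps = failover(["vdi-pool-01", "vdi-pool-02"])
for step in steps:
    print(step)
```

In practice the wall-clock cost of steps 2 and 3 is what lands you at roughly the one-minute mark mentioned above.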
 
I'm not sure that's entirely true, if I've understood the question - we had a stretched-cluster HP StoreVirtual, and I could absolutely switch off or "steamroller" half of it and the cluster IP and storage would fail over without any interruption.
 
I'm not sure that's entirely true, if I've understood the question - we had a stretched-cluster HP StoreVirtual, and I could absolutely switch off or "steamroller" half of it and the cluster IP and storage would fail over without any interruption.

I over-simplified a bit - I'm assuming a true geographically stretched cluster within the range for complete synchronous replication (55 miles, give or take a nudge).

If you ~only~ lose one half of the storage and the hosts on both sides are still up and running, sure - in theory an improper config would keep you up and running, but assuming you'd designed it as a true stretched cluster, then you'd be sending storage traffic over the WAN. The proper design has HA fail the VMs over to the hosts with local access to the clustered storage instead of reaching across the WAN; that's how the SATP detects and claims the LeftHand systems as a "multi-site cluster" and communicates those details to vCenter for HA. Effectively, you always want to be talking to the local copy, not over the WAN, because of the RTT latency limits the abort handlers impose.

If you implemented it all within a single datacenter, then sure - but you're spending a lot of money for that kind of solution, which is outside the needs of most places, and even then, a 4-10 second RPO is generally more than acceptable to folks in that spectrum as well (with a service blip).

It also depends on what version of ESX you were on - 5.5 was the first version with true native stretched-cluster support that understood all of this. The earlier versions were somewhat blind to it and didn't involve HA properly for localized storage communication.
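The "always talk to the local copy" rule above boils down to a tiny path-selection decision. Purely illustrative (real multipathing plugins do this inside the hypervisor, and the latency budget here is a made-up number): prefer the site-local path, and only fall back to the WAN path when the local storage is down and the WAN RTT still fits the budget.

```python
# Illustrative path selection for a stretched cluster: prefer the
# site-local copy; only reach across the WAN if the local storage is
# down, because WAN RTT eats into SCSI abort-handler budgets.
# rtt_limit_ms is an arbitrary illustrative budget, not a real spec value.

def pick_path(local_up, local_rtt_ms, wan_rtt_ms, rtt_limit_ms=10.0):
    if local_up and local_rtt_ms <= rtt_limit_ms:
        return "local"
    if wan_rtt_ms <= rtt_limit_ms:
        return "wan"
    return "none"  # neither path fits the latency budget

print(pick_path(True, 0.5, 5.0))    # local copy healthy -> "local"
print(pick_path(False, 0.5, 5.0))   # local down, WAN in budget -> "wan"
print(pick_path(False, 0.5, 50.0))  # local down, WAN too slow -> "none"
```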
 
Good way to waste money by just going with all flash horsepower. You should have done your research first. I hope you at least went with Solidfire and not a company like Pure Storage.

What's wrong with Pure? We're actually looking at SolidFire, Pure, Nimble, and one or two others, so I'm curious what I might need to watch out for with Pure.
 
What's wrong with Pure? We're actually looking at SolidFire, Pure, Nimble, and one or two others, so I'm curious what I might need to watch out for with Pure.

Very simple platform - dedupe and compress, shoot to flash. Clunky snapshot and replication system, no VM management, a performance cap at about 80-85% of overall system capacity (fill it to that point, goodbye speed), plus SCSI queuing issues and filesystem-locking limitations for virtual workloads. You can get the same performance elsewhere for far cheaper. If you need raw horsepower for physical workloads, it's great - but it's nothing special for virtual platforms, just another all-flash platform with dedupe and compression.
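A performance cliff like that changes the math on price per usable terabyte. Quick back-of-envelope (the 80% threshold is the poster's figure, not vendor documentation, and the capacity numbers are invented):

```python
# Back-of-envelope: if performance degrades past a fill threshold,
# the capacity you can actually plan around is smaller than advertised.
# The 0.80 cliff fraction and the 100 TB figure are illustrative only.

def plannable_tb(raw_tb, cliff_fraction=0.80):
    return raw_tb * cliff_fraction

raw = 100.0  # TB of advertised effective capacity
print(plannable_tb(raw))  # 80.0 TB before the slowdown kicks in
```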
 
Now I understand why Pure keeps running all of its demos with 10-15% space used.

 