RTX 2080 Ti versus TITAN V

sk3tch · 2[H]4U · Joined Sep 5, 2008 · Messages: 3,326
WARNING: armchair, paper-based analysis ahead

Since there are no gaming benchmarks yet, I was curious whether the TITAN V represents a good approximation of what we can expect from the 2080 Ti in gaming performance...I was somewhat shocked that the TITAN V bests the 2080 Ti on paper in a few pretty substantial ways:

My ghetto chart below:
ghetto_chart.png


(source1, source2)


So yeah, $3,000 vs. $1,200: you "save" a whole $1,800, but it looks like gaming performance over the 1080 Ti will be in the 25% range (if we're lucky), if the TITAN V gives us any indication.

We are missing some data on ROPs and texture units for the 2080 Ti...

Thoughts? Is it all about the "RTX-enabled" games for now?
 

Your table is incorrect re. mem bandwidth, the 2080Ti has a bandwidth of 616GB/s, so relatively close to the Titan V. I would expect that in current games (no ray tracing), the Titan V might be a bit faster. But that'll all depend on how these cards boost and whether they're watercooled.
 
Your table is incorrect re. mem bandwidth, the 2080Ti has a bandwidth of 616GB/s, so relatively close to the Titan V. I would expect that in current games (no ray tracing), the Titan V might be a bit faster. But that'll all depend on how these cards boost and whether they're watercooled.

Thank you - updated
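For anyone double-checking the table, peak memory bandwidth falls straight out of bus width and per-pin data rate. A quick sketch using the published specs (352-bit GDDR6 at 14 Gbps on the 2080 Ti, 3072-bit HBM2 at roughly 1.7 Gbps on the TITAN V):

```python
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) * per-pin rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(mem_bandwidth_gbs(352, 14.0))   # 616.0  (RTX 2080 Ti)
print(mem_bandwidth_gbs(3072, 1.7))   # ~652.8 (TITAN V)
```

Which is why the two cards end up so close despite completely different memory technologies.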
 
I thought the performance was supposed to be 50% faster than the 1080 Ti with the new 2080 Ti?
 
Out of the box, I think the 2080 Ti will probably outperform the Titan V. The cooler on the V is really inadequate from what I've read; there are some benches of a Hybrid Titan V on Gamers Nexus, and the gains from putting it on water are huge:

https://www.gamersnexus.net/images/...v/hybrid/6_3dmark-fs-ultra-titan-v-hybrid.png

https://www.gamersnexus.net/guides/3181-titan-v-hyrbid-mod-results-2ghz-efficiency-top-10-hwbot

That is very interesting - but even if you take the top TITAN V Hybrid OC result versus the 1080 Ti OC result in that PNG chart it's a 21.3% gain TITAN V vs. 1080 Ti (both OC'd, TITAN V watercooled) - not a good indicator for the future of 2080 Ti versus 1080 Ti...I'm still thinking 25% if we are lucky.
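For anyone wanting to redo that math, the relative gain is just (new/old − 1). The scores below are made-up round numbers, not the actual Gamers Nexus results:

```python
def pct_gain(new_score: float, old_score: float) -> float:
    """Relative gain of new_score over old_score, in percent."""
    return (new_score / old_score - 1) * 100

# Hypothetical graphics scores chosen to illustrate a ~21.3% gap:
print(round(pct_gain(1213, 1000), 1))  # 21.3
```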

I thought the performance was supposed to be 50% faster than the 1080 Ti with the new 2080 Ti?

I heard this too but haven't sourced it - could they have been talking about Ray Tracing?
 
I pre-ordered one at Microcenter (actually a few, to make sure I get one), but if the numbers are only 25% faster, I'm just gonna pick up a cheap 1080 Ti.

I will buy the 2080 Ti if the performance is 30-35% faster, but not 25%. 25% won't do enough for what I need, want, or expect at $1,300.
 
I thought the performance was supposed to be 50% faster than the 1080 Ti with the new 2080 Ti?


They showed no numbers, so performance is more or less unknown at the minute. Since they spent an hour babbling on about the tech in the GPU, I would guess that performance isn't that much above the 1080 Ti; if it was substantially faster, I don't see why they wouldn't have given some demonstrations with frame rates showing.
 
They showed no numbers, so performance is more or less unknown at the minute. Since they spent an hour babbling on about the tech in the GPU, I would guess that performance isn't that much above the 1080 Ti; if it was substantially faster, I don't see why they wouldn't have given some demonstrations with frame rates showing.

The only counter to that is that people say NVIDIA typically does not show those types of numbers at these unveilings...it is curious how they didn't even generalize it...they just spoke about 6x ray tracing capabilities, which means just about nothing to most gamers...

This is also why people say they launched the Ti alongside the non-Ti 2080 part...because the 2080 alone would get a "meh" reaction in comparison to what's out already.

EDIT: TITAN V down to around $2,000 on eBay - always an option if the RTX 2080 Ti scalper market becomes a thing. :)
 
Does anyone have a sense of what kind of frame rates we were seeing in the demo with ray tracing enabled? It didn’t look good.

Ray tracing might be a thing, but I doubt this card can REALLY do it from what I saw. Just having the feature isn't enough; the frame rate hit is massive.
 
Seems core counts are up 15-20% on 20xx from 10xx. Clock speeds stayed roughly the same. And memory went from GDDR5X to GDDR6, so a bit more bandwidth there to play with.

Absolutely no word on any performance improvements or TDP/process improvements from Pascal. And given that core counts went up like they did, I think that's how they are going to get the bulk of their generational improvements.

My guess is: apart from the RTX-specific stuff, a 20-25% generation-over-generation performance improvement from Pascal. I think the Titan V will probably tend to be just a bit faster than a 2080 Ti on average, but not in all cases (more cores versus clock speed).

I think RTX is just the buzz word to get people to jump now that crypto is finally dying down. And that has the side benefit of appeasing Wall Street for the stock price. But I think as gamers, this is a standard generational bump akin to 4xx->5xx or 6xx->7xx with little else apart from RTX to really move the needle.

I won't be jumping, 980 is still working ok for me and what I play. It will take years for RTX to catch on beyond optional eye candy effects, if it ever does. I think it will end up being the next PhysX - basically only used when a company gets paid to include it into a title.
 
Anybody buying this card specifically for RT is silly. It’s a first-gen RT card. I’m buying for horsepower. If RT even works and has good driver optimization/performance, that’s a bonus. I think RT falls into the “futures” category: a tech that will take a long time to offer real value.
 
Seems core counts are up 15-20% on 20xx from 10xx. Clock speeds stayed roughly the same. And memory went from GDDR5X to GDDR6, so a bit more bandwidth there to play with.

Absolutely no word on any performance improvements or TDP/process improvements from Pascal. And given that core counts went up like they did, I think that's how they are going to get the bulk of their generational improvements.

My guess is: apart from the RTX-specific stuff, a 20-25% generation-over-generation performance improvement from Pascal. I think the Titan V will probably tend to be just a bit faster than a 2080 Ti on average, but not in all cases (more cores versus clock speed).

I think RTX is just the buzz word to get people to jump now that crypto is finally dying down. And that has the side benefit of appeasing Wall Street for the stock price. But I think as gamers, this is a standard generational bump akin to 4xx->5xx or 6xx->7xx with little else apart from RTX to really move the needle.

I won't be jumping, 980 is still working ok for me and what I play. It will take years for RTX to catch on beyond optional eye candy effects, if it ever does. I think it will end up being the next PhysX - basically only used when a company gets paid to include it into a title.
Ray tracing is built into the DirectX 12 API, and it will be coming to Vulkan in the future. RTX is simply the hardware pipeline NVIDIA built to take advantage of it.

https://blogs.msdn.microsoft.com/directx/2018/03/19/announcing-microsoft-directx-raytracing/
 
I don't think you can just guess perf based on cores; Turing is a whole new architecture, with a brand-new layout inside the GPU that handles graphics.

Right, that's why I'm using the TITAN V as a comparison metric. Same next-gen GPU architecture underneath (isn't Volta to Turing just a small step?). Certainly, some things may be different (that we're not aware of) - but I do not think that the RTX 2080 Ti having 25% gain over the GTX 1080 Ti based on TITAN V benches is a stretch.

https://www.techradar.com/news/nvidia-turing

"In addition to these RTX cores, the Turing Architecture will also feature Volta's Tensor core."

"What remains a mystery is whether Nvidia’s 12nm manufacturing process for Volta has trickled down to the company’s Turing lineup."

I just can't imagine Volta > Turing is a huge jump. Seems like the "huge jump" over Volta is the ray tracing stuff (and the cheaper, but comparably performing GDDR6).
 
Right, that's why I'm using the TITAN V as a comparison metric. Same next-gen GPU architecture underneath (isn't Volta to Turing just a small step?). Certainly, some things may be different (that we're not aware of) - but I do not think that the RTX 2080 Ti having 25% gain over the GTX 1080 Ti based on TITAN V benches is a stretch.
Turing borrows tech from Volta, but I don't think they're directly related at all. My understanding is that the SM array is completely new and the hardware pipeline is different. I don't think we can speculate about the performance at all due to these points. Performance in rasterized games could either completely surprise or disappoint us. Think of Volta as the "tick" (manufacturing process) in Intel's retired "tick-tock" cycle and Turing being the "tock" (microarchitecture).
 
I don't think you can just guess perf based on cores; Turing is a whole new architecture, with a brand-new layout inside the GPU that handles graphics.

The only time in NVIDIA's history we've had an appreciable IPC increase was Maxwell; the rest of the time it was zero or single digits. So we can guess: the chance of a major increase in IPC is less than 10%, based on Bayesian-style probability.
 
Turing borrows tech from Volta, but I don't think they're directly related at all. My understanding is that the SM array is completely new and the hardware pipeline is different. I don't think we can speculate about the performance at all due to these points. Performance in rasterized games could either completely surprise or disappoint us. Think of Volta as the "tick" (manufacturing process) in Intel's retired "tick-tock" cycle and Turing being the "tock" (microarchitecture).
This is why it's so frustrating Huang didn't say anything about it yesterday. If they had made large gains in IPC, he probably would've mentioned it.
 
Anybody buying this card specifically for RT is silly. It’s a first-gen RT card. I’m buying for horsepower. If RT even works and has good driver optimization/performance, that’s a bonus. I think RT falls into the “futures” category: a tech that will take a long time to offer real value.

First post with common sense. 2nd gen RT is probably gonna stomp all over 1st gen. New tech, teething issues, learnings, adoption, etc. None of this should be surprising.
 
First post with common sense. 2nd gen RT is probably gonna stomp all over 1st gen. New tech, teething issues, learnings, adoption, etc. None of this should be surprising.

This is why one has to wonder whether or not NVIDIA is pimping ray tracing because their gaming performance is only up minimally from the "last gen" Ti.
 
Looks like only 15% here

Source?

Though that's what I was expecting when I saw the specs. Kind of figured it would fall between the 1080 Ti and the Titan V. The major thing Turing has going for it is better DX12 support (async compute) and DLSS, which will be good for people pushing 4K res. I'm expecting ray tracing to be pointless for 5 years or so; I hope I'm proved wrong.
 
It's a whole new architecture, so hard to say. My guess is that most of the changes are at the architectural level above the CUDA cores, and the only real change to the cores is the parallel INT/FP paths. That's a big enough change in itself, but I have no data on how much performance it will bring to the table. The RT is really impressive, but I'm skeptical it's enough for real-time at higher refresh rates. 7nm may bring enough to the table, or perhaps the inclusion of NVLink is sufficient to allow two 2080 Tis to scale. Time will tell.

If I had to put money anywhere: the 2080 Ti will not beat the TITAN V outside of ray tracing, and even that might only be due to software limitations. For RT it should crush the TITAN V, but it remains to be seen why. Is it just software on the tensor cores, or something unique? My only reservations are if the dual INT/FP path yields far more benefit than I assume, or if NVLink scales near 100%.
 

Sorry, updated my post.

EDIT: looks like it's another armchair analysis...he used the TPU database, and I'm not sure how accurate that is given that the RTX 2080 Ti etc. aren't fully released yet.
 
Sorry, updated my post.

EDIT: looks like it's another armchair analysis...he used the TPU database, and I'm not sure how accurate that is given that the RTX 2080 Ti etc. aren't fully released yet.
The BeForTheGame keynote said the 2080 Ti has 14.2 TFLOP/s FP32, which is about 30% over the 1080 Ti. The hardware pipeline is completely new, though, so I don't think we can assume performance gains just from those numbers. Remember that the Vega 64 should crush the 1080 Ti if we go by FLOP/s.
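The 14.2 TFLOP/s figure is easy to reproduce from the published shader counts and boost clocks (peak FP32 = 2 FLOPs per core per clock via FMA). With Founders Edition boost clocks the paper gap to the 1080 Ti works out closer to 25% than 30%, so treat the exact percentage as approximate:

```python
def fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Peak FP32 throughput: 2 FLOPs (one FMA) per core per clock."""
    return 2 * cuda_cores * boost_clock_ghz / 1000

rtx_2080_ti = fp32_tflops(4352, 1.635)  # FE boost clock
gtx_1080_ti = fp32_tflops(3584, 1.582)

print(round(rtx_2080_ti, 1))  # 14.2
print(round(gtx_1080_ti, 1))  # 11.3
print(round((rtx_2080_ti / gtx_1080_ti - 1) * 100))  # 25 (% gain, on paper only)
```

And as the Vega 64 comparison shows, paper FLOP/s rarely translate one-to-one into frame rates.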
 
Anyone see this:

https://www.techradar.com/au/reviews/nvidia-geforce-rtx-2080-ti

They can't release real benchmarks, but did say this: "We also played a variety of other PC games that shall not be named, and saw performance run in excess of 100 fps at 4K and Ultra settings. Unfortunately, we also don’t know how much power these GPUs had to draw to reach this level of performance."

Not a whole lot to go on given that this could mean WoW or it could mean The Witcher 3 or FFXV Windows Ed. Still good to hear, at any rate.

This site: https://www.techpowerup.com/247005/nvidia-geforce-rtx-2080-ti-tu102-die-size-revealed lists the chip size at 775 mm², so I'm not all that worried that the ray tracing and tensor cores are going to crowd out all the more traditional elements.
 
I expect a stock TITAN V and 2080 Ti to be roughly equal myself. Fewer shaders vs. higher clock speeds, memory bandwidth close though different kinds, not sure about latency etc.

Putting either under great cooling could change it up a bit. Most RTX air coolers shouldn't suck, as even the updated FE one looks decent; the TITAN V, with its anemic (for that huge die) older FE blower, might have more growth potential coming from lower stock clocks. Turing is a newer production run but otherwise the same vendor(s) and node, I think?

I don't think the Turing uarch is really that much "newer" than Volta. I also wonder how much of the RT sauce is just adapting/tweaking existing core stuff; no math expert, but how much are ray calcs similar to ??? already done by GPU compute?

Waiting for benchmarks to find out.
 
I expect a stock TITAN V and 2080 Ti to be roughly equal myself. Fewer shaders vs. higher clock speeds, memory bandwidth close though different kinds, not sure about latency etc.

Putting either under great cooling could change it up a bit. Most RTX air coolers shouldn't suck, as even the updated FE one looks decent; the TITAN V, with its anemic (for that huge die) older FE blower, might have more growth potential coming from lower stock clocks. Turing is a newer production run but otherwise the same vendor(s) and node, I think?

I don't think the Turing uarch is really that much "newer" than Volta. I also wonder how much of the RT sauce is just adapting/tweaking existing core stuff; no math expert, but how much are ray calcs similar to ??? already done by GPU compute?

Waiting for benchmarks to find out.
Ray tracing in the purest sense means casting a ray for every pixel (or several per pixel) and following where each one bounces through the scene. The hard part is that you don't know ahead of time what each ray will hit, so the math is figuring out which triangle each ray intersects relative to the viewport, and updating all of it fast enough to provide fluid motion in the entire scene. The way Jensen explained it, they're doing this with a BVH (bounding volume hierarchy) tree, which lets each ray skip most of the scene: traversal is roughly logarithmic in triangle count rather than testing every triangle. Even so, there can be billions of ray tests for a single frame. Up until now, shaders have been doing simpler math, mostly multiplication operations on just a few million pixels every frame.
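For the curious, a BVH is just a tree of nested bounding volumes, and the whole point is that each ray can reject entire subtrees at once. A toy sketch with 1D intervals standing in for 3D bounding boxes (purely illustrative, nothing like the actual RT core hardware):

```python
# Toy 1D BVH: each node covers an interval; leaves hold "triangles" (points).
# A real GPU BVH uses 3D AABBs, but the traversal has the same shape.

class Node:
    def __init__(self, lo, hi, tris=None, left=None, right=None):
        self.lo, self.hi = lo, hi          # bounding interval of this subtree
        self.tris = tris                   # leaf payload, or None for inner nodes
        self.left, self.right = left, right

def build(points):
    """Recursively split sorted points into a balanced BVH."""
    if len(points) <= 2:
        return Node(points[0], points[-1], tris=points)
    mid = len(points) // 2
    l, r = build(points[:mid]), build(points[mid:])
    return Node(l.lo, r.hi, left=l, right=r)

def intersect(node, lo, hi, hits):
    """Collect every 'triangle' inside the query footprint [lo, hi],
    skipping whole subtrees whose bounds don't overlap it."""
    if node.hi < lo or node.lo > hi:
        return                             # ray misses this entire subtree
    if node.tris is not None:
        hits.extend(p for p in node.tris if lo <= p <= hi)
    else:
        intersect(node.left, lo, hi, hits)
        intersect(node.right, lo, hi, hits)

bvh = build(sorted([3, 9, 14, 20, 27, 31, 42, 55]))
hits = []
intersect(bvh, 10, 30, hits)
print(hits)  # [14, 20, 27]
```

That skip-a-whole-subtree test is presumably the kind of operation the dedicated RT cores accelerate in hardware.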
 
I expect a stock TITAN V and 2080 Ti to be roughly equal myself. Fewer shaders vs. higher clock speeds, memory bandwidth close though different kinds, not sure about latency etc.

Putting either under great cooling could change it up a bit. Most RTX air coolers shouldn't suck, as even the updated FE one looks decent; the TITAN V, with its anemic (for that huge die) older FE blower, might have more growth potential coming from lower stock clocks. Turing is a newer production run but otherwise the same vendor(s) and node, I think?

I don't think the Turing uarch is really that much "newer" than Volta. I also wonder how much of the RT sauce is just adapting/tweaking existing core stuff; no math expert, but how much are ray calcs similar to ??? already done by GPU compute?

Waiting for benchmarks to find out.

If you look at NVIDIA marketing material, they claim Volta has 50% better energy efficiency than Pascal. They use the same 50% number for Turing, with the wording "achieved performance". So in effect, there might be no per-CUDA-core performance difference between the two architectures, and the higher CUDA core count of the TITAN V might win out.

https://www.nvidia.com/en-us/data-center/tensorcore/

NVIDIA-Turing-vs-Pascal-Shader-Performance.jpg
 