GeForce RTX 2080 and RTX 2080 Ti Reviews

RTX Off!
 
In light of this news, I just picked up a second GTX 1080 FTW2 for $350. Going to play with SLI.
 
While I agree that it was nice that the Fury X failed gracefully, system RAM is not a substitute for GPU VRAM. Games like Rise/Shadow of the Tomb Raider, Rainbow Six: Siege, Shadow of Mordor, FFXV, and Wolfenstein II will push the 8GB framebuffer on the 2080 to the limit, and that is without considering modifications such as high-res texture packs. FFXV, as an example, will utilize 8.5GB upon startup with its optional textures.

For 1440p gaming, 8GB is sufficient for most gamers now. Even 4K with some reduced settings is OK. But that doesn't make it a non-issue: people need to be aware of the limitation so that they understand why their new RTX 2080 is suddenly performing half as fast as a 1080 Ti, and how to avoid the situation.

http://images.hardwarecanucks.com/image//skymtl/GPU/RTX2080-REVIEW/RTX2080-REVIEW-54.jpg

This is an interesting graph. I have said here several times that Wolf 2 was an exception to the rule that dynamic RAM doesn't help when there is at least 4GB of dedicated RAM. This is even the case for titles that "use" 9GB or more. However, I have never seen it be a problem with 8GB of VRAM, even in Wolf 2.

Interesting for sure. It could be that the entire Vulkan API does not play well with dynamic RAM.

Still, I would like to see at least one more graph where Wolf 2 falls on its face at 4K on the Mein Leben! settings. Every other review with these settings showed 8GB doing fine.
 
Looking again, something is off. Why does the RTX 2080 crush the Vega 64 at 1440p but lose at 4K, despite both having 8GB?
A lack of bandwidth is not the reason, as Vulkan is not that bandwidth demanding. Wolf 2 is one of the few games where the RX 580 crushes the R9 390 when both are equipped with 8GB of VRAM.
 
It's a fun graph to cherry-pick and toss at people if you run a Vega 64 *cough*, but something does seem amiss. TechPowerUp shows way better numbers at 4K in Wolf 2 for both the 2080 and 2080 Ti than any other card. I don't recall if they state the settings used, however.
 
Same goes for Guru3D and TechSpot. Pretty sure both of them were using the Mein Leben! settings too. It could be that the X299 system Hardware Canucks used doesn't like to share RAM resources. Who knows, though.
 
The reason for the 2080 falling on its face in this graph was the benchmark scene used - they did a run through the Manhattan level, which is notoriously heavy for VRAM usage. When you crank things to 4k and use the "Mein Leben!" detail preset, there are points where framebuffer usage (actual usage, not just allocation) exceeds 8GB and the cards begin pulling from system RAM to complete the frame. They explained it briefly on the review page (meant to link it, sorry) but in Canucks style they don't really do much of a deep dive and sort of gloss over it in passing. I honestly wish they did a bit more exploration. I don't know if they used the "Image Streaming" option or not, but if so that would also help explain the result. Image streaming allows further texture precaching, but due to how the engine pre-allocates it will utilize jumbo portions of VRAM.

https://www.hardwarecanucks.com/for...a-geforce-rtx-2080-ti-rtx-2080-review-17.html

As you mentioned, Vulkan can be pretty picky about how it utilizes VRAM "spaces" but in this case it's just trying to put 10 pounds of dirt in a 5 pound bag. It's quite possible that the Vega 64 does better because of the faster memory bandwidth (483GB/s for Vega 64, 448GB/s for RTX 2080) allowing it to scramble resources around slightly faster, but they both fell on their face pretty hard and are having to swap to system RAM.
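To put rough numbers on the "10 pounds of dirt" point, here's a back-of-envelope sketch of 4K memory use. Every figure in it (the render-target layout, the pool sizes) is an illustrative assumption, not a measurement from Wolfenstein II:

```python
# Back-of-envelope VRAM estimate for 4K rendering with a large streamed
# texture pool. All sizes below are illustrative assumptions, not
# measured values from Wolfenstein II.

WIDTH, HEIGHT = 3840, 2160            # 4K resolution
PIXELS = WIDTH * HEIGHT

# Render targets: assume a deferred-style G-buffer with five
# 8-byte-per-pixel attachments, a 32-bit depth buffer, and an
# RGBA16F HDR lighting buffer.
gbuffer = 5 * PIXELS * 8
depth = PIXELS * 4
hdr_output = PIXELS * 8
render_targets = gbuffer + depth + hdr_output   # ~411 MB

# Streamed texture pool: engines that precache aggressively (e.g. with
# an "Image Streaming" style option) pre-allocate a big pool up front;
# assume 7 GB here.
texture_pool = 7 * 1024 ** 3

# Geometry, shaders, driver/OS overhead: assume ~1 GB combined.
other = 1 * 1024 ** 3

total = render_targets + texture_pool + other
print(f"render targets: {render_targets / 1024**2:.0f} MB")
print(f"estimated total: {total / 1024**3:.2f} GB")
```

Under these assumptions the total lands a little over 8GB, and anything past the physical framebuffer has to spill into system RAM over PCIe, which is exactly the kind of cliff the graph shows.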


I just wanted to point out that there are definitely scenarios in which having more physical VRAM is a huge benefit, and I personally think it will become more of an issue going forward for people ponying up for an RTX 2080 expecting to just crank every slider they find and get a buttery smooth result. It's just not realistic. nvidia sort of painted themselves into a corner with their memory bus design vs the 2080 Ti, forcing them to either use 8GB or double down to an unnecessarily expensive 16GB.

Right now it's an outlier event, but I won't be surprised to see it happen again with other tests.

[edit: Put the "K" back in my spelling of Vulkan]
 
Damn, that is nuts. With the next Doom coming out on Vulkan, things will not get any better.

The 2080 looks even worse against the 1080 Ti now.

AMD might just be better with memory usage here. I don't think that bandwidth is all that important with Vulkan, relatively speaking. The GTX 1070 easily beats the 980 Ti at all settings, and the RX 580 beats the R9 390X no problem as well.
 
Yeah, it's not a good look. Don't get me wrong, for someone looking to jump from an older generation (or a new system build) the RTX 2080 is preferable to the 1080 Ti, provided that the price gap isn't larger than about $100 or so. It's faster overall, if only by a small margin, and has the potential for RTX and DLSS implementation. It gets much trickier if you're given the choice between a ~$550 1080 Ti and an $800 2080. In that scenario, I simply don't see the value, and the 8GB VRAM issue becomes a secondary consideration.

On a semi-tangent, some of the first SLI RTX 2080 Ti videos hit the net this morning, and while none of the reviewers seem to have done a technical analysis of NVLink, it doesn't look like the hoped-for VRAM pooling feature has been implemented in the RTX series cards. On the Quadro line, multiple cards can be made into one contiguous memory space - e.g., two RTX 2080-class cards would yield an 8+8=16GB framebuffer. It was hoped that this feature would also be made available on the RTX cards due to the way it was discussed, albeit briefly, during the RTX launch keynote. JayzTwoCents and Steve Burke from Gamers Nexus did rundown videos on SLI performance, and one of the shots from Jay's benchmarks captured the menu from a Ghost Recon: Wildlands run as reporting 10952MB of VRAM on a 2x2080 Ti setup. So that's a bit disappointing. Maybe we'll see some further development from nvidia or developers. On the positive side, SLI performed surprisingly well and turned out some monstrous framerates at 4K. For those who absolutely need the latest and greatest, there's no denying the brute force of these things.

(timestamp for Ghost Recon result screen)

Some honest to god potential here for solid 120-144Hz gaming at 4k. That's pretty cool from a tech perspective.
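The pooling observation is easy to sanity-check with quick arithmetic: true NVLink pooling should expose roughly twice one card's 11GB, but the captured figure sits right at single-card capacity (how much the driver reserves for itself is an assumption here, not a known value):

```python
# Sanity check on the reported VRAM from the 2x2080 Ti run.
per_card_mb = 11 * 1024        # RTX 2080 Ti: 11GB per card
pooled_mb = 2 * per_card_mb    # what true NVLink pooling would expose
reported_mb = 10952            # figure captured from the Wildlands menu

print(f"per card: {per_card_mb} MB, pooled: {pooled_mb} MB, reported: {reported_mb} MB")

# The reported value is within a few hundred MB of a single card's
# capacity and nowhere near 22GB, so the game is seeing one 11GB
# framebuffer rather than a pooled space.
assert abs(reported_mb - per_card_mb) < abs(reported_mb - pooled_mb)
```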
 