First RTX 2080 Benchmarks Hit the Web with DLSS

FrgMstr

Apparently some of the scuttlebutt yesterday about the RTX having framerate issues ruffled some feathers over at NVIDIA. Through the entire RTX presentation we were never once given any kind of framerate metrics to digest, and that worried a lot of people. NVIDIA is leaking out some numbers today in order to counter the "RTX is gonna suck" argument. To add a bit of clarity on the numbers below: DLSS means "Deep Learning Super-Sampling" is enabled. This is a new type of antialiasing that utilizes the Turing Tensor Cores. Think of this like the AI looking at a scene, deciding where there should not be jaggies, and fixing them. It would seem this is done in post, but we are not sure exactly of that. The slides also outline the specific games and what framerates those can be played at. Of course this is direct from NVIDIA, so consider it a best case scenario.

Slides.
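
For a rough feel of the "done in post" idea, here is a toy sketch. To be clear, this is not how DLSS actually works internally (that part is a trained neural network running on the Tensor Cores); it just shows that jaggies can be softened from the finished image alone, with no extra geometry samples. Everything here is illustrative.

```python
# Toy post-process AA: soften detected edges in a finished frame.
# NOT how DLSS works internally; purely illustrative of "AA in post."
import numpy as np

def postprocess_aa(img):
    """img: float array (H, W, 3) in [0, 1]; returns a smoothed copy."""
    luma = img @ np.array([0.299, 0.587, 0.114])          # per-pixel luminance
    gx = np.abs(np.diff(luma, axis=1, prepend=luma[:, :1]))
    gy = np.abs(np.diff(luma, axis=0, prepend=luma[:1, :]))
    edge = np.clip((gx + gy) * 4.0, 0.0, 1.0)[..., None]  # where the jaggies are

    # 3x3 box blur built from shifted sums of an edge-padded copy
    h, w = img.shape[:2]
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blur = sum(pad[y:y + h, x:x + w] for y in range(3) for x in range(3)) / 9.0

    # Blend toward the blur only where edges were detected
    return img * (1.0 - edge) + blur * edge
```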
 
So that looks like a ~45% average FPS increase, not too bad. The 2080 Ti over the 1080 Ti should be about the same, so that would put the 2080 Ti around 10% faster than the Titan V if it scales linearly.

If this is truly the case, the price makes sense now.
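
Back-of-the-envelope version, with the caveat that the ~30% Titan V lead over the 1080 Ti is my own assumption from typical review numbers, not anything from these slides:

```python
# Back-of-the-envelope check; the 1.30x Titan V figure is assumed,
# not from NVIDIA's slides.
gain_2080_over_1080 = 1.45   # ~45% average from the slide, no DLSS
titanv_over_1080ti  = 1.30   # assumed Titan V lead over the 1080 Ti

# If the 2080 Ti sees the same ~45% gain over the 1080 Ti:
print(gain_2080_over_1080 / titanv_over_1080ti)  # ~1.12, i.e. roughly 10% faster
```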
 
So that looks like a ~45% average FPS increase, not too bad. The 2080 Ti over the 1080 Ti should be about the same, so that would put the 2080 Ti around 10% faster than the Titan V if it scales linearly.

If this is truly the case, the price makes sense now.
The price only makes sense if every % of additional performance = each additional % more in cost vs the previous gen.
 
The very top says: RTX 2080: 2X 1080

...are they implying that they were running two 1080s in SLI when benchmarking this? I too enjoy publishing numbers with zero context and no labeled axes.
 
The very top says: RTX 2080: 2X 1080

...are they implying that they were running two 1080s in SLI when benchmarking this? I too enjoy publishing numbers with zero context and no labeled axes.

What that header is supposed to mean is that with DLSS enabled, a 2080 is over 2x the speed of a 1080. You can see the bars rising above 2.0 on the graph.
 
The very top says: RTX 2080: 2X 1080

...are they implying that they were running two 1080s in SLI when benchmarking this? I too enjoy publishing numbers with zero context and no labeled axes.

I think they're saying the 2080 can hit 2x the perf of a 1080.

Yes, the pricing sucks on these cards. The performance looks like it's going to be amazing overall, even without the new features.
 
Can you not read a stacked column chart?

Chart titles typically don't contain any conclusions; they label the data being displayed.

What that header is supposed to mean is that with DLSS enabled, a 2080 is over 2x the speed of a 1080. You can see the bars rising above 2.0 on the graph.

I'm sure you are correct, but that chart is worthy of being posted on https://www.reddit.com/r/dataisugly/ and provides absolutely zero additional information beyond "1 ... 1.5 ... 2" and a list of titles running at 4K with little additional context.
 
The chart makes perfect sense, really. You don't want all sorts of different FPS numbers all over the place, since game FPS varies wildly. Relative performance percentages keep everything nice, clean, and easy to understand.
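
For example (the FPS numbers here are made up, just to show the normalization):

```python
# Normalizing each game's FPS to the baseline card is all these
# relative charts are doing. FPS values below are made up.
gtx1080_fps = {"FFXV": 38.0, "PUBG": 55.0, "SotTR": 44.0}
rtx2080_fps = {"FFXV": 53.0, "PUBG": 74.0, "SotTR": 55.0}

for game, base in gtx1080_fps.items():
    print(f"{game}: {rtx2080_fps[game] / base:.2f}x")
# Wildly different absolute FPS collapse onto one comparable scale.
```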
 
Chart titles typically don't contain any conclusions, they label the data being displayed.


I'm sure you are correct, but that chart is worthy of being posted on https://www.reddit.com/r/dataisugly/ and provides absolutely zero additional information beyond "1 ... 1.5 ... 2" and a list of titles running at 4K with little additional context.
They provide exact FPS in the second slide. You also don't need data labels everywhere in a chart; that is what an accompanying data table is for. A chart is just supposed to be a quick way to see a trend or a relative comparison. Admittedly, they could have included horizontal axis lines to make it easier to read, but it's not a big deal. I can see that FFXV is about 40% faster, PUBG a little less, Shadow of the Tomb Raider about 25% faster, etc.
 
If DLSS has good enough image quality that its effect is almost imperceptible compared to traditional rasterization with AA, then this gen is a win for NVIDIA. Can't wait for a [H] review and some major pixel peeping
 
I hate these goddamn "ratio" graphs they put out...just give the fucking FPS already.

That said, 4K 60+ FPS in those games on the 2080 (not even the Ti) seems pretty decent. Not $800 decent, but still...
 
Too many unknowns with that chart, plus it's not from a third party, so I wouldn't trust it.

Of course, and NVIDIA gets that too, so they put out something vague but with a general sense that, yeah, these new parts should be more than slight incremental performance upgrades, especially when using something like DLSS.
 
Too many unknowns with that chart, plus it's not from a third party, so I wouldn't trust it.

That's why you charge your pre-order to a CC and wait for reviews. If it sucks, then you cancel the pre-order or return the product to the manufacturer. If not, then you sell your current graphics card, enjoy, and brag to your nerd friends about having the latest and greatest while they scour Amazon for in-stock RTX cards.
 
I wasn't one of those complaining about it. You can never trust vendor supplied benchmark numbers.
I don't necessarily disagree with you, but it can be trusted as a general guideline. I've been speculating, based on the numbers, that it would be around 30-40% faster in traditional rasterized games without DLSS, and these slides fall in line with that estimate.
 
So that looks like a ~45% average FPS increase, not too bad. The 2080 Ti over the 1080 Ti should be about the same, so that would put the 2080 Ti around 10% faster than the Titan V if it scales linearly.

The Tis are different chips: 10% lower clock speed with 48% more cores, so they are going to scale differently.

Going to have to wait for the [H] reviews to see how [H]ard these get us...
 
If I am understanding this right, DLSS offloads the AA to the tensor cores, while standard AA is done on the traditional shader cores. So when they disable standard AA in favor of the offloaded DLSS, that frees up more cores for everything else, hence the faster speeds. Is that about right?
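
If that reading is right, the win is plain frame-time math; the 2.5 ms AA cost below is a number I made up purely to illustrate:

```python
# Reclaimed frame time if the shader-core AA cost moves to the tensor
# cores. The 2.5 ms figure is invented purely for illustration.
frame_ms     = 16.7   # one frame at ~60 FPS
shader_aa_ms = 2.5    # hypothetical cost of traditional AA on shader cores

fps_before = 1000.0 / frame_ms
fps_after  = 1000.0 / (frame_ms - shader_aa_ms)  # AA overlapped on tensor cores
print(f"{fps_before:.1f} -> {fps_after:.1f} FPS")  # ~59.9 -> ~70.4
```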
 
You are free to spend your money how you see fit, but your flawed logic isn't based in history or reality.

lol, you know nothing about NVIDIA's development costs or TSMC's production costs for large chips. You're just a noob complaining about price, which seems to be pretty common these days.

Maybe the forum for you is at www.Softforum.com
 
People were complaining that NVIDIA didn't provide any performance numbers and they still are now that they did...

The information provided is interesting, but quite lean on the details. Here are the things I am wondering:
  • Is the 1080 running any AA? If so, which AA level and type? Is the 1080 running AA in both comparisons or just when DLSS is turned on for the 2080?
  • Is DLSS comparable in appearance to a traditional AA? If so, which type?
  • Will the same DLSS benefits scale similarly at lower resolutions?
  • Will using something like DLSS at the same time as ray tracing impact the performance of one or the other? (they both use the tensor cores)
 
People were complaining that NVIDIA didn't provide any performance numbers and they still are now that they did...

But...but...maths. :p

I get why they presented the performance numbers this way, though. Giving direct FPS numbers for those titles would make it a lot easier to note that the 2080 seems to be only marginally better than a 1080 Ti in current graphical workloads. That jump in the past was reserved for the x70 cards, not the x80 cards.
 
I am all for new technology as long as the IQ stays the same. Can't wait to see what Kyle finds with DLSS. IMO it sounds like a good idea to offload AA to the tensor cores. Can't wait for reviews now!
 
Well, it's an interesting approach. What they're basically doing is running the game at 4K with AA off, then enabling DLSS, which lets the tensor cores do the AA. In other words, the tensor cores are being used as dedicated anti-aliasing hardware. That raises the question, though: is this method off the table if the game uses ray tracing, since those cores would be otherwise busy?
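
A toy way to frame that contention question (every millisecond figure here is invented, since nobody outside NVIDIA knows the real costs):

```python
# Toy budget model for the tensor-core contention question above.
# All costs are invented; real numbers are unknown outside NVIDIA.
tensor_budget_ms = 16.7   # tensor-core time available per 60 FPS frame
dlss_ms          = 1.5    # hypothetical DLSS pass cost
rt_denoise_ms    = 4.0    # hypothetical ray-tracing denoise cost

used = dlss_ms + rt_denoise_ms
print("fits" if used <= tensor_budget_ms else "over budget",
      f"({used:.1f} / {tensor_budget_ms:.1f} ms)")
```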
 
This launch seems really strange. Usually we see benchmarks the day of or early the day after launch. Now we just get graphs and stills? Something isn't right.
 
But...but...maths. :p

I get why they presented the performance numbers this way, though. Giving direct FPS numbers for those titles would make it a lot easier to note that the 2080 seems to be only marginally better than a 1080 Ti in current graphical workloads. That jump in the past was reserved for the x70 cards, not the x80 cards.

They did give direct FPS numbers, people. Click to the right to see the 2nd image. :)
 
The information provided is interesting, but quite lean on the details. Here are the things I am wondering:
  • Is the 1080 running any AA? If so, which AA level and type? Is the 1080 running AA in both comparisons or just when DLSS is turned on for the 2080?
  • Is DLSS comparable in appearance to a traditional AA? If so, which type?
  • Will the same DLSS benefits scale similarly at lower resolutions?
  • Will using something like DLSS at the same time as ray tracing impact the performance of one or the other? (they both use the tensor cores)

 
But a 1080 Ti at $700 (well, a little less now) can't do this, so $800 isn't out of whack. The issue with the pricing on these RTX parts is that they aren't subsuming the old price points of the 10xx series.

I feel as though $700 for a 1080 Ti was overpriced in the first place. Hell, I felt $600 was overpriced for the 1080, but I bought one because I had to have VR. :p
 