GeForce RTX 2080 SUPER Launch Review Roundup

If the RTX 2080 Super were bandwidth starved, increasing the bandwidth would help, duh!!! Nvidia is not as bandwidth limited as AMD since it started using all its compression technologies.

Only in mining did bandwidth make a huge difference; gaming-wise it doesn't affect much, if anything at all.

This is a rather arrogant claim. I did not see any reviews where memory overclocking was isolated for performance gains. Even then, performance can degrade as clocks are pushed to their limits. The only way to validate this 'claim' is to reduce memory clocks and see if performance degrades.

You can try to defend this card all you want, but $700 for an 8GB card is pathetic in 2019, even if you want to ignore the bandwidth issues.

We will see if nVidia's compression pixie dust will keep minimum frames high in upcoming titles such as CP2077 or Doom Eternal.

Yep, $700 for 8GB of 256-bit memory. Meanwhile, the $380 One X I bought 6 months ago was blessed with 12GB and 384 bits. GDDR5, but you get the point. Hell, we were given 8GB 512-bit cards for $350 years ago, but we can't get 12GB today because... GDDR6!
 
This is a rather arrogant claim. I did not see any reviews where memory overclocking was isolated for performance gains. Even then, performance can degrade as clocks are pushed to their limits. The only way to validate this 'claim' is to reduce memory clocks and see if performance degrades.

You can try to defend this card all you want, but $700 for an 8GB card is pathetic in 2019, even if you want to ignore the bandwidth issues.

We will see if nVidia's compression pixie dust will keep minimum frames high in upcoming titles such as CP2077 or Doom Eternal.

Yep, $700 for 8GB of 256-bit memory. Meanwhile, the $380 One X I bought 6 months ago was blessed with 12GB and 384 bits. GDDR5, but you get the point. Hell, we were given 8GB 512-bit cards for $350 years ago, but we can't get 12GB today because... GDDR6!

I find it odd that the RTX 2080 Super is second only to the 2080 Ti and Radeon VII in memory bandwidth, yet you claim it's bandwidth starved...

Curious thing: the Radeon VII has double the bandwidth, yet it's slower than the 2080 Super. And now the 5700 XT has less than half of the Radeon VII's bandwidth and still manages to beat it in some games.


And compression pixie dust has helped with pretty much every game so far; I don't think the story will be any different with CP2077 or Doom.
 
... Nvidia is not as bandwidth limited as AMD since it started using all its compression technologies.
I find it odd that the RTX 2080 Super is second only to the 2080 Ti and Radeon VII in memory bandwidth, yet you claim it's bandwidth starved...

Curious thing: the Radeon VII has double the bandwidth, yet it's slower than the 2080 Super. And now the 5700 XT has less than half of the Radeon VII's bandwidth and still manages to beat it in some games.

A lot to "decompress" here.
You make the argument about Nvidia's compression technology, yet you compare an AMD card as 'proof' that more bandwidth does not mean faster. This makes zero sense.

That compression technology you speak of is 4th-gen delta color compression, introduced with Pascal.

So with that, we can still use the 1080 Ti as a frame of reference, as it has similar theoretical bandwidth to the 2080 Super.

Despite the 1080 Ti not being seen as bandwidth limited, let's compare it to a forgotten card, the Titan Xp:
The Xp had just 7% more GFLOPS than the 1080 Ti, so the BEST CASE scenario would be 7% faster, as it should not be bandwidth limited.
However, the Xp does enjoy a full 13% more bandwidth than the Ti.

Amazingly, the Xp was faster by around 13% in every game tested except Warhammer and Doom (Vulkan). Those were 2017 games.
https://www.tomshardware.com/reviews/nvidia-titan-xp,5066.html
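A quick back-of-the-envelope check of those two percentages, using approximate boost-clock figures from public spec sheets (a sketch, not benchmark data):

```python
# Sanity check of the Titan Xp vs. GTX 1080 Ti comparison above.
# Core counts and clocks are approximate public spec-sheet figures.

def tflops(cuda_cores: int, boost_mhz: float) -> float:
    """Theoretical FP32 throughput: cores * clock * 2 ops/clock (FMA)."""
    return cuda_cores * boost_mhz * 2 / 1e6

ti_flops = tflops(3584, 1582)   # GTX 1080 Ti: ~11.3 TFLOPS
xp_flops = tflops(3840, 1582)   # Titan Xp:    ~12.1 TFLOPS

ti_bw, xp_bw = 484.0, 547.7     # GB/s: 352-bit @ 11 Gbps vs. 384-bit @ 11.4 Gbps

print(f"compute advantage:   {xp_flops / ti_flops - 1:+.1%}")   # ~ +7%
print(f"bandwidth advantage: {xp_bw / ti_bw - 1:+.1%}")         # ~ +13%
```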

Perhaps you would have better luck cheerleading for this card in the YouTube comment section.
 
This really hammers the bandwidth point home:
https://babeltechreviews.com/the-rt...wdown-highlighting-the-architectural-changes/

Both the 2080 and 2080 Super hit a max core clock of 2055 MHz. When run at the same clocks, performance was the same, meaning the extra 2 SMs are doing LITERALLY NOTHING. A 2080 with faster memory would have been able to achieve the same results.
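For scale, here is roughly what those two extra SMs are worth on paper at a matched core clock (core counts per the public specs; a sketch, assuming Turing's 64 FP32 cores per SM):

```python
# Theoretical FP32 throughput at a matched 2055 MHz core clock.
# RTX 2080: 46 SMs (2944 CUDA cores); RTX 2080 Super: 48 SMs (3072 cores).

CLOCK_MHZ = 2055

def tflops(cores: int, mhz: float) -> float:
    return cores * mhz * 2 / 1e6  # 2 ops per core per clock (FMA)

plain = tflops(2944, CLOCK_MHZ)
sup = tflops(3072, CLOCK_MHZ)
print(f"2080:       {plain:.2f} TFLOPS")
print(f"2080 Super: {sup:.2f} TFLOPS (+{sup / plain - 1:.1%})")  # only ~ +4%
```

Even on paper the extra SMs are only worth about 4%, so matched-clock results landing within noise of each other is consistent with the bottleneck being elsewhere.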

Well, at least nVidia didn't give us DDR4 this generation...
 
Without having looked at the specs recently... the various cards you are comparing on memory bandwidth (1080 Ti, 2080, 2080 Super) have different numbers of CUDA cores, running at different speeds, and (probably) different SM counts, whatever. It isn't any single factor.

I tend to agree with the point that memory bandwidth isn't as big of an issue as you might think, because of the points about HBM2 cards having lots of bandwidth that does nothing appreciable for game performance as far as anyone can tell. Nvidia had a 384-bit memory bus on the 8800 GTX, iirc. Newer cards had a smaller bus. Why? Faster memory doesn't need as wide of a bus.

Someone should take a 5700 XT and a 2080 Super, downclock the memory by 10%, and compare performance before and after. Good way to settle the point one way or the other. And at any rate, at some point memory bandwidth is meaningless if it sits unused.
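For what it's worth, here is a minimal sketch of how you'd read the numbers from that proposed test; the FPS values are hypothetical placeholders, not measurements:

```python
# Interpreting a -10% memory downclock test (hypothetical numbers only).

def bandwidth_sensitivity(fps_stock: float, fps_downclocked: float,
                          bw_cut: float = 0.10) -> float:
    """Fraction of the bandwidth cut that showed up as lost performance.
    Near 1.0 -> fully bandwidth limited; near 0.0 -> bandwidth sits unused."""
    perf_loss = 1 - fps_downclocked / fps_stock
    return perf_loss / bw_cut

print(bandwidth_sensitivity(100.0, 99.0))  # 0.1 -> barely bandwidth limited
print(bandwidth_sensitivity(100.0, 91.0))  # 0.9 -> heavily bandwidth limited
```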
 
You can try to defend this card all you want, but $700 for an 8GB card is pathetic in 2019, even if you want to ignore the bandwidth issues.

We will see if nVidia's compression pixie dust will keep minimum frames high in upcoming titles such as CP2077 or Doom Eternal.

Yep, $700 for 8GB of 256-bit memory. Meanwhile, the $380 One X I bought 6 months ago was blessed with 12GB and 384 bits. GDDR5, but you get the point. Hell, we were given 8GB 512-bit cards for $350 years ago, but we can't get 12GB today because... GDDR6!

Simple solution. Don't buy a 2080 if it bothers you that much.

NVidia has three different performance-level cards now, all running GDDR6 on a 256-bit bus: basically the entire "Super" line. It's no surprise that the top-end "Super" card is close to a memory bottleneck, which is why NVidia included faster memory on the 2080 Super. The lesser cards should have plenty of bandwidth.

Engineering is all about using "just enough". Overkill is just wasted resources, so it looks like NVidia did a decent job, and I don't really see a big issue this generation. If GDDR6 memory speeds don't increase significantly by next generation, then I expect a slightly wider bus might make it to the RTX 3080 level. Say, 320-bit 10GB cards. Maybe HBM will make a comeback.
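The bus math behind that is simple; a sketch, where the 320-bit card is the speculation from the post above, not an announced product:

```python
# Peak bandwidth = (bus width in bits / 8) * effective data rate in Gbps.

def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(256, 14.0))   # RTX 2080:       448 GB/s
print(bandwidth_gbs(256, 15.5))   # RTX 2080 Super: 496 GB/s
print(bandwidth_gbs(320, 14.0))   # speculative 320-bit card: 560 GB/s
```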
 
Without having looked at the specs recently... the various cards you are comparing on memory bandwidth (1080 Ti, 2080, 2080 Super) have different numbers of CUDA cores, running at different speeds, and (probably) different SM counts, whatever. It isn't any single factor.

I tend to agree with the point that memory bandwidth isn't as big of an issue as you might think, because of the points about HBM2 cards having lots of bandwidth that does nothing appreciable for game performance as far as anyone can tell. Nvidia had a 384-bit memory bus on the 8800 GTX, iirc. Newer cards had a smaller bus. Why? Faster memory doesn't need as wide of a bus.

Someone should take a 5700 XT and a 2080 Super, downclock the memory by 10%, and compare performance before and after. Good way to settle the point one way or the other. And at any rate, at some point memory bandwidth is meaningless if it sits unused.

Alright, you guys keep comparing apples to oranges by pulling in older tech, other companies, and especially HBM. If you really want to get into the weeds:
https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/8

There is a definite minimum threshold of bandwidth to GFLOPS, or else Nvidia would just use 128-bit 8GB cards instead of 192-bit 6GB cards, as that would actually be cheaper and more marketable.

Within the 10, 16, and 20 series (which use similar compression tech), it seems that a minimum of about 45 GB/s is needed for each TFLOP. Yeah, it will vary between games and APIs, but it seems to be the baseline that was aimed for across most of the products; see the sketch after this list.

- The 1080 Ti actually falls below this, which is why we see such large gains with the Titan Xp despite such a small TFLOP increase.

- The 1650 falls WAY below this, which is why it can barely beat a 1050 Ti despite some 40% more TFLOPS. (I was disappointed a 1650 Ti with GDDR6 was not released, as it would have been a stellar SFF card.)

- The 1070 Ti falls short, which is why the 1070 often lands much closer to the 1070 Ti than to the 1060. (Vulkan seems to be the exception.)

- The 2080 Super is right there, but as I stated before, there are most likely performance and latency issues with having that GDDR6 so strung out.
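Here is that ratio worked out for the cards mentioned, using approximate boost-clock TFLOPS and bandwidth figures from public spec sheets (ballpark numbers, not measurements):

```python
# GB/s of bandwidth per theoretical TFLOP; ~45 is the suggested baseline.
cards = {
    "GTX 1080 Ti":    (11.3, 484.0),   # falls below ~45
    "Titan Xp":       (12.1, 547.7),   # right at it
    "GTX 1070":       (6.5,  256.0),
    "GTX 1070 Ti":    (8.2,  256.0),   # falls well short
    "RTX 2080 Super": (11.2, 496.0),   # right there
}
for name, (tflops, gbs) in cards.items():
    print(f"{name:15s} {gbs / tflops:5.1f} GB/s per TFLOP")
```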
 
The 1660 is the 1650 Ti at this point, for $230 on average. The 1650 has GDDR5, too, vs GDDR6 on the two 1660 cards.
 
The 1660 is the 1650 Ti at this point, for $230 on average. The 1650 has GDDR5, too, vs GDDR6 on the two 1660 cards.

There is a big jump from the 1650 to the 1660. The 1650 has 128-bit GDDR5 while the 1660 has 192-bit GDDR5. Only the 1660 Ti has GDDR6.

Given the small die size of TU117, it works best for SFF. However, it is very bandwidth limited, and a 128-bit GDDR6 version of the card would be ideal, as the numbers below suggest.
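The gap is easy to put numbers on; a sketch using the spec-sheet data rates, where the GDDR6 1650 is the hypothetical card suggested above:

```python
# Peak bandwidth = (bus width in bits / 8) * effective data rate in Gbps.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(128, 8.0))    # GTX 1650:    128-bit GDDR5 -> 128 GB/s
print(bandwidth_gbs(192, 8.0))    # GTX 1660:    192-bit GDDR5 -> 192 GB/s
print(bandwidth_gbs(192, 12.0))   # GTX 1660 Ti: 192-bit GDDR6 -> 288 GB/s
print(bandwidth_gbs(128, 12.0))   # hypothetical GDDR6 1650  -> 192 GB/s
```

A 128-bit GDDR6 TU117 would match the 1660's bandwidth on the 1650's narrower bus.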
 
I am really tired of hearing this nonsense from some people. The 1660 is a 120-watt card and a MASSIVE performance increase over the previous 1050 Ti. In no way, shape, or form would the 1660 have been a 50-level card.

Of course it would, if the 2060 weren't $50-100 more expensive than the 1060 was. It's currently selling at GTX 960 prices from 2015.
 
Of course it would, if the 2060 weren't $50-100 more expensive than the 1060 was. It's currently selling at GTX 960 prices from 2015.
Are you truly that out of the loop about video cards? AGAIN, the 1660 is a 120-watt card, so that alone should give you a hint that it would NOT be a 1050/1050 Ti replacement. And also AGAIN, the 1660 is way too big of a performance jump over the 1050/1050 Ti for them to give you that at the same price point. :rolleyes:
 
Are you truly that out of the loop about video cards? AGAIN, the 1660 is a 120-watt card, so that alone should give you a hint that it would NOT be a 1050/1050 Ti replacement. And also AGAIN, the 1660 is way too big of a performance jump over the 1050/1050 Ti for them to give you that at the same price point. :rolleyes:

Exactly, the entire product stack shifted with the 20 series. The 1660 and 1660 Ti sort of replace the 1060 6GB, as one battles the RX 580 and the other the RX 590. The 2060 is definitely more of a 1070 replacement.

That leaves a big hole at 1060 3GB/RX 570 performance and price. TU117 would make that happen with GDDR6 while staying closer to a 75W TDP. There was a lot of speculation about the 1650 Ti a few months ago, but I haven't heard anything since. I guess all the focus went into the Super cards.
 
Are you truly that out of the loop about video cards? AGAIN, the 1660 is a 120-watt card, so that alone should give you a hint that it would NOT be a 1050/1050 Ti replacement. And also AGAIN, the 1660 is way too big of a performance jump over the 1050/1050 Ti for them to give you that at the same price point. :rolleyes:

That's their problem, not the customer's. Nvidia is the one who jacked up prices across the board. The 1650 is apparently the new 1050 upgrade and is a pointless SKU (it doesn't even have the new encoder) except to keep Nvidia in that performance tier instead of ceding it to 3-year-old RX 570/580 cards. Sorry, maybe I am out of the loop in that I don't think paying $220+ for a 1660 is a good value, nor is a 1660 Ti. I don't give a good gosh darn about your power requirements; I'm going by price points, because at the end of the day, that is what matters most: what performance you get for the money you pay.
 
I got one of these for my son last week, an EVGA 08g-p4-3182-kr model. I had a budget of about $800 to spend. If CrossFire were still a thing, I would've bought 2 x 5700 XTs, but then again he loves using the Nvidia Highlights feature, so I don't know how he'd feel about that. He was upgrading from a 1060 6GB, so needless to say, this was a big upgrade for him. Now he's got the best GPU in the house :(. His old man is still hanging on to a 1080 Ti.
 
That's their problem, not the customer's. Nvidia is the one who jacked up prices across the board. The 1650 is apparently the new 1050 upgrade and is a pointless SKU (it doesn't even have the new encoder) except to keep Nvidia in that performance tier instead of ceding it to 3-year-old RX 570/580 cards. Sorry, maybe I am out of the loop in that I don't think paying $220+ for a 1660 is a good value, nor is a 1660 Ti. I don't give a good gosh darn about your power requirements; I'm going by price points, because at the end of the day, that is what matters most: what performance you get for the money you pay.
And it's not my fault that you can't comprehend reality. Anyone with a clue knows that the 50-level cards are not going to go from 75-watt to 120-watt cards. And if you live in the real world, then you would know damn well they were not going to DOUBLE performance at the same price point on a 50-level card, especially only going from 16nm to 12nm.

At this point you are just too clueless about this subject to realize how foolish it is to think you were going to get TWICE the performance for the same price and have the power consumption go to 120 watts. That is NOT the way it works in the real world, especially on lower-end 50-series cards.
 
Same thing. If used cards had worse bang/buck, no one would buy them. Used prices always have to fall to offer better value than new, so arguing that used has better value is pointless.

Yeah, let's just agree to disagree then...
 
And it's not my fault that you can't comprehend reality. Anyone with a clue knows that the 50-level cards are not going to go from 75-watt to 120-watt cards. And if you live in the real world, then you would know damn well they were not going to DOUBLE performance at the same price point on a 50-level card, especially only going from 16nm to 12nm.

At this point you are just too clueless about this subject to realize how foolish it is to think you were going to get TWICE the performance for the same price and have the power consumption go to 120 watts. That is NOT the way it works in the real world, especially on lower-end 50-series cards.

So it's OK for Nvidia to shift their model numbers up a level but not down a level? The 2060 is at xx70 prices now, but the xx50 cards should still be $150-200 despite 3-year-old competition from AMD, and their own older cards, beating them on price/perf? The only reason for the 1650 to exist is for the oh-so-vaunted 75W-only gamers. Who don't even get the new encoder chip.
 
So it's OK for Nvidia to shift their model numbers up a level but not down a level? The 2060 is at xx70 prices now, but the xx50 cards should still be $150-200 despite 3-year-old competition from AMD, and their own older cards, beating them on price/perf? The only reason for the 1650 to exist is for the oh-so-vaunted 75W-only gamers. Who don't even get the new encoder chip.
You really do not know when to stop. No one with the least bit of logic and sense would expect the 50-class card to DOUBLE in performance, go from a 75-watt TDP to 120 watts, and keep a similar price point. The 50-class cards are lower power, so 120 watts would not have been suitable. And it is beyond ridiculous to think they would give you twice the performance over last gen on a card like that. At this point you will never understand how foolish you look by thinking the 1660 should have been a 50-class card. :rolleyes:
 
Nvidia overcharges their customers.

I paid $800 for my RTX 2080. I doubt I'll ever buy an Nvidia product again, mostly because of the lock-in structure of G-Sync.
 
After the crypto craze... I don't see as many people with cards 3 gens back as there used to be. Add in the lower performance bump, and I expect this to be bad for sales. Too many people have decent cards, either due to interest in mining or at least from catching deals on used cards after the bubble burst.
 
Nvidia overcharges their customers.

I paid $800 for my RTX 2080. I doubt I'll ever buy an Nvidia product again, mostly because of the lock-in structure of G-Sync.

You pretty much always have to wait until round 2 with nVidia. I bought the 1080 and then a few months later the 1080 Ti came out at the same price. Was not real happy about that.
 
You really do not know when to stop. No one with the least bit of logic and sense would expect the 50-class card to DOUBLE in performance, go from a 75-watt TDP to 120 watts, and keep a similar price point. The 50-class cards are lower power, so 120 watts would not have been suitable. And it is beyond ridiculous to think they would give you twice the performance over last gen on a card like that. At this point you will never understand how foolish you look by thinking the 1660 should have been a 50-class card. :rolleyes:

Sigh. Price-wise, the 1660 should be no more than $185, probably closer to $150. The 1650 is a pointless card at its price point right now; it should be $120 tops, and it would still lose out to the 3-year-old RX 570s.
 
Looks much better:

Text Article.
https://www.gamersnexus.net/hwreviews/3498-sapphire-rx-5700-xt-pulse-review

So far, the Sapphire card is massively improved over reference for thermals, which was our first biggest complaint with the reference card. A few people out there tried to argue that the reference blower wasn’t actually loud or hot, but this is precisely why that’s wrong. It was both objectively hot and objectively loud, particularly in the comparative against partner models. There’s no point in defending a bad cooler design, and Sapphire gives us the numbers to definitively show that there is a better way to make a card. At about $10 more than the reference blower, it’s not even much more expensive.
 
You pretty much always have to wait until round 2 with nVidia. I bought the 1080 and then a few months later the 1080 Ti came out at the same price. Was not real happy about that.

I mean, how many generations now have we seen Ti type cards? Mainline cards are released, then Titan type cards, and then Ti type cards are last. It's a fairly predictable release schedule.
 
I mean, how many generations now have we seen Ti type cards? Mainline cards are released, then Titan type cards, and then Ti type cards are last. It's a fairly predictable release schedule.

True, though the price/performance difference between the Ti and non-Ti cards isn't always consistent, nor is how long it takes between releases.
 
Unless there's going to be a price drop to go along with it, the 2080S looks like an uneventful refresh: 5% performance gains, same price.

Hardware Unboxed points out that the 2060S costs 18% more than a 2060 while giving only an 8% performance increase, costing 9% more per FPS.

Comparing the 2080S to the 1080 Ti, HU points out that we're getting a 9% performance increase at the same price point as the 1080 Ti, 2.5 years later. Is that the worst performance-over-time scaling we've ever had in gaming GPU history?
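The cost-per-frame figure follows directly from the other two numbers; a quick check using the ratios as quoted:

```python
# Hardware Unboxed's 2060 Super cost-per-frame math, from the quoted ratios.
price_ratio = 1.18  # costs ~18% more than the 2060
perf_ratio = 1.08   # for ~8% more performance
print(f"cost per FPS: {price_ratio / perf_ratio - 1:+.1%}")  # ~ +9%
```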

The RTX 2000 series and its Super refresh look like an exercise in pretending to offer something while not actually offering anything.

If we're going by that line of thinking, then the 780 Ti to 980 was only 5%. So, no.
 
True, though the price/performance difference between the Ti and non-Ti cards isn't always consistent, nor is how long it takes between releases.

Fair enough. I'm not very patient, and the wait between the non-Ti and Ti releases is typically irritating. I buy a Ti card the first day it's out, every generation or every other generation. Tends to last me.
 