Is Nvidia marketing to blame for the RTX complaints?

Stoly

I think they did a TERRIBLE job with the model naming for the RTX line.

One of the main concerns is that the RTX2080, for example, replaces the GTX1080, yet it performs and costs about as much as a GTX1080Ti.
Same thing goes for the rest of the line, except the 2080Ti, which is a special case, so to speak.

As it is now:

RTX 2060 replaces GTX1060, costs like a GTX1070
RTX 2070 replaces GTX1070/1070Ti, costs like a GTX1080
RTX 2080 replaces GTX1080, costs like a GTX1080Ti
RTX 2080Ti replaces GTX1080Ti, costs an arm and a leg, a kidney, a lung and both retinas.

But what if the RTX 2060 had been named RTX 2070 instead?
Suddenly it wouldn't sound so bad, as it would cost as much as a GTX1070 and perform faster; not by much, but still.

And if the RTX 2070 had been named RTX 2080?
Guess what, it would still be faster than the GTX1080 and cost about the same.

And you get the idea.

I guess Nvidia feared that consumers would view Turing as just Pascal + RTX/DLSS with a small bump in performance for the same price. So they chose to position it as a large performance increase + RTX/DLSS, but also a big jump in price.

And now we know how it turned out...
 
I thought the same thing...

Had the 2080 been called the 2080Ti, no one would have batted an eyelash at the price. The same goes on down the line, all the way to the current 2060: a $350 "2070" would have set the world on fire, even with the exact same specs, performance, etc... just the change in name.

That raises the question of what you'd call the current 2080Ti/Titan... but you could always have a Titan and a MegaSuperUltraTitan or something.
 
I see what you're saying.

Personally I think the performance of these cards is fine. I think Nvidia stuffed up big time with their pricing. These would be flying off the shelves if they were priced relatively close to Pascal pricing. It also would have meant AMD couldn't release the Radeon 7 if the 2080 was priced at or around $500, as there is no way AMD could have priced it anywhere near that if the rumors about its manufacturing costs are correct.
 
Also keep in mind the 2070 is about the same die size as a Titan Xp/1080ti but can be had for $500, whereas a 1080ti was $699 at release.

The real problem is we're paying for the RTX features, since they take up substantial die space but are functionally useless for now. If the 2070 was all CUDA cores it'd be faster than a 1080ti at $500...
 
Too much die area wasted on ray tracing/DLSS means a more expensive GPU.
Nvidia is not willing to reduce their insane 60% margin, so a larger die means a higher price compared to Pascal.

*Funny that I actually typed my die statement before the browser refresh and Dayaks said the same thing ;)
What's sad (for AMD) and impressive for Nvidia is that Nvidia is so confident in their performance vs AMD that they can allocate 20%-plus of their die space to a new feature and still beat AMD in rasterized performance.
 
1. No one outside of hardware enthusiasts cares about die size, so the continual argument about the 2070 being a larger die is moot; those details do not justify the price or help sell a card to the general consumer.

2. Could nVidia have combated this by bumping models up, i.e. the 2080 becomes the 2080ti, the 2080ti becomes the Titan, and the 2070 becomes the 2080? A partial solve, but it would highlight how little performance increase there is this generation.

3. The biggest problem is the lack of RTX/DLSS titles. We are in a chicken-and-egg moment: which comes first, the hardware or the software? So the consumer is paying for a bunch of tech that cannot be used yet, in the hope that it will be used in the future. nVidia took this new tech and upped the price rather than cut their margin, a short-sighted move IMHO, because if you want RTX to take off you need market penetration, which isn't going to happen at the moment.

I bought a 2080ti and am quite happy with it. Was it worth the cost compared to my Titan Xp? No, but it is the fastest and I do notice the difference.
 
They might reduce their margin some. So far there have been two straight quarters of them talking about slow sales of the RTX cards.
 
I bought the Titan X Maxwell, $1200 or so. Bought the Titan Xp, $1200 or so (but it was an actual full chip). Now the Titan RTX is $2500 and $1200 only gets the 2080Ti. I'm not a stranger to paying $1200 for a GPU, but I ain't paying $1200 for a Ti or $2500 for the Titan RTX, and a $1300 jump from Ti to Titan just ain't happening. I might buy an RTX Titan if they come out with the "real" Titan like the Xp, but it's gonna have to cost a lot less than $2500.

I agree about the die space wasted on ray tracing real estate. I mean, they had to get that door open and see if ray tracing takes hold (4 years after the still-not-universally-used complete DX12 feature set was introduced). So they'd better come up with a card that uses all that extra space for stuff that's actually being used, or cut it out of the current cards and cut 50% off the price tag, too. Then we can do business.

In 4 or 5 years, when developers finally start using ray tracing, they can put all that added stuff back in. Or make a RayTracing line and a NormalWhat99%ofUsUse line.
 
Too much die area wasted on ray tracing/DLSS means a more expensive GPU.

How much of the die(%) is used for RT cores?
 
From what I've read @ AnandTech, 25% for RT cores and 25% for Tensor cores. I'll try to find a link.

EDIT: I'm finding 1/3 as well hmmm.
 
So everyone is just guessing?
And yet it is still too much...but I wanted an answer from the guy I quoted...as I suspect he also has no clue...but still thinks it is “too much!!!”
 
If they called the 2080 "2080 Ti" then people would be complaining that it isn't faster than the 1080 Ti.
From what I've read @ AnandTech, 25% for RT cores and 25% for Tensor cores.
Not even close. Instead of looking at diagrams, how about looking at the actual die? I highlighted the SM and GPC area just to give perspective on the area used for RT cores.
[attached die shot with the SM and GPC area highlighted]
 
Remember there are also tensor cores
 
1. No one outside of hardware enthusiasts cares about die size, so the continual argument about the 2070 being a larger die is moot; those details do not justify the price or help sell a card to the general consumer.

2. Could nVidia have combated this by bumping models up, i.e. the 2080 becomes the 2080ti, the 2080ti becomes the Titan, and the 2070 becomes the 2080? A partial solve, but it would highlight how little performance increase there is this generation.

3. The biggest problem is the lack of RTX/DLSS titles. We are in a chicken-and-egg moment: which comes first, the hardware or the software? So the consumer is paying for a bunch of tech that cannot be used yet, in the hope that it will be used in the future. nVidia took this new tech and upped the price rather than cut their margin, a short-sighted move IMHO, because if you want RTX to take off you need market penetration, which isn't going to happen at the moment.

I bought a 2080ti and am quite happy with it. Was it worth the cost compared to my Titan Xp? No, but it is the fastest and I do notice the difference.

1. While I agree people don't care about die size, people do care about price, and a bigger die is more expensive, hence higher prices. The jury is still out on RTX/DLSS, so whether it's wasted space or not is still up in the air.

2. I agree. But IMO it would have been preferable to position the RTX series as replacing the GTX cards in the same price bracket. Most reviewers compared them that way anyway, including [H].

3. This is the real issue. As of today only BFV has RTX, and performance is awful. Nvidia already showed that DLSS should help, but the patch isn't out yet. BTW, Nvidia said it was relatively easy to implement DLSS, yet no games support it yet. Nvidia has to push for DLSS to at least deliver the 40%+ performance it promised.
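
As a rough aside on where a number like 40% could even come from: DLSS renders at a lower internal resolution and upscales, so here is a quick back-of-the-envelope Python sketch of that mechanism. Every number in it (internal resolution, resolution-bound fraction, upscale cost) is an illustrative assumption, not a published Nvidia figure.

[CODE]
# Back-of-the-envelope: rendering fewer pixels and upscaling can buy ~40%+ fps.
# All inputs below are illustrative assumptions, not measured DLSS numbers.
native_px   = 3840 * 2160   # target output resolution (4K)
internal_px = 2560 * 1440   # assumed internal render resolution with DLSS

frame_ms_native    = 20.0   # assumed GPU frame time at native 4K (50 fps)
res_bound_fraction = 0.7    # assumed share of frame time that scales with pixel count
upscale_ms         = 1.5    # assumed fixed cost of the DLSS upscale pass

scaled_work   = res_bound_fraction * (internal_px / native_px) + (1 - res_bound_fraction)
frame_ms_dlss = frame_ms_native * scaled_work + upscale_ms

print(f"native: {1000 / frame_ms_native:.0f} fps, "
      f"DLSS: {1000 / frame_ms_dlss:.0f} fps, "
      f"gain: {frame_ms_native / frame_ms_dlss - 1:.0%}")
[/CODE]

With those assumptions the gain lands in the mid-40% range; tighten or loosen them and the number moves, which is exactly why the promised 40% needs to show up in shipping games before anyone should count on it.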
 
Remember there are also tensor cores
Unfortunately it's impossible to point them out at this resolution. They're somewhere in that blue outline.

[attached die shot with the relevant area outlined in blue]


If we go by the block diagram, the Tensor cores are about 40% of that space. But again, the diagram isn't representative of the real die area.

[attached SM block diagram]


What I do know is that the SM without the RT and Tensor cores is about the same size as Pascal's.
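
Since the thread is tossing around 25%, a third, and "20% plus" as guesses, here is a tiny sketch of the estimation logic; both inputs are hypothetical placeholders, not values measured off the die shot or the diagram.

[CODE]
# Back-of-the-envelope for the "how much of the die is RT/Tensor?" debate.
# Both inputs are hypothetical placeholders, NOT measurements.
sm_share_of_die       = 0.50  # assumed: fraction of the whole die occupied by the SMs/GPCs
rt_tensor_share_of_sm = 0.30  # assumed: fraction of each SM taken by RT + Tensor cores

rt_tensor_share_of_die = sm_share_of_die * rt_tensor_share_of_sm
print(f"Implied RT+Tensor share of the whole die: {rt_tensor_share_of_die:.0%}")

# With these made-up inputs the answer is 15%. The point: "X% of the SM block
# diagram" and "X% of the whole die" are very different claims, which is why the
# 25% and 1/3 guesses above don't line up with what the annotated die shot suggests.
[/CODE]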
 
NV did not push developers hard enough for faster DLSS and RT adoption, despite having compatible GPUs and existing 'experimental' APIs for it, so that everyone could use these features when the RTX lineup launched. That is their only fault.

RT and Tensor cores physically taking up die space is a f***ing MUST; it would have had to happen sooner or later, and I am glad it already did.
 
Any hopes Nvidia had of getting me to spend money on these died when they decided to price them at their current price points. Nvidia had always priced their new cards more or less around the previous gen's MSRP, but decided to give these new cards a significant bump this time. I'm currently running a 1080, which I got for $669 (CAD) back in November 2017. If the 2080 Ti was closer to $1000 CAD I'd be a little more tempted to bite, but it's currently sitting at $1600 CAD. I originally spent around $1600 on my whole system when I first put it together, not on a single GPU.

Anyway, I think price is one of the biggest issues for these cards right now.
 
AMD's prices have followed suit... look at the Radeon 7's price... and prices will only go up and up as process nodes shrink, like it or not.
 
3. This is the real issue. As of today only BFV has RTX, and performance is awful. Nvidia already showed that DLSS should help, but the patch isn't out yet. BTW, Nvidia said it was relatively easy to implement DLSS, yet no games support it yet. Nvidia has to push for DLSS to at least deliver the 40%+ performance it promised.

The lack of RTX and DLSS titles is proving the claim of easy implementation wrong. To my knowledge, even Shadow of the Tomb Raider still doesn't have RTX features enabled.
 
You were around when DX10 launched, right?
 
I was around when 16-bit became a thing. I am not shocked that this isn't taking off that fast; in fact I think it's even money DLSS ends up dead, and there's a 1/3 chance that RTX follows.

Edit: reworded, since some have pointed out the PhysX fallacy.
 
I was around when 16-bit became a thing. I am not shocked that this isn't taking off that fast; in fact I think it's even money DLSS ends up like PhysX, and there's a 1/3 chance that RTX follows.

RTX is NVIDIA's moniker... RT (ray tracing) is built into DX12 by Microsoft... RT is THE holy grail in graphics... so no.

DLSS also doesn't affect gameplay (unlike PhysX)... A.I. is also coming (DLSS uses deep learning) whether you like it or not.

Let me put it this way:
RT and deep learning are coming, and if you don't like that, you are SOL.
 
I was around when 16-bit became a thing. I am not shocked that this isn't taking off that fast; in fact I think it's even money DLSS ends up like PhysX, and there's a 1/3 chance that RTX follows.
Why do people keep saying PhysX is a failure? PhysX is the most popular physics engine, period. Not Havok, nor Bullet, nor whatever other open-source physics engines.
 
You can say they are coming all you want, I will forever believe it when I see it.
 
Just to clarify: DXR is ray tracing for DX12; RTX is Nvidia's implementation.
 
Well, so were 3D games, 16-bit, 32-bit, AA, tessellation, etc., etc., at some point.

There are lots of technological advancements that end up on the trash heap for many reasons. I never count on any of this coming to pass; I only hope it does. When I was younger I would get hyped for new technology I read about in the computer/tech magazines of the 80s and 90s, much of which has still not come to pass.

Implementation is, IMHO, a long way off for these technologies, if it happens in their current form (RTX/DXR) at all. I don't believe we will see major use of DLSS or RTX for some time. If DLSS was as easy to implement as nVidia claimed, we would be seeing it on everything new. If RTX just worked, then SotTR would have it by now.

This is the value problem nVidia has with the 2000 series: a marginal improvement over the 1000 series that is wiped out by the price increase, banking on new technology that, approaching 6 months in, is pretty much a no-show. The Kinect had better support at this point in its life.
 
I would say marketing, pricing and, most importantly, the lack of titles that use what they were trying to sell. It's been, what, almost 5 months? And we've got a single game, 2 benchmarks and a demo?

In addition, *everyone* knows that a die shrink is up next for NV. The next release should allow them to pack more silicon in and get closer to the performance we expect when using the new stuff.
 
You can say they are coming all you want, I will forever believe it when I see it.

If you don't think the end goal has always been ray tracing... you really need to read up.

And deep learning is already here; I see more and more servers doing deep learning in our datacenters, from medical research to ticket-system optimization (look at autonomous driving).

You are in for a hard time...
 
I don't know why you're ranting about deep learning. I never disputed anything about deep learning, and equating DLSS with the broader adoption of deep learning is a straw man; frankly, the same goes for your ray tracing shtick.

This is about nVidia's implementation and marketing of their GPUs, on which every point I have made is on topic and, in my opinion, correct.

Yes, some day ray tracing may be the thing, but today it is still pretty much the same end goal it was in the '80s and '90s, albeit slightly closer. Once we have ray tracing actually implemented on a broad scale, not just a shitty EA game and some vague promises, we will be there; currently we are not.

Edit: If you want to let me know what is going on in deep learning from your perspective, I'm more than happy to read it, though a PM would be a better place than a thread about nVidia.
 
So everyone is just guessing?
And yet it is still too much...but I wanted an answer from the guy I quoted...as I suspect he also has no clue...but still thinks it is “too much!!!”
Ask and you shall receive ;)
1080ti die size = 471 mm² (3,584 CUDA cores)
2080 die size = 545 mm² (2,944 CUDA cores)

545 mm² / 471 mm² = 1.157, meaning the 2080 die is 15.7% larger than the 1080ti's.
Not exactly a move in the right direction when a new-generation card two years later requires a larger die for the same performance.
Historically, a newer-generation GPU provides equal performance on a smaller die, and that is how performance per dollar pushes the upgrade cycle (which is not working out this time for Turing).

Links, since you are apparently too lazy to Google it yourself:
https://en.wikipedia.org/wiki/GeForce_10_series
https://en.wikipedia.org/wiki/GeForce_20_series

The die sizes above also do not account for the lower CUDA core count on the 2080 (2,944) vs the 1080ti (3,584).
So it's reasonable to expect that the new Turing features take up more than the extra 15.7% of die area, since there are fewer CUDA cores.
For the record, my initial estimate of 20%+ die space is absolutely spot on.
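
Same arithmetic as a quick Python sketch, using only the Wikipedia numbers quoted above; the per-CUDA-core figure is a rough proxy (it lumps in memory controllers, caches, video blocks, etc.), not a direct measurement of the RT/Tensor area.

[CODE]
# Die sizes and CUDA core counts from the Wikipedia pages linked above.
gp102_mm2, gp102_cuda = 471, 3584   # GTX 1080 Ti (GP102)
tu104_mm2, tu104_cuda = 545, 2944   # RTX 2080 (TU104)

print(f"Die size increase: {tu104_mm2 / gp102_mm2 - 1:.1%}")        # ~15.7%
print(f"mm^2 per CUDA core, GP102: {gp102_mm2 / gp102_cuda:.3f}")   # ~0.131
print(f"mm^2 per CUDA core, TU104: {tu104_mm2 / tu104_cuda:.3f}")   # ~0.185

# The per-core area is a crude proxy, but it shows why the new features
# plausibly eat more of the die than the raw 15.7% size increase alone.
[/CODE]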
 
I was around when 16-bit became a thing. I am not shocked that this isn't taking off that fast; in fact I think it's even money DLSS ends up like PhysX, and there's a 1/3 chance that RTX follows.
PhysX is the most ubiquitous physics engine in games today, so that bodes well for DLSS ;).
 