Rumor: 3080 31% faster than 2080 Ti

Nope. Traditional multi-GPU, even when implemented properly, is fundamentally flawed due to the alternate frame rendering (AFR) process. It causes input lag and all sorts of problems.

Even if you get 100% scaling (which I have never seen), it is still pretty bad.

The way 3DFX originally did it, by alternating scan lines, was a much better approach, but they never solved the scaling issue. The later development of split frame rendering got closer to solving this, but because people didn't demand it and AFR was easier to program, that's the crap we got.

With a chiplet design, as long as you could get good, fast interconnects, you wouldn't have any more contention for the memory than you would by adding more cores to a very large single GPU, because it would be a very large single GPU, just split across multiple dies.

We wouldn't have to worry about multi-GPU at all in-game, because it would be one GPU.
That was kind of my point... He was talking about using chiplets to increase the CU count beyond what we see now, which means memory contention. It would be odd to think they would do all that effort with interconnects just to put 2x 20 CU chiplets together to make a 5700 XT. They would put together 2x 40 or 4x 20, or w/e, which means it would be fighting for memory the entire time.

I think AMD has a pretty good idea of chiplet pros and cons, and if they thought it would help them they'd be jumping on it. Also, even with a fast interconnect, they still have latency issues (refer to the example of the 3100 vs 3300X at the same speeds). It isn't an automatic win just by splitting it up into smaller parts; someone still needs to do the interconnecting of said parts. So if you have great yields at 7nm and move to a chiplet design with slightly better yields (given the same GPU design specs), you may win out, you may break even, or you may end up costing even more logistically, but in any case you end up with a slightly slower product. If you go to a huge die and can get a real benefit to yields, then it could be a win, provided you up your memory bandwidth to match the increase in cores or come up with some large high-speed cache near the GPU that helps mask the latency issues (which I know they are researching now).
 
I was sitting through a long phone conference with people droning on about things not relevant to me, so I decided to run the numbers.

I pulled benchmarks from 9 titles (because they were easily available), all at 1080p: Crysis 3, Bioshock Infinite, Tomb Raider, Battlefield 4, Metro Redux, Sleeping Dogs, Thief, Watch Dogs, and Dragon Age: Inquisition.

The mean performance increase for these titles, generation over generation, for every x80 GPU since the beginning of the DX11 era is as follows:

GTX980 -> GTX1080: 63.0%
GTX780 -> GTX980: 31.9%
GTX680 -> GTX780: 26.8%
GTX580 -> GTX680: 41.3%
GTX480 -> GTX580: 16.3%

Mean for all generations: 35.9%
....

Now that's how you prove someone wrong :p
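For anyone who wants to re-run that arithmetic with their own titles or numbers, here's a minimal Python sketch using only the percentages listed above:

Code:
# Gen-over-gen x80 gains quoted above, in percent (oldest first).
gains = {
    "GTX 480 -> GTX 580": 16.3,
    "GTX 580 -> GTX 680": 41.3,
    "GTX 680 -> GTX 780": 26.8,
    "GTX 780 -> GTX 980": 31.9,
    "GTX 980 -> GTX 1080": 63.0,
}

# Arithmetic mean of the per-generation gains.
mean_gain = sum(gains.values()) / len(gains)
print(f"Mean gen-over-gen gain: {mean_gain:.1f}%")  # ~35.9%

# Compounding them gives the cumulative GTX 480 -> GTX 1080 speedup.
cumulative = 1.0
for pct in gains.values():
    cumulative *= 1 + pct / 100
print(f"Cumulative GTX 480 -> GTX 1080: {cumulative:.2f}x")  # ~4.5x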
 
I was purely going on the specs, but sure, fanboy magical thinking... I very rarely touch consoles unless I'm playing Minecraft with my daughter split screen, which hasn't happened for a while since I set up our own local Minecraft server so we can all jump on and play together on our PCs.

The 5700 XT is about 15% slower than a standard 2080 (it varies heavily by game, though). I think a 15% gain in performance wouldn't be unreasonable. Of course it could be completely wrong, but hey, at this point this is all speculation. I don't think it'll hit 2080 Ti levels, and it'll fall well short of Ampere's upper-end products. Just as an example, the 5700 XT has 40 compute units. The Xbox Series X is going to have 52 CUs with a newer architecture and run at similar clocks (-100 MHz?). Even if the architecture alone had ZERO increase in compute/raster performance, that's a 30% increase in CUs. As long as RDNA 2 is at least on par with RDNA, that should be able to make up the 15% difference between it and the 2080 (given that adding CUs isn't perfect scaling). Again, lots of unknowns, but it's within reason if you actually look at the specs and break things down; no magic marketing or fanboyisms needed.

The PS5 will have fewer CUs (36 vs 40) but will run at a higher frequency. If they have 0% architectural increase in performance, this one may well fall a little short. If they have anything similar to the difference between Vega and RDNA, it'll easily surpass it. In reality, it'll probably be somewhere between those two numbers; I think 10-15% is reasonable. The increase from Vega -> Navi was pretty substantial. They went from 64/56 CUs down to 40 and gained performance (partially due to higher clocks, partially due to better architecture). They both have about the same memory bandwidth, so it was purely frequency and architecture that contributed. So I still think the 2080 is about the place these will fall, give or take a few percent.
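To make that back-of-the-envelope math explicit, here's a rough Python sketch of the kind of estimate I mean. The 85% CU-scaling efficiency is an assumption for illustration (CU scaling is never perfect), and the clocks are the published boost/peak figures, so treat the outputs as ballpark only:

Code:
# Rough console-vs-5700 XT estimate from CU count and clock alone.
# The 0.85 CU-scaling efficiency is an assumption, not a measurement.

def relative_perf(cus, clock_mhz, base_cus=40, base_clock_mhz=1905,
                  cu_scaling=0.85, arch_uplift=0.0):
    """Estimate performance relative to a 40 CU / ~1905 MHz 5700 XT."""
    cu_gain = (cus / base_cus - 1) * cu_scaling   # extra CUs, imperfect scaling
    clock_gain = clock_mhz / base_clock_mhz - 1   # clock delta vs the baseline
    return (1 + cu_gain) * (1 + clock_gain) * (1 + arch_uplift)

# Xbox Series X GPU: 52 CUs at 1825 MHz, assuming zero RDNA 2 uplift.
print(f"Series X vs 5700 XT: {relative_perf(52, 1825):.2f}x")  # ~1.20x
# PS5 GPU: 36 CUs at up to 2233 MHz, same zero-uplift assumption.
print(f"PS5 vs 5700 XT:      {relative_perf(36, 2233):.2f}x")  # ~1.07x

With a 2080 sitting roughly 1.15x above a 5700 XT, the Series X lands right around it even with zero architectural uplift, and the PS5 falls a little short, which is exactly the point above.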

It does add up. They make money from the consoles, maybe not as much as they'd like, but they aren't going to turn down some free publicity and extra R&D that's paid for by someone else. The consoles are also high volume, with very large initial POs (purchase orders); they know they can produce this part for the next four years and sell millions of them. The PS4 had sold 106 million units as of the beginning of this year (total sales). The Xbox One was 'only' 46 million. It's very hard to find figures for normal GPU sales, but I imagine they probably don't sell that many of any single generation of GPU (and even if they can/do, why not do both?). They want to be front and center in the consoles as it helps them in a few ways: sales, name recognition, and optimizations (developers optimize for the console with the AMD GPU, which helps them understand how to get the most out of it), to name a few. Sure, I'd have loved RDNA 2 to drop earlier this year, but I'd also have loved Ampere to come quicker.

All this said, I won't even be buying a console (at least not at release; I tend to end up getting them once they go on sale and/or on the used market). I haven't bought a console at release since... ever. I will be upgrading 3-4 of my desktops later this year (and possibly my home server) when RDNA 2/Ampere and Zen 3 come out. Not sure what I'll be putting in them yet because we have no clue what performance or price will be, and with as many desktops as I upgrade at a time, it's normally not the highest-end parts, so perf/$ reigns supreme in my house. Anyway, I understand being skeptical; things get hyped and then let down over and over. I still think it's reasonable that going from 40 CUs of an older architecture up to 52 CUs of a newer architecture should give us a 10-15% improvement, which would make the 2080 a reasonable guesstimate. Nobody has a crystal ball, I'm just going on rough math, but it definitely seems within the realm of reason.

The "specs" are exactly the problem - the lack of them. Some people are just looking at the max theoretical TFLOPS figure from console marketing bullet points and extrapolating from that: "Welp, based on TFLOPS it's clearly faster than a 2080, good enough for me!" The problem is that TFLOPS are relative and not directly comparable between architectures. And there are other variables and bottlenecks between the max theoretical math throughput of a GPU's cores and what's finally delivered as FPS on the screen -- texture fill rate, pixel fill rate, implementation, core throttling, thermal dissipation.
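For reference, that marketing TFLOPS number is just peak FP32 throughput: 2 FLOPs per FMA x shader count x clock. A quick Python sketch with the commonly published shader counts and boost clocks (treat the clocks as nominal; sustained clocks in games are usually lower):

Code:
# Peak FP32 TFLOPS = 2 FLOPs per FMA * shader count * boost clock (GHz) / 1000.
# Published shader counts / boost clocks; real sustained clocks are lower.
gpus = {
    "RX 5700 XT (RDNA)":      (2560, 1.905),
    "Xbox Series X (RDNA 2)": (3328, 1.825),
    "RTX 2080 (Turing)":      (2944, 1.710),
}

for name, (shaders, clock_ghz) in gpus.items():
    tflops = 2 * shaders * clock_ghz / 1000
    print(f"{name:24} ~{tflops:.2f} peak FP32 TFLOPS")
# The 2080 shows fewer paper TFLOPS than the Series X, yet per-TFLOP
# delivered FPS differs by architecture -- which is exactly the point.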

Despite Navi Magic architectural improvements, the consoles are still going to be running a mobile-class solution at the end of the day - an APU with all the inherent disadvantages and performance killers relative to a dGPU, like the CPU/GPU having to share memory, and a small heatsink. So forget the 2080; I'd be shocked if the top-end Xbox Series X SKU is demonstrably more powerful than Pascal.
 
So forget the 2080; I'd be shocked if the top-end Xbox Series X SKU is demonstrably more powerful than Pascal.
If they can pull that off with ray tracing, well, that's good enough I think. Obviously they're going to have to optimize like crazy to hit 4k60 (or some reasonable facsimile that'll convince your average consumer), but since that's what consoles do, it'll probably be fine.
 
The "specs" are exactly the problem - the lack of them. Some people are just looking at the max theoretical TFLOPS figure from console marketing bullet points and extrapolating from that: "Welp, based on TFLOPS it's clearly faster than a 2080, good enough for me!" The problem is that TFLOPS are relative and not directly comparable between architectures. And there are other variables and bottlenecks between the max theoretical math throughput of a GPU's cores and what's finally delivered as FPS on the screen -- texture fill rate, pixel fill rate, implementation, core throttling, thermal dissipation.

Despite Navi Magic architectural improvements, the consoles are still going to be running a mobile-class solution at the end of the day - an APU with all the inherent disadvantages and performance killers relative to a dGPU, like the CPU/GPU having to share memory, and a small heatsink. So forget the 2080; I'd be shocked if the top-end Xbox Series X SKU is demonstrably more powerful than Pascal.
I literally ignored the "specs" you are complaining about and even pointed out why, with a 0% architectural increase, it would end up around where I said. It had nothing to do with magical architectural improvements, so I'm not sure why this is confusing. 5700 XT = 40 CUs, Xbox Series X = 52 CUs. Even with NO architecture changes, that is 30% more. Minus some efficiency for non-linear scaling, it's easily possible that it will be at 2080 performance, which is about 15% faster than the 5700 XT. These are confirmed specs from Microsoft. I'm not using any magic TFLOPS numbers you're bringing up or anything else. In order for what you're saying to be right, their new architecture would have to be around 10% less efficient per CU. If that's your belief, cool, but please stop making strawman arguments and just look at the numbers we know about. Again, nobody knows for sure, but it's easily within reason. Thinking it's going to blow away a 2080 Ti? That would be wishful thinking. Using 30% more CUs to gain ~15% more performance? Reasonable.
 
The "specs" are exactly the problem - the lack of them. Some people are just looking at the max theoretical TFLOPS figure from console marketing bullet points and extrapolating from that: "Welp, based on TFLOPS it's clearly faster than a 2080, good enough for me!" The problem is that TFLOPS are relative and not directly comparable between architectures. And there are other variables and bottlenecks between the max theoretical math throughput of a GPU's cores and what's finally delivered as FPS on the screen -- texture fill rate, pixel fill rate, implementation, core throttling, thermal dissipation.

Despite Navi Magic architectural improvements, the consoles are still going to be running a mobile-class solution at the end of the day - an APU with all the inherent disadvantages and performance killers relative to a dGPU, like the CPU/GPU having to share memory, and a small heatsink. So forget the 2080; I'd be shocked if the top-end Xbox Series X SKU is demonstrably more powerful than Pascal.

I think a lot of people are going by the specs, the clock speeds, and the Digital Foundry analysis that had Gears 5 running close to an RTX 2080.



Put it all together and it is looking pretty good. Naturally it's hard to compare console and PC, but I think it all bodes well for Big Navi PC part.

If AMD doesn't deliver better than 2080Ti performance, they should just give up. There is no excuse this time. Hard to say if they go far enough to really compete with 3080Ti (or whatever they call it) and how much they charge.
 
If AMD doesn't deliver better than 2080Ti performance, they should just give up. There is no excuse this time. Hard to say if they go far enough to really compete with 3080Ti (or whatever they call it) and how much they charge.
Generally, their 'excuse' has been that competing for the halo against Nvidia isn't good for ROI. Unit cost goes up fast and unit sales go down faster; unless they have a clear winner, better to focus their fab capacity on higher-volume parts. Of course, the bigger question this time around is if their first-gen RT performance will stack up well against Nvidia's second-gen RT release. AMD is undoubtedly capable of reaching the raster performance to compete, but that's not where the competition nor the games are going.
 
That was kind of my point... He was talking about using chiplets to increase the CU count beyond what we see now, which means memory contention. It would be odd to think they would do all that effort with interconnects just to put 2x 20 CU chiplets together to make a 5700 XT. They would put together 2x 40 or 4x 20, or w/e, which means it would be fighting for memory the entire time.

Fair, but that would be no different than the added memory contention from a faster single-chip GPU.

And we've been hearing about the miracles HBM can do for memory bandwidth for years now, while simultaneously scratching our heads as to why we would need all of that added memory bandwidth when current designs seem to have all the bandwidth they need, as evidenced by the absolutely minimal increases in performance you see from cranking the RAM.

I think there is potential.

I think AMD has a pretty good idea of chiplet pros and cons, and if they thought it would help them they'd be jumping on it. Also, even with a fast interconnect, they still have latency issues (refer to the example of the 3100 vs 3300X at the same speeds). It isn't an automatic win just by splitting it up into smaller parts; someone still needs to do the interconnecting of said parts. So if you have great yields at 7nm and move to a chiplet design with slightly better yields (given the same GPU design specs), you may win out, you may break even, or you may end up costing even more logistically, but in any case you end up with a slightly slower product. If you go to a huge die and can get a real benefit to yields, then it could be a win, provided you up your memory bandwidth to match the increase in cores or come up with some large high-speed cache near the GPU that helps mask the latency issues (which I know they are researching now).

Agreed, chiplets help the most when yield is a challenging issue. I don't have good insight into what current yields are like in TSMC's 7nm process, or what they might be like in the next gen (5nm?).
 
Generally, their 'excuse' has been that competing for the halo against Nvidia isn't good for ROI. Unit cost goes up fast and unit sales go down faster; unless they have a clear winner, better to focus their fab capacity on higher-volume parts. Of course, the bigger question this time around is if their first-gen RT performance will stack up well against Nvidia's second-gen RT release. AMD is undoubtedly capable of reaching the raster performance to compete, but that's not where the competition nor the games are going.

IMO in previous generations they were too far behind on technology to compete on the highest end.

Look how big the die was on Vega 64 vs GTX 1080, and the power it consumed, and that was using HBM.

A Vega-technology based competitor for 1080 Ti would have been too big, and too power hungry.

It's looking close enough to parity this time to at least surpass the 2080 Ti, if not catch the 3080 Ti.
 
IMO in previous generations they were too far behind on technology to compete on the highest end.

Look how big the die was on Vega 64 vs GTX 1080, and the power it consumed, and that was using HBM.

A Vega-technology based competitor for 1080 Ti would have been too big, and too power hungry.

It's looking close enough to parity this time to at least surpass the 2080 Ti, if not catch the 3080 Ti.

Yeah, Vega 64 vs the 1080 Ti is the best example of the disparity. The 1080 Ti had a larger-than-normal increase over the 980 Ti, while Vega 64 had a huge compute increase but not so much on raster performance (what gamers care about). They couldn't just increase its size; it was already a power-hungry beast. The architecture just wasn't there. RDNA closed that performance delta a bit (the 5700 XT was relatively power-efficient and small compared to Vega), but they didn't really push the limits on size, probably for any of many reasons: yields on a new process, required power, running into memory bottlenecks so more CUs wouldn't help much, etc. I think the 2080 Ti is where they will be competing; probably not with the top card, but possibly #2/3 in Nvidia's lineup. As long as it's priced accordingly and their ray tracing isn't complete crap (it shouldn't be, with all the development going into the PS5/Xbox Series X, hopefully), they should sell pretty well. They made some headway with RDNA, so hopefully they can get closer with RDNA 2 and eventually reach parity with RDNA 3 (or w/e they decide to call it).
 
Seems to me that the rumor, if true, that there will be a 3090 suggests pretty strongly that Nvidia themselves are not sure the 3080 Ti will outperform Big Navi. So they are essentially rebranding a Titan-like product with a more consumer-oriented SKU to keep the halo crown.
 
Seems to me that the rumor, if true, that there will be a 3090 suggests pretty strongly that Nvidia themselves are not sure the 3080 Ti will outperform Big Navi. So they are essentially rebranding a Titan-like product with a more consumer-oriented SKU to keep the halo crown.
Or the so-called 3090 is really just the 3080 Ti and Nvidia just decided to drop the whole Ti moniker. While I do not doubt RDNA 2 will be a good product, it remains to be seen whether they can retake the performance crown, as it has been more than half a decade since AMD competed in that space. We will definitely see later in the year what both companies have to offer; quite exciting times.
 
Or the so-called 3090 is really just the 3080 Ti and Nvidia just decided to drop the whole Ti moniker. While I do not doubt RDNA 2 will be a good product, it remains to be seen whether they can retake the performance crown, as it has been more than half a decade since AMD competed in that space. We will definitely see later in the year what both companies have to offer; quite exciting times.

I think it would be a good move to drop the Ti designation entirely. Just have 3060/70/80/90 and maybe refresh those later with Super variants.
 
I think it would be a good move to drop the Ti designation entirely. Just have 3060/70/80/90 and maybe refresh those later with Super variants.
Agreed, it definitely makes much sense now for Nvidia to use Super to avoid creating more confusion in a market that at times can be really confusing.
 
Agreed, it definitely makes much sense now for Nvidia to use Super to avoid creating more confusion in a market that at times can be really confusing.

Super... Ti... FE... Sigh... knowing Nvidia, they'll release the Titan as a halo gaming card all over again just to confuse the hell out of us. Start spewing titles again like Titan, Titan X, Titan Black, Titan V, Titan Z, etc.
 
Super... Ti... FE... Sigh... knowing Nvidia, they'll release the Titan as a halo gaming card all over again just to confuse the hell out of us. Start spewing titles again like Titan, Titan X, Titan Black, Titan V, Titan Z, etc.
Please no, somebody at nVidia will read this and do it just to spite us. They will look over at the Intel product stack and say “Hold my jacket” and give us the “Ti Titan XP Super” in Black, Green, and Rose Gold options.

Ti2tan XP Gold type R, comes in Rose Gold and signed by Jensen.
 
Generally, their 'excuse' has been that competing for the halo against Nvidia isn't good for ROI. Unit cost goes up fast and unit sales go down faster; unless they have a clear winner, better to focus their fab capacity on higher-volume parts. Of course, the bigger question this time around is if their first-gen RT performance will stack up well against Nvidia's second-gen RT release. AMD is undoubtedly capable of reaching the raster performance to compete, but that's not where the competition nor the games are going.


Are you sure about that?

I mean, Nvidia is sure trying to push RT like it is the second coming of Christ, but I'm personally still undecided on whether or not it is just another fad.
 
Now that I think a bit longer about it, I wouldn't be disappointed in 3080Ti having +30% gain over 2080Ti, if the GPU was no more than €1000 (ACTUAL price, not these $999 2080Ti reveals which ended up being $1600 in Sweden when converted from euros).
That means some 60-80% faster than my old 1080Ti, while I wouldn't have to pay an arm and a leg for a new GPU.

But, if this is €900 3080 and 3080Ti is €1500 and +60%, I might bite on 3080Ti since I won't be upgrading the next gen. That means well over twice as fast as 1080Ti when oced.
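For what it's worth, here's the quick math behind those numbers; the ~1.30x figure for a 2080 Ti over a 1080 Ti is an assumption from typical review averages rather than anything official, so adjust to taste:

Code:
# Rough upgrade math relative to an old 1080 Ti.
# ASSUMPTION: a stock 2080 Ti averages ~1.30x a stock 1080 Ti.
TI_2080_OVER_1080 = 1.30

for label, gain_over_2080ti in (("+30% 3080 Ti", 0.30), ("+60% 3080 Ti", 0.60)):
    total = TI_2080_OVER_1080 * (1 + gain_over_2080ti)
    print(f"{label}: ~{(total - 1) * 100:.0f}% over a 1080 Ti ({total:.2f}x)")
# +30% -> roughly 69% faster; +60% -> roughly 108% faster, before any OC.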
 
Are you sure about that?

I mean, Nvidia is sure trying to push RT like it is the second coming of Christ, but I'm personally still undecided on whether or not it is just another fad.
It's not a fad; it's the direction the industry HAS to go (it's where realism is at, mathematically). The only question is how well we can use what they have produced to create better-quality games/effects. I think its biggest draw at the moment is lighting. Even then, since they can only use so many samples, you can sometimes see glitches and such, but it really is an improvement over other methods. This was first-generation tech, so all the kinks need to be worked out. Hopefully gen 2 will bring some much-needed performance along with lower-end parts that also support it. It's not going to be mainstream if only halo products have it, and game developers are only going to put in so much effort/time. Now that it'll be available on consoles, on AMD, and on Nvidia's lower tiers (I think I remember reading this, so I may be wrong), developers/studios will have a much larger audience.
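As a toy illustration of the sample-count point (not tied to any particular engine or API, just basic Monte Carlo behavior): the noise in a per-pixel lighting estimate shrinks roughly with the square root of the sample count, which is why 1-4 samples per pixel in real-time RT leans so heavily on denoising. A small Python sketch, with the 30% unoccluded fraction being a made-up value for the toy scene:

Code:
import random
import statistics

# Toy Monte Carlo visibility estimate for one surface point. Real ray
# tracing shoots actual rays; here we only model the statistics.
UNOCCLUDED_FRACTION = 0.30  # assumed "ground truth" for this toy scene

def estimate_visibility(samples):
    hits = sum(random.random() < UNOCCLUDED_FRACTION for _ in range(samples))
    return hits / samples

for spp in (1, 4, 16, 64, 256):
    # Treat each run as one pixel; measure the spread across many pixels.
    estimates = [estimate_visibility(spp) for _ in range(2000)]
    print(f"{spp:>3} samples/pixel -> noise ~ {statistics.stdev(estimates):.3f}")
# The spread roughly halves every time the sample count quadruples (1/sqrt(N)).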

PS: It's not going to go from non-RT to RT all of a sudden. There is going to be a lot of transition and in-between. They will slowly use it in more and more places until one day it's fast enough to actually do the entire scene in real time (and I don't mean low-res, low-poly/object-count scenes).
 
Are you sure about that?

I mean, Nvidia is sure trying to push RT like it is the second coming of Christ, but I'm personally still undecided on whether or not it is just another fad.

Not a fad. No way. There is an industry wide push on this, it's not just from Nvidia.
 
If this rumor is true, then color me unimpressed.

For the mundane gen-over-gen performance increases and the grossly, artificially inflated prices that nVidia has been charging since day one for their RTX 2000/GTX 1600 series, they should be providing a 3000 series with a performance jump of at least 50%, IMO.

If these rumors had instead pointed to the pricing tiers returning to their pre-mining-craze levels at these supposed performance increases, then I would be singing a different tune. Until then, I'm not impressed... not with the price tags that nVidia has been slapping on. If they're going to charge a lot more, then they had better be delivering a lot more.
 
If this rumor is true, then color me unimpressed.

For the mundane gen-over-gen performance increases and the grossly, artificially inflated prices that nVidia has been charging since day one for their RTX 2000/GTX 1600 series, they should be providing a 3000 series with a performance jump of at least 50%, IMO.

If these rumors had instead pointed to the pricing tiers returning to their pre-mining-craze levels at these supposed performance increases, then I would be singing a different tune. Until then, I'm not impressed... not with the price tags that nVidia has been slapping on. If they're going to charge a lot more, then they had better be delivering a lot more.

No way nV is giving up ground in pricing, not if all mining ended tomorrow, not if Jesus Christ and Buddha showed up arm-in-arm to protest at nV HQ. I predicted it prior to the 20xx series release, and have stated the same multiple times since: the mining craze was a giant gift to nV (and AMD). While nV had been using every trick in the book to raise prices for the past 5-6 years (granular model segmentation, staggered chip version releases, encouraging AIBs to stray from MSRP, "special" edition branding), the mining craze effectively gave them a massive tailwind to follow. It effectively pushed the mindshift in the market that an $800 high-end card, and a $1200 enthusiast card is acceptable. That level of pricing is not acceptable, in my view, and never will be for what is a discretionary leisure-oriented purchase for most people, but this is nonetheless the state of things.
 
No way nV is giving up ground in pricing, not if all mining ended tomorrow, not if Jesus Christ and Buddha showed up arm-in-arm to protest at nV HQ. I predicted it prior to the 20xx series release, and have stated the same multiple times since: the mining craze was a giant gift to nV (and AMD). While nV had been using every trick in the book to raise prices for the past 5-6 years (granular model segmentation, staggered chip version releases, encouraging AIBs to stray from MSRP, "special" edition branding), the mining craze effectively gave them a massive tailwind to follow. It effectively pushed the mindshift in the market that an $800 high-end card, and a $1200 enthusiast card is acceptable. That level of pricing is not acceptable, in my view, and never will be for what is a discretionary leisure-oriented purchase for most people, but this is nonetheless the state of things.

Exactly my point. My very last sentence solidifies it.
 
No way nV is giving up ground in pricing, not if all mining ended tomorrow, not if Jesus Christ and Buddha showed up arm-in-arm to protest at nV HQ. I predicted it prior to the 20xx series release, and have stated the same multiple times since: the mining craze was a giant gift to nV (and AMD). While nV had been using every trick in the book to raise prices for the past 5-6 years (granular model segmentation, staggered chip version releases, encouraging AIBs to stray from MSRP, "special" edition branding), the mining craze effectively gave them a massive tailwind to follow. It effectively pushed the mindshift in the market that an $800 high-end card, and a $1200 enthusiast card is acceptable. That level of pricing is not acceptable, in my view, and never will be for what is a discretionary leisure-oriented purchase for most people, but this is nonetheless the state of things.

You left out this little thing called COVID 19 that's still kicking about... then there's US unemployment that's sitting at over 13%, the continued market/job uncertainty, and if you go looking more towards global growth projections for this year, well, they are pretty much in the crapper. World output is sitting at –4.9% and advanced economies are sitting -8%. So, this "state of things" may also temper Nvidia's plans a bit when they start setting MSRP expectations for these newer GPUs... but then again, this is Nvidia we are talking about.
 
All this is totally irrelevant.

The ONLY question that is valid is the performance difference between a $250 card now and a $250 card once this next generation comes out.
I have a real suspicion that it will be far less than 31% faster. Probably half that if we are lucky.

The 3080 could easily be 31% more expensive than the 2080.
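To put that in numbers, a throwaway sketch; the prices and the +31% are hypothetical, purely to show that matching price and performance increases leave perf/$ flat:

Code:
# Hypothetical figures purely to illustrate the perf/$ point.
old_price, old_perf = 700.0, 100.0   # e.g. an x80-class card at its old MSRP
new_perf = old_perf * 1.31           # the rumored +31% performance
new_price = old_price * 1.31         # ...if the price also rises 31%

print(f"old perf per dollar: {old_perf / old_price:.3f}")
print(f"new perf per dollar: {new_perf / new_price:.3f}")  # identical
# Which is why the only question that matters for most buyers is what
# the new card at a fixed budget (say $250) actually delivers.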
 
You left out this little thing called COVID 19 that's still kicking about... then there's US unemployment that's sitting at over 13%, the continued market/job uncertainty, and if you go looking more towards global growth projections for this year, well, they are pretty much in the crapper. World output is sitting at –4.9% and advanced economies are sitting -8%. So, this "state of things" may also temper Nvidia's plans a bit when they start setting MSRP expectations for these newer GPUs... but then again, this is Nvidia we are talking about.

nV has capital reserves to spare, and their focus over the past 5-6 years has been to adjust the price paradigm. They'll hold fast on price. A $900-1000 2080 Ti replacement is a fait accompli.
 
All this is totally irrelevant.

The ONLY question that is valid is the performance difference between a $250 card now and a $250 card once this next generation comes out.
I have a real suspicion that it will be far less than 31% faster. Probably half that if we are lucky.

The 3080 could easily be 31% more expensive than the 2080.

A $250 GPU?

This isn't [S]oftOCP

Certainly the market for low end GPU's is large and important, but as far as enthusiasts are concerned it doesn't even exist.

If you aren't running at least xx70 series, what are you even doing in an enthusiast forum? :p
 
It effectively pushed the mindshift in the market that an $800 high-end card, and a $1200 enthusiast card is acceptable. That level of pricing is not acceptable, in my view, and never will be for what is a discretionary leisure-oriented purchase for most people, but this is nonetheless the state of things.

While I understand what you're saying, $1200 every 2-3 years isn't a crazy amount of money for "a discretionary leisure-oriented" purchase that also happens to be the very best part that you can get. Especially when you could still have an enjoyable gaming experience with a $300 card.

People will spend $10k for a performance package on a $50k car just to get to 60mph .2 seconds faster. If you're a cyclist the high-end bikes start at $6k and go up to $14k or so. I know people that will spend more than $1200 on golf course access, season tickets for the local sports team, guns and ammo, knick knacks, and other things that I think aren't worth it EVERY single year. And some of those things are way more than $1200 a year. Gaming is pretty inexpensive in relation to most hobbies; if you can resist the top-end stuff it's downright cheap.
 
A $250 GPU?

This isn't [S]oftOCP

Certainly the market for low end GPU's is large and important, but as far as enthusiasts are concerned it doesn't even exist.

If you aren't running at least xx70 series, what are you even doing in an enthusiast forum? :p
I think he's talking volume. Look at Steam and see what the most common GPUs are. Hint: it's not the 2080 Ti. He's saying that for the average user, the majority of sales sit in the $250(ish) range.
 
While I understand what you're saying, $1200 every 2-3 years isn't a crazy amount of money for "a discretionary leisure-oriented" purchase that also happens to be the very best part that you can get. Especially when you could still have an enjoyable gaming experience with a $300 card.

People will spend $10k for a performance package on a $50k car just to get to 60mph .2 seconds faster. If you're a cyclist the high-end bikes start at $6k and go up to $14k or so. I know people that will spend more than $1200 on golf course access, season tickets for the local sports team, guns and ammo, knick knacks, and other things that I think aren't worth it EVERY single year. And some of those things are way more than $1200 a year. Gaming is pretty inexpensive in relation to most hobbies; if you can resist the top-end stuff it's downright cheap.
Well, while some people do, others don't... just as you feel golf course access, season tickets, guns, ammo, w/e aren't worth it, he doesn't think $1200 for a GPU is worth it. He's entitled to his opinion just as you are; it doesn't make him wrong or you wrong. Some people don't spend $10k on upgrades for a $50k car and instead buy cheap cars to work on (my daily driver cost me $250 and another $60 to get on the road) and then spend a few $'s here and there to make them faster. Which person is the enthusiast? The one that paid someone else to put stuff on their car, or the one that tore into their own car? It's not just about spending money; it's about getting whatever you can out of what you have. Some people have more discretionary funds than others and different priorities. This used to be the place to buy a Celeron and almost double your frequency... when did it turn into "buy the most expensive stuff or you're not [H]"? If you can only afford to buy third-hand used computer parts and build it yourself, who cares?
 
Well, while some people do, others don't... just as you feel golf course access, season tickets, guns, ammo, w/e aren't worth it, he doesn't think $1200 for a GPU is worth it. He's entitled to his opinion just as you are; it doesn't make him wrong or you wrong. Some people don't spend $10k on upgrades for a $50k car and instead buy cheap cars to work on (my daily driver cost me $250 and another $60 to get on the road) and then spend a few $'s here and there to make them faster. Which person is the enthusiast? The one that paid someone else to put stuff on their car, or the one that tore into their own car? It's not just about spending money; it's about getting whatever you can out of what you have. Some people have more discretionary funds than others and different priorities. This used to be the place to buy a Celeron and almost double your frequency... when did it turn into "buy the most expensive stuff or you're not [H]"? If you can only afford to buy third-hand used computer parts and build it yourself, who cares?

Though I agree with what you said, I also agree with Bankle. Just because he "thinks" it's not worth it doesn't mean it isn't. The top-tier card requires $1200; take it or leave it. Personally, the reason that's a tough pill to swallow for me is how quickly GPUs depreciate once a new gen is released. Everyone loves value, the 1080 Ti being the epitome of that. We'd all love the top-end card to cost $700-800, but it doesn't anymore. That being said, there are great options in that price range and even lower; you just have to lower your expectations along with them. I own a 2080 Ti, a 2070 Super, and a 2060 in my laptop, and the gaming experience is enjoyable on all of them. Once you're immersed you don't even notice. Honestly, the cost difference between the 2080 Ti and the 2070 Super doesn't in any way, shape, or form justify the marginal performance gains between the two. It all depends on your expectations.

I used to be the guy who scrounged around for computer parts. In college, small budget, computer science, blah blah blah. The first time I built a decent computer for myself was because I got REALLY lucky in Vegas playing blackjack: I won $1400 from $20. Came home, built a brand new AMD Opteron 165 system with a 7800 GTX. Tits! I milked the crap out of that computer. Before that it was 3rd-tier parts. Still had a blast. Nowadays I have a great career and computers are still a priority for me. I don't game nearly as much as I used to, but I enjoy the top level of performance when I do. I'm a snob in that regard. But it's not like you're going to have a shit experience if you can't afford the best. I suppose, after all my useless rambling, that's my point. :p
 
Though I agree with what you said, I also agree with Bankle. Just because he "thinks" it's not worth it doesn't mean it isn't. The top-tier card requires $1200; take it or leave it. Personally, the reason that's a tough pill to swallow for me is how quickly GPUs depreciate once a new gen is released. Everyone loves value, the 1080 Ti being the epitome of that. We'd all love the top-end card to cost $700-800, but it doesn't anymore. That being said, there are great options in that price range and even lower; you just have to lower your expectations along with them. I own a 2080 Ti, a 2070 Super, and a 2060 in my laptop, and the gaming experience is enjoyable on all of them. Once you're immersed you don't even notice. Honestly, the cost difference between the 2080 Ti and the 2070 Super doesn't in any way, shape, or form justify the marginal performance gains between the two. It all depends on your expectations.

I used to be the guy who scrounged around for computer parts. In college, small budget, computer science, blah blah blah. The first time I built a decent computer for myself was because I got REALLY lucky in Vegas playing blackjack: I won $1400 from $20. Came home, built a brand new AMD Opteron 165 system with a 7800 GTX. Tits! I milked the crap out of that computer. Before that it was 3rd-tier parts. Still had a blast. Nowadays I have a great career and computers are still a priority for me. I don't game nearly as much as I used to, but I enjoy the top level of performance when I do. I'm a snob in that regard. But it's not like you're going to have a shit experience if you can't afford the best. I suppose, after all my useless rambling, that's my point. :p
I literally said neither was wrong, aka I agree with both of them. It is worth it for some and not for others, just like any hobby. My point was that judging people based on what is affordable is silly. You can be an enthusiast whether you think a $1200 GPU is worth it or not. Just like I don't think a $2950 Threadripper is worth it right now; that doesn't mean it's not to someone else. I think the 2080 Ti is priced about where it should be, but I won't spend that much on it; I've got other, higher priorities to worry about. If/when I have a bit more surplus maybe I'll consider it, but upgrading 5 desktops, I just can't drop that much into a single one of them.
 
I literally said neither was wrong, aka I agree with both of them. It is worth it for some and not for others, just like any hobby. My point was that judging people based on what is affordable is silly. You can be an enthusiast whether you think a $1200 GPU is worth it or not. Just like I don't think a $2950 Threadripper is worth it right now; that doesn't mean it's not to someone else. I think the 2080 Ti is priced about where it should be, but I won't spend that much on it; I've got other, higher priorities to worry about. If/when I have a bit more surplus maybe I'll consider it, but upgrading 5 desktops, I just can't drop that much into a single one of them.
Fair enough. I'll drink to that. Cheers to being computer enthusiasts. It is a ton of fun! 😊😊🍻🍻
 
For me, it's not about having the funds, or priorities. It's more about a consumer ethos where I won't pay for a product for which I think the price has been artificially jacked up through measures that are insidious and abject going well past any reasonable level of free market arguments.

Further, accepting that a $1200 top-end card is reasonable is effectively saying you're ok with the smoke-and-mirrors upwards pricing scheme nV has been pushing for the past 5-6 years. nV is in fact banking on the fact people will just tacitly accept that they can charge 30-40% more than the past. I'm not one of those people.
 
30% in what though?

Compute? Non RT? RT?

Being that I have a 2080 Ti that can run anything I throw at it with disgusting gobs of performance, I just don't personally see the need to get a 3000-series card. I will wait for the 4000 series. Or let's see what Big Navi can do, but it probably won't exceed 30% more under any metric.
 
While I understand what you're saying, $1200 every 2-3 years isn't a crazy amount of money for "a discretionary leisure-oriented" purchase that also happens to be the very best part that you can get. Especially when you could still have an enjoyable gaming experience with a $300 card.

People will spend $10k for a performance package on a $50k car just to get to 60mph .2 seconds faster. If you're a cyclist the high-end bikes start at $6k and go up to $14k or so. I know people that will spend more than $1200 on golf course access, season tickets for the local sports team, guns and ammo, knick knacks, and other things that I think aren't worth it EVERY single year. And some of those things are way more than $1200 a year. Gaming is pretty inexpensive in relation to most hobbies; if you can resist the top-end stuff it's downright cheap.

People like you :banghead:
Finding excuses for nearly doubling the price of the top-end card while giving half the usual performance boost.
Get out of here.
 
Nvidia prices won't go down until AMD has enough competition to kick them off their throne. AMD would need to pull on the GPU side of things what they managed to pull on the CPU side of things and I don't think that this next series of cards is gonna do it. If they can refine their GPU technology as well and as consistently as they're refining their CPU technology, maybe by the end of 2021 they'll have something that can equal Nvidia, and by 2022 or 2023 they might be caught up.

Still fully expecting the top end card to be $1000+ But I'm interested in what's gonna be going on in the $500 range personally. Especially since Covid-19 basically killed my income, I'll be happy if I make 1/2 of what I made last year.
 
Nvidia prices won't go down until AMD has enough competition to kick them off their throne. AMD would need to pull on the GPU side of things what they managed to pull on the CPU side of things and I don't think that this next series of cards is gonna do it. If they can refine their GPU technology as well and as consistently as they're refining their CPU technology, maybe by the end of 2021 they'll have something that can equal Nvidia, and by 2022 or 2023 they might be caught up.

Still fully expecting the top end card to be $1000+ But I'm interested in what's gonna be going on in the $500 range personally. Especially since Covid-19 basically killed my income, I'll be happy if I make 1/2 of what I made last year.

AMD very much benefits from nV's prices, so much so I think they should be paying them a fee (considering nV is taking the risk that folks will balk, and has obviously put loads of resources/money behind their pricing strategy). They're riding nV's coattails in this respect, and that's not going to change with RDNA 2.

Sorry to hear about your situation. If it's any consolation, $500 is right where price/performance/benefit normally meets on the 3D scattergram. That said -- and it's ridiculous to think about -- you'll likely not get much in terms of performance above your 1080 ti for $500 from the upcoming cards.
 
When MCM GPUs arrive, the sky is the limit, as with CPU cores, and almost all of us will be PC peasants. :( Youngsters will be happy with cloud gaming, feeling on top of the world while being sucked dry, buying status through games' immaterial property they own.

Of course they will end up rich, as it is the real deal! :arghh:
 
Take with a grain of salt....
https://wccftech.com/rumor-alleged-...-up-to-23-tflops-of-peak-graphics-horsepower/
 