RTX 3xxx performance speculation

LOL 3090 Ti, this is getting silly, maybe they created the SKU because Navi 2x is better than they thought it would be? Guess we'll find out very soon. Either way I'm pretty excited, I need to decide if I'm going to keep this 9900k or spend the cash and get the new Ryzen and hold on to the 2080 Ti a bit longer until the Ampere refresh releases..hmm...

Or they can increase the top end price, and people can't complain about the price increase of the x80 Ti. ;)
 
Or they can increase the top end price, and people can't complain about the price increase of the x80 Ti. ;)

They’ll still complain if the 3090 Ti is just a rebadge of what should’ve been a 3080 Ti. Looking at the leaked power specs it looks more like a Titan rebadge though. I’m still curious about what AMD has up their sleeve because I have a feeling Navi 2x is going to be pretty impressive.
 
The more people complain about price the less I’d have to fight for one on launch day. Price doesn’t dictate if I’m buying it or not. Performance of my current hardware and how it’s not doing what I want it to do is the driving factor of upgrading or not. Guess I’ll hold out on buying that Louis belt and put that towards a better videocard 😜.
 
I think the bigger problem is that Quake II RTX is apparently the benchmark for this card? Where is the killer app that makes this card even necessary? Every generation has one. Whatever it is, it isn’t on my radar screen.

I'm fine with Quake II RTX being a benchmark since the game is still awesome and in a lot of ways better than most newer games out there. The ray tracing in it is also very impressive. As for the killer app to test with the new cards, Control could count as that. The game is awesome and uses ray tracing for a lot of effects.
 
The more people complain about price the less I’d have to fight for one on launch day. Price doesn’t dictate if I’m buying it or not. Performance of my current hardware and how it’s not doing what I want it to do is the driving factor of upgrading or not. Guess I’ll hold out on buying that Louis belt and put that towards a better videocard 😜.
Same here. Last GPU I bought was a 1080Ti and that was years ago at this point. PCs are (dirt) cheap compared to a lot of hobbies.

Besides, I just cashed in my vacation days from the last 3 years, so Ampere can't come fast enough.
 
Exactly. I'll be getting a 3000 series card. It's just a matter of which version which will depend on the price and performance of each of them. And I'm not the only one.
AMD doesn't have a top-end card, so they're irrelevant to that decision.
A fortune teller! Sweet, what are my winning lottery numbers? Oh... No, you don't know? Say it isn't so and you weren't just making it up with no evidence to support it. Personally, I'll wait on benchmarks before trying to pass off guesses as facts. And I'll base my buying decision on performance in my price range, not on speculations of the unknown.

That said, my best guess is you are probably correct that Nvidia is not going to lose the lead, but until I have benchmarks I'm not going to speculate on what is "worth" buying. And you said it's a matter of price/performance on which model you'll get, so I'm not sure how relevant it is that you're discounting ALL AMD cards just because they don't have a more expensive model that you may not have even been interested in anyway.
 
A fortune teller! Sweet, what are my winning lottery numbers? Oh... No, you don't know? Say it isn't so and you weren't just making it up with no evidence to support it. Personally, I'll wait on benchmarks before trying to pass off guesses as facts. And I'll base my buying decision on performance in my price range, not on speculations of the unknown.

That said, my best guess is you are probably correct that Nvidia is not going to lose the lead, but until I have benchmarks I'm not going to speculate on what is "worth" buying. And you said it's a matter of price/performance on which model you'll get, so I'm not sure how relevant it is that you're discounting ALL AMD cards just because they don't have a more expensive model that you may not have even been interested in anyway.

I don't understand what you're trying to say. I'm looking for a top-end gaming card. AMD hasn't had that for a long time. I'll be choosing between a 3080, 3080 Ti, 3090, Titan, or whatever. Something AMD has no chance of matching. If they did they would be hyping it up. Benchmarks are released from both Nvidia and reliable reviewers that get cards early, before you can buy them. It's not like I'm going to blindly guess and decide to spend $1000; there's tons of history to go off of. If an AMD card actually came out at the same time and was better I would get it, but that's not going to happen.
 
I don't understand what you're trying to say. I'm looking for a top-end gaming card. AMD hasn't had that for a long time. I'll be choosing between a 3080, 3080 Ti, 3090, Titan, or whatever. Something AMD has no chance of matching. If they did they would be hyping it up. Benchmarks are released from both Nvidia and reliable reviewers that get cards early, before you can buy them. It's not like I'm going to blindly guess and decide to spend $1000; there's tons of history to go off of. If an AMD card actually came out at the same time and was better I would get it, but that's not going to happen.

"It's just a matter of which version which will depend on the price and performance of each of them". So you are going to pick a 3000 series based on price/performance without considering any AMD offering regardless of it's performance or price. I'm not sure what you don't understand, it was literally your statement. I agree there isn't a huge chance of AMD magically closing the gap completely, but there is a chance they can have a close enough for a good price. Especially if your looking (by your own admission) at one of the top 3 cards, which leaves a pretty good gap for AMD to be somewhere in or around. I don't care what you get, it's your money, I was just pointing out you saying AMD won't be competitive is just a guess. They may be competitive against a 3080, maybe 3090, nobody has a clue yet, except you state that you already know... You don't, you're guessing. I'm not even calling it a bad or uneducated guess, but it's still just a guess, not a fact as your trying to present it. I'll buy whatever makes sense when I have benchmarks (and probably give it a little while after to see if any space invaders or AMD blank screen issues arise).
Anyways, I was just saying you were presenting speculation as facts. I don't blame you for wanting to get Nvidia, they are currently the undisputed performance leader at the top end, and the were first out of the gate for some neat features. That doesn't magically translate to you knowing how well AMD will compete and where the performance will slot in though.
 
I've bought cards over the last 30 years from many different makers, and I generally buy what gives me the best bang for my buck. I have to say the 1080 Ti I bought to replace my 980 Ti when its fans died was one of my best purchases since the 7950 I bought before the 980 Ti. It has lasted me a long time and I am really ready to upgrade. I'll wait and see what comes out before deciding what I want to go with, as I am not permalocked into only one vendor. Whoever wows me the most will get my money, and hopefully bang for the buck will go up a lot with this next generation.
 
I've bought cards over the last 30 years from many different makers, and I generally buy what gives me the best bang for my buck. I have to say the 1080 Ti I bought to replace my 980 Ti when its fans died was one of my best purchases since the 7950 I bought before the 980 Ti. It has lasted me a long time and I am really ready to upgrade. I'll wait and see what comes out before deciding what I want to go with, as I am not permalocked into only one vendor. Whoever wows me the most will get my money, and hopefully bang for the buck will go up a lot with this next generation.
Yeah, the 1080 Ti was a great card, still holding up well too. If only we knew in advance what that lasting value would be, lol (CPU and GPU wise).
 
I've bought cards over the last 30 years from many different makers, and I generally buy what gives me the best bang for my buck. I have to say the 1080 Ti I bought to replace my 980 Ti when its fans died was one of my best purchases since the 7950 I bought before the 980 Ti. It has lasted me a long time and I am really ready to upgrade. I'll wait and see what comes out before deciding what I want to go with, as I am not permalocked into only one vendor. Whoever wows me the most will get my money, and hopefully bang for the buck will go up a lot with this next generation.

Pascal was an anomaly, FYI.
 
So you are going to pick a 3000 series based on price/performance without considering any AMD offering regardless of its performance or price
AMD is... Exceedingly unlikely to be able to match Nvidia's top tiers. We have a stack of current factors and a decade of history to support that statement objectively.

Hell, I'm sitting here with a 1080Ti and I don't really expect AMD to provide a decent upgrade for that card either. I'm sure that they'll exceed the raster performance, but I don't expect them to provide enough of a bump, along with solid RT, at a price that would make the upgrade worth it.

Now, subjectively, I hope that AMD bucks their trend, but as I've been hoping that since they bought ATi...
 
Let's wait for Fury, let's wait for Vega, let's wait for Navi, and now let's wait for BIG Navi.... AMD's GPU dept has become a meme.
 
I really hope AMD drops a bomb and shakes things up. They have to do more than match the length of Nvidia’s benchmark charts though. Nvidia is not on the same level as Apple, but they’ve created an ecosystem that is about more than FPS. And I’m not even talking about the professional space, where AMD has no chance in the short term. CUDA and the whole software stack is too well entrenched.

AMD needs a proper response to NVENC, DLSS, Ansel, Twitch integration etc. Those are the things that will keep people locked to nvidia even if AMD can catch up on the FPS front.
 
I really hope AMD drops a bomb and shakes things up. They have to do more than match the length of Nvidia’s benchmark charts though. Nvidia is not on the same level as Apple, but they’ve created an ecosystem that is about more than FPS. And I’m not even talking about the professional space, where AMD has no chance in the short term. CUDA and the whole software stack is too well entrenched.

AMD needs a proper response to NVENC, DLSS, Ansel, Twitch integration etc. Those are the things that will keep people locked to nvidia even if AMD can catch up on the FPS front.

This +1000. For the buying public feature sets matter. As they should. I mean people turn down cars without Bluetooth from the factory FFS...
 
AMD is... Exceedingly unlikely to be able to match Nvidia's top tiers. We have a stack of current factors and a decade of history to support that statement objectively.

Hell, I'm sitting here with a 1080Ti and I don't really expect AMD to provide a decent upgrade for that card either. I'm sure that they'll exceed the raster performance, but I don't expect them to provide enough of a bump, along with solid RT, at a price that would make the upgrade worth it.

Now, subjectively, I hope that AMD bucks their trend, but as I've been hoping that since they bought ATi...
I don't disagree, it's unlikely for AMD to be at the top, and I even admitted as much. Could they be somewhere near one of the top 3 Nvidia cards? I think that's within the realm of reason. Either way it's still not a fact... They could surprise everyone (unlikely as it is) and compete at the top end, or they could fall flat on their face and not compete at all (also unlikely). More likely they will be below the top end, but I would think they would at least be at or very near Nvidia's 3rd fastest card. You prefaced what you said with "unlikely to compete at the top end", which is valid and is not presented as fact, but as an educated guess (and my guess is you're right on this, but I won't present it as fact when it's not known yet).
To say you (not you, but the other poster) are not looking at any models besides Nvidia's, but are going to select one of the top 3 or so models because AMD can't compete, actually makes it more likely that AMD will land somewhere in that range and be able to compete on price/performance. I could be completely wrong and so can he, but I didn't try to present my thoughts as fact.
 
Let's wait for Fury, let's wait for Vega, let's wait for Navi, and now let's wait for BIG Navi.... AMD's GPU dept has become a meme.
I wait for benchmarks.... While past performance can give an indication of what may happen, it's not absolute. Ryzen broke the Bulldozer mold, Bulldozer broke the Athlon mold (in a negative way). Like I said, I'll wait on benchmarks before writing anything off or getting my hopes very high.
 
I don't know why everyone equates the fine wine theory to better hardware and not just sloppy software. As well, the last I looked at it, both manufacturers improve performance with drivers over time; it's just that Nvidia typically starts stronger than AMD in both hardware and software (pricing aside).
Never mentioned fine wine, I'm talking exactly about the initial software/drivers for new hardware. Except this time around Nvidia had some software fine wine with DLSS on Turing. As for the hardware being better, that to me goes back and forth between Nvidia and AMD.
 
As for top performer, it's not so cut and dry; it could go back and forth at different price points, or one could just plain dominate each price point, or they could end up close enough either way. I would recommend folks evaluate what will best fit them when actual hardware can be bought; hopefully good reviews will show all the features and performance of the hardware. The PS5 reveal demo showed a number of RT hybrid titles running rather well, with very complex geometry etc., and then the typical non-interesting console-feel titles I don't care about. RDNA2 looks to be strong; gaming Ampere I do not know. For me it would be pointless to upgrade the GPU alone, especially to a much more powerful one, without a significant monitor upgrade or an HDMI 2.1 TV, so how well the cards support that upgrade would carry significant weight in my choice. I may just go with an Nvidia professional card due to much better overall game and software support regardless of the performance differences. Of course, if your favorite team is your thing, have fun with it.
 
I don't know why everyone equates the fine wine theory to better hardware and not just sloppy software. As well, the last I looked at it, both manufacturers improve performance with drivers over time; it's just that Nvidia typically starts stronger than AMD in both hardware and software (pricing aside).
I'm pretty sure you are simply agreeing with his statement... while complaining.
"AMD over time shows better what their hardware is capable of after a few years which is not exactly what people want"
I think the important piece is when he says it is NOT what people want, aka they want the capabilities when they buy it, not years later. Meaning they don't get the most out of their cards up front, aka the drivers suck at release. I don't think anyone disagrees, maybe it's just the wording difference. If AMD finds more performance 2 years after release, the hardware was always there. Sometimes things last longer due to a design decision, aka more compute performance vs. raster, which tends to last longer since newer games make more and more use of shaders. This is different than finding performance that was always on the table; that's just designing the card in a way that was more forward looking, so at the time it was released it wasn't being utilized fully. For MOST things with AMD, it tends to be that their drivers just suck up front and get better over time (and they continue to support/optimize older hardware longer than Nvidia does as well, again, generalizing).
 
As for top performer, it's not so cut and dry; it could go back and forth at different price points, or one could just plain dominate each price point, or they could end up close enough either way. I would recommend folks evaluate what will best fit them when actual hardware can be bought; hopefully good reviews will show all the features and performance of the hardware. The PS5 reveal demo showed a number of RT hybrid titles running rather well, with very complex geometry etc., and then the typical non-interesting console-feel titles I don't care about. RDNA2 looks to be strong; gaming Ampere I do not know. For me it would be pointless to upgrade the GPU alone, especially to a much more powerful one, without a significant monitor upgrade or an HDMI 2.1 TV, so how well the cards support that upgrade would carry significant weight in my choice. I may just go with an Nvidia professional card due to much better overall game and software support regardless of the performance differences. Of course, if your favorite team is your thing, have fun with it.
Yeah, one of my big things is I'm curious if AMD/RDNA2 is going to do a little better with GPU encoding. The last comparison should have been embarrassing for them. Obviously not everyone here buys a GPU for transcoding/encoding performance/IQ but hey, it's important to some of us.
 
Yeah, one of my big things is I'm curious if AMD/RDNA2 is going to do a little better with GPU encoding. The last comparison should have been embarrassing for them. Obviously not everyone here buys a GPU for transcoding/encoding performance/IQ but hey, it's important to some of us.
Main reason I'd like to see AMD improve here is for their APUs. They don't just lag Nvidia -- they lag Intel. For all the extra grunt that they've been putting into their mobile products, they're falling short here.

And the reality with 4k and HEVC (and its ilk) is that raw CPU grunt is just not going to continue to be useful for transcoding. Yes, it can work, and the flexibility of using general-purpose CPU cores can allow for better results for specific situations, but that gap closes every generation.

In addition, there's significant utility with more and more folks capturing video. So many things go into this, but the basics are that capturing, editing, and sharing high-quality video is just stupid easy these days. Tell people that they can toss an Nvidia GPU in a desktop or get a laptop with one to make the process significantly faster?

Yeah. AMD needs that too.
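
To make that concrete, here's roughly what leaning on the hardware encoder looks like from a script, assuming an ffmpeg build with NVENC support (encoder and preset names vary by ffmpeg version, and the file names here are just placeholders):

Code:
# Offload an HEVC transcode to NVENC through ffmpeg instead of the CPU encoder.
# Assumes ffmpeg was built with NVENC; file names and bitrate are illustrative.
import subprocess

def transcode_nvenc(src: str, dst: str, bitrate: str = "8M") -> None:
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src,
            "-c:v", "hevc_nvenc",   # NVIDIA hardware HEVC encoder
            "-preset", "slow",      # quality-leaning NVENC preset
            "-b:v", bitrate,
            "-c:a", "copy",         # pass audio through untouched
            dst,
        ],
        check=True,
    )

# transcode_nvenc("gameplay_capture.mkv", "upload_ready.mp4")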
 
Main reason I'd like to see AMD improve here is for their APUs. They don't just lag Nvidia -- they lag Intel. For all the extra grunt that they've been putting into their mobile products, they're falling short here.

And the reality with 4k and HEVC (and its ilk) is that raw CPU grunt is just not going to continue to be useful for transcoding. Yes, it can work, and the flexibility of using general-purpose CPU cores can allow for better results for specific situations, but that gap closes every generation.

In addition, there's significant utility with more and more folks capturing video. So many things go into this, but the basics are that capturing, editing, and sharing high-quality video is just stupid easy these days. Tell people that they can toss an Nvidia GPU in a desktop or get a laptop with one to make the process significantly faster?

Yeah. AMD needs that too.
Yup, same sentiment here. I'm almost embarrassed for them, seeing them so far behind Intel and Nvidia. Although Nvidia wasn't always in the lead on this; Intel was the leader for years. Nvidia upped their game, hopefully AMD does the same.
 
I’m hoping the RTX 3070 variant would give 2080 Ti rasterisation performance minus maybe 10-15% for under $700. I can only dream....I don’t think I can afford the RTX 3080 if rumors are suggesting it’ll be $1000 😭
 
Speculation from Igor's Lab on the 3090:

https://www.igorslab.de/en/350-watt...ned-chip-area-calculated-and-boards-compared/

According to Igor, the RTX 3090's GA102 GPU core itself will draw roughly 230 watts of power, with the memory drawing 60 watts. The MOSFETs will consume a further 30 watts, with the fan coming in at 7 watts. PCB losses are expected to be in the range of 15 watts and input section consumption will be around 4 watts. All of this adds up to a total board power of roughly 350 watts for the RTX 3090.

If the rumor is true, going to 7nm while increasing power consumption to 350W may result in a monster GPU. Guess we have a few months until we find out.....
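
For what it's worth, here's a quick sanity check of that rumored breakdown (all figures are Igor's speculation, not official numbers):

Code:
# Quick sanity check of the rumored RTX 3090 power breakdown (watts).
# All numbers are speculation from Igor's article, not official figures.
breakdown_watts = {
    "GA102 core": 230,
    "memory": 60,
    "MOSFETs": 30,
    "fan": 7,
    "PCB losses": 15,
    "input section": 4,
}
print(sum(breakdown_watts.values()))  # 346 -- a few watts under the round 350 W figure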
 
Nvidia is doing some massive sweepstakes with tons of high-end 2xxx GPUs and high-end PCs.
The catch?
Share, tweet, etc etc etc. They are focusing on people to follow their social media pages.
Sweepstakes end in mid-July.

I bet they are doing this to gather momentum and hit us with the RTX 3xxx reveal sometime in July.
 
Have my eyes set on a 3080Ti or a 3090Ti depending on what reviews and benchmarks show during release. I'll likely wait for the eVGA custom cards to come out after the FE's release.

I'm fairly excited for the next generation, but I don't exactly need an upgrade right away either, so glad I can sit back and examine all offerings after a few months of them being out.
 
Have my eyes set on a 3080Ti or a 3090Ti depending on what reviews and benchmarks show during release. I'll likely wait for the eVGA custom cards to come out after the FE's release.

I'm fairly excited for the next generation, but I don't exactly need an upgrade right away either, so glad I can sit back and examine all offerings after a few months of them being out.
Some very interesting stuff here that may be coming from Nvidia: a traversal coprocessor (reflected in the major design change for the board and cooler), major RT implementations with performance benefits and other capabilities, an answer to the upcoming major console releases that may actually win out, and holographic-type VR headsets (this is beyond next-gen VR). Those higher priced Turing cards may have paved the path for some rather out of this world stuff. Do not know if any of this is true or not, so only for adults, no kiddies please:

 
Some very interesting stuff here that may be coming from Nvidia: a traversal coprocessor (reflected in the major design change for the board and cooler), major RT implementations with performance benefits and other capabilities, an answer to the upcoming major console releases that may actually win out, and holographic-type VR headsets (this is beyond next-gen VR). Those higher priced Turing cards may have paved the path for some rather out of this world stuff. Do not know if any of this is true or not, so only for adults, no kiddies please:



Already being discussed in two dedicated threads. Though most of us think this is meaningless click-bait speculation.

Generally a co-processor is something you start with and integrate later, going as far back as when floating point was a co-processor for CPUs.

But BVH traversal is already integrated into the RTX cards, so the odds that there is anything to this speculation are slim.
 
Already being discussed in two dedicated threads. Though most of us think this is meaningless click-bait speculation.

Generally a co-processor is something you start with and integrate later, going as far back as when floating point was a co-processor for CPUs.

But BVH traversal is already integrated into the RTX cards, so the odds that there is anything to this speculation are slim.
I agree it's unlikely, but it's not impossible. They could separate some things out to help with yields; as long as it's got a very close/short/fast interface, it wouldn't make much difference whether it was physically on the main chip or just next to it. Still seems rather unlikely, however, as they would basically need something like HyperTransport to make it work.
 
Still seems rather unlikely, however, as they would basically need something like HyperTransport to make it work.
"Hypertransport" is AMD-speak for 'a fast bus'. It's more of a 'we ran out of shit to market, so let's give this common implementation a catchy name and throw that in the materials' kind of annoyance.

If a processor architect needs a fast bus, they'll make one. Nvidia has NVLink. They can hook that up on the board and optimize to their heart's delight.
 
"Hypertransport" is AMD-speak for 'a fast bus'. It's more of a 'we ran out of shit to market, so let's give this common implementation a catchy name and throw that in the materials' kind of annoyance.

If a processor architect needs a fast bus, they'll make one. Nvidia has NVLink. They can hook that up on the board and optimize to their heart's delight.
That's why I said something like it, as people are familiar with the term. I thought this part, "close/short/fast interface", covered what it would be doing. Also, HyperTransport is more than just a name on an existing thing; it's a specific design/implementation. So yes, buses like this have been used, as has a ring bus like Intel uses. My point was just that they would have to use something that is meant to support fast direct chip-to-chip communication for this to have a chance at being true. Maybe I should have said Infinity Fabric instead of HyperTransport?
 
I agree it's unlikely, but it's not impossible. They could separate some things out to help with yields; as long as it's got a very close/short/fast interface, it wouldn't make much difference whether it was physically on the main chip or just next to it. Still seems rather unlikely, however, as they would basically need something like HyperTransport to make it work.

"Not impossible" is insufficient reason to give credence to clickbait. It's funny that speculation that would be dismissed if reasoned here, are given credence because someone made a clickbait video about it.

It appears that ray tracing/traversal is memory intensive. You need the entire geometry of a scene accessible to follow ray bounces. In theory you could have a co-processor, but it needs very high speed access to the main GPU chip, and it will effectively take over the memory interface of the main GPU chip while it does its intersection testing. You could do this, but it seems VERY unlikely. This seems more like a temporary, "late to the party" move, and from examining the landscape, everyone is going with integrated solutions.
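
To illustrate why traversal is so memory-hungry, here's a toy CPU-side sketch of stack-based BVH traversal (the node layout, names, and example scene are made up for illustration; real RT cores do this in fixed-function hardware against a vendor-specific node format). The point is that every loop iteration starts with a dependent node fetch, so the work is pointer chasing through memory rather than heavy math:

Code:
# Toy stack-based BVH traversal: the arithmetic (slab tests) is cheap, but each
# iteration begins with a dependent fetch of a node that may not be in cache.
# All names, the node layout, and the example scene are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class BVHNode:
    lo: Vec3                                         # AABB min corner
    hi: Vec3                                         # AABB max corner
    left: Optional["BVHNode"] = None                 # inner node: children
    right: Optional["BVHNode"] = None
    prims: List[int] = field(default_factory=list)   # leaf node: primitive indices

def ray_vs_aabb(origin: Vec3, inv_dir: Vec3, lo: Vec3, hi: Vec3) -> bool:
    """Standard slab test; the expensive part was fetching lo/hi from memory."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, l, h in zip(origin, inv_dir, lo, hi):
        t0, t1 = (l - o) * inv, (h - o) * inv
        tmin, tmax = max(tmin, min(t0, t1)), min(tmax, max(t0, t1))
    return tmin <= tmax

def traverse(origin: Vec3, direction: Vec3, root: BVHNode) -> List[int]:
    inv_dir = tuple(1.0 / d if d != 0.0 else float("inf") for d in direction)
    stack, candidates = [root], []
    while stack:
        node = stack.pop()                      # dependent memory read every iteration
        if not ray_vs_aabb(origin, inv_dir, node.lo, node.hi):
            continue
        if node.left is None and node.right is None:
            candidates.extend(node.prims)       # leaf: primitives go on to triangle tests
        else:
            stack.append(node.left)             # inner: two more node fetches queued up
            stack.append(node.right)
    return candidates

# Tiny two-leaf example: the ray passes through the first box and misses the second.
leaf_a = BVHNode((0, 0, 0), (1, 1, 1), prims=[0])
leaf_b = BVHNode((2, 0, 0), (3, 1, 1), prims=[1])
root = BVHNode((0, 0, 0), (3, 1, 1), left=leaf_a, right=leaf_b)
print(traverse((0.5, 0.5, -1.0), (0.0, 0.0, 1.0), root))  # -> [0]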

Even more unlikely is the RT co-processing card being bandied about, which would need its own memory pool as well.

I also see HyperTransport/Infinity Fabric as mainly marketing. AMD is usually just running its own protocol over physical PCIe links. Everyone who needs some form of this has it under different names.
 
Already being discussed in two dedicated threads. Though most of us think this is meaningless click-bait speculation.

Generally a co-processor is something you start with and integrate later, going as far back as when floating point was a co-processor for CPUs.

But BVH traversal is already integrated into the RTX cards, so the odds that there is anything to this speculation are slim.
Traversal of the BVH is very memory intensive, and with a coprocessor you have the space for additional caches. Agree with Ready4Dis on the benefits to yield, since the GPU can be smaller. The BVH is updated every frame since geometry normally changes frame to frame.

Being memory intensive also means it is restricted by memory bandwidth, and it slows down the rest of the GPU due to contention for memory. Separating this out could speed up the rest of the GPU. Similar to an APU, where separating the GPU and CPU can be many times faster.

Then again, maybe Nvidia just ran out of space to push performance, so they had to. The other rumor of 10x RT may not be so far off.

I conclude likely.
 
I agree it's unlikely, but it's not impossible. They could separate some things out to help with yields; as long as it's got a very close/short/fast interface, it wouldn't make much difference whether it was physically on the main chip or just next to it. Still seems rather unlikely, however, as they would basically need something like HyperTransport to make it work.

It’s impossible if you want a working product.

How much faster and power efficient do you think an on-chip data transfer within an SM is compared to something like NVLink? 10x? 1000x? 1000000x?

Raytracing is a very fine grained operation that requires frequent low latency communication between the shader core and RT hardware. RT will go off chip when texture units go off chip. i.e. never.
 
It’s impossible if you want a working product.

How much faster and power efficient do you think an on-chip data transfer within an SM is compared to something like NVLink? 10x? 1000x? 1000000x?

Raytracing is a very fine grained operation that requires frequent low latency communication between the shader core and RT hardware. RT will go off chip when texture units go off chip. i.e. never.
It is also very memory intensive, meaning the very fast access inside the chip is constantly waiting for memory reads, running at memory speeds rather than cache speeds.

On a coprocessor you could have a large part, if not all, of the BVH available in highly compressed form, whereas on the GPU you would never have the space to do that and would have to rely on memory at memory speeds.
 
It is also very memory intensive, meaning the very fast access inside the chip is constantly waiting for memory reads, running at memory speeds rather than cache speeds.

On a coprocessor you could have a large part, if not all, of the BVH available in highly compressed form, whereas on the GPU you would never have the space to do that and would have to rely on memory at memory speeds.

Yes, memory access latency is also very important. I suspect Ampere’s L2 cache will be significantly larger than Turing’s.

Here’s the image from the Nvidia patent describing the coprocessor interface. The TTU, aka Tree Traversal Unit, aka RT core, is inside the TPC next to the texture units, accessing the same memory subsystem (MMU).

[Patent figure: TPC block diagram showing the TTU alongside the texture units and MMU]


https://patents.google.com/patent/US20200050451A1
 