Did Nvidia throw a 1-2 punch at AMD with Ampere?

I think AMD's top of the line will fall between the RTX 3070 and the 3080.
That's my best-case scenario.
Why? We already know the Series X can match a 2080 Super, and it's a cut-down chip with lower clock speeds. It's all about perf per dollar: if they match or exceed the 70 or 80 series at a more reasonable price, it's a win. Given what the Series X can do, it's not out of the realm of possibility. I think it comes down to 7nm yields; the 5700 series chips were pretty small, and I'm not sure how they're going to deal with a much larger chip at these price points.
 
Why? We already know the Series X can match a 2080 Super, and it's a cut-down chip with lower clock speeds. It's all about perf per dollar: if they match or exceed the 70 or 80 series at a more reasonable price, it's a win. Given what the Series X can do, it's not out of the realm of possibility. I think it comes down to 7nm yields; the 5700 series chips were pretty small, and I'm not sure how they're going to deal with a much larger chip at these price points.
Where do you get that? It seems to me the Series X will probably match a 5700 or an RTX 2070 at best.
 
Where do you get that? It seems to me the Series X will probably match a 5700 or an RTX 2070 at best.
Digital Foundry did a video on the Series X where they had a few titles matching the 2080 Super in performance in early testing. Worth checking out; also, if you Google "Xbox Series X 2080 Super" there are some speculation articles. All of this is speculation at this point. Thinking on it more, it may have just been a Gears 5 demo that matched 2080 Super performance with almost no optimization.

Edit: I dug it up, around the 8-minute mark. They say it matches a 2080 without any optimization, so I was wrong. Still, it's pretty impressive.
 
I hope AMD has something good this time around; it's been far too long since AMD was on top.
 
There's more than just gaming cards; how AMD handles the professional market will come to bear as well. A 32 GB HBM version would be nice if it's cheaper than, say, a 3090. I expect AMD to update ProRender to use the RT hardware in RDNA2; a number of programs would use it. Anyway, Nvidia looks to have a very good product at reasonable pricing, and I don't think many could go wrong there for gaming. The RAM amounts on the initial 3070 and 3080 make them more for just gaming than doing other stuff. I'm presuming the 3090 with NVLink will be able to combine the two 24 GB pools into 48 GB; we just haven't heard whether that's the case. It would be utterly pointless for SLI alone, so it has to have a real purpose. What is it for?
 
Digital Foundry did a video on the Series X where they had a few titles matching the 2080 Super in performance in early testing. Worth checking out; also, if you Google "Xbox Series X 2080 Super" there are some speculation articles. All of this is speculation at this point. Thinking on it more, it may have just been a Gears 5 demo that matched 2080 Super performance with almost no optimization.

Edit: I dug it up, around the 8-minute mark. They say it matches a 2080 without any optimization, so I was wrong. Still, it's pretty impressive.


Well, I guess after the disappointing Halo showcase my expectations hit the floor. Also, I've read reports that 4K on the Series X is upscaled, maybe using some AMD upscaling tech or a DLSS equivalent, which I don't mind as long as it doesn't impact IQ.
I think consoles will be very powerful, just not PC-level powerful.
Besides, after all the news stating that several console demos were actually running on "equivalent PC hardware", well... you catch my drift.
 
There's more than just gaming cards; how AMD handles the professional market will come to bear as well. A 32 GB HBM version would be nice if it's cheaper than, say, a 3090. I expect AMD to update ProRender to use the RT hardware in RDNA2; a number of programs would use it. Anyway, Nvidia looks to have a very good product at reasonable pricing, and I don't think many could go wrong there for gaming. The RAM amounts on the initial 3070 and 3080 make them more for just gaming than doing other stuff. I'm presuming the 3090 with NVLink will be able to combine the two 24 GB pools into 48 GB; we just haven't heard whether that's the case. It would be utterly pointless for SLI alone, so it has to have a real purpose. What is it for?
You presume right; NVLink allows full access to the 48 GB, at least in CUDA, IIRC.
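For what it's worth, on the CUDA side that usually means peer-to-peer access over NVLink rather than the two cards literally merging into one 48 GB device. A minimal sketch of what that looks like, assuming two NVLink-connected cards at device IDs 0 and 1 (this is just the standard peer-access plumbing, not a claim about how Nvidia will actually expose it on the 3090):

Code:
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    // Check whether each card can map the other's memory (NVLink or PCIe P2P).
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    if (!can01 || !can10) { std::printf("peer access not available\n"); return 1; }

    // Enable peer access in both directions.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);

    // One buffer per card; a kernel running on device 0 can now dereference d1
    // directly, so the pair behaves like one larger (but NUMA-ish) pool.
    const size_t bytes = 1ull << 30; // 1 GiB per card, just for the example
    float *d0 = nullptr, *d1 = nullptr;
    cudaSetDevice(0); cudaMalloc(&d0, bytes);
    cudaSetDevice(1); cudaMalloc(&d1, bytes);

    // Explicit device-to-device copy over the link.
    cudaMemcpyPeer(d1, 1, d0, 0, bytes);

    cudaSetDevice(0); cudaFree(d0);
    cudaSetDevice(1); cudaFree(d1);
    return 0;
}

Whether the consumer 3090 drivers expose this the same way the pro cards do is exactly the open question.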

Thing is, AMD doesn't have anywhere near the support, middleware, features, etc. that Nvidia does. There's a reason other than performance why Nvidia is the dominant player in most of the markets it's in.
 
I have a feeling Big Navi will be somewhere between a 3080 and a 3090 in normal raster performance. Ray tracing remains a wild card, though I couldn't care less about that.

If Big Navi falls significantly short on ray tracing, I guarantee Nvidia will shove it in everyone's face ad nauseam and AMD will retain their title of also-ran. That will be easy too; now that it's on consoles, even the average pleb will be hearing about this ray tracing phenomenon.

Nvidia can always adjust pricing if AMD knocks it out of the park on rasterization, but AMD won't be able to easily adjust RT performance. Hopefully they didn't drop the ball.
 
The irony being that the console ray tracing would be done by an AMD chip.

I don’t disagree btw
 
If Big Navi falls significantly short on ray tracing, I guarantee Nvidia will shove it in everyone's face ad nauseam and AMD will retain their title of also-ran. That will be easy too; now that it's on consoles, even the average pleb will be hearing about this ray tracing phenomenon.

Nvidia can always adjust pricing if AMD knocks it out of the park on rasterization, but AMD won't be able to easily adjust RT performance. Hopefully they didn't drop the ball.

If AMD falls significantly short on ray tracing, they deserve to have it shoved in their face.

Despite the YouTube clickbait claims of a 4x ray tracing improvement in Ampere, the relative ray tracing improvement in Ampere is negligible. All AMD would need to do is aim for Turing + 20% and they would be equal to Ampere.

That may seem controversial, since Nvidia claimed about double the RT performance. But realize they claimed about double everything. Relatively speaking, ray tracing is in a similar position to before; there's just a bit more RT gain than raster gain in practical terms.

Raster games are running up to 80% faster; very heavy or pure RT workloads up to 100% faster.

That's only about a 20% relative gain in ray tracing balance over Turing (a 100% RT uplift against an 80% raster uplift), not double.

Heck, all AMD needed to do was aim to equal Turing, which came out two years ago, and they would still be relatively close in RT performance. If they can't hit that target, they're incompetent.
 
If Big Navi falls significantly short on ray tracing, I guarantee Nvidia will shove it in everyone's face ad nauseam and AMD will retain their title of also-ran. That will be easy too; now that it's on consoles, even the average pleb will be hearing about this ray tracing phenomenon.

Nvidia can always adjust pricing if AMD knocks it out of the park on rasterization, but AMD won't be able to easily adjust RT performance. Hopefully they didn't drop the ball.

I honestly don't think console players will give a damn about RT performance as long as it works well and they get the advertised 30-60 fps. I'm a PC-only player and I don't care about RT much, tbh.
 
Well, I guess after the disappointing Halo showcase my expectations hit the floor. Also, I've read reports that 4K on the Series X is upscaled, maybe using some AMD upscaling tech or a DLSS equivalent, which I don't mind as long as it doesn't impact IQ.
I think consoles will be very powerful, just not PC-level powerful.
Besides, after all the news stating that several console demos were actually running on "equivalent PC hardware", well... you catch my drift.

343 is a trash developer; I wouldn't base anything off Halo these days.
 
I honestly don't think console players will give a damn about RT performance as long as it works well and they get the advertised 30-60 fps. I'm a PC-only player and I don't care about RT much, tbh.

The typical console player doesn't care about anything technical. The point is that for the many people who do pay attention to these things, RT is a big deal.
 
Heck, all AMD needed to do was aim to equal Turing, which came out two years ago, and they would still be relatively close in RT performance. If they can't hit that target, they're incompetent.

Equaling Turing won't be cheap, much less doubling it.
 
Equaling Turing won't be cheap, much less doubling it.

This is primarily about the balance between raster and RT performance. You build your basic GPU building block with all the functions and scale from there.

With Turing out there, they should at bare minimum have been aiming for an RT balance at least equal to Turing's. If they weren't, they were aiming for failure from the beginning.
 
Smart enough to realize that actually was the price, whether you like it or not; AIB cards were often selling for higher than $1,200. And the 2070 did not launch with performance equal to the 1080 Ti, as you claimed.

Smart enough to realize that every launch is not the same, contrary to what you claimed. Pascal was one of the best launches ever for Nvidia, Turing was one of the roughest, and Ampere looks to be shaping up as one of the best...

Actually... the Nvidia TNT (Fahrenheit architecture) and the Nvidia 8800 GTS (Tesla architecture) were the best launches!

The Ampere RTX 3080 is looking good too! Wish it had higher memory bandwidth; oh well!
 
With Turing out there, they should at bare minimum have been aiming for an RT balance at least equal to Turing's. If they weren't, they were aiming for failure from the beginning.

Aiming is one thing. Getting there is another. This will be a fun couple of months.
 
Aiming is one thing. Getting there is another. This will be a fun couple of months.

This is not random though. These days, they build simulations first that give them a very good idea of performance before they hit silicon. It should be fairly easy to tweak the design to match your targets. It's more a question of what they were targeting.

With Nvidia moving the relative needle so little on ray tracing, I could even see AMD overshooting them by anticipating a bigger RT jump from Nvidia than they delivered.

I have a hard time understanding why they would target RT performance lower than Turing's. I guess if they have no faith that RT has any importance at all, they could just add token support and focus on raster performance.

But that would seem like a VERY bad bet, IMO.
 
This is not random though. These days, they build simulations first that give them a very good idea of performance before they hit silicon. It should be fairly easy to tweak the design to match your targets. It's more a question of what they were targeting.

With Nvidia moving the relative needle so little on ray tracing, I could even see AMD overshooting them by anticipating a bigger RT jump from Nvidia than they delivered.

I have a hard time understanding why they would target RT performance lower than Turing's. I guess if they have no faith that RT has any importance at all, they could just add token support and focus on raster performance.

But that would seem like a VERY bad bet, IMO.

I don’t think you got my point. Wanting to do something isn’t the same as being able to do it.
 
I think AMD's top of the line will fall between the RTX 3070 and the 3080.
That's my best-case scenario.

I'm waiting to see, but I think they will compete with the 3080 at minimum, trading blows.
 
If AMD bothered to do any homework whatsoever, they should know that Nvidia does this every generation. The 980 Ti's performance was roughly matched by the 1070. The 1080 Ti was roughly matched by the 2070. And now, gasp, omg, shocker!!111, the 2080 Ti is being roughly matched by the 3070. The only reason people are flipping their shit is the sky-high price that Nvidia attached to the 2080 Ti. The reason they were able to do that? Absolutely no competition from AMD. Under no circumstance was anyone getting $1,200 of actual value there.

The 1080 Ti and the 2080 were identical.
 
Anyone who buys one of the AMD cards at launch is in for driver disappointment.

Wait three months for AMD to sort out the kinks (and they will).

By contrast, I will buy a 3080 on launch day, expect a few bugs that will quickly be squashed, and have an overall pleasant upgrade experience. That is worth any price premium to me that AMD would likely be able to shave off.

Say AMD's Big Navi comes in at $650 and a touch faster than the 3080. I'd still gladly buy the 3080 at $700. No more launch-day AMD for me. If I were to make a purchase decision six months after launch, red vs. green wouldn't matter much to me; I'd buy whichever is the performance-per-dollar leader.

Signed, a 'tell 'em like it is' enthusiast who has owned and used the following high-end cards in his primary gaming system, each for long enough to come to a firm opinion:
Fury X
Vega 64
1080 Ti
and
RTX 2080
 
I don’t think you got my point. Wanting to do something isn’t the same as being able to do it.

Maybe you aren't getting mine. This isn't rocket science. I see no reason they couldn't target whatever level of RT performance they wanted. It's like deciding the balance of floating-point vs. integer performance: you just devote more transistors to one or the other to shift that balance.

Of course it comes with a die-space trade-off. It's simply a case of deciding the balance between RT and raster. Choosing less relative RT than Turing seems like it would have been a very bad decision.

I actually hope AMD delivers better relative RT performance than Nvidia; then maybe we can finally stop hearing sour-grapes arguments against RT.
 
Maybe you aren't getting mine. This isn't rocket science. I see no reason they couldn't target whatever level of RT performance they wanted. It's like deciding the balance of floating-point vs. integer performance: you just devote more transistors to one or the other to shift that balance.

Of course it comes with a die-space trade-off. It's simply a case of deciding the balance between RT and raster. Choosing less relative RT than Turing seems like it would have been a very bad decision.

I actually hope AMD delivers better relative RT performance than Nvidia; then maybe we can finally stop hearing sour-grapes arguments against RT.

Well, I think they might come in between Turing and Ampere for RT, or match Turing in the worst case. I don't see them matching Ampere, only because it probably dedicates far more resources to it.
 
The typical console player doesn't care about anything technical. The point is that for the many people who do pay attention to these things, RT is a big deal.

It's really not, unless you like to stand still and look at reflections in puddles. Very few games have actually used ray tracing in a way that would impress people with the environment. It's years away from being a must-have feature in a video card.
 
I'm waiting to see, but I think they will compete with the 3080 at minimum, trading blows.
Come to think of it, the 3080 has a nice performance advantage over the 3070, like 35-40% or so. Navi 20 could end up closer to the 3080, but I don't think it will beat it.

BTW, I'm willing to bet that Nvidia will show benchmarks where the 2060 Super using DLSS 2.0 beats Navi 20 at 4K.
 
BTW, I'm willing to bet that Nvidia will show benchmarks where the 2060 Super using DLSS 2.0 beats Navi 20 at 4K.

I think AMD could/would/should have some kind of DLSS alternative, beyond scaling and a sharpen filter.
 
If Big Navi falls significantly short on ray tracing, I guarantee Nvidia will shove it in everyone's face ad nauseam and AMD will retain their title of also-ran. That will be easy too; now that it's on consoles, even the average pleb will be hearing about this ray tracing phenomenon.

Nvidia can always adjust pricing if AMD knocks it out of the park on rasterization, but AMD won't be able to easily adjust RT performance. Hopefully they didn't drop the ball.

Everything AMD does is different from Nvidia. I bet AMD is going to do rather well, if not very well, with ray tracing.

I don't mean to sound like an AMD shill. I'm just really, really wanting more options for a change. Good, viable options.
 
Anyone who buys one of the AMD cards at launch is in for driver disappointment.

Wait three months for AMD to sort out the kinks (and they will).

By contrast, I will buy a 3080 on launch day, expect a few bugs that will quickly be squashed, and have an overall pleasant upgrade experience. That is worth any price premium to me that AMD would likely be able to shave off.

Say AMD's Big Navi comes in at $650 and a touch faster than the 3080. I'd still gladly buy the 3080 at $700. No more launch-day AMD for me. If I were to make a purchase decision six months after launch, red vs. green wouldn't matter much to me; I'd buy whichever is the performance-per-dollar leader.

Signed, a 'tell 'em like it is' enthusiast who has owned and used the following high-end cards in his primary gaming system, each for long enough to come to a firm opinion:
Fury X
Vega 64
1080 Ti
and
RTX 2080
Maybe that's why I never have issues with AMD drivers; I always buy months later. If you say you've had issues, I believe you as one of the more credible people around here. I had some issues early on with my RX 5700, but they eventually went away and weren't a huge deal. I got some black-screen crashes after some long sessions, but that was it. Every other AMD card I've had in the past was solid. The only card I've ever had die on me was a GTX 780. It was a good card but kicked out early for some reason. I still have a functional 8800 GT, so that's pretty odd. I'm hoping RDNA 2 is better at launch due to AMD having experience with the original RDNA and the fact that they aren't hurting for cash. We'll see, though.
 
I think AMD could/would/should have some kind of DLSS alternative, beyond scaling and a sharpen filter.

AMD won't do anything that is proprietary, so I don't see that happening. Plus, they wouldn't win with that anyway.
 
It's really not, unless you like to stand still and look at reflections in puddles. Very few games have actually used ray tracing in a way that would impress people with the environment. It's years away from being a must-have feature in a video card.

Tell that to the marketing guys who will be pimping Fortnite, Cyberpunk, The Witcher, etc. Sorry, but the "people don't care about reflections" shtick is getting old. The average person doesn't pick and choose which graphics effects they care about; they just want the best they can get for a good price.
 
AMD won't do anything that is proprietary, so I don't see that happening. Plus, they wouldn't win with that anyway.
AFAIK AMD will support DirectML AI upscaling on the Series X and Navi 20, which is similar to DLSS. Whether it's up to par quality- and performance-wise is anyone's guess.
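For anyone curious what the plumbing for that would look like from an application's side today, the common route is ONNX Runtime with its DirectML execution provider. A rough sketch, where the model file and tensor names are made up for illustration (this is not a shipped AMD or Microsoft upscaler):

Code:
#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "dml-upscale");

    Ort::SessionOptions opts;
    opts.DisableMemPattern();              // recommended when using the DirectML EP
    opts.SetExecutionMode(ORT_SEQUENTIAL);
    OrtSessionOptionsAppendExecutionProvider_DML(opts, /*device_id=*/0);

    // "super_res.onnx" is a placeholder model, not a real product.
    Ort::Session session(env, L"super_res.onnx", opts);

    // Pretend 960x540 RGB frame; a real integration would feed the rendered frame
    // (plus motion vectors, depth, etc. for a DLSS-style temporal approach).
    std::vector<int64_t> shape{1, 3, 540, 960};
    std::vector<float> frame(1 * 3 * 540 * 960, 0.5f);
    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<float>(
        mem, frame.data(), frame.size(), shape.data(), shape.size());

    const char* in_names[]  = {"input"};   // placeholder tensor names
    const char* out_names[] = {"output"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               in_names, &input, 1, out_names, 1);
    // outputs[0] would hold the upscaled (e.g. 1080p) frame to present.
    return 0;
}

The point being it runs on any DX12-capable GPU, which is what makes it the obvious 'open' counterpart to DLSS.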
 
AFAIK AMD will support DirectML AI upscaling on the Series X and Navi 20, which is similar to DLSS. Whether it's up to par quality- and performance-wise is anyone's guess.

Yes. As I mentioned, they won't do anything closed; mostly open tech that can be used across platforms, and if Nvidia wants to jump on it, they can.
 
AFAIK AMD will support DirectML AI upscaling on the Series X and Navi 20, which is similar to DLSS. Whether it's up to par quality- and performance-wise is anyone's guess.
Since Nvidia is using their own supercomputer, as Leather Jacket Man (can we just call him 'LJM'?) mentioned in his kitchen reveal, I wonder how an 'open' alternative will work.

Will Microsoft be doing the optimizations in Azure?
 
Since Nvidia is using their own supercomputer, as Leather Jacket Man (can we just call him 'LJM'?) mentioned in his kitchen reveal, I wonder how an 'open' alternative will work.

Will Microsoft be doing the optimizations in Azure?

It's not hard to get resources to train neural networks these days; you can rent cloud compute if you don't want to roll your own. I'm not sure they even need neural networks.

Remember DLSS 1.5 on Control? That was just a conventional shader program, and it did a pretty good job.
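For anyone wondering what "just a conventional shader program" means in practice, it's basically upscale-then-sharpen. Here's a toy CPU version of that idea (bilinear resize plus an unsharp mask on a single channel); the real thing runs as a GPU shader and is obviously more sophisticated than this sketch:

Code:
#include <vector>
#include <algorithm>
#include <cmath>

using Image = std::vector<float>; // row-major, single channel, values in [0,1]

static float sample(const Image& img, int w, int h, int x, int y) {
    x = std::clamp(x, 0, w - 1);
    y = std::clamp(y, 0, h - 1);
    return img[y * w + x];
}

// Bilinear upscale from (w,h) to (W,H).
static Image upscale(const Image& src, int w, int h, int W, int H) {
    Image dst(W * H);
    for (int Y = 0; Y < H; ++Y)
        for (int X = 0; X < W; ++X) {
            float fx = (X + 0.5f) * w / W - 0.5f;
            float fy = (Y + 0.5f) * h / H - 0.5f;
            int x0 = (int)std::floor(fx), y0 = (int)std::floor(fy);
            float tx = fx - x0, ty = fy - y0;
            float a = sample(src, w, h, x0, y0),     b = sample(src, w, h, x0 + 1, y0);
            float c = sample(src, w, h, x0, y0 + 1), d = sample(src, w, h, x0 + 1, y0 + 1);
            dst[Y * W + X] = (a * (1 - tx) + b * tx) * (1 - ty)
                           + (c * (1 - tx) + d * tx) * ty;
        }
    return dst;
}

// Unsharp mask: out = img + amount * (img - blurred), with a 3x3 box blur.
static Image sharpen(const Image& img, int W, int H, float amount) {
    Image out(W * H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float blur = 0.f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    blur += sample(img, W, H, x + dx, y + dy);
            blur /= 9.f;
            float v = img[y * W + x];
            out[y * W + x] = std::clamp(v + amount * (v - blur), 0.f, 1.f);
        }
    return out;
}

int main() {
    Image frame(960 * 540, 0.25f);                    // stand-in for a rendered frame
    Image up  = upscale(frame, 960, 540, 1920, 1080); // render low, display high
    Image out = sharpen(up, 1920, 1080, 0.6f);        // claw back some apparent detail
    (void)out;
    return 0;
}

A neural network earns its keep over something like this mainly by reconstructing detail that plain scaling and sharpening can't invent.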
 
It's not hard to get resources to train neural networks these days; you can rent cloud compute if you don't want to roll your own. I'm not sure they even need neural networks.

Remember DLSS 1.5 on Control? That was just a conventional shader program, and it did a pretty good job.
Most of the discourse around DLSS going forward is about DLSS 2.0 and rumors of DLSS 3.0, from what I've been able to see here and elsewhere.

It needs to be trained to really do its job. I'm betting that Nvidia will do the optimizations for any developer that's willing to put their own work toward supporting DLSS, and I'm betting that they're much less likely to do it for DML AI versions. So that leaves someone footing the bill for replicating that effort. It would make sense for Microsoft to do it for Xbox games, wouldn't it?
 
Tell that to the marketing guys who will be pimping Fortnite, Cyberpunk, The Witcher, etc. Sorry, but the "people don't care about reflections" shtick is getting old. The average person doesn't pick and choose which graphics effects they care about; they just want the best they can get for a good price.

And like most marketing tools, it only fools the few who don't pay attention. The average person doesn't even know what ray tracing is and couldn't care less, as the GPU they buy isn't even close to powerful enough to run those gimmicks right now. Ray tracing is nothing more than a sticker on a box at the moment that most will barely notice. Like SLI and CrossFire, it sounds good up until you try to use it, and you always feel disappointed by how much you paid for that poor experience.
 
And like most marketing tools, it only fools the few who don't pay attention. The average person doesn't even know what ray tracing is and couldn't care less, as the GPU they buy isn't even close to powerful enough to run those gimmicks right now. Ray tracing is nothing more than a sticker on a box at the moment that most will barely notice. Like SLI and CrossFire, it sounds good up until you try to use it, and you always feel disappointed by how much you paid for that poor experience.

You're right. The average person doesn't know what RT is. Just like they don't know what HDR or 4K means. All they know is that it's "better" and that they should want it.
 