[WCCF] [IgorsLab] Alleged performance benchmarks for the AMD Radeon RX 6800 XT "Big Navi" graphics card have leaked.

I never said it was phoning home. There is no AI running on your local computer, just data generated by an AI. AIs "learn"; the data on your computer is just an algorithm specific to the game you play. The AI is running on the computer that generated the upscaling algorithm. It's all just buzzwords used by Nvidia.
As of 2.0 and now 3.0, the upscaling is all done locally. The process works similarly to anti-aliasing, only for upscaling, and no longer gets trained against a specific title; developers can just use it. It's also now baked into the UE4 engine, and according to Unreal it's no harder to implement than TAA but drastically better.
 
As of 2.0 and now 3.0, the upscaling is all done locally. The process works similarly to anti-aliasing, only for upscaling, and no longer gets trained against a specific title; developers can just use it. It's also now baked into the UE4 engine, and according to Unreal it's no harder to implement than TAA but drastically better.
Developers need to use it to make their game compatible. You are running an algorithm on your local computer that upscales the image. You are not running an AI.
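
To make the training-versus-inference distinction concrete, here's a minimal toy sketch in Python (my own illustration, not Nvidia's actual pipeline): the expensive "learning" happens offline on the vendor's hardware, and what ships to your PC is just the frozen weights, applied as a fixed function every frame.

```python
# Minimal toy sketch (my own illustration, NOT Nvidia's actual pipeline) of
# the split being argued about: the "learning" happens offline, and the thing
# running on your PC is just the frozen weights applied to each frame.
import numpy as np

def upscale(lo, w):
    """2x upscale: each input pixel expands into a 2x2 block weighted by w."""
    out = np.empty((2 * lo.shape[0], 2 * lo.shape[1]))
    for di in range(2):
        for dj in range(2):
            out[di::2, dj::2] = w[di, dj] * lo
    return out

def train(lo_frames, hi_frames, steps=200, lr=0.5):
    """Offline, on the vendor's cluster: fit w by gradient descent on MSE."""
    w = np.ones((2, 2))
    for _ in range(steps):
        for lo, hi in zip(lo_frames, hi_frames):
            err = upscale(lo, w) - hi
            for di in range(2):
                for dj in range(2):
                    w[di, dj] -= lr * np.mean(err[di::2, dj::2] * lo)
    return w  # these frozen weights are what would ship in a driver

# Locally, per frame: inference only -- no learning happens on your machine.
rng = np.random.default_rng(0)
hi_frames = [rng.random((8, 8)) for _ in range(4)]
lo_frames = [f[::2, ::2] for f in hi_frames]  # stand-in for a low-res render
w = train(lo_frames, hi_frames)
restored = upscale(lo_frames[0], w)
```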
 
As of 2.0 and now 3.0, the upscaling is all done locally. The process works similarly to anti-aliasing, only for upscaling, and no longer gets trained against a specific title; developers can just use it. It's also now baked into the UE4 engine, and according to Unreal it's no harder to implement than TAA but drastically better.
Epic generally doesn't mainline what they can't open source. Nvidia maintains a branch that implements it, which you can pull from if you want.

UE will still be making improvements to their TAA; it's getting significant updates in the next major release and onwards into next gen.
 
[attached: two leaked benchmark screenshots]

source 1
source 2

What do you think? If this is true, Navi 21 XTX could finally bring a fight to the high-end market.

It's possible. We just don't have any solid information yet. However, I would remind people that this is 3DMark. It's not an actual game engine, nor is it representative of one. We've seen cards in the past that did really well in 3DMark but didn't do so well in actual games. While interesting, it's just one test suite.
 
Remember: if Big Navi beats an overclocked 3090 in 99 out of 100 scenarios, but is within margin of error in a SINGLE ONE, Nvidia has the faster card.

This is how mindshare works. The only way for AMD to claw back mindshare is to be exclusively faster in. every. game. If Nvidia wins in a SINGLE GAME, the status quo is preserved.

"In the ballpark" only solidifies Radeon as a "budget alternative" brand, like the knock-off breakfast cereal brands you buy when you can't afford real Cocoa Pops.
 
Remember: if Big Navi beats an overclocked 3090 in 99 out of 100 scenarios, but is within margin of error in a SINGLE ONE, Nvidia has the faster card.

This is how mindshare works. The only way for AMD to claw back mindshare is to be exclusively faster in. every. game. If Nvidia wins in a SINGLE GAME, the status quo is preserved.

"In the ballpark" only solidifies Radeon as a "budget alternative" brand, like the knock-off breakfast cereal brands you buy when you can't afford real Cocoa Pops.
Even if it's close enough I'll take it, or at least try to. A bunch of my work stuff still relies on CUDA, so Nvidia isn't going anywhere any time soon there, but for home use I'd be more than happy to give AMD a shot. It's been a long time since one of their GPUs graced my desk; I want to say it was a pair of HD 6850s in CrossFire. I remember it let me play The Old Republic with all the eye candy on at 1080p, except for the fight with Darth Malgus, where I had to turn off one of the settings because of the huge FPS drop I would get right when he used one of his abilities. Then I learned it was best to just look at the floor on that fight and listen for the audio cues. It's been a long time since then; I'd like to play for the red team again.
 
Remember: if Big Navi beats an overclocked 3090 in 99 out of 100 scenarios, but is within margin of error in a SINGLE ONE, Nvidia has the faster card.
Depends on what the one game is. If it's Call of Duty, okay, sure. Barbie Horse Adventure? Not so much.
 
Apparently you don't want to take my word for it that it's a lot more complicated than that. Global illumination allows far more approximation than shadows and reflections do. Should I again post the image I posted earlier? It shows the scale of resources raytracing takes up, from least demanding (sound) to most demanding (full raytracing). It's from Sony's PS5 presentation on raytracing.

Sony's slide is misleading. What they call "full raytracing" is what I refer to as proper global illumination. The global illumination in their slide refers to the simple one-bounce diffuse like you have in Metro Exodus. That is far from actual GI.
 
Sony's slide is misleading. What they call "full raytracing" is what I refer to as proper global illumination. The global illumination in their slide refers to the simple one-bounce diffuse like you have in Metro Exodus. That is far from actual GI.

Nvidia says that the global illumination used in Metro Exodus is more performance-intensive than reflections.
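
For intuition on why bounce count dominates GI cost, here's a toy Python sketch (my own illustration, with assumed sample counts, not any engine's code): with N diffuse samples per bounce, total rays per pixel grow roughly geometrically with depth, which is why one-bounce diffuse is so much cheaper than proper multi-bounce GI.

```python
# Toy illustration (my own, assumed sample counts) of why one-bounce diffuse
# GI is so much cheaper than proper multi-bounce GI: with N samples per
# diffuse bounce, total rays per pixel grow roughly as N + N^2 + ... + N^depth.
def gather(depth, max_bounces, samples_per_bounce, counter):
    if depth >= max_bounces:
        return
    for _ in range(samples_per_bounce):
        counter[0] += 1  # one secondary ray cast at this bounce
        gather(depth + 1, max_bounces, samples_per_bounce, counter)

for bounces in (1, 2, 3, 4):
    rays = [0]
    gather(0, bounces, 8, rays)  # 8 diffuse samples per hit point
    print(f"{bounces} bounce(s): {rays[0]:5d} rays per pixel")
# 1 bounce(s):     8 rays per pixel
# 2 bounce(s):    72 rays per pixel
# 3 bounce(s):   584 rays per pixel
# 4 bounce(s):  4680 rays per pixel
```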
 
I think you’re confusing RT with image filtering. There’s no such thing as bicubic or trilinear RT.



Interesting. What exactly would make an RT implementation faster at shadows but slower at reflections? They both require a single RT bounce and both need to handle transparency. Global illumination is harder than both as you really want multiple bounces for a realistic effect.
Shadow rays only need to be cast through bounding volumes (ray-box intersections), while reflection rays need pixel/color information, which requires ray-triangle intersections plus texture filtering. Also, you can use filters for lighting, which makes it a bit softer but uses fewer rays; you can't really do that with reflections, otherwise you'd get a very blurry reflection that would look worse than the screen-space reflections currently in use. I also (personally) feel the gains in lighting realism from RT are superior to the increase in reflective puddles ;).

Anyway, we won't know how big a performance hit RT on AMD hardware will be in either situation, but my guess is that for heavily reflective scenes Nvidia will have a decent lead, and where RT is used for lighting only they will be very similar, so what you prefer and what games you play will probably have an impact on what you think is "worth" buying. I'll just say it now, because everyone has different opinions on visuals: if you find yourself arguing about a specific visual feature, it's subjective; you aren't wrong and neither is the person you're arguing with, you just focus on and care about different things.
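
A rough, self-contained sketch of that asymmetry (simplified toy code, not any driver's real traversal): a shadow ray is an "any hit" query that can stop at the first occluder and returns one bit, while a reflection ray is a "closest hit" query that must find the nearest surface and then do shading work on it.

```python
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    cx: float; cy: float; cz: float; r: float
    color: tuple = (1.0, 1.0, 1.0)

def hit_t(s, o, d):
    """Smallest positive t where ray o + t*d hits sphere s (d normalized)."""
    ox, oy, oz = o[0] - s.cx, o[1] - s.cy, o[2] - s.cz
    b = 2 * (ox * d[0] + oy * d[1] + oz * d[2])
    c = ox * ox + oy * oy + oz * oz - s.r * s.r
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None

def shadow_ray(origin, direction, scene):
    # "Any hit" query: returns one bit and can bail on the FIRST occluder.
    return any(hit_t(s, origin, direction) is not None for s in scene)

def reflection_ray(origin, direction, scene):
    # "Closest hit" query: must consider every hit to find the nearest,
    # then do shading work (a real renderer filters textures + lights here).
    hits = [(t, s) for s in scene if (t := hit_t(s, origin, direction)) is not None]
    if not hits:
        return (0.2, 0.3, 0.5)  # sky color
    _, nearest = min(hits, key=lambda h: h[0])
    return nearest.color

scene = [Sphere(0, 0, 5, 1, (1, 0, 0)), Sphere(2, 0, 7, 1, (0, 1, 0))]
print(shadow_ray((0, 0, 0), (0, 0, 1), scene))      # True: path is occluded
print(reflection_ray((0, 0, 0), (0, 0, 1), scene))  # (1, 0, 0): red sphere
```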
 
DirectML is equivalent to CUDA; it's just a framework. DLSS is a neural network trained using CUDA.

Is Microsoft training image upscaling models and including them as part of DirectML? Or did you mean game developers will use DirectML to train similar models to DLSS?
I think he meant DirectML can be used to perform the function of DLSS in the future and be an open standard (if not an open implementation) that would work across manufacturers. Obviously it hasn't been done/shown yet, so we don't know how well or poorly it would work, nor how easy it would be to support.
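
For what it's worth, DirectML is already reachable today through ONNX Runtime's DirectML execution provider (the onnxruntime-directml package on Windows), so a vendor-neutral upscaler could look something like the sketch below. Note that "upscaler.onnx" is a hypothetical trained model I'm assuming for illustration, not a real shipped artifact.

```python
# Sketch of vendor-neutral ML upscaling via ONNX Runtime's DirectML execution
# provider (needs the onnxruntime-directml package on Windows).
# "upscaler.onnx" is a hypothetical model assumed for illustration.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "upscaler.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # DirectML first
)

# Pretend this is a 1080p frame; the model would emit e.g. a 4K frame.
frame = np.random.rand(1, 3, 1080, 1920).astype(np.float32)
(upscaled,) = sess.run(None, {sess.get_inputs()[0].name: frame})  # assumes one output
print(upscaled.shape)
```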
 
remember, if big Navi beats an overclocked 3090 in 99 out of 100 scenarios, but is within margin of error of a SINGLE ONE scenario, Nvidia has the faster card.

This is how Mindshare works. The only way for AMD to claw back mindshare is to be exclusively faster in. every. game. If Nvidia wins in a SINGLE GAME, its status-quo preserved.

"In the ballpark" only solidifies Radeon as an "Budget-Alternative" brand. Like the knock-off breakfast cereal brands that are what you buy when you can't afford real cocoa pops.

OK, I would love some of what you're smoking over there.

If an F1 car wins 99 out of 100 races... and Ferrari wins one, I guess Ferrari is the season winner, right?

We'll see where AMD stacks up when reviewers get product to review... however, if AMD wins 99 out of 100 benchmarks, yes, they win. Nvidia would in that case be the lesser card.

I really don't get where people think Nvidia has won some major mindshare prize. They have the market share right now because, yes, they have had the winning cards for at least three cycles now. Yes, that wins you market share. Whoever has the fastest card wins... that is the nature of a race. To continue with the 99 out of 100: that is how many people just buy the fastest card in their price range. That is why AMD has sold tons of 580s... and Nvidia has sold tons of 1080s.
 
I think he meant DirectML can be used to perform the function of DLSS in the future and be an open standard (if not an open implementation) that would work across manufacturers. Obviously it hasn't been done/shown yet, so we don't know how well or poorly it would work, nor how easy it would be to support.

The best outcome would be if game engines implemented hardware-agnostic upscaling using DirectML. I haven't seen anything yet about developers working on ML-based upscaling, though.
 
OK, I would love some of what you're smoking over there.

If an F1 car wins 99 out of 100 races... and Ferrari wins one, I guess Ferrari is the season winner, right?

We'll see where AMD stacks up when reviewers get product to review... however, if AMD wins 99 out of 100 benchmarks, yes, they win. Nvidia would in that case be the lesser card.

I really don't get where people think Nvidia has won some major mindshare prize. They have the market share right now because, yes, they have had the winning cards for at least three cycles now. Yes, that wins you market share. Whoever has the fastest card wins... that is the nature of a race. To continue with the 99 out of 100: that is how many people just buy the fastest card in their price range. That is why AMD has sold tons of 580s... and Nvidia has sold tons of 1080s.
Three generations of cards is enough to establish a consumer perception that "Nvidia=faster". That's why I think AMD still needs to hit Nvidia on price this generation. Being a better value WITH better performance is going to be the quickest way to win back mind share. Then AMD can charge more, like they have announced with Zen 3.
 
Three generations of cards is enough to establish a consumer perception that "Nvidia=faster". That's why I think AMD still needs to hit Nvidia on price this generation. Being a better value WITH better performance is going to be the quickest way to win back mind share. Then AMD can charge more, like they have announced with Zen 3.
I agree with the consumer-perception rationale. But from a pricing perspective, AMD has to walk a fine line between pricing too low and looking like a budget competitor, and pricing too high, where people will buy an Nvidia card at the same price because that is what they are familiar with. I think a 5-10% lower price would be ideal if the cards perform comparably.
 
I agree with the consumer-perception rationale. But from a pricing perspective, AMD has to walk a fine line between pricing too low and looking like a budget competitor, and pricing too high, where people will buy an Nvidia card at the same price because that is what they are familiar with. I think a 5-10% lower price would be ideal if the cards perform comparably.

Given that TSMC capacity is limited, I believe AMD will not be able to meet demand if priced cheaper than Nvidia for the same performance.
 
Given that TSMC capacity is limited, I believe AMD will not be able to meet demand if priced cheaper than Nvidia for the same performance.
The way things are shaping up, it's not only that AMD will have competitive performance; they also have an exciting new architecture that is very bandwidth-efficient. The buzz around that will drive demand further, so I expect AMD cards to be hard to find too. Not to mention, if prices are attractive it will only be that much worse.
 
These "leaks" look intentional. Only benchmarks, no in-game stuff?

Fire Strike, I know, is one AMD always scores better on. Are there any games at all that it is representative of?

If they actually hit 3080 performance in games, that will be great.
 
These "leaks" look intentional. Only benchmarks, no in-game stuff?

Fire Strike, I know, is one AMD always scores better on. Are there any games at all that it is representative of?

If they actually hit 3080 performance in games, that will be great.

A lot of the leaks have been at 4K too, even the 3DMark stuff. I wonder if that's because Navi's uber cache isn't as helpful at lower resolutions.
 
Remember: if Big Navi beats an overclocked 3090 in 99 out of 100 scenarios, but is within margin of error in a SINGLE ONE, Nvidia has the faster card.

This is how mindshare works. The only way for AMD to claw back mindshare is to be exclusively faster in. every. game. If Nvidia wins in a SINGLE GAME, the status quo is preserved.

"In the ballpark" only solidifies Radeon as a "budget alternative" brand, like the knock-off breakfast cereal brands you buy when you can't afford real Cocoa Pops.

No, it doesn't work that way unless everyone is simply stupid. If it's faster in 99 games, no one is going to go search for the 1 game it's not faster in. If they do, they were already biased and weren't looking to buy AMD anyway, just wanted to support their own bias lol.
 
A lot of the leaks have been at 4K too, even the 3DMark stuff. I wonder if that's because Navi's uber cache isn't as helpful at lower resolutions.

RedGamingTech has slides he has been asked not to share. AMD will show ten games. At 1440p it will beat the 3080 in 8 out of 10 games, and at 4K it will beat it in 5, tie in 2, and lose in 3. That's for the 6800 XT, the card AIBs will be getting for now.
 
If anything it seems likely to go the other way; better at lower resolutions and a bottleneck at 4K.

Not if it's sized right. Higher resolutions require more shading and more fillrate, but some parts of a frame bypass shading and are 100% fillrate- and bandwidth-limited, e.g. shadow maps and G-buffer writes. If the cache helps with those, Navi could be a fillrate monster and process those steps in record time.
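
Some back-of-envelope numbers on the "sized right" point, assuming the rumored 128 MB on-die cache and a typical deferred G-buffer layout (four RGBA8 targets plus a 32-bit depth buffer). These are my own rough assumptions, not confirmed specs:

```python
# Back-of-envelope numbers (my own assumptions: a rumored 128 MB on-die
# cache, and a deferred G-buffer of four RGBA8 targets plus 32-bit depth).
CACHE_MB = 128
BYTES_PER_PIXEL = 4 * 4 + 4  # 4 x RGBA8 + depth = 20 bytes per pixel

for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4K":    (3840, 2160)}.items():
    gbuffer_mb = w * h * BYTES_PER_PIXEL / 2**20
    verdict = "fits in cache" if gbuffer_mb <= CACHE_MB else "spills to VRAM"
    print(f"{name}: G-buffer ~ {gbuffer_mb:.0f} MB -> {verdict}")
# 1080p: ~ 40 MB -> fits in cache
# 1440p: ~ 70 MB -> fits in cache
# 4K:    ~ 158 MB -> spills to VRAM
```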
 
The best outcome would be if game engines implemented hardware-agnostic upscaling using DirectML. I haven't seen anything yet about developers working on ML-based upscaling, though.
I haven't either. I can't even find an actual go-live date for it, only that the Xbox supports it and they plan to implement it at a later date. DirectML could do everything that DLSS does, but since Nvidia has already done the heavy lifting and created the AI that makes the algorithms, Microsoft could be waiting for its own algorithms and AI training to be more complete, or it could be left up to developers to do that. Not sure; there is very little info out there about it beyond "it does what DLSS does."
 
A lot of the leaks have been at 4K too, even the 3DMark stuff. I wonder if that's because Navi's uber cache isn't as helpful at lower resolutions.
To be fair, anything 3080-class at 1080p or even 1440p is going to be CPU-bound and not show much in the way of a relevant score. Being 30 fps slower at that resolution means very little when scoring in the 300s.
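
A toy model of that effect, with made-up numbers: per frame, time is roughly max(CPU time, GPU time), so once the GPU's share drops below the CPU's, a faster GPU no longer shows up in the score.

```python
# Toy model with made-up numbers: per frame, time ~= max(cpu_ms, gpu_ms),
# so once the GPU finishes faster than the CPU, extra GPU speed is invisible.
CPU_MS = 3.3  # hypothetical ~300 fps CPU ceiling

def fps(gpu_ms_at_4k, pixel_scale):
    gpu_ms = gpu_ms_at_4k * pixel_scale  # GPU load scales with pixel count
    return 1000 / max(CPU_MS, gpu_ms)

for res, scale in [("4K", 1.0), ("1440p", 0.44), ("1080p", 0.25)]:
    fast, slow = fps(10.0, scale), fps(12.0, scale)  # card A vs 20% slower B
    print(f"{res}: {fast:.0f} fps vs {slow:.0f} fps")
# 4K:    100 vs  83  (the gap is obvious)
# 1440p: 227 vs 189
# 1080p: 303 vs 303  (both CPU-bound -- "scoring in the 300s")
```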
 
Three generations of cards is enough to establish a consumer perception that "Nvidia=faster". That's why I think AMD still needs to hit Nvidia on price this generation. Being a better value WITH better performance is going to be the quickest way to win back mind share. Then AMD can charge more, like they have announced with Zen 3.
I'm sure they could, but I'm not 100% sure they want to. They just need to keep it priced at a point where it moves: too low and it will move too fast and they will have stock and supply issues, as TSMC works in batches, so they would need to wait for their next batch cycle to come around; too high and they lose out to Nvidia, who despite current availability should be able to keep up production. But these cards are for bragging rights; the real fight is going to be over GA106 & GA107 vs Navi 22 & 23. The mid-to-low-end segments are where the bulk of the sales will be, and that's where they fight for market dominance. Because at the end of the day, developers are going to develop for the feature sets where the majority of the users are.
 
Not if it’s sized right.
We can hope that's the case, but if they're working with games not yet optimized for RDNA2 (and how could they be) with early drivers, it may not live up to its performance potential.

But that's almost always true for new generations, and it was also true for Ampere, so it's probably still fair to compare them directly.
 
We can hope that's the case, but if they're working with games not yet optimized for RDNA2 (and how could they be) with early drivers, it may not live up to its performance potential.

But that's almost always true for new generations, and it was also true for Ampere, so it's probably still fair to compare them directly.

Any console game made for the PS5/Xbox Series X will be optimized for RDNA2 already. If that's not what you meant, muh bad, but it will be true going forward.
 
Any console game made for the PS5/Xbox Series X will be optimized for RDNA2 already. If that's not what you meant, muh bad, but it will be true going forward.
Not quite; the architecture and feature sets in the consoles are pretty different compared to a PC. They get to take advantage of things that most users don't have, like direct storage, a much smaller amount of RAM, and other things.
Most of the time, game optimization boils down to tweaking the crap out of textures and their storage to improve how fast they work in a given condition. Sure, some of it is code-based, but most of that is handled by the compiler, as they use one environment for Xbox, PC, PS, and Switch.
 
Remember: if Big Navi beats an overclocked 3090 in 99 out of 100 scenarios, but is within margin of error in a SINGLE ONE, Nvidia has the faster card.

This is how mindshare works. The only way for AMD to claw back mindshare is to be exclusively faster in. every. game. If Nvidia wins in a SINGLE GAME, the status quo is preserved.

"In the ballpark" only solidifies Radeon as a "budget alternative" brand, like the knock-off breakfast cereal brands you buy when you can't afford real Cocoa Pops.
I think if AMD can beat Nvidia's 3090 and it costs "only" $999, you could not only call that a win, you could call it an ass-whooping.

My prediction is
6800XT will be $649
6900XT will be $999

My wish is
6800XT $599
6900XT $699
 
OK, I would love some of what you're smoking over there.

If an F1 car wins 99 out of 100 races... and Ferrari wins one, I guess Ferrari is the season winner, right?

We'll see where AMD stacks up when reviewers get product to review... however, if AMD wins 99 out of 100 benchmarks, yes, they win. Nvidia would in that case be the lesser card.

I really don't get where people think Nvidia has won some major mindshare prize. They have the market share right now because, yes, they have had the winning cards for at least three cycles now. Yes, that wins you market share. Whoever has the fastest card wins... that is the nature of a race. To continue with the 99 out of 100: that is how many people just buy the fastest card in their price range. That is why AMD has sold tons of 580s... and Nvidia has sold tons of 1080s.
The average consumer doesn't know the outcome of such F1 races. When average people think of Ferrari, they think "fast" or "fastest", even though that isn't the case anymore. The more informed crowd (us) knows better, and it isn't hard to find the information.
This is mindshare.
 
The average consumer doesn't know the outcome of such F1 races. When average people think of Ferrari, they think "fast" or "fastest", even though that isn't the case anymore. The more informed crowd (us) knows better, and it isn't hard to find the information.
This is mindshare.
I really hope the lemmings won't dismiss the RX 6000 launch just because they think Nvidia is king. This was said about Ryzen too, when everyone thought Intel was king, and look where they are now.
The same could happen to Nvidia if they don't pull their heads out of their butts.
 
No, it doesn't work that way unless everyone is simply stupid. If it's faster in 99 games, no one is going to go search for the 1 game it's not faster in. If they do, they were already biased and weren't looking to buy AMD anyway, just wanted to support their own bias lol.
Going to agree with this to an extent. 99-1 is an exaggerated number, hopefully to emphasize the point. But I do agree that if AMD cards flat-out win, say, 60% of games, or win 80% of games but only by a couple percent in most of them, that still may not be enough to sway many people from Nvidia.

Anecdotally, amongst all my friends that PC game, 5 of them have only bought Nvidia cards for every upgrade over the past decade and 2 have bought an AMD card at some point. Those 5 don’t even research AMD’s offerings whenever they upgrade because they’ve always been happy with Nvidia. Those are the difficult customers to convert.
 
Not quite; the architecture and feature sets in the consoles are pretty different compared to a PC. They get to take advantage of things that most users don't have, like direct storage, a much smaller amount of RAM, and other things.
Most of the time, game optimization boils down to tweaking the crap out of textures and their storage to improve how fast they work in a given condition. Sure, some of it is code-based, but most of that is handled by the compiler, as they use one environment for Xbox, PC, PS, and Switch.

Also, there's no indication that the consoles have any special caching in place. This could be a PC-only play.
 