AMD 6800 vs 3070 RT leaks

 
https://wccftech.com/amd-radeon-rx-6800-rdna-2-graphics-card-ray-tracing-dxr-benchmarks-leak-out/

Grain of salt, obviously. But if true, it's possible AMD is going to win RT. Again, if this isn't BS, NV looks to be behind unless they lean on their DLSS.

If AMD ends up with a DLSS-type feature... they could remove ALL of Nvidia's check marks.
I looked at that article, and I don't see how AMD is going to beat NV at ray tracing when the 3070, which is roughly where the 2080 Ti is for RT, is fairly close to the 6800 even with DLSS off.

For perspective on that, I just did the new RT benchmark from 3DMark and I get just under 60 FPS, and a guy with a 2080 Ti Kingpin @2.2GHz gets 33 FPS.

But, as always - waiting for actual benchmarks from a plethora of reviewers (and platforms!) - is a good idea.
 
I could be wrong or misunderstanding, but I heard on YouTube that the 46 FPS / 80 FPS figures were achieved with shadow quality set to High, while the graph says Ultra preset (which sounds like having textures set to Ultra instead of High).

But they say they are using the same settings, and I cannot read the language in the screenshots (are they using a 3500X as well, or something close to equivalent?).

I also cannot imagine that, if it were faster than Nvidia, they would not have put that benchmark in their presentation. But it could be behind by just a very little, and that would be more than enough.
 
I agree that if it was spectacular they would be talking about it themselves at this point... having said that:
In theory their solution should in fact be a lot faster. Nvidia's solution, as much as they want to talk about RT cores, happens in what amounts to a co-processor. AMD's solution happens within each CU. In terms of brute force I have no doubt Nvidia's hardware is faster at calculating rays and the intersect (tensor-type) math required. Still, even if AMD's calculator is slower, it's in the same room as the guy (program) punching in the numbers. The latency gain on that many low-precision calculations should offset a lot of that brute force.

Anyway, you're all right... wait for the real benchmarks. I have a feeling, and it's just a feeling based on what we know of Navi 1 and the little they have mentioned about what they changed in Navi 2's compute units, that the lower-end AMD SKUs are going to destroy Nvidia's equivalents. I expect the 6800 will beat the 3070, and if we get an even more cut-down 6700 I expect it will probably double the RT performance of an eventual 3060. On the high end, though, with perfect Nvidia silicon in the 3090 and almost perfect in the 3080, NV may indeed still be the fastest RT performer.

Perhaps AMD just doesn't want to talk about RT right now for that reason. So far everything they have shown off is them drop-kicking NV. RT may be a case where they again look like the ultimate value buy... but NV is still the top dog. If things turn out that way I think I may even agree with their approach... have one big launch where the only things you talk about are the aspects where you're swinging the big dick. lol
 
If AMD really is beyond all doubt curb-stomping Nvidia this go'round, maybe they don't feel the need to trumpet the fact, because the reviews will do it for them.

But yeah agreed with the above about waiting for said reviews.
 
I have a feeling that it will depend heavily on the game, which effects are being done with RT, and especially whether it's more than one effect at once. And maybe even on some driver optimization from there (I have heard rumors that AMD's cache could have ray tracing benefits).

I say this because Dirt 5 is an AMD-featured game, but it's only doing ray-traced shadows on the ray tracing accelerators. The voxel-cone-traced bounce-lighting global illumination is done via GPU compute. It could be that their RT accelerators are not so great at GI stuff. Or maybe it's just overall more efficient to run different traced effects in parallel on different parts of the GPU, rather than pushing it all onto the RT accelerators at once? Who knows. I hope AMD releases some technical videos breaking this stuff down for us.
 
The Infinity Cache is potentially interesting for RT stuff... doing things in the regular CUs means everything is attached to the same VRAM cache. The problem with RT math isn't complexity, it's volume... I could see a buffer-type cache working very well. (With a little bit of compression they could probably fit a lot more ray math into a fraction of that buffer space than you would imagine.) Not to repeat myself too much, but I have a feeling AMD's solution is finesse vs. Nvidia's brute-force solution. I am still not sold on the need for tons of crazy RT... but it will be interesting to see how they stack up. AMD may give developers an almost RT-lite option, which makes sense with consoles in the picture. Clearly AMD's tech has to scale down better than NV's.

Looking forward to the 6700 vs. the 3060. It's the biggest market segment... and if those launch early next year it will be interesting to see whether AMD wants to talk more about RT performance vs. NV at that point.
 