RX 6800 XT Red Devil just beat RTX 3090 😱

One thing I noticed is that even when the card hits 2700+ MHz, the performance gain doesn’t scale with the frequency increase (~12%). That means Big Navi has a bottleneck somewhere, and it hits it sooner than Ampere does. If I scale my Strix 3080 up 5%, the gains typically scale with it in most games and benchmarks. Judging by what der8auer showed, Big Navi plateaued early and wasn’t much faster than the 3080, and still lost in some cases.
[attached: benchmark screenshots]
Look at PUBG: the 6800 XT with a massive OC at 2700 MHz barely gets a 3 FPS average win over a stock 3080 (the 1% low variance is probably driver related). Not very impressive, and it points to some bottleneck in Big Navi in that configuration. I’m curious to see how the 6900 XT scales since it does have more CUs.
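If anyone wants to sanity-check the scaling claim, here's a rough sketch of the math; the stock clock and FPS numbers below are placeholders for illustration, not der8auer's actual results:

```python
# Back-of-envelope clock-scaling check (illustrative numbers, not measured data)
stock_clock_mhz = 2400   # assumed typical out-of-box boost for a 6800 XT
oc_clock_mhz    = 2700   # the overclock discussed above

stock_fps = 100.0        # assumed stock result in some game
oc_fps    = 104.0        # assumed overclocked result

clock_gain = oc_clock_mhz / stock_clock_mhz - 1   # ~12.5%
perf_gain  = oc_fps / stock_fps - 1               # ~4%

# Perfect frequency scaling would put this ratio near 1.0; a value well below
# 1.0 suggests a bottleneck elsewhere (bandwidth, front end, CPU, etc.).
scaling_efficiency = perf_gain / clock_gain
print(f"clock gain {clock_gain:.1%}, perf gain {perf_gain:.1%}, "
      f"scaling efficiency {scaling_efficiency:.2f}")
```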
 
I bought a 3090 and am happy AMD can beat it in certain scenarios for less $. Team Red nipping at Jensen's heels is a good thing for all of us. Still, these 6800 XTs are going for $1200+ on eBay. "$800" cards from either side are nowhere to be had atm.
 
I love that someone came into this thread to defend their purchase.

I own neither and have no dog in this fight.

I've got nothing to defend. I'll probably grab a 6900 XT when it releases in a few days (if I can snag one), and I could sell my Strix for more than I paid for it. But given the scope of der8auer's test, we can see that overclocking the 6800 XT hits a hard limit on gains. It's hard to measure Ampere the same way because of how restricted the VBIOS is.
 
*when overclocked then compared to a non-overclocked 3090, and only in certain games and resolutions.

Fixed that for you :p

For reference (10 minutes into the video), here is my 3090 FTW3 Ultra on air getting more than 74 FPS in TSE GT1 (yes, overclocked): https://www.3dmark.com/spy/15584279

And completely ignoring RT and VR performance. Literally the only reason I’d even consider upgrading from ~1080ti rasterized performance.
 
Not to shit on the 1080 Ti, but its raster performance leaves a lot to be desired at higher resolutions; it chunks pretty hard at 4K and 3440x1440.
Going to have to disagree on part of your statement. I've been using a 1080 Ti FTW3 at 3440x1440 for the past 3 years and have had absolutely zero issue maxing out (or nearly maxing out) most games and playing at 100+ FPS (my X34's refresh rate). It did chug a bit when I played on my 4K TV, but that's not a surprise since 4K is roughly 67% more pixels to push than 3440x1440.
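For reference, the raw pixel math (just counting rendered pixels; it says nothing about how any particular game actually scales):

```python
# Pixel-count comparison: 4K UHD vs. 3440x1440 ultrawide
uhd_pixels = 3840 * 2160   # 8,294,400
uw_pixels  = 3440 * 1440   # 4,953,600

extra = uhd_pixels / uw_pixels - 1   # ~0.67
print(f"4K renders {extra:.0%} more pixels per frame than 3440x1440")
```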
 
Going to have to disagree on part of your statement. I've been using a 1080 Ti FTW3 at 3440x1440 for the past 3 years and have had absolutely zero issue maxing out (or nearly maxing out) most games and playing at 100+ FPS (my X34's refresh rate). It did chug a bit when I played on my 4K TV, but that's not a surprise since 4K is roughly 67% more pixels to push than 3440x1440.

I disagree entirely. I had one as well and it chugged on Assassin's Creed, Watch Dogs 2, and others, and couldn't hit 100 FPS for the monitor. I can guarantee that it can't do that today in games like HZD and BL3 without IQ decreases. It's why I switched to the 2080 Ti; even with its less-than-stellar bump over the 1080 Ti, it's much smoother overall.

I have the X34.
 
I disagree entirely. I had one as well and it chugged on Assassin's Creed, Watch Dogs 2, and others, and couldn't hit 100 FPS for the monitor. I can guarantee that it can't do that today in games like HZD and BL3 without IQ decreases. It's why I switched to the 2080 Ti; even with its less-than-stellar bump over the 1080 Ti, it's much smoother overall.

I have the X34.
Maybe with some more modern games but my experience over the previous 3 years was very positive with the 1080 Ti at 3440x1440. Any games I played hit the 100 FPS cap. Sometimes had to turn down a setting or two from max to high but no noticeable IQ drop.

And of course modern games like HZD and BL3 won’t run at all max settings at 3440x1440 at 100 FPS. Even the 2080 Ti has issues there. But in reality I think you and I have different definitions of “chunks”. To me turning down a few settings with minimal to no immediately noticeable IQ drop to get to the 100 FPS cap is far from a 3 year old GPU “chunking”.
 
Maybe with some more modern games but my experience over the previous 3 years was very positive with the 1080 Ti at 3440x1440. Any games I played hit the 100 FPS cap. Sometimes had to turn down a setting or two from max to high but no noticeable IQ drop.

And of course modern games like HZD and BL3 won’t run at all max settings at 3440x1440 at 100 FPS. Even the 2080 Ti has issues there. But in reality I think you and I have different definitions of “chunks”. To me turning down a few settings with minimal to no immediately noticeable IQ drop to get to the 100 FPS cap is far from a 3 year old GPU “chunking”.

Well, AC:O (both of them) couldn't do it, and WD2 is older. All of those games are at least 2-3 years old, so yes, in line with your timeline.

I don't remember the rest; pretty sure I wasn't happy with the FPS/IQ settings in Witcher 3 either on my X34z.

I am happy you like it, but chunking is exactly how I would describe my experience, and my buddy who skipped the 2000 series and got a 3090 was absolutely complaining about his 1080 Ti chunking in 2019/20 titles that I have no real issue with (on a 2080 Ti).
 
Well, AC:O (both of them) couldn't do it, and WD2 is older. All of those games are at least 2-3 years old, so yes, in line with your timeline.
Didn’t play AC:O but I played WD2 just fine by turning down a few settings that made the game not look any different from very high to me. So to each their own I guess.
 
Didn’t play AC:O but I played WD2 just fine by turning down a few settings that made the game not look any different from very high to me. So to each their own I guess.

I mean, GPUs are luxury items and I could play them all fine on my old 970, but that's not why one buys the top of the line.

AC:O has a noticeable chunk/stutter even with G-Sync unless you turn down settings.
 
One thing I noticed is that even when the card hits 2700+ MHz, the performance gain doesn’t scale with the frequency increase (~12%). That means Big Navi has a bottleneck somewhere, and it hits it sooner than Ampere does. If I scale my Strix 3080 up 5%, the gains typically scale with it in most games and benchmarks. Judging by what der8auer showed, Big Navi plateaued early and wasn’t much faster than the 3080, and still lost in some cases.
[attached: benchmark screenshots]
Look at PUBG: the 6800 XT with a massive OC at 2700 MHz barely gets a 3 FPS average win over a stock 3080 (the 1% low variance is probably driver related). Not very impressive, and it points to some bottleneck in Big Navi in that configuration. I’m curious to see how the 6900 XT scales since it does have more CUs.
Likely it's the same thing that holds the 6800 XT back at 4K (and soon the 6900 XT when it's released): memory bandwidth. But I could be wrong, and maybe it's just an architectural thing, like how the Ampere CUDA cores scale better at higher resolutions.
 
Likely it's the same thing that holds the 6800 XT back at 4K (and soon the 6900 XT when it's released): memory bandwidth. But I could be wrong, and maybe it's just an architectural thing, like how the Ampere CUDA cores scale better at higher resolutions.
At 4K you have the same number of triangles but more pixels to fill them. Each pixel gets shaded, and Ampere has more shading capability with 2 FP32 units per SM. At lower resolutions, AMD's cache latency is very low and can feed the stream processors quickly, and the single FP32 unit isn't a limit shading the triangles. Anyway, if reviewers would test by adjusting memory speed at 4K, we could determine whether memory bandwidth is the limiting factor. My 2 cents.
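To put rough numbers on the shading-capability point, here's a sketch of theoretical FP32 throughput at reference boost clocks; sustained clocks in games differ (especially on RDNA2), and TFLOPS don't translate directly into FPS:

```python
# Theoretical FP32 throughput: shader count * 2 ops per clock (FMA) * clock
def tflops(fp32_lanes: int, clock_ghz: float) -> float:
    return fp32_lanes * 2 * clock_ghz / 1000.0

# Ampere: 128 FP32 lanes per SM (the "2x FP32" change); RDNA2: 64 per CU
rtx_3080  = tflops(8704, 1.71)   # 68 SMs -> ~29.8 TFLOPS
rx_6800xt = tflops(4608, 2.25)   # 72 CUs -> ~20.7 TFLOPS

print(f"RTX 3080: {rtx_3080:.1f} TFLOPS, RX 6800 XT: {rx_6800xt:.1f} TFLOPS")
```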
 
I will take paying $60 less (5-8%, compared to my TUF OC at $749) and 5% less performance with the full graphical feature set, rather than pointless raster gains in games and resolutions where it doesn't matter (1080p/1440p).
 
I will take paying $60 less (5-8%, compared to my TUF OC at $749) and 5% less performance with the full graphical feature set, rather than pointless raster gains in games and resolutions where it doesn't matter (1080p/1440p).
Curious, what is this full graphical feature set you speak of?
 
I mean it’s really impressive that you can overclock the hell out of this 6800xt card and beat a stock 3090. Realistically the scalperbots will gobble this up and you’ll see it on eBay going for $1500. 😂

And yet, the less expensive 6900XT has not even been released yet, sure do feel bad for the 3090, stock or overclocked. 😂
 
I've got a 3090 in my main box at the moment, but I'll get a 6900 XT for my all-AMD mITX rig if that ends up being fast as well. Of course, who knows when that will be available to us regular Joes.
 
And yet, the less expensive 6900XT has not even been released yet, sure do feel bad for the 3090, stock or overclocked. 😂
I don't think we'll see a big jump from the 6800 XT to the 6900 XT since the memory bandwidth is the same, unless there's some other architectural issue bottlenecking fill rate at higher clocks that doesn't happen when you add CUs (unlikely). I hope the 6900 XT is able to go toe to toe with the 3090 at $500 less, but I have my doubts. Nvidia's main advantage this gen appears to be GDDR6X, which is why they only have an advantage at higher resolutions and in ray tracing. AMD did an admirable job with the Infinity Cache, helping them trade blows with the 3080 despite significantly lower memory bandwidth, but it's not quite enough to actually beat the 3080, and the 3090 is a decent step up from that.
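For reference, the raw bandwidth numbers work out like this (before the Infinity Cache, which offsets the gap in practice):

```python
# Raw memory bandwidth (GB/s) = bus width in bytes * data rate in Gbps
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

cards = {
    "RX 6800 XT / 6900 XT (256-bit GDDR6 @ 16 Gbps)": bandwidth_gbs(256, 16.0),   # 512 GB/s
    "RTX 3080 (320-bit GDDR6X @ 19 Gbps)":            bandwidth_gbs(320, 19.0),   # 760 GB/s
    "RTX 3090 (384-bit GDDR6X @ 19.5 Gbps)":          bandwidth_gbs(384, 19.5),   # 936 GB/s
}
for name, bw in cards.items():
    print(f"{name}: {bw:.0f} GB/s")
```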
 
I don't think we'll see a big jump from the 6800 XT to the 6900 XT since the memory bandwidth is the same, unless there's some other architectural issue bottlenecking fill rate at higher clocks that doesn't happen when you add CUs (unlikely). I hope the 6900 XT is able to go toe to toe with the 3090 at $500 less, but I have my doubts. Nvidia's main advantage this gen appears to be GDDR6X, which is why they only have an advantage at higher resolutions and in ray tracing. AMD did an admirable job with the Infinity Cache, helping them trade blows with the 3080 despite significantly lower memory bandwidth, but it's not quite enough to actually beat the 3080, and the 3090 is a decent step up from that.
AMD's own data doesn't suggest a GDDR6 memory-bandwidth limitation; so far AMD's numbers are consistent with reviews, depending upon the configuration:

[attached: AMD 3840x2160 benchmark chart]
 
I was talking about having playable frame rates. We will see when AMD has these features in a state worth something. I live in the here and now, not in what might be.

How many current games are using DXR, and how many are using NV's RTX? Some games provide great frame rates with RT enabled, some don't. I would say with more time we will see how it performs overall in ray tracing titles that use DXR.
 
I was talking about having playable frame rates. We will see when AMD has these features in a state worth something. I live in the here and now, not in what might be.
For me the bottom line would be a better gaming experience overall. Using RT but getting poor frame rates, reduced resolution, and other lowered IQ options may create a worse gaming experience. So far the games that seem to support RDNA2 RT well have pointless implementations: Dirt 5's shadows, for example, show no significant noticeable improvement while hindering performance, and Godfall's shadows have outline artifacts with no shadows, or shadows popping in and out (performance is barely hit, except the shadows are worse). Older implementations were mostly a joke: BFV's battlezone areas have virtually all showroom cars with mirror finishes, dust/grit free, mirror water, a huge FPS loss, and so on, which sticks out like a sore thumb and looks stupid. Shadow of the Tomb Raider's shadows are no enhancement at all, with a huge performance penalty plus the Vaseline smear of the older version of DLSS. A lot of growing to be done, from developers as well as hardware.
 
I enjoyed RTX in Metro, WDL, COD BOCW, Control, and one more game that I can't remember. I am also stoked about Cyberpunk 2077, so it is important for me that it works with RTX and DLSS. I only play games once and am a first-day, first-show kind of guy, so not having full features at release is a bit of a bummer for me. Anyway, if people want more raster in games that already run at 100+ FPS, more power to them.

My point remains that I paid $749 for this TUF, which is $60 less than a 6800 XT TUF. OC to OC, I will be about 5% behind while paying 8% less, at 1440p. I can live with that.
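The price/performance math checks out roughly like this (the 5% OC-to-OC gap is this thread's estimate, not a measured figure):

```python
tuf_3080_price   = 749        # what I paid for the TUF 3080 OC
tuf_6800xt_price = 749 + 60   # the 6800 XT TUF price in this comparison

savings  = 1 - tuf_3080_price / tuf_6800xt_price   # ~7.4%, i.e. roughly 8% less
perf_gap = 0.05                                    # assumed OC-to-OC deficit at 1440p
print(f"price savings {savings:.1%} vs. performance deficit {perf_gap:.0%}")
```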
 
I enjoyed RTX in Metro, WDL, COD BOCW, Control, and one more game that I can't remember. I am also stoked about Cyberpunk 2077, so it is important for me that it works with RTX and DLSS. I only play games once and am a first-day, first-show kind of guy, so not having full features at release is a bit of a bummer for me. Anyway, if people want more raster in games that already run at 100+ FPS, more power to them.

My point remains that I paid $749 for this TUF, which is $60 less than a 6800 XT TUF. OC to OC, I will be about 5% behind while paying 8% less, at 1440p. I can live with that.
I thought you lived in the here and now; Cyberpunk 2077 is not out yet :p. Well, if you are getting enjoyment out of the added features, then that is great.
 
One thing I noticed is that even when the card hits 2700+ MHz, the performance gain doesn’t scale with the frequency increase (~12%). That means Big Navi has a bottleneck somewhere, and it hits it sooner than Ampere does. If I scale my Strix 3080 up 5%, the gains typically scale with it in most games and benchmarks. Judging by what der8auer showed, Big Navi plateaued early and wasn’t much faster than the 3080, and still lost in some cases.
[attached: benchmark screenshots]
Look at PUBG: the 6800 XT with a massive OC at 2700 MHz barely gets a 3 FPS average win over a stock 3080 (the 1% low variance is probably driver related). Not very impressive, and it points to some bottleneck in Big Navi in that configuration. I’m curious to see how the 6900 XT scales since it does have more CUs.


GamersNexus showed clearly that the extra RAM does little to nothing (devs will continue to write code for the majority, 3-11 GB, not the Radeon VIIs of the world...) and the hardware RT performance is subpar. As for traditional rasterization? The RX 6000 series is fantastic, but so are Ampere, Turing, Vega, etc. NVIDIA still has the better overall package software-wise. If I were NVIDIA I would be pouring tons of resources into implementing DLSS 2.0 and other R&D.
 
Limited edition only for the stupid keycaps you get. Pretty sure they will make more, with a simpler box, after this run.
 
I was extremely impressed with the Red Devil 5700 XT and now the 6800 XT. Are they releasing a 6900 variant?
 