Radeon 6000 series speculation

vjhawk

Limp Gawd
Joined
Sep 2, 2016
Messages
451
So I was watching a hardware news video and saw the following rumored specs for the Radeon 6000 series. Tell me if you think these are accurate.

YouTube source here:

Radeon 6900 XT: 5120 shaders, 256-bit bus, 16 GB GDDR6, 300 W
Radeon 6800 XT: 3840 shaders, 192-bit bus, 12 GB GDDR6, 200 W
Radeon 6700 XT: 2560 shaders, 192-bit bus, 6 GB GDDR6, 150 W

Based on that (64 shaders per CU), we can infer that the 6900 XT has 80 CUs, the 6800 XT has 60 CUs, and the 6700 XT has 40 CUs.
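Quick sanity check on that, assuming RDNA2 keeps RDNA1's 64 stream processors per CU (that ratio is an assumption, not confirmed):

Code:
# Infer CU counts from the rumored shader counts, assuming 64 shaders per CU (RDNA1 ratio).
SHADERS_PER_CU = 64

rumored_shaders = {"6900 XT": 5120, "6800 XT": 3840, "6700 XT": 2560}

for card, shaders in rumored_shaders.items():
    print(f"{card}: {shaders // SHADERS_PER_CU} CUs")
# 6900 XT: 80 CUs, 6800 XT: 60 CUs, 6700 XT: 40 CUs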

Price guesses? It all depends on where the performance slots in. If the 6900 XT can go toe to toe with the 3080, I'd expect it to cost around $600. But if it's competing with the 3070, then I'd expect it to go for around $500, with prices trickling down from there.

Do you expect the 6000 series to be competitive with Nvidia 3000 series?
 
40/60/80 corresponds pretty well with the TDP ratings of 150/200/300 watts. (Ratio-wise: I have no idea what the cards would pull, just seeing what numbers line up and where.)
 
I think that if true, the 80 CU part will equate to 3080-like performance. If there is a generational IPC uplift of 10% and a 10% frequency increase, along with the supposed cache improvements and a doubling of CUs (assuming a similar CU architecture) compared to the 5700 XT, then a near 100% increase in performance over the 5700 XT seems like a fair bet.
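Back-of-the-envelope version of that, with the caveat that the 10% IPC and 10% clock numbers are rumors and the CU-scaling efficiency is just my guess:

Code:
# Toy estimate vs. the 5700 XT (40 CU). All inputs are rumors or assumptions.
cu_ratio = 80 / 40     # doubled CUs
ipc_uplift = 1.10      # rumored ~10% IPC gain
clock_uplift = 1.10    # assumed ~10% higher clocks
cu_scaling = 0.80      # guess: extra CUs never scale perfectly

naive = cu_ratio * ipc_uplift * clock_uplift
realistic = (1 + (cu_ratio - 1) * cu_scaling) * ipc_uplift * clock_uplift
print(f"perfect scaling: {naive:.2f}x, imperfect scaling: {realistic:.2f}x")
# -> perfect scaling: 2.42x, imperfect scaling: 2.18x; "near 100% faster" looks like a fair bet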
 
I'm more interested in what AMD raytracing looks like compared to Nvidia. I have a 1080 and was very impressed with the 2080. The visuals were that much better. If Big Navi raytracing looks as good, I will buy.
 
Well if AMD can get cards out in quantity, have stable drivers, and no card issues....this is their race to win.
 
Mostly I'll be looking at rasterization performance; the current crop of RT games used the older version of Microsoft's DXR rather than the current DXR 1.1. Current games are developed around Nvidia RTX hardware. This also affects Ampere RT performance, which should be superior to Turing but isn't showing that in the current RT games.

Has to have at least HDMI 2.1; I'd also like to see DP 2.0 for monitors down the line. Big plus if that's the case.

16 GB is a huge plus if the performance is there.

A two-slot card with a good cooling solution at a good cost. I don't like the mammoth Ampere coolers in general, making the FE the only Ampere card I would consider.
 
I'm more interested in what AMD raytracing looks like compared to Nvidia. I have a 1080 and was very impressed with the 2080. The visuals were that much better. If Big Navi raytracing looks as good, I will buy.

My real question is, will current "RTX" games just support AMD raytracing automatically? If not, I could see this being a big sticking point because I don't know that game developers can be trusted to go back and add AMD compatibility.
 
These specs are pretty close to what a lot of us have been postulating for months. The only surprising (and likely limiting) spec is the 256-bit bus on the 6900 XT.

The 6900 XT will be $599; its performance will be just under the 3080, but it will suffer at high resolutions and IQ settings due to the small memory bus. Ironically, it will have a leg up in VRAM capacity over the 3080 10GB, but won't have the bandwidth to fully use it in the few cases where >10GB is helpful.

6800 is quite a step down in CUs. Likely somewhere under 3070 in performance.

6700 will be 2080 Super level and a midrange king.

No point in talking about hardware RT on these as performance will be abysmal vs. nV (unless the implementation is different).
 
My real question is, will current "RTX" games just support AMD raytracing automatically? If not, I could see this being a big sticking point because I don't know that game developers can be trusted to go back and add AMD compatibility.

At a base level, yes, they "should", since they're all built off the Microsoft API. They likely won't be able to take advantage of RT-core-specific optimizations, but may not need to.

No point in talking about hardware RT on these as performance will be abysmal vs. nV (unless the implementation is different).

At a hardware level it's completely different. Whereas Nvidia uses dedicated cores designed to only do RT and nothing else, AMD's version uses the same shader cores to handle RT, but exactly how is well outside my scope to explain.
 
At a base level, yes, they "should", since they're all built off the Microsoft API. They likely won't be able to take advantage of RT-core-specific optimizations, but may not need to.



At a hardware level it's completely different. Whereas Nvidia uses dedicated cores designed to only do RT and nothing else, AMD's version uses the same shader cores to handle RT, but exactly how is well outside my scope to explain.
The shader cores have RT hardware in them; the data is stored in cache just like everything else and can be fetched from VRAM if it's not in the cache. The shader itself can determine whether more rays need casting or it has enough, so it doesn't have to return the result of each intersection, just each final result. Whether this ends up more efficient or worse, we won't know until we get benchmarks.

If you're bored, here is/was their patent, no clue how close they actually followed it, but should be pretty similar.
https://www.freepatentsonline.com/20190197761.pdf
 
Will the 6900 XT need more than a 650 watt power supply? I have a really nice Seasonic Titanium 650 W power supply I don't really want to upgrade.
 
Will the 6900 XT need more than a 650 watt power supply? I have a really nice Seasonic Titanium 650 W power supply I don't really want to upgrade.

I don't think we know anything concrete about the TDP yet, but if it's 300W+ I am guessing a 650W may be cutting it really close.
 
Pretty sure AMD will implement HDMI 2.1 on their 6000 series cards; it wouldn't make sense not to. Not to mention HDMI 2.1 is featured on their A520 and B550 motherboard chipsets, which pretty much indicates AMD will do the same for their upcoming video cards.
 
I don't think we know anything concrete about the TDP yet, but if it's 300W+ I am guessing a 650W may be cutting it really close.
Rumors were showing 300 W for the 6900 XT, 200 W for the 6800, and 150 W for the 6700, so your 650 W should be OK depending on what else you've got and how much you want to OC.
 
Rumors were showing 300 W for the 6900 XT, 200 W for the 6800, and 150 W for the 6700, so your 650 W should be OK depending on what else you've got and how much you want to OC.
Running the rig in my sig with a 3900X.
 
Running the rig in my sig with a 3900X.

If the 3900X is stock you'll likely be fine, but you'll still be at the upper end of the PSU's efficiency curve. Realistically, how often is a 3900X going to be at full load in games? You'll likely sit around 400-450 W in multi-threaded games, assuming a 300 W TDP on the 6900 XT.
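Rough system-load sketch for that combo (the per-component numbers are ballpark guesses, not measurements):

Code:
# Ballpark system power estimate for a 3900X + rumored 300 W 6900 XT.
gpu_board_power = 300   # rumored TDP
cpu_gaming      = 100   # 3900X typical gaming load, well under its 142 W PPT
rest_of_system  = 60    # motherboard, RAM, drives, fans (rough guess)

load = gpu_board_power + cpu_gaming + rest_of_system
print(f"~{load} W load on a 650 W PSU, ~{650 - load} W headroom")
# -> ~460 W load, ~190 W headroom; workable, but transient spikes eat into that margin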
 
If the 3900X is stock you'll likely be fine, but you'll still be at the upper end of the PSU's efficiency curve. Realistically, how often is a 3900X going to be at full load in games? You'll likely sit around 400-450 W in multi-threaded games, assuming a 300 W TDP on the 6900 XT.
Yeah, I did increase the PBO limit on the 3900X, but that's it. I'll wait and see; I'd like to go with an AMD GPU because I like to run a Linux desktop, but I may go RTX 3070. We'll see what happens.
 
Just want something 80% faster than my RX 480.

I used to CrossFire RX 570s on my X58 with a Xeon 5660; the top card was the 8 GB and a 4 GB in slot 3. With great cooling like that you keep the 8 GB buffer in CrossFire and no ribbon needed. AMD was so much more advanced in this area, even Eyefinity. A flashed RX 5700 is almost equal in GPU score on Fire Strike to that setup: a single card was 14,301 and CrossFire was around 26,000. My 5700 does 25,800, so they've already been at what you're looking for.

On the 192-bit bus... only the RX 5500 XT could show how that's going to work, and mine does alright on PCI Express 4.0, which is x8.
 
I used to CrossFire RX 570s on my X58 with a Xeon 5660; the top card was the 8 GB and a 4 GB in slot 3. With great cooling like that you keep the 8 GB buffer in CrossFire and no ribbon needed. AMD was so much more advanced in this area, even Eyefinity. A flashed RX 5700 is almost equal in GPU score on Fire Strike to that setup: a single card was 14,301 and CrossFire was around 26,000. My 5700 does 25,800, so they've already been at what you're looking for.

On the 192-bit bus... only the RX 5500 XT could show how that's going to work, and mine does alright on PCI Express 4.0, which is x8.


I have no idea what you've said there in any relation to my quote. :confused:
 
These specs are pretty close to what a lot of us have been postulating for months. The only surprising (and likely limiting) spec is the 256-bit bus on the 6900 XT.

The 6900 XT will be $599; its performance will be just under the 3080, but it will suffer at high resolutions and IQ settings due to the small memory bus. Ironically, it will have a leg up in VRAM capacity over the 3080 10GB, but won't have the bandwidth to fully use it in the few cases where >10GB is helpful.

6800 is quite a step down in CUs. Likely somewhere under 3070 in performance.

6700 will be 2080 Super level and a midrange king.

No point in talking about hardware RT on these as performance will be abysmal vs. nV (unless the implementation is different).

Performance should be quite a bit higher than that, going by the latest leaks and the boost frequencies. People keep acting like the 3090 is so fast when it's really only about 11% faster than the 3080.
 
The 6900 XT is still going to be an expensive card like the RTX 3080, probably $649 or $599. I may still go with the 6800 XT for the lower power draw.
 
I'm reeeeaally curious if AMD is going to do a clocked-to-the-moon liquid cooled Big Navi. My dubious math suggests that 80 CU RDNA2 needs to boost to around 2000-2200 MHz to match the 3080. Reaching the 3090 would likely require 2400-2500 MHz, which is pretty extreme. AMD seems to be going the narrow-fast route, so how high can RDNA2 clock? NV throwing efficiency to the wind helps AMD here, I think.

It looks like the 6800 XT and 6900 XT will be in very close competition with the 3070 and 3080 at similar price points, but I think there is also a possibility of a highly binned halo part, something like 2.3 GHz at 350 W with a 240 mm rad, slower than the 3090 but faster than the 3080, for $999-$1199.

I have a lot of questions about memory bandwidth and the cache, and there are not many answers. AMD has done a great job minimizing hype this time around; it's a nice contrast to Vega ("poor Volta", are you kidding me?) and I appreciate the change in tone. AMD hasn't promised the moon and stars, so if they "only" match the 3080 (which seems likely) they won't have under-delivered.

I can't wait for chiplet GPUs (RDNA3 maybe?), that's when things are gonna get real... but that's another topic.
 
I'm interested to see your math, because a really simple estimate puts an 80 CU, 2050 MHz Navi 21 well north of a 3080 even with no architectural improvements. (80/40 × 2050/1905 ≈ 2.15, i.e. 115% faster than a 5700 XT, and the 3080 is right around 95% faster than the 5700 XT at 4K.) We don't know how the card is going to cope with the relatively lower memory bandwidth, but it has some room to spare.
 
I'm interested to see your math, because a really simple estimate puts an 80 CU, 2050 MHz Navi 21 well north of a 3080 even with no architectural improvements. (80/40 × 2050/1905 ≈ 2.15, i.e. 115% faster than a 5700 XT, and the 3080 is right around 95% faster than the 5700 XT at 4K.) We don't know how the card is going to cope with the relatively lower memory bandwidth, but it has some room to spare.
my crazy person math is based on a lot of assumptions and extrapolation but it goes something like this... (brace yourself for weirdness)

5700XT -> 2070S = 100% core count, 100% clock speed, 100% performance
(1÷1)÷1 = 1
core scaling Turing to RDNA1 = 100%

(note: I'm assuming 1800 MHz for all Ampere cards, since that's what the reviews seem to find in actual use)

3080 -> 2080Ti = 200% core count, 106% clock speed, 140% performance
(2÷1.06)÷1.4 = 1.35
core scaling Turing to Ampere = 135%

3070 -> 2080Ti = 135% core count, 106% clock speed, 100% performance (allegedly)
(1.35÷1.06)÷1 = 1.27
core scaling Turing to Ampere = 127%

so let's say Turing (and by extension, RDNA) would perform ~130% of Ampere at same clocks and core count. Since RDNA2 alleges 10% IPC boost, let's up the scaling factor to 143%

80 CU 2050 MHz RDNA2 -> 3080 = 59% core count, 114% clock speed, 143% scaling factor
(0.59×1.14)×1.43 = 0.96
speculative performance = 96%, which is close enough and I'm assuming a w i d e margin of error on my ridiculous calculations

bonus round...

80 CU 2300 MHz RDNA2 watercooled dream card -> 3090
49% core count, 128% clock, 143% scaling factor
(0.49×1.28)×1.43 = 0.9
speculative performance = 90%

extra bonus round

60 CU 2050 MHz RDNA2 -> 3070
65% core count, 114% clock, 143% scaling
(0.65×1.14)×1.43 = 1.06
speculative performance 106%

TL;DR round

40 CU 2050 MHz RDNA2 -> 3060 Ti (strange thing, a *104 die on a **60 card?)
53% core count, 114% clock, 143% scaling
(0.53×1.14)×1.43 = 0.86
speculative performance 86%
<EDIT: there is a rumor that 40 CU RDNA2 might clock as high as 2500 MHz, which would put it at 105% of the 3060 Ti>

congrats if you made it to the end of this mess.
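if anyone wants to poke at the numbers, here's the same napkin math in a few lines of Python (same assumptions as above: 1800 MHz effective Ampere clocks and my guessed ~143% scaling factor, so treat the outputs as pure speculation):

Code:
# The napkin math above, in Python. Clocks and the scaling factor are my assumptions.
AMPERE_CLOCK = 1800   # assumed effective MHz for all Ampere cards
SCALING = 1.43        # guessed RDNA2-vs-Ampere per-core factor

def rdna2_vs_ampere(rdna2_shaders, rdna2_mhz, ampere_shaders):
    """Speculative performance of an RDNA2 part relative to an Ampere part."""
    return (rdna2_shaders / ampere_shaders) * (rdna2_mhz / AMPERE_CLOCK) * SCALING

print(f"80 CU @ 2050 vs 3080:    {rdna2_vs_ampere(5120, 2050, 8704):.0%}")   # ~96%
print(f"80 CU @ 2300 vs 3090:    {rdna2_vs_ampere(5120, 2300, 10496):.0%}")  # ~89%
print(f"60 CU @ 2050 vs 3070:    {rdna2_vs_ampere(3840, 2050, 5888):.0%}")   # ~106%
print(f"40 CU @ 2050 vs 3060 Ti: {rdna2_vs_ampere(2560, 2050, 4864):.0%}")   # ~86%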

PREDICTIONS

$1499 3090
$999 6950XT (probably doesn't exist)
$899 3080Ti (72SM, 20GB, 2021?)
$699 3080
$649 6900XT
$549 3070 Ti (48SM, 16GB, 2021?)
$499 6800XT
$479 3070 (surprise price drop!)
$399 3060Ti
$379 6700XT
$349 3060
$329 6700
 
If it really is 256-bit GDDR6 then it will be very close to the 5700 XT in memory bandwidth. Yes, I know, cache improvements and magic sauce, but I find it hard to believe that will be the equal of a doubling of bandwidth in all situations, particularly at 4K.
 
If it really is 256-bit GDDR6 then it will be very close to the 5700 XT in memory bandwidth. Yes, I know, cache improvements and magic sauce, but I find it hard to believe that will be the equal of a doubling of bandwidth in all situations, particularly at 4K.
Well, at 4K you have the same amount of geometry (triangles), but each triangle gets more pixels to shade, and each pixel has to be calculated. Since those shaders and textures will more likely be in the cache, it could have magnitudes more bandwidth than the GA102 cards' memory bandwidth for those operations, so we just have to see how this pans out. At lower resolutions the cache will have way lower latency than GDDR6X, yet CPUs may limit both GPUs. Zen 3 may also throw a good twist into lower-resolution, low-latency, high-FPS gaming. Performance out of the cache will be leaps and bounds faster than GDDR6X, while fetches that miss and have to go out to GDDR6 on RDNA2 will be slower. Anything that greatly exceeds the cache's ability to keep most data local will slow down the lower-bandwidth card. It will be very interesting when a wide set of tests are done, and to see how well games can optimize for the cache over time.
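A toy model of what that cache could mean for effective bandwidth; the hit rates, cache bandwidth, and memory speeds below are pure guesses, just to show the shape of the argument:

Code:
# Toy effective-bandwidth model: weighted mix of cache hits and VRAM fetches.
# Every number here is a guess for illustration, not a leaked spec.
vram_bw  = 512    # GB/s, e.g. 256-bit @ 16 Gbps
cache_bw = 2000   # GB/s, assumed on-die cache bandwidth
ga102_bw = 760    # GB/s, 3080's GDDR6X, for comparison

for hit_rate in (0.4, 0.6, 0.8):
    effective = hit_rate * cache_bw + (1 - hit_rate) * vram_bw
    print(f"hit rate {hit_rate:.0%}: ~{effective:.0f} GB/s effective vs 3080's {ga102_bw} GB/s")
# The whole argument hinges on the hit rate staying high at 4K working-set sizes.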
 
5700XT -> 2070S = 100% core count, 100% clock speed, 100% performance
(1÷1)÷1 = 1
core scaling Turing to RDNA1 = 100%


Remember the RX 5700 has fewer CUs but runs within 5% of the XT with the same BIOS, so it would be the better card to use as a scale for RDNA 1 clock speed and CU count.
 
5700XT -> 2070S = 100% core count, 100% clock speed, 100% performance
(1÷1)÷1 = 1
core scaling Turing to RDNA1 = 100%

Also 100% bandwidth, so we can't know for sure that the core counts are the main reason the performance is similar.
 
my crazy person math is based on a lot of assumptions and extrapolation but it goes something like this... (brace yourself for weirdness)

5700XT -> 2070S = 100% core count, 100% clock speed, 100% performance
(1÷1)÷1 = 1
core scaling Turing to RDNA1 = 100%

(note: I'm assuming 1800 MHz for all Ampere cards, since that's what the reviews seem to find in actual use)

3080 -> 2080Ti = 200% core count, 106% clock speed, 140% performance
(2÷1.06)÷1.4 = 1.35
core scaling Turing to Ampere = 135%

3070 -> 2080Ti = 135% core count, 106% clock speed, 100% performance (allegedly)
(1.35÷1.06)÷1 = 1.27
core scaling Turing to Ampere = 127%

so let's say Turing (and by extension, RDNA) would perform ~130% of Ampere at same clocks and core count. Since RDNA2 alleges 10% IPC boost, let's up the scaling factor to 143%

80 CU 2050 MHz RDNA2 -> 3080 = 59% core count, 114% clock speed, 143% scaling factor
(0.59×1.14)×1.43 = 0.96
speculative performance = 96%, which is close enough and I'm assuming a w i d e margin of error on my ridiculous calculations

bonus round...

80 CU 2300 MHz RDNA2 watercooled dream card -> 3090
49% core count, 128% clock, 143% scaling factor
(0.49×1.28)×1.43 = 0.9
speculative performance = 90%

extra bonus round

60 CU 2050 MHz RDNA2 -> 3070
65% core count, 114% clock, 143% scaling
(0.65×1.14)×1.43 = 1.06
speculative performance 106%

TL;DR round

40 CU 2050 MHz RDNA2 -> 3060 Ti (strange thing, a *104 die on a **60 card?)
53% core count, 114% clock, 143% scaling
(0.53×1.14)×1.43 = 0.86
speculative performance 86%
<EDIT: there is a rumor that 40 CU RDNA2 might clock as high as 2500 MHz, which would put it at 105% of the 3060 Ti>

congrats if you made it to the end of this mess.

PREDICTIONS

$1499 3090
$999 6950XT (probably doesn't exist)
$899 3080Ti (72SM, 20GB, 2021?)
$699 3080
$649 6900XT
$549 3070 Ti (48SM, 16GB, 2021?)
$499 6800XT
$479 3070 (surprise price drop!)
$399 3060Ti
$379 6700XT
$349 3060
$329 6700

Could you slap a performance index next to the prices? I'd be interested to see where the predicted performance shakes out.
 
Since pricing rumors have been in the $599 range, I would imagine the 6900 XT will come in within 5% of the 3080, but at much lower power consumption, which will be its main selling point. No one cares about ray tracing.
 
If it really is 256-bit GDDR6 then it will be very close to the 5700 XT in memory bandwidth. Yes, I know, cache improvements and magic sauce, but I find it hard to believe that will be the equal of a doubling of bandwidth in all situations, particularly at 4K.

Depends how fast the VRAM is clocked, since the 5700 XT is only 14 Gbps. 18 Gbps GDDR6 has been around since 2018, and at that speed on a 256-bit bus you are looking at just shy of 600 GB/s, so it's about the same bandwidth as the 2080 Ti. It's all gonna ride on how well this 128 MB cache works to make up the difference with the 3080.
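For reference, the raw-bandwidth arithmetic (the 16 and 18 Gbps Big Navi rows are speculative; the rest are shipping specs):

Code:
# Raw memory bandwidth: (bus width in bits / 8) * data rate in Gbps -> GB/s
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(f"5700 XT, 256-bit @ 14 Gbps:  {bandwidth_gbs(256, 14):.0f} GB/s")  # 448
print(f"Big Navi, 256-bit @ 16 Gbps: {bandwidth_gbs(256, 16):.0f} GB/s")  # 512 (speculative)
print(f"Big Navi, 256-bit @ 18 Gbps: {bandwidth_gbs(256, 18):.0f} GB/s")  # 576 (speculative)
print(f"2080 Ti, 352-bit @ 14 Gbps:  {bandwidth_gbs(352, 14):.0f} GB/s")  # 616
print(f"3080, 320-bit @ 19 Gbps:     {bandwidth_gbs(320, 19):.0f} GB/s")  # 760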
 
Would it make sense to buy a Big Navi card if I have a 1440p 144 Hz G-Sync monitor (not FreeSync compatible)? The monitor (ViewSonic XG2703-GS IPS) is only 2 years old, so I have no intention of buying a new one anytime soon.
 
Would it make sense to buy a Big Navi card if I have a 1440p 144 Hz G-Sync monitor (not FreeSync compatible)? The monitor (ViewSonic XG2703-GS IPS) is only 2 years old, so I have no intention of buying a new one anytime soon.
What GPU do you have currently?
 