Confirmed: AMD's Big Navi launch to disrupt 4K gaming

exlink

Supreme [H]ardness
Joined
Dec 16, 2006
Messages
5,041
Doesn't help that directly comparing numbers like that is a really bad idea. Way too many variables.
110% agree. That was the point of my post. One review, the one Master_shake_ posted, had a completely different frame rate than the one I posted. It's why I recommended waiting for reviews.
 

Ready4Dis

2[H]4U
Joined
Nov 4, 2015
Messages
2,331
I'm excited: 3080 performance with 16GB of RAM at a good amount less power would be nice. Then AIBs will produce cards next year with higher power limits and probably reach toward 3090 numbers, hopefully with just a bit of binning and a slightly higher TDP. I'm hoping this doesn't more than double the price ;).
 

Axman

Supreme [H]ardness
Joined
Jul 13, 2005
Messages
5,992
then AIBs will produce cards next year with higher power limits and probably reach toward 3090 numbers
With Nvidia blowing the top off the power ceiling for GPUs, I'll be surprised if someone doesn't make a 6000-series with an AIO clocked like mad. That's just a gut feeling, but it's based on the rumors that the lower-end Big Navi cards will have higher clocks than the top tier.

I'm also looking forward to ending the debate on what is, isn't, or might be the difference between Big Navi and Biggest Navi, and what parts they are actually talking about.
 

Wat

Weaksauce
Joined
Jun 23, 2019
Messages
83
Am I the only one who cares if Big Navi uses the new 12-pin power plug? If AMD ever wants to be viewed as an innovator, they've got to adopt new ideas.

Honestly, I hope they use a 13-pin connector, since they're announcing near Halloween.
 

exlink

Supreme [H]ardness
Joined
Dec 16, 2006
Messages
5,041
With Nvidia blowing the top off the power ceiling for GPUs, I'll be surprised if someone doesn't make a 6000-series with an AIO clocked like mad. That's just a gut feeling, but it's based on the rumors that the lower-end Big Navi cards will have higher clocks than the top tier.

I'm also looking forward to ending the debate on what is, isn't, or might be the difference between Big Navi and Biggest Navi, and what parts they are actually talking about.
I just want to know if I should return my 3090 or not. :ROFLMAO: Return window ends on Oct 30th.
 

AVATARAT

Weaksauce
Joined
Jun 16, 2020
Messages
105
Lisa Su showed Big Navi in her hand, but which card were those results from? :)
So there's a possibility of even more GPU power.
 

exlink

Supreme [H]ardness
Joined
Dec 16, 2006
Messages
5,041
Iunno. Can you get more than you paid for it on Ebay?

Don't tell anyone about it, tho...
I’d rather just return it. Don’t want to gamble $1600+ with some rando on the internet that will open a dispute and send me back their old GPU instead. Easier to just return it unopened to Newegg and pay the $16 return shipping.

Guess I’ll either be shipping it back or opening it on Oct 29th. Luckily there isn’t shit to play right now that my 1080 Ti can’t handle.
 

Wat

Weaksauce
Joined
Jun 23, 2019
Messages
83
It’s also worth noting that all three games that Su showed off in the Radeon RX 6000 teaser run on DirectX 12, rather than the more common DX11 graphics API.
All the best games use DX9.0c
Why have the reviewers been silent and complicit??
It's like some sort of Freemason cover-up.
 

cybereality

Supreme [H]ardness
Joined
Mar 22, 2008
Messages
6,259
Not to get side-tracked, but high-priced hot items are eBay scammers' favorite thing.

I tried to sell a 2080 Ti when it was hot and the first two buyers didn't pay. Luckily the third did, but it makes me wary of eBay.
 

TheBuzzer

HACK THE WORLD!
Joined
Aug 15, 2005
Messages
12,678
Master_shake_
I saw that you posted a benchmark comparing the new Nvidia graphics card. Is that with their DLSS AI thing on?

Maybe ATI could be even faster if they also did some kind of AI thing too.
 

Teenyman45

2[H]4U
Joined
Nov 29, 2010
Messages
2,523
With Nvidia blowing the top off the power ceiling for GPUs, I'll be surprised if someone doesn't make a 6000-series with an AIO clocked like mad. That's just a gut feeling, but it's based on the rumors that the lower-end Big Navi cards will have higher clocks than the top tier.

I'm also looking forward to ending the debate on what is, isn't, or might be the difference between Big Navi and Biggest Navi, and what parts they are actually talking about.
I don't want an AIO; just give me one with a factory-installed full-cover waterblock, some decent TIM, and thermal pads. The AIB can save the cost of the pump and radiator, and I can have a card I'd prefer.
 

ZeroBarrier

Limp Gawd
Joined
Mar 19, 2011
Messages
256
I'm excited: 3080 performance with 16GB of RAM at a good amount less power would be nice. Then AIBs will produce cards next year with higher power limits and probably reach toward 3090 numbers, hopefully with just a bit of binning and a slightly higher TDP. I'm hoping this doesn't more than double the price ;).
No offense, but AMD has consistently been releasing GPUs that consume more power but provide lower performance than their competition. So why would you even think that they're going to magically require less power to still come second best this time around?

Now don't get me wrong, Nvidia certainly increased the power requirements of the 30 series. But it seems strange to me that people keep thinking AMD will magically produce a GPU that can somehow beat their competition. Competition that, mind you, has been consistently pushing the envelope in GPU technology and actually moving the industry forward.

I wait until independent reviews are up and verifiable before even thinking about stating something like that. I'm not about to delude myself just to get let down yet again. It's happened too many times now; I've learned my lesson, and I just wish more people had as well.
 

DooKey

[H]F Junkie
Joined
Apr 25, 2001
Messages
9,537
No offense, but AMD has consistently been releasing GPUs that consume more power but provide lower performance than their competition. So why would you even think that they're going to magically require less power to still come second best this time around?

Now don't get me wrong, Nvidia certainly increased the power requirements of the 30 series. But it seems strange to me that people keep thinking AMD will magically produce a GPU that can somehow beat their competition. Competition that, mind you, has been consistently pushing the envelope in GPU technology and actually moving the industry forward.

I wait until independent reviews are up and verifiable before even thinking about stating something like that. I'm not about to delude myself just to get let down yet again. It's happened too many times now; I've learned my lesson, and I just wish more people had as well.

I believe AMD will have lower power consumption at near same performance simply because they are on a better node. See Intel 14nm+++ versus TSMC 7nm for an example.
 

jeremyshaw

[H]F Junkie
Joined
Aug 26, 2009
Messages
12,437
I believe AMD will have lower power consumption at near same performance simply because they are on a better node. See Intel 14nm+++ versus TSMC 7nm for an example.
The only problem is, AMD has been on a better node since 2018 (with the MI50 - the datacenter version of the Vega 7), which is at least two nodes ahead of TSMC 20nm SuperFin (sorry, "12FFN"; no, I did not mean "16nm FinFET+" when I said "20nm SuperFin"). 10nm was TSMC's first meaningful HPC density increase since their dismal 20nm node, and 7nm was another (new age) node ahead of TSMC 10nm. Afterwards, AMD released their 2nd generation 7nm GPU, the 5700XT, and it still couldn't do the magic. 3rd time is the charm? Though the 3rd time was tied to a SoC, so we'll have to really wait for the 4th attempt, and hope AMD Radeon's 4th utilization of TSMC 7nm can finally overtake their competition in perf/power.

So there were 2 years where AMD was 2 nodes ahead, and AMD still was not able to get better perf per watt, but sure, keep stoking the coal in the hypetrain.

Is the past a guaranteed indicator of the future? Of course not. However, there have been consistent trends that have defied logic, yet stayed consistent.

While I, too, think AMD should be able to overtake Nvidia on perf/power with Big Navi, it's not something I'm getting excited over, nor something I would even be willing to bet on.
 

variant

Gawd
Joined
Feb 17, 2008
Messages
866
The only problem is, AMD has been on a better node since 2018 (with the MI50 - the datacenter version of the Vega 7), which is at least two nodes ahead of TSMC 20nm SuperFin (sorry, "12FFN"; no, I did not mean "16nm FinFET+" when I said "20nm SuperFin"). 10nm was TSMC's first meaningful HPC density increase since their dismal 20nm node, and 7nm was another (new age) node ahead of TSMC 10nm. Afterwards, AMD released their 2nd generation 7nm GPU, the 5700XT, and it still couldn't do the magic. 3rd time is the charm? Though the 3rd time was tied to a SoC, so we'll have to really wait for the 4th attempt, and hope AMD Radeon's 4th utilization of TSMC 7nm can finally overtake their competition in perf/power.

So there were 2 years where AMD was 2 nodes ahead, and AMD still was not able to get better perf per watt, but sure, keep stoking the coal in the hypetrain.

Is the past a guaranteed indicator of the future? Of course not. However, there have been consistent trends that have defied logic, yet stayed consistent.

While I, too, think AMD should be able to overtake Nvidia on perf/power with Big Navi, it's not something I'm getting excited over, nor something I would even be willing to bet on.

Vega was always a compute card being marketed as a graphics card which is why CDNA is based off Vega.
 

cybereality

Supreme [H]ardness
Joined
Mar 22, 2008
Messages
6,259
Radeon VII was actually a nice card for 4K gaming. I ran one for a bit on a Samsung TV and it held up well (probably 1080 Ti quality or a little better).
 

ZeroBarrier

Limp Gawd
Joined
Mar 19, 2011
Messages
256
The only problem is, AMD has been on a better node since 2018 (with the MI50 - the datacenter version of the Vega 7), which is at least two nodes ahead of TSMC 20nm SuperFin (sorry, "12FFN"; no, I did not mean "16nm FinFET+" when I said "20nm SuperFin"). 10nm was TSMC's first meaningful HPC density increase since their dismal 20nm node, and 7nm was another (new age) node ahead of TSMC 10nm. Afterwards, AMD released their 2nd generation 7nm GPU, the 5700XT, and it still couldn't do the magic. 3rd time is the charm? Though the 3rd time was tied to a SoC, so we'll have to really wait for the 4th attempt, and hope AMD Radeon's 4th utilization of TSMC 7nm can finally overtake their competition in perf/power.

So there were 2 years where AMD was 2 nodes ahead, and AMD still was not able to get better perf per watt, but sure, keep stoking the coal in the hypetrain.

Is the past a guaranteed indicator of the future? Of course not. However, there have been consistent trends that have defied logic, yet stayed consistent.

While I, too, think AMD should be able to overtake Nvidia on perf/power with Big Navi, it's not something I'm getting excited over, nor something I would even be willing to bet on.
Someone that actually pays attention. It's people like you that restore a modicum of hope.
I believe AMD will have lower power consumption at near same performance simply because they are on a better node. See Intel 14nm+++ versus TSMC 7nm for an example.
As stated by jeremyshaw, they've been on a "better" process node already and still couldn't hang. So to believe otherwise until there is verifiable proof is foolish at best, ignorant at worst.
 

Ready4Dis

2[H]4U
Joined
Nov 4, 2015
Messages
2,331
Someone that actually pays attention. It's people like you that restore a modicum of hope.

As stated by jeremyshaw, they've been on a "better" process node already and still couldn't hang. So to believe otherwise until there is verifiable proof is foolish at best, ignorant at worst.
Guess we'll see; it's hard to say for sure. Doesn't seem NVIDIA made much progress with their node change (perf/watt that is). Anyways, I agree: without proof we don't know for sure, all we have is rumors at this point. Current rumors have power numbers under 300W for 3080-level performance. While they aren't confirmed, that's the only indication I have to go on right now. It could be completely wrong or it could be spot on, who knows.

What we do know is that the 6900 XT, or whatever they showed, has dual 8-pin power connectors, which means 150W + 150W + 75W from the PCIe slot. They had some issues with getting too close to the PCIe slot limit before (it was spiking slightly over spec and they had to lower the power draw), so I doubt they'll be pushing that limit again. A safe max bet would be 350W, which puts it 25-30W worse than the 3080 in the worst case. Best case is the 275W rumors being true, which puts it 45-50W better than Nvidia. In reality it'll probably land in the 300-325W range, putting it either slightly better or about the same, which in itself would be a pretty impressive gain from one generation to the next. As you stated though, we don't know for sure and won't until we get independent reviews.
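To make the connector arithmetic above concrete, here is a minimal Python sketch; the 75W slot and 150W 8-pin figures are the usual PCIe spec limits, while the board-power values in the loop are the unconfirmed rumor numbers from this thread, not real specifications.

```python
# Hypothetical sketch of the connector-budget math above: spec ceilings per
# connector, compared against rumored Big Navi board-power figures.
PCIE_SLOT_W = 75    # PCIe x16 slot limit per spec
EIGHT_PIN_W = 150   # 8-pin PCIe connector limit per spec

def spec_ceiling(num_8pin: int) -> int:
    """Maximum board power allowed by spec for a given connector layout."""
    return PCIE_SLOT_W + num_8pin * EIGHT_PIN_W

RTX_3080_TBP_W = 320  # Nvidia's stated total board power

print(f"Dual 8-pin spec ceiling: {spec_ceiling(2)} W")
for rumored_w in (275, 300, 325, 350):  # unconfirmed rumor range from this thread
    print(f"{rumored_w} W -> {RTX_3080_TBP_W - rumored_w:+} W vs the RTX 3080's "
          f"{RTX_3080_TBP_W} W (positive = draws less than the 3080)")
```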

"Better" process node doesn't automatically mean a better product. Their R&D budget and number of engineers are a crap ton less than Nvidia's, so a lot of their automated 7nm routing that could have been hand-tweaked wasn't, leaving them plenty of opportunities to wring more performance and lower power out of their products. They are finally getting their budgets into better shape and have learned a lot more about the process/node itself. I'm not going to say it's a done deal or anything, just that it's not outside the realm of possibility.
 

Meeho

Supreme [H]ardness
Joined
Aug 16, 2010
Messages
5,128
No offense, but AMD has consistently been releasing GPUs that consume more power but provide lower performance than their competition. So why would you even think that they're going to magically require less power to still come second best this time around?

Now don't get me wrong. Nvidia certainly increased the power requirements of the 30 series for sure. But it seems strange to me that people keep thinking that AMD will magically produce a GPU that can somehow beat their competition. Competition, that mind you, that has been consistently pushing the envelope in GPU technology and actually moving the industry forward.
Because Nvidia has been stagnating on efficiency while AMD is making 50% perf-per-watt improvements. RDNA2 is their Zen of GPUs. Was Zen magic, or can dominant companies be overtaken?
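For what that claim would imply on paper, here is a minimal sketch; the +50% figure is AMD's own marketing claim, and the RX 5700 XT baseline (225W board power, performance normalized to 1.0) is an assumption added purely for scale, so the outputs are hypothetical.

```python
# Toy extrapolation of AMD's claimed +50% perf/W for RDNA2 (a marketing claim,
# not an independent measurement). Baseline is an RX 5700 XT at 225 W with its
# performance normalized to 1.0; every output is hypothetical.
BASELINE_POWER_W = 225.0
BASELINE_PERF = 1.0
CLAIMED_PPW_GAIN = 1.5  # AMD's stated RDNA1 -> RDNA2 perf/W improvement

def projected_perf(board_power_w: float) -> float:
    """Relative performance vs. the RX 5700 XT at a given board power."""
    return (BASELINE_PERF / BASELINE_POWER_W) * CLAIMED_PPW_GAIN * board_power_w

for watts in (225, 275, 300):
    print(f"{watts} W -> ~{projected_perf(watts):.2f}x RX 5700 XT")
```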
 

Meeho

Supreme [H]ardness
Joined
Aug 16, 2010
Messages
5,128
Am i the only one who cares if big navi uses the new 12 pin power plug? If amd ever wants to be viewed as an innovator, they got to adopt new ideas.
That's not innovation, that's getting desperate and pushing your GPU to the far end of the efficiency curve to squeeze the last ounce of performance on an old node, skyrocketing the power consumption.
 

kirbyrj

Fully [H]
Joined
Feb 1, 2005
Messages
26,752
Because Nvidia has been stagnating on efficiency while AMD is making 50% perf-per-watt improvements. RDNA2 is their Zen of GPUs. Was Zen magic, or can dominant companies be overtaken?

It's like people just ignore the Ampere power numbers. They are monstrous. The RTX 2080 had a TDP of 225W; the RTX 3080 is 320W. All that "extra performance" comes at a greatly inflated power draw.

AMD appears to be making a concerted effort in the performance-per-watt category. Whether or not that pans out is a different story.
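As a rough illustration of that point, here is a back-of-the-envelope perf/W sketch; the TDPs are the ones quoted above, but the ~1.65x performance ratio is an assumed, illustrative figure, not a number from this thread (swap in whatever review data you trust).

```python
# Back-of-the-envelope perf/W check for the TDP numbers above. The TDPs come
# from the post; the 1.65x performance ratio is an illustrative assumption.
def relative_perf_per_watt(perf_ratio: float, old_tdp_w: float, new_tdp_w: float) -> float:
    """Perf/W of the new card relative to the old one."""
    return perf_ratio / (new_tdp_w / old_tdp_w)

gain = relative_perf_per_watt(perf_ratio=1.65, old_tdp_w=225, new_tdp_w=320)
print(f"RTX 3080 vs RTX 2080: ~{(gain - 1) * 100:.0f}% better perf/W")  # ~16% with these inputs
```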
 

kac77

2[H]4U
Joined
Dec 13, 2008
Messages
2,562
It's like people just ignore the Ampere power numbers. They are monstrous. The RTX 2080 had a TDP of 225W; the RTX 3080 is 320W. All that "extra performance" comes at a greatly inflated power draw.

AMD appears to be making a concerted effort in the performance-per-watt category. Whether or not that pans out is a different story.
That's pretty much where we are. The same thing happened when AMD released Zen 2: all of a sudden, power didn't mean anything. Hopefully AMD didn't drop the ball this time. We shall see.
 

c3k

2[H]4U
Joined
Sep 8, 2007
Messages
2,207
... Doesn't seem NVIDIA made much progress with their node change (perf/watt that is). ...

AdoredTV (alright, guys: I enjoy and learn from his commentary) has an Ampere video (links were embedded here). He discusses how it is NOT a new node, just marketing fluff. (That was my takeaway.) FWIW.

Sorry for posting two links. I'm not sure which one has the details.
 

Teenyman45

2[H]4U
Joined
Nov 29, 2010
Messages
2,523
AdoredTV (alright, guys: I enjoy and learn from his commentary) has an Ampere video (links were embedded here). He discusses how it is NOT a new node, just marketing fluff. (That was my takeaway.) FWIW.

Sorry for posting two links. I'm not sure which one has the details.

What I still don't understand is how Ampere cards can have so, so many more shaders without having much greater improvement in performance unless the individual shaders have drastically lower IPC.
 

sirsad

Limp Gawd
Joined
Mar 12, 2006
Messages
245
What I still don't understand is how Ampere cards can have so, so many more shaders without having much greater improvement in performance unless the individual shaders have drastically lower IPC.
A site did testing on this using non-gaming apps and the performance was double. The conclusion was that it's a driver/game/API issue that can't take advantage of the cores. I thought it was WCCFTech but I can't find it right now.

EDIT: Found it here. Not a perfect analysis, but it's pretty clear that games don't use what the 30 series offers while some apps do.
 
Last edited:

sabrewolf732

Supreme [H]ardness
Joined
Dec 6, 2004
Messages
4,436
Someone that actually pays attention. It's people like you that restore a modicum of hope.

As stated by jeremyshaw, they've been on a "better" process node already and still couldn't hang. So to believe otherwise until there is verifiable proof is foolish at best, ignorant at worst.

It's likely coming from the idea that the XSX/PS5 offer ~2080 Super levels of performance at ~300 watts of total system consumption. Not saying it's accurate or whatever; I'm just guessing this is where the efficiency gains are being extrapolated from.
 

Meeho

Supreme [H]ardness
Joined
Aug 16, 2010
Messages
5,128
What I still don't understand is how Ampere cards can have so, so many more shaders without having much greater improvement in performance unless the individual shaders have drastically lower IPC.
Because they only hit the full FP32 rate when there is no integer work; INT32 runs on half of the execution units, so it eats into FP32 throughput.
 
Last edited:

Ready4Dis

2[H]4U
Joined
Nov 4, 2015
Messages
2,331
I have seen this mentioned more than once - where is the evidence for this?
It's just speculation, as with everything at this point. A few rumors that are just guesses; no evidence, so take it with an extra-large handful of salt.
 

Teenyman45

2[H]4U
Joined
Nov 29, 2010
Messages
2,523
Because they only hit the full FP32 rate when there is no integer work; INT32 runs on half of the execution units, so it eats into FP32 throughput.

Okay, I was just looking at a Hardware Times article on INT32, now that you mentioned the half-rate execution.

https://www.hardwaretimes.com/nvidi...hy-the-rtx-3080-is-limited-to-10gb-of-memory/

The article was indicating: "As a result of this new partitioning, each Ampere SM partition can execute either 32 FP32 instructions per clock or 16 FP32 and 16 INT32 instructions per cycle. You’re essentially trading integer performance for twice the floating-point capability. Fortunately, as the majority of graphics workloads are FP32, this should work towards NVIDIA’s advantage." But I must still be misunderstanding something, because if the majority of workloads are FP32 focused, then the sheer number of cores should be able to complete the workload much faster...
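One way to see why doubling the FP32 count doesn't double frame rates is to model the issue rule from that quote directly. This is a minimal toy sketch, and the INT32 cycle fractions are assumptions chosen for illustration, not measured workload data.

```python
# Toy model of the Ampere SM-partition scheduling the quoted article describes:
# each clock, a partition issues either 32 FP32 ops or 16 FP32 + 16 INT32 ops.
# The INT32 mix values below are assumed workload fractions.
def avg_fp32_per_clock(int_cycle_fraction: float) -> float:
    """Average FP32 ops per partition per clock when a given fraction of
    cycles has INT32 work sharing the second datapath."""
    return (1.0 - int_cycle_fraction) * 32 + int_cycle_fraction * 16

for mix in (0.0, 0.35, 0.5):
    fp32 = avg_fp32_per_clock(mix)
    # A Turing-style partition sustains 16 FP32/clk alongside its INT32 pipe.
    print(f"INT32 on {mix:.0%} of cycles -> {fp32:.1f} FP32/clk "
          f"({fp32 / 16:.2f}x a 16-wide FP32 partition)")
```

With no INT32 work the partition really is 2x as wide, but as soon as a realistic share of cycles carries integer instructions the effective advantage shrinks, which is roughly the gap between the paper shader counts and the measured gaming uplift.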
 