Confirmed: AMD's Big Navi launch to disrupt 4K gaming

I dunno. Can you get more than you paid for it on eBay?

Don't tell anyone about it, tho...
I’d rather just return it. I don’t want to gamble $1600+ with some rando on the internet who might open a dispute and send me back their old GPU instead. Easier to just return it unopened to Newegg and pay the $16 return shipping.

Guess I’ll either be shipping it back or opening it on Oct 29th. Luckily there isn’t shit to play right now that my 1080 Ti can’t handle.
 
It’s also worth noting that all three games that Su showed off in the Radeon RX 6000 teaser run on DirectX 12, rather than the more common DX11 graphics API.
All the best games use DX9.0c
Why have the reviewers been silent and complicit??
It's like some sort of Freemason cover-up.
 
Not to get sidetracked, but high-priced hot items are eBay scammers' favorite thing.

I tried to sell a 2080 Ti when it was hot and the first two buyers didn't pay. Luckily the third did, but it makes me wary of eBay.
 
Master_shake_
I saw that you posted a benchmark comparing the new Nvidia graphics card. Is that with their DLSS AI thing on?

Maybe ATI could be even faster if they also do some kind of AI thing too.
 
With Nvidia blowing the top off the power ceiling for GPUs, I'll be surprised if someone doesn't make a 6000-series with an AIO clocked like mad. That's just a gut feeling, but it's based on the rumors that the lower-end Big Navi cards will have higher clocks than the top tier.

I'm also looking forward to ending the debate on what is, isn't, or might be the difference between Big Navi and Biggest Navi, and what parts they are actually talking about.
I don't want an AIO, just give me one with a factory-installed full-cover waterblock and some decent TIM and thermal pads. The AIB can save the cost of the pump and radiator, and I can have a card I'd prefer.
 
I'm excited: 3080 performance with 16GB of RAM at a good amount less power would be nice. Then AIBs will produce cards next year with higher power limits and probably reach toward 3090 numbers with just a bit of binning and a slightly higher TDP. I'm hoping that doesn't more than double the price ;).
No offense, but AMD has consistently been releasing GPUs that consume more power but provide lower performance than their competition. So why would you even think that they're going to magically require less power to still come second best this time around?

Now don't get me wrong, Nvidia certainly increased the power requirements of the 30 series. But it seems strange to me that people keep thinking AMD will magically produce a GPU that can somehow beat their competition. Competition that, mind you, has been consistently pushing the envelope in GPU technology and actually moving the industry forward.

I wait until independent reviews are up and are verifiable before even thinking about stating something like that. I'm not about to delude myself just to get let down yet again. It's been too many times now, I know I've learned my lesson, I just wish more people learned as well.
 
I believe AMD will have lower power consumption at near same performance simply because they are on a better node. See Intel 14nm+++ versus TSMC 7nm for an example.
 
The only problem is that AMD has been on a better node since 2018 (with the MI50, the datacenter version of the Radeon VII), which is at least two nodes ahead of TSMC "20nm SuperFin" (sorry, "12FFN"; no, I did not mean "16nm FinFET+" when I said "20nm SuperFin"). 10nm was TSMC's first meaningful HPC density increase since their dismal 20nm node, and 7nm was another (new-age) node ahead of TSMC 10nm. Afterwards, AMD released their 2nd-generation 7nm GPU, the 5700 XT, and it still couldn't do the magic. Third time's the charm? Though the third time was tied to an SoC, so we'll really have to wait for the fourth attempt and hope AMD Radeon's fourth use of TSMC 7nm can finally overtake the competition in perf/power.

So there were two years where AMD was two nodes ahead and still couldn't get better perf per watt, but sure, keep shoveling coal into the hype train.

Is the past a guaranteed indicator of the future? Of course not. However, there have been trends that defied logic yet stayed consistent.

While I, too, think AMD should be able to overtake Nvidia on perf/power with Big Navi, it's not something I'm getting excited over, nor something I would even be willing to bet on.
 
Vega was always a compute card being marketed as a graphics card, which is why CDNA is based off Vega.
 
Radeon VII was actually a nice card for 4K gaming. Ran that for a bit on a Samsung TV and it held up well (probably 1080 Ti quality or a little better).
 
Someone who actually pays attention. It's people like you who restore a modicum of hope.

As stated by jeremyshaw, they've been on a "better" process node already and still couldn't hang. So to believe otherwise until there is verifiable proof is foolish at best, ignorant at worst.
 
Guess we'll see, hard to say for sure. Doesn't seem NVIDIA made much progress with their node change (perf/watt that is). Anyway, I agree: without proof we don't know for sure, all we have is rumors at this point. Current rumors have power numbers under 300W for 3080 performance. While they aren't confirmed, this is the only indication I have to go on right now. It could be completely wrong or it could be spot on, who knows.

What we do know is that the 6900 XT or whatever they showed has dual 8-pin power connectors, which means 150W + 150W + 75W from the PCIe slot. They had some issues with getting too close to the PCIe slot limit before (it was spiking slightly over spec and they had to lower the power draw), so I doubt they'll be pushing that limit again. A safe MAX bet would be 350W, which puts it 25W-30W above the 3080 in the worst case. Best case is the rumors of 275W being true, which puts it 45W-50W better than Nvidia. In reality it'll probably land in the 300W-325W range, putting it either slightly better or about the same, which in itself would be a pretty impressive gain from one generation to the next. As you stated though, we don't know for sure and won't until we get independent reviews.

"Better" process node doesn't automatically mean better product. Their R&D budget and # of engineers is a crap ton less than nvidia, so a lot of their automated 7nm routing that could have been hand tweaked wasn't, leaving them plenty of opportunities to wring more performance and low power out of their products. They are finally getting their budgets into better shape and have learned a lot more about the process/node itself. I am not going to say it's a done deal or anything, just that it's not outside the realm of possibility.
 
No offense, but AMD has consistently been releasing GPUs that consume more power but provide lower performance than their competition. So why would you even think that they're going to magically require less power to still come second best this time around?

Now don't get me wrong, Nvidia certainly increased the power requirements of the 30 series. But it seems strange to me that people keep thinking AMD will magically produce a GPU that can somehow beat their competition. Competition that, mind you, has been consistently pushing the envelope in GPU technology and actually moving the industry forward.
Because Nvidia has been stagnating with efficiency while AMD is making 50% perf-per-Watt improvements. RDNA2 is their Zen of GPUs. Was Zen magic or are dominating companies able to be overtaken?
 
Am I the only one who cares if Big Navi uses the new 12-pin power plug? If AMD ever wants to be viewed as an innovator, they've got to adopt new ideas.
That's not innovation, that's getting desperate and pushing your GPU to the far end of the efficiency curve to squeeze the last ounce of performance on an old node, skyrocketing the power consumption.
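Rough illustration of why chasing the last bit of clock blows up power: dynamic power scales roughly with C·V²·f, and voltage has to climb steeply near the top of the frequency curve. The voltage/frequency pairs below are made-up placeholders to show the shape of the curve, not any real card's V/F table.

```python
# Simplified CMOS dynamic-power model: P ≈ C * V^2 * f.
# The operating points are illustrative placeholders, not measured data.

def scaled_power(base_power_w: float, v_ratio: float, f_ratio: float) -> float:
    """Scale a baseline power figure by (V/V0)^2 * (f/f0)."""
    return base_power_w * v_ratio ** 2 * f_ratio

BASE_POWER_W = 220.0  # hypothetical board power at the efficiency sweet spot

# (clock ratio, voltage ratio): voltage climbs faster than clock at the top end.
operating_points = [
    (1.00, 1.00),   # sweet spot
    (1.05, 1.06),   # +5% clock needs roughly +6% voltage
    (1.10, 1.15),   # +10% clock needs roughly +15% voltage
]

for f_ratio, v_ratio in operating_points:
    p = scaled_power(BASE_POWER_W, v_ratio, f_ratio)
    print(f"+{(f_ratio - 1) * 100:3.0f}% clock -> {p:5.1f} W "
          f"({(p / BASE_POWER_W - 1) * 100:+.0f}% power)")
```

In this toy model the last 10% of clock costs well over 40% more power, which is the "far end of the efficiency curve" problem in a nutshell.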
 
Because Nvidia has been stagnating with efficiency while AMD is making 50% perf-per-Watt improvements. RDNA2 is their Zen of GPUs. Was Zen magic or are dominating companies able to be overtaken?

It's like people somehow ignore the Ampere power numbers for some reason. They are monstrous. The RTX 2080 had a TDP of 225W. The RTX 3080 is 320W. All that "extra performance" comes at a greatly exaggerated power draw.

AMD appears to be making a concerted effort in the performance per watt category. Whether or not that pans out is a different story.
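For concreteness, perf-per-watt is just relative performance divided by relative power. A minimal sketch using the TDPs cited above; the performance uplift figure is a hypothetical placeholder, not a benchmark result:

```python
# Perf-per-watt change = (relative performance) / (relative power).
# The TDP figures are the ones cited in this post; the uplift is a
# hypothetical placeholder, not a measured benchmark result.

RTX_2080_TDP_W = 225
RTX_3080_TDP_W = 320

hypothetical_perf_uplift = 1.60   # assume the 3080 is 60% faster (placeholder)

power_increase = RTX_3080_TDP_W / RTX_2080_TDP_W            # ~1.42x
perf_per_watt_change = hypothetical_perf_uplift / power_increase

print(f"Power increase: {power_increase:.2f}x")
print(f"Perf/W change:  {perf_per_watt_change:.2f}x "
      f"({(perf_per_watt_change - 1) * 100:+.0f}%)")
```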
 
That's pretty much where we are. This happened when AMD released Zen 2: all of a sudden power didn't mean anything. Hopefully AMD didn't drop the ball this time. We shall see.
 
... Doesn't seem NVIDIA made much progress with their node change (perf/watt that is). ...

AdoredTV (alright, guys: I enjoy and learn from his commentary) has an Ampere video (here: or here: ). He discusses how it is NOT a new node, just marketing fluff. (That was my takeaway.) FWIW.

Sorry for posting two links. I'm not sure which one has the details.
 
What I still don't understand is how Ampere cards can have so, so many more shaders without having much greater improvement in performance unless the individual shaders have drastically lower IPC.
 
A site did testing on this using non-gaming apps and the performance was double. The conclusion was that it's a driver/game/API issue where games can't take advantage of the cores. I thought it was wccftech but I can't find it right now.

EDIT: Found it here. Not a perfect analysis, but it's pretty clear that games don't use what the 30 series offers, though some apps do.
 
Someone who actually pays attention. It's people like you who restore a modicum of hope.

As stated by jeremyshaw, they've been on a "better" process node already and still couldn't hang. So to believe otherwise until there is verifiable proof is foolish at best, ignorant at worst.

It's likely coming from the idea that the XSX/PS5 offer ~2080 Super levels of performance at ~300 watts of total system consumption. Not saying it's accurate or whatever, I'm just guessing this is where the efficiency gains are being extrapolated from.
 
What I still don't understand is how Ampere cards can have so, so many more shaders without having much greater improvement in performance unless the individual shaders have drastically lower IPC.
Because they only do full-rate FP32, while INT16/INT32 gets half the execution cores.
 
I have seen this mentioned more than once - where is the evidence for this?
It's just speculation, as with everything at this point. A few rumors that are just guessing, no evidence so take with an extra large handful of salt.
 
Because they only do full-rate FP32, while INT16/INT32 gets half the execution cores.

Okay, I was just looking at a Hardware Times article on the FP32/INT32 split, now that you mentioned the half-rate execution.

https://www.hardwaretimes.com/nvidi...hy-the-rtx-3080-is-limited-to-10gb-of-memory/

The article was indicating: "As a result of this new partitioning, each Ampere SM partition can execute either 32 FP32 instructions per clock or 16 FP32 and 16 INT32 instructions per cycle. You’re essentially trading integer performance for twice the floating-point capability. Fortunately, as the majority of graphics workloads are FP32, this should work towards NVIDIA’s advantage." But I must still be misunderstanding something, because if the majority of workloads are FP32 focused, then the sheer number of cores should be able to complete the workload much faster...
 
AdoredTV has an informative video on the subject: [embedded video]

Games still utilize the integer datapath for about 1/4 of operations:

[attached screenshot: Screenshot_20201012-211857~01.jpg]
 
https://www.hardwaretimes.com/nvidi...hy-the-rtx-3080-is-limited-to-10gb-of-memory/

The article was indicating: "As a result of this new partitioning, each Ampere SM partition can execute either 32 FP32 instructions per clock or 16 FP32 and 16 INT32 instructions per cycle. You’re essentially trading integer performance for twice the floating-point capability. Fortunately, as the majority of graphics workloads are FP32, this should work towards NVIDIA’s advantage." But I must still be misunderstanding something, because if the majority of workloads are FP32 focused, then the sheer number of cores should be able to complete the workload much faster...

You still have to feed data to those execution units. That requires tons of bandwidth. Caches do help, but only so much. For compute workloads where the data fits entirely in cache you see GPUs hitting close to their peak instruction throughput. This isn't the case for games.

In games only some parts of a frame are FP32 heavy. Other parts of the frame are 100% bandwidth or fillrate limited. You will never see perfect scaling based on FP32 alone. You need to scale up everything else too.
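To make the issue-rate side of that concrete, here's a toy model. It assumes only the partitioning from the Hardware Times quote (Ampere: 32 FP32 per clock, or 16 FP32 + 16 INT32; Turing: 16 FP32 + 16 INT32 on separate datapaths) and the roughly 1/4 integer share from the chart posted above, and it deliberately ignores the bandwidth and fillrate limits this post describes.

```python
# Toy issue-rate model of one Ampere SM partition vs. one Turing SM partition,
# based on the partitioning described in the quoted article:
#   Ampere: per clock, either 32 FP32 ops or 16 FP32 + 16 INT32 ops
#   Turing: per clock, 16 FP32 + 16 INT32 ops on separate datapaths
# It ignores bandwidth, latency and scheduling entirely; it only shows why
# "twice the FP32 cores" doesn't mean twice the throughput once roughly 1/4
# of shader instructions are integer ops.

def ampere_cycles(n_ops: float, int_fraction: float) -> float:
    """Cycles to issue n_ops with the given INT share on one Ampere partition."""
    int_ops = n_ops * int_fraction
    fp_ops = n_ops - int_ops
    int_cycles = int_ops / 16                    # INT32 issues at 16/clk...
    fp_coissued = min(fp_ops, int_cycles * 16)   # ...with 16 FP32 alongside it
    fp_cycles = (fp_ops - fp_coissued) / 32      # leftover FP32 runs at 32/clk
    return int_cycles + fp_cycles

def turing_cycles(n_ops: float, int_fraction: float) -> float:
    """Cycles on one Turing partition: fixed 16 FP32 + 16 INT32 per clock."""
    int_ops = n_ops * int_fraction
    fp_ops = n_ops - int_ops
    return max(int_ops / 16, fp_ops / 16)

N_OPS = 1_000_000      # arbitrary instruction count
INT_SHARE = 0.25       # roughly 1/4 integer ops, per the chart above

speedup = turing_cycles(N_OPS, INT_SHARE) / ampere_cycles(N_OPS, INT_SHARE)
print(f"Issue-rate speedup at {INT_SHARE:.0%} INT ops: {speedup:.2f}x")
# With a 25% integer mix this lands around 1.5x per partition, not 2x,
# before memory bandwidth or fillrate ever enter the picture.
```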
 
I am convinced MLID and Jim from AdoredTV are on AMD's marketing payroll.

You clearly haven't followed them enough. They have torched AMD a bunch of times. Watch their past videos. MLID doesn't fuck around and calls bullshit when he sees it, and Adored has gone off on AMD a bunch of times.
 
Someone who actually pays attention. It's people like you who restore a modicum of hope.

As stated by jeremyshaw, they've been on a "better" process node already and still couldn't hang. So to believe otherwise until there is verifiable proof is foolish at best, ignorant at worst.
So we are just flat-out ignoring architectural generations? It's process node or bust?
Whatever :ROFLMAO::ROFLMAO::ROFLMAO:
 
You clearly haven't followed them enough. They have torched AMD a bunch of times. Watch their past videos. MLID doesn't fuck around and calls bullshit when he sees it, and Adored has gone off on AMD a bunch of times.


I have followed both since they started, as with most of the other rumor mongers and yellow journalists, but things can change, just as the economy has and people's financial needs within it.
 
I am convinced MLID and Jim from AdoredTV are on AMD's marketing payroll.
You took the time to give his name and the channel name for one, but just "MLID" for the other :rolleyes:
These aren't household names! Not to pick on you, but people, this isn't Twitter. You can spell out whole words so we know WTF you're talking about!
 
I believe they sent Jim a Radeon VII; he said he liked it except for the blank screens and drivers.
 