AMD Navi RX 3080 $249. Leaks & Rumors.

ROFL.

I’d rather have the 2060 all day, every day: it’s quieter, faster, and likely overclocks a ton better. Then again, if every game you play is Vulkan-supported like mine are (Doom 2016), the Vega is fine, but I’d rather not wake the dead or heat up the polar ice caps from my gaming box.
I like Gamers Nexus’ breakdown on this: for the out-of-box experience, go with the 2060. If you love tweaking and overclocking to squeeze every bit of performance from the card, go with the Red Dragon 56.

Their take on overclocking in general is that it just isn’t fun on Nvidia hardware. They specifically mention that in the Radeon VII review, calling its broken overclocking at launch a big disappointment since they were looking forward to undervolting and tweaking clocks (and they recommend the Vega 56 yet again).
 
Don't forget that both the original Fury and Vega were delayed almost a year in the release cadence due to internal issues at AMD. Remember that Navi is going to be the first chip AMD has invested any real money into since they went on a CPU warpath 5+ years ago to reestablish the company as viable. Only now that they've hit those milestones can they allot engineers to work out Navi. Ryzen came out only two years ago, so that's two years since they could start freeing up engineers to work on Navi seriously. They didn't really need a new architecture until the PS5/next-Xbox generation, so everything is still moving according to their schedule. The crypto boom definitely gave them money to reinvest into Radeon, so we're probably very fortunate crypto came around when it did; otherwise Navi might still be three years away.

There are so many things wrong.

1. Navi is just a stopgap until AMD replaces GCN. AMD is not going to invest a lot of money into something that's going to be replaced anyway.

2. There are now more engineers working on Zen (and its derivatives) than 2 years ago.

3. Saying that Navi is made for PS5/Xbox Next is like saying that Zen 2 is made for PS5/Xbox Next. It's just ridiculous and doesn't make any sense.

4. Navi wouldn't have been 3 years away because AMD is working to replace GCN in 2020 or 2021.
 
He actually did an apples-to-apples comparison against the OCed 2060, and the 2060 was defeated. And I never claimed the 56 can beat an OCed 2070.

So a modded 2060 with basically unlimited power? Because that’s what he did to the Vega 56.
 
So a modded 2060 with basically unlimited power? Because that’s what he did to the Vega 56.
Yes, that would be an interesting comparison. Can the 2060 be modded in such a way, though? I thought that was always the thing with the Vega 56: it could easily be modded to have no upper limit on the power target, or to run the Vega 64/64 LC BIOS.
 
Yes, that would be an interesting comparison. Can the 2060 be modded in such a way, though? I thought that was always the thing with the Vega 56: it could easily be modded to have no upper limit on the power target, or to run the Vega 64/64 LC BIOS.

You can usually flash a BIOS with a higher power limit. My 2080 Ti, for example, I flashed from 300 W to 380 W. Takes a minute.

Shunt modding is also fast; I’ve seen Tech Jesus do that.

I did both to my card, before the space invader reports came out en masse. What I found is that the BIOS flash alone is enough; the shunt mod on top of it didn’t buy anything.... yet.
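For reference, the shunt mod works because the card’s power controller infers current from the voltage drop across a tiny sense resistor; soldering a second resistor in parallel lowers the effective resistance, so the controller under-reads the true draw. A rough sketch of the math (the resistor values are illustrative, not from any specific card):

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock_shunt = 0.005   # 5 mOhm sense resistor (illustrative value)
added_shunt = 0.005   # equal-value resistor soldered on top of it

effective = parallel(stock_shunt, added_shunt)   # 2.5 mOhm

# The controller still assumes the stock value, so it reports
# I_reported = V / R_stock while the real current is V / R_effective.
underread_factor = effective / stock_shunt       # 0.5

# A 300 W firmware limit therefore lets the card actually pull:
real_limit_w = 300 / underread_factor            # 600 W
print(effective, underread_factor, real_limit_w)
```

Which is why shunt mods blow well past any BIOS power limit, and why they carry real risk if the VRM and cooling can’t keep up.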

After seeing the Radeon VII and hearing Navi is still GCN, my only hope is that it has great value. My relatives’ GTX 970s are starting to show their age, and I’d like to go with Navi if it’s at least on par with the competition.

AMD really needs a new architecture. They haven’t had a good generational increase from architecture alone, whereas nVidia gains a lot more purely from engineering, not node differences.
 
You can usually flash a BIOS with a higher power limit. My 2080 Ti, for example, I flashed from 300 W to 380 W. Takes a minute.

Shunt modding is also fast; I’ve seen Tech Jesus do that.

I did both to my card, before the space invader reports came out en masse. What I found is that the BIOS flash alone is enough; the shunt mod on top of it didn’t buy anything.... yet.

After seeing the Radeon VII and hearing Navi is still GCN, my only hope is that it has great value. My relatives’ GTX 970s are starting to show their age, and I’d like to go with Navi if it’s at least on par with the competition.

AMD really needs a new architecture. They haven’t had a good generational increase from architecture alone, whereas nVidia gains a lot more purely from engineering, not node differences.

I keep reading that about GCN from all sorts of people. GCN is nothing more than an umbrella designation for how the compute units work and how they are laid out; there are differences between GCN revisions as well.

People claim that GCN is a power hog, while at the same time APUs using GCN don't overheat, don't cause any problems of that kind, and are low-powered.
GCN tends to get blamed for everything bad in the world of AMD GPUs, without any reservation.
 
APUs using GCN don't overheat

APUs are also memory bandwidth starved...

GCN is inefficient: in general it requires more bandwidth to deliver the same performance as Nvidia designs, and it doesn't scale as well with clock speed, regardless of process node. Every time AMD has released a GPU based on it, it's run like Nvidia's last generation. The Radeon VII typifies this: a 7nm GPU with massive memory bandwidth that still runs as hot as a GTX 1080 Ti.
 
Every time AMD has released a GPU based on it, it's run like Nvidia's last generation. The Radeon VII typifies this: a 7nm GPU with massive memory bandwidth that still runs as hot as a GTX 1080 Ti.

I’d hardly say every time. The 7970 and 290X were hotter and more power-hungry, but they didn’t run like last gen. Of course, they didn’t exactly run like the comparable current gen either...
 
APUs are also memory bandwidth starved...

GCN is inefficient: in general it requires more bandwidth to deliver the same performance as Nvidia designs, and it doesn't scale as well with clock speed, regardless of process node. Every time AMD has released a GPU based on it, it's run like Nvidia's last generation. The Radeon VII typifies this: a 7nm GPU with massive memory bandwidth that still runs as hot as a GTX 1080 Ti.

So which is it: inefficient, or doesn't scale with clock speed?
Why do you refuse to read what I wrote? I already explained what GCN is and why there are different revisions, and in return you just repeat the same claims.
 
GCN is inefficient; in general it requires more bandwidth to return the same performance of Nvidia designs, and it doesn't scale as well with clockspeed, regardless of process node. Every time AMD has released a GPU based on it, it's run like Nvidia's last generation. The Radeon VII typifies this, being a 7nm GPU with massive memory bandwidth, and still as hot as a GTX1080Ti.

In order to get even close to NVIDIA, AMD needs to have:

1. working tile-based rasterization

2. much better delta color compression
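Delta color compression matters because neighboring pixels usually differ only slightly, so storing small differences instead of full values cuts memory traffic. A toy illustration of the idea (not how either vendor's hardware actually implements DCC):

```python
def delta_encode(row):
    """Store the first pixel, then each pixel as a difference from its neighbor."""
    out = [row[0]]
    for prev, cur in zip(row, row[1:]):
        out.append(cur - prev)
    return out

def delta_decode(encoded):
    """Reverse the encoding by accumulating the differences."""
    row = [encoded[0]]
    for d in encoded[1:]:
        row.append(row[-1] + d)
    return row

# A smooth gradient: full 8-bit values become tiny deltas.
row = [100, 101, 102, 104, 104, 105]
enc = delta_encode(row)          # [100, 1, 1, 2, 0, 1]
assert delta_decode(enc) == row
```

Small deltas fit in far fewer bits than full values; that is the bandwidth win real DCC hardware chases per tile, with fallbacks for data that doesn't compress.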
 
APUs are also memory bandwidth starved...

GCN is inefficient: in general it requires more bandwidth to deliver the same performance as Nvidia designs, and it doesn't scale as well with clock speed, regardless of process node. Every time AMD has released a GPU based on it, it's run like Nvidia's last generation. The Radeon VII typifies this: a 7nm GPU with massive memory bandwidth that still runs as hot as a GTX 1080 Ti.

The real determination of efficiency would be to set the various aspects of nVidia and AMD hardware equal to each other: ALU counts, clock speeds, and bandwidth would have to be normalized to really get a feel for this. AMD and nVidia tend to dance around each other here, making an 'apples-to-apples' comparison difficult. Of note, the Radeon VII has 3840 ALUs and 64 ROPs, close to the GTX 1080 Ti's 3584 ALUs (though that card has 88 ROPs). The Radeon VII does have more TMUs (240 vs 224) and roughly twice the memory bandwidth. Reducing the memory clock on the Radeon VII would normalize that aspect, as would setting the core/boost clocks on both GPUs to equal values. My guess is that the Radeon VII would turn out to be slightly more efficient in this comparison, but I'll yield to the raw data if someone were to undertake it.
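One crude way to put numbers on that normalization is peak theoretical FP32 throughput (2 FLOPs per ALU per clock, assuming FMA) against memory bandwidth. The figures below are published specs; picking the GTX 1080 Ti as the comparison point is my assumption:

```python
def peak_tflops(alus, boost_ghz):
    """Peak FP32 throughput: 2 FLOPs (one FMA) per ALU per cycle."""
    return 2 * alus * boost_ghz / 1000.0

cards = {
    # name: (ALUs, boost clock in GHz, memory bandwidth in GB/s)
    "Radeon VII":  (3840, 1.750, 1024),
    "GTX 1080 Ti": (3584, 1.582, 484),
}

for name, (alus, clk, bw) in cards.items():
    tf = peak_tflops(alus, clk)
    # GB/s of bandwidth backing each TFLOP of compute
    print(f"{name}: {tf:.2f} TFLOPS, {bw / tf:.0f} GB/s per TFLOP")
```

The Radeon VII comes out around 13.4 TFLOPS to the 1080 Ti's ~11.3, yet ships with more than twice the bandwidth per TFLOP, which is consistent with the claim that GCN leans on memory bandwidth to deliver comparable real-world performance.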

So why is nVidia ruling the high end right now? They are simply throwing more hardware at 3D graphics than AMD. A lot more. AMD's designs top out at 4096 ALUs, 256 TMUs, and 64 ROPs, where nVidia's designs currently top out at 5120 ALUs, 320 TMUs, and 128 ROPs. To further compound the difference, nVidia tends to clock their chips higher. The result is pretty clear: nVidia comes out faster. This increase in hardware is also expressed in the massive difference in die size: the TU102 on the RTX 2080 Ti is a massive 754 mm^2 built on a 12 nm node vs. the Radeon VII at 331 mm^2 built on a 7 nm node. Not exactly an apples-to-apples comparison given the different process nodes, but nVidia's die is more than twice as large as the Vega 20 (and the GV100 is even bigger!).

This is where AMD's chiplet strategy on the CPU side will pay off greatly on the GPU side: AMD will be able to throw just as much silicon into a product as nVidia does. (AMD has been shying away from extremely large dies for years, but they could conceptually go the same route as nVidia here.) nVidia has plans to go the chiplet route as well in time, though I would bet that AMD makes this move first on the GPU side. Moore's Law has slowed down, but I would continue to expect GPU speeds to skyrocket. The downside of a modular approach is that I would expect prices to stay at the elevated plateau the mining boom established.
 
Heh, pretty much what IdiotInCharge said. Last night I literally defrosted half a loaf of potato bread in my Vega 56's exhaust air: from fully frozen to nice and soft in about 10 minutes (I had to leave it and walk outside to do stuff). But hey, it puts out enough hot air to keep my room nice and warm in the winter. In other words, AMD video cards are not as energy/heat-efficient as nVidia's. So yeah, there is that.
 