Navi Rumors

That's a huge mountain to climb.

AMD would have to update its delta color compression (which hasn't been updated since Tonga) to be on par with NVIDIA's 4th-gen delta color compression used in Pascal. (Assuming 8GB GDDR6, providing memory bandwidth of 448.0 GB/s.)

I know, but why would AMD release a slower card that consumes the same power as the Radeon VII/Vega 64? That part is what makes no sense. That is something that will be very hard to believe, and it would be DOA.
 
Assuming no significant architectural improvement, I would expect it to perform somewhere between Radeon RX Vega 56 and Radeon RX Vega 64 while having the power consumption of the Radeon RX Vega 64.

The GPU would run at Radeon RX Vega 56's clock speed.

Memory bandwidth (assuming 8GB GDDR6) would be 448.0 GB/s, which is around halfway between Radeon RX Vega 56's 410.0 GB/s and Radeon RX Vega 64's 483.8 GB/s.

That would put it in Geforce RTX 2060's territory.

I forgot to mention, this is with 14 Gbps GDDR6 (which NVIDIA uses on its Geforce RTX cards).

______________________________________________________________________________

With Samsung's 16 Gbps GDDR6 [used on NVIDIA's Quadro RTX], memory bandwidth (assuming 8GB GDDR6) would be 512.0 GB/s. [This surpasses Radeon RX Vega 64's 483.8 GB/s.]

Even with Radeon RX Vega 56's clock speed, the additional memory bandwidth would allow Radeon RX Vega 64-like performance.

_______________________________________________________________________________

With Micron's 20 Gbps GDDR6 (never mind the cost), memory bandwidth (assuming 8GB GDDR6) would be 640.0 GB/s.

That would blow up the power consumption, though.
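The bandwidth figures above can be sketched in a few lines. Note the 256-bit bus is my assumption (an 8GB card built from eight 32-bit GDDR6 chips); the posts above only state the capacity:

```python
# Peak GDDR6 bandwidth: per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte.
# 256-bit bus is an assumption for an 8GB card (eight 32-bit chips).

def gddr6_bandwidth_gbps(data_rate_gbps: float, bus_width_bits: int = 256) -> float:
    """Return peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

for rate in (14, 16, 20):
    print(f"{rate} Gbps GDDR6 on a 256-bit bus -> {gddr6_bandwidth_gbps(rate):.1f} GB/s")
```

This reproduces the three numbers quoted above: 448.0, 512.0, and 640.0 GB/s.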
 
I know, but why would AMD release a slower card that consumes the same power as the Radeon VII/Vega 64? That part is what makes no sense. That is something that will be very hard to believe, and it would be DOA.

...because GDDR6 is cheaper than the HBM2 that Vega uses, and Vega hasn't exactly been profitable for AMD.

I addressed that earlier.
 
Do you guys think there will be a ~$100-ish Navi SKU that will require slot power only & beat a 1050 Ti? Looking for something like this for a small box for my son to play Lego games on Steam.
 
Do you guys think there will be a ~$100-ish Navi SKU that will require slot power only & beat a 1050 Ti? Looking for something like this for a small box for my son to play Lego games on Steam.

No. For that I would use their APU chips, or a GTX 980, which can be had off eBay for around $100.
 
The thing is, if AMD doesn't make good cards (GPUs) and can't get market share, do you think Bethesda is going to care to stick with AMD optimizations? Are MS and Sony going to stay with AMD as well? For consoles it's all about saving money, but if AMD can't get power consumption under control at the same or a higher performance profile, all of these things will be in jeopardy. Power usage in consoles is important too; if they can cut 5 bucks in cooling needs, then they can spend that on GPU or SoC needs.

CPU-wise, they no longer need to stick with x86; they can go to ARM. Granted, it will kill backward compatibility, but that has always been the reason to buy new consoles, since new games will only come out on the new consoles. This generational console thing, I think Sony and MS are going to find out, is not going to help them sell more consoles, so market share is going to be stagnant and it's going to slow down the sales of new consoles. However little they are making from the new consoles, all the R&D for them is going to go to waste.

I think it's totally different for consoles vs desktop or high-end gaming parts. AMD GPUs have decent efficiency in a controlled environment. Navi will do just fine on 7nm for consoles. Consoles have a much longer lifespan, so if they can work with Polaris for years, I am sure they can work with Navi for a similar time frame just fine. Consoles will stick with what works, not anything else. They have no reason to go ARM and take that big of a risk when they will be getting more efficient and faster parts.

I don't see consoles going to ARM; maybe Nintendo will. But Sony and MS will stick with higher-performing parts and not piss off developers who want easier porting from PC to consoles.
 
No. For that I would use their APU chips, or a GTX 980, which can be had off eBay for around $100.
I've already got a spare PC that isn't an APU, so pretty much just looking for the GPU..... The GTX 980 also requires more than slot power.
 
Do you guys think there will be a ~$100-ish Navi SKU that will require slot power only & beat a 1050 Ti? Looking for something like this for a small box for my son to play Lego games on Steam.

Should be something to replace the RX560, which is already in a pretty good place.
 
Do you guys think there will be a ~$100-ish Navi SKU that will require slot power only & beat a 1050 Ti? Looking for something like this for a small box for my son to play Lego games on Steam.

Obviously not slot power only but the RX570 is a pretty good deal that likely would last for a while too.
 
Obviously not slot power only but the RX570 is a pretty good deal that likely would last for a while too.

Yeah I've got a spare one of those, however in my 400w (92% efficient) HP power supply with an E3-1240v3, 32G ddr3, and 1 ssd, it will shut the pc off under load. The spare box I have for my son is 300w. Power supply doesn't have a 6 or 8 pin connector. It also doesn't have any spare molex. This is an Acer i5-8400 Desktop I picked up as an insurance replacement from a flood.

For the Lego games I was thinking maybe an RX560 from eBay for $70-$80? But if Navi is going to have something, I'll definitely wait.
 
Yeah I've got a spare one of those, however in my 400w (92% efficient) HP power supply with an E3-1240v3, 32G ddr3, and 1 ssd, it will shut the pc off under load. The spare box I have for my son is 300w. Power supply doesn't have a 6 or 8 pin connector. It also doesn't have any spare molex. This is an Acer i5-8400 Desktop I picked up as an insurance replacement from a flood.

For the Lego games I was thinking maybe an RX560 from eBay for $70-$80? But if Navi is going to have something, I'll definitely wait.

Here's a 600-watt EVGA power supply for $20.

Fix your situation a better way

https://slickdeals.net/f/12861811-e...ed-1050ti-99-99-shipped?src=SiteSearchV2Algo1

https://slickdeals.net/f/12851920-e...r-supply-19-99-after-mir?src=SiteSearchV2Algo
 
Do you guys think there will be a ~$100-ish Navi SKU that will require slot power only & beat a 1050 Ti? Looking for something like this for a small box for my son to play Lego games on Steam.

Yeah I've got a spare one of those, however in my 400w (92% efficient) HP power supply with an E3-1240v3, 32G ddr3, and 1 ssd, it will shut the pc off under load. The spare box I have for my son is 300w. Power supply doesn't have a 6 or 8 pin connector. It also doesn't have any spare molex. This is an Acer i5-8400 Desktop I picked up as an insurance replacement from a flood.

For the Lego games I was thinking maybe an RX560 from eBay for $70-$80? But if Navi is going to have something, I'll definitely wait.


I would go with a higher wattage PSU for now, and when Navi rolls around, upgrade to something more high-powered from that lineup...
 
It will be slow, it will be hot, it will be late, it will be Radeon....
Radeons weren't always too little, too late, with the ability to heat your room while gaming.
The Radeon 9700 was leaps and bounds better than anything NV had.
The HD 5xxx series were also the first cards with DX11 and were much better than Fermi.

What made AMD struggle was switching from VLIW to the disaster called GCN.
Games use rather simple shaders that operate on vectors, and VLIW-type architectures, as it happens, are exceptionally good at these kinds of tasks.
GCN added tons of circuitry that is only useful for writing very complex shaders (almost like having many full-fledged processors), which is completely useless for games and their simple shader programs that rarely, if ever, use any branching and mostly have to execute a lot of multiply and add operations in order.

There was a similar situation with NV and the GTX 200 series, where they added a lot of circuitry for double-precision (64-bit) floats and other shader improvements, and overall gaming performance was sacrificed in turn: within that transistor budget they could have made a much better gaming card, or at the very least the same CUDA core count with fewer transistors and lower power consumption.

I find this GCN situation pretty ridiculous, because consoles use GCN cards where they could have had VLIW5 and much better gaming performance... but maybe some talented developers managed to use advanced GCN features to offload some work from the pathetically weak CPUs... then it would somewhat redeem AMD's sins...
 
XoR_ that's an interesting pov. AMD themselves said that VLIW5 had gotten overly complicated, which was why the top end HD 6000 cards used a new TS3 (VLIW4) architecture. I recall those being very competitive and it seemed as though this would be the architecture in use for all of the HD 7000 cards, then AMD threw a curve ball and dropped GCN on everyone.

It certainly seems to have done its job for GPGPU and at least in the first wave or two seemed to remain competitive, but then we only got one 3rd gen part (R9 285, and the followup R9 380/380X), which seemed a nice improvement, and then Polaris, which was also a nice improvement but has suffered for power draw vs. equivalent nvidia parts.

I do wonder how the architectures would have compared if Terascale 3 had made the shrink to 28nm, though. I don't recall relative performance between the two and I'm having a hard time finding a comparison now of the 7870 "xt" or "le" and 6970 (since both had a 1536:96:32 config) - most of the reviews of the former omit the latter in their comparison, and the old Tom's article only has a single test entry on it that seems to indicate a roughly 25% performance advantage for GCN with same core count, slightly higher clocks and slightly more mem bandwidth. If that's accurate it paints GCN in a pretty positive light at the time, but without ever knowing what Terascale 3 would have looked like at higher core count / clock speeds from smaller process node I guess it's all just pretendspeak anyways.

NVIDIA managed to get an impressive performance improvement out of 28nm (Kepler to Maxwell) and then again at 14nm (Pascal to Turing at 14nm+++ or whatever you want to think of TSMC's 12FFN), as some current info on the 1660 vs 1060 seems to indicate, whereas AMD sort of got some efficiencies in at 28nm (Tonga, GCN 3rd gen) but nothing like the generational improvement of the Kepler -> Maxwell jump. And 14nm brought Polaris and then Vega, but no real way to do a direct comparison between the two to see if Vega was really a big improvement over Polaris. What would it have looked like scaled down to mainstream as "vega 32"? What would it have looked like with GDDR5X instead of HBM2? Who knows.

I sometimes think it would have been better for AMD to stop mixing generations and execute a top-to-bottom refresh of their product stack with the latest and greatest. Polaris seemed to go pretty well for them in that respect but left them without anything at the high end. Navi, maybe, depending on the rumours you choose to believe, could finally be that and might be a real "zen" moment for them. Or... it could be more like Radeon VII, with some good performance improvements attributable to the shrink but not quite the leap everyone is hoping for. /shrug.

I'm excited to find out either way though. I'd prefer to keep AMD parts in my systems to support their continued, er, support of open source software.
 
I actually posted that. I ordered it, but not sure if it will fit in the case :(
Edit: It fit perfectly. Now gaming on an RX570 in that case. Airflow is good enough that it doesn't seem to get very hot under load. Pretty impressed. If the BIOS options were better on this Acer (no RAM timings/speed settings/XMP profiles????) I'd fully recommend it at $380 for an i5-8400 setup.
 
XoR_ that's an interesting pov. AMD themselves said that VLIW5 had gotten overly complicated, which was why the top end HD 6000 cards used a new TS3 (VLIW4) architecture. I recall those being very competitive and it seemed as though this would be the architecture in use for all of the HD 7000 cards, then AMD threw a curve ball and dropped GCN on everyone.

It certainly seems to have done its job for GPGPU and at least in the first wave or two seemed to remain competitive, but then we only got one 3rd gen part (R9 285, and the followup R9 380/380X), which seemed a nice improvement, and then Polaris, which was also a nice improvement but has suffered for power draw vs. equivalent nvidia parts.

I do wonder how the architectures would have compared if Terascale 3 had made the shrink to 28nm, though. I don't recall relative performance between the two and I'm having a hard time finding a comparison now of the 7870 "xt" or "le" and 6970 (since both had a 1536:96:32 config) - most of the reviews of the former omit the latter in their comparison, and the old Tom's article only has a single test entry on it that seems to indicate a roughly 25% performance advantage for GCN with same core count, slightly higher clocks and slightly more mem bandwidth. If that's accurate it paints GCN in a pretty positive light at the time, but without ever knowing what Terascale 3 would have looked like at higher core count / clock speeds from smaller process node I guess it's all just pretendspeak anyways.

NVIDIA managed to get an impressive performance improvement out of 28nm (Kepler to Maxwell) and then again at 14nm (Pascal to Turing at 14nm+++ or whatever you want to think of TSMC's 12FFN), as some current info on the 1660 vs 1060 seems to indicate, whereas AMD sort of got some efficiencies in at 28nm (Tonga, GCN 3rd gen) but nothing like the generational improvement of the Kepler -> Maxwell jump. And 14nm brought Polaris and then Vega, but no real way to do a direct comparison between the two to see if Vega was really a big improvement over Polaris. What would it have looked like scaled down to mainstream as "vega 32"? What would it have looked like with GDDR5X instead of HBM2? Who knows.

I sometimes think it would have been better for AMD to stop mixing generations and execute a top-to-bottom refresh of their product stack with the latest and greatest. Polaris seemed to go pretty well for them in that respect but left them without anything at the high end. Navi, maybe, depending on the rumours you choose to believe, could finally be that and might be a real "zen" moment for them. Or... it could be more like Radeon VII, with some good performance improvements attributable to the shrink but not quite the leap everyone is hoping for. /shrug.

I'm excited to find out either way though. I'd prefer to keep AMD parts in my systems to support their continued, er, support of open source software.
The ATI HD5870's evolution into the AMD HD 6970 was pretty nonsensical. It took a year to make, and: shader count decreased, transistor count increased, and performance improved, but so did power consumption. It seems they might as well have just added faster, newer memory and better power delivery and had the same results... with much better value for Bitcoin/Litecoin miners, a disease that was starting to get very popular at the time. Hell, the HD5870 was faster for mining coins than the newer card.

The HD 7970 had better power consumption, but it was on 28nm vs 40nm for the previous-gen cards. Compared to the last ATI card (at least at the high end), performance improved by 38% with a 100% increase in transistor count, an 8% clock increase, and a 72% memory bandwidth increase.
It seems they might as well have just doubled everything and wiped the floor with Nvidia's GTX 680.
Nvidia only released the GTX 680 as their top GPU because of Tahiti XT's inefficiency.

Of course, GCN has excellent DX12 compatibility and all the neat features like async compute... which are hardly needed even today, seven years later. Nvidia took their sweet time implementing async compute in a gradual way, and even Turing probably lacks the sophistication of GCN's implementation.
I think AMD would have done much better if they had done it like their competition: concentrate on maximizing performance for typical workloads and support the smallest possible feature set needed for DX12 compatibility, rather than chasing full performance with those features. I think VLIW5 could have been made DX12-compatible.
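The 6970-to-7970 percentages quoted above can be turned into a rough perf-per-transistor figure. This is strictly a back-of-the-envelope sketch using only the numbers in this post, not a proper architectural analysis:

```python
# Back-of-the-envelope scaling check using the figures quoted above:
# HD 7970 vs HD 6970: +38% performance, +100% transistors, +8% clock.
perf_gain = 1.38
transistor_gain = 2.00
clock_gain = 1.08

# Performance delivered per transistor, relative to the HD 6970.
perf_per_transistor = perf_gain / transistor_gain

# The same ratio with the clock bump normalized out as well.
perf_per_transistor_iso_clock = perf_gain / (transistor_gain * clock_gain)

print(f"perf/transistor vs HD 6970:      {perf_per_transistor:.2f}x")
print(f"perf/transistor at equal clocks: {perf_per_transistor_iso_clock:.2f}x")
```

By these numbers, first-gen GCN delivered roughly 0.69x the gaming performance per transistor of the last VLIW4 flagship, which is the inefficiency the post is pointing at.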
 
Hopefully something interesting arrives in the $300-500 segment soon.

I replaced my RX580 with an RTX2070 in the meanwhile.

I'll take another look once Navi is out. The more choices the better.
 
Do you guys think there will be a ~$100-ish Navi SKU that will require slot power only & beat a 1050 Ti? Looking for something like this for a small box for my son to play Lego games on Steam.

If AMD launches a small Navi, I suspect they won't launch it until 2020. The RX 460 came 6 months after the RX 480.

As far as pricing, I wouldn't expect to see it at $100. Probably more like $140, to reflect the cost of the 7nm process. The RX 460 launched with poor price-to-performance initially and only started having competitive pricing after the GTX 1050 launched. Considering the launch prices of most cards from both companies, I don't expect competitive pricing on RX Navi initially; basically they'll just match Nvidia's price points, as they have done with all their launches as of late.

I think you're better off waiting for the next-gen Ryzen 2 APU. The Lego games are likely not intensive, and movement in this market (the 75-watt market) has largely stagnated because there are too many memory-configuration bottlenecks, making the next step up better in price-to-performance. E.g., the RX 470/GTX 1060 3GB has more than 2x the performance for about 50-60% more cost.
 
If AMD launches a small Navi, I suspect they won't launch it until 2020. The RX 460 came 6 months after the RX 480.

As far as pricing, I wouldn't expect to see it at $100. Probably more like $140, to reflect the cost of the 7nm process. The RX 460 launched with poor price-to-performance initially and only started having competitive pricing after the GTX 1050 launched. Considering the launch prices of most cards from both companies, I don't expect competitive pricing on RX Navi initially; basically they'll just match Nvidia's price points, as they have done with all their launches as of late.

I think you're better off waiting for the next-gen Ryzen 2 APU. The Lego games are likely not intensive, and movement in this market (the 75-watt market) has largely stagnated because there are too many memory-configuration bottlenecks, making the next step up better in price-to-performance. E.g., the RX 470/GTX 1060 3GB has more than 2x the performance for about 50-60% more cost.

Navi is going to replace Vega.

Polaris is dirt cheap to make.

There's no urgent need to replace Polaris.
 
Navi is going to replace Vega.

Polaris is dirt cheap to make.

There's no urgent need to replace Polaris.

So something that would compete with the 1660 and 2060?

Vega 64 already competes with the 2070, and the Radeon VII with the 2080.
 
Radeon RX Vega 56 and Radeon RX Vega 64 are EOL

Radeon VII, not so much

I get that. So what will Navi bring, then, that won't be too close to the R7 and will still be an upgrade from Vega?

Or will it simply be a Vega equivalent but with less heat and power consumption?
 
I get that. So what will Navi bring, then, that won't be too close to the R7 and will still be an upgrade from Vega?

Or will it simply be a Vega equivalent but with less heat and power consumption?

Well, it will be cheaper to make.

Had Vega been cheaper to make, AMD would be more than happy to keep selling it to you.

Let’s say that you have Product A that sells for $10, but costs $11 to make.

That’s bad!

You replace it with Product B, which costs $9 to make.

Now, you are golden!
 
Or will it simply be a Vega equivalent but with less heat and power consumption?
Bingo...plus cheaper to make.

I think the top end RX Navi chip was only supposed to be somewhere equivalent to the 2070/1080 with a lower price. Personally, I'm hoping for 2060 level of performance for under $250.
 
Well, it will be cheaper to make.

Had Vega been cheaper to make, AMD would be more than happy to keep selling it to you.

Let’s say that you have Product A that sells for $10, but costs $11 to make.

That’s bad!

You replace it with Product B, which costs $9 to make.

Now, you are golden!

I still fear that AMD's pricing will match Nvidia's very closely and that we're not going to see any meaningful price drops on new cards.
 
I still fear that AMD's pricing will match Nvidia's very closely and that we're not going to see any meaningful price drops on new cards.

For all this time that there's been hope that AMD would force Nvidia to lower prices... it looks like AMD will be thanking Nvidia for keeping the prices high instead :ROFLMAO:
 
At the same time, they'd be prudent for their own bottom line to start high and perhaps cut aggressively afterward.
If they are serious, they've gotta come out of the gate swinging. Maybe not $249 cheap, but if they can hit that, it would knock it out of the park.
 
If they are serious, they've gotta come out of the gate swinging. Maybe not $249 cheap, but if they can hit that, it would knock it out of the park.

It's not going to work.

Let's say that AMD launches competitors to the Geforce RTX 2060 and Geforce RTX 2070 for $299 and $399, respectively.

Guess what?

NVIDIA is just going to cut the prices of the GeForce RTX 2060 and GeForce RTX 2070 to $299 and $399, respectively.

It would make more sense for AMD to price high and put on periodic discounts to get some sales.
 
It's not going to work.

Let's say that AMD launches competitors to the Geforce RTX 2060 and Geforce RTX 2070 for $299 and $399, respectively.

Guess what?

NVIDIA is just going to cut the prices of the GeForce RTX 2060 and GeForce RTX 2070 to $299 and $399, respectively.

It would make more sense for AMD to price high and put on periodic discounts to get some sales.
That only makes sense if you know Navi's die size and yields.
If AMD can produce Navi at the price it wants to sell it at ($249), that means yields/performance are really good. And if those are good, why would Nvidia suddenly get competitive on pricing for their premium (but it has ray tracing) RTX series? If anything, it might trigger an update of the GTX series.

Navi can only work on 7nm with the characteristics described by the AdoredTV leak, since it will have lower power and a lower price. That could bring a lot of good to consumers and a real headache for Nvidia, since they would no longer be able to wield "but AMD is a space heater".
 
That only makes sense if you know Navi's die size and yields.
If AMD can produce Navi at the price it wants to sell it at ($249), that means yields/performance are really good. And if those are good, why would Nvidia suddenly get competitive on pricing for their premium (but it has ray tracing) RTX series? If anything, it might trigger an update of the GTX series.

Navi can only work on 7nm with the characteristics described by the AdoredTV leak, since it will have lower power and a lower price. That could bring a lot of good to consumers and a real headache for Nvidia, since they would no longer be able to wield "but AMD is a space heater".

Let's not discuss the rumors from FraudTV.
 
That would be the logical thing for them to do. They can't compete with the RTX series on features yet, so it would follow that they'd want to have an aggressive price/performance stance in the raster only space.

At the same time, they'd be prudent for their own bottom line to start high and perhaps cut aggressively afterward.

AMD will undercut them, and it makes sense, because Nvidia really doesn't have much choice with the RTX series: the dies are bigger and cost more to make. I think the RTX 2060 is probably as low as it can get, because Nvidia loves their margins. AMD, on the other hand, will have a smaller process and smaller dies, and that will allow them to undercut Nvidia.
 
AMD will undercut them, and it makes sense, because Nvidia really doesn't have much choice with the RTX series: the dies are bigger and cost more to make. I think the RTX 2060 is probably as low as it can get, because Nvidia loves their margins. AMD, on the other hand, will have a smaller process and smaller dies, and that will allow them to undercut Nvidia.

Nvidia has the GTX 1660, and has access to the same processes that AMD does...
 
AMD will undercut them, and it makes sense, because Nvidia really doesn't have much choice with the RTX series: the dies are bigger and cost more to make. I think the RTX 2060 is probably as low as it can get, because Nvidia loves their margins. AMD, on the other hand, will have a smaller process and smaller dies, and that will allow them to undercut Nvidia.

NVIDIA has access to the same fabrication process that AMD does.

Clearly, the reason that NVIDIA is staying with 12nm is that it's cheaper than 7nm.
 
NVIDIA has access to the same fabrication process that AMD does.

Clearly, the reason that NVIDIA is staying with 12nm is that it's cheaper than 7nm.

Talking about die sizes, not just process. The RTX series is expensive because the dies are larger, even if 12nm is cheaper at this point. AMD will be able to harvest more dies from 7nm even if it's more expensive.

The RTX series is more expensive to make; that's pretty common knowledge at this point.
 