First AMD Navi GPU, "Navi 12," Will Have 40 CUs

Megalith

Wccftech claims that AMD has finalized its first Navi GPU, codenamed the “Navi 12.” It will reportedly have 40 compute units and is slated for release sometime in 1H 2019. The author points out that this is not the GPU being worked on for the PlayStation 5; “it’s a derivative of the actual Navi die and has been created specifically so AMD can get it to market for the PC audience primarily.”

I don’t have a concrete timeline for the part, but early estimates put it in H1 2019. AMD can also choose to accelerate this part while they work on Navi 10/20. As I have previously mentioned, the true TITAN-killer from Radeon will not be landing anytime soon – early estimates put the beefy Navi 20 somewhere around 2020 and at this early in the lifecycle, these timelines can vary widely.
 
Hopefully, this is the single-chip variant that they use to shake out all the bugs, hit all their thermal targets, and nail down reliability.

New architecture will need new drivers, new power, new everything.

THEN, when all that's worked out, we'll see eight of these on an MCM package/chip, and NV will go: "WTF just happened? Why can't we sell anything?"

:D

New designs are where you see a new driver come out, and suddenly there's 30% more performance...
 
The true "TITAN-killer won't be landing anytime soon." I assume he meant Ti killer, as the Titan has been repositioned (again) to be more of a prosumer card and not a pure gaming card.
 
Too bad they won't be doing a 7nm version of the Vega chip. It would serve as a nice stop-gap until Navi comes online.
 
Too bad they won't be doing a 7nm version of the Vega chip. It would serve as a nice stop-gap until Navi comes online.
Like 10% more performance, with lower power consumption and a bit off the cost?...

AMD needs a new RX 580 with GTX 1080 performance (at least!)
 
Like 10% more performance, with lower power consumption and a bit off the cost?...

AMD needs a new RX 580 with GTX 1080 performance (at least!)
A Vega refresh would do it. Performance is already there now. Bump performance 10% using less power for less money and you have a real winner.
 
Just get something worthwhile out there that I can go and buy around April next year, when I have gotten my piggy bank stuffed again.
I don't think they will do Vega on 7 nm (with a twist, as I think that's needed too). Whatever; I need something performing between a 1080 and a 1080 Ti, plus FreeSync and a decent price. That should serve my 1080p screen just fine for a while.
 
Assuming it's true, I wonder what that means for the PS5? Last rumor I heard said 2021, but a 1H 2019 PC Navi release either means it's coming before 2021, or that it's not really 1st gen Navi.
 
A Vega refresh would do it. Performance is already there now. Bump performance 10% using less power for less money and you have a real winner.
I think the reason AMD won't bring it to market is BOM. It doesn't make sense to compete with a 1080 Ti if your card costs 30-40% more to make and you can't command a premium. Where they can command a premium is in the compute and enterprise markets.
Vega is quite a bit more expensive to make than e.g. a 1080 Ti. Maybe cheaper than the 2080 Ti though, because that die is a fucking behemoth (I want to acid-etch one and display it once they are cheap or there are enough dead cards out of warranty).

There is a chance I may have an application for 7nm vega next year, so if I do get my mitts on some, I will see if they can be made to work with the normal Vega drivers.
 
I think the reason AMD won't bring it to market is BOM. It doesn't make sense to compete with a 1080 Ti if your card costs 30-40% more to make and you can't command a premium...

I'm not sure about that; a completely new architecture will have issues with backwards compatibility, software compatibility, and such.

Getting some hardware out there, just to see what the issues will be, is worth subsidizing the R&D costs with cheap hardware.

I'll put up with a LOT of crap for a 90% performance card for <50% of the price of the frontrunners.

If the 2080 was a $200 card, the bitching would be Way less Shrill.

That's like being a paid beta tester, unlike the rest of the current NV market. :)

If they work really well, competitors may have problems shipping cards.

Then, they drop the multi-chip version. :D
 
So basically, if I want something from AMD that counters the 2080 Ti, I have to wait till 2020 for something that will probably be a sidegrade at best... So this will probably sit around the 1070 at 90% of the price. Which really isn't a bad place to be, but it's not what I am looking for. It is at least a step in the right direction, though; it actually gives me more hope for the PS5 than anything.
 
So basically, if I want something from AMD that counters the 2080 Ti, I have to wait till 2020 for something that will probably be a sidegrade at best... So this will probably sit around the 1070 at 90% of the price. Which really isn't a bad place to be, but it's not what I am looking for. It is at least a step in the right direction, though; it actually gives me more hope for the PS5 than anything.

You don't turn around a deficit like they have so quickly.

If they can compete in the 1070/2070 space with a GDDR6-based card to keep costs reasonable, then they could get a foothold.
 
So basically, if I want something from AMD that counters the 2080 Ti, I have to wait till 2020 for something that will probably be a sidegrade at best... So this will probably sit around the 1070 at 90% of the price. Which really isn't a bad place to be, but it's not what I am looking for. It is at least a step in the right direction, though; it actually gives me more hope for the PS5 than anything.


That's the thing, though: not a lot of people NEED or WANT a $1200 4K/60 GPU, and AMD knows it. Don't get me wrong, I would love a 2080 Ti-crushing monster, but it's not realistic or necessary for AMD to gain some good market share, which is what they need.

Personally I would take something around a V56 with a lower power draw and be perfectly happy (aka a 1070). I have a 240Hz 1080p screen for gaming and I don't do anything super crazy high end.
 
I'm running 3 1920x1200 screens in crossfire on an 8GB RX 480; I bought it as an upgrade for 2x 7970s, and I'm totally happy with the performance.

Playing Doom, or racing games, or even Far Cry 4 has not presented me with any noticeable lag.

And the first thing I did was try Crysis with everything at maximum; it lags a bit, but not as bad as the 7970s.

I think I paid ~$500 each for the 7970s; the RX 480 was $250, IIRC.

I don't pay for bleeding-edge hardware; I buy cheaper hardware and OC the crap out of it. :)

Intel never sold a 4.5GHz 6-core Socket 2011 CPU, that I'm aware of, for any price. :D
 
I doubt they will cut Navi down to 40 CUs from Vega's 64 CUs while struggling to keep up with Nvidia. Unless this data is about the 580 replacement and not the actual Navi.
 
AMD Roadmap TLDR as I have heard:
  • Vega 7nm will not be coming to gamers.
  • Navi 12 will be the first Navi part to arrive and will be landing sometime in 1H 2019. Navi 10 has either been scrapped or will follow later sometime in late 2019 or early 2020, depending on a couple of factors. The performance level of this part will be equivalent to Vega and it will be a small GPU based on 7nm.
  • Navi 20 is going to be the true high-end GPU built on the 7nm node and as things stand right now, you are tentatively looking at it landing sometime around late 2020 – 2021.
  • Navi will also be the first architecture to transition away from GCN (and along with it, the 4096 SP / 64 CU limit that is inherent to the uArch implementation).
  • ‘Next-Gen’ architecture is the uArch formerly codenamed KUMA internally before AMD decided it didn’t like that name too much (oops) and will be based on the same brand new major architecture that AMD rolls out with Navi.
1st one is prolly true.
The rest is just to confuse you as much as it confused him.
 
1st one is prolly true.
The rest is just to confuse you as much as it confused him.

That doesn't match what Lisa Su has said... RX Vega 64 already lands at the 1080 performance level; it's reasonable for AMD to refresh Vega 64 and reach 2080-level performance (minus ray tracing). Another interesting point now is seeing RX Vega 64 selling in the $399 to $449 price range.
 
That doesn't match what Lisa Su has said... RX Vega 64 already lands at the 1080 performance level; it's reasonable for AMD to refresh Vega 64 and reach 2080-level performance (minus ray tracing). Another interesting point now is seeing RX Vega 64 selling in the $399 to $449 price range.
It is a compute card; that is what Vega is really good at. They would rather sell it to the professional market and focus on better performance with Navi, and the one after Navi, when they finally have some of the problematic power usage fixed.

If you look at Navi, they're going to go a different route with the drivers, since it was pretty hard for them to get those optimized for Vega. I'm not too sure that Vega is on track for anything for consumers (desktop).

We still have not heard what Navi is about; from all the rumours it seems it will be something average, not competing at the high end. With what AMD has, they will not be looking to compete with anything from Nvidia in the range of the 2080 / 2080 Ti, prolly for longer than a year from now.
Assuming it's true, I wonder what that means for the PS5? Last rumor I heard said 2021, but a 1H 2019 PC Navi release either means it's coming before 2021, or that it's not really 1st gen Navi.
Nothing; that is the semi-custom division in AMD, and my guess is that they have finished that one already. It is not so much about performance as staying within designated goals (performance per watt). If they have reached their goal of keeping expensive cooling for the device out of the picture while getting a good deal more performance due to 7nm, that is why I would think it is already finished. Because of the custom aspect of the chip, it should not use the full version the PC market is getting.
 
That doesn't match what Lisa Su has said... RX Vega 64 already lands at the 1080 performance level; it's reasonable for AMD to refresh Vega 64 and reach 2080-level performance (minus ray tracing). Another interesting point now is seeing RX Vega 64 selling in the $399 to $449 price range.

While I would be the first in line to replace at least 3 of my 7 VEGAs with a 7nm variant, assuming we see clock speeds north of 2100~2200MHz on water with reasonable power usage (I can already sustain 1750~1800MHz on my best samples), we are most likely not going to see this for consumers unless AMD were to offer an FE prosumer edition in the ~$1500 range with the HBM cut down to 16GB. It is a costly card to build, and the yields for dies that big on 7nm are not that amazing yet.

There is the chance that AMD stockpiles some dies that end up with Vega 56-level CU counts (3584 SPs) enabled, and we see that paired with 16GB of HBM2 for the $1500 I mentioned, but it's iffy at best.
 
1st one is prolly true.
The rest is just to confuse you as much as it confused him.
Thanks for the Roadmap comments. It makes it look like the 40 CU comment was likely related to the original cut-down, lower-priced Navi they thought they were going to launch as a mid-range card. How fascinating. I don't believe that AMD needs to immediately chase ray tracing; I don't see it being any more of a holy grail than Nvidia's PhysX was. But if AMD wants to show Nvidia up by using open-source ray tracing instructions and not proprietary BS again, maybe then it would be worth it. I have zero faith in the long-term usefulness of "Nvidia's" proprietary ray tracing stack. Now, the Tensor cores being used for other things, sure; that AA style looks promising. But let's not get bit again by another failed proprietary offering that is only in a handful of games and never lives up to its promises.
 
Thanks for the Roadmap comments. It makes it look like the 40 CU comment was likely related to the original cut-down, lower-priced Navi they thought they were going to launch as a mid-range card. How fascinating. I don't believe that AMD needs to immediately chase ray tracing; I don't see it being any more of a holy grail than Nvidia's PhysX was. But if AMD wants to show Nvidia up by using open-source ray tracing instructions and not proprietary BS again, maybe then it would be worth it. I have zero faith in the long-term usefulness of "Nvidia's" proprietary ray tracing stack. Now, the Tensor cores being used for other things, sure; that AA style looks promising. But let's not get bit again by another failed proprietary offering that is only in a handful of games and never lives up to its promises.

It is simple: AMD marketing will use it as a stick. :) They will simply say, look at all the money you spent on ray tracing, and what did you get for it? The company that championed the slogan "the way it is meant to be played" has gone to "the way you are meant to be played" with ray tracing. :)
 
Because of the custom aspect of the chip, it should not use the full version the PC market is getting.
IIRC the Xboner uses a custom Polaris that is better (more CUs) than what is available in dGPU...
 
I was thinking about Navi slated for the PC market after next year.

https://www.hardocp.com/article/2016/12/19/asus_rog_strix_rx_480_o8g_gaming_video_card_review/12

I do not think that it uses the same amount of power as this PC card draws (244 watts). So it can have more units, but is it the same or better?

Polaris has a wide bin where the vast majority of cards are running at a higher voltage than needed. Very few need the volts they come with; many undervolt them and can game more efficiently than the 1060 can. Miners also reported much the same...
That said, the figures there are for the entire PC, not the card. Or it would be as much power as a Vega, lol. But to answer your question, clocks will be lower on the Xboner. Wider uarch (more CUs), lower clock, similar speed, more efficiency for the Xboner; the Jaguar cores it uses are also very low clocked for efficiency, so it will use less power than the PC that [H] tested Polaris in.
Desktops are clocked higher, probably because of yield. From what I understand there has never been a fully enabled Polaris yet; maybe we'll see this in the 590 or whatever they are re-spinning it as.
 
Not so; I got that number by subtracting the 66 W idle figure from the 310 W load (gaming).

And the RX 590 is nothing but a die shrink to 12 nm; nothing exciting.

But my assessment was that the chip in the Xbox cannot draw the wattage that the card does, so either it is a miracle only made possible by MS engineers, or it just does not produce the same performance.

It is not bound by CUs but rather by watts.
 
Not so; I got that number by subtracting the 66 W idle figure from the 310 W load (gaming).

And the RX 590 is nothing but a die shrink to 12 nm; nothing exciting.

But my assessment was that the chip in the Xbox cannot draw the wattage that the card does, so either it is a miracle only made possible by MS engineers, or it just does not produce the same performance.

It is not bound by CUs but rather by watts.
It takes a CPU to play a game too, plus a motherboard, hard drives, controllers, etc... don't forget to factor this into your calculation.
I can't remember the FLOPs for the Xbox vs Polaris, but that will be your answer; maybe it is less?

Also I'm not expecting much from the RX 590; as you said, it's a die shrink. Maybe a 10-15% clock boost max.
Third re-spin now?
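
For what it's worth, here is a minimal Python sketch of that wall-power arithmetic. The 66 W idle and 310 W gaming figures are the ones quoted above from the [H] review; the platform-overhead number is purely a hypothetical placeholder, just to show why the simple delta overstates the card's own draw.

# Rough GPU power estimate from whole-system wall measurements.
# 66 W and 310 W are the review figures quoted in this thread;
# the overhead number below is a hypothetical guess, not a measurement.

IDLE_SYSTEM_W = 66      # whole-PC draw at idle
GAMING_SYSTEM_W = 310   # whole-PC draw under a gaming load

# Naive estimate: assume everything except the GPU draws the same
# at idle as under load. This gives the 244 W figure used above.
naive_gpu_w = GAMING_SYSTEM_W - IDLE_SYSTEM_W
print("naive GPU estimate:", naive_gpu_w, "W")   # 244 W

# The CPU, board and PSU losses also climb under load, so part of
# that delta isn't the GPU. Subtracting an assumed overhead shows
# how quickly the estimate moves.
ASSUMED_PLATFORM_INCREASE_W = 60  # hypothetical, for illustration only
adjusted_gpu_w = naive_gpu_w - ASSUMED_PLATFORM_INCREASE_W
print("adjusted GPU estimate:", adjusted_gpu_w, "W")  # 184 W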
 