AMD Navi Made for Sony - Less Vega for You

FrgMstr

Jason Evangelho is suggesting that Vega's R&D suffered greatly due to AMD's collaboration with Sony, which sucked up resources that would have delivered a better Vega to PC gamers. Certainly a good read! That all said, some sources have told me this story as well, and others have said it is patently false, which gives me the impression that the truth lies somewhere in the middle. When it is all said and done, the fact is that AMD has sold pretty much every chip it had the production capacity to support, so it is a win for AMD, even if not for PC gaming. When you throw in the mining market and how it affected prices at the retail level, I am not sure any of this would have made a difference to gamers and enthusiasts anyway. Many of us are waiting on next-gen and hoping the mining craze continues to roll off a cliff and into a huge fireball. Thanks cageymaru.


Speaking to industry sources this week under conditions of anonymity, I've learned that the PS5 will use both AMD's Zen and Navi graphics architectures. What isn't clear is whether the PS5 will incorporate a beefy SoC (system on a chip) or use separate Ryzen and Navi-based components.

But the collaboration came at the expense of Radeon RX Vega and other in-development projects. Allegedly, Koduri saw up to a massive 2/3 of his engineering team devoted exclusively to Navi against his wishes, which resulted in a final RX Vega product Koduri was displeased with, as resources and engineering hours were much lower than anticipated. As I mention in my companion report, the implication is that AMD CEO Dr. Lisa Su wanted to devote more energy to its semi-custom business than its desktop graphics division.
 
I doubt that this is true. Vega has some major power-scaling problems, so much so that they had to use HBM2 instead of GDDR5 (according to Buildzoid, a GDDR5-based Vega 64 would use over 500 watts).

And since you can only change things at the start of the process, not in the middle, that ship would have sailed long ago. Nothing ever hinted that this has anything to do with how AMD/RTG operates, because other GPUs suffered from the same scaling problems.
 
Navi would be "new", so any potential problems, or ways to keep it "where they would like it to be", are very much still open to be worked on.

Vega did not scale the greatest when it comes to clock speed and the power required to get there, and neither did/does the RX 4xx/5xx series. However, tuning the voltages apparently makes a substantial impact on how much power they end up using (and a noticeable difference in heat output as well).
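To put rough numbers on why voltage tuning has such an outsized effect: dynamic power in a chip scales roughly with frequency times voltage squared, so a modest undervolt at the same clock cuts power disproportionately. A minimal sketch of that relationship; the clocks and voltages below are illustrative assumptions, not measured Vega data:

```python
# Rough dynamic-power model: P_dyn ~ f * V^2 (capacitance folded into the ratio).
# The clocks/voltages below are illustrative assumptions, not measured Vega data.

def relative_power(freq_mhz, volts, base_freq_mhz=1536.0, base_volts=1.2):
    """Power relative to a baseline clock/voltage, using P ~ f * V^2."""
    return (freq_mhz / base_freq_mhz) * (volts / base_volts) ** 2

# Same clock, undervolted from 1.20 V to 1.05 V:
print(f"1536 MHz @ 1.05 V: {relative_power(1536, 1.05):.2f}x baseline power")  # ~0.77x
# Chasing ~200 MHz more, which also demands more voltage:
print(f"1750 MHz @ 1.25 V: {relative_power(1750, 1.25):.2f}x baseline power")  # ~1.24x
```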

AMD is going to go where the $$$$ is; there has never been a shadow of a doubt about this. They were fighting to stay alive for a very long time: Bulldozer (along with their APUs) and many generations of GPU development were "hurting" because they were doing whatever they could just to keep the lights on and the workers paid.

AMD has had a chance to learn from what BD/PD/SR could do (and could not do), and that helped Ryzen be what it is. I think the same concept applies to their GPUs: from the Radeon 7xxx series all the way to the current RX 5xx and Vega, they have a bunch of information about what they could do, what they have done, and what they need to do.

If anything, even if Navi were just an "optimized" Vega core with a focus on making it a lean fighting machine (that is, voltage tuning), it likely would make a very noticeable impact, letting it be used where current Vega cannot. They were able to stuff Vega onto a CPU die for an APU, after all, so it leads me to believe that should they take the time to really focus on voltages along with core optimizations, the sky really is the limit for what Navi could end up being.

Who cares where it comes from or what it uses (HBM, GDDR6, etc.), as long as it performs well and does not cost a bunch of $ or power to run? At least they are likely to make sure it is well built (not cheaping out on capacitor/VRM or thermal interface choices).


For the amount of hate consoles get over games being made on them and ported to PC, why not do the same with the hardware components? Maybe this will be the "new" sweet-spot strategy, which was generally quite successful for AMD: leave room above and below to nail nearly the entire market, as first appeared with their CPUs and GPUs around RV770 (the Radeon 4000 series).

Keep it "lean" for consoles and make it "mean" by scaling the design up for PC systems. Hell, it works pretty well for console-to-PC gaming (mostly), so there is no reason it could not also work for the components (such as the CPU and GPU), where AMD has to keep power budgets and proper tuning at center stage.

IMO it just goes to show how much NV really chopped out of their "newest" parts to let them clock up really high and still maintain their power budget. Either NV really chopped a crap ton out, or AMD has not had the time ($ and engineering) to optimize things as well as they could have over the last couple of generations (eating crackers and counting nickels to pay rent is hard). ^.^
 
Who cares where it comes from or what it uses (HBM, GDDR6, etc.), as long as it performs well and does not cost a bunch of $ or power to run? At least they are likely to make sure it is well built (not cheaping out on capacitor/VRM or thermal interface choices).

If it had used GDDR5, the cost would be about $160 less; subtract the price of 8 GB of GDDR5 from that $160 and you can figure out how much cheaper the Vega 56 and Vega 64 would be.
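As a back-of-the-envelope version of that math (the ~$160 figure is the claim above; the GDDR5 price is an assumed illustrative number, not a quoted BOM cost):

```python
# Back-of-the-envelope BOM saving from swapping HBM2 for GDDR5 on Vega.
# The ~$160 HBM2+interposer figure is the claim from the post above;
# the GDDR5 price is an assumed illustrative number, not a quoted cost.
hbm2_plus_interposer = 160.00  # $, claimed above
gddr5_8gb = 65.00              # $, assumption (~$8 per GB)

saving = hbm2_plus_interposer - gddr5_8gb
print(f"Hypothetical per-card saving: ${saving:.2f}")  # $95.00
# So a $499 Vega 64 might land near $404, ignoring any PCB/VRM changes.
```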
 
I doubt that this is true. Vega has some major power-scaling problems, so much so that they had to use HBM2 instead of GDDR5 (according to Buildzoid, a GDDR5-based Vega 64 would use over 500 watts).

And since you can only change things at the start of the process, not in the middle, that ship would have sailed long ago. Nothing ever hinted that this has anything to do with how AMD/RTG operates, because other GPUs suffered from the same scaling problems.
AMD was already invested in HBM with their flagship Fury line. Why would they downgrade to GDDR5 for Vega? Doesn't make sense. Sounds like rumor mill.
 
Whatever anyone seems to think, there will be a 7nm consumer Vega before Navi or whatever the next-gen arch is. It would be foolish of AMD not to refresh this thing.
 
AMD was already invested in HBM with their flagship Fury line. Why would they downgrade to GDDR5 for Vega? Doesn't make sense. Sounds like rumor mill.
Why don't you check Buildzoid's channel and look up one of the many Vega videos?

HBM uses a lot less power. That is no rumour...
You can check the [H] review for what Vega draws power-wise; see the difference?
If Vega had used GDDR5 instead of HBM2 it would have been a good deal cheaper and AMD would have made more profit.
 
I mean, it is pretty clear that both MS and Sony will continue to use AMD to help design their SoCs, and when you are going to sell millions of them, that would be a priority for the company. But I doubt that is causing issues for the rest of the product lines; if anything, the extra money should help.
 
Why don't you check Buildzoid's channel and look up one of the many Vega videos?

HBM uses a lot less power. That is no rumour...
You can check the [H] review for what Vega draws power-wise; see the difference?
If Vega had used GDDR5 instead of HBM2 it would have been a good deal cheaper and AMD would have made more profit.

I think it's obvious that AMD was going to use HBM2 for its performance regardless of its power consumption.
 
This is the point I have made time after time: AMD doesn't have nVidia's budget, or Intel's.

It's my personal hope that someone takes AMD's graphics division away outright and properly funds it.
 
Frankly, I believe the report. It makes total sense when viewed in the context of the previous report regarding AMD's financial woes and what they had to focus on. Developing for consoles and semi-custom provides more assured money.

In terms of why some are saying yes and others are denying it, my guess is the dividing line is between the engineers who lived it and the marketers who are keeping an eye on the future, knowing AMD will eventually return to the high end and not wanting to damage the brand.

That said, it looks like high-end AMD video cards are still going to be 3 years out - 2021.
 
I'm kinda surprised I'm not seeing MS and their Xbox with these Navi/Sony rumours. Perhaps MS is going in a different direction for their next Xbox GPU?
 
I'm kinda surprised I'm not seeing MS and their Xbox with these Navi/Sony rumours. Perhaps MS is going in a different direction for their next Xbox GPU?
It's been MS's style since the 360 to let Sony fund the R&D and then stop by AMD and be like, soooooo, whaddya got?
 
I doubt that this is true. Vega has some major power-scaling problems, so much so that they had to use HBM2 instead of GDDR5 (according to Buildzoid, a GDDR5-based Vega 64 would use over 500 watts).

And since you can only change things at the start of the process, not in the middle, that ship would have sailed long ago. Nothing ever hinted that this has anything to do with how AMD/RTG operates, because other GPUs suffered from the same scaling problems.

I find it far more likely that they wanted to build an HBM2 product rather than being compelled to later down the road: keep pushing the HBM technology, keep working with it and improving it, and help nudge its cost down over time as a consequence. High-price GPUs subsidise this process.

Bringing memory closer to the GPU, for all these big-throughput devices, is the future. Using 'external' DRAM is effective for the time being, but it's archaic and wheezy and, cost aside, solidly inferior.

What isn't widely known about HBM is that it makes for a simpler, less power-hungry chip. There are real area and power savings on the actual GPU die before you even get into the power consumption of the memory modules themselves.
 
If it had used GDDR5, the cost would be about $160 less; subtract the price of 8 GB of GDDR5 from that $160 and you can figure out how much cheaper the Vega 56 and Vega 64 would be.

And you'd have a turd on your hands: Vega 64 is up against its power limit constantly, and now you'd give the core even less power... yay!
You'd give the Vega 64 Vega 56-like performance, the Vega 56 would drop to GTX 1070 levels, and the GPU die (the most costly part) would still be no cheaper to make.

HBM2 vs GDDR is about $60-70 more, but it consumes way less power. There is NO way Vega would be anything on GDDR; it's not struggling because of HBM.
All the problems with Vega are in the GPU core itself: not HBM, not drivers, not the PCB. It's simply architecture and process node issues.
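For a rough sense of the power delta being argued about here, a sketch of the memory-subsystem comparison; every number below is an assumption for illustration, since real figures vary by memory speed, vendor, and PHY design:

```python
# Rough memory-subsystem power comparison for an 8 GB card.
# Every number here is an assumption for illustration; real figures
# vary by memory speed, vendor, and PHY design.
gddr5_w_per_chip = 2.5   # assumed, 8 Gbps-class GDDR5 chip
gddr5_chips = 8          # 8 x 1 GB
gddr5_phy_w = 10.0       # assumed on-die controller/PHY overhead

hbm2_w_total = 15.0      # assumed, two stacks plus a much simpler PHY

gddr5_total = gddr5_w_per_chip * gddr5_chips + gddr5_phy_w  # 30 W
print(f"GDDR5: ~{gddr5_total:.0f} W   HBM2: ~{hbm2_w_total:.0f} W")
print(f"Delta: ~{gddr5_total - hbm2_w_total:.0f} W freed up for the GPU core")
```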
 
And you'd have a turd on your hands: Vega 64 is up against its power limit constantly, and now you'd give the core even less power... yay!
You'd give the Vega 64 Vega 56-like performance, the Vega 56 would drop to GTX 1070 levels, and the GPU die (the most costly part) would still be no cheaper to make.

HBM2 vs GDDR is about $60-70 more, but it consumes way less power. There is NO way Vega would be anything on GDDR; it's not struggling because of HBM.
All the problems with Vega are in the GPU core itself: not HBM, not drivers, not the PCB. It's simply architecture and process node issues.

Correct, Vega could not function without HBM2, but that is my argument to begin with: the design does not scale and uses too much power. If you compare the GTX 1070 vs the Vega 56, the difference in power is clear, yet the GTX 1070 uses a GDDR solution. NVIDIA's GPU design is incredibly efficient compared to AMD's.

And HBM2 requires extra manufacturing steps as well, so all in all it is harder to make and more expensive. But all of these attributes were the same on older cards; that is why I doubt that the story is true.
I find it far more likely that they wanted to build an HBM2 product rather than being compelled to later down the road: keep pushing the HBM technology, keep working with it and improving it, and help nudge its cost down over time as a consequence. High-price GPUs subsidise this process.

Bringing memory closer to the GPU, for all these big-throughput devices, is the future. Using 'external' DRAM is effective for the time being, but it's archaic and wheezy and, cost aside, solidly inferior.

What isn't widely known about HBM is that it makes for a simpler, less power-hungry chip. There are real area and power savings on the actual GPU die before you even get into the power consumption of the memory modules themselves.

On Vega there is no other way than HBM2; that is what I said in my first post. The reason I mention GDDR5 is that all of the older cards had the same problem (scaling up means a lot more power; check R9 290X performance). And the suggestion that Vega was abandoned simply does not ring true: none of the problems with Vega come from neglected R&D, because these are not issues you magically get midway through a design.
 
Forest for the trees.

The fact that every single studio is working yesterday, today, and tomorrow on AMD GPUs is something you don't drop the ball on. It's no wonder Lisa Su was made CEO; she actually understands what needs to be done.

How can folks not understand? Every major studio has developers working with AMD tools and GPUs, every single day and for years to come. If they hadn't won the console contracts, the fact that AMD sits at single digits in the Steam hardware survey would have been the death of AMD GPUs.

The tech media can portray an even fight, but the bean counters at the big studios, and the simple lack of budget at smaller ones, would dictate that games not be QA'd against AMD GPUs. Look at the GPU numbers: their biggest current-gen product is the RX 480 at half a percent. Even the 1080 Ti is above 1%, and that pales against the 1060 and 1050 numbers.
 
Why don't you check Buildzoid's channel and look up one of the many Vega videos?

HBM uses a lot less power. That is no rumour...
You can check the [H] review for what Vega draws power-wise; see the difference?
If Vega had used GDDR5 instead of HBM2 it would have been a good deal cheaper and AMD would have made more profit.
Correct, Vega could not function without HBM2, but that is my argument to begin with: the design does not scale and uses too much power. If you compare the GTX 1070 vs the Vega 56, the difference in power is clear, yet the GTX 1070 uses a GDDR solution. NVIDIA's GPU design is incredibly efficient compared to AMD's.

And HBM2 requires extra manufacturing steps as well, so all in all it is harder to make and more expensive. But all of these attributes were the same on older cards; that is why I doubt that the story is true.


On Vega there is no other way than HBM2; that is what I said in my first post. The reason I mention GDDR5 is that all of the older cards had the same problem (scaling up means a lot more power; check R9 290X performance). And the suggestion that Vega was abandoned simply does not ring true: none of the problems with Vega come from neglected R&D, because these are not issues you magically get midway through a design.

The power and clock scaling issues are real; there's no arguing that. But solely on the point of HBM: it is going to get pushed, be it on worthy or unworthy vehicles.

From what I've read, a great deal of Vega's extra die size over what came before went into trying to break that scaling barrier. But it only added about 200 MHz and a ton more power. An experiment, I guess, and it didn't work out.
 
Well, if Navi comes out and creams anything NVIDIA's got, then maybe all's not lost... I mean, Vega isn't really that bad; it just doesn't have the performance crown. If the V64 hovers around 1080-level performance, that's no slouch of a card in my eyes.
 
That could well be; they announced today that there will be a new Xbox in 2020:

https://www.digitaltrends.com/gaming/next-xbox-reportedly-coming-2020/

I don't think Navi will be available until 2021. So either MS is going with Intel or nVidia, or else they are moving on to a streaming console.
Not necessarily... the custom chip they are getting from AMD will likely have most, if not all, of Navi's features. That's typically how it has worked the last few generations, actually.
 
This is the point I have made time after time: AMD doesn't have nVidia's budget, or Intel's.

It's my personal hope that someone takes AMD's graphics division away outright and properly funds it.

This would make hitting foundry contracts (GloFo, although are any GPUs fabbed there?) much more difficult. Given the enormous complexity of these chips nowadays, bigger is generally better, sadly. I don't think a separate AMD graphics division would suddenly find new money.
 
Uh...the PS3 uses an Nvidia chip. :confused:
Heh, oops.

So with the PS3, Sony developed Cell with IBM, and then MS came along and repurposed three of the main cores from the Cell cluster for the 360.

With the PS4, Sony did the APU design, and MS came along and did a cut-down version of it.
 
Heh, oops.

So with the PS3, Sony developed Cell with IBM, and then MS came along and repurposed three of the main cores from the Cell cluster for the 360.

With the PS4, Sony did the APU design, and MS came along and did a cut-down version of it.

I would like to now remind everyone: the Cell was a beast. At peak theoretical throughput, it's faster than either the PS4 or XB1 main CPU. It's just too hellish to code for, though, especially with the memory limitations the PS3 had.

As for the HBM2 debate, remember that GPUs have a relatively long processing window (ideally <16 ms) to create a single frame. Memory bandwidth isn't a massive concern, especially since most graphics workloads send data to the GPU well before the GPU needs it, and the on-die latency to access data already in VRAM is minimal compared with the amount of time that data is operated on by the rest of the GPU.

The only real beneficiaries of increased memory bandwidth are compute workloads (where you tend to do a lot more memory access) and game engines that are memory-bandwidth limited (most deferred-rendering engines). For everything else, memory bandwidth really doesn't affect performance that much; total memory capacity and shader processing power matter significantly more, and those are the two areas where NVIDIA has historically focused its resources.
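To make the frame-window argument concrete, here is a quick calculation of how much data a GPU could even touch within one 60 fps frame; the HBM2 number is Vega 64's public spec, while the GDDR5 figure is an assumed 256-bit, 8 Gbps configuration purely for comparison:

```python
# How much data a GPU can touch within a single 60 fps frame window.
# HBM2 bandwidth is Vega 64's public spec (~484 GB/s); the GDDR5 figure
# assumes a 256-bit bus at 8 Gbps (256 GB/s) purely for comparison.
FRAME_BUDGET_S = 1 / 60  # ~16.7 ms per frame at 60 fps

for name, gb_per_s in [("HBM2 (Vega 64)", 484), ("GDDR5 (assumed)", 256)]:
    per_frame_gb = gb_per_s * FRAME_BUDGET_S
    print(f"{name}: ~{per_frame_gb:.1f} GB accessible per frame")
# Either way, far more than a typical per-frame working set, which is
# why bandwidth is rarely the limiter outside bandwidth-heavy engines.
```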

By using HBM2, AMD is improving the part of the GPU that is only the third most important for gaming performance, while sabotaging the part that matters most (shader performance) through higher power draw (more heat). While a minority of game engines benefit from this, the majority of games will see lower performance on a higher-cost part (HBM2 is expensive compared to GDDR5X/GDDR6).
 
That could well be; they announced today that there will be a new Xbox in 2020:

https://www.digitaltrends.com/gaming/next-xbox-reportedly-coming-2020/

I don't think Navi will be available until 2021. So either MS is going with Intel or nVidia, or else they are moving on to a streaming console.

I have no idea where you read that. Navi is coming in 2019; 2020 and beyond is their next-gen GPU. There is no way Navi comes 3 years from now. Not happening. There is not even a rumor about Navi coming in 2021.
 
I would like to now remind everyone: the Cell was a beast. At peak theoretical throughput, it's faster than either the PS4 or XB1 main CPU. It's just too hellish to code for, though, especially with the memory limitations the PS3 had.
In terms of floating-point (FPU) operations, yes, the Cell far exceeded the 8-core Jaguar CPUs in both the PS4 and XBone consoles, but in terms of integer operations, the Cell is far less capable.
The FPU units of the Cell (the Synergistic Processing Elements, or SPEs) were much more powerful than those of the 8-core Jaguar, but the GPU in the AMD APU picks up more than enough of the FPU slack in that area. NVIDIA PhysX could also be run on the Cell's SPEs, since the PS3's Series 7 (~7800 GTX class) GPU was not capable of GPU physics, being one generation before the Series 8 (G80), which was.

As for general-purpose integer work, the single PowerPC core (the Power Processing Element, or PPE) in the Cell has about 1/20 of the integer processing power of the Jaguar's 8 cores (assuming full SMP, not single-thread).
You are definitely right about the Cell being difficult to program for, though, and I'm sure many PS3 developers don't miss it at all!
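For rough scale on the floating-point claim, a quick peak-throughput comparison using commonly cited figures (treat this as a sketch, not a benchmark; real-world utilization on the Cell was notoriously far below peak):

```python
# Peak single-precision throughput, Cell (PS3) vs 8-core Jaguar (PS4).
# Commonly cited figures; a rough sketch, not a benchmark.
CELL_SPES = 6           # SPEs available to PS3 games
SPE_GFLOPS = 25.6       # 3.2 GHz x 8 SP FLOPs/cycle per SPE
PPE_GFLOPS = 25.6       # the PPE's own peak at 3.2 GHz
cell_peak = CELL_SPES * SPE_GFLOPS + PPE_GFLOPS  # ~179 GFLOPS

JAGUAR_CORES = 8
JAGUAR_GHZ = 1.6
FLOPS_PER_CYCLE = 8     # AVX: 8 SP FLOPs/cycle/core
jaguar_peak = JAGUAR_CORES * JAGUAR_GHZ * FLOPS_PER_CYCLE  # ~102 GFLOPS

print(f"Cell peak:   ~{cell_peak:.0f} GFLOPS")
print(f"Jaguar peak: ~{jaguar_peak:.0f} GFLOPS")
```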



On the topic of GDDR5 not being used with AMD's Vega GPUs, I do remember reading somewhere that power draw would have increased drastically had it been used, and performance would have suffered greatly due to the architectural changes made in Vega's design.
If I remember correctly, even without HBM2 the cost wouldn't have decreased much by going with GDDR5, because of the extra board area and adjacent components needed on the PCB to accommodate GDDR5 memory, which would push costs back up.

Take what I'm saying with a grain of salt for now; I will try to find the article that made these statements. Hopefully I can track it down again, since it wasn't posted too long ago.
 
This will be the death of AMD in the PC gaming market.

This is strange to say, but Intel are our only hope. nVidia are evil, and need competition to keep them honest, which is something they forgot years ago.
 
Bottom line is that the bills have got to be paid. They also basically had to reinvent themselves after the many hardships of the early 2000s. Whether blue or green, there were a number of shady dealings working against them, and other strategies had to be created. They've created some good business partnerships with MS/Sony and, in turn, an economic infrastructure, with some tinkering in desktop GPUs just to keep those departments alive and moving while maintaining cash flow. I agree with Kyle: the truth is somewhere in the middle.
 
That could well be; they announced today that there will be a new Xbox in 2020:

https://www.digitaltrends.com/gaming/next-xbox-reportedly-coming-2020/

I don't think Navi will be available until 2021. So either MS is going with Intel or nVidia, or else they are moving on to a streaming console.
There was no announcement. The widely cited source is Thurrott, who cites no sources for the information. All we know officially is what Phil Spencer said at E3: that a new console is being worked on.
 
Well, if Navi comes out and creams anything NVIDIA's got, then maybe all's not lost... I mean, Vega isn't really that bad; it just doesn't have the performance crown. If the V64 hovers around 1080-level performance, that's no slouch of a card in my eyes.

AMD doesn't ever have to 'cream' NVIDIA's top end in the gaming segment; they need to be competitive in the largest segments. The RX 480/RX 580 and Vega 56 are pretty on point and caused NVIDIA to make something like 12 different SKUs and revisions. If these cards weren't also processing powerhouses for mining, they would have slain NVIDIA this gen without the price inflation.
 
This will be the death of AMD in the PC gaming market.

This is strange to say, but Intel are our only hope. nVidia are evil, and need competition to keep them honest, which is something they forgot years ago.


Let's hope and pray that Intel saves us.


AMD sure can't. And from past experience, anything they release is always a letdown, with a ton of excuses to follow.


I am willing to bet anything that the new GPUs they release in 2019 will be about as fast as a 1080. Watch. They don't have the people or the money to do any better.
 
Why don't we all just TAKE A BREATH and wait for the next generation cards to actually ship before we start preaching doom and gloom?

Totally agree. If I had a nickel for every post on one of these kinds of threads that pop up on the internet every time a new cycle is about to begin, I could probably buy one of these companies.
 
Totally agree. If I had a nickel for every post on one of these kinds of threads that pop up on the internet every time a new cycle is about to begin, I could probably buy one of these companies.

It's also funny that some here view Vega as a complete failure, even though they are selling all they can produce and it performs on par with the third most powerful card currently on the market (behind the Titan Xp and 1080 Ti, on par with the 1080, which it can even beat if cooled and overclocked/undervolted properly). It also happens to be the best card you can get if you are running Linux... or using a FreeSync monitor... or one of the new Samsung TVs with FreeSync.

I guess some people just need to justify their own purchases... enough so that they can't even conceive of why someone would buy something else. Kind of sad, really.
 
It's also funny that some here view Vega as a complete failure, even though they are selling all they can produce and it performs on par with the third most powerful card currently on the market (behind the Titan Xp and 1080 Ti, on par with the 1080, which it can even beat if cooled and overclocked/undervolted properly). It also happens to be the best card you can get if you are running Linux... or using a FreeSync monitor... or one of the new Samsung TVs with FreeSync.

I guess some people just need to justify their own purchases... enough so that they can't even conceive of why someone would buy something else. Kind of sad, really.
All that, plus the price (MSRP, in any case) is in line with performance.
 
It's also funny that some here view Vega as a complete failure, even though they are selling all they can produce and it performs on par with the third most powerful card currently on the market (behind the Titan Xp and 1080 Ti, on par with the 1080, which it can even beat if cooled and overclocked/undervolted properly). It also happens to be the best card you can get if you are running Linux... or using a FreeSync monitor... or one of the new Samsung TVs with FreeSync.

I guess some people just need to justify their own purchases... enough so that they can't even conceive of why someone would buy something else. Kind of sad, really.

Totally agree. If Vega sucked for mining I would still be 100% pleased with the pair in my gaming rig (although I would not own 7 of them as I do now ;) ). They perform better than I expected, since I water-cool and know how to tune a card, but even bog-standard performance was better than a 1070, and I get FreeSync.

It's funny to hear how bad Vega supposedly is when a stock 64 is just as fast as a 1080 and can be pushed even higher when tuned. AMD has had solid driver releases for years now; I haven't had a single major issue going back to my 6970, 7950 CrossFire, and 290 CrossFire.

It just cracks me up. Do I wish AMD would release a 1080 Ti killer so I could jump to 4K at 60+ FPS? Sure. But until that happens (and Nvidia does the same) I'm happy with butter-smooth 1440p performance.
 