Radeon RX Vega Discussion Thread

For anyone not interested in technical jargon about the architecture, this is the money shot.

[attached image: upload_2017-3-24_21-11-55.png]
 
The advantage isn't just in power consumption- and yeah, they'll hit a wall there- but also package size.

And yeah, while I'm skeptical that AMD's architecture will be truly competitive, if it is then it'll have legs on HBM2- and that will translate to mobile.

Of course, if there's something to be had there, expect Nvidia to jump in too- HBM isn't an 'AMD thing'.
 
For anyone not interested in technical jargon about the architecture, this is the money shot.
Run was done with 2 gigs of VRAM, just saying.
The advantage isn't just in power consumption- and yeah, they'll hit a wall there- but also package size.
Cooling of these is what makes the laptops gargantuan, not the size of the card or memory chips around it.
 
Yeah, pretty much everything they showed had already been shown in Raja's presentation and the first architecture info release.
 
Good to know, I didn't catch that point. In any case, DXMD can saturate all 12GB on the TitanXP, so this would still be relevant if Vega ships with 4 or 8GB.
Fairly certain most games overprovision allocated memory, so no, it probably won't be of much use at 8GB, not without running resolutions and settings Vega will hardly cope with.
 
The advantage isn't just in power consumption- and yeah, they'll hit a wall there- but also package size.

And yeah, while I'm skeptical that AMD's architecture will be truly competitive, if it is then it'll have legs on HBM2- and that will translate to mobile.

Of course, if there's something to be had there, expect Nvidia to jump in too- HBM isn't an 'AMD thing'.

There is no advantage to having HBM2 in mobile. The package size advantage that HBM has is actually detrimental in mobile due to higher cooling requirements in a smaller area.

Just look at the previous generation. GTX 980 was in gaming laptops everywhere. Meanwhile, Fiji was nowhere to be seen in gaming laptops. HBM doesn't make a positive difference.
 
Fiji was not built on 14nm; Fiji was a big-ass chip on a different process that didn't really benefit it. Vega, on the other hand, is on 14nm and has HBM2, which improves on HBM. So I am amazed how many people here just forget that and seem to know everything about Vega and HBM2 without even seeing whether it's power efficient or a power hog. Let's wait until it comes out.
 
Fiji was not built on 14nm; Fiji was a big-ass chip on a different process that didn't really benefit it. Vega, on the other hand, is on 14nm and has HBM2, which improves on HBM. So I am amazed how many people here just forget that and seem to know everything about Vega and HBM2 without even seeing whether it's power efficient or a power hog. Let's wait until it comes out.
Do you realize that Vega 10 is literally over 500mm^2? It itself is gargantuan.
 
Fiji was not built on 14nm; Fiji was a big-ass chip on a different process that didn't really benefit it. Vega, on the other hand, is on 14nm and has HBM2, which improves on HBM. So I am amazed how many people here just forget that and seem to know everything about Vega and HBM2 without even seeing whether it's power efficient or a power hog. Let's wait until it comes out.

Maxwell wasn't on 14nm either, and yet desktop Maxwell chips made their way into gaming laptops just fine. Nano on average uses 30W more than the 980, at the same level of performance. So why didn't we see Fiji chips in gaming laptops?
 
Maxwell wasn't on 14nm either, and yet desktop Maxwell chips made their way into gaming laptops just fine. Nano on average uses 30W more than the 980, at the same level of performance. So why didn't we see Fiji chips in gaming laptops?

Cuz it was Fiji, and AMD doesn't have the same hold on gaming laptops. And Fiji was power hungry. How does any of this apply to Vega? Care to explain without having a card to test? Even if it is great, that doesn't mean it will go into gaming laptops, simply because Nvidia has that shit covered and Vega is too late. So I highly doubt AMD is too concerned with gaming laptops. What is the market share of gaming laptops? AMD is primarily going to be focused on the Zen/Vega APU. Yeah, we might see them in gaming laptops here and there, but it won't be much. It wouldn't take much to run these in laptops with slightly slower clocks and a cut-down version.

Fiji didn't go in any laptops cuz it sucked. You can't apply the same to Vega without having any hard numbers.

When was the last time AMD was killer in gaming laptops? At no point did anyone mention they were planning on dominating that with Vega. Let's see where it lands. Then we can choose to bitch.
 
Good to know, I didn't catch that point. In any case, DXMD can saturate all 12GB on the TitanXP, so this would still be relevant if Vega ships with 4 or 8GB.

The scary part is they even plan 4GB Vega 10 cards. But the cost and performance may have forced them to it.

For the VRAM part?
[attached image: upload_2017-3-25_9-54-16.png]


But that's already at 15-20FPS at 4K. And regular 2GB cards have no issue as such since they also just stream memory.
 
500mm^2 with HBM or without?

It's without; HBM is not part of the actual die. This chip is a powerhouse when it comes to deep learning. AMD needs to get the big bucks there and hopefully reinvest them, along with decent penetration from Ryzen.

Vega will do well, but I think AMD currently doesn't have the R&D to work on both gaming and deep learning. Vega seems to have the blueprint of a great card that is probably a monster in everything but gaming.
 
Just watched the video; didn't realize this won't be GCN at all. It's a brand new architecture, so chances are drivers won't be optimized for this chip for a while after launch.
 
It's without; HBM is not part of the actual die. This chip is a powerhouse when it comes to deep learning. AMD needs to get the big bucks there and hopefully reinvest them, along with decent penetration from Ryzen.

Vega will do well, but I think AMD currently doesn't have the R&D to work on both gaming and deep learning. Vega seems to have the blueprint of a great card that is probably a monster in everything but gaming.

Yeah, I'm with you; I don't really see it as a gaming chip first and foremost.
 
Saving 20-40 watts on memory power when the GPU consumes 200 is hardly an advantage, is it?


Skeptical you say? You look more wishful than that HBM APU guy.

HBM is actually a thermal disadvantage. It moves the memory chips from a place where they can be passively cooled (or damn close to it) to right next to the die. From a thermal perspective it's worse: you now have even more heat to dissipate from a tiny area.

Kinda OT but I wonder what the political % are for people who like AMD vs nVidia. I'm off to make a soapbox poll...
 
Feel free to use facts next time.

Making 2 claims without _any_ source is hardly cleaning up in the facts department. You want to claim anything else? Maybe you're on fire?

And you complained about hype, and even mentioned impulse buys; what kind of nonsense is that? You moved from "the hype is so bad everyone buys on hype" to disproving your own post, where you kept on going and linked some more nonsense here.

You keep moving goalposts. Good luck with your facts ...
 
HBM is actually a thermal disadvantage. It moves the memory chips from a place where they can be passively cooled (or damn close to it) to right next to the die. From a thermal perspective it's worse: you now have even more heat to dissipate from a tiny area.

Kinda OT but I wonder what the political % are for people who like AMD vs nVidia. I'm off to make a soapbox poll...

What's the benefit of it then, other than taking up no space directly on the board? Does it just make it easier for manufacturers? Any chance HBM will help save on cost? From what I hear, the memory itself is almost prohibitively expensive.
 
Making 2 claims without _any_ source is hardly cleaning up in the facts department. You want to claim anything else? Maybe you're on fire?

And you complained about hype, and even mentioned impulse buys; what kind of nonsense is that? You moved from "the hype is so bad everyone buys on hype" to disproving your own post, where you kept on going and linked some more nonsense here.

You keep moving goalposts. Good luck with your facts ...
OK, let's get on to the "Most people buy based on hype" part. I will concede that my wording choice was incorrect; I should've used "a lot of people" instead of "most people". But anyway...
http://www.apple.com/pr/library/201...-Plus-Top-Four-Million-in-First-24-Hours.html
Apple Announces Record Pre-orders for iPhone 6 & iPhone 6 Plus Top Four Million in First 24 Hours
http://www.apple.com/pr/library/201...hone-Sales-Top-10-Million-Set-New-Record.html
Apple® today announced it has sold over 10 million new iPhone® 6 and iPhone 6 Plus models, a new record, just three days after the launch on September 19.

So at least 4 million out of 14 million are preorders, basically impulse buys based on hype. And the preorder number is only for the first 24 hours, so rest assured it was even higher closer to launch day; the ratio between preorders and post-launch sales will skew even more towards preorders. That's 29% at this figure.

How about Pokemon Sun/Moon? In the US alone, over 720,000 preorders. First week sales total 1.55 million in the US. So 720,000/2,270,000. Around 32%.
http://www.vgchartz.com/preorders/42687/USA/
http://www.vgchartz.com/weekly/42694/USA/

How about actual market research?
https://moz.com/blog/new-data-reveals-67-of-consumers-are-influenced-by-online-reviews
The results revealed that online reviews impact 67.7% of respondents' purchasing decisions. More than half of the respondents (54.7%) admitted that online reviews are fairly, very, or absolutely an important part of their decision-making process.

So around 32% of 1,000 respondents buy based not on reviews but on other factors. That figure is very close to the above 2 figures. Coincidence, perhaps? Or the hype effect?
Our research also uncovered that businesses risk losing as many as 22% of customers when just one negative article is found by users considering buying their product. If three negative articles pop up in a search query, the potential for lost customers increases to 59.2%. Have four or more negative articles about your company or product appearing in Google search results? You’re likely to lose 70% of potential customers.

Powergate and the fact that RX 480 had stock issues is why AMD didn't sell more cards. Not because people don't buy based on hype.
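For anyone who wants to sanity-check the ratios above, this is just the quoted figures divided out; a quick sketch where the only inputs are the numbers from the linked pages:

```python
# Preorder share of total early sales, using the figures quoted above.
def preorder_share(preorders, total):
    """Fraction of early sales that were locked in before launch."""
    return preorders / total

# iPhone 6/6 Plus: 4M preorders in 24h, 10M sold in the first 3 days.
iphone = preorder_share(4_000_000, 4_000_000 + 10_000_000)

# Pokemon Sun/Moon (US): 720k preorders, 1.55M first-week sales.
pokemon = preorder_share(720_000, 720_000 + 1_550_000)

print(f"iPhone 6:         {iphone:.0%}")   # 29%
print(f"Pokemon Sun/Moon: {pokemon:.0%}")  # 32%
```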
 
Whats the benefit of it then, other than taking up no space directly on the board? Does it just make it easier for manufacturers? Any chance HBM will help save on cost, from what I hear the memory itself is almost prohibitively expensive.

This is my best guess as I've seen this happen in a lot of companies.

Company is struggling, so they demand engineering be "innovative". Engineering shows the innovative things they can do, maybe even with some warnings or a list of the unknowns, and some of the features go up the chain.

Once a higher-up tells his boss something, many of them will never, under any circumstance, admit they were wrong, for fear of looking incompetent or weak.

So my guess is they started down the journey of HBM and here we are. Many of us at [H] predicted HBM gains would be minimal (or negative in the case of Fury), raise costs, and have supply issues. So anything besides that scenario doesn't make sense to me.

Maybe now they hope for costs to come down? HBM should be more advantageous as memory capacity increases. At 4GB, any armchair engineer knew it wasn't.
 
I would disagree that a small package is a handicap in mobile. That is ridiculous; you would need to know the overall power consumption. What are the chances of getting a 1080 Ti into a laptop compared to a Vega solution? What would be the max performance you could realistically use? Vega could be a much better solution and feasible for this, but we won't know until it launches.

Now, in the video it was said HBM2 was 5x more power efficient than GDDR5X. Relative to what? Size? Pin density? Sometimes I just hate the non-speak that pours forth from AMD.
 
I would disagree that a small package is a handicap in mobile. That is ridiculous; you would need to know the overall power consumption. What are the chances of getting a 1080 Ti into a laptop compared to a Vega solution? What would be the max performance you could realistically use? Vega could be a much better solution and feasible for this, but we won't know until it launches.

Now, in the video it was said HBM2 was 5x more power efficient than GDDR5X. Relative to what? Size? Pin density? Sometimes I just hate the non-speak that pours forth from AMD.

Assuming a top TDP of 150W per MXM slot, and assuming both Pascal and Vega have the same performance at the same power consumption level: so 120W for the GPU and 30W for the memory chips, give or take.

All Pascal mobile GPUs will only have to cool 120W in a 21mm-square area, with the 30W spread over multiple large memory chips on the MXM board. The memory chips don't even have to be efficiently cooled; thermal pads to let the cooler make contact with the chips are enough, and most of the time they piggyback on the GPU core cooling system. Not much extra engineering needed.

All Vega HBM2 GPUs will have to cool 150W in a 25mm x 25mm area. That's an extra 30W on the package that has to be efficiently and quickly removed, otherwise it will negatively affect GPU core temperatures, causing throttling and instability. You'd need extra engineering to make sure the IHS can spread this heat efficiently so the cooler can handle the extra load, or, if it's a bare die, extra engineering to ensure proper contact, and maybe even extra cooling headroom to remove the extra 30W in time.

Of course, over a long gaming session, both will arrive at a similar average temperature. However, Vega's cooling solution will have to be more robust, while the Pascal solution has more leeway. Heat density matters.
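Putting the argument above in numbers, here's a rough sketch using only the post's assumed figures (150W MXM budget split as 120W core + 30W memory); none of these are measurements:

```python
# Cooling-load comparison per the post's assumptions.
mxm_budget_w = 150
core_w = 120
memory_w = mxm_budget_w - core_w  # 30 W

# Pascal-style board: GDDR5 heat is spread over separate chips that only
# need thermal pads, so the main cold plate handles just the core.
pascal_plate_w = core_w

# Vega-style package: HBM2 stacks sit on-package next to the die, so the
# cold plate must pull the whole budget through one contact patch.
vega_plate_w = core_w + memory_w

extra_w = vega_plate_w - pascal_plate_w
print(f"Extra heat through the main cold plate: {extra_w} W "
      f"({extra_w / pascal_plate_w:.0%} more)")  # 30 W (25% more)
```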
 
Assuming a top TDP of 150W per MXM slot, and assuming both Pascal and Vega have the same performance at the same power consumption level: so 120W for the GPU and 30W for the memory chips, give or take.

All Pascal mobile GPUs will only have to cool 120W in a 21mm-square area, with the 30W spread over multiple large memory chips on the MXM board. The memory chips don't even have to be efficiently cooled; thermal pads to let the cooler make contact with the chips are enough, and most of the time they piggyback on the GPU core cooling system. Not much extra engineering needed.

All Vega HBM2 GPUs will have to cool 150W in a 25mm x 25mm area. That's an extra 30W on the package that has to be efficiently and quickly removed, otherwise it will negatively affect GPU core temperatures, causing throttling and instability. You'd need extra engineering to make sure the IHS can spread this heat efficiently so the cooler can handle the extra load, or, if it's a bare die, extra engineering to ensure proper contact, and maybe even extra cooling headroom to remove the extra 30W in time.

Of course, over a long gaming session, both will arrive at a similar average temperature. However, Vega's cooling solution will have to be more robust, while the Pascal solution has more leeway. Heat density matters.
You also have to consider room inside the laptop for sufficient cooling in both cases: a larger package means less space for cooling. Also, heat pipes are one way to effectively transfer heat from a small source to the outside of the laptop, and that's much easier on a smaller package than having multiple heat pipes for GPU and RAM. There are pluses and minuses to any design. The HBM package may have to run at a higher temperature to transfer the heat, due to the smaller surface area, but that can be designed in. We just have to see once it launches whether it's viable.
 
You don't save that much space with HBM2. And once you reach those TDP levels, you defeat the other purpose of the benefit. Had it been some 30-50W part, sure, then we could talk.

But as long as the GPU itself is so inefficient, forget it. Same reason Polaris is pretty much only in a few Apple models, with the issues it causes there.
 
Whats the benefit of it then, other than taking up no space directly on the board? Does it just make it easier for manufacturers? Any chance HBM will help save on cost, from what I hear the memory itself is almost prohibitively expensive.
HBM is faster, uses less power, and takes up less space (board footprint from being stacked, and memory controller area on die from easier termination of signals). An important thing people seem to be forgetting here is that HBM is both high bandwidth and low latency. GDDR is like relaxing memory timings to increase clockspeed; HBM is more like adding memory channels to increase bandwidth while keeping timings low.

While GPUs are designed to hide latency, there is still a cost in cache capacity/wave occupancy of doing so.
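A back-of-envelope illustration of the wide-and-slow vs narrow-and-fast trade-off; the clock and width figures below are typical published specs I'm assuming for the sketch, not numbers from this thread:

```python
# Peak bandwidth = bus width (bits) x transfer rate (GT/s) / 8 bits-per-byte.
def bandwidth_gbs(bus_width_bits, rate_gtps):
    return bus_width_bits * rate_gtps / 8

# GDDR5: narrow bus pushed to high transfer rates (looser timings).
gddr5 = bandwidth_gbs(256, 8.0)      # e.g. a 256-bit bus at 8 GT/s

# HBM2: very wide bus at a modest rate (two 1024-bit stacks assumed).
hbm2 = bandwidth_gbs(2 * 1024, 1.6)

print(f"GDDR5: {gddr5:.0f} GB/s")  # 256 GB/s
print(f"HBM2:  {hbm2:.0f} GB/s")   # 410 GB/s
```

Similar bandwidth either way, but HBM gets there with many slow channels instead of a few fast ones, which is why it can keep timings tight.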

Some tidbits of Vega info here (GCN 5 is what AMD is calling its architecture internally)
GDC2017-Advanced-Shader-Programming-On-GCN

Better link on GPUOpen.

Pretty limited tidbits beyond expected: packed math, HBCC, etc. The drivers hinted more at actual changes. That presentation was interesting, confirmed most of what I was speculating a year or so ago in regards to concurrency, async behavior, and scheduling. Even have a Vulkan extension now for handling ratios of compute and graphics explicitly for concurrency.

Bigger takeaways I've noticed recently are that Data Parallel Primitives, Sub DWORD Addressing (SDWA), and "Moverel" (all relatively recent additions generally dealing with cross-lane communication) were removed and likely replaced by "AperatureRegs" (only new feature listed). So they likely added full crossbars with a nearby op cache(Aperature?) in the SIMDs as well as enhancing scalar capabilities(good chunk of the new driver changes deal with moving SGPRs).
 
Interestingly enough, when talking about packed math they showed tessellation demos; I think that is where their tessellation throughput increases will come from.
 
If Vega is indeed close, AMD is doing a damn good job of plugging leaks for now. Oh well. At this point I don't think AMD is too confident in Vega; all their presentations have been about packed math and HBCC shit. Looks like Vega will sell well in the deep learning area and will probably be a disappointment in gaming.

If it gets close to the Ti and is priced close to the 1080 or a little higher, then count me in. But I am growing less and less optimistic as time goes on.
 
If Vega is indeed close, AMD is doing a damn good job of plugging leaks for now. Oh well. At this point I don't think AMD is too confident in Vega; all their presentations have been about packed math and HBCC shit. Looks like Vega will sell well in the deep learning area and will probably be a disappointment in gaming.

If it gets close to the Ti and is priced close to the 1080 or a little higher, then count me in. But I am growing less and less optimistic as time goes on.


Vega will not sell at all in deep learning; there is no software that supports their hardware at the moment. It's going to take them time to make inroads. It took what, 3 generations of hardware from nV before we saw them make strides with deep learning? With all the college programs that nV set up? It's going to take AMD much longer once they get the software (which other companies make), and then they have to go into a saturated market.

Just keep this in mind: for small businesses (like mom and pop stores), it takes 5 years to turn things around when going into a new market. Now imagine a global market that nV had to create and push; it took them 4.5 years, and it's a market Intel has barely made any inroads into in the same time, because Intel didn't do what nV did at the college level. AMD definitely didn't either, because they just don't have the money for something like that, and we know that software-wise AMD is behind even Intel. So nope: Vega will possibly be the first decent step towards that, but to do anything meaningful from a market share and profit perspective (in the deep learning market), no, Vega is not it.

And you also have to take into consideration that anything that makes them good compute cards should make them good gaming cards, as games are using more compute every generation.

And this has been the Achilles heel of AMD, they make products that have features that look great, but they aren't relevant in today's games, and by the time they do become relevant, the competition catches up or surpasses them.

So far they have shown HBCC and double packed math for tessellation in a gaming-type setting. HBCC, OK, might be useful for the lower-end Vega, but that doesn't take anything away from nV, because they have cards that are 8GB.

Double packed math for tessellation won't be in games any time soon, and we can forget about backward assimilation into existing games. In any case, with double packed math for tessellation, they will catch up to what nV has for polygon throughput on current cards.
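For what "packed math" means in practice, here's a minimal sketch: two FP16 values share the same 32 bits as one FP32 value, which is how a card can double half-precision throughput per lane. Plain Python only illustrates the packing, not the hardware speedup:

```python
import struct

# Two FP16 values occupy the same 32 bits as one FP32 value.
# 'e' is the IEEE 754 half-precision format code.
packed = struct.pack('<ee', 1.5, 2.25)
assert len(packed) == 4  # one 32-bit lane's worth of data

# A packed-math ALU performs both half-precision ops in one go; here we
# unpack and add element-wise to show the same pair of FP16 operations.
a0, a1 = struct.unpack('<ee', packed)
print(a0 + 0.5, a1 + 0.75)  # 2.0 3.0
```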
 
Well, that jump to 8-pin screams factory-overvolted.

Guess AMD doesn't like being no faster than factory-overclocked GTX 1060 cards. We'll have to see how much extra power consumption this costs them.
 