Radeon RX Vega Discussion Thread

Power matters if your power supply won't handle the extra load. Not everything is about saving electricity and heat. Someone who heretofore only had to consider Card A or Card B now has to consider Card A or Card B + PSU.
 
Power matters if your power supply won't handle the extra load. Not everything is about saving electricity and heat. Someone who heretofore only had to consider Card A or Card B now has to consider Card A or Card B + PSU.

Well in that case, if you are going for a top-end card you'd better have a decent power supply. I doubt you are using a shitty power supply with a 500-600 dollar card. Come on now. No one is going to stick a 500-600 dollar card into a system with a shitty power supply.
 
It's more likely to matter to someone who is in the market, say, for an RX 480/470 vs. someone who is looking at purchasing a 1080/Ti vs. a top-end Vega, but you simply cannot pretend that edge cases don't exist. Of course, some people seem to be objectively-challenged enough not to recognize that not everyone has the same budgets or wants or needs...
 
razor1 We haven't gotten details of any Vega parts yet, so this is all guesswork on my part. I guess that wasn't clear? I'm just going based on the typical back-and-forth between AMD and NVIDIA over the last few years, factoring in that AMD hasn't had the performance lead for a while and is targeting more of the mainstream now, and making an educated guess.

And if "performance isn't everything", are you saying you'd really spend an extra $50 on a slower card just because it was rated for a lower TDP?

There's a handy power calculator here that I've used before to try and figure this out, but let's say we've got a hypothetical Radeon Vega 570 which is exactly the same speed as a GTX 1070 but uses 45W more power and costs $50 less. At 12¢/kWh, for 8 hours of gaming 365 days/yr, that means the Vega would cost you an extra $15 a year, assuming it's even at peak load that entire time.
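If anyone wants to double-check that math, here's the whole back-of-the-envelope calculation (the 45W, 8 hours, and 12¢ figures are just my hypothetical numbers from above):

Code:
# Hypothetical numbers from the paragraph above, not measured figures.
extra_watts = 45          # extra draw of the hypothetical Vega vs. the 1070
hours_per_day = 8         # gaming time per day
days_per_year = 365
cost_per_kwh = 0.12       # 12 cents per kWh

extra_kwh_per_year = extra_watts / 1000 * hours_per_day * days_per_year  # ~131.4 kWh
extra_cost_per_year = extra_kwh_per_year * cost_per_kwh                  # ~$15.77
print(f"~${extra_cost_per_year:.2f} extra per year")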

I would buy the cheaper card, in this instance, because while "performance isn't everything", I didn't say that. I said a good price/performance ratio is what they need.

(Also I'm curious to know where you're getting your manufacturing costs from.)

Anyways, I wouldn't be too surprised at this point to see them release something with GDDR5X if it would keep costs down, and save HBM2 for the top end, sort of like what they did with Fury when it launched - we got Fiji with HBM and everything else used normal memory (granted, it was also old tech).

Maybe Vega 10 won't use HBM and Vega 11 will? We don't know! That's what has been so maddening about this whole thing. AMD could take five minutes and answer a lot of these questions and it would help a whole lot.


Performance isn't everything if performance is equal or close to equal.

I couldn't care less about a power/$ calculator; it's all about the impression of what sells, cause OEM system builders take that into consideration when they procure parts for their systems, and if they can save another 100 bucks on cooling and power delivery, guess which cards they will use?

Now manufacturing costs: how much does a 14nm wafer cost? This is easy enough to find. How many possible chips can be made with a specific wafer size? Now HBM2 costs, a bit of digging around and you can find those too. Easy enough, right?
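For what it's worth, the gross-dies-per-wafer part of that napkin math is a one-liner (the die size below is just an example number, not a confirmed Vega figure):

Code:
import math

def gross_dies_per_wafer(wafer_diameter_mm=300, die_area_mm2=480):
    # Common approximation: wafer area / die area, minus an edge-loss term
    # for partial dies around the rim. 480 mm^2 is a hypothetical die size.
    r = wafer_diameter_mm / 2
    return (math.pi * r ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(int(gross_dies_per_wafer()))  # ~116 candidate dies on a 300 mm wafer
# Wafer cost / (gross dies * yield) gives a rough per-chip cost before packaging.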

Why would Vega 10 be on GDDR5x when it's the flagship, the highest-end model, and why would Vega 11, which is the smaller Vega, be using HBM? It only makes sense to use the more expensive memory on the higher-end products. We know from several sources that low-cost HBM3 is the only type of HBM3 that will scale down in price enough to reach mid-tier graphics cards, cards that are less than 300 bucks.

AMD is already giving reasons why 8 GB is enough; isn't that telling enough about what they are using and why they are marketing products a certain way? We saw this with Fiji too...

Vega 10 will be HBM2, Vega 11 will be GDDR5x.

These are not guesses; they are what they are. Damn, we even have die shots of Vega 10, its interposer, and its HBM2 stacks.
 
Algrim that's true, but we're talking about high end cards here. If you're trying to decide between a 1080 and a Vega whatever, you're probably not sitting on a 300W OEM power supply (and if you are, you'd need a new power supply regardless of which card you choose). I agree with you in situations where there's another limiting factor, and of course "edge cases" will exist, but it doesn't apply to this conversation.

You're also forgetting that most people who have those OEM systems with limited power supplies are probably walking into a Best Buy and buying a cheap video card anyways.

Performance isn't everything if performance is equal or close to equal.

I couldn't care less about a power/$ calculator; it's all about the impression of what sells, cause OEM system builders take that into consideration when they procure parts for their systems, and if they can save another 100 bucks on cooling and power delivery, guess which cards they will use?

Okay, but I'm not really sure what that has to do with which card you would buy? Or are you just making the argument that Vega won't succeed because OEMs won't want it, regardless of its viability in the add-in market?

I suppose that's a possibility, but since we don't know how it will actually compare to anything that's currently out, and we aren't privy to OEM pricing on bulk buys, and I'm not sure the market for OEM systems with extremely high-end video cards preinstalled is very big anyways, it's kind of hard to make that call at this point, isn't it?

Now manufacturing costs: how much does a 14nm wafer cost? How many possible chips can be made with a specific wafer size? Easy enough, right?

No, not really. You're talking about HBM, interposers, different die design vs. something with a "traditional" memory controller, etc, etc. I'm just curious where you got your "$120/4GB so whole board will cost them at least $350" from.

Why would Vega 10 be on GDDR5x when it's the flagship, the highest-end model, and why would Vega 11, which is the smaller Vega, be using HBM? It only makes sense to use the more expensive memory on the higher-end products.

Well, that's a typo on my part there. My point was that the low end one could be using cheaper memory, but we don't know, because AMD isn't telling. And it sucks, because they're basically shooting themselves in the foot. :/
 
I suppose that's a possibility, but since we don't know how it will actually compare to anything that's currently out, and we aren't privy to OEM pricing on bulk buys, and I'm not sure the market for OEM systems with extremely high-end video cards preinstalled is very big anyways, it's kind of hard to make that call at this point, isn't it?


You don't see them in HP computers?

Dude, HP has been selling the 980 Ti since its launch, in their cheapo, non-gaming systems.

And when you start looking at systems using Fiji, just look at how much Dell is OVERcharging for them; if ya want a Fury X in an Alienware, you need to pay more than for a GTX 1080!


No, not really. You're talking about HBM, interposers, different die design vs. something with a "traditional" memory controller, etc, etc. I'm just curious where you got your "$120/4GB so whole board will cost them at least $350" from.


I know where it's at, but if you search you will find a ballpark figure. This is why low-cost HBM3 is the only type of HBM that will be viable for cards less than 300 bucks, cause there is no way ANY IHV can maintain a decent amount of margin otherwise.

If you listened to Raja at Capsaicin and Cream, he even states memory is the most expensive part of a graphics card, at least for AMD; trying to give reasons for that 8 GB, man, trying really hard to sell us on why they are marketing something, again. So if a GPU is 100 bucks with perfect yields, which yeah is unlikely, add 50% on top of that cause you have to count cut-down parts as wastage costs; memory is going to be more than that, right?
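Put rough numbers on it (all hypothetical, just to show how fast the board cost stacks up if memory really is the most expensive part):

Code:
# Illustrative only -- none of these figures are confirmed AMD costs.
gpu_at_perfect_yield = 100      # hypothetical die cost at 100% yield
wastage_factor = 1.5            # +50% to cover defective / cut-down parts
memory_and_interposer = 160     # hypothetical HBM2 + interposer cost ("more than the GPU")

gpu_cost = gpu_at_perfect_yield * wastage_factor   # $150
bom_so_far = gpu_cost + memory_and_interposer      # $310, before PCB, VRM,
print(bom_so_far)                                  # cooler, assembly, or margin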


Well, that's a typo on my part there. My point was that the low end one could be using cheaper memory, but we don't know, because AMD isn't telling. And it sucks, because they're basically shooting themselves in the foot. :/


They have to be. Why would they put in a GDDR5x memory controller, where it takes up an additional 20% of silicon space, if they don't have to? I expect even the cut-down Vega 10 to be using GDDR5x.
 
Okay man, if you say so. I guess I'm still not sure what we're arguing about.

If there's a GDDR5X controller available for AIBs to use, then they can pitch HBM as a flagship solution, maybe no one will buy it, and we still get possibly good competition with 1060/1070/1080.

But since we don't actually know what AMD is planning to release for consumers to buy, despite several Vega events so far, it's all basically up in the air for people to talk around it and speculate and guess.
 
There is nothing to use; the controllers are built into the GPU silicon. AMD has stated they can use either GDDR5x or HBM on Vega, and to me it sounds like they can use both on the same chip. There is no speculation on what big Vega is using. They would not have used HBM2 if they didn't plan on using it lol. That's like showing off a Bugatti with its $2k-a-piece tires when it's only going to use 50-buck whitewalls lol.

Why go through all that expense, show the public the die and the HBM2 and interposer, and then use something else?
 
I mean, since we're asking, why go through all that expense but not actually show the product, and only talk about it in these general terms?

If they've stated that GDDR5X is an option, and if HBM2 is the holdup, why not just get GDDR5X product out while they're waiting on the HBM2 stuff?
 
I mean, since we're asking, why go through all that expense but not actually show the product, and only talk about it in these general terms?

If they've stated that GDDR5X is an option, and if HBM2 is the holdup, why not just get GDDR5X product out while they're waiting on the HBM2 stuff?

I see what you are saying, cause board design has to be changed too, and that takes time; then AIB partners have their validation as well, so making two separate designs for the same thing would be a bit laborious and increase costs. This will get very complex. Keep in mind VRAM frequency too: to reach HBM2 bandwidth amounts AMD would need a 512-bit bus, and even then, with the fastest available GDDR5x, it wouldn't reach HBM2's bandwidth. And unless I'm reading too much in between the lines, the HBCC will need a great amount of bandwidth to work effectively.
 
Yeah, that's all true.

But they wouldn't have to use GDDR5X to try and reach the bandwidth that HBM would provide. Just slap a 256-bit interface and some 10 Gbps memory on there and call it a day; that's what NVIDIA did and it worked out pretty well for them. HBM can come around later and they can sell it as Vega XTX or something. :)
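The peak-bandwidth math there is simple enough (the 1080 and Fury X figures are published specs; the last line is only my guess at a 2-stack HBM2 setup):

Code:
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # Peak theoretical bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gbs(256, 10))    # GTX 1080: 256-bit GDDR5X @ 10 Gbps -> 320 GB/s
print(peak_bandwidth_gbs(4096, 1))    # Fury X: 4096-bit HBM @ 1 Gbps      -> 512 GB/s
print(peak_bandwidth_gbs(2048, 1.9))  # guessed 2-stack HBM2 Vega          -> ~486 GB/s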

I'm also thinking that if the option exists for there to be HBM and GDDR5 parts, someone must have already come up with a board design to use the lower-cost GDDR5.

Again, all entirely guesswork. I'd just really like to see an actual functional purchasable product here, even if it's missing out on this fancy next-gen memory tech.
 
I think their cards need more bandwidth than nV's cards though; going by Polaris, it needs 30% more bandwidth at its performance level to keep up with the 1060. Factoring that into Vega, they can only get it with HBM2. This is a guess, but it seems reasonable, cause why wait for HBM2 if they don't need to?
 
Wasn't there some talk of tile-based rendering finally being implemented? That could help a lot with bandwidth efficiency if they released GDDR5X parts first.
 
Well, AMD's tile renderer seems to need its primitive shaders to work, at least that is how AMD seems to be putting it, so the jury is still out. Also we don't know how much bandwidth the tile renderer will save; there are no real reference points to pull from, and we know Pascal has much better compression than current AMD GCN products. Since AMD hasn't been talking about new compression techniques, I think it's fairly safe to say the full advantage of what we see with Pascal will not carry over to Vega.
 
Yeah, I could see that.

If only AMD could just... give more detail. :(
 
And unless I'm reading too much in between the lines, the HBCC will need a great amount of bandwidth to work effectively.
Yeah. If anything the HBCC will need less bandwidth, as it shouldn't be paging memory that isn't ultimately used. The only case where it should use more is extremely poor page sizing (say 32 KB pages and a 33 KB resource requiring two pages) or running out of memory (in which case the status quo applies).
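To spell out that page-sizing corner case (the 32 KB page size is just my hypothetical here, not a documented HBCC parameter):

Code:
import math

def kb_paged(resource_kb, page_kb=32):
    # A resource occupies whole pages, so a 33 KB resource on 32 KB pages
    # drags 64 KB across the bus -- the pathological case described above.
    return math.ceil(resource_kb / page_kb) * page_kb

print(kb_paged(33))   # 64 KB moved for 33 KB of useful data
print(kb_paged(64))   # 64 KB moved, no waste when sizes line up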

I think their cards need more bandwidth than nV's cards though; going by Polaris, it needs 30% more bandwidth at its performance level to keep up with the 1060. Factoring that into Vega, they can only get it with HBM2. This is a guess, but it seems reasonable, cause why wait for HBM2 if they don't need to?
The tiled rasterizer, as Nvidia pointed out, triples their effective bandwidth. Change that and there really isn't that much of a difference between architectures. Even lossless compression will largely be a wash, as those algorithms only go so far.

Well, AMD's tile renderer seems to need its primitive shaders to work, at least that is how AMD seems to be putting it
Which would just mean AMD creates a primitive shader that emulates the pipeline and implements the effect. That could be how they implement the fixed function pipeline anyways if it's entirely programmable. Scrap the fixed function stuff in exchange for more CUs and flexibility. Depending on the CU configuration, which we haven't seen, they could be just as capable as the fixed function units.
 
Yeah. If anything the HBCC will need less bandwidth, as it shouldn't be paging memory that isn't ultimately used. The only case where it should use more is extremely poor page sizing (say 32 KB pages and a 33 KB resource requiring two pages) or running out of memory (in which case the status quo applies).

No, from what I'm looking at it seems to need HBM to work effectively; it will work with other memory types, but the bandwidth and latency would substantially reduce the possible gains. That is the only reason why AMD showed it off by limiting Vega to 2 GB in a game, Deus Ex, which at those settings would use 6 GB, to show us the advantages. Keep in mind that if they reduced the GB availability, the bandwidth would be limited too (unless they can wall off specific portions of the memory chips).

The tiled rasterizer, as Nvidia pointed out, triples their effective bandwidth. Change that and there really isn't that much of a difference between architectures. Even lossless compression will largely be a wash, as those algorithms only go so far.
Theoretically, not in the real case; it's highly dependent on how the renderer and shaders are set up. Just look from Kepler to Maxwell: do we see a 3x improvement in synthetic bandwidth tests? Not really, actually; we only see the memory frequency improvements.
Which would just mean AMD creates a primitive shader that emulates the pipeline and implements the effect. That could be how they implement the fixed function pipeline anyways if it's entirely programmable. Scrap the fixed function stuff in exchange for more CUs and flexibility. Depending on the CU configuration, which we haven't seen, they could be just as capable as the fixed function units.

Hard to say at this point, but I don't know if that is possible, cause it looks like it's using the shader units for many of the geometry tasks, and that will be hard to just turn on and off, cause there will be specific needs for those types of shaders that have to be governed by the code.
 
No, from what I'm looking at it seems to need HBM to work effectively; it will work with other memory types, but the bandwidth and latency would substantially reduce the possible gains. That is the only reason why AMD showed it off by limiting Vega to 2 GB in a game, Deus Ex, which at those settings would use 6 GB, to show us the advantages. Keep in mind that if they reduced the GB availability, the bandwidth would be limited too (unless they can wall off specific portions of the memory chips).
The type of memory should be irrelevant to the HBCC. They limited it to 2GB because otherwise it wouldn't show anything. There needs to be some contention for resources for the HBCC to show a significant benefit. There would likely be a small boost from less streaming of assets, but that should be minimal. I think it was something like 3% of the data changing each frame, then half of that for savings from the HBCC if usage is cut in half. It'd be like testing cards with 8GB and 16GB of memory on a game that only used 6GB. Assuming the same bandwidth, latency, etc., performance would be identical. So the real benefit would be the lower memory footprint along with simplified memory management.

Theoretically, not in the real case; it's highly dependent on how the renderer and shaders are set up. Just look from Kepler to Maxwell: do we see a 3x improvement in synthetic bandwidth tests? Not really, actually; we only see the memory frequency improvements.
https://www.techpowerup.com/231129/on-nvidias-tile-based-rendering
Those are the figures Nvidia presented. In situations where the tiling works, that is roughly the benefit. Since it won't represent all the bandwidth required by a typical frame, the gains obviously won't be 3x without an infinite level of overdraw. That's where Nvidia's bandwidth/power efficiency has been coming from and it should be there with Vega.
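A toy model of where a number like that could come from, assuming the savings are mostly framebuffer overdraw (very simplified; it ignores early-Z, compression, texture traffic and the rest):

Code:
def framebuffer_writes_mb(width, height, bytes_per_pixel, overdraw, tiled):
    # Immediate-mode: roughly one DRAM write per pixel per layer of overdraw.
    # Tile-based: overdraw is resolved in the on-chip tile buffer, so each
    # pixel goes out to DRAM about once.
    pixels = width * height
    writes = pixels * (1 if tiled else overdraw)
    return writes * bytes_per_pixel / 1e6

print(framebuffer_writes_mb(2560, 1440, 4, overdraw=3, tiled=False))  # ~44 MB/frame
print(framebuffer_writes_mb(2560, 1440, 4, overdraw=3, tiled=True))   # ~15 MB/frame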
 
https://www.techpowerup.com/231129/on-nvidias-tile-based-rendering
Those are the figures Nvidia presented. In situations where the tiling works, that is roughly the benefit. Since it won't represent all the bandwidth required by a typical frame, the gains obviously won't be 3x without an infinite level of overdraw. That's where Nvidia's bandwidth/power efficiency has been coming from and it should be there with Vega.

It's not 3 times; tiled rasterizer benefits only show at most 25%...

Compression gives 50% savings.

Pascal took the compression Maxwell had, which is about where GCN is now, and doubled it.

So if these are best-case scenarios, we can expect quite a bit less in real-world testing, outside of compression, which is pretty much standard across all assets (the view angle of compression is not there with Pascal).

Even with AMD's TBR working with primitive shaders, it won't be able to close the bandwidth gap without better compression.
 
I am thinking we see this in May. They have been talking and talking about Prey. I am really thinking we might have it launch in May bundled with Prey. It's all a guess, but it makes sense since they are promoting the game so hard.
 
Guys, it's time to face the truth: it's 2017, we have a new Duke Nukem, and it is known as RX Vega.
 
Guys, it's time to face the truth: it's 2017, we have a new Duke Nukem, and it is known as RX Vega.

Oh that's just harsh. Still, I wish AMD would release some info on Vega. As I already have a Polaris card, a refresh of that doesn't interest me at all.
 
We've known Vega was going to be a 1H 2017 part for quite some time. Q2 is just a tad more specific than 1H but it's still 'on time' until it doesn't arrive prior to July 1, 2017...
 
It's not 3 times; tiled rasterizer benefits only show at most 25%...

Compression gives 50% savings.

Pascal took the compression Maxwell had, which is about where GCN is now, and doubled it.

So if these are best-case scenarios, we can expect quite a bit less in real-world testing, outside of compression, which is pretty much standard across all assets (the view angle of compression is not there with Pascal).

Even with AMD's TBR working with primitive shaders, it won't be able to close the bandwidth gap without better compression.
It's still a ~60% reduction in bandwidth for a significant percentage of the overall workload. That's still providing that 25-35% bandwidth and power savings that came with Maxwell. The compression is largely a part of the tiled process and lossless compression tech only goes so far.
 
It's still a ~60% reduction in bandwidth for a significant percentage of the overall workload. That's still providing that 25-35% bandwidth and power savings that came with Maxwell. The compression is largely a part of the tiled process and lossless compression tech only goes so far.


Gotta wait and see, cause Pascal is on its second generation of tiled renderer and it's quite different from Maxwell's, so what can we realistically expect from Vega when its renderer is functioning? Expect something less, right?
 
Gotta wait and see, cause Pascal is on its second generation of tiled renderer and it's quite different from Maxwell's, so what can we realistically expect from Vega when its renderer is functioning? Expect something less, right?
Generation aside, there are still too many unknowns. Simply increasing cache/tile size I'd imagine does a lot to impact that performance. If AMD added the scalars it could conceivably be really strong at compression and sorting of those tiles. So yeah, we just have to wait and see. All we do know is they're extremely tight-lipped about what's actually in Vega.
 
For me I'm just wanting Vega to fall somewhere in between the 1080 and the 1080Ti. I'd be happy enough with that sort of performance.
 
For me I'm just wanting Vega to fall somewhere in between the 1080 and the 1080Ti. I'd be happy enough with that sort of performance.

I'll be shocked if it can keep up with a 1080, and I'm being very honest. And if it does, I suspect it'll have no OC headroom because it's OC'd to the max from the factory just to keep up, much like the Fury cards.
 
I'll be shocked if it can keep up with a 1080, and I'm being very honest. And if it does, I suspect it'll have no OC headroom because it's OC'd to the max from the factory just to keep up, much like the Fury cards.

I have a feeling you are right, and that's a bit of a bummer really. I was hoping it could match the feats of Ryzen and come out swinging.
 
I have a feeling you are right, and that's a bit of a bummer really. I was hoping it could match the feats of Ryzen and come out swinging.

Me too. Gotta hand it to nVidia; so far they have given people reasons to upgrade their GPU, unlike Intel.
 
I have a feeling it'll compete with 1080 and be priced similarly. Still too little too late.
 
I have a feeling it'll compete with 1080 and be priced similarly. Still too little too late.
I think that is the real problem: it is too late. If you want the best performance now, and probably even later, the 1080 Ti is there, and now with the price cut the 1080 is a fantastic buy and so is the 1070. AMD needed this card to release last fall. They have ceded the performance market to Nvidia for nearly 2 years now.
 
I think that is the real problem: it is too late. If you want the best performance now, and probably even later, the 1080 Ti is there, and now with the price cut the 1080 is a fantastic buy and so is the 1070. AMD needed this card to release last fall. They have ceded the performance market to Nvidia for nearly 2 years now.

Exactly. It's like watching a NASCAR race and second place comes in 10 minutes later. Half the stands are empty at that point, and those that are left are too drunk to stand and don't give a shit anyways.
 
AMD has a habit of not leading their target. They are designing future tech to compete with a present product.
 
AMD has a habit of not leading their target. They are designing future tech to compete with a present product.
And by the time the future tech comes around, their competitor is outshining them. Joking aside, I don't buy into the whole "AMD designs for the future and is just more advanced" argument at all. Instead, NVIDIA is pouring billions into advanced GPU computing for their fast-growing server business and trickling that tech down to consumers, creating vastly superior products. AMD is just trying to create a consumer graphics card; NVIDIA is making supercomputers.
 
Well, you have to remember AMD did say Vega's CUs were designed for higher clock operation. I don't know if that will hold true, but I expect Vega to be higher clocked than Polaris. How much AMD can squeeze out of it, we will see.

And yes, I don't expect it to OC like Nvidia.
 