Power matters if your power supply won't handle the extra load. Not everything is about saving electricity and heat. Someone who heretofore only had to consider Card A or Card B now has to consider Card A or Card B + PSU.
We haven't gotten details of any Vega parts yet, so this is all guesswork on my part. I guess that wasn't clear? I'm just going on the typical back-and-forth between AMD and NVIDIA over the last few years, factoring in that AMD hasn't had the performance lead for a while and is targeting more of the mainstream now, and making an educated guess.
And if "performance isn't everything", are you saying you'd really spend an extra $50 on a slower card just because it was rated for a lower TDP?
There's a handy power calculator here that I've used before to try to figure this out, but let's say we've got a hypothetical Radeon Vega 570 which is exactly the same speed as a GTX 1070 but uses 45W more power and costs $50 less. At 12¢/kWh, for 8 hours of gaming 365 days a year, the Vega would cost you an extra ~$15 a year, and that assumes it's at peak load that entire time.
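Back-of-the-envelope, if anyone wants to check the math (a quick sketch in Python; the 45W delta, the 12¢/kWh rate, and the 8 hours/day are just the hypothetical numbers above):

```python
# Annual cost of an extra 45 W under load, using the hypothetical numbers
# above (12 cents/kWh, 8 hours/day, 365 days/yr, peak load the entire
# time - the worst case for the hungrier card).
extra_watts = 45
hours_per_year = 8 * 365
rate_per_kwh = 0.12  # dollars

extra_kwh = extra_watts * hours_per_year / 1000   # ~131 kWh
annual_cost = extra_kwh * rate_per_kwh            # ~$15.77
print(f"{extra_kwh:.0f} kWh -> ${annual_cost:.2f} per year")
```

So even in the worst case, it'd take three-plus years of heavy gaming for the power difference to eat up the $50 price gap.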
I would buy the cheaper card in this instance. And as for "performance isn't everything" - I didn't say that. I said a good price/performance ratio is what they need.
(Also I'm curious to know where you're getting your manufacturing costs from.)
Anyways, I wouldn't be too surprised at this point to see them release something with GDDR5X if it would keep costs down, and save HBM2 for the top end, sort of like what they did with Fury when it launched - we got the Fiji with HBM and everything else was using normal memory (granted it was also old tech).
Maybe Vega 10 won't use HBM and Vega 11 will? We don't know! That's what has been so maddening about this whole thing. AMD could take five minutes and answer a lot of these questions and it would help a whole lot.
Performance isn't everything if performance is equal or close to equal.
I couldn't care less about a power/$ calculator; it's all about the impression of what sells, because OEM system builders take that into consideration when they procure parts for their systems, and if they can save another 100 bucks on cooling and power delivery, guess which cards they'll want to use?
Now, manufacturing costs: how much does a 14nm wafer cost? How many chips can be made from a wafer of a given size? Easy enough, right?
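For illustration, the usual dies-per-wafer approximation makes that arithmetic concrete. This is a rough sketch; the wafer cost, die size, and yield below are placeholder guesses, not actual GloFo 14nm figures:

```python
import math

# Rough dies-per-wafer estimate: wafer area / die area, minus an edge-loss
# term for the partial dies around the rim. All inputs are hypothetical.
wafer_diameter_mm = 300     # standard wafer size
die_area_mm2 = 500.0        # hypothetical big Vega-class die
wafer_cost = 7000.0         # hypothetical 14nm wafer price, dollars
yield_rate = 0.70           # hypothetical fraction of good dies

radius = wafer_diameter_mm / 2
dies = (math.pi * radius**2 / die_area_mm2
        - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))
good = dies * yield_rate
print(f"~{dies:.0f} dies, ~{good:.0f} good -> ${wafer_cost / good:.0f} per good die")
```

Plug in whatever numbers you believe; the point is just that die size and yield dominate the per-chip cost.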
Why would Vega 10 be on GDDR5X when it's the flagship, the highest-end model, and why would Vega 11, the smaller Vega, be using HBM? It only makes sense to use the more expensive memory on the higher-end products.
I suppose that's a possibility, but since we don't know how it will actually compare to anything that's currently out, and we aren't privy to OEM pricing on bulk buys, and I'm not sure the market for OEM systems with extremely high-end video cards preinstalled is very big anyways, it's kind of hard to make that call at this point, isn't it?
No, not really. You're talking about HBM, interposers, different die design vs. something with a "traditional" memory controller, etc, etc. I'm just curious where you got your "$120/4GB so whole board will cost them at least $350" from.
Well, that's a typo on my part there. My point was that the low end one could be using cheaper memory, but we don't know, because AMD isn't telling. And it sucks, because they're basically shooting themselves in the foot. :/
Okay, sure. Unless you can give me inside confirmation, I'll assume you were guessing and your guess happened to be right.
I mean, since we're asking, why go through all that expense but not actually show the product, and only talk about it in these general terms?
If they've stated that GDDR5X is an option, and if HBM2 is the holdup, why not just get GDDR5X product out while they're waiting on the HBM2 stuff?
And if I'm reading too much in between the lines, the HBCC will need a great amount of bandwidth to work effectively.

Yeah. If anything, HBCC will need less bandwidth, as it shouldn't be paging memory that isn't ultimately used. The only case where it should use more is extremely poor page sizing (say 32K pages and a 33K resource requiring two pages) or running out of memory (in which case the status quo applies).
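To make that worst case concrete, here's a toy calculation (pure illustration; the 32K page size is just the hypothetical from above):

```python
import math

# A resource that barely spills over a page boundary drags in a whole
# extra page, so most of the second page's bandwidth is wasted.
page_size = 32 * 1024       # hypothetical 32K pages
resource_size = 33 * 1024   # 33K resource

pages = math.ceil(resource_size / page_size)   # 2 pages
fetched = pages * page_size                    # 64K actually moved
wasted = fetched - resource_size               # 31K of it unused
print(f"{pages} pages, {fetched // 1024}K fetched, {wasted // 1024}K wasted")
```

But that's a pathological layout; with sane page sizing the HBCC should come out ahead.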
I think their cards need more bandwidth than nV's cards though; going by Polaris, it needs 30% more bandwidth at its performance level to stay up with the 1060. Factoring that into Vega, they can only get it with HBM2. This is a guess, but it seems reasonable, because why wait for HBM2 if they don't need it?

The tiled raster, as Nvidia pointed out, triples their effective bandwidth. Change that and there really isn't that much of a difference between architectures. Even lossless compression will largely be a wash, as those algorithms only go so far.
Well, AMD's tile renderer seems to need its primitive shaders to work; at least that is how AMD seems to be putting it.

Which would just mean AMD creates a primitive shader that emulates the pipeline and implements the effect. That could be how they implement the fixed function pipeline anyway if it's entirely programmable. Scrap the fixed function stuff in exchange for more CUs and flexibility. Depending on the CU configuration, which we haven't seen, they could be just as capable as the fixed function units.
The tiled raster, as Nvidia pointed out, triples their effective bandwidth. Change that and there really isn't that much of a difference between architectures. Even lossless compression will largely be a wash, as those algorithms only go so far.

Theoretically, not in a real case; it's highly dependent on how the renderer and shaders are set up. Just look from Kepler to Maxwell: do we see a 3x improvement in synthetic bandwidth tests? Not really; we only see the memory frequency improvements.
No, what I'm looking at seems to need HBM to work effectively. It will work with other memory types, but the bandwidth and latency would substantially reduce the possible gains. That is the only reason AMD showed it off by limiting Vega to 2GB in a game, Deus Ex, which at those settings would use 6GB, to show us the advantages. Keep in mind, if they reduce the available GB, the bandwidth will be limited too (unless they can quarantine off specific portions of the memory chips).

The type of memory should be irrelevant to the HBCC. They limited it to 2GB because otherwise it wouldn't show anything. There needs to be some contention for resources for the HBCC to show a significant benefit. There would likely be a small boost from less streaming of assets, but that should be minimal. I think it was something like 3% of the data changing each frame, then half of that for savings if the HBCC cut usage in half. It'd be like testing cards with 8GB and 16GB of memory on a game that only used 6GB: assuming the same bandwidth, latency, etc., performance would be identical. So the real benefit would be the lower memory footprint along with simplified memory management.
Theoretically, not in a real case; it's highly dependent on how the renderer and shaders are set up. Just look from Kepler to Maxwell: do we see a 3x improvement in synthetic bandwidth tests? Not really; we only see the memory frequency improvements.

https://www.techpowerup.com/231129/on-nvidias-tile-based-rendering
Those are the figures Nvidia presented. In situations where the tiling works, that is roughly the benefit. Since it won't represent all the bandwidth required by a typical frame, the gains obviously won't be 3x without an infinite level of overdraw. That's where Nvidia's bandwidth/power efficiency has been coming from and it should be there with Vega.
GPU products based on the Vega architecture are expected to ship in the second quarter of 2017.
Guys, it's time to face the truth - it's 2017 and we have a new Duke Nukem, and it's known as RX Vega.
It's not 3x; the tiled rasterizer benefit only shows at most 25%...

It's still a ~60% reduction in bandwidth for a significant percentage of the overall workload. That's still providing the 25-35% bandwidth and power savings that came with Maxwell. The compression is largely a part of the tiled process, and lossless compression tech only goes so far.
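To put rough numbers on that, here's a quick sketch; the workload fractions are assumed for illustration, not measured:

```python
# If tiling cuts bandwidth ~60% on the draws it covers, the overall saving
# scales with how much of the frame's traffic those draws represent.
per_draw_reduction = 0.60

for coverage in (0.40, 0.50, 0.60):
    overall = coverage * per_draw_reduction
    print(f"{coverage:.0%} of traffic tiled -> {overall:.0%} saved overall")
```

Cover 40-60% of the frame's traffic and you land right in that 25-35% band.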
Compression gives 50% savings.
Pascal took the compression Maxwell had, which is what GCN is equal to now, and doubled it.
So if these are best-case scenarios, we can expect quite a bit less in real-world testing, outside of compression, which is pretty much standard across all assets (the view angle of compression is not there with Pascal).
Even if AMD's TBR works with primitive shaders, it won't be able to close the bandwidth gap without better compression.
Gotta wait and see, because Pascal is on its second generation of tiled renderer and it's quite different from Maxwell. So what can we realistically expect from Vega when its version is functioning? Expect something less, right?

Generation aside, there are still too many unknowns. Simply increasing cache/tile size, I'd imagine, does a lot to impact that performance. If AMD added the scalars, it could conceivably be really strong at compression and sorting of those tiles. So yeah, we just have to wait and see. All we do know is they're extremely tight-lipped about what's actually in Vega.
For me, I just want Vega to fall somewhere in between the 1080 and the 1080 Ti. I'd be happy enough with that sort of performance.
I'll be shocked if it can keep up with a 1080, and I'm being very honest. And if it does, I suspect it'll have no OC headroom because it's OC'd to the max from the factory just to keep up, much like the Fury cards.
I have a feeling you are right, and that's a bit of a bummer really. I was hoping it could match the feats of Ryzen and come out swinging.
I have a feeling it'll compete with the 1080 and be priced similarly. Still too little, too late.

I think that is the real problem: it is too late. If you want the best performance now, and probably even later, the 1080 Ti is there, and now with the price cut the 1080 is a fantastic buy, and so is the 1070. AMD needed this card to release last fall. They have ceded the performance market to Nvidia for nearly 2 years now.
AMD have a habit of not leading their target. They are designing future tech to compete with a present product.

And by the time the future tech comes around, their competitor is outshining them. Joking aside, I don't buy into the whole "AMD designs for the future and is just more advanced" argument at all. Instead, NVIDIA is pouring billions into advanced GPU computing for their fast-growing server business and trickling that tech down to consumers, creating vastly superior products. AMD is just trying to create a consumer graphics card; NVIDIA is making supercomputers.