AMD Radeon RX Vega 64 Video Card Review @ [H]

Did you miss the direct quote by Mr. Smug Mug that Pascal would be unbeatable for this Christmas season?

Yes, and nothing in there tells you anything about future strategy. All he says is that Pascal is still strong for the foreseeable future (a neutral response to keep consumer interest in Pascal), but that has no bearing on Volta, on whether they launch GV104 before the end of the year, or even on when a Volta Titan launches.
If you read that transcript in full, it is clear he is keeping the Volta narrative focused on GV100. In the previous year's financial call, before the launch of GP104 and GP100, he likewise kept his own talk of Pascal primarily to automobiles, HPC, and software advancement for future growth.

In the 2016 Q1 financial call, they mentioned Pascal only briefly in a consumer context (most of it was HPC/automobile focused), and only because GP104 had been announced a week earlier. Even so, Jen-Hsun's primary focus for Pascal was automobiles and HPC; when asked about the major growth factors for Q2 and gaming, he did not explicitly say anything about the impact of GP104 and talked about software instead.
Putting it into context: Nvidia ONLY mentions a product or segment platform, and the future involving it, once it has been announced. They talked only about GP104 and GP100 in the May 2016 financial call, yet we know the Pascal Titan launched in August, and there was no mention of GP106, nor of the longer-term but HPC-important Tesla/Quadro GP104 and GP102 parts, which launched in October 2016 but were announced in late Q2.
For Nvidia, the Pascal Titan was actually quite important, because it was their narrative push for INT8 inferencing and had strong synergy with GP100, yet there was no mention of that when they were asked about Pascal going forward, and, as I said, no mention of GP106 either, for the reason I gave: Nvidia generally do not talk about future products or segments unless they have already been announced (in general commentary, as opposed to their professional presentations).

But I digress. Many publications are making massive assumptions and generating BS around Nvidia because, as I said in a previous post, Nvidia runs a pretty tight ship on information. Many of those publications made totally wrong assumptions and reports last year relating to Pascal (I gave examples) and were completely caught out when Pascal did launch, and it seems we are seeing a repeat, with some publications taking Jen-Hsun totally out of context.

Cheers
 
Last edited:
There may be still a lot left to squeeze out of Vega in terms of mining performance. I want to see what it does with HBCC enabled.
"High Bandwith Cache Controller - still inactive"
https://translate.google.co.uk/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https://www.computerbase.de/2017-08/radeon-rx-vega-64-56-test/&edit-text=

Something to consider: how is the HBCC going to help when system RAM-VRAM-cache is already the primary data hierarchy on a consumer PC, as opposed to HPC?
If AMD has somehow hooked it heavily into the VRAM path and it is now a compromise unless the drivers can use it properly with the software in question, then someone at AMD needs to be shot; but I do not think that is the case, nor can I see how HBCC is going to help in a traditional GPU-CPU environment.
HBCC is their take on a unified memory controller, along with other functionality.
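For anyone unfamiliar with the concept: HBCC itself works transparently at the driver/page-table level for graphics workloads, but the underlying idea is the same one exposed to compute programmers as unified/managed memory, where a single allocation is visible to both CPU and GPU and pages migrate on demand so VRAM behaves like a cache over a larger pool. Below is a minimal HIP sketch of that model; it uses the public hipMallocManaged API purely as an illustration and is not how a game talks to HBCC.

```cpp
// Minimal sketch of the unified/managed memory model that HBCC generalises:
// one allocation, one pointer, pages migrate between system RAM and VRAM on demand.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* buf = nullptr;

    // Managed allocation: the same pointer is valid on the CPU and the GPU.
    hipMallocManaged(reinterpret_cast<void**>(&buf), n * sizeof(float));

    for (int i = 0; i < n; ++i) buf[i] = 1.0f;   // pages touched on the CPU first

    // Pages migrate to VRAM as the kernel touches them...
    hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0, buf, n, 2.0f);
    hipDeviceSynchronize();

    printf("buf[0] = %f\n", buf[0]);             // ...and back when the CPU reads again
    hipFree(buf);
    return 0;
}
```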
Cheers
 
What reference AMD card, for that matter, has ever been the best representation of what a chip can do?

Why do reference cards always launch with these terrible coolers? Is it because AMD has to agree to give its partners an economy to grow and work in, with better cooling options?

Totally fixed that for you!

Nvidia reference cards and their coolers have been awesome for years!
 
Yes, and nothing in there tells you anything about future strategy. All he says is that Pascal is still strong for the foreseeable future (a neutral response to keep consumer interest in Pascal), but that has no bearing on Volta, on whether they launch GV104 before the end of the year, or even on when a Volta Titan launches.
If you read that transcript in full, it is clear he is keeping the Volta narrative focused on GV100. In the previous year's financial call, before the launch of GP104 and GP100, he likewise kept his own talk of Pascal primarily to automobiles, HPC, and software advancement for future growth.

In the 2016 Q1 financial call, they mentioned Pascal only briefly in a consumer context (most of it was HPC/automobile focused), and only because GP104 had been announced a week earlier. Even so, Jen-Hsun's primary focus for Pascal was automobiles and HPC; when asked about the major growth factors for Q2 and gaming, he did not explicitly say anything about the impact of GP104 and talked about software instead.
Putting it into context: Nvidia ONLY mentions a product or segment platform, and the future involving it, once it has been announced. They talked only about GP104 and GP100 in the May 2016 financial call, yet we know the Pascal Titan launched in August, and there was no mention of GP106, nor of the longer-term but HPC-important Tesla/Quadro GP104 and GP102 parts, which launched in October 2016 but were announced in late Q2.
For Nvidia, the Pascal Titan was actually quite important, because it was their narrative push for INT8 inferencing and had strong synergy with GP100, yet there was no mention of that when they were asked about Pascal going forward, and, as I said, no mention of GP106 either, for the reason I gave: Nvidia generally do not talk about future products or segments unless they have already been announced (in general commentary, as opposed to their professional presentations).

But I digress. Many publications are making massive assumptions and generating BS around Nvidia because, as I said in a previous post, Nvidia runs a pretty tight ship on information. Many of those publications made totally wrong assumptions and reports last year relating to Pascal (I gave examples) and were completely caught out when Pascal did launch, and it seems we are seeing a repeat, with some publications taking Jen-Hsun totally out of context.

Cheers

Well, it is funny that lots of people are saying NVIDIA is going to trump the 1080 at an affordable price point in Q1 2018. Why would they do that? Everyone keeps saying this is the move, to 'shame' AMD. Is that how years of development, engineering and fabrication work? That is totally why NVIDIA is bleeding the 1080 for 1080 Tis?
 
Well, it is funny that lots of people are saying NVIDIA is going to trump the 1080 at an affordable price point in Q1 2018. Why would they do that? Everyone keeps saying this is the move, to 'shame' AMD. Is that how years of development, engineering and fabrication work? That is totally why NVIDIA is bleeding the 1080 for 1080 Tis?
That has nothing to do with my post or its context, and it is based upon 'information' that, again, is speculation.
What we do know is the relative performance envelope of Volta, and also the product strategy Nvidia used with Pascal, which Volta mirrors in many ways.
Cheers
 
Totally fixed that for you!

Nvidia reference cards and their coolers have been awesome for years!

The NVIDIA reference blowers don't suck on the 1080 FE?

That has nothing to do with my post or its context, and it is based upon 'information' that, again, is speculation.
What we do know is the relative performance envelope of Volta, and also the product strategy Nvidia used with Pascal, which Volta mirrors in many ways.
Cheers

Wow, that was like 10 seconds. Volta is not around the corner. You know the performance envelope of a card that, by NVIDIA's admission, probably doesn't exist? Volta is well off the roadmap.
 
The NVIDIA reference blowers don't suck on the 1080 FE?



Wow, that was like 10 seconds. Volta is not around the corner. You know the performance envelope of a card that, by NVIDIA's admission, probably doesn't exist? Volta is well off the roadmap.
Where do I talk about reference blowers?
If you're going down that route (though that is about the card model, blower vs custom, not the GPU die, which is my context), then complain about the sites and people talking about the need to undervolt Vega and Polaris to give them greater relevance...
I feel you really want to argue, as you keep adding tangential points, and your last point about the GPU architecture's performance-envelope indicator has sweet FA to do with the reference blower and my context, as you well know.
You can extrapolate a fair amount of what is in V100 if you ignore the Tensor cores; it gives us the performance envelope, and, as I said, we know Nvidia's product strategy from Pascal, and Nvidia has so far pretty much followed it with Volta.
If you want to argue about this, I suggest we take it to another thread rather than cluttering this one.
 
Last edited:
I think it's safe to speculate that its mining performance will increase. We know the Fury X is the fastest at dual-mining ETH + SIA etc., and judging by the 580's performance in games versus its performance in mining with simple memory strap editing, it becomes even more convincing.
None of the miners out there are optimized for Vega yet. It's pretty clear that its gaming and mining performance is that of a new GPU: it needs to be optimized in the drivers first, so it can then be tweaked by the developers of the miners.
 
Maybe AMD's next Vega? The "GT"... This one runs cooler.


vega 2.jpg
 
Last edited:
I don't know, would it be fair to say that the VEGA 64 is AMD's new Bulldozer?
 
I don't know, would it be fair to say that the VEGA 64 is AMD's new Bulldozer?

I don't think so. The Bulldozer architecture felt like a bad design and almost a marketing lie. Vega is just hot and delayed, not a badly designed architecture.
 
I don't know, would it be fair to say that the VEGA 64 is AMD's new Bulldozer?

That's a bit of a stretch. Vega is a bit underwhelming, but it's performing at a fairly high level. The 1080 Ti is out of its reach, but the 1080 is in the same ballpark. The price and the power draw make it unappealing compared to the 1080, but it's not a total stomp. If prices on it slip a bit, it'll start being a pretty good option.

Bulldozer was just plain awful. It didn't compete even with Intel's midrange offerings and quickly had to be priced into the bargain bin to sell it.
 
I don't think so. The Bulldozer architecture felt like a bad design and almost a marketing lie. Vega is just hot and delayed, not a badly designed architecture.

To get the MOST out of GCN, games have to be tailored to it (similar to how Glide was for Voodoo), and that's why you see such awesome performance in Vulkan titles. But the amount of work needed to wring out that performance is on the high side.

I'm guessing AMD's strategy was to sell GCN-enabled gaming systems (PS4 and Xbox One) in the hope that programmers would create toolkits and games that wring every bit out of Vulkan to help with the often lackluster resources of consoles, and that in turn this would transfer over to the PC world as games were ported. Unfortunately, this really hasn't panned out. GCN is indeed more powerful, but it is complex and generates a lot of extra circuitry in the process, which limits its overall top speed. And if you look at the transistor count, you can see AMD carries a lot more overhead for its performance.

I see NVIDIA versus AMD as akin to a Dodge Demon versus a Chevy Corvette.

Most people are just going to drag race, and the Demon is going to win that hands down. It was designed from the ground up to go fast in a straight line over a quarter mile. This is NVIDIA: they designed it to do one thing and do it well in a straight line, DX11 and DX12, with low power overhead. This is what 95% of gamers care about.

AMD designed theirs with more finesse to handle a whole series of tasks (i.e. HBCC with large datasets, async compute, etc.), which in the end takes more driver skill to get the most out of it. It would be interesting to see whether AMD's drivers can invoke all the CPU cores/threads in Ryzen and Threadripper in preparation for the draw call. But as others pointed out, the anti-aliasing performance is really hurting it. It's something they have known about since Fury, and they really didn't properly address it.
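Since "async compute" gets thrown around a lot, here is a minimal sketch of the API-level hook it relies on: finding a compute-capable queue family that is separate from the graphics queue, so compute work can be submitted alongside rendering. This is only an illustration against the public Vulkan API (it assumes a working Vulkan loader/driver and skips error handling) and says nothing about either vendor's driver internals.

```cpp
// List queue families and flag any dedicated compute family, which is what
// lets an engine submit compute work in parallel with the graphics queue.
#include <vulkan/vulkan.h>
#include <vector>
#include <cstdio>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_0;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        uint32_t famCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &famCount, nullptr);
        std::vector<VkQueueFamilyProperties> fams(famCount);
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &famCount, fams.data());

        for (uint32_t i = 0; i < famCount; ++i) {
            bool compute  = (fams[i].queueFlags & VK_QUEUE_COMPUTE_BIT) != 0;
            bool graphics = (fams[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
            if (compute && !graphics)
                printf("queue family %u: dedicated compute, %u queue(s)\n",
                       i, fams[i].queueCount);
        }
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```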

On the upside, I do believe Vega might have a bit more longevity if it can do VR. It's certainly more capable in DX12/Vulkan games, and it also scales better at higher resolutions, which again indicates room for possible driver improvements as the CPU becomes the bottleneck; optimizing for Ryzen/Threadripper could help. And it IS a compelling option IF you want an adaptive-sync monitor and you are on a fixed budget, provided you can pick it up for $500 with AFTERMARKET cooling. I'm not worried about the excess power usage, as it would take years upon years to make up for the Nvidia tax on a G-Sync monitor (and that's running it full tilt 24/7).
 
Last edited by a moderator:
To get the MOST out of GCN, games have to be tailored to it (similar to how Glide was for Voodoo), and that's why you see such awesome performance in Vulkan titles. But the amount of work needed to wring out that performance is on the high side.

I'm guessing AMD's strategy was to sell GCN-enabled gaming systems (PS4 and Xbox One) in the hope that programmers would create toolkits and games that wring every bit out of Vulkan to help with the often lackluster resources of consoles, and that in turn this would transfer over to the PC world as games were ported. Unfortunately, this really hasn't panned out. GCN is indeed more powerful, but it is complex and generates a lot of extra circuitry in the process, which limits its overall top speed.

I see NVIDIA versus AMD as akin to a Dodge Demon versus a Chevy Corvette.

Most people are just going to drag race, and the Demon is going to win that hands down. It was designed from the ground up to go fast in a straight line over a quarter mile. This is NVIDIA: they designed it to do one thing and do it well in a straight line.

AMD designed theirs with more finesse to handle a whole series of tasks (i.e. HBCC with large datasets, async compute, etc.), which in the end takes more driver skill to get the most out of it.

I am not sure about that, because in AAA games AMD has closed the gap in some of the notable titles out there even in DX11; for DX12 titles it depends on whether it is a new engine and whether Nvidia engineers are engaged with the studio.
I feel the console strategy has paid off to some extent, as they seem to be getting better integrated optimisation at the time of game development, even if at times it is not as good as it could be compared to having AMD's driver engineers involved in the port/PC game.
In games where Nvidia engineers engage with the dev team you can see very strong benefits at times, but it is not as consistent as it used to be across all AAA games.

Regarding what both manufacturers introduce into their new lineups: Nvidia tends to implement only technology that is viable now or very shortly, whereas AMD implements in its new GPUs technology that has no use whatsoever for at least 15 months, and even then only as early adoption. That is kind of frustrating, because I feel AMD would have quicker product lead times if it focused on the next 12 months rather than on technology that will take quite a long time to gain momentum; by the time it does, Nvidia puts it into its next design just when it becomes relevant.
If one only intends to upgrade once every 2-3 years, the AMD approach can be nice, but these days games are becoming so demanding for enthusiasts that I am not sure many can wait that long, especially as the console upgrade cycle has sped up.
Vega could have come to market a lot sooner with some of that technology scope pulled back rather than putting everything in now; HBCC could have been simpler, as one example.
I am only critical because I feel AMD is hindering its own potential in this context.

Cheers
 
Last edited:
To get the MOST out of GCN, games have to be tailored to it (similar to how Glide was for Voodoo), and that's why you see such awesome performance in Vulkan titles. But the amount of work needed to wring out that performance is on the high side.

I'm guessing AMD's strategy was to sell GCN-enabled gaming systems (PS4 and Xbox One) in the hope that programmers would create toolkits and games that wring every bit out of Vulkan to help with the often lackluster resources of consoles, and that in turn this would transfer over to the PC world as games were ported. Unfortunately, this really hasn't panned out. GCN is indeed more powerful, but it is complex and generates a lot of extra circuitry in the process, which limits its overall top speed. And if you look at the transistor count, you can see AMD carries a lot more overhead for its performance.

Funnily enough, many of those games that run great on the AMD-based consoles just happened to sprout GameWorks in the PC version and didn't run so well. Silly AMD...
 
Funnily enough, many of those games that run great on the AMD-based consoles just happened to sprout GameWorks in the PC version and didn't run so well. Silly AMD...
One thing to watch out for: GameWorks is now fully DX12 integrated, and that will make it much better and more viable. Previously it would have been a nightmare to integrate GameWorks well into engines designed for both DX11 and DX12, and as it is a 'bolt-on' suite it really needs the devs to understand the code very well not to fook it up. It seems most take a more basic approach on the cheap side to save resources when porting from console to PC, and trying to integrate the DX11 GameWorks would only have exacerbated this.
Sort of like how, if studios try to use some of the big game engines on the cheap without full resource commitment, they can end up with a big disappointment.

Moving forward, studios now have the joy of looking to integrate both the AMD and Nvidia suites into their games and, critically, their engines.
Cheers
 
Fanboys crying that it has to be optimized to extract the power of Vega?! You want it to run hotter?! Call it what it is: AMD put out a card just to have a product in that bracket. One day AMD may have a 9700 or (arguably) a 290X again, but today is not that day.
 
Fanboys crying that it has to be optimized to extract the power of Vega?! You want it to run hotter?! Call it what it is: AMD put out a card just to have a product in that bracket. One day AMD may have a 9700 or (arguably) a 290X again, but today is not that day.

The 7970 was the last "great" card by AMD. The 290X was good, the Fury X was average, and Vega is terrible. GCN seems to have a shelf life.
 
Didn't AMD really pave the way with HBM technology, or is that a misunderstanding? It seems AMD paved the way, and even NVIDIA uses HBM, but not in their consumer products. It seems the expense of HBM coupled with the aged GCN architecture is the one-two punch against Vega... maybe they should have left HBM2 for the FE/professional market and focused on more traditional memory to keep the cost down. Maybe it's time they ditch the GCN architecture too.
 
Here are the temps from Tom's Hardware.
http://www.tomshardware.com/reviews/amd-radeon-rx-vega-64,5173-18.html

This is still within parameters, so there is no risk, but it possibly comes back to the height variation depending upon the packaging and their review sample.

The challenge with HBM is ensuring the stacks are in correct contact with the cooling system, because the temperature variation between the top and bottom of the DRAM-logic die stack is quite large, especially with air cooling and even more so for 8-Hi stacks.
I have some presentations on this subject, but here is the broad summary news most sites released as part of the JEDEC HBM2 announcement relating to the same context:


And yeah, I should reiterate it is not a risk, nor really much of an issue or a conspiracy, but it is interesting that it may cause different thermal behaviour for the HBM2 stacks.
BUT one will need to be very careful when fitting their own waterblock on the cards with the lower-height packaging.
Cheers
I am not sure you can attribute that to the substrate packaging. To actually test that, you would need to do some seriously controlled heat testing with multiple GPUs from each group and then compare; THEN you would need to pull the heatspreaders and test all that again with bare GPUs to figure out whether it was the packaging or IHS tolerances.
 
From the HardOCP perspective, it might be interesting to see if AMD can advise whether there are further considerations when installing a waterblock or OCing the HBM2 memory on that specific packaging.
I seriously doubt we would get an answer on that. You are going to have to go back to the block builder for that, depending on the design. And keep in mind here that AMD does not retail any of these cards; its partners do.
 
I have a feeling they will need to respond, because it raises a potentially valid unknown: with that particular package being lower height, it has ramifications for custom waterblocks and, more importantly (as it applies to more consumers), for stressing the HBM2 stacks by OCing them.
They contract out the substrate-GPU packaging work, so the responsibility is still on them before the partners.
The same way Nvidia had to respond when partners actually changed the reference design by switching the memory from Samsung to Micron (or whatever it was), and in that instance one could argue the onus was more on the partners.
Cheers
And I would suggest that this would be between them and the partners that AMD SELLS THE CARDS and/or GPUs to.
 
So the video I am linking is interesting: this guy had a Vega 56 beating a GTX 1080 in Dirt 4 by a decent amount, but he was using CMAA or no AA. Something in Vega seriously breaks when MSAA is used, however, and the FPS plummets.

 
It still is a heater! For people who pay 3x as much for electricity as you guys in the US do, that actually matters far more. 130-150 W more is just a no-go.
Power bills where I live can run upwards of $700 a month in the summer, and that is with the AC turned up to 80f. I actually pay attention to power draw and heat output.
 
It still is a heater! For people who pay 3x as much for electricity as you guys in the US do, that actually matters far more. 130-150 W more is just a no-go.

Unless you are gaming on it 24/7, you aren't going to notice much of a hit to your electric bill. A couple of bucks a month, maybe.
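To put some rough numbers on it (the 150 W figure is from the post above; the hours and rates are purely illustrative assumptions):

150 W × 4 h/day × 30 days = 18 kWh/month, which is about $2.16 at $0.12/kWh, or about $6.50 at a 3x rate of $0.36/kWh. Running it flat out 24/7 would be 108 kWh/month, roughly $13 and $39 at those same rates.

So for a typical gamer at US rates it really is a couple of bucks a month, while at 3x rates and heavy use it does start to add up.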
 
So on our side of the pond, the 56 is not listed yet, the 64 is over 650 euro, and the water-cooled ones are at about 790 euro. For perspective, the Zotac 1080 AMP! is at 550 euro and the cheapest Ti, the Gigabyte Gaming OC Black, is at 750 euro. Not sure what AMD wants to achieve with the 64, which is much more power hungry and much more expensive than proper AIB NV cards.
 
So the video I am linking is interesting: this guy had a Vega 56 beating a GTX 1080 in Dirt 4 by a decent amount, but he was using CMAA or no AA. Something in Vega seriously breaks when MSAA is used, however, and the FPS plummets.



He also shows that a Fury X is actually faster than a GTX 1070 with no AA in that game by around 6-7%, and in the 1060 vs 580 match-up the 580 is faster by 17-20%.
He then goes to another site that has Dirt 4 running 4x MSAA, and at that setting the Fury X is in front of the 1070 and Vega 56 is over 20% faster still, which he then confirmed against his own results from launch day, where he used 4x MSAA and had the 580 20% faster than the 1060.

So the only thing to note is that the game is weighted towards AMD, but AMD suffers when using 8x MSAA while responding well to CMAA.
I think he had to do the video because he had used different settings that skewed the results, and that generated comments from subscribers.
Cheers
 
He also shows that a Fury X is actually faster than a GTX 1070 with no AA in that game by around 6-7%, and in the 1060 vs 580 match-up the 580 is faster by 17-20%.
He then goes to another site that has Dirt 4 running 4x MSAA, and at that setting the Fury X is in front of the 1070 and Vega 56 is over 20% faster still, which he then confirmed against his own results from launch day, where he used 4x MSAA and had the 580 20% faster than the 1060.

So the only thing to note is that the game is weighted towards AMD, but AMD suffers when using 8x MSAA while responding well to CMAA.
I think he had to do the video because he had used different settings that skewed the results, and that generated comments from subscribers.
Cheers
It's still an obvious bug or flaw that it tanks badly at 8x MSAA. That is noteworthy.
 
It's still an obvious bug or flaw that it tanks badly at 8x MSAA. That is noteworthy.
I am not sure that much can be read into it, because to get that much more performance out of the engine relative to the Nvidia cards the game obviously has some hefty optimisation, and maybe that optimisation starts to fall off beyond 4x MSAA due to the internal engine design.
I mean, 17-20% more from the 580 over a 1060, and the Fury X outperforming a 1070; how is there a flaw in the game/Vega, apart from for Nvidia?
The 8x MSAA setting also affects the performance of the 580 and the Fury X, where they drop relative to Nvidia as well.
That said, 8x MSAA is rather excessive for 1440p anyway.
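As a rough back-of-the-envelope illustration of why 8x MSAA is so heavy (assuming a plain 32-bit colour + 32-bit depth target and ignoring the colour compression both vendors use):

2560 × 1440 pixels × 8 samples × (4 B colour + 4 B depth) ≈ 236 MB of render target to write and resolve per frame, versus ≈ 30 MB with no MSAA.

That is an eight-fold jump in raw sample traffic, regardless of vendor.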
Cheers
 
Last edited:
I am not sure that much can be read into it, because to get that much more performance out of the engine relative to the Nvidia cards the game obviously has some hefty optimisation, and maybe that optimisation starts to fall off beyond 4x MSAA due to the internal engine design.
I mean, 17-20% more from the 580 over a 1060, and the Fury X outperforming a 1070; how is there a flaw in the game/Vega, apart from for Nvidia?
The 8x MSAA setting also affects the performance of the 580 and the Fury X, where they drop relative to Nvidia as well.
That said, 8x MSAA is rather excessive for 1440p anyway.
Cheers
The MSAA issue appears in other games as well, not just this one game.
 
How many people out there (not just [H], but overall) are in the market for a Vega/1080 or Vega 56/1070 and own neither a G-Sync nor a FreeSync monitor?

This will be me next year. I will be looking for a high-refresh rate 4K monitor and a GPU to support it.
 
The MSAA issue appears in other games as well, not just this one game.
The only reason we used such a high level of MSAA in Dirt 4 is that the game was playable at 8x MSAA on the review card.
 