Radeon RX Vega 64 vs Radeon R9 Fury X Clock for Clock @ [H]

Excellent work. But doesn't being able to cope with higher clocks count for something? You can't clock the Fury to Vega speeds, can you?
 
Extremely well done article that explained quite a few things. I bet having double the VRAM helps in certain games as well, but that's a different subject.
 
Absolutely outstanding work, Brent and Kyle. I really hope there's some performance in Vega waiting to be unlocked.
 
Two thumbs up on this article.

I actually thought Vega would do worse, based on a number of opinions expressed on the forums. Brent did an outstanding investigation with a good number of data points. If only a few tests had been done, it could have looked like a 6%-8% gain or virtually nothing. In other words, just enough data to nail it down.

AMD (just in case AMD can respond here):
  • Can we expect significant performance gains from future drivers, as in 10% or more, in select current games like GTA 5 or others?
  • Also, are there gains from hardware capabilities in Vega that are not yet exposed or used by the drivers?
 
Awesome article guys! Very interesting stuff and you even went the extra mile to get the memory as close to parity as possible. Hard to argue with these results.
 
If you believe the whitepapers, there's a lot of latent tech buried in Vega. I think it's going to come down to which of two camps is correct: 1) RTG's driver team is going to unlock this latent tech transparently for existing titles allowing 'full' Vega to be experienced, or 2) Game developers are going to have to code explicitly for this latent tech for 'full' Vega to be experienced.

As far as power draw is concerned, not surprising at all. RTG tried to clock it as high as it could so it could trade blows with a 1080. It's no secret that a chip running 'within design parameters' is going to run more efficiently than a chip clocked balls to the wall.
As far as power draw is concerned, it seems to be something for seriously warm countries.
I'm down 67 watts, up 67 MHz average clock, and down a couple of dB of noise.
And I like my room quite warm....

Warmer climates = more heat = more voltage.
Also add in some headroom for bad PSU voltage ripple and you've got a seriously high stock voltage.

It was easier to downclock the Fury X than to try to overclock Vega 64 that high and introduce instability. I did not need that complication.
Any Vega 64 should do 1050 without anything done.
I tested 1125 on mine (Samsung) and 1100 on a V56 with the V64 BIOS, and yes, it's very stable.

Although the benefit of HBM overclocking plummets past 1050 MHz, suggesting that these cards are not as bandwidth starved as many "internet experts" seem to suggest.
There doesn't seem to be good scaling above 1650 MHz on the core either.
 
Great article.
But what is the deal with PCIe 3.0 dropping to 1.1?
I would think 3.0 vs. 2.0 vs. 1.1, etc. would be a pure BIOS setting that shouldn't change through software?
Honestly I don't know much about how this works, but it just seems like it shouldn't do that.
Any insight into this, from anyone, would be great.
 
I don't know about that; at 1050 my 64 LC locks up in Superposition, while 1020 is OK. It will depend on the card on hand.

Either way works to get parity on HBM bandwidth if stable.
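
For anyone who wants to put numbers on "parity on HBM bandwidth," here's a quick back-of-envelope sketch using the stock bus widths and memory clocks; the exact parity clock used in the article may differ from the 472 MHz shown here.

```python
# Rough sketch: theoretical peak bandwidth of the two cards' HBM setups.
# Stock reference specs; the parity clock is illustrative, not the article's exact setting.

def hbm_bandwidth_gbs(bus_width_bits, memory_clock_mhz):
    """Peak bandwidth in GB/s: bytes per transfer * double data rate * clock."""
    return bus_width_bits / 8 * 2 * memory_clock_mhz * 1e6 / 1e9

fury_x  = hbm_bandwidth_gbs(4096, 500)   # HBM1: ~512 GB/s
vega_64 = hbm_bandwidth_gbs(2048, 945)   # HBM2: ~484 GB/s

print(f"Fury X: {fury_x:.0f} GB/s, Vega 64: {vega_64:.0f} GB/s")
# Downclocking the Fury X HBM to ~472 MHz (or pushing Vega's HBM toward ~1000 MHz)
# brings the two within a few GB/s of each other:
print(f"Fury X @ 472 MHz: {hbm_bandwidth_gbs(4096, 472):.0f} GB/s")
```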
 
Very much appreciated Kyle and Brent, you guys deliver like always!

I would have more to add, but I'm getting ready for elk hunting. No time for thoughts, just action!
 

PCIe links drop down to slower modes to decrease power consumption.
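
If you want to watch this happen yourself on Linux, here's a minimal sketch that reads the link speed straight from sysfs. The device address is just an example placeholder (check `lspci` for yours), and it assumes a kernel recent enough to expose these attributes.

```python
# Minimal sketch (Linux only): compare current vs. maximum PCIe link speed of a GPU.
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:01:00.0")  # example address, adjust to your system

current = (dev / "current_link_speed").read_text().strip()
maximum = (dev / "max_link_speed").read_text().strip()
print(f"current: {current}, max: {maximum}")
# At idle the current speed often drops to 2.5 GT/s (gen 1.1 rates); under load
# it should climb back up to the slot's maximum.
```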
 
Looks like all the new hot features are either disabled or not working properly. Can a driver update solve this? We'll see.
 
I wonder if Microsoft will need to do an update to DX as well? Really, only AMD can answer this. Or it will just show up in driver releases.
 
I think all of that is wishful thinking. Let's just accept Vega for what it is: an overclocked Fiji with extra math capabilities. I don't think there will be a driver that magically unlocks all of these features. If it existed, AMD would have launched with it. You only get one launch.
 
All AMD needs to do is glue and hammer a Ryzen core inside the Vega core to help it run faster.
 
I'm sorry, but having re-read the article, I still can't see it. All I see is the odd line like 'All performance advantages Vega 64 has over Fury X is down to clock speed differences.' (from page 9).
Would you like me to come to your house and read it out loud for you? I guess you are asking me to cut and paste from the article here? Reading is fundamental.
 
Thanks, Kyle and Brent, for all these various AMD articles. I've been learning a lot about the newest gens from them. Having been team blue/green for so long, AMD is almost like a foreign language to me now, but I know that significant strides have been made in the last couple of years.

Found the logic of limiting HBM2 power consumption in order to maintain proper power for the rest of the board really interesting. This goes with the logic Kyle spoke of in terms of OC'ing VRAM and keeping it slightly lower than max to allow higher stable core clocks. Having practiced that now and enjoyed the benefits, all I can say is thanks; convincing others isn't so easy. I'm now also beginning to understand the relevance of the voltage tests you do across the board for mobos and OC'ing. It's all relative.

Hopefully AMD can get devs to use some of those unused features you mentioned to further awaken these sleeping dragons!
 

As much as I like to root for the underdog, I am just scratching my head trying to figure out what they were doing these past 2 years. Just waiting for HBM2... and call it a day?
 
I'm thinking Vega is a testing ground for HBM2 and a step up from Fury for those who need a bit more oomph on the AMD side. I expect the next gen to leverage their findings with Vega and take a more impressive leap in performance (or at least perf/watt).
 

I think we'd all LOVE it if AMD pulled a rabbit from a hat with a driver release and the Vega WTFPWND the 1080s. I know I would.

Just doesn't seem super likely. <shrug>


This discussion about a magic driver seems oddly reminiscent of the HD 2900 XT.
 
I love [H] for doing these kinds of reviews.

My thought (without having read a bunch of whitepapers) is that portions of the pipeline have had some tweaks. Some game engines will benefit more than others.

If the die shrink allowed for higher clocks, it is still an improvement.

I think the pipeline-type tweaks only result in a percent or two here and there in IPC increases. Worth doing, but it's just not like the old days where performance jumped 50% or more each generation.
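
To put rough numbers on that split (the clocks below are the stock reference values; the 2% IPC figure is just the "percent or two" guess from above, not a measured result), the clock bump does almost all the lifting:

```python
# Rough sketch: how much of Vega 64's gain over Fury X comes from clock vs. IPC.
fury_x_clock  = 1050   # MHz, Fury X reference clock
vega_64_clock = 1546   # MHz, Vega 64 reference boost clock
ipc_gain      = 1.02   # assumed "a percent or two", not measured

clock_ratio = vega_64_clock / fury_x_clock          # ~1.47x from clock alone
total_gain  = clock_ratio * ipc_gain                # ~1.50x combined
print(f"clock alone: {clock_ratio:.2f}x, with IPC tweak: {total_gain:.2f}x")
```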
 

I mean, they got 8% in some games, so there is definitely some improvement. If they can get that across the board, it would be pretty decent, but yeah, I'm still struggling to see what took them 2 years unless they had to completely re-lay the thing out to get it to clock at 14 nm... which could be the case.
 
Completely out-of-my-ass guess here, but I think they were blindsided by Pascal's performance. If you look at what Vega is, you realize it sacrifices cooling and power to go toe to toe with the GTX 1080, but when you use the lower power BIOS and undervolt Vega, it suddenly drops power and heat incredibly and still performs well, though it can no longer match a 1080. I think those lower power requirements were where Vega was originally designed to sit, but once the GTX 1080 came out they did not want to be seen as behind NVIDIA yet again.
 

I was just about to say pretty much the same. I also think AMD was forced to increase the voltage and clock speeds on Vega to meet its performance targets at the cost of power draw. As you point out, Vega suddenly becomes much more power efficient once it's undervolted.
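
A quick back-of-envelope sketch of why undervolting pays off so much: dynamic power scales roughly with V² × f in the classic CMOS approximation. The voltage and clock numbers below are purely illustrative, not measured Vega values.

```python
# Relative dynamic power under the common approximation P ~ C * V^2 * f.
def relative_dynamic_power(v_new, v_ref, f_new, f_ref):
    return (v_new / v_ref) ** 2 * (f_new / f_ref)

# e.g. dropping from 1.20 V to 1.05 V at nearly the same clock (illustrative numbers):
ratio = relative_dynamic_power(1.05, 1.20, 1560, 1590)
print(f"~{(1 - ratio) * 100:.0f}% less dynamic power")  # roughly 25% in this example
```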
 
That was an interesting comparison. There must have been some architecture improvements over the Fury X, as there is no way in hell that the Fury X would reach Vega clock speeds. Whatever we all think about the Vega series GPUs, they are more competitive now than they have been for a long time, which gives people a choice in the mid to high-end cards.
 
Typically Macallan 18. (I go for Highlands usually.) Lately, though, I've gotten back into wine a bit more, but it doesn't go quite as well with the cigars. (Well, depending on the wine.)

nice single malt!
 
As long as someone reads the part of the article to do with clock speeds I'm all for a gathering. :D (gotta fly to Seattle though...)
 

There are added pipeline stages to help improve clock speed, but that also has a downside: latency.
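
A toy illustration of that trade-off (all numbers made up for the example, nothing to do with Vega's actual pipeline parameters): splitting a fixed chunk of logic into more stages raises the achievable clock, but total latency creeps up because every extra stage adds register overhead.

```python
# Toy model: deeper pipelines -> higher clock, slightly worse end-to-end latency.
def pipeline(logic_delay_ns, stages, reg_overhead_ns=0.1):
    cycle = logic_delay_ns / stages + reg_overhead_ns   # ns per stage
    fmax_mhz = 1000.0 / cycle                           # achievable clock
    latency_ns = cycle * stages                         # time through the whole pipe
    return fmax_mhz, latency_ns

for stages in (5, 10, 20):
    fmax, lat = pipeline(10.0, stages)
    print(f"{stages:2d} stages: ~{fmax:4.0f} MHz, latency ~{lat:.1f} ns")
#  5 stages -> ~ 476 MHz, ~10.5 ns
# 10 stages -> ~ 909 MHz, ~11.0 ns
# 20 stages -> ~1667 MHz, ~12.0 ns
```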
 

Node shrink, lots and lots of added pipeline stages, plus voltage = more clock speed.

It's a brute-force approach, and that has worked in the past.
 
I am glad you read my comments on the previous reviews and tested it out yourselves. I am quite surprised really.
That was unbelievably well done.
 
Given the size of AMD and their lack of R&D budget, I don't think it was unreasonable for AMD to spend 2 years on a die shrink, a deeper pipeline, and additional compute logic. For example, if we just look at the die shrink, all the basic logic gates and IP modules have to be redesigned, tested, and confirmed to work with the new process. All VLSI layout rules and parameters have to be tested and changed for every single transistor, gate, logic circuit, and IP module. This requires close collaboration with GF, which may not be as competent as TSMC in terms of spitting out ready-made IP libraries for use by AMD.

Just remember, behind all the abstraction, the damn thing is still physically laid out transistor by transistor by some poor engineers working overtime at GF and AMD (okay, not quite as dire, given that at least some of the layout is automated).
 