Radeon RX Vega 64 vs Radeon R9 Fury X Clock for Clock @ [H]

I think all of that is wishful thinking. Let's just accept Vega for what it is: an overclocked Fiji with extra math capabilities. I don't think there will be a driver that magically unlocks all of these features. If it existed, AMD would have launched with it. You only get one launch.
Not wishful thinking - it is an unanswered question about the gaming side. You can think what you want; I am more interested in facts about its real capability, is all.
 
You have all the facts of its real capability in front of you. What it is now is its real capability; anything else does not exist. No driver is going to unlock untapped potential en masse.
 
So is that wishful thinking on your part? How do you know this? You may be right, I do not know.
 
It's not; it is based on facts and history. No company is going to launch a product with all of its performance-enhancing capabilities disabled. That is nonsensical. You only get one launch, and you are going to put your best foot forward. It is rare to see a driver grant a 10% performance increase across the board, if it even grants that in a single game. Looking at Vega's heat output and power draw, it is self-evident AMD has pushed the card to its thermal and power limits to compete with a GTX 1080; if they had the ability to unlock hardware potential through a magic driver, they would have done that rather than sacrificing thermals and power draw.
 
If you're talking about past cards, that hypothesis has been tested here at [H]ard. As for Vega, I doubt it will have huge gains from driver updates, but a few percent here and there for games which were already out before its launch isn't an unreasonable expectation, imo.
 
Thanks for this analysis, Kyle. I'm wondering, is it possible to take a Vega 56 or 64 and disable some shaders to see what the optimal clock speed/shader count is? I would imagine that if you had, say, a Vega 32, but with the same power headroom as the Vega 64 (as we've seen with the Vega 56 flashed with a Vega 64 BIOS), you might be able to push that Vega 32 up to nearly 2000 MHz or so? Just curious to see what would happen to the game benchmarks on such a card.
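Back-of-the-envelope, the raw shader math for that trade-off comes out identical either way. Here is a quick sketch; the "Vega 32" shader count and both clock speeds are purely hypothetical numbers for illustration:

```cpp
#include <cstdio>

// Rough peak FP32 throughput: shaders x 2 ops per clock (FMA) x clock.
// All figures are illustrative; "Vega 32" is a hypothetical cut-down part.
double peak_tflops(int shaders, double clock_ghz) {
    return shaders * 2.0 * clock_ghz / 1000.0;  // Gops/s -> TFLOPS
}

int main() {
    std::printf("Vega 64 : 4096 SP @ 1.0 GHz = %.1f TFLOPS\n", peak_tflops(4096, 1.0));
    std::printf("Vega 32 : 2048 SP @ 2.0 GHz = %.1f TFLOPS\n", peak_tflops(2048, 2.0));
}
```

Same paper throughput either way, so any real-world difference in such a test would come from the front end, fill rate, and memory bandwidth rather than raw shader math.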
 
Cannot modify VBIOS at this time.
 
If you're talking about past cards, that hypothesis has been tested here at [H]ard. As for Vega, I doubt it will have huge gains from driver updates, but a few percent here and there for games which were already out before its launch isn't an unreasonable expectation, imo.
There have been instances of good gains: 30% for Fallout 4 and 22% for Doom. It looks like the Fallout 4 gain was bug fixes, while Doom was exposing hardware features of the GPU. Overall it may not look impressive, but certain titles may see big improvements; an average of 10% may not seem like much, but individual titles can gain far more, especially titles that are popular and on your list to play. If you go back to the Hawaii era, that was a more impressive period of driver updates increasing performance, and the hardware was really just more future-proof as well. Now GCN looks to be behind the times.
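To make the averaging point concrete, here is a quick sketch; the 30% and 22% figures are the ones quoted above, and the rest of the per-title numbers are made up for a hypothetical test suite:

```cpp
#include <cstdio>

int main() {
    // Per-title driver gains in percent: 30 and 22 are the quoted Fallout 4
    // and Doom numbers; the remaining small gains are invented.
    double gains[] = {30.0, 22.0, 2.0, 3.0, 1.0, 0.0, 2.0, 0.0};
    double sum = 0.0;
    const int n = sizeof(gains) / sizeof(gains[0]);
    for (int i = 0; i < n; ++i) sum += gains[i];
    std::printf("Average gain: %.1f%%\n", sum / n);  // ~7.5%, dominated by two titles
}
```

A single-digit average can hide a couple of titles that jumped by a fifth or a third.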
 
Lovely work from the H Team! Thanks for doing this work.

So I was right. Vega is a clock-speed-boosted/factory-overclocked part, and I will say this again: I would put money on the fact that AMD is lying about what Vega is, and that Vega is not even a die shrink; it's manufactured on the same process as the previous gen. This explains the performance, the power, and the temps.

Vega is a lie, Raja is a liar.
 
But it has a smaller die and a buttload more transistors?
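For what it's worth, the commonly quoted figures do point to a much denser die. Treat the numbers as approximate and the calculation as a sanity check, not proof of the process node:

```cpp
#include <cstdio>

int main() {
    // Commonly quoted figures, treated as approximate:
    // Fiji   : ~8.9 billion transistors on ~596 mm^2 (28 nm)
    // Vega 10: ~12.5 billion transistors on ~486 mm^2 (quoted as 14 nm)
    double fiji = 8.9e9 / 596.0;   // transistors per mm^2
    double vega = 12.5e9 / 486.0;
    std::printf("Fiji    density: %.1f MTr/mm^2\n", fiji / 1e6);
    std::printf("Vega 10 density: %.1f MTr/mm^2\n", vega / 1e6);
    std::printf("Ratio          : %.2fx\n", vega / fiji);
}
```

Roughly 1.7x the density, which would be hard to get from layout optimisation and denser libraries alone on the same process.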
 
But does it? How can any of us prove that it's actually different, and what process it's been made on? I'm not trying to say that they did not try to optimise the layout; that would give a smaller die on the same process due to better transistor placement, a more optimised design layout, use of denser libraries, etc.

You tell me why it takes nearly 4 billion more transistors to add 2% performance. What are those 4 billion transistors doing, apart from using power and getting hot? They sure as hell aren't doing anything else. Hell, AMD could tell you it has 20 billion transistors, and there's nothing you can do to prove it.
 
Increased clock speeds. AMD stated they used 3 billion transistors or so to increase clocks: longer pipelines, more stages.
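As a toy illustration of why more stages buy clock speed: every number below is invented, it just shows the stages-versus-register-overhead trade-off, not AMD's actual pipeline:

```cpp
#include <cstdio>

// Toy model: each pipeline stage handles a slice of the logic delay plus a
// fixed register/latch overhead, so more stages -> shorter cycle -> higher clock.
double max_clock_ghz(double total_logic_ns, int stages, double overhead_ns) {
    double cycle_ns = total_logic_ns / stages + overhead_ns;
    return 1.0 / cycle_ns;
}

int main() {
    const double logic = 3.0;     // ns of logic on the critical path (invented)
    const double overhead = 0.1;  // ns of flip-flop overhead per stage (invented)
    std::printf("4 stages: %.2f GHz\n", max_clock_ghz(logic, 4, overhead));
    std::printf("8 stages: %.2f GHz\n", max_clock_ghz(logic, 8, overhead));
}
```

The cost is extra flip-flops (transistors) and more cycles of latency in flight, which is the sort of thing a driver then has to help hide.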
 
So there you have it. It's the same chip with a few very minor tweaks to improve clock speed and yield.

Think about it: it explains the increased power usage and heat output of the chip. There is no new architecture here at all, which is what I have been saying ever since this thing came out.
 
Yep, the end result is just clocks. The rest of the transistors go to the same front-end changes as Polaris, which don't give Polaris much extra overall in current games, so those front-end changes won't give much benefit to Vega right now either.
 
So everything we have been told about Vega is bullshit. Raja said it was a new design, and even said during the press event that he had given the driver team hell because it was such a radically new architecture, and that they had to basically come up with new drivers from scratch.

Why would AMD need such radical changes to their driver software, for essentially a minor pipeline change which only serves to increase clock speed?

This stinks of marketing bullshit, making people think that once they bought a Vega card, they would get some kind of magic driver in a couple of months that would unlock incredible performance increases, due to Vega having some kind of voodoo in those extra transistors.

At least maybe we now have the reason for Raja's absence and possible demotion.
 
Well, RPM isn't BS; everything else looks to be the same stuff the compute units can do anyway. So, yeah, something I kinda surmised last year: why have separate paths that do essentially the same thing (worse yet, folks at AMD say it's actually harder to implement those "new" features via drivers)? Because they aren't separate, lol, just the same stuff talked about differently.

Pretty much, they wanted to show they can do the same things nV's hardware could do. Yeah, they can do some things similarly with a buttload of workarounds. If they do the work and it is functional in the majority of cases, it would be good for them, and not only that, they could probably get these features to work on Fiji too (outside of DSBR and RPM). But at this point I don't see that happening; too many "if this, then that"s, lol.

Drivers, well, drivers have to be new; just for the increased latency in the pipelines alone they would have to redo quite a bit of the driver code to hide it.

I think you are right on Raja's absence: it's the marketing BS. It really left a pretty bad taste in people's mouths, with the pricing, all the hype, etc. Not for the product itself; it seems to be more to do with the public perception of what is going on with RTG. Although the product shows this all by itself, without the fanfare marketing it wouldn't look as bad.
 
Well, RPM isn't BS.

Nope, but we are talking about two 16-bit instructions being combined into a single 32-bit instruction, is that right? That has to be a very minor hardware change at best, or minor driver work at worst. Plus it's useless to most end users, and it needs developer support to make it work. It's not going to be mainstream anytime soon, unless there is a need for it in the console market.

Isn't this the only actual hardware "feature" that has been added to Vega, outside of the changes to the video decode engine? The rest has been debunked as marketing BS, because none of it seems to do anything for anyone.
 
The ALUs have to be changed for that; it needs two 16-bit control lanes to do it that way. Logic gates and multiplexers have to be changed too. Fairly significant changes, but overall not much use yet. This also influences how the register file is set up, and it needs more cache too.
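In software terms the packed-math idea looks roughly like the sketch below: two 16-bit values riding in one 32-bit register and processed together. This is plain C++ using integers to show the concept, not GPU ISA or AMD's actual implementation:

```cpp
#include <cstdint>
#include <cstdio>

// Pack two 16-bit integers into one 32-bit word.
uint32_t pack(uint16_t lo, uint16_t hi) {
    return static_cast<uint32_t>(lo) | (static_cast<uint32_t>(hi) << 16);
}

// Add both 16-bit halves in one pass over a 32-bit value (SWAR-style).
// Real packed math keeps the halves from carrying into each other in
// hardware; here we simply mask to get the same effect.
uint32_t packed_add(uint32_t a, uint32_t b) {
    uint32_t lo = (a + b) & 0x0000FFFFu;
    uint32_t hi = ((a >> 16) + (b >> 16)) << 16;
    return hi | lo;
}

int main() {
    uint32_t a = pack(100, 7);
    uint32_t b = pack(23, 5);
    uint32_t c = packed_add(a, b);
    std::printf("lo = %u, hi = %u\n", c & 0xFFFFu, c >> 16);  // 123 and 12
}
```

The point is two results per 32-bit lane per operation, which is why it can double FP16 throughput but does nothing unless the shader actually uses 16-bit data.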
 
Interesting stuff, but do you think we will see it in a game that people want to play within the next year? Is it even useful to PC game developers?
 
Thanks a bunch for your hard work, guys, appreciate it. This was a rather interesting read. :)
 
By the time any extra performance is unlocked, the overall performance in the then current games will demand another upgrade.

You're always chasing with AMD it seems. :confused:
 
Now do this with Pascal vs Maxwell

Yeah, nVidia did the same move.

I think AMD claimed way better IPC though?

It is an interesting experiment anyways, but what really matters is performance + features vs cost.
 
At least nVidia changed the manufacturing process, and got legitimate performance increases and thermal/power savings along with it, unlike AMD.
 
AMD got legitimate performance increases (35%) compared to the Fury X (stock to stock). When nVidia made this move, Pascal simply clocked way higher. We don't know if that's a fab problem, an AMD design problem, or a bit of both.
 
Sucks that they were stuck on the larger manufacturing node for so long. The next update should come with a better understanding of the smaller process.
I look at Vega as something they would have released a long time ago if they hadn't been stuck on 28 nm for so long.
 
Hello Brent,

Thank you for taking the time to perform all the tests and for the well-put-together article.
I have a small request: would it be possible to run the same head-to-head with Forza 7? I would be curious to see the results, since this game has been a hot topic for Vega lately.

Thank you,
 