Vega Rumors

I ain't no fake, phony, or fraud, as my wife calls Miss Cleo. She had a fake West Indian accent. I push science, not mysticism. I may be wrong on this, but I believe I'll be closer to the truth than you think, and closer than most of the idiots who believe you can compare FE performance to either the Nvidia 1080, the 1080 Ti, or RX Vega. They are different animals. One is a development platform, the other a gaming platform. FE was never designed to play games for fun. The gaming mode with its older drivers is for backward compatibility for game developers. It is astounding how few people care to understand this and still try to say it proves RX Vega is a dud. I will not be the person who has to eat shit here.

Oh boy, you are full of it...

I've never seen so much nonsense packed into a single post...
 
This thread is pretty off the rails. It could stand to be closed...

Heck, close the AMD Flavor sub until Vega hits and people have something real to talk about.

We have something very real - Vega FE - which will be close to RX Vega. It's been beaten to death, but it's still interesting to hear the differing predictions.
 
I ain't no fake, phony, or fraud, as my wife calls Miss Cleo. She had a fake West Indian accent. I push science, not mysticism. I may be wrong on this, but I believe I'll be closer to the truth than you think, and closer than most of the idiots who believe you can compare FE performance to either the Nvidia 1080, the 1080 Ti, or RX Vega. They are different animals. One is a development platform, the other a gaming platform. FE was never designed to play games for fun. The gaming mode with its older drivers is for backward compatibility for game developers. It is astounding how few people care to understand this and still try to say it proves RX Vega is a dud. I will not be the person who has to eat shit here.


I think you are already eating the shit that AMD has been feeding you and trying to feed everyone else ;)

Reality is going to be far from that.

You just made up a reason for AMD's Vega FE gaming disaster. Why do people do this? Why do you have to make up an excuse that AMD themselves didn't make? Backward compatibility in gaming mode, lol. Wow, that's the first I've heard of that in all my years...

Have you ever noticed that when they come out and say something, it's usually "OK, there's a problem, we're going to fix it," but the fix always falls short of consumers' expectations?

If they don't say anything, then there really is a huge problem, and most likely nothing will change.

Now, Raja and a couple of AMD guys stated RX Vega will be faster than Vega FE, but in reality not by much.

There are games where Vega FE barely outdoes the 1070, games where it's under the 1070, games where it's at the 1080, and games where it's just above the 1080. So if we take the median right now, that's where it's going to end up even after the "proper" gaming drivers: between the 1070 and the 1080, most likely. Give them the benefit of the doubt, you say? OK, 1080. Hasn't changed anyone's mind, has it, Os2wiz?
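
To put toy numbers on that median logic (these per-game ratios are made up purely to illustrate the argument, not real benchmark data):

```python
# Toy illustration of the "take the median" argument above.
# These per-game ratios are invented (1.00 = GTX 1070-level,
# ~1.25 = GTX 1080-level); they are not real benchmark numbers.
from statistics import median

vega_fe_vs_1070 = [0.95, 1.02, 1.10, 1.18, 1.25, 1.28]
print(median(vega_fe_vs_1070))  # ~1.14 -> lands between the 1070 and the 1080
```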


Do you realize it's a dud because it's late and super power-hungry for what it's offering?

Shit, at least Fermi was top dog in performance for the 300 watts it pulled! And it was still a dud; it hurt nV.

I don't understand how anyone can expect it not to be a dud. That's why AMD has been taking so long to get it out: they know it's not even worth releasing, but at least they can get some of their money back, however little that is.
 
Vega will be in a lot of trouble once Volta is released.

If the previous pattern holds:

GeForce GTX 1080 Ti --> GeForce GTX 1170

GeForce GTX 1080 --> GeForce GTX 1160

___________________________________

That means the liquid-cooled Radeon RX Vega would be competing with the GeForce GTX 1160.

Of course, the former is much more expensive to make and consumes much more power.

I am not sure what AMD is supposed to do.

Pack up and go home?
 
I don't understand how anyone can think some miracle drivers will save Vega. How many times have people thought drivers would save AMD? How many times has it panned out? Zero. We've seen the hardware; it's exactly the same in RX, and it's not very good.

The other thing I find dumbfounding about AMD is this: when a developer releases a buggy, unoptimized product and tries to fix it many months later, they get torn apart for being terrible and lazy. When AMD releases a buggy, unoptimized product and tries to fix it way down the road, they get praised for being forward-looking and it gets called stupid things like "fine wine." It's the exact same thing: you're paying to beta test their junk. If AMD made games they'd be down at the bottom with EA and Ubisoft.
 
They haven't released the product though. Just engineering samples for developers. It's like playing a beta and not expecting bugs and low performance. Put the hardware out there so devs can start testing, patching, and experimenting. Devs don't need the final drivers for most of that, just a stable platform.

The hardware I expect to be similar, but RX faster out of the box. We've already seen 1600MHz sustained at around 300W. New drivers should help FE as well as RX/WX.

The release wouldn't have been delayed a month if the expected gains were minimal at best. So it's either a hardware bug we haven't seen, or drivers. Given all the changes, drivers make sense, and SIGGRAPH could bring Vulkan 1.1 and DX's SM6.0/6.1 compilers, which AMD likely targeted.
 
They haven't released the product though. Just engineering samples for developers. It's like playing a beta and not expecting bugs and low performance. Put the hardware out there so devs can start testing, patching, and experimenting. Devs don't need the final drivers for most of that, just a stable platform.

The hardware I expect to be similar, but RX faster out of the box. We've already seen 1600MHz sustained at around 300W. New drivers should help FE as well as RX/WX.

The release wouldn't have been delayed a month if the expected gains were minimal at best. So it's either a hardware bug we haven't seen, or drivers. Given all the changes, drivers make sense, and SIGGRAPH could bring Vulkan 1.1 and DX's SM6.0/6.1 compilers, which AMD likely targeted.


Hardware bugs this late in the game? Nothing will help them there, right? As for drivers: as explained, if they hadn't had time to get all the functionality done prior to tape-out, they'd be screwed. They'd literally be playing with fire if they didn't know at that point that the drivers would be fully functional with the silicon, because any bugs in the silicon would have been caught in emulation.

So you are saying drivers aren't ready, but they are going to get other API functionality up and running before the drivers are ready for current API functionality? Doesn't make sense, does it?

PS: we haven't seen 1600MHz sustained at 300W, not with air cooling at least. With water cooling, it's above 300 watts to sustain 1600MHz.
 
Vega will be in a lot of trouble once Volta is released.

If the previous pattern holds:

GeForce GTX 1080 Ti --> GeForce GTX 1170

GeForce GTX 1080 --> GeForce GTX 1160

___________________________________

That means the liquid-cooled Radeon RX Vega would be competing with the GeForce GTX 1160.

Of course, the former is much more expensive to make and consumes much more power.

I am not sure what AMD is supposed to do.

Pack up and go home?


Vega is in trouble right now with Pascal, lol. Volta is just a big FU.
 
Hardware bugs this late in the game? Nothing will help them there, right? As for drivers: as explained, if they hadn't had time to get all the functionality done prior to tape-out, they'd be screwed. They'd literally be playing with fire if they didn't know at that point that the drivers would be fully functional with the silicon, because any bugs in the silicon would have been caught in emulation.

So you are saying drivers aren't ready, but they are going to get other API functionality up and running before the drivers are ready for current API functionality? Doesn't make sense, does it?

PS: we haven't seen 1600MHz sustained at 300W, not with air cooling at least. With water cooling, it's above 300 watts to sustain 1600MHz.

Based on PCPer's review of the water-cooled card, overclocking it to 1712MHz doesn't seem to yield much of a performance boost. I guess that's a limitation of GCN?
 
Haven't read that one yet, but that's odd, since Gamers Nexus did get a nice performance boost. Nothing that changes its standing, but still.

OK, just skimmed through it. Yeah, pretty much the same thing: doesn't change its standing, but decent improvements.

Yeah, I think you are right; the improvements seem to come purely from the clock speed increases and the ability to maintain its boost. So roughly a 300MHz increase (including being able to boost without being thermally limited) is showing ~15% improvement in performance. So it's being held back by the uarch.
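
Rough back-of-envelope on that claim. The baseline sustained clock of ~1440MHz below is an assumption for illustration, not a measured figure:

```python
# Ideal clock scaling vs. the observed ~15% gain.
# Assumed baseline of ~1440MHz sustained before the overclock (hypothetical).
baseline_mhz, oc_mhz = 1440, 1712
ideal_gain = oc_mhz / baseline_mhz - 1
print(f"ideal: {ideal_gain:.1%} vs observed: ~15%")  # ideal ~18.9%
# Falling short of even this modest ideal points at a non-clock bottleneck.
```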
 
any bugs in the silicon would have been caught in emulation.
Not necessarily, as it's still possible to have odd manufacturing defects that aren't predictable. Rare, but it can happen. I don't think hardware is the issue here.

So you are saying drivers aren't ready, but they are going to get other API functionality up and running before the drivers are ready for current API functionality? Doesn't make sense, does it?
The new compilers are backwards compatible. Microsoft is moving to the LLVM toolchain and Khronos is already there. It wouldn't be unreasonable to focus resources there as opposed to tools in the process of being phased out. The only problem is those tools aren't readily available yet. Most of those features are the intrinsics AMD has supported for a while.

For Linux drivers, AMD has been pushing a unified back end that includes Windows support. That should be built on LLVM. Just earlier there was a patch temporarily disabling VGPR indexing because it regresses performance in OpenGL on LLVM 5, requiring explicit programming of "temporaries". That could very well be the register caching I mentioned before: reducing power usage with no performance impact from less data shuffling, or possibly improving performance by saving bandwidth.

PS: we haven't seen 1600MHz sustained at 300W, not with air cooling at least. With water cooling, it's above 300 watts to sustain 1600MHz.
There was one guy who simply undervolted to 1100mV and was around there. Better cooling with less leakage would put it a bit lower. Then halving the RAM should put it below. That's still not allowing for the register change I mentioned above. Or lower core clocks and higher memory, if that works better with 8GB.

Based on PCPer's review of the water-cooled card, overclocking it to 1712MHz doesn't seem to yield much of a performance boost. I guess that's a limitation of GCN?
That won't necessarily tell you much. If they hit a thermal limit, the hardware can just idle at high clocks, inserting waits to cut power and heat. Or, in the case of memory, deliver less bandwidth as it refreshes more frequently. It's the same thing Pascal does with thermal limits, and it could vary by core with all the sensors if they had hotspots.

The entire issue may be a result of less bandwidth with 8-Hi HBM from refreshing more frequently. Even raising clocks/temperature on the memory could harm performance. That will be less of an issue with 4-Hi. Same reason we never saw the larger HBM1 stacks: 8GB on Fury X could have made a significant difference. Now we have Vega using 8-Hi stacks that don't exist in catalogs.
 
Not necessarily, as it's still possible to have odd manufacturing defects that aren't predictable. Rare, but it can happen. I don't think hardware is the issue here.

It's very rare that that will ever happen, especially on a node that's tested and proven...


The new compilers are backwards compatible. Microsoft is moving to the LLVM toolchain and Khronos is already there. It wouldn't be unreasonable to focus resources there as opposed to tools in the process of being phased out. The only problem is those tools aren't readily available yet. Most of those features are the intrinsics AMD has supported for a while.

Intrinsics won't be added to Vulkan in a general sense unless both IHVs support them. Similar to OGL vendor-specific instructions.

It would be a BIG mistake for AMD not to focus on current API functionality vs upcoming functionality. Great way to stunt their own growth. Just like they did with DX11 vs LLAPIs. What a screwup that was.

For Linux drivers, AMD has been pushing a unified back end that includes Windows support. That should be built on LLVM. Just earlier there was a patch temporarily disabling VGPR indexing because it regresses performance in OpenGL on LLVM 5, requiring explicit programming of "temporaries". That could very well be the register caching I mentioned before: reducing power usage with no performance impact from less data shuffling, or possibly improving performance by saving bandwidth.

Nope :)
There was one guy who simply undervolted to 1100mV and was around there. Better cooling with less leakage would put it a bit lower. Then halving the RAM should put it below. That's still not allowing for the register change I mentioned above. Or lower core clocks and higher memory, if that works better with 8GB.


The reason why AMD and nV release at certain specifications, and why some people can undervolt, comes down to one thing: the silicon lottery. AMD and nV both rate chips based on overall manufacturing and what most chips can do stably.
Talking about one-offs means nothing in the end. This is why, with Polaris, we didn't see AMD change their core voltage and power draw after the launch fiasco: most chips were rated to do a certain level of performance at certain amps, volts, and watts.
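
A toy way to picture that bell curve and why the stock voltage ends up conservative (all numbers below are invented for the sketch):

```python
# Toy binning sketch: stock voltage gets set where the vast majority of
# simulated dies are stable, so better-than-average dies have undervolt
# headroom. Every number here is invented for illustration.
import random

random.seed(42)
# Minimum stable voltage per die at the target clock, normal-ish spread.
dies = sorted(random.gauss(1.10, 0.04) for _ in range(10_000))
stock_v = dies[int(0.99 * len(dies))]  # cover ~99% of dies
print(f"stock voltage ~{stock_v:.3f} V")
print(f"median die stable down to ~{dies[len(dies) // 2]:.3f} V")
```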

That won't necessarily tell you much. If they hit a thermal limit, the hardware can just idle at high clocks, inserting waits to cut power and heat. Or, in the case of memory, deliver less bandwidth as it refreshes more frequently. It's the same thing Pascal does with thermal limits, and it could vary by core with all the sensors if they had hotspots.

In different applications it should vary; that didn't really happen with the water-cooled Vega FE, so... And you have two reviewers testing it out. Granted, GN's was a Frankenstein setup, but still.

The entire issue may be a result of less bandwidth with 8-Hi HBM from refreshing more frequently. Even raising clocks/temperature on the memory could harm performance. That will be less of an issue with 4-Hi. Same reason we never saw the larger HBM1 stacks: 8GB on Fury X could have made a significant difference. Now we have Vega using 8-Hi stacks that don't exist in catalogs.

HBM1 didn't have higher stacks because it wasn't designed for them. Either they went wider, or nothing at all.
 
One thing I got from the PCPer article on the H2O-cooled FE edition is that in the synthetic tests (3DMark, Unigine Heaven) Vega is not much faster than Fury X. In Firestrike Extreme it is 34.5% faster, in Firestrike Ultra 31.2% faster, and in Heaven only 23.3% faster. These are all well below simple clock scaling from Fury X to Vega (1050MHz to 1600MHz), which should give about a 52% performance improvement. So something is seriously wrong with Vega. It really makes you wonder what they have been doing all this time with Vega.
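
The 52% figure is just the straight clock ratio, assuming performance scaled linearly with core clock:

```python
# Straight clock-ratio math from the post: Fury X at 1050MHz vs Vega at 1600MHz.
fury_x_mhz, vega_mhz = 1050, 1600
print(f"ideal clock scaling: {vega_mhz / fury_x_mhz - 1:.1%}")  # ~52.4%
# The measured synthetic gains (34.5%, 31.2%, 23.3%) all fall well short.
```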
 
One thing I got from the PCPer article on the H2O-cooled FE edition is that in the synthetic tests (3DMark, Unigine Heaven) Vega is not much faster than Fury X. In Firestrike Extreme it is 34.5% faster, in Firestrike Ultra 31.2% faster, and in Heaven only 23.3% faster. These are all well below simple clock scaling from Fury X to Vega (1050MHz to 1600MHz), which should give about a 52% performance improvement. So something is seriously wrong with Vega. It really makes you wonder what they have been doing all this time with Vega.

If you look at the PCGH test using the Beyond3D suite, there are obvious bottlenecks: although the computational output is doubled, some things are only better by 16 percent compared to Polaris. For the most part there is about a 40% increase in output vs Polaris outside of the ALUs. This is why we are not seeing anything close to linear scaling with clocks.

This makes it a very unbalanced architecture. Bandwidth is the biggest bottleneck right now. It's a lot worse than Fury X.

Compare this to the GTX 1080 Ti, which leads in almost every metric, and even the GTX 1080. What you see with Pascal is relatively linear scaling between the GTX 1080 and the GTX 1080 Ti in raw performance in most categories. The GTX 1080 Ti performs where you expect it to for the step up from a GTX 1080.

Compare the scaling in raw performance from Polaris to Vega and you will see why Vega is at best trading blows with the GTX 1080 and not close to the GTX 1080 Ti. GCN is reaching its limit, and the bottlenecks are becoming more and more difficult to skirt around.
 
One thing I got from the PCPer article on the H2O-cooled FE edition is that in the synthetic tests (3DMark, Unigine Heaven) Vega is not much faster than Fury X. In Firestrike Extreme it is 34.5% faster, in Firestrike Ultra 31.2% faster, and in Heaven only 23.3% faster. These are all well below simple clock scaling from Fury X to Vega (1050MHz to 1600MHz), which should give about a 52% performance improvement. So something is seriously wrong with Vega. It really makes you wonder what they have been doing all this time with Vega.


Nothing really wrong with it, because if we look at Fiji vs Hawaii you see the same scaling issues. Bottlenecks are what's hurting it; GCN has been on its last legs for two generations now, with just nowhere left to go.
 
Well, looks like I was wrong. I thought it would get just above 1080 performance for cheap...

I can admit when I'm wrong! God frigging dammit, AMD.


edit: too hot, too late, too expensive

wtf
 
Intrinsics won't be added to Vulkan in a general sense unless both IHVs support them. Similar to OGL vendor-specific instructions.
DirectX is already adding them, but they are standard instructions and not intrinsics now. Why wouldn't Vulkan do the same? In fact, I'm fairly sure they are already there as experimental extensions. Doom is already using them.

So unless some IHVs don't plan on supporting SM6, and possibly a Vulkan update, they will be supported. Most were widely used on consoles, so they facilitate porting.

HBM1 didn't have higher stacks because it wasn't designed for them. Either they went wider, or nothing at all.
There were 8-Hi stacks in theory, but they were never used, likely for a number of technical reasons.
 

[Charts: Gamers Nexus Vega FE undervolt results: current draw, thermals, frequency, Firestrike Ultra, TimeSpy, Firestrike Extreme, Ghost Recon Wildlands 4K, Doom 4K]


http://www.gamersnexus.net/guides/2...tion-undervolt-benchmarks-improve-performance
 
Nice, thanks for the linky.


typical AMD.


Every single AMD card I have ever had responds the exact same way. Look at mining: ~40% reduction in power, same or better result in the workload. I believe razor1 said it... limitations of the architecture at play.
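
The usual hand-wavy physics behind that: dynamic power scales roughly with V²·f, and leakage also drops with voltage, so total savings can be bigger than the square alone. The voltages below are hypothetical, just to show the shape of the math:

```python
# Rough dynamic-power scaling: P_dyn ~ C * V^2 * f.
# Voltages are hypothetical; same clock and workload assumed (f ratio = 1).
v_stock, v_undervolt = 1.20, 1.00
p_ratio = (v_undervolt / v_stock) ** 2
print(f"dynamic power at undervolt: {p_ratio:.0%} of stock")  # ~69%
# Add reduced leakage on top and a ~40% total saving isn't far-fetched.
```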
 
DirectX is already adding them, but they are standard instructions and not intrinsics now. Why wouldn't Vulkan do the same? In fact, I'm fairly sure they are already there as experimental extensions. Doom is already using them.

So unless some IHVs don't plan on supporting SM6, and possibly a Vulkan update, they will be supported. Most were widely used on consoles, so they facilitate porting.


There were 8-Hi stacks in theory, but they were never used, likely for a number of technical reasons.


Vulkan and the OGL committee both work the same way. If both IHVs have the same features (extensions), then they will add them in. If not, they remain IHV-specific extensions. Simple. It's not their top priority, nor should it be AMD's, because if that is their top priority, they are screwing themselves over.

Well, 8-Hi stacks probably had some issues we don't know about. They would have been extremely useful for Fiji.
 
I hope AMD realizes that they can at least sell a decent 1070/1080 competitor if they tune these cards better.
 
Nice, thanks for the linky.


typical AMD.


Every single AMD card I have ever had responds the exact same way. Look at mining: ~40% reduction in power, same or better result in the workload. I believe razor1 said it... limitations of the architecture at play.

I think the real reason is that it allows AMD to sell poorly performing chips that would otherwise be unstable at lower voltages.
 
I hope AMD realizes that they can at least sell a decent 1070/1080 competitor if they tune these cards better.


They can't, man. They aren't fully stable across different apps with different ASIC quality levels and voltages.

Even if they were able to drop the voltages on all their chips in all apps, you're still looking at 300 watts at GTX 1080 performance levels.
 
AMD isn't stupid; if they could run chips at lower voltage, they would.
I am not entirely sure, but it seems as of late that AMD has gone with the voltage likely to make all chips stable. I've seen this with the APUs: my wife's previous 7870K clocked to 4.5GHz on stock voltage, and it seems a lot of other users found the same thing. GPUs, as far as I can tell, get one set voltage, e.g. 1.2V and such. Maybe it is cheaper than validating each one. I know that was the case with the FX-8350 CPUs: a whole host of voltage ranges from 1.28V to 1.35V. I haven't really checked whether Ryzen does this or not.
 
I am not entirely sure, but it seems as of late that AMD has gone with the voltage likely to make all chips stable. I've seen this with the APUs: my wife's previous 7870K clocked to 4.5GHz on stock voltage, and it seems a lot of other users found the same thing. GPUs, as far as I can tell, get one set voltage, e.g. 1.2V and such. Maybe it is cheaper than validating each one. I know that was the case with the FX-8350 CPUs: a whole host of voltage ranges from 1.28V to 1.35V. I haven't really checked whether Ryzen does this or not.


When they bin chips, validation is different for each bin ;) so that's how it works. When you look at the silicon lottery, it's a typical bell curve, with different bell curves for different tests: voltage, frequency, amps, etc. They test a plethora of different stats. So if they want a certain performance and stability across most applications, that is what ends up as the stock chip. All those variables will be validated against AMD's needs for a mass-production product.

Let's say I'm mining: my 580s all have different undervolting potential, and undervolting also affects stability based on frequency targets. I don't have a single card that is the same, all of them running the same mining program. So I set up my rigs in sets of six cards that are closest to each other.
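
If you were scripting that matching, it might look something like this. The per-card millivolt floors are invented for the example; you'd measure your own:

```python
# Sketch of grouping mining cards into rigs of 6 by similar undervolt floor.
# The millivolt figures are invented; measure your own cards.
cards = {"c1": 900, "c2": 925, "c3": 870, "c4": 910, "c5": 950,
         "c6": 880, "c7": 940, "c8": 905, "c9": 930, "c10": 895,
         "c11": 915, "c12": 945}  # min stable mV at the target clock

by_floor = sorted(cards, key=cards.get)
rigs = [by_floor[i:i + 6] for i in range(0, len(by_floor), 6)]
for n, rig in enumerate(rigs, 1):
    # Each rig has to run at the worst (highest) floor among its cards.
    print(f"rig {n}: {rig} -> set {max(cards[c] for c in rig)} mV")
```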
 
I am not entirely sure, but it seems as of late that AMD has gone with the voltage likely to make all chips stable. I've seen this with the APUs: my wife's previous 7870K clocked to 4.5GHz on stock voltage, and it seems a lot of other users found the same thing. GPUs, as far as I can tell, get one set voltage, e.g. 1.2V and such. Maybe it is cheaper than validating each one. I know that was the case with the FX-8350 CPUs: a whole host of voltage ranges from 1.28V to 1.35V. I haven't really checked whether Ryzen does this or not.

Sure feels like it. They probably don't want to waste a lot of chips, so they just up the voltage to ensure the majority are stable.
 
AMD isn't stupid; if they could run chips at lower voltage, they would.

AMD set the voltage higher so that they can sell poorly performing units that won't function at lower voltage.

Case in point:

Buildzoid/Actually Hardcore Overclocking bought a Radeon Vega FE and it's a dog.

Gamers Nexus probably got a golden chip.
 
AMD set the voltage higher so that they can sell poorly performing units that won't function at lower voltage.

Case in point:

Buildzoid/Actually Hardcore Overclocking bought a Radeon Vega FE and it's a dog.

Gamers Nexus probably got a golden chip.

And another guy, over at the 3dcenter.org forums I think it was (or Beyond3D? I've been reading around too much), was able to undervolt his to 1050mV at stock clocks.

We just need more data points to draw conclusions.
 
Sure feels like it. They probably don't want to waste a lot of chips, so they just up the voltage to ensure the majority are stable.


It wouldn't surprise me if the high voltage was due to low yields and/or underperforming chips.

Look at the Vega Buildzoid has: his will barely do stock speeds with all the voltage in the world.

It's pretty clear GN won the Vega lotto. Now the question is: did Buildzoid lose, or is that what the average Vega looks like?
 
And another guy, over at the 3dcenter.org forums I think it was (or Beyond3D? I've been reading around too much), was able to undervolt his to 1050mV at stock clocks.

We just need more data points to draw conclusions.

This: data points. Almost 2500 posts in a rumor thread. Sure, it's been fun reading various thoughts on what's about to drop, but getting those data points will be the only way to tell.
 
They are doing RX Vega vs GTX 1080 according to someone who attended the show: no FPS counters, FreeSync vs G-Sync monitors, so they are pulling the "FreeSync is cheaper" card. It might actually be worse than we thought. No FPS counters even against the GTX 1080? That's an epic fail!

 