Vega Rumors

Come on, Rasterizer, it was right there. Page 7.
By definition, if you replace the fixed-function shaders in the geometry engines with generalized non-compute shaders, they become programmable, and thus of course they can be arbitrarily switched between behaving like Fiji's geometry engine and behaving like primitive shaders as needed.

Look, here is a slide from the Polaris slide deck:
[Slide: AMD Radeon RX 480 (Polaris 10) "Enhanced Geometry Engines"]


and here is where primitive discard is talked about in the Polaris whitepaper:
The Polaris geometry engines use a new filtering algorithm to more efficiently discard primitives. As figure 5 illustrates, it is common that small or very thin triangles do not intersect any pixels on the screen and therefore cannot influence the rendered scene. The new geometry engines will detect such triangles and automatically discard them prior to rasterization.
This makes it very clear that Polaris' Primitive Discard Accelerators are inside the geometry engines, prior to the rasterizers, yes?
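For intuition, here is a minimal sketch of the kind of zero-coverage test such a discard stage performs; the struct, function name, and pixel-center math below are my own illustrative assumptions, not AMD's actual hardware logic:

```cpp
// A minimal sketch of a small-primitive discard test: if a triangle's
// screen-space bounding box cannot contain any pixel center, the triangle
// cannot influence the rendered scene and can be thrown away before
// rasterization. (Real hardware would also test MSAA sample positions.)
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

bool canDiscardSmallTriangle(Vec2 a, Vec2 b, Vec2 c) {
    // Degenerate (zero-area) triangles never cover a sample.
    float area2 = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    if (area2 == 0.0f) return true;

    // Screen-space bounding box of the triangle.
    float minX = std::fmin(a.x, std::fmin(b.x, c.x));
    float maxX = std::fmax(a.x, std::fmax(b.x, c.x));
    float minY = std::fmin(a.y, std::fmin(b.y, c.y));
    float maxY = std::fmax(a.y, std::fmax(b.y, c.y));

    // Pixel centers sit at half-integer coordinates (i + 0.5). If the box
    // rounds to an empty range of centers on either axis, no pixel is hit.
    bool noCenterX = std::ceil(minX - 0.5f) > std::floor(maxX - 0.5f);
    bool noCenterY = std::ceil(minY - 0.5f) > std::floor(maxY - 0.5f);
    return noCenterX || noCenterY;
}

int main() {
    // A sliver that slips between the pixel centers: discarded (prints 1).
    printf("%d\n", canDiscardSmallTriangle({0.6f, 0.6f},
                                           {1.3f, 0.7f},
                                           {0.9f, 1.2f}));
    return 0;
}
```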

Now go back and look at the diagram used for primitive shaders:
[Slide: Vega primitive shader pipeline diagram]


This shows primitive shaders sitting earlier in the rendering pipeline than the fixed-function PDA culling, which in turn comes before the draw-stream binning rasterizers on the Vega block diagram. What you are suggesting would mean geometry being sent to the CUs for processing and then brought back through the chip to be put through the PDA fixed-function culling and the four draw-stream binning rasterizers. That doesn't make any sense, and no one would even try such a thing.

Incidentally, the Polaris slide deck conveniently confirms that MSAA performance is a known geometry throughput issue for GCN.
 
Man, I am glad this thread is still going; it gives me something to read while sitting outside the airport at 10pm... Still waiting for my wife's flight as it goes from not delayed, to semi-delayed, to okay it's delayed, to full-on 'it will be here sometime soon.'

It's hard to blame them. The Radeon RX Vega 64 is a catastrophe in every single possible way.

There is no way anyone can sugarcoat this.

Compared to the GeForce GTX 1080, the Radeon RX Vega 64 is...

1. Cheaper? ✗

2. Faster? ✗

3. More power efficient? ✗

______________________________________________________________________

It would be pretty hard to justify buying a Radeon RX Vega 64 over a GeForce GTX 1080.

I don't generally judge people's purchases, to each their own and all that.

But I don't disagree; I wouldn't see myself purchasing said product when it was priced so close to the 1080 Ti.
 
By definition, if you replace the fixed-function shaders in the geometry engines with generalized non-compute shaders, they become programmable, and thus of course they can be arbitrarily switched between behaving like Fiji's geometry engine and behaving like primitive shaders as needed.

Look, here is a slide from the Polaris slide deck:

By definition? Are you trying to tell me something different than what the entire computer industry has been doing since the 50s? Polaris isn't doing what Vega's primitive shaders are doing. Vega's traditional fixed-function pipeline is doing the same as Polaris, though; hence the same problems: 4 tris per clock. If they had changed the fixed-function units to programmable ones, that problem could easily be solved, even with emulation. The problem is not a software limitation, it's a hardware limitation, so if that hardware were changed, the limitation would disappear (even when emulating the old fixed-function pipeline), understand? It would no longer be stuck at 4 tris per clock. The fixed-function units exist to save die space and transistors while keeping performance up. This is why AMD didn't replace them; it's clear as day in the whitepaper that Vega has both pipelines, fixed-function and programmable, and they are not the same units. There is no questioning what it has or doesn't have, or whether it's been changed.

Doesn't work that way.
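To put rough numbers on that 4-tris-per-clock ceiling, here's a back-of-the-envelope sketch; the clock figure is an assumed approximate Vega 64 boost, and nothing here is a measured result:

```cpp
// Quick arithmetic on the fixed-function geometry ceiling discussed above.
#include <cstdio>

int main() {
    const double clockGHz   = 1.6;  // assumed approximate Vega 64 boost clock
    const int    trisPerClk = 4;    // GCN fixed-function front-end limit
    // Peak triangle rate through the fixed-function front end.
    printf("fixed-function peak: %.1f Gtris/s\n", clockGHz * trisPerClk);
    // A programmable path would spread primitive work across the CUs, so
    // its ceiling scales with shader throughput rather than a hard 4/clk.
    return 0;
}
```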
[Slide: AMD Radeon RX 480 (Polaris 10) "Enhanced Geometry Engines"]


and here is where primitive discard is talked about in the Polaris whitepaper:
This makes it very clear that Polaris' Primitive Discard Accelerators are inside the geometry engines, prior to the rasterizers, yes?

I'm not talking about the rasterizer; the discard needs to happen before tessellation. Polaris's discard is still not that efficient; it's still way behind its competitors in polygon throughput, by close to 50% comparing equal cards. They even talk about that in the whitepaper, because it doesn't do the discard until after tessellation. Vega with primitive shaders fixes that problem, but while using its traditional fixed-function pipeline it still has the problem.
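A rough illustration of why the cull point matters so much; the tessellation factor and the amplification formula are assumed example figures, not AMD's numbers:

```cpp
// Why the cull point matters: an off-screen patch culled *before*
// tessellation never generates its amplified triangles at all.
#include <cstdio>

int main() {
    const int tessFactor   = 16;                          // assumed tess level
    const int trisPerPatch = 2 * tessFactor * tessFactor; // rough amplification
    const int trisPerClock = 4;                           // front-end limit
    // Culling after tessellation: the front end still chews through every
    // amplified triangle of the off-screen patch before discarding them.
    printf("clocks spent discarding one off-screen patch post-tessellation: %d\n",
           trisPerPatch / trisPerClock);
    // Culling before tessellation: one patch test, roughly one clock.
    printf("clocks if the patch is rejected up front: ~1\n");
    return 0;
}
```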

This shows primitive shaders sitting earlier in the rendering pipeline than the fixed-function PDA culling, which in turn comes before the draw-stream binning rasterizers on the Vega block diagram. What you are suggesting would mean geometry being sent to the CUs for processing and then brought back through the chip to be put through the PDA fixed-function culling and the four draw-stream binning rasterizers. That doesn't make any sense, and no one would even try such a thing.

Yeah, to do that they can't use the fixed-function portion (the tessellator); they have to emulate that in the CUs as well. Vega has both the fixed-function units and the ability to emulate those fixed-function units in its CUs. It has to, there's no way around it; as I stated before, it's in the whitepaper as such, and if they did it any other way it would create problems for DX certification and a lot more work on AMD's part for older DX games.

Incidentally, the Polaris slide deck conveniently confirms that MSAA performance is a known geometry throughput issue for GCN.

Well, no, it's also bandwidth, a problem area for GCN since the R3x0 line.
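For a sense of scale on the bandwidth side, a quick uncompressed-traffic estimate; the resolution, sample count, and formats are my example assumptions (real hardware compresses and doesn't always touch every sample):

```cpp
// Ballpark of why MSAA leans on bandwidth: every extra sample multiplies
// the uncompressed color/depth traffic per rendered pixel.
#include <cstdio>

int main() {
    const long long w = 3840, h = 2160;  // assumed 4K render target
    const int samples = 4;               // 4x MSAA
    const int bytesPerSample = 4 + 4;    // RGBA8 color + D24S8 depth/stencil
    double mb = (double)(w * h) * samples * bytesPerSample / (1024.0 * 1024.0);
    printf("uncompressed 4x MSAA render target: ~%.0f MB\n", mb);
    return 0;
}
```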
 
Man, I am glad this thread is still going; it gives me something to read while sitting outside the airport at 10pm... Still waiting for my wife's flight as it goes from not delayed, to semi-delayed, to okay it's delayed, to full-on 'it will be here sometime soon.'



I don't generally judge people's purchases, to each their own and all that.

But I don't disagree; I wouldn't see myself purchasing said product when it was priced so close to the 1080 Ti.

Same here. I love technical debate.

Also ditto on the pricing... just bought 3x 1080 Ti Hybrids for the same price as Vega. Whoever buys Vega is out of their minds IMO. It mines like a 1080 but is nothing next to a 1080 Ti...
 
Every time I see Vega benchmarked in all these DX12 games and see it is only 1-3% better than a 1080, I keep wondering where this supposed AMD advantage in DX12 is.
 
ASUS Radeon ROG RX Vega 64 STRIX review

Spoiler: it doesn't perform any better than a reference Radeon RX Vega 64

https://www.guru3d.com/articles_pages/asus_radeon_rog_rx_vega_64_strix_8gb_review

That review is garbage. It seems odd that the Strix cooler (which works way better than blowers on most cards!) barely beat the blower. But then it all makes sense in the details...

"First and foremost, here we have the Vega 10 graphics processor from AMD with its 4096 Shader Cores. I left the thermal compound in there as I need to re-assemble the card fast for further testing. At the lower GPU side you can see the two HBM2 stacks, each 4 GB. You'll notice that this Vega 64 GPU one we have seen before, the version withouth the molding epoxy and the lower sitting HBM2 stacks. You can tell from the thermal compound (the dot effect on the HBM2) that the memory contact surface area is not optimal. "

Disassembly should be done AFTER the performance review. If they really want to do it before, both surfaces should have been cleaned and new compound used. You often get a mess of air bubbles trying to reuse compound like this. Total rookie move.
 
That review is garbage. It seems odd that the Strix cooler (which works way better than blowers on most cards!) barely beat the blower. But then it all makes sense in the details...



Disassembly should be done AFTER the performance review. If they really want to do it before, both surfaces should have been cleaned and new compound used. You often get a mess of air bubbles trying to reuse compound like this. Total rookie move.
It is an odd review, and the cooling alone should give it a small but noticeable bump.
 
Maybe that just goes to show Vega was already running at its max capabilities.

It seems pretty maxed out just like the Fury X was.

So, in the alternate-reality fantasy land where Vega gets a magic driver that boosts it into Titan-killer territory, would it use even more power? I would think so... I didn't read through the last two pages yet. Maybe tomorrow at my training.
 
Maybe that just goes to show Vega was already running at its max capabilities.
Somewhat, yes, but most people, including myself, are finding that lowering the voltage increases the performance a bit more. This Vega seems to be emulating, although poorly, Nvidia's boost setup. Mine will boost over the clock set in WattMan. For example, I have mine set at 1772 (it wouldn't take 1770, automatically setting itself to 1772 when 1770 is entered), and in some benchmark tests it boosts to 1804 MHz. But the clocks are constantly moving, albeit staying in a very tight range, say 1740-1752 during a scene, bouncing between the two constantly.
 
Somewhat, yes, but most people, including myself, are finding that lowering the voltage increases the performance a bit more. This Vega seems to be emulating, although poorly, Nvidia's boost setup. Mine will boost over the clock set in WattMan. For example, I have mine set at 1772 (it wouldn't take 1770, automatically setting itself to 1772 when 1770 is entered), and in some benchmark tests it boosts to 1804 MHz. But the clocks are constantly moving, albeit staying in a very tight range, say 1740-1752 during a scene, bouncing between the two constantly.


Dropping voltage should drop temps, so that is to be expected.
 
That does point to yield issues, where they were forced to have a "safe" default which allowed more chips to work in their desired envelope. That said, I'm glad many can undervolt and get an overall better product.
 
That does point to yield issues, where they were forced to have a "safe" default which allowed more chips to work in their desired envelope. That said, I'm glad many can undervolt and get an overall better product.
It just seems odd that they would not default to the lower power profile, considering it seems to run much better and use considerably less juice.
 
It just seems odd that they would not default to the lower power profile, considering it seems to run much better and use considerably less juice.

They're not going to customize the voltage profile for each chip. They pick the lowest possible voltage to get all chips to work. You don't want to get a card and find out that it artifacts out of the box, no? The fact that you can undervolt your particular card just means that you didn't get a chip that sucks. :)
 
They're not going to customize the voltage profile for each chip. They pick the lowest possible voltage to get all chips to work. You don't want to get a card and find out that it artifacts out of the box, no? The fact that you can undervolt your particular card just means that you didn't get a chip that sucks. :)

That's basically it. They have to guarantee a certain level of clocking with the worst chip they allow for sale. If they aren't having stellar yields, they need to move that point to a less pleasant place. The median chip in the wild may have a really different voltage requirement than the worst case.

This is basically how that whole "Max Q" thing worked for NV. They realized some chips coming off the line were dramatically better at clocking with much lower voltages. They binned those, and made a new market segment for laptops using them.
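A toy illustration of the "worst chip sets the voltage" point above; the per-die figures are invented for the example:

```cpp
// Toy model of binning: the shipping default voltage must cover the worst
// die allowed through, so the median die is left with undervolt headroom.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Minimum stable voltage (mV) at the rated clock for a handful of
    // hypothetical dies that all passed binning.
    std::vector<int> vminPerDie = {1075, 1100, 1200, 1130, 1160};
    std::sort(vminPerDie.begin(), vminPerDie.end());

    int shippingV = vminPerDie.back();                  // worst die wins
    int medianV   = vminPerDie[vminPerDie.size() / 2];  // typical die
    printf("default: %d mV; median die only needs %d mV (%d mV headroom)\n",
           shippingV, medianV, shippingV - medianV);
    return 0;
}
```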
 
It just seems odd that they would not default to the lower power profile, considering it seems to run much better and use considerably less juice.
Actually, mine was set to Balanced as default, which clocks well with reasonable temps and wattage. Turbo is supposed to be the "unchained" setting, although it really did little to change my benchmark scores. But I do agree the overvolting seems a bit odd, and it has been fairly commonplace with AMD as of late; even their APUs were massively overvolted at stock.
 
AMD is trying to get as much performance as possible, and that's why they pushed the limits all the way up. If they could have hit around the performance of the 1080 without doing that, nV would have been forced to respond.

Just look at Fermi, or the FX series from nV; they did the same thing. When AMD had good products, they pushed the limits of those chips to the max. They pretty much had to in order to make a marketable product. It didn't work for the FX, but for Fermi it did.

Pushing chips like this also increases the voltage spread between individual samples.
 
Disassembly should be done AFTER the performance review. If they really want to do it before, both surfaces should have been cleaned and new compound used. You often get a mess of air bubbles trying to reuse compound like this. Total rookie move.

This is his response:

Whenever people see something they do not like, it is always the media outlet that is suspect. BTW, your IP traces back awfully close to the AMD Markham, Canada HQ. Is that a well-educated observation to make or an incredible coincidence?

1) The disassembly photo shoot of the card was done AFTER all performance tests had been completed, ergo the card was tested in its default state. The remark was made because I still needed to use the card for pending FCAT tests. I also shoot the photos like that so that people can see how the TIM is applied. But I'll rewrite that a bit to make it clearer.

2) We tested a final configuration. All other previews you refer to are just that: based on preview/non-final early sample cards, and some media had quick access to a card for merely an hour or so as ASUS dropped by to show it. Nobody tested long-duration temps and dBA measurements with the card properly warmed up; we did. Future media reviews will back the temps we are seeing, unless our sample had an isolated problem of course.

3) The performance differential between the reference and the ASUS card is 100% down to throttling, and Vega 64 sure is throttling up and down a lot, causing small FPS differences.

But you know what, I hope it is an isolated issue on our side. I honestly doubt it, but I do hope it. And be glad we do not cover up our findings; we post results as they are measured here in the lab.
 
I undervolted mine from 1200 mV to 1150 mV and it crashed :sick:. It seems to do OK at 1670 MHz and stock volts. HBM2 at 1045 crashed in Superposition; 1020 made it through smoothly. That is a good benchmark, it really does stress the card well.
 
This is his response:
Pretty dickish response. The way it reads, you would think he disassembled the card before testing without reapplying thermal paste. That being said, it does seem odd that it is still thermal throttling that hard with a better cooler, especially since the Strix 1080 and 1080 Ti benefit greatly from the extra cooling. Power draw must still be too high to be effectively cooled on air.
 
I'd just assume it may be his sample, though it's weird that such a chip would wind up in a Strix, and in one used for review at that.

We do know that AMD sets a high voltage limit in order to get more cards stable at that limit, which means that some portion will be able to undervolt without stability issues, but it also means that others actually need that voltage. This may indicate that no cooler can save the poorer performing samples.
 
ASUS Radeon ROG RX Vega 64 STRIX review

Spoiler: it doesn't perform any better than a reference Radeon RX Vega 64

https://www.guru3d.com/articles_pages/asus_radeon_rog_rx_vega_64_strix_8gb_review
ASUS Radeon RX Vega 64 review taken offline

http://www.guru3d.com/news-story/review-asus-radeon-rog-rx-vega-64-strix-8gb.html

"Yesterday we published the ASUS Radeon RX 64 STRIX review. As shown, it performs awfully similar towards the reference Radeon RX 64. This morning I received a phone call from ASUS, asking us if we’d be willing to take down the article for a few days as they have made a mistake.

The sample we received did not get a final BIOS for its final clock frequencies and fan tweaking. Ergo, the sample we received carries a default reference BIOS."
 
IMHO there is nothing "stealthy" about that marketing... it's easy to tell who knows their technical stuff and who only parrots PR and is technically clueless...

Well, the stealth part is about not disclosing who they're actually working for while posing as a regular forumer or commenter or... troll.
 
The way he speaks and makes claims sometimes reminds me of a certain anarchist who joined the heavy forces of the AMD rebellion on this forum. lol.
What AMD rebellion? It's just a difference of reality and engineering versus an Nvidia marketing narrative built around inconvenient truths, as their marketers don't understand what they're talking about. All the points I've made so far have been confirmed by all the tech press, and by common sense for that matter.

Most/all of the new features are disabled and the drivers need work. It's not difficult to follow the LLVM commits to see what AMD is working on. In compute and the high-margin markets AMD focused on, Vega competes with the 1080 Ti. In gaming there is driver work to do. DSBR and primitive shaders can be automatically enabled with generic paths, improved with per-game optimizations, and, in the best case, implemented by the developer, possibly with AMD's assistance, to achieve the best results.
 
What AMD rebellion? It's just a difference of reality and engineering versus an Nvidia marketing narrative built around inconvenient truths, as their marketers don't understand what they're talking about. All the points I've made so far have been confirmed by all the tech press, and by common sense for that matter.

Most/all of the new features are disabled and the drivers need work. It's not difficult to follow the LLVM commits to see what AMD is working on. In compute and the high-margin markets AMD focused on, Vega competes with the 1080 Ti. In gaming there is driver work to do. DSBR and primitive shaders can be automatically enabled with generic paths, improved with per-game optimizations, and, in the best case, implemented by the developer, possibly with AMD's assistance, to achieve the best results.


LOL, yeah, that's the AMD rebellion ;) Rebellions are built on belief and faith, and there is a lot of belief and faith there and no material. Yes, NO MATERIAL: all your hypotheses and possibilities of magic are not going to happen. There is a reason why those features are off: they are hard to implement, and when implemented the way they currently would be (automatically), they most likely won't give the desired results; most likely the opposite.
 
It's not going to change much; APUs aren't worth mining on at all. 2 MH/s, or let's generously say 10 MH/s; Vega quadruples that, but the CPU itself uses 90 watts.

Yeah no money in that.

I think they were analyzing it as a way of seeing whether it was worth using these APUs to run mining rigs for that little bit of extra hashing.
 
I think they were analyzing it as a way of seeing whether it was worth using these APUs to run mining rigs for that little bit of extra hashing.


Yeah, it's not; with the power cost of pushing the CPU too, you might as well get another card. I did have a friend who tried it with ETH, and he had the memory to do it too, 32 GB, but he ended up making a loss, like -50 cents per day lol.
 
Yeah, it's not; with the power cost of pushing the CPU too, you might as well get another card. I did have a friend who tried it with ETH, and he had the memory to do it too, 32 GB, but he ended up making a loss, like -50 cents per day lol.
Even if you just use the iGPU, not the CPU power itself?
 
Yeah, because you still have to power the CPU too; total CPU wattage goes up. Without the iGPU loaded, the CPU wattage is low, really low, around half.
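A rough profitability sketch in the spirit of that "-50 cents a day" anecdote above; the hashrate, added power draw, electricity price, and payout rate are all assumed example figures, not measurements:

```cpp
// Rough APU mining profitability: revenue from the iGPU hashrate versus
// the extra package power it burns.
#include <cstdio>

int main() {
    const double hashrateMH  = 2.0;   // assumed iGPU hashrate, MH/s
    const double extraWatts  = 90.0;  // assumed added package draw, W
    const double usdPerKWh   = 0.12;  // assumed electricity price
    const double usdPerMHDay = 0.09;  // assumed ETH revenue per MH/s per day

    double revenue = hashrateMH * usdPerMHDay;
    double power   = extraWatts / 1000.0 * 24.0 * usdPerKWh;
    printf("daily revenue $%.2f - power $%.2f = net $%.2f\n",
           revenue, power, revenue - power);
    return 0;
}
```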

There is a reason why ASIC miners are built around cell-phone chips ;) their performance/watt is killer :)
 