Vega Rumors

As lolfail9001 stated, everything still points to that level of performance, even in compute, when you look at what does what in optimized software.

TFLOPS increased by 30-40%, and Fury X is at GTX 1070 performance; with perfect scaling that means Vega ends up at a GTX 1080 or just above it. And from the past we know AMD's scaling is never linear when it comes to adding cores. Factoring in the front-end enhancements of Polaris, it should hit that estimate.
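To put rough numbers on that back-of-the-envelope estimate, here is a minimal sketch; the 30-40% TFLOPS uplift and the sub-linear scaling factor are assumptions pulled from the paragraph above, not measured data.

[CODE]
#include <cstdio>

// Illustrative scaling estimate only; inputs are assumptions, not benchmarks.
int main() {
    const double furyx_rel_perf = 1.00;   // treat Fury X (~GTX 1070 level) as the baseline
    const double tflops_uplift  = 1.35;   // midpoint of the claimed 30-40% TFLOPS increase
    const double scaling_eff    = 0.80;   // assumed efficiency of turning extra TFLOPS into FPS

    double projected = furyx_rel_perf * (1.0 + (tflops_uplift - 1.0) * scaling_eff);
    std::printf("Projected Vega vs Fury X: %.2fx (roughly GTX 1080 territory)\n", projected);
    return 0;
}
[/CODE]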

No, he never talked about Primitive Shaders, so I don't know where you are getting that from, unless you heard or read something that I didn't. Everything I have heard so far about Primitive Shaders is that they must be developer handled; they will help Vega, but to what degree is up in the air.

I quoted you on it in another thread; I don't think you read it. Raja responded to a question from a student. He said the new geometry pipeline, primitive shaders and culling shouldn't require anything from the developer's standpoint (this is not an exact quote, but his answer was pretty simple and straightforward).
 
No it's not, it's about 15 FPS behind if we compare the same areas, and we saw how they tried to fudge Sniper Elite 4 before with camera angles during the Ryzen showings, so take everything with heaps of salt.

The 1080 Ti gets 70+ and sometimes dips into the 60s; that was not what was shown with Vega Frontier, which was in the 60s and sometimes going into the 70s.

There is a YouTube video of the same spot with an OC'd 1080 Ti. There was a thread about it getting similar FPS to the Frontier Edition.
 
I quoted you on it in another thread; I don't think you read it. Raja responded to a question from a student. He said the new geometry pipeline, primitive shaders and culling shouldn't require anything from the developer's standpoint (this is not an exact quote, but his answer was pretty simple and straightforward).

He never answered the primitive shader part, he answered the culling part. Two different things: primitive shaders enhance the culling, pretty much what devs do now in some engines like Frostbite. Primitive shaders themselves don't cull automatically; the shader that is written compares against the FOV to determine visibility on a per-polygon basis and then decides whether to draw the triangle or not (simplified; you still have to figure out Z sorting and whatnot based on the pixel shader). This is the major issue: when people can't understand what is going on, how are they going to understand what is being talked about?
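As a rough illustration of the kind of per-polygon test described above, here is a minimal CPU-side sketch of frustum-based triangle culling; the struct names and the test itself are illustrative, not anything from AMD's actual primitive shader pipeline.

[CODE]
#include <array>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };        // dot(n, p) + d >= 0 means "inside" this plane
using Frustum = std::array<Plane, 6>;     // left/right/top/bottom/near/far

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Per-triangle visibility test of the kind described above: compare the triangle against
// the view volume and decide whether to draw it at all. A real primitive shader would run
// on the GPU; this CPU sketch only illustrates the decision being made.
static bool triangleVisible(const Frustum& fr, const Vec3& a, const Vec3& b, const Vec3& c) {
    for (const Plane& p : fr) {
        // If all three vertices fall outside the same frustum plane, cull the triangle.
        if (dot(p.n, a) + p.d < 0.0f &&
            dot(p.n, b) + p.d < 0.0f &&
            dot(p.n, c) + p.d < 0.0f)
            return false;
    }
    return true;  // conservatively keep it; backface and Z/occlusion tests would follow
}
[/CODE]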
 
There is a YouTube video of the same spot with an OC'd 1080 Ti. There was a thread about it getting similar FPS to the Frontier Edition.


I have seen videos that show otherwise, not to mention YouTube reviewers have talked about it too. But what it comes down to is we really don't know, because of the settings and camera angles, so I would still take it with a heap of salt.

And back to the culling and primitive shaders, I have stated from the start there is no simple way of just incorporating them without developer involvement. You just can't do it, because the pipeline is dictated by the API, so without those extensions being exposed (which DX will not have, nor will they be exposed in older games without updates) it's not going to happen. Vulkan and OGL will have those extensions, because they tend to put them in, but they will be vendor specific.
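For what "exposing those extensions" would look like in practice, here is a minimal Vulkan sketch; only the standard enumeration call is real, and the extension name in the usage comment is hypothetical (just to show why an engine has to opt in explicitly).

[CODE]
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Checks whether a given device extension is exposed by the driver. Engines have to query
// for vendor extensions like this and then write code paths against them, which is why a
// driver can't silently turn such a feature on in old titles.
bool hasVendorExtension(VkPhysicalDevice gpu, const char* name) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());
    for (const auto& e : exts)
        if (std::strcmp(e.extensionName, name) == 0) return true;
    return false;
}

// Usage (hypothetical extension name, purely illustrative):
//   if (hasVendorExtension(gpu, "VK_AMD_primitive_shader")) { /* enable the new path */ }
[/CODE]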
 
Reviews have the 1080 Ti's average FPS at 68 at 4K for Sniper Elite. Yeah, we don't know the settings, but I rarely see 4K numbers.
 
I've seen them all the way from the mid 50s to the low-to-mid 70s in 4K reviews; it's all over the place, so we really don't know anything.
 
[QUOTE="razor1, post: 1043011773, member: 110460"
No, he never talked about Primitive Shaders (he talked about culling and throughput, pretty much the front-end changes of Polaris, which will no doubt be in Vega as well), so I don't know where you are getting that from, unless you heard or read something that I didn't. Everything I have heard so far about Primitive Shaders is that they must be developer handled; they will help Vega, but to what degree is up in the air.[/QUOTE]


The new geometry pipeline in Vega was designed for higher throughput per clock cycle, through a combination of better load balancing between the engines and new primitive shaders for faster culling. As a programmer you shouldn't need to do anything special to take advantage of these improvements, but you're most likely to see the effects when rendering geometrically complex scenes that can really push the capabilities of the hardware.

That leaves no doubt about the interpretation. Primitive Shaders, to me, seem like something RTG can optimize via drivers for each game. The programmable Geometry Engines no longer rely on fixed-function units; they're capable of tapping into the Stream Processors for processing primitives and culling.


The other one that's interesting was about the HBCC:


To realize the full potential of HBCC, yes we will need to see content from game developers use larger datasets. But we have seen some interesting gains even on current software, particularly in min frame rates. Part of the goal of launching Radeon Vega Frontier edition, is to help speed up that process.

I have seen comparisons of 1080Ti OC in Sniper Elite 4 in that scene and it was very close. That's on Vega FE.

If what Raja claims is true, that RX Vega is faster for games, or that other Vega variants have higher clocks, then that's already well beyond 1080 class.
 
I'm no fanboy one way or the other and I would welcome some true competition in the market. I just buy whatever is fastest at the time.

However, the amount of faith people have in this company is astounding. I fully expect Vega to disappoint but would be pleasantly surprised if it delivers.
 
[QUOTE="razor1, post: 1043011773, member: 110460"
No he never talked about Primitive Shaders (talked about culling and through put, pretty much the front end changes of Polaris which will no doubt be in Vega as well) so don't know where you are getting that from, unless you heard or read something that I didn't. Everything so far I have heard about Primitive Shaders, they must be developer handled, and they will help Vega but to what degree its up in the air.


The new geometry pipeline in Vega was designed for higher throughput per clock cycle, through a combination of better load balancing between the engines and new primitive shaders for faster culling. As a programmer you shouldn't need to do anything special to take advantage of these improvements, but you're most likely to see the effects when rendering geometrically complex scenes that can really push the capabilities of the hardware.

That leaves no doubt for the interpretation. Primitive Shaders to me, seems like something RTG can optimize via drivers for each game. Programmable Geometry Engines no longer rely on fixed function units, but it's capable of tapping into the Stream Processors for processing primitives & culling.


The other one that's interesting was about the HBCC:


To realize the full potential of HBCC, yes we will need to see content from game developers use larger datasets. But we have seen some interesting gains even on current software, particularly in min frame rates. Part of the goal of launching Radeon Vega Frontier edition, is to help speed up that process.

I have seen comparisons of 1080Ti OC in Sniper Elite 4 in that scene and it was very close. That's on Vega FE.

If what Raja claims is true, that RX Vega is faster for games, or other variants of Vega having higher clocks, then that's already well beyond 1080 class.

I know exactly what he stated, I quoted most of what he stated.

You are just reading it wrong. You are expecting to see 2.4x the culling and polygon throughput per clock over Polaris with Vega; that is what you are expecting, and YOU are in the world of imagination. It's 2.4x over Fiji in per-clock culling and polygon throughput. That is just above Polaris's increase over Tonga, lol. These are the theoretical maximums AMD gave us for Vega. Primitive shaders, which need programming, will give up to another 2x above that. Raja wasn't being very clear when he stated it, but this was very clear in the Vega presentation prior to this one. Demos were shown off too, with primitive shaders and RPM, and coding had to be done, lol.

He never talked about Primitive Shaders and their "extra" culling. You don't know how the APIs function and what would be needed to make something like that work. It CANNOT be done without coder intervention.

Everything AMD has shown us so far doesn't point to anything but just-above-GTX-1080 performance, and it's definitely using more power at this point.

And as I stated, I would take that Sniper Elite demo with a huge grain of salt, just like what they did with Ryzen. Just look at the DL bench they did: they skewed the numbers in their favor, and reality is much different when you start equalizing things.
 
As lolfail9001 stated, everything still points to that level of performance, even in compute, when you look at what does what in optimized software.

TFLOPS increased by 30-40%, and Fury X is at GTX 1070 performance; with perfect scaling that means Vega ends up at a GTX 1080 or just above it. And from the past we know AMD's scaling is never linear when it comes to adding cores. Factoring in the front-end enhancements of Polaris, it should hit that estimate.

No, he never talked about Primitive Shaders (he talked about culling and throughput, pretty much the front-end changes of Polaris, which will no doubt be in Vega as well), so I don't know where you are getting that from, unless you heard or read something that I didn't. Everything I have heard so far about Primitive Shaders is that they must be developer handled; they will help Vega, but to what degree is up in the air.

Vega is yet again a new software stack, last I checked. It wouldn't be an apples-to-apples comparison to Polaris. And the ROP backend was AMD's weak point. A lot of this pre-culling, which is what tiling saves on, is what helps make the backend ROPs more efficient. With the additional ROPs, I predict it will be better than linear scaling.

Again, this is ALL f'n pointless speculation. It's fun to predict, but to sit there and say it will fail before we have it in our hands, with so many unknowns, shows bias. Ledra and Razor, I respect you, but you need to cut AMD some slack till the reviews are out the door.

My numbers do indeed show it might match a 1080 or possibly a 1080 Ti, but it all depends on backend efficiency. It could be as slow as a 1070 if the culling engine doesn't reduce the number of discarded pixels.

What will be interesting is whether AMD can match the overclocking potential, which I'm seriously doubting.
 
I agree with you; I have no idea what final performance will be, and given the degree of design change here, neither does anyone else unless they have a card in hand. Where I deviate from this opinion is cutting AMD slack in a general sense... they are just another company building technology which they hope we will purchase. Red, Green and Blue, they should all be picked over with a fine-tooth comb.

AMD in general gets lambasted (and rightly so) because they're smaller, with fewer people to manage launches and builds, and thus make a lot of frig-ups. It's real hard-earned cash we're spending on these products, after all. But the theatrical moaning of the trolls... some guys need to just give it up and head to Hollywood or get parts in a stage performance.

I don't think AMD is going to have a hard time flogging cards when they arrive, no matter what Nvidia does or launches, simply because they have proven historically that they are willing to price to shift the market around even if they don't have a halo product. I personally don't buy halo gear, and for those few that do, well, so what, Nvidia will keep you happy/happier than having no halo product at all. It's not like there is no company at all producing a product in that price range for you to spend on.

I generally don't believe AMD will have a Ti or Titan type card ever and I bet they never once targeted that level of performance in their design work. So to be disappointed when it doesn't arrive.... I can't understand that. I've also never once been disappointed that Dongfeng Motor has never produced a competitor to the SRT Demon.
 
The 5970, 6990, 7990, 295x2, and Radeon Pro Duo were all supposed to be that type of card. The strategy since 2009 was to produce two-chip cards to take on the highest of the high end. Unfortunately Crossfire has never performed super well outside of the titles where significant work was done to optimize it (DiRT, BF4).

When the 290 was released, every card available was purchased by miners. I'm not really sure why there are any cash flow problems at AMD.
 
It's called little-to-no-margin products plus a lack of significant penetration of the target market = lack of profit/cash flow.
 
I really don't think AMD's GPU division has lost any market share. Look at it this way: even not having a high-end product for the last year seems to have had little to no effect. That is because AMD knows miners fuckin' jump on these cheap cards. I think that is what's going on here: the RX 480 may not have sold a shitload for gaming, but I think they did sell quite a bit for mining. Those people are not likely to leave reviews, just plug and play, since they are not going to be gaming on them, and it never shows up in Steam surveys.
 
Can you link these demos done with primitive shaders? Because I have never seen any demos of them.
 
Unfortunately that should tell you something about its flexibility, because AMD would have done live demos, just like they did with HBCC creating the 2GB environment on Vega with a game and showing its results.
Cheers
 
Remember the RPM hair demo? I'm 99% sure that was with primitive shaders; it was shown about a month ago. There is no way to get higher than the theoretical output of 2x from going from FP32 to FP16, but they got it.
 
I'm not going to give any slack to AMD for the number of times they have outright lied, misconstrued reality, and hoodwinked us with pre-release info that just doesn't hold up in reality. They have done it with every single product for how long now? Even Ryzen!

Then you have Raja flat-out telling us it's not about beating anyone, alongside a DL compute benchmark that is totally slanted towards AMD products, where Vega thrashes a Titan Xp, and that will not happen in reality, lol.

They are specifically showing off weaknesses of their previous-generation cards, and showing how they improved Vega, in an extremely controlled environment that will never be duplicated in real-world tests. And Raja pretty much admitted to that with the "it's not about beating anyone" statement. It's the same thing they have always done, but now he is saying it with a well-stated caveat.
 
He never answered the primitive shader part, he answered the culling part. Two different things: primitive shaders enhance the culling, pretty much what devs do now in some engines like Frostbite. Primitive shaders themselves don't cull automatically; the shader that is written compares against the FOV to determine visibility on a per-polygon basis and then decides whether to draw the triangle or not (simplified; you still have to figure out Z sorting and whatnot based on the pixel shader). This is the major issue: when people can't understand what is going on, how are they going to understand what is being talked about?
Primitive shaders CAN enhance the culling part; they don't have to do anything they didn't do previously. The logical design here would be that they act like a traditional pipeline until the programmer tells them to do something else. Culling was already occurring, so it stands to reason that functionality will continue. They would in fact cull automatically, unless that feature was disabled, contrary to your claim. FOV/frustum would only be part of the culling, backface culling being rather significant and easily performed. The whole point of a primitive shader is that a programmer can design novel and unique methods of generating geometry; the status quo is just one of the possibilities. GeometryFX on GPUOpen, along with a paper Sebbbi presented at SIGGRAPH, gives an idea of what this is doing.

http://gpuopen.com/geometryfx-1-2-cluster-culling/
http://advances.realtimerendering.c...siggraph2015_combined_final_footer_220dpi.pdf

Every demonstration of Vega we've seen is probably running very basic primitive shaders, a non-basic shader being any construction that generates triangles for rasterization. A primitive could quite literally be a building, and the shaders an extension of the indirect execution we've seen with the latest consoles, where the GPU is capable of drawing an entire scene with little to no interaction from the CPU on some hardware, essentially creating draw calls on its own. Another possibility would be procedural terrain, where a single shader could generate a heightmap from scratch, starting only with a randomized seed number.
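For reference, here is a rough CPU-side sketch of the GeometryFX-style cluster-culling idea linked above: reject whole groups of triangles at once, then only submit the survivors. The struct names and cluster size are illustrative, not taken from the actual library.

[CODE]
#include <array>
#include <cstdint>
#include <vector>

struct Vec3   { float x, y, z; };
struct Sphere { Vec3 center; float radius; };
struct Plane  { Vec3 n; float d; };
using Frustum = std::array<Plane, 6>;

struct MeshCluster {
    Sphere   bounds;      // bounding sphere of a small batch of triangles (e.g. ~256)
    uint32_t firstIndex;  // where this cluster's indices start in the index buffer
    uint32_t indexCount;
};

static float dot3(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Cull whole clusters against the frustum; surviving clusters would feed an indirect draw list.
std::vector<MeshCluster> cullClusters(const std::vector<MeshCluster>& in, const Frustum& fr) {
    std::vector<MeshCluster> visible;
    for (const MeshCluster& c : in) {
        bool culled = false;
        for (const Plane& p : fr) {
            // Sphere entirely behind one plane: every triangle in the cluster is rejected at once.
            if (dot3(p.n, c.bounds.center) + p.d < -c.bounds.radius) { culled = true; break; }
        }
        if (!culled) visible.push_back(c);
    }
    return visible;
}
[/CODE]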

You are just reading it wrong. You are expecting to see 2.4x the culling and polygon throughput per clock over Polaris with Vega; that is what you are expecting, and YOU are in the world of imagination. It's 2.4x over Fiji in per-clock culling and polygon throughput. That is just above Polaris's increase over Tonga, lol. These are the theoretical maximums AMD gave us for Vega. Primitive shaders, which need programming, will give up to another 2x above that. Raja wasn't being very clear when he stated it, but this was very clear in the Vega presentation prior to this one. Demos were shown off too, with primitive shaders and RPM, and coding had to be done, lol.
Seems like he's in a world of reality and you're confused again. Or should we take your word that you understand how these shaders will work better than Raja? Raja's explanation and common sense are both going against your "understanding" here. All appearances are that it's a simple compute shader designed to create geometry and pass it along for rasterization within the pipeline.

Unfortunately that should tell you something about its flexibility, because AMD would have done live demos, just like they did with HBCC creating the 2GB environment on Vega with a game and showing its results.
Cheers
I'm not sure who would have had a demo to run outside of maybe DICE? Only the console devs have been toying with the concept to my understanding. Like I explained above, all demos we've seen are likely running some sort of basic primitive shader. Every stage prior to pixel shader being baked into one monolithic shader. At least that's how the driver devs were describing it, but I don't have a link to that mailing list discussion handy.

Remember the RPM hair demo? I'm 99% sure that was with primitive shaders; it was shown about a month ago. There is no way to get higher than the theoretical output of 2x from going from FP32 to FP16, but they got it.
Besides better cache efficiency with the smaller control dataset?
 
Primitive shaders CAN enhance the culling part; they don't have to do anything they didn't do previously. The logical design here would be that they act like a traditional pipeline until the programmer tells them to do something else. Culling was already occurring, so it stands to reason that functionality will continue. They would in fact cull automatically, unless that feature was disabled, contrary to your claim. FOV/frustum would only be part of the culling, backface culling being rather significant and easily performed. The whole point of a primitive shader is that a programmer can design novel and unique methods of generating geometry; the status quo is just one of the possibilities. GeometryFX on GPUOpen, along with a paper Sebbbi presented at SIGGRAPH, gives an idea of what this is doing.

http://gpuopen.com/geometryfx-1-2-cluster-culling/
http://advances.realtimerendering.c...siggraph2015_combined_final_footer_220dpi.pdf

Every demonstration of Vega we've seen is probably running very basic primitive shaders, a non-basic shader being any construction that generates triangles for rasterization. A primitive could quite literally be a building, and the shaders an extension of the indirect execution we've seen with the latest consoles, where the GPU is capable of drawing an entire scene with little to no interaction from the CPU on some hardware, essentially creating draw calls on its own. Another possibility would be procedural terrain, where a single shader could generate a heightmap from scratch, starting only with a randomized seed number.

That is exactly what I stated; re-read what I stated. They don't cull by themselves. This is the part of the quote that shows how they work:

the shader that is written compares against the FOV to determine visibility on a per-polygon basis and then decides whether to draw the triangle or not (simplified; you still have to figure out Z sorting and whatnot based on the pixel shader)
That is the primitive shader portion. Based on what the developer is looking for and how their assets are set up, that will influence what is culled and what isn't.

Seems like he's in a world of reality and you're confused again. Or should we take your word that you understand how these shaders will work better than Raja? Raja's explanation and common sense are both going against your "understanding" here. All appearances are that it's a simple compute shader designed to create geometry and pass it along for rasterization within the pipeline.

Like Raja's explanation of 51% utilization being correct on an mGPU setup of RX 480s for AOTS? And how did people take that? In this case he hasn't even misspoken; he stated things as they are, without clarifying the specifics.

All appearances here are that you haven't read DICE's engine paper on early discard and how they used AMD's shader array to do it.

I'm not sure who would have had a demo to run outside of maybe DICE? Only the console devs have been toying with the concept to my understanding. Like I explained above, all demos we've seen are likely running some sort of basic primitive shader. Every stage prior to pixel shader being baked into one monolithic shader. At least that's how the driver devs were describing it, but I don't have a link to that mailing list discussion handy.
The "basic primitive shader" is the same thing any hardware is doing right now. What the primitive shader gives is programming flexibility based on what the application needs are.
Besides better cache efficiency with the smaller control dataset?

Really, do you know how triangles are rendered based on objects, strips, and blocks? It's API controlled, man; it doesn't have much to do with cache efficiency and smaller control datasets. Actually, smaller control datasets create huge headaches for cache efficiency; you've got it backwards.
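As a minimal OpenGL sketch of that point, the application (not the driver) chooses the topology and index layout when it submits triangles; this assumes a VAO with vertex and index buffers already bound, which is not shown.

[CODE]
#include <GL/gl.h>

// Same index buffer, different topology: the API call dictates how the GPU assembles
// triangles (separate triangles vs. a strip), not the driver on its own.
void drawMesh(GLsizei indexCount, bool asStrip) {
    GLenum topology = asStrip ? GL_TRIANGLE_STRIP : GL_TRIANGLES;
    glDrawElements(topology, indexCount, GL_UNSIGNED_INT, nullptr);
}
[/CODE]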
 
I'm not going to give any slack to AMD for the number of times they have outright lied, misconstrued reality, and hoodwinked us with pre-release info that just doesn't hold up in reality. They have done it with every single product for how long now? Even Ryzen!

Then you have Raja flat-out telling us it's not about beating anyone, alongside a DL compute benchmark that is totally slanted towards AMD products, where Vega thrashes a Titan Xp, and that will not happen in reality, lol.

They are specifically showing off weaknesses of their previous-generation cards, and showing how they improved Vega, in an extremely controlled environment that will never be duplicated in real-world tests. And Raja pretty much admitted to that with the "it's not about beating anyone" statement. It's the same thing they have always done, but now he is saying it with a well-stated caveat.

First off,

I was with you slamming the RX 480, especially during the PCIe power debacle. But Ryzen is the real deal. At a similar price point it does indeed outdo Intel by a wide margin in most respects, with the exception of low-end gaming and high-end gaming (a small difference due to the GPU being the bottleneck). If they hit 4.4GHz I'll get a Ryzen 8/16 to replace my 3770K 4/8 running at 4.4.

Second, yes, they did pull shenanigans in the past. No, I'm not buying a thing from them until they deliver proof. But that has nothing to do with this generation. None, zip, NADA. You have to remember NVIDIA had their own share of horse-manure BS maneuvers, from disabling PhysX where an AMD card was detected, to closing off source code, plus weaseling out of claims on motherboards that failed with their chips. And it's the very reason I want to see some healthy competition.

AMD's past spin has nothing to do with Vega today, and it's not a valid reason to predict Vega's failure. Will it beat the 1080 or even the 1080 Ti? Maybe. But it doesn't matter if it's competitive price-wise.
 
I agree with everything you stated there outside of AMD's past spin having nothing to do with Vega today; they will always try to show their products in the best light, and they portrayed Ryzen as competing with Intel on all fronts, which it can't do, only on specific targets.

Also, nV did do their own crap tactics, but as of late they haven't, and they aren't even looking at AMD as competition anymore.

Also, I agree it should be able to beat the 1080; I have stated that many times. I expect GTX 1080 performance and possibly a bit more, but I don't think it will beat the 1080 Ti.
 
Remember the RPM hair demo? I'm 99% sure that was with primitive shaders; it was shown about a month ago. There is no way to get higher than the theoretical output of 2x from going from FP32 to FP16, but they got it.

That was Rapid Packed Math: 2x FP16. AMD's TressFX is compute based. It never goes through the Geometry Engines because it ain't geometry. Because it's a compute shader, it avoids the typical front-end bottlenecks, and so the ALUs/SPs will scale 2x with RPM (working as advertised).
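The peak-rate arithmetic behind that 2x FP16 claim, as a minimal sketch: the 4096-SP figure is Vega 10's shader count, and the clock is just an illustrative placeholder, not a spec.

[CODE]
#include <cstdio>

// Packed math doubles the FP16 rate because two 16-bit ops fit in each 32-bit ALU lane.
int main() {
    const double shaders   = 4096;   // Vega 10 stream processors
    const double clock_ghz = 1.5;    // illustrative clock only

    const double fp32_tflops = shaders * 2 * clock_ghz / 1000.0;  // 2 FLOPs per FMA
    const double fp16_tflops = fp32_tflops * 2.0;                 // RPM: two FP16 ops per lane
    std::printf("FP32: %.1f TFLOPS, FP16 (RPM): %.1f TFLOPS\n", fp32_tflops, fp16_tflops);
    return 0;
}
[/CODE]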
 
It doesn't need to beat the 1080 Ti; it just needs to be in the ballpark to allow them to price it high for margins.

Let's not get ahead of ourselves and expect miracles. GP102 is pretty big and close to Vega 10 in size, and AMD hasn't beaten NVIDIA in perf/mm² since Maxwell. At best, it will trade blows.
 
It's going to come down to what power usage. If it's GTX 1080 performance at 250 watts, forget it, it's done. If it's above the GTX 1080 by, let's say, 10% at 250 watts, that's better, but it still won't look that good. If it's at 225 watts, that is doable.

Price: it will be priced where it should be, just like any AMD product in the past, the Fury X and Nano notwithstanding.

That was Rapid Packed Math: 2x FP16. AMD's TressFX is compute based. It never goes through the Geometry Engines because it ain't geometry. Because it's a compute shader, it avoids the typical front-end bottlenecks, and so the ALUs/SPs will scale 2x with RPM (working as advertised).

Raja never mentioned TressFX; if it was a TressFX demo, I would be even more wary about the front-end changes.

Also, I just rewatched the Capsaicin and Cream presentation; no mention of TressFX.

Also, I forgot: TressFX isn't purely compute. If I'm not mistaken, part of it is, but the geometry shaders are still needed.

Yes, he does say it's TressFX; I missed the first 10 seconds where he stated it while I was scrubbing forward. So that really gets me worried about the front-end changes, man, if they need to use primitive shaders and RPM together (Raja does mention the new programmable geometry pipeline, which would be shown later that day, which was the RPM demo).

So to recap: that demo uses the programmable geometry pipeline of Vega, which is primitive shaders, to get its 2x performance increase in geometry throughput. Which is now looking not so good, because that is just above the synthetic tests of Polaris vs. Tonga.



Check out 9:25: he talks about RPM and then the new programmable geometry pipeline. They go hand in hand in the demo of TressFX they show at 22:45.
 
How is it done at 250W? Come on, that is still miles better than Polaris. The RX 580 is using 220W at 1400+ MHz; if RX Vega doubles that performance and gives it to you at 250W, that's bad? Yes, it's bad only compared to the 1080 Ti, likely around 50-60 watts more on average. But all of a sudden we expected them to catch NVIDIA in power and performance? I love it when people say a card is dead if it uses 250W; that is not too bad at the high end.
 
It's bad compared to what it's up against. If it performs just above the GTX 1080, well, the GTX 1080 is only at 180 watts.

That is a huge power difference, man; that is a ~40% perf/watt gap. You think OEMs are going to like that? Now if it's at 225 watts and performs 10% faster, that closes the gap a lot, down to like 25% or so, which is manageable; it won't look bad. But at a 40% difference, it's worse than the Fury X was against the 980 Ti.
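The arithmetic behind that ~40% figure, as a minimal sketch; the inputs are the scenario assumed above (GTX 1080-level performance at 250 W against the 1080's 180 W board power), not measurements.

[CODE]
#include <cstdio>

// Perf-per-watt comparison for the equal-performance scenario discussed above.
int main() {
    const double perf_1080 = 1.00, watts_1080 = 180.0;
    const double perf_vega = 1.00, watts_vega = 250.0;   // assumed: same performance at 250 W

    const double ppw_1080 = perf_1080 / watts_1080;
    const double ppw_vega = perf_vega / watts_vega;
    std::printf("GTX 1080 perf/W advantage: %.0f%%\n", (ppw_1080 / ppw_vega - 1.0) * 100.0);
    return 0;
}
[/CODE]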
 
It's going to come down to what power usage. If it's GTX 1080 performance at 250 watts, forget it, it's done. If it's above the GTX 1080 by, let's say, 10% at 250 watts, that's better, but it still won't look that good. If it's at 225 watts, that is doable.

Price: it will be priced where it should be, just like any AMD product in the past, the Fury X and Nano notwithstanding.

I've been seeing "It's done" quotes here everywhere.

It's not done. It's not even delivered. I refute this claim based on power conditions alone. There are enough red team people around to make this card work if it's 1080ish, no matter what power it sucks back. If you don't personally buy it, I assure you, AMD won't care. They won't even send you an apology letter. <----that I guarantee.
 
It will have the same sales as the Fury X if there's a 40% differential in perf/watt; that is a given. Vega is coming out later than the competition (if its direct competition is the GTX 1080) by over a year, which is four times greater than the three-month gap from the Fury X to the 980 Ti, and if it's at a 40% perf/watt difference, that is 10% higher than Fury X vs. the 980 Ti. There is a big difference if this comes to light.
 
I doubt any of the people with FreeSync monitors will give two craps about perf/watt. But sure, sell me that. I won't buy it regardless, so I'm not a hard anti-sale.
 
As Polaris has shown, not nearly enough to get them in the green.

AMD will make money (for better or worse) based on Zen designs; they've hopelessly buggered up ATI. But that doesn't mean they won't sell any cards and the company is doomed. I simply don't buy this gloom-and-doom tale about an unreleased product that really isn't where they are (obviously) spending their money.
 
I doubt any of the people with FreeSync monitors will give two craps about perf/watt. But sure, sell me that. I won't buy it regardless, so I'm not a hard anti-sale.

That's only part of the market; what about the OEMs and system builders? All of them will shun it if it's that much of a difference. There is no long-term sustainability, man. It's like saying the first month's 1% will cover the entire year of sales; it just doesn't happen. First off, I have a hard time seeing AMD cutting the price on this product under 600 bucks. And I know for a fact that when Dell sold the Fury X last quarter, they raised their system prices over what a comparable system with a GTX 1080 sold for. They were charging 200 or 250 bucks more for a Fury X over a GTX 1080; that's crazy. Who in their right mind would want to spend money on a system like that?
 
Don't work so hard to convince me, I know what the score is, but power consumption isn't going to be the defining factor, so don't sell me that. The 7970 sales alone put that old acorn to bed.
 
The 7970 was flat, lol. The 7970 did nothing for them.

[Nvidia-AMD.png: Nvidia vs. AMD market share chart]


There was an uptick for the first quarter, then it started to go down again. The uptick wasn't due to the 7970; it was due to the lower-end parts being there without any nV counterparts. When the lower-end nV counterparts came into play, it equalized the playing field and then some.
 
I doubt any of the people with FreeSync monitors will give two craps about perf/watt. But sure, sell me that. I won't buy it regardless, so I'm not a hard anti-sale.

At this point, the only reason I'm even remotely considering Vega (based on current rumor/educated guesses) is because if I did, I'll probably be getting a 40"+ 4K FreeSync monitor (like the LG 43UD79 which we already have a thread on the Displays forum here) to replace my current "budget" 42" 4K@60Hz TV.
At a mere $700, that is much cheaper than a roughly equivalent-sized (give or take 4+ inches) G-Sync monitor.
What I'm curious about is if Vega will be able to match a stock-clocked GTX 1080 Ti (possibly with overclocking? but I doubt it).
Otherwise, I'll just get a nice custom GTX 1080 Ti and call it a day, waiting for Black Friday/wintertime for G-Sync monitor releases and sales.
 
The 7970 was flat, lol. The 7970 did nothing for them.

There was an uptick for the first quarter, then it started to go down again. The uptick wasn't due to the 7970; it was due to the lower-end parts being there without any nV counterparts. When the lower-end nV counterparts came into play, it equalized the playing field and then some.

You seem to think I'm debating with you, however, AMD made their targets with that product. The fact that the targets didn't meet your expectations and were "flat" is your problem, not AMD's.
 
At this point, the only reason I'm even remotely considering Vega (based on current rumor/educated guesses) is because if I did, I'll probably be getting a 40"+ 4K FreeSync monitor (like the LG 43UD79 which we already have a thread on the Displays forum here) to replace my current "budget" 42" 4K@60Hz TV.
At a mere $700, that is much cheaper than a roughly equivalent-sized (give or take 4+ inches) G-Sync monitor.
What I'm curious about is if Vega will be able to match a stock-clocked GTX 1080 Ti (possibly with overclocking? but I doubt it).
Otherwise, I'll just get a nice custom GTX 1080 Ti and call it a day, waiting for Black Friday/wintertime for G-Sync monitor releases and sales.

I'd seriously just buy what I wanted if I were you. I know that whole Freesync thing looks good on paper, but buy today what you can enjoy for years and ignore the gambling aspect of what comes next in tech, because it usually isn't quite enough. Buy what works today and carry on.
 
You seem to think I'm debating with you, however, AMD made their targets with that product. The fact that the targets didn't meet your expectations and were "flat" is your problem, not AMD's.


I'm not arguing with you; you stated the 7970 did something for AMD, and I just showed you the facts: it did nothing for them, no market share gain, and they were in the red at that point too. That was not their goal with the 7970; they wanted to be back at the top end and compete at the top end with that card. They stated it not once or twice but many times, and proudly pointed to how large the die size was too.

We know that the Fury X didn't sell well, yet if Vega comes out worse in the perf/watt department, and with a much longer time lag than the Fury X had, it's going to sell better? How so? Magic, I guess. Vega needs to be around 225 watts and around 10% faster than the 1080 to make decent sales; otherwise it's going to be the same old thing as with Polaris: it's there, but it's not doing too much for AMD.
 