AMD Vega and Zen Gaming System Rocks Doom At 4K Ultra Settings

Yes, you guys know best. Thing is, if it's too aggressive it will pass every memory benchmark. It stops the moment you disable CrossFire. You can still replicate it even with today's mobos.
Strange thing about it is that it does not matter if it's two cards in CrossFire or a single 7990.
 
I think it's more along the lines of: this simple setting prevented you from enjoying CrossFire, only to find out years later that your shit was not configured correctly... lol
Really.
Interesting that running the RAM at default non-XMP settings with a much higher tRFC didn't help, even at the slower RAM speed that can accept a much lower tRFC.
The RAM I upgraded to showed the same issue.
It did the same in a friend's machine.

I wonder what the common link is.
;)
 
Conspiracy? :D
Or maybe it's the fact that a majority had no problems while you, your friend, and a bunch of others experienced stuttering.
 
Yes, you guys know best. Thing is, if it's too aggressive it will pass every memory benchmark. It stops the moment you disable CrossFire. You can still replicate it even with today's mobos.
Strange thing about it is that it does not matter if it's two cards in CrossFire or a single 7990.


It took ATi/AMD a while to get CrossFire working well, especially with day-one drivers. I don't have much experience with SLI or CrossFire; when I went SLI with the 7 series from nV it was such a disappointment that I haven't touched a multi-GPU setup since. But yeah, SLI tended to be better overall, and now they both just aren't worth it...
 
Just an update; going to copy and paste my B3D post.

Question answered: you get two times and more the polygon throughput when using primitive shaders, through use of the shader array.

Enough? 2:34



So pretty much the same capabilities as Polaris (as far as geometry is concerned with current or older games), most likely still 4 geometry units... with the addition of the primitive shaders you get more.

I can see this tech coming in handy sooner in consoles with the Vega architecture (next-gen Xbox rumored), but on PC we probably won't see this for a year or two after Vega is released to developers...

This also goes for Vega's tile renderer, which also needs to go through the primitive shader.
 
Vega is introducing something it calls 'Rapid Packed Math', which allows the FP module to switch the precision level of a particular operation. The most likely case is dropping from single to half precision, as this basically allows twice the number of operations to fit inside a register, consequently doubling the FLOPS. Thing is, half precision isn't used much inside games; it's not accurate enough for various lighting effects (resulting in artifacts). Where it does make sense is deep learning and data analysis, where scale trumps precision (since scale can provide precision, depending on the workload).
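As a rough illustration of the packing idea just described (this is a software sketch of the concept, not how the hardware actually executes it), two FP16 values can share the bit pattern of a single 32-bit word; the sample values and the NumPy round trip below are purely hypothetical:

```python
import numpy as np

# Conceptual illustration of "packed" half precision: two FP16 values share one
# 32-bit register slot, which is why peak FP16 throughput can double.

a = np.float16(1.5)
b = np.float16(-2.25)

# Pack: take the raw 16-bit patterns and place them in the low/high halves of a 32-bit word.
packed = int(a.view(np.uint16)) | (int(b.view(np.uint16)) << 16)

# Unpack and check the round trip.
lo = np.uint16(packed & 0xFFFF).view(np.float16)
hi = np.uint16(packed >> 16).view(np.float16)
assert lo == a and hi == b
print(hex(packed), float(lo), float(hi))
```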

AMD has been locked out of the data center for long enough. While it provides workstation cards, it lacks a number of key technologies needed for HPC, scaling, and interoperability, and it's something AMD is all too aware of. This is what's driving its GPUOpen initiative through RTG: getting broader support not just at the OS level with drivers, but with software libraries and programming tools, transcoding CUDA into C in real time.

AMD is branching out, and as such, some of the technologies being introduced with Vega are not strictly for gaming but a framework for a much broader range of products in different sectors: battle-harden the architecture in a known market before pushing it into the unknown.

With that said, there are still some interesting architectural changes that may benefit games, one of which is the primitive shader. This is set to replace the vertex and geometry shaders that have existed since the DX10 days. However, developers have been pushing so much through them that they have to use compute shaders to perform the same task. To make matters worse, not all objects in a scene are visible to the camera (and thus the player), so huge amounts of geometry are shaded even when they aren't in view.
Part of the fix is the primitive shader. It's a more programmable geometry shader that offers the fixed-hardware speed of a dedicated shader with some of the flexibility of a compute shader. Keeping the workflow within these programmable shaders also helps with culling unseen geometry and speeding up shading.
Speaking of shading, there is an overhaul in that department with the Draw-Stream Binning Rasterizer. This is meant to be a single-step rasterizer that uses a tile-based approach to rendering: breaking a scene up into tiles, figuring out which objects overlap, and only rendering what's seen, all in a single pass. This effectively doubles the peak geometry rendering rate, so we expect to see some rather impressive numbers in the coming months.
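For anyone unfamiliar with tile binning, here's a minimal sketch of the general idea (sorting triangles into screen tiles and skipping anything that never touches the visible screen); the tile size, screen dimensions, and triangles are made-up numbers, and this is in no way AMD's actual DSBR implementation:

```python
from collections import defaultdict

TILE = 32  # hypothetical tile size in pixels

def bounding_box(tri):
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    return min(xs), min(ys), max(xs), max(ys)

def bin_triangles(triangles, width, height):
    """Map each screen tile to the list of triangles that overlap it."""
    bins = defaultdict(list)
    for idx, tri in enumerate(triangles):
        x0, y0, x1, y1 = bounding_box(tri)
        if x1 < 0 or y1 < 0 or x0 >= width or y0 >= height:
            continue  # entirely off-screen: culled, never shaded
        for ty in range(max(0, int(y0)) // TILE, min(int(y1), height - 1) // TILE + 1):
            for tx in range(max(0, int(x0)) // TILE, min(int(x1), width - 1) // TILE + 1):
                bins[(tx, ty)].append(idx)
    return bins

tris = [
    [(10, 10), (40, 12), (20, 50)],           # overlaps a few tiles
    [(-100, -100), (-50, -80), (-60, -20)],   # off-screen, culled
]
print(dict(bin_triangles(tris, 1920, 1080)))
```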
 
Vega is introducing something it calls 'Rapid Packed Math', which allows the FP module to switch the precision level of a particular operation. The most likely case is dropping from single to half precision, as this basically allows twice the number of operations to fit inside a register, consequently doubling the FLOPS. Thing is, half precision isn't used much inside games; it's not accurate enough for various lighting effects (resulting in artifacts). Where it does make sense is deep learning and data analysis, where scale trumps precision (since scale can provide precision, depending on the workload).

AMD has been locked out of the data center for long enough. While it provides workstation cards, it lacks a number of key technologies needed for HPC, scaling, and interoperability, and it's something AMD is all too aware of. This is what's driving its GPUOpen initiative through RTG: getting broader support not just at the OS level with drivers, but with software libraries and programming tools, transcoding CUDA into C in real time.

AMD is branching out, and as such, some of the technologies being introduced with Vega are not strictly for gaming but a framework for a much broader range of products in different sectors: battle-harden the architecture in a known market before pushing it into the unknown.

With that said, there are still some interesting architectural changes that may benefit games, one of which is the primitive shader. This is set to replace the vertex and geometry shaders that have existed since the DX10 days. However, developers have been pushing so much through them that they have to use compute shaders to perform the same task. To make matters worse, not all objects in a scene are visible to the camera (and thus the player), so huge amounts of geometry are shaded even when they aren't in view.
Part of the fix is the primitive shader. It's a more programmable geometry shader that offers the fixed-hardware speed of a dedicated shader with some of the flexibility of a compute shader. Keeping the workflow within these programmable shaders also helps with culling unseen geometry and speeding up shading.
Speaking of shading, there is an overhaul in that department with the Draw-Stream Binning Rasterizer. This is meant to be a single-step rasterizer that uses a tile-based approach to rendering: breaking a scene up into tiles, figuring out which objects overlap, and only rendering what's seen, all in a single pass. This effectively doubles the peak geometry rendering rate, so we expect to see some rather impressive numbers in the coming months.
How about Vega's variable-length wavefront ability, which maybe Razor1 could explain way better than I can, in terms of any useful impact on optimizing use of the available shaders? I believe Nvidia already has this ability.
 
Conspiracy? :D
Or maybe it's the fact that a majority had no problems while you, your friend, and a bunch of others experienced stuttering.

The 'majority' likely didn't care.

However, the *objective measurements* showed frametimes with Crossfire to be nigh unusable relative to a single card. This was *my* experience. Nvidia was night-and-day better, even with all of the issues that SLI comes with.
 
Vega is introducing something it calls 'Rapid Packed Math', which allows the FP module to switch the precision level of a particular operation. The most likely case is dropping from single to half precision, as this basically allows twice the number of operations to fit inside a register, consequently doubling the FLOPS. Thing is, half precision isn't used much inside games; it's not accurate enough for various lighting effects (resulting in artifacts). Where it does make sense is deep learning and data analysis, where scale trumps precision (since scale can provide precision, depending on the workload).

AMD has been locked out of the data center for long enough. While it provides workstation cards, it lacks a number of key technologies needed for HPC, scaling, and interoperability, and it's something AMD is all too aware of. This is what's driving its GPUOpen initiative through RTG: getting broader support not just at the OS level with drivers, but with software libraries and programming tools, transcoding CUDA into C in real time.

AMD is branching out, and as such, some of the technologies being introduced with Vega are not strictly for gaming but a framework for a much broader range of products in different sectors: battle-harden the architecture in a known market before pushing it into the unknown.

With that said, there are still some interesting architectural changes that may benefit games, one of which is the primitive shader. This is set to replace the vertex and geometry shaders that have existed since the DX10 days. However, developers have been pushing so much through them that they have to use compute shaders to perform the same task. To make matters worse, not all objects in a scene are visible to the camera (and thus the player), so huge amounts of geometry are shaded even when they aren't in view.
Part of the fix is the primitive shader. It's a more programmable geometry shader that offers the fixed-hardware speed of a dedicated shader with some of the flexibility of a compute shader. Keeping the workflow within these programmable shaders also helps with culling unseen geometry and speeding up shading.
Speaking of shading, there is an overhaul in that department with the Draw-Stream Binning Rasterizer. This is meant to be a single-step rasterizer that uses a tile-based approach to rendering: breaking a scene up into tiles, figuring out which objects overlap, and only rendering what's seen, all in a single pass. This effectively doubles the peak geometry rendering rate, so we expect to see some rather impressive numbers in the coming months.

They are locked out because of the lack of software support; adding more features without the software just won't do it. nV was smart in that they even locked out Intel's Phi too, although Phi has been making strides because of the common programming techniques shared with their CPUs.

How about Vega's variable-length wavefront ability, which maybe Razor1 could explain way better than I can, in terms of any useful impact on optimizing use of the available shaders? I believe Nvidia already has this ability.

Going by the white papers, I currently don't see immediate advantages for AMD, as Vega doesn't seem to have this, at least from what they have stated so far; it looks to be something for Navi, but AMD hasn't disclosed everything on Vega yet. It would have been a huge feature change and something they probably would have pointed out. Even if Vega has it, the immediate advantage for AMD won't be seen till apps are developed for it. Even at the Instinct launch they didn't talk about this, so at this point I don't think they have it in Vega.
 
Razor1, I don't get your logic. You have to come up with the hardware in order to write software for it.
My understanding is that it was pushed by developers who wanted the flexibility of primitive shading in order to streamline their efforts as well as make the process much more efficient.
 
Razor1, I don't get your logic. You have to come up with the hardware in order to write software for it.

Current software solutions for HPC will not use AMD hardware because most of them are built on CUDA; they would need to remake their entire software suite, which is more money and time wasted. The feature list of AMD's stack (software libraries and the features in those libraries) is limited compared to CUDA's, and CUDA is easier to program for; many HPC programmers attest to this, since CUDA is taught at the college level. And this goes back to building the ecosystem, which is why Intel is having such a hard time too.

The people in the HPC space don't care much about the hardware, as long as it can do what they need it to do. If they already have software working, moving to other hardware and rewriting the software is just a waste for them.
 
Conspiracy? :D
Or maybe it's the fact that a majority had no problems while you, your friend, and a bunch of others experienced stuttering.
The opposite.
The drivers were so bad that a forum member on Guru3D rewrote them to fix issues AMD wouldn't.
Those drivers allowed me to get Dirt 2 working great, but other games were still not good enough to play.
Crossfire is the biggest waste of time and money I've had on my PC.
 
They are locked out because of the lack of software support; adding more features without the software just won't do it. nV was smart in that they even locked out Intel's Phi too, although Phi has been making strides because of the common programming techniques shared with their CPUs.



Going by the white papers, I currently don't see immediate advantages for AMD, as Vega doesn't seem to have this, at least from what they have stated so far; it looks to be something for Navi, but AMD hasn't disclosed everything on Vega yet. It would have been a huge feature change and something they probably would have pointed out. Even if Vega has it, the immediate advantage for AMD won't be seen till apps are developed for it. Even at the Instinct launch they didn't talk about this, so at this point I don't think they have it in Vega.
Something of interest on Vega using various sources, nothing concrete:

Looks like Vega's variable wavefront is for compute only, going by this. I am beginning to think Vega's performance (like Pascal's) will depend on final clock speed more than anything else. The future tech incorporated will not be used right out of the gate; if and when it gets used is anyone's guess. If AMD could fix their rasterization rate without having to rely on the new primitive processor, that would help.
 
Something of interest on Vega using various sources, nothing concrete:

Looks like Vega's variable wavefront is for compute only, going by this. I am beginning to think Vega's performance (like Pascal's) will depend on final clock speed more than anything else. The future tech incorporated will not be used right out of the gate; if and when it gets used is anyone's guess. If AMD could fix their rasterization rate without having to rely on the new primitive processor, that would help.


If it were for compute only though, I think the Instinct launch would have covered it; it would be a huge benefit for things like neural nets.
 
Just an update; going to copy and paste my B3D post.

Question answered: you get two times and more the polygon throughput when using primitive shaders, through use of the shader array.

Enough? 2:34

So pretty much the same capabilities as Polaris (as far as geometry is concerned with current or older games), most likely still 4 geometry units... with the addition of the primitive shaders you get more.

I can see this tech coming in handy sooner in consoles with the Vega architecture (next-gen Xbox rumored), but on PC we probably won't see this for a year or two after Vega is released to developers...

This also goes for Vega's tile renderer, which also needs to go through the primitive shader.


According to AnandTech it's a major overhaul over Polaris. The reason Vega came out late is probably that there were bigger changes to it than to Polaris, and AMD most likely couldn't afford to ramp it up due to R&D. They could have easily slapped more cores onto the Polaris chip and had a faster card, instead of taking the spanking for another year.

Not to mention the Draw Stream Binning Rasterizer (which is likely tile-based), something AMD has needed for so long and the biggest efficiency win for Maxwell.

Now, you could be right about the primitive shaders, but AMD does have a slide claiming 2x the throughput per clock with the same number of geometry engines. What we don't know is whether that is strictly because of primitive shaders or whether there is an improvement to the individual geometry engines. But at this point it's not just primitive shaders at play; there is more to the chip than just slapping on HBM and slightly higher clocks.

If they can run this sucker at 1500 MHz, that alone is a fuckin' step in the right direction for AMD, lol. But I think they did as much as they could with the R&D they have.

I think Vega does have significant input from Raja; it is fixing something that has been lacking on AMD's side since the 290/290X cards.
 
According to AnandTech it's a major overhaul over Polaris. The reason Vega came out late is probably that there were bigger changes to it than to Polaris, and AMD most likely couldn't afford to ramp it up due to R&D. They could have easily slapped more cores onto the Polaris chip and had a faster card, instead of taking the spanking for another year.

Not to mention the Draw Stream Binning Rasterizer (which is likely tile-based), something AMD has needed for so long and the biggest efficiency win for Maxwell.

Now, you could be right about the primitive shaders, but AMD does have a slide claiming 2x the throughput per clock with the same number of geometry engines. What we don't know is whether that is strictly because of primitive shaders or whether there is an improvement to the individual geometry engines. But at this point it's not just primitive shaders at play; there is more to the chip than just slapping on HBM and slightly higher clocks.

If they can run this sucker at 1500 MHz, that alone is a fuckin' step in the right direction for AMD, lol. But I think they did as much as they could with the R&D they have.

I think Vega does have significant input from Raja; it is fixing something that has been lacking on AMD's side since the 290/290X cards.


Yes, I mentioned the tile-based renderer, but that too has to be done through primitive shaders. The major change is the primitive shader stage, which no current, past, or near-future game will use.

It's not going to be running at 1500; it's going to be boosting up to 1500. Big difference.

Remember, AMD has a habit of always showing the best case, and the product tends to fall below that in most cases. That might be the case here too, given what we have seen with Doom and Battlefront.

Marketing things like this, AMD will want to turn a negative into a positive. We know they are lacking the features that have given Maxwell and Pascal an advantage in both performance and power consumption, and that is what they are addressing: "we have it now too" is what they are saying, but developers have to use this new pipeline stage. That's a tough pill to swallow, being a year late and then waiting another year or more to see it in applications.

Development of programmable features is only useful when their performance per transistor makes them viable, and we don't know the drawbacks yet. All we have to do is look back at when they weren't optimal: SM 3.0 on the 6xxx and 7xxx series? Didn't come into play. How about branch performance on the X1800 and X1900 series? Geometry shaders on the G80? These are all examples of getting things into programmers' hands where the hardware was capable of something but it wasn't used until a gen or two later.

The only card I know of that had the best features of a new API usable at time of launch (when DX9 games launched) was the 9700. The G80 did with most of DX10 too, but not to the degree that the 9700 held up.
 
Yes, I mentioned the tile-based renderer, but that too has to be done through primitive shaders. The major change is the primitive shader stage, which no current, past, or near-future game will use.

It's not going to be running at 1500; it's going to be boosting up to 1500. Big difference.

Okay, so are you saying primitive shaders are the only thing at play here and AMD is wasting tile-based rendering if a game doesn't use primitive shaders? I am pretty sure not everything is reliant on primitive shaders; Nvidia does tile-based rendering with their existing geometry engine. It looks like primitive shaders are an add-on, not the only way to do tile-based rendering. I would be very surprised if AMD let tile-based rendering go to waste depending on whether a game uses primitive shaders or not. That would look like a giant waste of time.

I am pretty sure it still works to its advantage even if a game doesn't use primitive shaders. AMD does say it can do 11 polygons per clock with 4 geometry engines.

EDIT: Then again, I leave it up to AMD to decide. lol
 
Okay, so are you saying primitive shaders are the only thing at play here and AMD is wasting tile-based rendering if a game doesn't use primitive shaders? I am pretty sure not everything is reliant on primitive shaders; Nvidia does tile-based rendering with their existing geometry engine. It looks like primitive shaders are an add-on, not the only way to do tile-based rendering. I would be very surprised if AMD let tile-based rendering go to waste depending on whether a game uses primitive shaders or not. That would look like a giant waste of time.


I don't think they had the time to modify their ROPs to make things work automatically when it comes to their tile-based renderer; this is a huge change in the way the data is set up and passed to and from the rasterizer. What I'm thinking is that the primitive shader stage talks to the ROPs (this would change the cache architecture and the registers too). That stage was added in for geometry reasons, because AMD knew they had been behind by quite a bit there since Fermi, and then they saw it could also be used to set up a tile-based renderer, so they did that too. Think about the limit of 11 tris per clock: why is that limit there? I'm sure, with the amount of calculations AMD's cores can do, 11 tris to discard should be nothing...

Yes, that is what it sounds like to me. They weren't clear on it, but there would be no reason to talk about primitive shaders along with the tile-based rasterizer if that weren't the case.

nV has fixed-function units to do tile-based rendering (and their geometry units are much more capable at discarding triangles, as we have seen over many generations). AMD doesn't have this; that is why they mentioned primitive shaders need to be added into an existing API, or an AMD library must be released to developers for them to use it.
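To make the culling point concrete, here's a toy sketch of the kind of screen-space triangle rejection a programmable geometry stage could do before rasterization; the winding convention and the sample triangles are assumptions for illustration, not AMD's actual primitive shader behavior:

```python
# Drop back-facing and degenerate triangles so they never reach the shading stage.

def signed_area(tri):
    (x0, y0), (x1, y1), (x2, y2) = tri
    return 0.5 * ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))

def cull(triangles):
    """Keep only front-facing (counter-clockwise), non-degenerate triangles."""
    return [t for t in triangles if signed_area(t) > 0.0]

tris = [
    [(0, 0), (10, 0), (0, 10)],   # front-facing: kept
    [(0, 0), (0, 10), (10, 0)],   # back-facing (clockwise winding): culled
    [(0, 0), (5, 5), (10, 10)],   # zero area: culled
]
print(len(cull(tris)), "of", len(tris), "triangles survive culling")
```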
 
I don't think they had the time to modify their ROPs to make things work automatically when it comes to their tile-based renderer; this is a huge change in the way the data is set up and passed to and from the rasterizer. What I'm thinking is that the primitive shader stage talks to the ROPs (this would change the cache architecture and the registers too). That stage was added in for geometry reasons, because AMD knew they had been behind by quite a bit there since Fermi, and then they saw it could also be used to set up a tile-based renderer, so they did that too. Think about the limit of 11 tris per clock: why is that limit there? I'm sure, with the amount of calculations AMD's cores can do, 11 tris to discard should be nothing...

Yes, that is what it sounds like to me. They weren't clear on it, but there would be no reason to talk about primitive shaders along with the tile-based rasterizer if that weren't the case.

nV has fixed-function units to do tile-based rendering (and their geometry units are much more capable at discarding triangles, as we have seen over many generations). AMD doesn't have this; that is why they mentioned primitive shaders need to be added into an existing API, or an AMD library must be released to developers for them to use it.

Makes sense. Let's see how it ends up. Any improvement is good for AMD at this point, lol.
 
Yes, I mentioned the tile-based renderer, but that too has to be done through primitive shaders. The major change is the primitive shader stage, which no current, past, or near-future game will use.

It's not going to be running at 1500; it's going to be boosting up to 1500. Big difference.

Remember, AMD has a habit of always showing the best case, and the product tends to fall below that in most cases. That might be the case here too, given what we have seen with Doom and Battlefront.
What we do know is that the Instinct MI25 is rated at 12.5 TFLOPS, from which we can reasonably predict the frequency it has to run at, assuming nothing major changed in AMD's architecture:

4096 (core count) x 2 (FP32 ops per clock, i.e. one FMA) x clock speed = FLOPS

12.5 TFLOPS / (4096 x 2) ≈ 1.53 GHz

That to me would be the base clock and not a boost clock for this card. While the Instinct MI25 is passively cooled, it is placed in a rather well-ventilated and most likely air-conditioned environment. Now, for the gaming card: will it be a fully enabled GPU? Higher clock speeds than the MI25? Just too many unanswered questions at this time. I think AMD indicated the top-end gaming one will be fully enabled. The boost clock I would expect to be higher than 1.53 GHz.
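A quick sanity check of that arithmetic, under the same assumptions as above (4096 shaders, one FMA counted as two FP32 ops per clock; none of this is a confirmed spec):

```python
# Clock speed implied by a quoted peak-FLOPS figure.
SHADERS = 4096           # assumed shader count
OPS_PER_CLOCK = 2        # one fused multiply-add = two FP32 operations
TARGET_TFLOPS = 12.5     # the Instinct MI25 figure quoted above

clock_ghz = TARGET_TFLOPS * 1e12 / (SHADERS * OPS_PER_CLOCK) / 1e9
print(f"Implied clock: {clock_ghz:.2f} GHz")  # -> about 1.53 GHz
```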

If you took a Fury X and increased the clock speed from 1050 MHz to 1550 MHz, and assumed the architectural enhancements allow perfect scaling (ignoring that it has the same damn memory bandwidth), you are looking at roughly a 50% improvement in performance, which would be very nice indeed! Not including future use of the extra hardware.

The larger size of the chip will allow better heat dissipation, since it isn't necessarily packed as tight, is what I'm gathering. So it may consume more power than Nvidia's rather efficient GP104 architecture, but it may also perform very well.

VR performance - not much investigation has gone into why AMD is performing poorly in VR. My thought is that it comes down to AMD's poorer/slower rasterization (geometry setup), which hits AMD's current hardware hard even in geometrically simple VR games; since AMD does not use a cache for this like Nvidia does, performing it twice, once per eye, is killing AMD cards. Vega may have, and appears to have, corrected this by adding a cache for it. As a note, the Fury X would improve in performance when OCing the HBM memory; this indicates to me that the weakness is rasterization pulling from memory instead of cache, and with OC'd HBM you reduce the latency of that step.

 
Makes sense. Let's see how it ends up. Any improvement is good for AMD at this point, lol.

I do see it being very useful in the future, especially with the way things are moving away from fixed-function units, ROPs specifically, because that would open a huge door of possibilities for the way real-time renderers work.
 
What we do know is that the Instinct MI25 is rated at 12.5 TFLOPS, from which we can reasonably predict the frequency it has to run at, assuming nothing major changed in AMD's architecture:
4096 (core count) x 2 (FP32 ops per clock, i.e. one FMA) x clock speed = FLOPS
12.5 TFLOPS / (4096 x 2) ≈ 1.53 GHz
That to me would be the base clock and not a boost clock for this card. While the Instinct MI25 is passively cooled, it is placed in a rather well-ventilated and most likely air-conditioned environment. Now, for the gaming card: will it be a fully enabled GPU? Higher clock speeds than the MI25? Just too many unanswered questions at this time. I think AMD indicated the top-end gaming one will be fully enabled. The boost clock I would expect to be higher than 1.53 GHz.

If you took a Fury X and increased the clock speed from 1050 MHz to 1550 MHz, and assumed the architectural enhancements allow perfect scaling (ignoring that it has the same damn memory bandwidth), you are looking at roughly a 50% improvement in performance, which would be very nice indeed! Not including future use of the extra hardware.

The larger size of the chip will allow better heat dissipation, since it isn't necessarily packed as tight, is what I'm gathering. So it may consume more power than Nvidia's rather efficient GP104 architecture, but it may also perform very well.

VR performance - not much investigation has gone into why AMD is performing poorly in VR. My thought is that it comes down to AMD's poorer/slower rasterization (geometry setup), which hits AMD's current hardware hard even in geometrically simple VR games; since AMD does not use a cache for this like Nvidia does, performing it twice, once per eye, is killing AMD cards. Vega may have, and appears to have, corrected this by adding a cache for it. As a note, the Fury X would improve in performance when OCing the HBM memory; this indicates to me that the weakness is rasterization pulling from memory instead of cache, and with OC'd HBM you reduce the latency of that step.

AMD has always used boost clocks to calculate their TFLOPS; added to this, there is a slide in the Vega deck which talks about peak instructions per clock. This is why they have always stated "up to" X TFLOPS for Polaris and older gens.

Not sure about VR, but that is a possibility. Then again, most VR games don't use as many polys as the best AAA games do. I think it's really just their effort on driver development and dev support; they're stretched a bit too thin.
 
AMD has always used boost clocks to calculate their TFLOPS; added to this, there is a slide in the Vega deck which talks about peak instructions per clock. This is why they have always stated "up to" X TFLOPS for Polaris and older gens.

Not sure about VR, but that is a possibility. Then again, most VR games don't use as many polys as the best AAA games do. I think it's really just their effort on driver development and dev support; they're stretched a bit too thin.
I would think for the professional HPC market that would be the minimum clock; maybe under certain conditions the card will always maintain that rate. The gaming card would obviously be variable and could be higher, and I don't think that would be the max speed even before OCing.

As for VR, the latency of doing geometry setup for the two eyes may be the killer - I don't really know.

Also, what is lacking from Vega - harder to see when something is not there - is the DX12 feature level. Is Vega going to meet feature level 12_1? Plus the ever-confusing tier system of 1-3 for things like conservative rasterization. Plus, AMD's new primitive processor, which looks to be what the next Xbox will use, may eventually create a new DX12 level/tier, is my prediction. If Microsoft continues releasing Xbox games ready for PC at or near the same time, this may become somewhat important, mainly because these new Vega features may get used sooner rather than later, maybe within 1-2 years.
 
As for VR, the latency of doing geometry setup for the two eyes may be the killer - I don't really know.

Also, what is lacking from Vega - harder to see when something is not there - is the DX12 feature level. Is Vega going to meet feature level 12_1? Plus the ever-confusing tier system of 1-3 for things like conservative rasterization. Plus, AMD's new primitive processor, which looks to be what the next Xbox will use, may eventually create a new DX12 level/tier, is my prediction. If Microsoft continues releasing Xbox games ready for PC at or near the same time, this may become somewhat important, mainly because these new Vega features may get used sooner rather than later, maybe within 1-2 years.


That would be a killer in VR, but then again, nV's solution needs its own extension and libraries, so AMD should be OK with that for the time being. But UE4 already has nV's approach built into the engine, which has really been hurting AMD on the VR front.

Tier 2 should be enough for the time being anyway, which is what all GCN is, right? Or were they tier 3? Don't remember; either way, I don't see the tier levels being an issue in the next 2 years. Conservative rasterization might be an issue, because both Intel and nV support it, but with the added primitive shader stage I don't see why AMD wouldn't add that feature in...
 
That would be a killer in VR, but then again, nV's solution needs its own extension and libraries, so AMD should be OK with that for the time being. But UE4 already has nV's approach built into the engine, which has really been hurting AMD on the VR front.

Tier 2 should be enough for the time being anyway, which is what all GCN is, right? Or were they tier 3? Don't remember; either way, I don't see the tier levels being an issue in the next 2 years. Conservative rasterization might be an issue, because both Intel and nV support it, but with the added primitive shader stage I don't see why AMD wouldn't add that feature in...
I just find it very strange if AMD actually does leave out any new DX12 level or feature set; but like you say, it probably wouldn't matter that much in the end.
 
Either way, Polaris is running at around 1200 MHz or less with shitty cooling. If Vega averages anywhere above 1450 MHz, that is a shitload of improvement over Fury in my book. I am sure that by allowing more power through the settings and with better cooling you will get a card sustaining around 1500 MHz; that is a solid improvement over the Fury X.
 
So in theory Vega is 40% faster than the Fury X in TFLOPS and 50% faster clock-wise, along with memory bandwidth in excess of 1 TB/s.
Maybe that choo-choo hype train really did just get a few miles faster :D
 
So in theory Vega is 40% faster than the Fury X in TFLOPS and 50% faster clock-wise, along with memory bandwidth in excess of 1 TB/s.
Maybe that choo-choo hype train really did just get a few miles faster :D
It is not just the % faster over the previous generation; it is also the efficiency gain. Nvidia, with fewer TFLOPS, still had much greater efficiency, resulting in better-performing cards in general. A Vega TFLOPS increase alone without an efficiency increase would be a letdown. So a 40% increase in TFLOPS, with a 30% efficiency increase applied to that extra throughput, would give you 1.3 x 0.4 = 52% more performance over Fiji ;) Hype train, hype train, hype train (new song).
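Putting rough numbers on that back-of-envelope estimate (the 40% and 30% figures are the post's assumptions, not measured data); depending on whether the efficiency gain is credited only to the extra throughput or to the whole chip, you get a band rather than a single number:

```python
# Two readings of the same back-of-envelope estimate over Fiji.
tflops_gain = 0.40       # assumed raw TFLOPS increase
efficiency_gain = 0.30   # assumed per-TFLOP efficiency increase

# Efficiency credited only to the added throughput (matches the ~52% above).
low = (1 + efficiency_gain) * tflops_gain
# Efficiency credited to the whole chip (a more aggressive reading).
high = (1 + tflops_gain) * (1 + efficiency_gain) - 1

print(f"Roughly {low:.0%} to {high:.0%} over Fiji")  # ~52% to ~82%
```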
 
If it were all hype, what would be the point of developing this thing? It would be easier to just shrink Fiji to 14 nm, sell it as a refresh, and call it a day.
 
Anyone noticed the constant scramble in every Vega thread to try to sway casual observers, to downplay any possible parity between AMD and Nvidia with Vega, constantly bringing up how bad it is to choose HBM2 ('GDDR73 has the best charts'), even though in some cases a little crappy 4GB HBM Fury from last generation can stick with a current-generation 1070 and 1080, even though it's that super HBM AMD shit? No mention of that. Just crickets, and attack AMD about something obscure to divert attention further. Of course, now that this is said, we'll attack AMD for using 30 more watts than Nvidia, no doubt!

'Furiously waving hands by the dumpster fire' "OMG THE AMD CARDZ AR CAUSUN GLOBAL WARMIN FIREZ GUYZ!! TEH PCI SLOTZ!! HBM2 IS EXPENSIVE YET SUPERIOR HENCE GP100 BUT WE CAN"T ADMIT IT REeeeeeeeeeEeeeeeeeeeeeeee AMD DRIVERZ SUCKS. G$INKZ SUPERIOR THATZ WHY IT SELLS SO WELL GUIIYS!!!!"

It's like you are against competition. The only reason I can see for that is if you're paid to do so.
 
If it were all hype, what would be the point of developing this thing? It would be easier to just shrink Fiji to 14 nm, sell it as a refresh, and call it a day.
Yes, if AMD wanted to concede the entire HPC market, and even professional workstation cards, to Nvidia and Intel. Fiji fails there even with HBM2 and 32 GB. AMD also has a rather huge rasterization bottleneck that needs to be resolved for gaming cards, given the complexity of newer games. Which in a way is good, since AMD is ahead in a lot of other capabilities; solve that and you have a more viable competing card. Now, AMD will probably have to concede the power/performance crown, but they don't have to concede the performance/$ crown or overall performance. Nvidia has proven that if you have the performance, even if your power consumption sucks, you can sell well. Of course, if you have great performance and low power, you can almost have your cake and eat it too.
 
Anyone noticed the constant scramble in every Vega thread to try to sway casual observers, to downplay any possible parity between AMD and Nvidia with Vega, constantly bringing up how bad it is to choose HBM2 ('GDDR73 has the best charts'), even though in some cases a little crappy 4GB HBM Fury from last generation can stick with a current-generation 1070 and 1080, even though it's that super HBM AMD shit? No mention of that. Just crickets, and attack AMD about something obscure to divert attention further. Of course, now that this is said, we'll attack AMD for using 30 more watts than Nvidia, no doubt!

'Furiously waving hands by the dumpster fire' "OMG THE AMD CARDZ AR CAUSUN GLOBAL WARMIN FIREZ GUYZ!! TEH PCI SLOTZ!! HBM2 IS EXPENSIVE YET SUPERIOR HENCE GP100 BUT WE CAN"T ADMIT IT REeeeeeeeeeEeeeeeeeeeeeeee AMD DRIVERZ SUCKS. G$INKZ SUPERIOR THATZ WHY IT SELLS SO WELL GUIIYS!!!!"

It's like you are against competition. The only reason I can see for that is if you're paid to do so.
I fully expect Vega to beat Pascal at each price point. The 1080 Ti's performance may be exceeded as well, depending upon how much Nvidia restricts the 1080 Ti. Beating the Fury X by 50% is also, to me, similar to what AMD has done in the past from one generation to the next. Until we see Vega really in action across multiple games, we just won't know.
 
We should be used to it by now. It's always the same people doing the detracting. The shit they come up with is the Comedy Central gold of forums.
 
Anyone noticed the constant scramble in every Vega thread
is actually every AMD thread, and it's fucking annoying. Either they are paid or they really, really have nothing better to fucking do. Pissing contests about internet-gathered "knowledge" and rumors. These guys aren't actually building shit and don't know shit. "Oh, I guessed right!" Good for you, now fuck off! They just think they know everything, and some have done everything! It's really pissing me off lately...
 
is actually every AMD thread, and it's fucking annoying. Either they are paid or they really, really have nothing better to fucking do. Pissing contests about internet-gathered "knowledge" and rumors. These guys aren't actually building shit and don't know shit. "Oh, I guessed right!" Good for you, now fuck off! They just think they know everything, and some have done everything! It's really pissing me off lately...

Reminds me of when Intel was fucking over AMD's supply chain while AMD was kicking Intel's ass (and they ended up in court over it); it just seems the battle has shifted to information control over the last 6 years. Which makes it all the more hilarious to discuss this in a Vega thread directly.

All we've known for the last 6 months is that it's around the 1080/1070 ballpark, yet we see a thread where people pick on AMD about 5-year-old driver issues, ignoring recent Nvidia driver issues, while discussing an AMD card in a Vulkan game. Vulkan of all things, where last-generation AMD tech performs brilliantly and still holds up against some of the latest Nvidia cards now, many months later. Notice how quiet that got? The "Nvidia needs time to optimise for Vulkan" spiel? Crickets again. But only AMD drivers suck, right?
 
Reminds me of when Intel was fucking over AMD's supply chain while AMD was kicking Intel's ass (and they ended up in court over it); it just seems the battle has shifted to information control over the last 6 years. Which makes it all the more hilarious to discuss this in a Vega thread directly.

All we've known for the last 6 months is that it's around the 1080/1070 ballpark, yet we see a thread where people pick on AMD about 5-year-old driver issues, ignoring recent Nvidia driver issues, while discussing an AMD card in a Vulkan game. Vulkan of all things, where last-generation AMD tech performs brilliantly and still holds up against some of the latest Nvidia cards now, many months later. Notice how quiet that got? The "Nvidia needs time to optimise for Vulkan" spiel? Crickets again. But only AMD drivers suck, right?
Right now I prefer AMD's drivers over Nvidia's:
  • Love the built-in monitoring/OCing ability using WattMan, which I can profile for each game, save data to a file for evaluation, etc.
    • Nvidia: install more software, which changes and which you need to update on your own. Profiles?
  • Nvidia GeForce Experience: hate it! Not as bad as Raptr, but about as useful in the end, which is to say not at all.
    • You need to use that POS if you want to capture video.
    • AMD: video capture is right in the driver (ReLive); capture what you want without the GFE spam.
    • AMD was also smart and listened: they got rid of Raptr.
  • Interface: hands down AMD.
  • Game ready? Come on, Nvidia last year was a disaster with many DX12 games while AMD was ready. Nvidia has improved, though.
Now, Nvidia does have some excellent hardware performing well, especially in VR. I say folks should try to be as balanced and objective as possible, be clear when they're being subjective, and know when they're being subjective.
 
So in theory Vega is 40% faster than the Fury X in TFLOPS and 50% faster clock-wise, along with memory bandwidth in excess of 1 TB/s.
Maybe that choo-choo hype train really did just get a few miles faster :D

Theoretical TFLOPS and effective TFLOPS aren't the same thing, not to mention boost clock issues. Also, it's 410-512 GB/s, not 1 TB/s, until Vega 20 (~2 years out).
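For reference, the 410-512 GB/s range falls out of simple HBM2 arithmetic, assuming two stacks with a 1024-bit interface each (the widely reported Vega 10 configuration) and pin rates between 1.6 and 2.0 Gbps; treat those rates as assumptions rather than confirmed shipping specs:

```python
def hbm2_bandwidth_gbps(stacks: int, bus_bits_per_stack: int, pin_rate_gbps: float) -> float:
    """Aggregate bandwidth in GB/s = stacks * bus width (bits) * pin rate / 8."""
    return stacks * bus_bits_per_stack * pin_rate_gbps / 8.0

for rate in (1.6, 2.0):
    print(f"{rate} Gbps/pin -> {hbm2_bandwidth_gbps(2, 1024, rate):.0f} GB/s")
# 1.6 Gbps/pin -> 410 GB/s, 2.0 Gbps/pin -> 512 GB/s
```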
 