AMD Brings More Console Features To PC Gaming With New "Shader Intrinsic Functions" For GPUOpen

Dude you're just strange. I have NOTHING else to say. Bye!
Well, at least I did not get annoyed (as you seem to be) when you ignored the specific AMD-NVIDIA comparison image with slider for Pure Hair that I linked and mentioned. Instead, in what seemed deliberate, you posted the site's video, which is not the same type of comparison and only shows Off/On/Very High on one card (and going by the comparison slider it had to be AMD).
Even when I pointed this out in a later response, you still ignored the point without acknowledging it.
Maybe best to continue this discussion later.
Cheers
 

I have never seen a review showing it (Warframe); it would be great to read benchmark performance analysis from the usual publications.
So no, it is not spouting, because I have never seen a set of actual published benchmarks for that game.
Got any links?
Cheers
He quoted a thread that quotes a developer of the game. Here's the direct link
Are the people who made the thing not a good enough source?

Back to the point of this thread: given the open nature of GPUOpen, Nvidia could simply offer equivalent assembly code for their cards, and developers could add some code like
if (card == AMD)
{
    // use AMD assembly in that one place
}
else if (card == NVIDIA)
{
    // use NVIDIA assembly in that one place
}
else
{
    // generic code path
}
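A minimal sketch of how that branch could actually be driven, assuming a D3D11/DXGI setup; the vendor IDs are the standard PCI ones, and ShaderVariant / pick_shader_variant are hypothetical names just for illustration:

#include <dxgi.h>

enum class ShaderVariant { AmdIntrinsics, NvidiaSpecific, Generic };

// Query the adapter's PCI vendor ID and pick a shader variant accordingly.
// 0x1002 = AMD, 0x10DE = NVIDIA; anything else falls back to the generic path.
ShaderVariant pick_shader_variant(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    adapter->GetDesc(&desc);
    switch (desc.VendorId)
    {
    case 0x1002: return ShaderVariant::AmdIntrinsics;   // AMD-specific shader path
    case 0x10DE: return ShaderVariant::NvidiaSpecific;  // NVIDIA-specific shader path
    default:     return ShaderVariant::Generic;         // plain HLSL for everyone else
    }
}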
 
He quoted a thread that quotes a developer of the game. Here's the direct link
Are the people who made the thing not a good enough source?

Back to the point of this thread: given the open nature of GPUOpen, Nvidia could simply offer equivalent assembly code for their cards, and developers could add some code like
if (card == AMD)
{
    // use AMD assembly in that one place
}
else if (card == NVIDIA)
{
    // use NVIDIA assembly in that one place
}
else
{
    // generic code path
}


Well, this will only work if nV's hardware has that particular function and the selected API exposes it (cap bits for DX, vendor-specific extensions in OpenGL).
 
Well, this will only work if nV's hardware has that particular function and the selected API exposes it (cap bits for DX, vendor-specific extensions in OpenGL).
Very true, but they do have direct access to the source code. The way I read this, it is much like the way C compilers used to (and maybe still do) work: someone comes up with a one- or two-instruction way of doing something, say checking a condition and jumping, and the C compiler replaces a bigger chunk of code like

if (x)
then call y
else if (z)
then call q
etc...

which, instead of compiling down to

test x
jump y
test z
jump q

comes out as something tighter, along the lines of

test x
jump y
(code path for q here)

or something along those lines.

Back to AMD and Nvidia: so long as Nvidia's cards can do arbitrary rendering (I'm 99.9% sure they can), they should be capable of giving developers machine code equivalent to what AMD has. And again, there can also simply be a generic render path that just does plain old GL or Vulkan, which Nvidia (and Intel, and ARM, and so on) support, as in the rough sketch below.
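A minimal sketch of that kind of fallback on the OpenGL side, assuming the vendor feature shows up as an extension string (GL_AMD_shader_trinary_minmax is just one example of such an extension; the shader file names and selection logic are hypothetical):

#include <GL/glew.h>   // or whatever loader exposes glGetStringi
#include <cstring>

// Returns true if the driver advertises the named extension (core-profile style query).
bool has_gl_extension(const char* name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
    {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// Hypothetical use: only pick the vendor-specific shader when the extension is present.
const char* pick_hair_shader()
{
    if (has_gl_extension("GL_AMD_shader_trinary_minmax"))
        return "hair_amd.glsl";      // uses the vendor intrinsics
    return "hair_generic.glsl";      // plain GLSL path that every IHV supports
}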
 
He quoted a thread that quotes a developer of the game. Here's the direct link
Are the people who made the thing not a good enough source?

Back to the point of this thread: given the open nature of GPUOpen, Nvidia could simply offer equivalent assembly code for their cards, and developers could add some code like
if (card == AMD)
{
    // use AMD assembly in that one place
}
else if (card == NVIDIA)
{
    // use NVIDIA assembly in that one place
}
else
{
    // generic code path
}

So now review publications are not needed; let's just rely on what developers and AMD/NVIDIA say.
Is it fair to say now that no decent publications appear to have actually benchmarked Warframe, including frame analysis and latency behaviour?
Coming back to whether the source (developers, etc.) is good enough: you know Nvidia categorically state that GameWorks does not impede AMD hardware when the Nvidia-specific options/AO are turned off?
Is that good enough as well?
What if a developer categorically states they did nothing to impede AMD's architecture when they implemented Nvidia technology or were engaged with, or possibly sponsored by, Nvidia?
Witcher 3, for example, and it fits well with your example of Project Cars, which was accused many times, including by AMD employees, of collusion with Nvidia.

If it is that simple, why has Quantum Break still not been fixed with regard to the post-processing volumetric lighting issue that only affects Nvidia?
Are you also saying the problem with GPUOpen and Pure Hair on RoTR is actually Nvidia's fault for not offering Crystal Dynamics equivalent assembly code for their cards?
But all of you are still dancing around the fact that they are pushing GPUOpen with low-level integration focused on AMD hardware (quoted earlier, and it means it will be even more difficult to change), and also ignoring the post by barcaman196835 #19, which ties into how much of this is tightly focused on their own architecture, or the complexity that makes it a pointless exercise, as said by Ocellaris #15.

Thanks
 
Since we lack a good third-party analysis, yes, the developer's word is the best we have. When it comes to claims from developers there's no hard and fast "believe this" and "don't believe that". You need to look at where their money comes from (AMD and Nvidia are clearly biased here) and what sort of track record they have. Beyond that you need to look at what they offer up as substance for their argument. In this case, I am inclined to believe the developer of Warframe when he says he was able to increase the performance of his game.

To your point about Tomb Raider: I haven't played it or followed much of the news. In another thread around here someone claimed that PureHair was a new name for TressFX, which might explain why AMD has an advantage. That said, this is speculation on my part.

To your point on low-level-ness: anything low level is by definition going to be tied to a particular architecture. Take machine code, for example.
gcc PowerPC Assembly <- PowerPC
x86 assembly language - Wikipedia, the free encyclopedia <- X86
ARM Information Center <- ARM
Each CPU architecture has its own machine code, and you use machine code if you want cycle-accurate prediction of code execution. It's not a bad thing, it's just a thing.
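As a rough illustration (exact instructions depend on compiler and ABI), the same trivial function comes out as different machine code on each of those architectures:

// One trivial function...
int add(int a, int b) { return a + b; }

// ...compiled for different targets, roughly:
//   x86-64:   lea eax, [rdi+rsi]
//   ARM:      add r0, r0, r1
//   PowerPC:  add r3, r3, r4
// Same source, three different encodings; optimizing at this level ties you to one of them.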
Yes, AMD is publishing low-level code for their architecture; what else would they publish? The good thing AMD is doing is not obscuring their implementation behind a black box. They are offering their code up to the world (read: Nvidia), who can simply offer up equivalent machine code and be done. The alternative is the GameWorks situation, where (most likely) machine code is also used in conjunction with a generic code path, but AMD doesn't know where or how that machine code is used.
 
So now review publications are not needed; let's just rely on what developers and AMD/NVIDIA say.
Is it fair to say now that no decent publications appear to have actually benchmarked Warframe, including frame analysis and latency behaviour?
Coming back to whether the source (developers, etc.) is good enough: you know Nvidia categorically state that GameWorks does not impede AMD hardware when the Nvidia-specific options/AO are turned off?
Is that good enough as well?
What if a developer categorically states they did nothing to impede AMD's architecture when they implemented Nvidia technology or were engaged with, or possibly sponsored by, Nvidia?
Witcher 3, for example, and it fits well with your example of Project Cars, which was accused many times, including by AMD employees, of collusion with Nvidia.

If it is that simple, why has Quantum Break still not been fixed with regard to the post-processing volumetric lighting issue that only affects Nvidia?
Are you also saying the problem with GPUOpen and Pure Hair on RoTR is actually Nvidia's fault for not offering Crystal Dynamics equivalent assembly code for their cards?
But all of you are still dancing around the fact that they are pushing GPUOpen with low-level integration focused on AMD hardware (quoted earlier, and it means it will be even more difficult to change), and also ignoring the post by barcaman196835 #19, which ties into how much of this is tightly focused on their own architecture, or the complexity that makes it a pointless exercise, as said by Ocellaris #15.

You would have to ask Nvidia why they are not putting time and effort into fixing their issues when they have the source code that would allow them to do so. Supposedly they have the best driver team in the whole universe, and yet you blame AMD for issues on the Nvidia driver side. I gave you the answer earlier: it seems that Nvidia's development for drivers or GPU game code revolves around crippling performance on AMD hardware.
 
Warframe is a community-funded game where we bought packages of up to $150 or $200 to play the game early. Can't remember the price exactly; it's been so long ago. Anyway, the developers talk to us in General chat all the time and announce when they are about to deploy a patch. We are required to stay in our missions until the patch deploys. Three minutes or so later the patch is deployed and we can leave the mission. There is no "going back and checking frame rate" from an earlier time.

All I can tell you is that in the ship, looking out at the galaxy map, I got a helluva boost to fps after the patch. At the time I was hitting over 200 fps in the ship looking out. Mission frame rate is of course going to be less. People in General chat were excited and everyone was talking about how much better the textures looked after they were optimized. Of course Warframe patches have a TON of new content added to them. The developers said that the game was so much smoother that they were looking into adding in more effects.

I haven't played the game in a long time. It's been months since I have touched it. Like I said you can fire it up and see how smooth it runs now. It's F2P and NOT sponsored by AMD. The developers use whatever tools to make the game better for their player base. They don't care where a better solution comes from as long as the end user experience is better than before. They hold livestreams on Twitch for their community. There we are allowed to ask them questions about the current builds, and the upcoming content. We vote on the upcoming content. Then they build what we decide we want.

It's not like other companies where you are spoon fed the company line. You can even hop into games with the developers sometimes. They invite the community to meet and greets / question and answer sessions across the globe. Heck they even let the community sell cosmetic items in the game if they get enough votes on the Steam Workshop.
 
So now review publications are not needed; let's just rely on what developers and AMD/NVIDIA say.
Is it fair to say now that no decent publications appear to have actually benchmarked Warframe, including frame analysis and latency behaviour?
Coming back to whether the source (developers, etc.) is good enough: you know Nvidia categorically state that GameWorks does not impede AMD hardware when the Nvidia-specific options/AO are turned off?
Is that good enough as well?
What if a developer categorically states they did nothing to impede AMD's architecture when they implemented Nvidia technology or were engaged with, or possibly sponsored by, Nvidia?
Witcher 3, for example, and it fits well with your example of Project Cars, which was accused many times, including by AMD employees, of collusion with Nvidia.

If it is that simple, why has Quantum Break still not been fixed with regard to the post-processing volumetric lighting issue that only affects Nvidia?
Are you also saying the problem with GPUOpen and Pure Hair on RoTR is actually Nvidia's fault for not offering Crystal Dynamics equivalent assembly code for their cards?
But all of you are still dancing around the fact that they are pushing GPUOpen with low-level integration focused on AMD hardware (quoted earlier, and it means it will be even more difficult to change), and also ignoring the post by barcaman196835 #19, which ties into how much of this is tightly focused on their own architecture, or the complexity that makes it a pointless exercise, as said by Ocellaris #15.

Thanks

Sorry, but the fact that source code is supplied and freely workable makes ANY "AMD GameWorks" argument invalid. As stated: if Nvidia wanted to optimise the code, it's right there for them to optimise. With PhysX, GameWorks and other Nvidia-sponsored tech, this is not the case.

This is not to say that certain things do not run better on certain hardware. AMD lacks a lot of power in tessellation; this is not an "Nvidia optimization", rather it is a genuine disadvantage of AMD hardware. Adding tessellation to your game is not 'sabotaging AMD hardware'. The same goes for PureHair (TressFX): it may run slower on Nvidia hardware, but this is because Nvidia's compute strength is not as high as AMD's.

And before you go on to say that PureHair does not work/does nothing on Nvidia hardware:

GeForce.com Rise of the Tomb Raider PureHair Interactive Comparison: Very High vs. Off - Example #002

It is demonstrated by Nvidia themselves to provide a massive visual benefit. Are you saying they took these screens on an AMD card?
 
And before you go on to say that PureHair does not work/does nothing on Nvidia hardware:

GeForce.com Rise of the Tomb Raider PureHair Interactive Comparison: Very High vs. Off - Example #002

It is demonstrated by Nvidia themselves to provide a massive visual benefit. Are you saying they took these screens on an AMD card?

Sigh.
And if any of you had actually bothered to look at the PCGAMESHARDWARE comparison, you would see that the Off/On/Very High comparison on Nvidia is misleading...
The Very High setting is nothing compared to the same setting on AMD...
And importantly, your example is a cinematic cutscene...

So PLEASE use the following link rather than the weak evidence that it works for NVIDIA just because it changes the hair slightly: Rise of the Tomb Raider PC: Update mit neuer Benchmarkszene, frischen Grafiktreibern und CPU-Skalierung

And here is a very close, equal comparison from the same Nvidia images: GeForce.com Rise of the Tomb Raider PureHair Interactive Comparison: Very High vs. Off - Example #004
rise-of-the-tomb-raider-purehair-004-very-high-alt.png



Notice the hair is very similar to the pcgameshardware shot, and then compare it to AMD with the slider, which blows it away. Importantly, this example is not a cutscene.
It is doing next to nothing on NVIDIA hardware, especially when you also consider the snow behaviour and detail (cutscenes may be different, but they should not be used as an example) and the actual hair detail.
This pcgameshardware static screenshot shows it for NVIDIA, matching the insert above, but please find the slider comparing it to AMD in the link provided earlier in this post; it is clear the technology is not really working on NVIDIA.
Rise_of_the_Tomb_Raider_-_Pure_Hair_-_Nvidia-pcgh.png


Thanks
 
Everyone, these are vendor-specific instructions! This is exactly what nV does with some of their GameWorks effects and samples, optimizing code paths for their hardware, lol. I like the two-sided view people take on this though ;)

No it isn't. Most GameWorks effects use standard functions, but the whole middleware is totally wrong for the users. The basic concept of GameWorks is to control the performance, so nVidia can decide how fast an effect is allowed to run. They don't really care about harming AMD or the devs. The point is to harm the users, so they will buy better hardware if the performance is too slow. Hairworks is a perfect example of this. I didn't see a single user who actually questioned the Hairworks implementation. There are no questions for nVidia about why they don't use master/slave strand optimization, why there is no configurable vertices-per-strand option, why they use the geometry shader for extruding segments into front-facing polygons, why they use an 8xMSAA render target and not a special analytical AA solution... and the most important thing: why don't they use an order-independent transparency solution to give some quality to the hair? The users just didn't care. But the fact is that if nVidia allowed the devs to control the vertices per strand, allowed the vertex shader to be used for extruding segments into front-facing polygons, and also allowed analytical AA for thin geometry, then the Hairworks effect would run 3-4 times faster on every piece of hardware, with the same level of quality!



Yep, AMD feeds a line of marketing crap about GameWorks, but when AMD does something similar along the lines of optimizing their code paths so only their cards can use said features, people eat it up! WTF?

AMD see GameWorks in a totally different light. Take Hairworks, for example: they simply don't understand why nVidia designs long pipelined jobs with poor quad occupancy, which causes a huge amount of unnecessary work in the GPU and doesn't even have an impact on the final quality, because 80-90 percent of that calculation is dropped in a later phase. They probably constantly ask themselves why the hell Nvidia is harming its own user base. And this is why AMD don't get anywhere: they don't want to fool their own customers, but the money is in the users' pockets, and the only way to get it is to artificially restrict the performance of a graphics effect. As you can see above, the users and the gamers just don't care. They don't even realize they are being fooled.

With these shader extensions AMD are again improving performance across the whole GCN lineup, which is good for their customers but harmful for their business.
 
No it isn't. Most GameWorks effects use standard functions, but the whole middleware is totally wrong for the users. The basic concept of GameWorks is to control the performance, so nVidia can decide how fast an effect is allowed to run. They don't really care about harming AMD or the devs. The point is to harm the users, so they will buy better hardware if the performance is too slow. Hairworks is a perfect example of this. I didn't see a single user who actually questioned the Hairworks implementation. There are no questions for nVidia about why they don't use master/slave strand optimization, why there is no configurable vertices-per-strand option, why they use the geometry shader for extruding segments into front-facing polygons, why they use an 8xMSAA render target and not a special analytical AA solution... and the most important thing: why don't they use an order-independent transparency solution to give some quality to the hair? The users just didn't care. But the fact is that if nVidia allowed the devs to control the vertices per strand, allowed the vertex shader to be used for extruding segments into front-facing polygons, and also allowed analytical AA for thin geometry, then the Hairworks effect would run 3-4 times faster on every piece of hardware, with the same level of quality!

Hairworks has been open sourced, not to mention the developers could have bought the Hairworks source close to a year ago, maybe more. For the devs not to modify the code for better performance is a problem on their end.

What nV has done and will always do is keep closed-source libs and open them up when they are ready to do so, as developers tend to create their own effects in the long run without the help of the IHVs.

This is what happened with many of their samples in the past too. Yeah, I've been around for a while, over 20 years working on games. Do you remember the screwed-up parallax bump mapping in CryEngine 1? Where do you think that came from?


AMD see GameWorks in a totally different light. Take Hairworks, for example: they simply don't understand why nVidia designs long pipelined jobs with poor quad occupancy, which causes a huge amount of unnecessary work in the GPU and doesn't even have an impact on the final quality, because 80-90 percent of that calculation is dropped in a later phase. They probably constantly ask themselves why the hell Nvidia is harming its own user base. And this is why AMD don't get anywhere: they don't want to fool their own customers, but the money is in the users' pockets, and the only way to get it is to artificially restrict the performance of a graphics effect. As you can see above, the users and the gamers just don't care. They don't even realize they are being fooled.

GameWorks doesn't sell games and has minimal impact on upgrading hardware. Games that push hardware tend to do that even without GameWorks. Name the top-selling games out there and tell me how many of them pushed hardware sales. Most top-selling games run well on the midrange of current generations; that's why they sell so well. Yeah, you have a few percent of people who want to play a game in all its glory, but that is a tiny minority of the game-playing population.

Anyone who thinks GameWorks is that big a marketing ploy to sell graphics cards really doesn't understand, or hasn't looked at, graphics card sales history. The game co-marketing programs have only a limited effect on graphics card sales; sales are heavily dependent on how the graphics cards from the IHVs perform against each other. It has always been like this and always will be.

Every single time you see one of the IHVs with a deficit in any metric used to sell cards to OEMs or end consumers, you see their market share go down. Doesn't matter if it's nV or AMD.
 
Hairworks has been open sourced, not to mention the developers could have bought the Hairworks source close to a year ago, maybe more. For the devs not to modify the code for better performance is a problem on their end.

With that license it is hard to modify. You can't change the performance-critical parts of the code. With this kind of restriction, open source doesn't give you a real advantage. But sure, we can now see why this effect is so slow.

What nV has done and will always do is keep closed-source libs and open them up when they are ready to do so, as developers tend to create their own effects in the long run without the help of the IHVs.

nVidia was much more open earlier. Something happened in the GT200 era. I don't know what, but they kept moving away from the devs, while AMD kept moving closer.

This is what happened with many of their samples in the past too. Yeah, I've been around for a while, over 20 years working on games. Do you remember the screwed-up parallax bump mapping in CryEngine 1? Where do you think that came from?

I only worked for Crytek for seven months, and that was after Crysis 3. So I don't know what happened earlier.

GameWorks doesn't sell games and has minimal impact on upgrading hardware. Games that push hardware tend to do that even without GameWorks. Name the top-selling games out there and tell me how many of them pushed hardware sales. Most top-selling games run well on the midrange of current generations; that's why they sell so well.

If GameWorks has minimal impact on hardware upgrades, why don't they allow us the right to optimize the effects?
 
If GameWorks has minimal impact on hardware upgrades, why don't they allow us the right to optimize the effects?

I do think GameWorks is more bloated and definitely has efficiency issues, possibly coming back to the 'tessellation is great, so give us more' mentality that maybe exists within their software team.
Your last point is very pertinent, because it will be interesting to see just how Blood and Wine for The Witcher 3 works, as it is no longer the base game but a true overhaul based on everything they have previously learnt.
So it will be interesting to see what, if any, software suites/technologies are implemented and their impact on performance versus visual quality, more so with regard to GameWorks, whether they stayed with it and what they have done (let's see if it is more efficient).

Blood and Wine is definitely a step up in quality over Witcher 3.

Cheers
 
I do think GameWorks is more bloated and definitely has efficiency issues, possibly coming back to the 'tessellation is great, so give us more' mentality that maybe exists within their software team.
Your last point is very pertinent, because it will be interesting to see just how Blood and Wine for The Witcher 3 works, as it is no longer the base game but a true overhaul based on everything they have previously learnt.
So it will be interesting to see what, if any, software suites/technologies are implemented and their impact on performance versus visual quality, more so with regard to GameWorks, whether they stayed with it and what they have done (let's see if it is more efficient).

Blood and Wine is definitely a step up in quality over Witcher 3.

Cheers

Tessellation on its own is not an issue. Sure, it is overtessellated, but even with an x16 driver limit the effect is still slower than a TressFX 3 implementation, which is really sad when you consider that TressFX calculates OIT. Without OIT, TressFX is still 10 times faster than Hairworks with x16 tessellation forced. And with these configurations the effects provide the same quality. The Hairworks problem is more of a deep design issue, which is unfixable without changing the basic algorithms.
 
With that license it is hard to modify. You can't change the performance-critical parts of the code. With this kind of restriction, open source doesn't give you a real advantage. But sure, we can now see why this effect is so slow.

It's not that hard to modify; as long as performance goes up on nV's hardware, they don't care. And the other IHV's path really is not even their concern. But yeah, it does limit devs from sharing source with parties that don't have the license, though there are ways around that too, to get information from that party without needing to share the code.


nVidia was much more open earlier. Something happened in the GT200 era. I don't know what, but they kept moving away from the devs, while AMD kept moving closer.

Ask Roy Taylor ;), can't get more into that.

I only worked for Crytek for seven months, and that was after Crysis 3. So I don't know what happened earlier.
The parallax bump map code was straight out of GPU Gems, an nV example, but Crytek didn't "fix" it, and when you looked at it from certain angles the pixels weren't coherent.



If GameWorks has minimal impact on hardware upgrades, why don't they allow us the right to optimize the effects?

It helps when the hardware they are selling is good, but it's not the main driving force. We are talking about low single-digit percentages here. A person who is going to buy a 60-buck game is not ready to spend another 300 bucks or more on a graphics card; what kind of added value will shift sales like that? It gives value to the developer so they can get certain effects in on time, efficiently. But it's not the main driving force for selling games. As I stated, the top games don't need GameWorks, GPUOpen or any of that good stuff to sell. Look at Blizzard's games, did they need them? Hell, just one of their games outsells the entire Crysis and Far Cry line.
 
Tessellation on its own is not an issue. Sure, it is overtessellated, but even with an x16 driver limit the effect is still slower than a TressFX 3 implementation, which is really sad when you consider that TressFX calculates OIT. Without OIT, TressFX is still 10 times faster than Hairworks with x16 tessellation forced. And with these configurations the effects provide the same quality. The Hairworks problem is more of a deep design issue, which is unfixable without changing the basic algorithms.
I have seen the arguments go back and forth regarding which is better between HairWorks and TressFX 3, and tbh there are pros and cons for either when looking at how they are implemented in games, with some people liking one over the other.
Just to add, x16 on The Witcher 3 looked very similar to x64, still with thousands of tessellated hair strands.
Cheers
 
The "AMD Mesh Optimizer" is presumably an offline tool, so of course it's not vendor specific.

AMD exposing parts of GCN via API extensions isn't really comparable to "GameWorks". Intrinsics are a well-understood way to access low-level features from a higher-level language. When you write any sort of SSE code you're likely to use compiler intrinsics (as opposed to straight assembly) to get access to CPU instructions that can't readily be expressed in the language in question. Some of them will work on Intel and AMD, some will be exclusive to a certain CPU family (like NEON or whatever). No one's ever freaked out over this.
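For a concrete CPU-side example of what that means (standard SSE intrinsics from <xmmintrin.h>; just an illustration, not anything from AMD's post):

#include <xmmintrin.h>  // SSE compiler intrinsics

// Component-wise minimum of two sets of four floats. The intrinsic maps straight
// to the MINPS instruction, without writing assembly by hand.
void min4(const float* a, const float* b, float* out)
{
    __m128 va = _mm_loadu_ps(a);   // load 4 unaligned floats
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_min_ps(va, vb));
}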

Though it's a bit surprising that they'd expose min/max which SPIR-V must be able to express. Maybe it was min3/max3, I don't recall. (EDIT: yes it was: 'we’re providing the 3-parameter min, max and med functions which map directly to the corresponding GCN opcodes')
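To see why a 3-parameter version is worth exposing: without a med3-style intrinsic, the median of three values has to be spelled out as several min/max operations (the scalar C++ below uses a standard identity; a single GCN med3 opcode collapses the whole thing into one instruction):

#include <algorithm>

// Median of three values via plain min/max; one med3 instruction replaces all of this.
float med3(float a, float b, float c)
{
    return std::max(std::min(a, b), std::min(std::max(a, b), c));
}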

(You could already abuse the OpenGL shader compilation/loading paths to load arbitrary binary GCN code; the follow-up showcases it. Also, if you go through the second deck, which uses GCN's barycentric support, you'll notice that THIS IS EXACTLY WHAT DEVS HAVE BEEN ASKING FOR.)
 
The "AMD Mesh Optimizer" is presumably an offline tool, so of course it's not vendor specific....
I thought it was a runtime vertex/triangle-order optimisation and overdraw reduction.
Yeah, it is less vendor-specific than the rest of GPUOpen.
Could it potentially conflict with the GameWorks suite if that is also implemented?
Cheers
 