AMD responds to HairWorks, GameWorks, and its engagement with developers.

+1

(and some filler to bring my post to the required 3 character length. :) )

After all these years, Nvidia still can't SLI a GTX 980 and a GTX 970 together...

AMD is the leader in pushing forward.
 
MS always starts off internally, and then they ask IHVs to join in; the problem is that it takes around 3 to 5 years of development for an API to be commercially viable. AMD, on the other hand, never did that much development at all with Mantle; they talked a lot about it, but they never showed what it was capable of in the real world until after DX12 did.
Yes, I said AMD helped out, but there were many other people involved with Vulkan well before they were, not to mention there has been a lot of work done after that which still needed to be done. Mantle isn't a full API, and it was never designed to be; it was made to coexist and run alongside other APIs.

The only similarity between DX12 and Mantle, and I can tell you this for a fact because I have used both, is the use of PSOs, pixel shader objects. It's possible MS saw this in Mantle and decided they could use it too, but it just so happens it's the thing AMD calls "close to metal", which in reality is marketing crap. And it so happens this is what gives the improved performance over DX11.

Your posts are ridiculous, you can't even bother to spell the API names correctly and yet you've "used both." You're trying very hard to make me think you're just making up shit as you go and / or just barfing back whatever stuff you've read elsewhere except more distorted than ever at this point. The PSO acronym in DX12 stands for pipeline state object, this is what it's referred to _everywhere_. But I guess you know this and it was just another typo? Also lol at that being the only similarity...


Here's a DX12 particle system I was playing with a few weeks back, it should run fine on an updated W10 install. I kinda "cheat" and software render the particles, but it should be pretty trivial to extend to get any 2D sprite up on screen (I just draw to a full screen sprite). It's pretty much a wall of text since I just wanted something on screen. I don't know if everything is totally correct, but it does work on actual hardware, though the included stuff creates a WARP device.

Some of it (file loading, window creation, matrix math) depends on my own utility library which I didn't include.
video: https://www.youtube.com/watch?v=9ew9swkkACA
source (and exe): https://drive.google.com/open?id=0B359Nkrkg3-5cFk5TDhZaURobGs&authuser=0
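(For what it's worth, "WARP device" just means Microsoft's software rasterizer; DX12 lets you ask for it explicitly. Here's a minimal sketch of that setup - not the code from the archive above, just an illustration assuming a Windows 10 SDK:)

#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a D3D12 device on the WARP (software) adapter.
ComPtr<ID3D12Device> CreateWarpDevice()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter> warpAdapter;
    factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter));

    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));
    return device;
}

// Passing nullptr instead of the WARP adapter picks the default hardware adapter.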
 

Sorry, that was a mistake. Yes, pipeline state object; don't know why I was thinking pixel shader... thank you for the correction.

I was thinking of VBOs.


But in any case, PSOs are about the only thing that is similar between the two APIs.

Also, it's hard for me to do corrections while rendering out 16K textures; baking AO maps maxes out all 20 of my cores, and a single texture takes 8 hours to render out.
 

Seriously, what exactly is so different?

The memory model is similar, you manage memory heaps on both.
Resource lifetime and hazard resolution are explicit on both.
Resource binding model is quite similar on both.
Device queues (graphics, compute, copy) are exposed on both.
The idea of an immediate context is gone, it's all command lists for both.
State is mostly behind a monolithic state object now for both.

... and so on

There's a lot that maps very obviously, what is so dissimilar?
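To make the "monolithic state object + command lists" points above concrete, here's a rough DX12 sketch (assuming you already have a device, a root signature, compiled VS/PS blobs, and a command queue; names are placeholders and error handling is omitted):

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Bake (nearly) all pipeline state into one PSO up front...
ComPtr<ID3D12PipelineState> MakePso(ID3D12Device* device, ID3D12RootSignature* rootSig,
                                    ID3DBlob* vs, ID3DBlob* ps)
{
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature = rootSig;
    desc.VS = { vs->GetBufferPointer(), vs->GetBufferSize() };
    desc.PS = { ps->GetBufferPointer(), ps->GetBufferSize() };
    desc.RasterizerState.FillMode = D3D12_FILL_MODE_SOLID;
    desc.RasterizerState.CullMode = D3D12_CULL_MODE_BACK;
    desc.BlendState.RenderTarget[0].RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
    desc.SampleMask = 0xFFFFFFFF;
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.NumRenderTargets = 1;
    desc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;

    ComPtr<ID3D12PipelineState> pso;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso;
}

// ...then record work into command lists instead of an immediate context.
void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue,
                     ID3D12RootSignature* rootSig, ID3D12PipelineState* pso)
{
    ComPtr<ID3D12CommandAllocator> alloc;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&alloc));

    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, alloc.Get(), pso,
                              IID_PPV_ARGS(&cmdList));

    cmdList->SetGraphicsRootSignature(rootSig);
    cmdList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    cmdList->DrawInstanced(3, 1, 0, 0);   // render target / viewport setup omitted for brevity
    cmdList->Close();

    ID3D12CommandList* lists[] = { cmdList.Get() };
    queue->ExecuteCommandLists(1, lists);
}

If memory serves, Mantle's grCreateGraphicsPipeline / grCmdBindPipeline / command buffer flow is structurally very similar, which is the point.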
 
Similar doesn't mean the same.

DX12 has things like indirect drawing, ROVs, and swap chains; none of these are available in Mantle.

At a high level both APIs do look very similar, but feature-wise DX12 has quite a bit more.
 
One could say it's truly an evolved form of Mantle that we wouldn't have seen had AMD not pushed such innovation to game devs.
But I know some people do not like that reality.
 
Similar doesn't mean the same.

DX12 has things like indirect drawing, ROVs, and swap chains; none of these are available in Mantle.

At a high level both APIs do look very similar, but feature-wise DX12 has quite a bit more.

Mantle supports indirect rendering - it's obvious just from looking at the function list. This has been easy to see even long before the documentation. From day 1 of the Mantle driver being public you could just look at what the DLL exports.
grCmdDrawIndexed
grCmdDrawIndirect
grCmdDrawIndexedIndirect

ROVs could simply be an extension (like Intel exposed in GL) - that's what the extension system is there for.

What do you think a swap chain is? Of course there's a swap chain, you're not flipping your back buffers by sheer force of will. DX just has an explicit object for it.
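For reference, indirect drawing on the DX12 side goes through a command signature plus ExecuteIndirect. A rough sketch, assuming an existing device, a recording command list, and a GPU buffer already filled with D3D12_DRAW_ARGUMENTS records (names are placeholders):

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// A command signature describing one non-indexed draw per argument record.
ComPtr<ID3D12CommandSignature> MakeDrawSignature(ID3D12Device* device)
{
    D3D12_INDIRECT_ARGUMENT_DESC arg = {};
    arg.Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW;

    D3D12_COMMAND_SIGNATURE_DESC desc = {};
    desc.ByteStride = sizeof(D3D12_DRAW_ARGUMENTS);
    desc.NumArgumentDescs = 1;
    desc.pArgumentDescs = &arg;

    ComPtr<ID3D12CommandSignature> sig;
    device->CreateCommandSignature(&desc, nullptr, IID_PPV_ARGS(&sig));
    return sig;
}

// Issue up to maxDraws draws whose arguments live in argBuffer on the GPU.
void DrawIndirect(ID3D12GraphicsCommandList* cmdList, ID3D12CommandSignature* sig,
                  ID3D12Resource* argBuffer, UINT maxDraws)
{
    cmdList->ExecuteIndirect(sig, maxDraws, argBuffer, 0, nullptr, 0);
}

Conceptually that's the same job grCmdDrawIndirect does, just with an extra signature object in the middle.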
 

I remember Microsoft proclaiming DirectX 10 would be their last version of it. :rolleyes:

AMD made Mantle an OPEN platform, so what if Microsoft looked at Mantle as they made DX12 and added their own improvements? That is the point AMD had in making it open. AMD does not want to maintain the API; they want to make the CPUs, GPUs, and APUs that use it. That is why they are doing things the way they are: they are manipulating the state of gaming, and everything is moving based on the bricks AMD has laid.
 

That is what I saw as well. AMD wasn't going to turn the market on hardware alone, so they needed to create the need for software that would better utilize their hardware. Mantle (DX12) and HSA will bring life to AMD products where, before, Intel's stranglehold on the software side was limiting them.
 
And BTW, Scali is a very good programmer; if you want to talk to him directly, buzz him at B3D.
MS always starts off internally, and then they ask IHVs to join in; the problem is that it takes around 3 to 5 years of development for an API to be commercially viable. AMD, on the other hand, never did that much development at all with Mantle; they talked a lot about it, but they never showed what it was capable of in the real world until after DX12 did.
Yes, I said AMD helped out, but there were many other people involved with Vulkan well before they were, not to mention there has been a lot of work done after that which still needed to be done. Mantle isn't a full API, and it was never designed to be; it was made to coexist and run alongside other APIs.

The only similarity between DX12 and Mantle, and I can tell you this for a fact because I have used both, is the use of PSOs, pixel shader objects. It's possible MS saw this in Mantle and decided they could use it too, but it just so happens it's the thing AMD calls "close to metal", which in reality is marketing crap. And it so happens this is what gives the improved performance over DX11.

You disappointed me when you write more blah blah blah and get things mixed up, so let's make a start.
Scali has nothing to do with the buttload of crap you're posting on this forum; his status doesn't reflect on anything you wrote down.

The last overhaul of DX11 was a mere introduction of a few new features (DirectCompute) and just an optimization of DX10 features. And the blog you posted, which linked a source, clearly stated that MS was not doing anything near what Mantle is. This means that even if DX12 was started earlier, it was not the DX12 you know of today.


About AMD helping out: AMD gave Mantle (all of it) to the Khronos Group (since you mentioned the Nvidia guy, who you told me is there as well). The Khronos Group YouTube video clearly states that AMD came to Khronos two weeks before the committee meeting. Khronos is very slow at adopting things. So "helped out" is the wrong expression; even the person describing this said they "hit the ground running", which is not the same thing as helping out.

If anything, it just shows how stuck Khronos was at the time; the person describing this said it in a way that suggested he was more than happy when AMD turned up with Mantle. You might not have picked this up, but it is clear that these things are not mere "helping out".
 

I see you have reading comprehension issues; the Scali comment was something totally different, it was there to give a timeline of when things were being done.

The Khronos Group wasn't stuck; they were in the initial stages of setting up specs. Mantle was a good stepping stone for that, but that's all it is, a stepping stone. It's still incomplete; Mantle can't do everything that the other APIs can or will do. I'm a bit drunk right now, but tomorrow I'll write a very large list of features that Mantle doesn't have that DX12 has, and guess what, features that Vulkan is implementing too, which were already done by nV. I guess you don't have Memorial Day off, go get a drink.

Actually, it would be better if you did your own looking around too. I will wait for your list, and please, no general marketing swaths. I want to see API calls.

Take your time, because it seems like you actually have to go read up a little. Swap chains are very important in many applications, and since Mantle doesn't have them you either have to use another API or, on Windows, let DXGI handle it.

It would be better in PM as well, since we are pretty off topic.

The comment about the nV employee was stating that nV is helping too, since the other person was saying nV hasn't been helping with OGL that much. Again, reading comprehension issues, I guess.
 
Drunk blah blah blah blah?

Listen, you deny that AMD has a huge part in shaping the gaming industry; you claim that their role is "trivial". When facing the insurmountable task of trying to get performance out of an executable, you claim that AMD has to have better developer relations.

When told about these things, you deny all of it again. You claim to know something about programming, yet you don't know how things work in general.
The Witcher 3 is not the fault of AMD. It clearly lies with the developers.
They took money from Nvidia and they used Nvidia's closed DLL; that is it. Why else would they screw with their consumer base?
 
Actually, it would be better if you did your own looking around too. I will wait for your list, and please, no general marketing swaths. I want to see API calls.

Take your time, because it seems like you actually have to go read up a little. Swap chains are very important in many applications, and since Mantle doesn't have them you either have to use another API or, on Windows, let DXGI handle it.

I don't get your fixation on swap chains but ok.

Under Mantle they expose it as an extension.
grWsiWinCreatePresentableImage() to create the buffers
grWsiWinQueuePresent() to present it

grCmdPrepareImages() to transition the state between a render target / presentable.


You do not need to explicitly go creating DXGI adapters and DXGI swap chains on your own. Whatever happens behind the scenes is another story. Does this mean OGL is an incomplete API too?
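And for comparison, the "explicit object" on the DX12/DXGI side boils down to something like this. A minimal sketch, assuming an existing DXGI factory, the D3D12 command queue that will present, and a window handle; error handling omitted:

#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a double-buffered flip-model swap chain for an existing window.
// For D3D12, the "device" argument DXGI expects is the command queue that will present.
ComPtr<IDXGISwapChain3> MakeSwapChain(IDXGIFactory4* factory,
                                      ID3D12CommandQueue* queue, HWND hwnd)
{
    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.BufferCount = 2;
    desc.Width = 0;                         // 0 = use the window's client size
    desc.Height = 0;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
    desc.SampleDesc.Count = 1;

    ComPtr<IDXGISwapChain1> swapChain1;
    factory->CreateSwapChainForHwnd(queue, hwnd, &desc, nullptr, nullptr, &swapChain1);

    ComPtr<IDXGISwapChain3> swapChain3;
    swapChain1.As(&swapChain3);             // newer interface for GetCurrentBackBufferIndex()
    return swapChain3;
}

// Per frame: render into swapChain->GetBuffer(backBufferIndex), transition it to
// PRESENT, then call swapChain->Present(1, 0) - the rough analogue of grWsiWinQueuePresent.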
 
I think everyone needs to be clear that first and foremost to blame is CD Projekt Red.

They chose to use a closed tool for hair rendering when an open source one already existed (would have saved time and money using something tried and tested).
They did it very late in development, despite apparently working with AMD from the start, and informed AMD essentially at the last minute. It looks obviously suspicious.


Nvidia invented the bomb, but ultimately CD Projekt dropped it!

Then again, what can you expect from a bunch of ex-software pirates!? Oh snap I went there... :p
 
GameWorks does what Nvidia has always claimed it does: highlight features on Nvidia hardware. It's a simple concept: as a perk of buying an Nvidia GPU, game developers who choose to use an Nvidia feature, library, or Nvidia's assistance get a benefit, in exchange for features that benefit recent Nvidia cards. Whether it's optimization or IQ enhancements, it can turn out to be useful in various ways. AMD does the same damn thing. There has absolutely never been any reasonable expectation that one manufacturer will optimize the hell out of an enhancement for competitors' hardware. There's no need to sugar-coat the GPU vendor help game developers receive: it's done for selfish reasons.

It's just so bizarre to see this level of cognitive dissonance. Just because AMD developer relations sucks so badly now isn't a reason to take the ball and send everyone home. AMD has critical dependence on GPU sales now that its CPU sales are in the toilet and has sold off nearly everything of value it once possessed. It sucks that AMD can no longer afford to behave like a top tier GPU maker, but that's its own fault for numerous poor decisions and legendarily bad management.

I ignored all the straw man arguments in your post because a :rolleyes: is all they deserve.

You know, the cool thing is that if Nvidia gets what they want and kills off AMD, you will all be paying out the butt for less capable hardware and stagnant advances. Well, you guys are already doing that; I will be using my money for more important things than to waste it on gaming that favors only one company because they are the only ones left. (I do not think AMD is going anywhere, but I have no way of knowing for sure.)

Oh well, enjoy your $700 GTX 750 non-Tis. (I know that is exaggerated, but not by a whole lot.)
 

If Nvidia "kills off" AMD, it's going to be because of AMD's poor general support. I mean, freaking seriously. They haven't released a non-Beta since last freaking year. They've only released one beta that actually really changed anything this year, crossfire support is outright abysmal for anything new right now. Almost all of their crap is massively delayed, they still haven't properly optimized Mantle for GCN 1.0, and, just bluntly, they freaking suck at supporting their own products.

I don't want Nvidia to own the market. But if they do, it's because AMD handed it to them on a silver platter.


(Full disclosure: I have an AMD videocard, which I got as my videocard after being Nvidia only forever. I am very disgruntled.)
 
If Nvidia "kills off" AMD, it's going to be because of AMD's poor general support. I mean, freaking seriously. They haven't released a non-Beta since last freaking year. They've only released one beta that actually really changed anything this year, crossfire support is outright abysmal for anything new right now. Almost all of their crap is massively delayed, they still haven't properly optimized Mantle for GCN 1.0, and, just bluntly, they freaking suck at supporting their own products.

I don't want Nvidia to own the market. But if they do, it's because AMD handed it to them on a silver platter.


(Full disclosure: I have an AMD videocard, which I got as my videocard after being Nvidia only forever. I am very disgruntled.)

The beta driver is a non-issue, would you feel better if they just called it Catalyst 15.4 and dropped the beta tag at the end? WHQL does not make a driver any better these days, it just costs $!
Does anyone care about who qualifies Linux drivers?
 

Releasing beta drivers is not an issue. Releasing ONLY beta drivers is the issue.

There is a reason they are using the term beta. It's not just a name.
 

You missed the point.

Saying BETA in front of the driver or WHQL makes no difference, except for some $$$ in MS's pocket. Multiple WHQL drivers have had serious issues in the past, e.g. Nvidia killing cards with WHQL drivers.

AMD should just toss the BETA designation altogether and release just "Drivers" and "WHQL" drivers; then the people who do nothing but spread misinformation will have to find another avenue for discrediting AMD. And the end user will see zero difference in their experience, except that they might be happier that their driver doesn't say "BETA".

By the way, at my company we consider BETA to be "GA"-ready code, as in we don't go into beta with something we are not comfortable releasing to the public as a final version, and most of the time the changes between beta and GA are non-existent (and we sell enterprise software).
 

AMD has used the term "release candidate" in the past for drivers that were closer to being ready. So sorry you had to write all that for nothing.
 

I didn't write it for nothing; since all you do is try to spread misinformation, I had to make sure it was spelled out clearly enough for you!
 
Given the shoddy state of 99% of released software nowadays (especially console ports),

I find it rather ironic that video card driver release dates still get heavily scrutinized.

The Witcher 3 has had almost a gigabyte of patches in the last week or so alone.


Optimizations are more than welcome, of course; if ATI doesn't get preferential treatment and the game is a black box, it's gonna take longer.

I've only come across one weird issue with the 15.4.1 drivers in GTA 5 - where for a split second the previous frame (discoloured) gets overlapped/stuck on the current one.

Most probably an artifact of the colour compression feature on the R9 285 - don't think it's overheating.
 
You're right. This is worse. This is business. Then again, don't the two often shake hands? ;)

There's a game being played by both sides, and as a consumer you lose no matter which side you choose at this point.

AMD believes they can win you over by preaching a better world for PC gamers if technology is open. Their market share and influence are minuscule. They are not in a position to make this a reality, so this idealistic notion does nothing for them. One thing is for certain: whining about your competitor(s) is not viewed favorably by most people, regardless of the truth behind your words. The caveat AMD employees place on their social networking profiles of "my views are my own" is not sufficient to shield AMD from negative bias.

Nvidia is indeed run more pragmatically, and not "to the betterment of all". Their interest is in money. They have the lion's share of the market and have always operated in their own best interests. However, if GameWorks was designed to sabotage AMD, this does not explain the co-development that made GTA V run well on both sides' hardware. It also doesn't explain how they managed to ruin Kepler in TW3. What was the goal? Push us to Maxwell cards? This was discovered more or less immediately, and people already had suspicions they had abandoned optimization for previous-gen cards.

This whole situation makes me disinterested with the industry as a whole, if anything. The increasing amount of games played by both sides leads me to be frustrated and wholly uninterested in participating at all.

That's how I feel. It really leaves a bad taste in my mouth, and in the end I just won't buy anything from either side.
 

The only issue with that is Matrox folded up years ago and Intel has no clue about GPUs...
 
Since I own AMD hardware, no need to buy this game at all...

Well, yes, CD Projekt Red should learn that this is what happens when they optimize for one brand specifically, but the fact of the matter is that if you looked at a pie chart, more people own Nvidia cards than AMD. It is mostly AMD's fault, as they have not released a "new" card in forever.

But the thing is, you can turn off the GameWorks features; the game still looks good and you can have your good frame rate.

If memory serves, TressFX did the same thing to Nvidia with Tomb Raider, but it did not cripple the game.
 
Since I own AMD hardware, no need to buy this game at all...

Except this game runs quite well on AMD hardware, including HairWorks if you tone down the tessellation in CCC. It's us Kepler users who got royally fisted, not AMD people.
 
Given the shoddy state of 99% of released software nowadays (especially console ports),

I find it rather ironic that video card driver release dates still get heavily scrutinized.

The Witcher 3 has had almost a gigabyte of patches in the last week or so alone.....

I think it's fair to say that it's now just flat out expected for day one patches to be released for any major game.

The variety and combinations of hardware are simply too much to test for.
 
I only approve of GameWorks for one reason: it's helping to improve visual effects in PC games without actually increasing game development costs much.

We all know the price of PC games hasn't gone up more than 10 bucks in 20 years, and yet development costs have skyrocketed. You can make some of that up with reduced distribution costs due to Steam, and increased cash flow due to well-timed Steam "sales." Then you can "hide" some of those costs by making more of the game's content as DLC, but there are limits to how much you can castrate a game and still make it fun.

So one other way to funnel money to game developers is to offer tech like this and TressFX for free. It gives end-users reasons to buy more top-end GPUs, and gives game developers easy to use high-end features. High-end GPU buyers end up being the ones subsidizing advanced game development with premium-priced GPUs (and I doubt they have an issue with that) :D
 
By voting with his wallet, he is not being ignorant; however, your response sure is.
How does not buying a good game because he owns an AMD video card mean voting with his wallet? The Witcher 3 runs just fine on AMD hardware, even HairWorks (it only needs minor tweaking), and a CrossFire profile is coming next month. Not buying a game (that runs perfectly fine on an AMD GPU) because you own AMD hardware just means that you're a stupid brand loyalist trying to show your faith in a corporation that does not give a damn about you.
 
I don't begrudge how others spend their money, for whatever reasoning they want to use.
 