Some GOOD News

People aren't here out of love for AMD. Who said anyone has to love AMD to discuss technology involving AMD? This is a tech site, a forum on one of the biggest hardware sites on the net; people are here to discuss technology, not love for a brand.

You are here out of love for the *AMD* brand; most of us are here to discuss technology. I own several pieces of AMD hardware, so should I love AMD? No. They were bought with a task in mind, but I'm not going to go to every forum spreading how much I love those products, lol.

This is also the GPU forum, not the CPU forum, so how about staying on that? There's no requirement to love AMD, but at least staying on topic would be nice. Derailing threads seems to be a hobby for some around here.
 
More and more games are supporting AMD Vega GPUs with greater performance, which I like very much. Optimizing for that architecture, especially without a Gameworks-type layer to lean on, is harder to do but definitely pays off in spades.
 
Yeah, that's right, it is the consumer's fault for expecting a $600+ video card to work on a $500 Samsung 28-inch 4K TN panel monitor out of the box. :rolleyes: That is entirely on Nvidia, and I will not be burned by them again, just saying. (Personal use, not FUD in any way, shape, form or manner, just the facts and nothing else.) And no, a washed-out desktop experience is not working out of the box, deal with it.

Damn, man, the Nvidia defenders are strong here. (You may or may not be one, but...)

It is the consumer's fault. If said consumer took 2-3 days and researched the product properly, instead of going in all dreamy-eyed, they could usually see whether there is a problem or fault pattern.

This applies to early adopters of any chipset, and to people buying an AMD card with a G-SYNC display (reverse the card and monitor for the same effect). Because in all honesty, it truly is hard to justify an absolute need for a product on day 0.

As for the game side of things... never mind, it's been explained before: green vs. red, DX11 vs. DX12, product superiority. The games are already playable, so fucking play them. Your enjoyment should not be tarnished when the competition pulls 3 fps over you.
 
It is the consumer's fault. If said consumer took 2-3 days and researched the product properly, instead of going in all dreamy-eyed, they could usually see whether there is a problem or fault pattern.

This applies to early adopters of any chipset, and to people buying an AMD card with a G-SYNC display (reverse the card and monitor for the same effect). Because in all honesty, it truly is hard to justify an absolute need for a product on day 0.

As for the game side of things... never mind, it's been explained before: green vs. red, DX11 vs. DX12, product superiority. The games are already playable, so fucking play them. Your enjoyment should not be tarnished when the competition pulls 3 fps over you.
Only really commenting on your first sentence as the last seems to teeter a bit.

So, not necessarily. I would say the majority of owners who have commented on the "washed out" look were not aware this was an issue to look for. Let's be honest, sometimes in some threads only the positives are touted and the negatives never spoken of. Seriously, getting honest reviews from forum members is quite rare. I know of the issue simply because I read a lot of forums and reviews... a lot. I have also seen the community be assholes to said person when they claim they had this issue. Again, most of these users weren't changing settings in their respective control panels, so the image quality they were seeing was in fact a BASE-level setup. I can't be sure any of the remedies given would actually fix the issue users have spoken of, as I have yet to see a follow-up from said users ever mentioning they fixed it; they were probably run off.

But what I find most humorous is the fact that when AMD had the stutter/frame-pacing issue, a great many of us were not affected because we were using RadeonPro. The argument then given to us was that it didn't count because it should be standard within AMD's drivers. Yet here users are being told they have to go in and adjust settings on what should be a plug-and-play feature, if it indeed makes such a difference to image quality and is usable as a stock setting. I get not defaulting to 10-bit, as not all monitors are capable, but if this setting does indeed improve the image on the desktop, then why isn't it on?

Doesn't really matter to me, as I haven't used Nvidia in years and don't have any plans to do so. But if someone were entertaining the thought, then I would be absolutely sure to relay the information about said setting here, as being helpful is what a forum member should be, not combative and condescending (not saying you are; just a general statement).
 
More and more games are supporting AMD Vega GPU's with greater performance, which I like very much. Optimizing for that architecture, especially when they do not produce a gameworks type of layer, is harder to do but definitely pays off in spades.

Yeah, it's a lower level than Gameworks, and it really can't be fixed without extensive work by the developer. Yay! I'd rather have developers spending time making games fun instead of having them optimize on a per-graphics-generation basis, wouldn't you? That is what LLAPIs do: make more work for devs. It's really helping AMD's market share too; just look at what Vega is doing for them. Polaris, meh, might as well not have even shown up to compete. In the end it just plays to the company that has more market share. Oh well, great strategy! Have a leg up for less than a quarter, then get dropped like a bad date when the hot chick shows up.

At least this way AMD doesn't need to worry about their driver team not being on the ball. They should just downsize that department now; everyone knows they need the money.

What's worse: coming to market and not being able to compete, or not coming to market at all? I would pick the former; at least with the latter people won't see where the company is completely screwed.
 
The combative and condescending attitude being displayed in this thread is not because the person needs help. No, said person has already divested himself of the card and is now claiming that nVidia doesn't display colors correctly. (As an aside, as a video engineer who uses both AMD/RTG and nVidia cards in multiple systems, I don't see 'washed out colors' on videos on nVidia cards and vibrant colors on AMD/RTG cards. Then again, part and parcel of the title is being able to successfully calibrate my tools to do the job I need to do.) Since the time has passed for the forum to help said member, and said member has a different experience than what most people on this forum experience, there's no real point in trying to help now, is there? He's not going to go out and get his 980 Ti back and, based on his signature and multiple forum posts, makes an abundant point of never buying an nVidia card ever again.
 
Yeah, it's a lower level than Gameworks, and it really can't be fixed without extensive work by the developer. Yay! I'd rather have developers spending time making games fun instead of having them optimize on a per-graphics-generation basis, wouldn't you? That is what LLAPIs do: make more work for devs. It's really helping AMD's market share too; just look at what Vega is doing for them. Polaris, meh, might as well not have even shown up to compete. In the end it just plays to the company that has more market share. Oh well, great strategy! Have a leg up for less than a quarter, then get dropped like a bad date when the hot chick shows up.

At least this way AMD doesn't need to worry about their driver team not being on the ball. They should just downsize that department now; everyone knows they need the money.

What's worse: coming to market and not being able to compete, or not coming to market at all? I would pick the former; at least with the latter people won't see where the company is completely screwed.

They can do both at the same time and are not limited to one architecture or the other, unlike when the Gameworks black box is used. Not my problem: when developers devote time to optimizing for the AMD side of the coin, it benefits all gamers overall, as opposed to using that Gameworks black box. You see, optimizing for the AMD architecture does not limit anything on team green. Oh, and AMD's driver team has been on the ball for a while now when compared to the lazy Nvidia driver developers, at least compared to what they used to be.
 
They can do both at the same time and are not limited to one architecture or the other, unlike when the Gameworks black box is used. Not my problem: when developers devote time to optimizing for the AMD side of the coin, it benefits all gamers overall, as opposed to using that Gameworks black box. You see, optimizing for the AMD architecture does not limit anything on team green. Oh, and AMD's driver team has been on the ball for a while now when compared to the lazy Nvidia driver developers, at least compared to what they used to be.


You can't do it "at the same time"; you do one and then you do the other. Time has to be allotted, simple as that. Each shader is written for the architecture now; it was done before too, but now even more so, and the same goes for the engine code itself.

There are many layers of code. For shader code alone I see people using three layers for different paths, different generations, etc. Those also have to match up with the layers used in the engine.

You should look this up; there are many "layers" of code when creating complex software like games. Do a Google search for "3D engine flow chart." Or if Google isn't your thing, go ask the question in the beginners' section on GameDev.net; they will happily walk you through it and answer your questions. Here is the web address: https://www.gamedev.net/

That way you don't need to use Google.
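
To make the "layers and paths" idea concrete, here is a stripped-down sketch of what per-architecture shader selection can look like. All of the names and enum values below are made up purely for illustration; they don't come from any real engine.

[CODE]
// Hypothetical sketch of per-architecture shader path selection.
// None of these names come from a real engine; they only illustrate
// one shared layer plus separate per-generation paths on top of it.
#include <cstdio>
#include <string>

enum class GpuArch { GenericDx11, Gcn3, Gcn5Vega, Pascal };

// Shared layer: constants and helpers every path uses, written once.
std::string CommonShaderPrelude() {
    return "#define MAX_LIGHTS 64\n";
}

// Per-architecture layer: each generation gets its own tuned variant.
std::string PickLightingShader(GpuArch arch) {
    switch (arch) {
        case GpuArch::Gcn5Vega: return CommonShaderPrelude() + "// wave64-tuned lighting path\n";
        case GpuArch::Pascal:   return CommonShaderPrelude() + "// warp32-tuned lighting path\n";
        default:                return CommonShaderPrelude() + "// safe fallback lighting path\n";
    }
}

int main() {
    std::printf("%s", PickLightingShader(GpuArch::Gcn5Vega).c_str());
}
[/CODE]

Every extra case in that switch is another variant somebody has to write, test and maintain, which is the whole point about the added work.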


Gameworks has been open-sourced for over a year now; where have you been? The only black-boxed Gameworks libraries are beta versions, which aren't even released to developers unless the developer really wants to test out beta libraries.

And MOST Gameworks libraries work just fine on AMD hardware. The only ones that didn't are the ones that lean on tessellation, and guess where AMD's weaknesses are? Those are also open source now too. So...

AMD's driver team has been on the ball, after the craptastic job they did with something like 7 out of 10 DX12 games that came out before Forza? Really, great track record...

What was that? DX12 doesn't require driver updates to get performance out of games? What was that, AMD marketing?

DX12 lets developers have full control over their code so there will be fewer problems? What was that, AMD marketing?

Want to show me DX12 awesomeness with Quantum Break and Hitman now? Or AotS? Oops, those games don't show AMD in a good light anymore; gotta use other games. Yeah, Forza lasted a whole two days, let's move on to another, shall we...

It's always another day, another game with AMD products... the next best thing... wait for Navi... wait for whatever comes after that too...
 
Thankfully, AMD is still in the game and keeps things moving forward. It did take them longer than it ought to have to release Vega, but you cannot control the supply of the stuff you need to build it.
 
Yes, you can do them at the same time; multitasking is a really good thing. You can make a really good game and also optimize for the hardware it is being played on. You do not have to be limited to just one thing or the other.


No, you can't. You don't multitask when programming code; that is how you mess things up. You start and finish the task at hand in full, because each algorithm is ONE MATH PROBLEM. Try this: do two quadratic equations at the same time. If you can do that I will give you props, because I can do one in my head with no issue at all, but I need to keep track of every single step so I don't mess up the variables. Doing two? Nada, not going to work.

So you finish ONE path first and then do the next path, or you have two different programmers working on the two paths, which doesn't work well either, because both paths also share commonalities, so those commonalities must be done first before the two paths are done separately.
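
A tiny sketch of that ordering, with all names invented purely for illustration: the shared piece has to be finished (and frozen) before either vendor path gets written on top of it.

[CODE]
// Tiny hypothetical sketch of the ordering described above: the shared
// code is finished first, then each vendor path is completed in full,
// one after the other. All names are made up for illustration only.
#include <cstdio>

// Commonality: has to be done (and frozen) before either path is written,
// because both paths build on it.
float ComputeLodBias(float distance) { return distance * 0.01f; }

// Path one is finished in full first...
void DrawPathGcn(float distance)    { std::printf("GCN path, bias %.2f\n",    ComputeLodBias(distance)); }

// ...then path two (or a second programmer takes it). Any later change to
// the shared function above forces rework in both paths.
void DrawPathPascal(float distance) { std::printf("Pascal path, bias %.2f\n", ComputeLodBias(distance)); }

int main() {
    DrawPathGcn(120.0f);
    DrawPathPascal(120.0f);
}
[/CODE]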
 
Game devs have time to optimize for hardware and make games fun; the two are not mutually exclusive. The game in this thread is an example.
 
So basically, you are saying that game devs messed things up big time when they had to invest in doing things the Gameworks way, eh?

What does Gameworks have to do with AMD's path? Gameworks can be turned off; it has always been that way since its inception.

You can't turn off API optimizations done for AMD hardware, nor can you do that with nV-optimized paths. The only ones who could change those things would have been the IHVs, via shader replacement, and at this point all LLAPIs would require rework of the engine components too.

Look, you would never say the things you just did if you had any experience with code, any kind of code. So why talk about something you have very little knowledge of and make wild assumptions that are screwed up? This is worse than making things up, like claiming you looked into your washed-out colors on your 980 Ti. This is the kind of thing where you need to know WTF you're talking about.
 
Gameworks causes issues on AMD hardware; DX12 and Vulkan eliminate that "advantage".
 
No, Gameworks has not always been able to be turned off; some games were actually built completely with it in mind. On the one hand, you say optimizing for AMD takes away from making a good game; on the other hand, you say optimizing for Nvidia is no big deal and has nothing to do with anything. So, which is it? Is optimizing for a platform a bad thing or a good thing? It cannot be both.


Gameworks adds nothing to the gameplay, man. It might make something look better, but that doesn't do anything for the gameplay. And the libraries are already there, done, easy to integrate.

Do you know how hard it was to use TressFX 1.0? You should look it up; Google is your friend, devs have talked about it. It was a pain in the ass to get realistic lighting with it out of the box. Just like your GTX 980 Ti didn't give you the color results you wanted out of the box, the devs didn't get the results they wanted with TressFX, so they went with HairWorks...

By cutting down cost with libraries of effects that are easy to integrate, devs can focus on what is most important: spending time making a FUN game. Now, what AMD did with the push to LLAPIs was the exact opposite of what most devs wanted. Gameworks libs are still very popular, much more so than any AMD game solutions, but now every single generation of cards has to be catered to. Have you tried games older than six years on your latest and greatest cards? Many of them, the not-so-popular-anymore DX9 games, don't run so well. Why? Because the abstraction layers haven't been updated as quickly. I can say with near certainty you will start seeing many complaints about DX10 and DX9 games in the near future as MS starts phasing out all APIs prior to DX11, and even DX11 itself. Try Far Cry, for example; it's got issues. How about F.E.A.R. 1, 2 and 3: 1 crashes all the time, 2 is not as bad but still, 3 is manageable.

So if the devs are thinking like you, you should be OK with that, right? But they are thinking like you not over 650 bucks; they are thinking like you over hundreds of thousands of dollars.
 
It ran well enough on Nvidia hardware; it just was not the best anymore, and that has been a problem with some of the newest games out lately. If that trend continues, then Nvidia needs to figure out why they are having issues before it hurts their reputation. But it's funny how many here got bent out of shape over AMD winning a game benchmark instead of Nvidia, and suddenly Nvidia gets a pass that you won't give AMD. It is always entertaining as well when the fan card gets passed around by the same people who are obvious fans of the other side.

I have owned video cards from both sides, and from long-since-defunct companies, and never did I feel the need to tell the other guy he was a fool for buying the card I did not; as long as they are happy, that is all that matters. It's like political forums and hardware forums have merged, and if you don't agree with someone they must be a ravenous fan of the other side that must be purged.

I don't think Vega is anything spectacular, but it has some good qualities and I understand why someone would want those; I also see why people bought Nvidia this round. We have less than a year before new stuff comes from both companies; hopefully both will have competitive cards, which is good for us. We have already seen how a one-sided monopoly on a market has been bad for us.
 
But it's funny how many here got bent out of shape over AMD winning a game benchmark instead of Nvidia, and suddenly Nvidia gets a pass that you won't give AMD.

See, no one got bent out of shape; this is your hyperbole. What did happen was a clickbait headline (Vega destroys the 1080 Ti!), when the issue was a game developer modifying an engine that they'd licensed to the point that it no longer worked as well on some hardware versus others.

Further, the main issue was that this game was not the rule (Vega is nowhere near as fast as a 1080 Ti on average) but rather an exception, one that any rational person would expect to be rectified through patching and driver updates.

Which it was.

This isn't rooting for or against AMD (well, unless you're that special someone); this is seeing an issue and then seeing it addressed.
 
See, no one got bent out of shape; this is your hyperbole. What did happen was a clickbait headline (Vega destroys the 1080 Ti!), when the issue was a game developer modifying an engine that they'd licensed to the point that it no longer worked as well on some hardware versus others.

Further, the main issue was that this game was not the rule (Vega is nowhere near as fast as a 1080 Ti on average) but rather an exception, one that any rational person would expect to be rectified through patching and driver updates.

Which it was.

This isn't rooting for or against AMD (well, unless you're that special someone); this is seeing an issue and then seeing it addressed.

The hyperbole was real... but short-lived.
I fear that we have not seen the last of hyperbole posts/articles; just wait until the next game with a bug appears.
Then it becomes THE THING to watch, bugger the 99.9% of all other games... confirmation bias hard at play.

If anomalies are all you have left, that should be a MAJOR clue that you are leaving the realm of facts.
 
See, no one got bent out of shape; this is your hyperbole. What did happen was a clickbait headline (Vega destroys the 1080 Ti!), when the issue was a game developer modifying an engine that they'd licensed to the point that it no longer worked as well on some hardware versus others.

Further, the main issue was that this game was not the rule (Vega is nowhere near as fast as a 1080 Ti on average) but rather an exception, one that any rational person would expect to be rectified through patching and driver updates.

Which it was.

This isn't rooting for or against AMD (well, unless you're that special someone); this is seeing an issue and then seeing it addressed.

Yes, some people did, and this thread got purged of some posts because of it. Not sure why any thread title should trigger someone so much that they have to freak out over a single game benchmark. This went on with Ashes as well, and it's just funny; of course anyone who owns a Vega will be happy to see it do well, now is it due to a hardware optimization or is it due to a bug on the other side? In this case it looks to be a little of both, as Vega still does well in it but Nvidia is ahead again. But I see a greater trend of Nvidia users posting over here, and yet I rarely see the opposite on the Nvidia side. So when you're in the AMD side of the GPU forum, you should kind of expect more people to like that hardware and get upset when you say it's garbage. I still think the biggest waste of money I ever spent on video cards was buying two 8800 GT cards, what a stuttering mess they were, and yet once again I own an Nvidia card, a single 1080 this time, after retiring my 290X cards to miners.
 
...now is it due to a hardware optimization or is it due to a bug on the other side...

This is really, really easy to solve: is the game performance in question in line with the average, or far outside it?

In the case of Ashes and Forza, initial AMD performance was an exception, diverging from the average. This meant that those games were optimized poorly for Nvidia, when AMD was running at expected speeds.

With Forza, in particular, the scaling from one Nvidia card to the next was very off, so this was obvious, unless you were trying to use it to portray Nvidia poorly, as many did. Now those same people are having to eat their words, as the game and driver have been brought back to expected performance.
 
It is a little bit pretentious of you to claim things on that level, as if any of you lot were superstar engineers working the forums as a hobby to enlighten "us".

It didn't matter a single bit when real engineers actually posted here. God knows how many times I have been dismissed for my DX12 views, despite working with it every day.

It's just a sign of the modern times: real news or facts don't matter if they don't fit my viewpoint.
 
Hey, you were one of those saying Bulldozer was amazing and the future with DX12, and that Intel would be put to shame because IPC wasn't going to be king in gaming anymore...


It is rather simple: DX12 is something game developers asked for. As for your Bulldozer analogy, Bulldozer needed to be saved; that was never going to happen, though it might have softened the blow IF more threads were supported.

DX12 can only become obsolete if there is some technical limitation clearly holding it back, and I haven't seen anything like that yet, though.
Hey, you were one of those saying Bulldozer was amazing and the future with DX12, and that Intel would be put to shame because IPC wasn't going to be king in gaming anymore...



Bulldozer is not a bad chip performance-wise; Mantle proves this. It is bad because it did not sell, so bad that AMD just made Piledriver revisions and never bothered updating the AM3+ platform.

For people with an AM3+ mainboard it is not a bad chip at all. It just does not perform very well in certain older benchmarks, and with gaming it needs a new API to make it fly.

Using DX12/Vulkan or Mantle in games which push a high batch count, I'm not too sure it will still be good value...

You don't get it: multiple cores addressing the GPU is what matters. Where you had _only_ one core talking to the GPU, single-thread performance was king...

The part where it relies on one core is now gone. AMD was always considered to have weak cores, and this is where it will shine. If you want to see some stats on this, you can look up the old AMD GPU14 day (the Oxide presentation); it has batch count numbers for certain CPUs running Star Swarm under Mantle.

And to a degree you are right, but at higher batch counts AMD's multiple cores are "better" suited to this, so with DX12 the cores pretty much scale, whereas Intel's Hyper-Threading just is not good enough where it was previously considered "good".

That the graphics card does all the heavy lifting now, everyone already knew, but the part you are forgetting is rather important...

Maybe this is a little bit hard to believe, but comparing the CPUs is not going to matter much.
Btw CaptNumbNutz, AMD did not give up, but the funds for closing the gap on Intel just were not available, nor was there any decent plan to tackle it. Having said that, Mantle does show people that when the API is not the limiting factor, AMD can do well. In higher-batch-count games an FX-8350 (or better) can still be a very worthwhile processor for running Mantle/DX12/Vulkan games.

As more engines switch to DX12/Mantle/Vulkan, performance becomes bound to the number of cores you have on your processor. And of course the features have to be in the engine for support.
This is why it matters little; soon most if not all big titles will support this.

It is somewhat upsetting that you quote this as DX12 performance, which is rather silly, because there is no single "DX12 performance": every game tends to use a different engine, and the performance you see now should not even come close to what it will be later this year. Good luck hoping that an i3 will have a place in gaming at all.

As soon as there are more games you will find this out the hard way, and people with an older FX-8350 will still be enjoying performance from their CPU for the next two years...
As soon as Frostbite 4 games appear, you be sure to post some DX12/Vulkan i3 benchmarks.

You are thinking that all of the benchmarks are correct and an i3 is faster. I'm saying that an i3 is a waste of space, because when it comes to scaling, an i3 is not going to scale with anything that pushes a serious amount of batches, nor does it have decent enough multithreading to cope with advanced DX12/Vulkan engines.

To make the point again: DX11 does not allow multithreading in the same way DX12 does. DX11 craps out at around 5 cores, whereas game engines using DX12 to the fullest will allow much greater scaling (as seen in the Mantle video). There was no incentive for game developers to do serious multithreading because of that...
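
For anyone who hasn't seen what that difference actually looks like in code, here is a schematic model of the pattern DX12/Vulkan/Mantle enable. It deliberately uses a stand-in CommandList type rather than real API calls, so treat it as an illustration of the parallel-record/serial-submit idea, not actual D3D12 code.

[CODE]
// Schematic model (not real D3D12/Vulkan calls) of why the LLAPIs scale
// across cores: every worker thread records its own command list in
// parallel, and only the final submit is serialized. Under DX11 the
// recording itself effectively funnels through a single thread.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct CommandList {                       // stand-in for a real command list object
    std::vector<std::string> cmds;
    void Draw(int batch) { cmds.push_back("draw batch " + std::to_string(batch)); }
};

int main() {
    const int workers = 4;                 // one recording thread per core
    const int batchesPerWorker = 1000;
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> threads;

    // Parallel recording: the part DX12/Vulkan/Mantle allow and DX11 does not.
    for (int w = 0; w < workers; ++w)
        threads.emplace_back([&lists, w, batchesPerWorker] {
            for (int b = 0; b < batchesPerWorker; ++b)
                lists[w].Draw(w * batchesPerWorker + b);
        });
    for (auto& t : threads) t.join();

    // Serial submission: one thread hands all the lists to the GPU queue,
    // analogous in spirit to ExecuteCommandLists / vkQueueSubmit.
    std::size_t total = 0;
    for (const auto& l : lists) total += l.cmds.size();
    std::printf("submitted %zu batches recorded on %d threads\n", total, workers);
}
[/CODE]

The more batches (draw calls) a game pushes, the more that parallel recording step matters, which is why weak-but-many cores look better under the LLAPIs than under DX11.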
This was a funny thread

AMD's take on benchmarks



A 4-year-old demo proves you wrong, and people are starting to see what is real now. Try looking at consoles: even though Jaguar cores are "faster," the same principle exists, more CPU cores driving the GPU. It drives 4K 30 fps games today with hardware so obsolete that no PC gamer uses it any more.

Even without DX12, it seems that Bulldozer did better once game engines started making better use of multithreading (surprise, surprise?).

Makes my point, doesn't it? Even though only one of the platforms in question uses DX12, the same principle applies.

The only thing you can blame me for is not defining how much more performance you would get. No one ever said that Bulldozer/Piledriver would beat Intel.
 
Where is "mantle" now?
You do know that work on DX12 started before any mention of "mantle" right?
Perhaps that fact slipped your mind?

But thanks for linking to AdoredTV, then I know what my next step is ;)
 
Where is "mantle" now?
You do know that work on DX12 started before any mention of "mantle" right?
Perhaps that fact slipped your mind?

But thanks for linking to AdoredTV, then I know what my next step is ;)

Mantle morphed into Vulkan and also helped shape DX12, no surprise there. It was rather sad when AMD stopped supporting it, but the cost was just more than they could handle. That said, it was fantastic when it was used properly.
 


A 4-year-old demo proves you wrong, and people are starting to see what is real now. Try looking at consoles: even though Jaguar cores are "faster," the same principle exists, more CPU cores driving the GPU. It drives 4K 30 fps games today with hardware so obsolete that no PC gamer uses it any more.

Even without DX12, it seems that Bulldozer did better once game engines started making better use of multithreading (surprise, surprise?).

Makes my point, doesn't it? Even though only one of the platforms in question uses DX12, the same principle applies.

The only thing you can blame me for is not defining how much more performance you would get. No one ever said that Bulldozer/Piledriver would beat Intel.



First off, Mantle is dead. Remember the big performance difference with Mantle from generation to generation? That will happen with any LLAPI. Guess what happens when game devs stop supporting older games? All of them will be done for on newer hardware.
 
Mantle morphed into Vulkan and also helped shape DX12, no surprise there. It was rather sad when AMD stopped supporting it, but the cost was just more than they could handle. That said, it was fantastic when it was used properly.

What happened when AMD stopped supporting it? Remember how poorly the same Mantle games did on the next generation of GPUs? Once game devs stop supporting 2- or 3-year-old games, you guys don't think that will happen to DX12 or Vulkan games?

People who wanted this have to swallow the pill. In the end it will make everyone more money, and we have to pay for it. That's fine, I'm cool with that part, as long as the games that come out are better than before, right?

http://aras-p.info/blog/2014/05/31/rant-about-rants-about-opengl/

You guys should read this. Interesting tidbit at the end too

I’m actually quite happy that Mantle and upcoming DX12 has caused quite a stir of discussions (to be fair PS3’s libGCM was probably the first “modern to the metal” API, but everyone who knows anything about it can’t say it). Once things shake out, we’ll be left with a better world of graphics APIs. Maybe that will be the world with more than two non-console APIs, who knows. In any case, competition is good!

http://www.redgamingtech.com/ice-te...apis-sonys-libgcm-first-modern-low-level-api/
Make of these exchanges as you will, but it’s fairly clear that Sony were one of the first to go ‘to the metal’ with their API structure, which likely demonstrates why the system improved so much in its life compared to launch. The PS3’s CPU was notoriously difficult to program and develop for too might I add.

Now, do you guys think it's good or bad that PCs have LLAPIs? It's good in one sense: the full potential of the PC hardware can be utilized. But it's bad in a business sense, where the same problems consoles have will start showing up on PCs. Backward compatibility, or the lack thereof on consoles: even from one generation of AMD hardware to another there are some issues when the GPU generation changes. Even though the PC environment is more open than consoles, those same problems will come up. With MS's abstraction layers out of the picture, older game support on newer hardware is kinda screwed, and devs really don't update older games unless they remain popular...

All the same problems consoles have will show up in the PC space now, everything MS and OpenGL were trying to avoid in the first place. Granted, the abstraction layers were getting too thick, so getting rid of them was a good thing in that respect.

Everything else for the IHVs stays the same, no changes there. The IHVs don't give a shit if it's DX11, DX12, Vulkan, whatever, because it's just a DAMN API. For them it's just marketing BS. AMD used it because they didn't have the resources to make the necessary improvements to their DX11 drivers, and we know about their abysmal performance in OpenGL. That means nothing in the context of hardware, though. nV didn't want to go toward it because they had an advantage with DX11 and OpenGL which they wanted to exploit further.

All of us sitting here posting about this and that: in the end it is all about MONEY, and everything else doesn't matter.
 
Nope. They can't do anything in the near term, not until Navi comes out.
 
Nope. They can't do anything in the near term, not until Navi comes out.
WTF? I know I didn't wait for it because I bought a Titan X, but AMD had been promising a Vega 'monster' card that would beat anything by Nvidia for something like two years.

Aren't people mad? If I had been waiting for Vega all this time I'd be freaking pissed now.

And now there's some new fairy tale called Navi? :facepalm:
 
WTF? I know I didn't wait for it because I bought a Titan X, but AMD had been promising a Vega 'monster' card that would beat anything by Nvidia for something like two years.

Aren't people mad? If I had been waiting for Vega all this time I'd be freaking pissed now.

And now there's some new fairy tale called Navi? :facepalm:

Navi probably won't be able to compete with Volta either. AMD is a solid one generation behind nV in GPU tech, and if nV releases Volta early next year, which looks likely, they will be two generations behind; there are no ifs and buts about it.
 
That's where I fear AMD is painted into a bit of a corner. They understandably struggle a bit to compete on the raw-horsepower front with Nv, and are doing some fairly interesting "outside the box" things which could give them various advantages.

However, that requires that devs actually use said features. Now, if AMD had really good DX11 drivers, I'd think it might be more possible, since you could abstract a larger amount of special functionality away from devs. But with the general need to use DX12 with AMD thus far for good performance, they're shoving that coding into the hands of game devs. Devs want to do as little custom coding as possible, and without doing it for them, it's going to be hit and miss.

I do truly hope I'm wrong, and Navi just has mighty balls without requiring a lot of incredibly unique coding to achieve better NV competitiveness. Alternately, if AMD just made a really killer midrange card (with reasonable profit margins), that would be good for them as well. It isn't necessary to have the top dog to be a real market power.
 
That's where I fear AMD is painted into a bit of a corner. They understandably struggle a bit to compete on the raw-horsepower front with Nv, and are doing some fairly interesting "outside the box" things which could give them various advantages.

However, that requires that devs actually use said features. Now, if AMD had really good DX11 drivers, I'd think it might be more possible, since you could abstract a larger amount of special functionality away from devs. But with the general need to use DX12 with AMD thus far for good performance, they're shoving that coding into the hands of game devs. Devs want to do as little custom coding as possible, and without doing it for them, it's going to be hit and miss.

I do truly hope I'm wrong, and Navi just has mighty balls without requiring a lot of incredibly unique coding to achieve better NV competitiveness. Alternately, if AMD just made a really killer midrange card (with reasonable profit margins), that would be good for them as well. It isn't necessary to have the top dog to be a real market power.


My thoughts exactly.

Getting to parity at the top end is going to cost more money than AMD can spend. In the midrange they can still focus and make money, and that will also help their future SoC, console and semi-custom design business. They need to slowly regain momentum to become competitive at the top end again.

Requiring special code is always a last resort when designing silicon; that only works if the company that requires it has the majority of the market share. Consoles vs. PC: consoles have more market share gaming-wise, but consoles have a limited life span, 5 years, and GPU architectures have an even shorter one. Now, what AMD, MS and Sony did was have a mid-life crisis, so to speak, and create two consoles in that 5-year life span. Great, they are compatible, but is it really that great? They still have a 5-year life span, and that coincides with the life span of an AMD architecture generation, but with GCN they are now pushing it to 7 years. The next generation of consoles: how are they going to keep compatibility? MS and Sony aren't companies that tie their products down to an IHV or a generation of GPUs, so once they decide to move away from GCN, what will happen to backward compatibility? It's going to be gone, and all this LLAPI talk pretty much hits the fan; it won't matter anymore.

Seven years on one GPU architecture is a lot of time; that is longer than a CPU's life span :/
 
That's where I fear AMD is painted into a bit of a corner. They understandably struggle a bit to compete on the raw-horsepower front with Nv, and are doing some fairly interesting "outside the box" things which could give them various advantages.

However, that requires that devs actually use said features. Now, if AMD had really good DX11 drivers, I'd think it might be more possible, since you could abstract a larger amount of special functionality away from devs. But with the general need to use DX12 with AMD thus far for good performance, they're shoving that coding into the hands of game devs. Devs want to do as little custom coding as possible, and without doing it for them, it's going to be hit and miss.

I do truly hope I'm wrong, and Navi just has mighty balls without requiring a lot of incredibly unique coding to achieve better NV competitiveness. Alternately, if AMD just made a really killer midrange card (with reasonable profit margins), that would be good for them as well. It isn't necessary to have the top dog to be a real market power.

Nailed it.

DX12 is a total hit-and-miss, and a lot more miss, because most developers simply aren't proficient enough with DX12 to optimize code like the experts in the industry (e.g. id, Epic). We struggle to even render correctly, and truth be told, graphics is starting to no longer be the great priority it used to be. I have said this often enough, but the real problem for the industry is that DX11 is good enough and DX12 doesn't offer developers enough to look at DX12 exclusively. To this day, you get a few marquee games in DX12 and the vast majority is still DX11.

razor1 also raised another good point: doing special code is a last resort. Look at each time AMD (ATI) was in the lead: did they resort to "special code paths"? No, they simply took what was out there and ran it faster. Period, no tricks. Athlon, Radeon 9000 series, Radeon 4000 series, Radeon 5000 series, etc.: they didn't need any special help or tricks to lead. They simply were superior.

Bottom line is, you cannot depend on the developers to do special favors for any hardware vendor. You may convince one or two to do something special for your hardware, but the majority will simply just run faster on the superior hardware.
 
What happened when AMD stopped supporting it? Remember how poorly the same Mantle games did on the next generation of GPUs? Once game devs stop supporting 2- or 3-year-old games, you guys don't think that will happen to DX12 or Vulkan games?
I don't understand this point. If the API has not changed, meaning for DX12 or Vulkan, then games should continue to work, assuming the IHVs maintain driver support. Why would developers need to update their games and, even if that were the case, how is this a different situation from DX11 or OpenGL?
 
I don't understand this point. If the API has not changed, meaning for DX12 or Vulkan, then games should continue to work, assuming the IHVs maintain driver support. Why would developers need to update their games and, even if that were the case, how is this a different situation from DX11 or OpenGL?

All APIs prior to DX12 and Vulkan have a HAL, a Hardware Abstraction Layer, which gets updated as new generations of GPUs come out so that older games keep working, essentially mapping or emulating the older DX versions. LLAPIs don't have this because, well, they are low level, lol: as close to the metal as possible while still using a high-level programming language.

So let's say DX13 comes out in three years' time (probably longer, more like five years): unless devs do the work on their own older games, it's a no-go for old games on new hardware. As fixed-function units are dropped in favor of ALUs or programmable units, and this will happen, the fixed-function portions have to be emulated via drivers, or devs have to update. No way around this. MS no longer has any responsibility for making sure backward compatibility is there at the API level.

The whole purpose of the HAL was so devs don't need to worry about this. LLAPIs are the first major change in the graphics programming model since the advent of programmable shaders. This is why we didn't see that many issues going from DX9 to 10 to 11; but when you look at DX8 games running on DX9-or-later hardware, there were quite a few issues. Some devs dealt with them through patches, most didn't, but the HAL took care of most of it anyway.

OpenGL doesn't need to worry about this, as everything is still supported.

The HAL also helps when writing code for multiple pipeline structures: write once, run on many. With its removal it is pretty much a certainty that backward compatibility of new hardware with old software will be a pain, and so will the different pipelines, of course; we already know this and have seen it.
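
To put the "write once, run on many" part into a concrete (and entirely hypothetical) shape, the sketch below shows the kind of indirection a HAL gives you. The interface and backend names are invented for illustration only; the point is that the game code targets the abstract layer, and a new GPU generation means a new backend underneath it rather than a rewritten game.

[CODE]
// Hypothetical sketch of the "write once, run on many" idea behind a HAL:
// the game only talks to the abstract interface, so a new GPU generation
// just means adding another backend underneath it. With a low-level API
// that layer is gone, and old titles depend on devs (or driver emulation)
// to keep up with new hardware. All names here are invented.
#include <cstdio>
#include <memory>

struct GpuBackend {                        // the abstraction layer
    virtual void DrawIndexed(int indexCount) = 0;
    virtual ~GpuBackend() = default;
};

struct OldGenBackend : GpuBackend {        // what the game originally shipped against
    void DrawIndexed(int indexCount) override { std::printf("old-gen path: %d indices\n", indexCount); }
};

struct NewGenBackend : GpuBackend {        // added years later; the game code never changes
    void DrawIndexed(int indexCount) override { std::printf("new-gen path: %d indices\n", indexCount); }
};

void RenderFrame(GpuBackend& gpu) {        // "write once": this code is never touched again
    gpu.DrawIndexed(36);
}

int main() {
    std::unique_ptr<GpuBackend> gpu = std::make_unique<NewGenBackend>();
    RenderFrame(*gpu);                     // old game code, new hardware path
}
[/CODE]

With a low-level API that middle struct effectively disappears, which is exactly why the burden shifts to the game devs or the driver.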

This also means MS will drop support for older APIs, DX11 and so on, as DX12 popularity increases. That will happen a bit more slowly, since DX12 uptake is still not that fast, but as you can see, most of MS's own games are coming out with DX12, and now they are moving to DX12-only titles.
 