Hawaii matching Maxwell 2?? What's happening under Ashes of the Singularity?

Their 980 Ti is faster than the Titan X in those benchmarks, so it's either a big OC or something is off with their results.
 
The only reference GPUs are the Fury X and Titan X, which have no non-reference versions. That's probably why the 980 Ti is performing higher there.
 
http://www.pcgameshardware.de/Ashes-of-the-Singularity-Spiel-55338/Specials/Benchmark-DirectX-12-DirectX-11-1167997/

They did an updated benchmark, and this one actually shows the 980 performing better than the non-X Fury and the 390X, and the Titan X and 980 Ti moving ahead of the Fury X under DX12 at 1080p. At 4K, the 390X and Fury move ahead of the 980 while the 980 Ti remains on top.

Edit: They did use a lot of non-reference cards, so it's hard to draw any conclusions from there.


I was waiting for someone to post this...

I will respond the same way I responded in a PM.

I figured out what pcgameshardware.de did. They disregarded Oxide's reviewer's guide.

1. They disabled one post-processing effect (Glare). In other words, they disabled a lot of the async shading workload.
2. They set TAA to 12x (who uses 12x TAA? That blurs the image like crazy and puts unnecessary strain on the GPU).
3. They increased terrain shading samples from 8 to 16 million (putting extra strain on the rasterizers, i.e. the small-triangle issue, with no extra image quality).
4. They only posted the results for the heavy batches (averaged, but all cards end up with low FPS).

This would undoubtedly show the Fury X faster than the 290X by a long shot, since it's better able to handle the tessellation load. And because the test is no longer compute-limited, you would see nVIDIA pulling away (though all the cards would be outputting unplayable frame rates).


The settings Ars Technica (and everyone else) used:
[image: Ars Technica's settings]


The settings pcgameshardware used:
[image: pcgameshardware's settings]


The results from PCGamesHardware actually bolster the arguments I've made in this thread. When post-processing effects such as Glare and Lighting are utilized, you tap into the compute performance of the GPUs, and there AMD's GCN has an advantage when asynchronous shading is used (DirectX 12).

As for over-tessellation (this plays into my last post, which quoted Joel Hruska over at ExtremeTech; Razor1 also mentioned it):
[images: over-tessellation]



TAA is considered to be obsolete because of its tendency to blur the image. You only need a small factor (6), and you should mix it with MSAA to reduce the blurring effect; anything higher and the image blurs up. TAA has one great property: it can reduce aliasing when objects are moving around, or when the camera is moving around in a scene. This is why Oxide uses it. Increasing the setting to a factor of 12, though, is beyond crazy.
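To illustrate why a large temporal factor smears the image, here is a rough C++ sketch of the exponential history blend that TAA-style resolves typically use. This is my own illustration, not Oxide's resolve shader; "factor" stands in for the benchmark's TAA setting.

```cpp
// Illustrative only: exponential temporal accumulation as used by typical TAA resolves.
// The higher "factor" is, the smaller the weight given to the current frame, so more of
// the stale history survives each frame and motion smears into a blur.
struct Color { float r, g, b; };

Color TaaResolve(const Color& history, const Color& current, float factor /* e.g. 6 or 12 */)
{
    const float alpha = 1.0f / factor;   // weight of the new sample
    return {
        history.r + (current.r - history.r) * alpha,
        history.g + (current.g - history.g) * alpha,
        history.b + (current.b - history.b) * alpha,
    };
}
```

At a factor of 6, roughly 1/6 of each pixel is refreshed per frame; at 12 only about 1/12 is, which is why the image blurs further for no visible anti-aliasing gain.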

This is the effect of using a TAA factor higher than 6 (notice the blurring in the background?):
[image]


This is why you use TAA (a moving camera while reducing "jaggies"):
[image]


This is a negative side effect of TAA when it is overused (blurring during a cut scene):
[image]


Now that's why their results are so odd.

The kicker is that doing this has no discernible image quality advantage. Disabling Glare and boosting TAA to 12 actually drives image quality down, and the extra tessellation brings no discernible improvement either.

Therefore I can conclude one of two things. Either:

the person doing the benchmarking over at pcgameshardware didn't know what he was doing,
or
those benchmarks were tailored to produce a particular result and misinform readers.

This is why it is important to have informed readers, and why conversations such as the one we're having in this thread are of the utmost importance.
 
Why do the graphics look so bad in this game?

A) Alpha gameplay.

B) The focus is gameplay and fun, not graphics: "you will control thousands of units at a time". All the lasers and such will have realistic lighting and affect other objects.

It's Supreme Commander-type gameplay.
 
OK, so the takeaway here is that when the GPU is used for compute, AMD is competitive with nVIDIA; when the GPU is used for rendering graphics, not so much.

?
 
They both have their pros and cons. Maxwell 2 is a more efficient architecture when it comes to compute, but GCN has the ability to do more because of its raw ALU throughput. It's not as simple as one being better in one area than the other (this is based on what we have seen so far in reviews, benchmarks, etc.).

It all depends on how the program is made.
 
Heyyo,

http://www.pcgameshardware.de/Ashes-of-the-Singularity-Spiel-55338/Specials/Benchmark-DirectX-12-DirectX-11-1167997/

They did an updated benchmark, and this one actually shows the 980 performing better than the non-X Fury and the 390X, and the Titan X and 980 Ti moving ahead of the Fury X under DX12 at 1080p. At 4K, the 390X and Fury move ahead of the 980 while the 980 Ti remains on top.

Edit: They did use a lot of non-reference cards, so it's hard to draw any conclusions from there.

Hmm, if anything, I'd say that review further shows that NVIDIA's drivers for Ashes of the Singularity are still immature. Look at the DX11 vs. DX12 average framerates... a performance regression on the same settings.

I'd definitely like to see NVIDIA fix this and then redo the benchmarks. This is the complete opposite of what we saw in Anandtech's earlier preview of DX12 Star Swarm... which runs on the same Oxide Games Nitrous Engine.
 
OK, so the takeaway here is that when the GPU is used for compute, AMD is competitive with nVIDIA; when the GPU is used for rendering graphics, not so much.

?

Under DirectX 12, I would say that this is a fair statement, provided that asynchronous compute is used and that the compute shaders being utilized are optimized for both GPU architectures. Since DirectX 12 places the burden on developers to optimize for either architecture (in conjunction with working with nVIDIA and AMD to derive the most performance out of each), we will find ourselves in a more even position as it pertains to compute. In terms of rendering graphics... it will depend: GCN has great texturing capabilities, whereas Maxwell 2 has great geometry performance. All in all, DirectX 12 has more potential to deliver the theoretical performance of either nVIDIA's or AMD's architectures, provided we have talented developers working on the titles.
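To make the async compute point concrete, here is a minimal D3D12 sketch of what "asynchronous compute" means at the API level (my own illustration, not Ashes' code): the application creates a compute-type queue alongside the usual direct queue and uses a fence to order the two only where they share data.

```cpp
// Minimal sketch, not Oxide's implementation: a second COMPUTE queue lets the GPU
// overlap compute work (e.g. post-processing) with graphics work from the DIRECT queue.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute-only queue for async work
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}

// After submitting a compute command list on computeQueue, a fence keeps the graphics
// queue waiting only where it actually consumes the compute results:
//   computeQueue->Signal(fence.Get(), fenceValue);
//   graphicsQueue->Wait(fence.Get(), fenceValue);
```

Whether that overlap actually yields a speed-up is exactly the architectural question being argued in this thread: GCN's hardware schedulers can fill idle ALUs with the compute work, while on Maxwell 2 the same submission pattern apparently does not pay off.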

Under DirectX 11, I would side with what razor1 has posted. Maxwell 2 is more efficient at extracting both compute and graphics rendering performance than any GCN architecture. This has a lot to do with the resources nVIDIA has on hand, which allow them to perform far more driver interventions (e.g. shader replacements) when optimizing compute performance. Draw call limitations also come into play, as nVIDIA's architectures and their software driver work quite well together to tap into the available theoretical performance of nVIDIA's various graphics architectures. That's why we see nVIDIA's graphics cards excel at making better use of the serialized nature of the DirectX 11 API.

What I hope to see, going forward, is DirectX 12 forcing both organizations to focus more on improving their respective architectures and less on gaming the system. I'd love to see GameWorks die and, in its place, closer developer relationships forming which aren't based on hampering a competitor's performance in order to look better in benchmarks. This would have enormous benefits for the gaming community (more competition, lower hardware pricing, as well as better in-game visuals). We could still post benchmarks, argue, brag, etc., but we'd be doing so on an even playing field and without hindering the overall gaming experience.

My 2 cents.
 
Under DirectX 12, I would say that this is a fair statement, provided that asynchronous compute is used and that the compute shaders being utilized are optimized for both GPU architectures. Since DirectX 12 places the burden on developers to optimize for either architecture (in conjunction with working with nVIDIA and AMD to derive the most performance out of each), we will find ourselves in a more even position as it pertains to compute. In terms of rendering graphics... it will depend: GCN has great texturing capabilities, whereas Maxwell 2 has great geometry performance. All in all, DirectX 12 has more potential to deliver the theoretical performance of either nVIDIA's or AMD's architectures, provided we have talented developers working on the titles.

Under DirectX 11, I would side with what razor1 has posted. Maxwell 2 is more efficient at extracting both compute and graphics rendering performance than any GCN architecture. This has a lot to do with the resources nVIDIA has on hand, which allow them to perform far more driver interventions (e.g. shader replacements) when optimizing compute performance. Draw call limitations also come into play, as nVIDIA's architectures and their software driver work quite well together to tap into the available theoretical performance of nVIDIA's various graphics architectures. That's why we see nVIDIA's graphics cards excel at making better use of the serialized nature of the DirectX 11 API.

What I hope to see, going forward, is DirectX 12 forcing both organizations to focus more on improving their respective architectures and less on gaming the system. I'd love to see GameWorks die and, in its place, closer developer relationships forming which aren't based on hampering a competitor's performance in order to look better in benchmarks. This would have enormous benefits for the gaming community (more competition, lower hardware pricing, as well as better in-game visuals). We could still post benchmarks, argue, brag, etc., but we'd be doing so on an even playing field and without hindering the overall gaming experience.

My 2 cents.
DX12 would just push for even more GameWorks. If the onus is on developers to optimize, that's more work, and it's work they'll happily push off to nVIDIA or AMD if either offers; hell, GameWorks lives off the fact that developers are lazy and would rather use something pre-packaged than write their own engine or effects. If DX12 requires more out of them, you can expect more of them to be willing to trade control for results.
 
I definitely think the new APIs will be a financial boon for engine developers. Game devs aren't going to want to have to worry about how to get their games running well on every single GPU out there. In comes Epic, DICE, etc. to handle that for them (for a nominal fee, of course).
 
On the Ashes forum boards the developers have said that Nvidia's drivers are immature for DX12, and that Nvidia cards will do better under DX11 at this point. Playing with the benchmark, I'm seeing a noticeable improvement if I play on DX11.
 
On the Ashes forum boards the developers have said that Nvidia's drivers are immature for DX12, and that Nvidia cards will do better under DX11 at this point. Playing with the benchmark, I'm seeing a noticeable improvement if I play on DX11.

Can you post your results, with a picture of the settings used, in the Ashes of the Singularity benchmark thread?
 
Hmm... Ark, a GameWorks-sponsored title, also just backed out of releasing the DX12 version of their game at the last second. They even have a free weekend booked on Steam that was supposed to coincide with the DX12 release.

Looks like nVIDIA is having quite a hard time with DX12.
 
I definitely think the new APIs will be a financial boon for engine developers. Game devs aren't going to want to have to worry about how to get their games running well on every single GPU out there. In comes Epic, DICE, etc. to handle that for them (for a nominal fee, of course).

Most definitely, as most engine developers went through the process of updating their engines to be DX12-compliant well before the launch of DX12.
 
Hmm... Ark, a GameWorks-sponsored title, also just backed out of releasing the DX12 version of their game at the last second. They even have a free weekend booked on Steam that was supposed to coincide with the DX12 release.

Looks like nVIDIA is having quite a hard time with DX12.

I was going to download it to test DX12 this weekend. Oh well. :(
 
Here's the official post about Ark's DX12 delay

ARK DirectX 12 Delay

Hello Survivors,

It's been a long week here at Studio Wildcard as the programming team has been grinding to get the DX12 version ready for release. It runs, it looks good, but unfortunately we came across some driver issues that we can't entirely tackle ourselves :(. We’ve reached out to both NVIDIA and AMD and will be working with them to get it resolved as soon as possible! Once that’s tackled, we’ll be needing to do more solid testing across a range of hardware with the new fixes. Sadly, we're gonna have to delay its release until some day next week in order to be satisfied with it. It's disappointing to us too and we're sorry for the delay, really thought we’d have it nailed today but we wouldn't want to release ARK DX12 without the care it still needs at this point. Hang in there, and when it's ready for public consumption, it should be worth the wait!
 
I was going to download it to test DX12 this weekend. Oh well. :(

I was just trying the free weekend for kicks. Even if DX12 works on AMD (like 2x CrossFire 280X cards with almost perfect scaling), nVIDIA won't let them release it.


It's funny: there is NO mention of AMD anywhere in that game EXCEPT to say they are reaching out to both companies for DX12 help. Gave me a chuckle.
 
Oxide responded to my findings here: http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995

And yes... it appears I was correct in my assessment. Going forward... expect to see GCN doing very well in DX12.

Wow, there are lots of posts here, so I'll only respond to the last one. The interest in this subject is higher than we thought. The primary evolution of the benchmark is for our own internal testing, so it's pretty important that it be representative of the gameplay. To keep things clean, I'm not going to make very many comments on the concept of bias and fairness, as it can completely go down a rat hole.

Certainly I could see how one might see that we are working closer with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;(). Nvidia was actually a far more active collaborator over the summer than AMD was; if you judged from email traffic and code check-ins, you'd draw the conclusion we were working closer with Nvidia rather than AMD ;) As you've pointed out, there does exist a marketing agreement between Stardock (our publisher) for Ashes with AMD. But this is typical of almost every major PC game I've ever worked on (Civ 5 had a marketing agreement with NVidia, for example). Without getting into the specifics, I believe the primary goal of AMD is to promote D3D12 titles, as they have also lined up a few other D3D12 games.

If you use this metric, however, given Nvidia's promotions with Unreal (and integration with Gameworks) you'd have to say that every Unreal game is biased, not to mention virtually every game that's commonly used as a benchmark, since most of them have a promotion agreement with someone. Certainly, one might argue that Unreal being an engine with many titles should give it particular weight, and I wouldn't disagree. However, Ashes is not the only game being developed with Nitrous. It is also being used in several additional titles right now, the only announced one being the Star Control reboot. (Which I am super excited about! But that's a completely other topic ;) )

Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only 'vendor' specific code is for Nvidia, where we had to shut down async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional, but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute, so I don't know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD, which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor specific path, as it's responding to capabilities the driver reports.

From our perspective, one of the surprising things about the results is just how good Nvidia's DX11 perf is. But that's a very recent development, with huge CPU perf improvements over the last month. Still, DX12 CPU overhead is far, far better on Nvidia, and we haven't even tuned it as much as DX11. The other surprise is the min frame times, with the 290X beating out the 980 Ti (as reported on Ars Technica). Unlike DX11, minimum frame times are mostly an application-controlled feature, so I was expecting it to be close to identical. This would appear to be GPU-side variance rather than software variance. We'll have to dig into this one.

I suspect that one thing that is helping AMD on GPU performance is D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic where we just took a few compute tasks we were already doing and made them asynchronous, Ashes really isn't a poster-child for advanced GCN features.

Our use of Async Compute, however, pales in comparison to some of the things which the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using Async Compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these GCN-built and optimized engines start coming to the PC. I don't think Unreal titles will show this very much, though, so likely we'll have to wait to see. Has anyone profiled Ark yet?


In the end, I think everyone has to give AMD a lot of credit for not objecting to our collaborative effort with Nvidia even though the game had a marketing deal with them. They never once complained about it, and it certainly would have been within their right to do so. (Complain, anyway; we would have still done it ;) )

--
P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion arose because Nvidia PR was putting pressure on us to disable certain settings in the benchmark; when we refused, I think they took it a little too personally.

And just as I suspected... the AWSs don't function "out of order" and do not conduct error checking. Contrary to what I believed, the pipeline stalls due to dependencies made the nVIDIA cards even slower, so Oxide worked closely with nVIDIA to improve their shader code and get the most performance they could out of Kepler/Maxwell/Maxwell 2:
AFAIK, Maxwell doesn't support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than to not.

Whether or not Async Compute is better is subjective, but it definitely does buy some performance on AMD's hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to its scheduler, is hard to say.

And Oxide collaborated a lot with nVIDIA. Therefore, no, a driver won't help the GTX 980 Ti, as I was alluding to in my statements. Next up... my analysis of Multi-Adapter technology once Ark arrives with DX12.
 
So, are Epic and their Unreal Engine supporting asynchronous shaders on console, but artificially not on PC? Or not at all?

Async shaders are a pretty big thing on consoles for titles in production right now. Also, AMD did make an Unreal Engine 3 async plugin with Square Enix and their title Thief as a proof of concept, so we know it is very possible.

Now I'm curious.
 
Thank you OP for taking the time to post what you have done. I appreciate it although I have not yet had a moment to read it. Still thank you. :)
 
Thank you OP for taking the time to post what you have done. I appreciate it although I have not yet had a moment to read it. Still thank you. :)

Thank my wife :p I was glued to the PC for a week straight researching this LOL

And no problem :) It was very amusing and rewarding to learn so much in such a short period of time. I haven't been active in GPU architectures since 2004.
 
So it looks like Nvidia will try to bury this until they can get Pascal on the market.
Interested to see how this works out for them. :rolleyes:
 
I'll be keeping my 3-year-old dual Radeon R9 290X configuration going until Greenland and Pascal. Might build a mini-ITX R9 Nano rig for LAN parties, seeing as lugging around this Cosmos II is a pain in the arse.
 
So, are Epic and their Unreal Engine supporting asynchronous shaders on console, but artificially not on PC? Or not at all?

Async shaders are a pretty big thing on consoles for titles in production right now. Also, AMD did make an Unreal Engine 3 async plugin with Square Enix and their title Thief as a proof of concept, so we know it is very possible.

Now I'm curious.

I think you can feel pretty safe that they'll not gimp their engine. It's not like the first release is the end of all improvements.
 
Here's the official post about Ark's DX12 delay:


ARK DirectX 12 Delay

Hello Survivors,

It's been a long week here at Studio Wildcard as the programming team has been grinding to get the DX12 version ready for release. It runs, it looks good, but unfortunately we came across some driver issues that we can't entirely tackle ourselves . We’ve reached out to both NVIDIA and AMD and will be working with them to get it resolved as soon as possible! Once that’s tackled, we’ll be needing to do more solid testing across a range of hardware with the new fixes. Sadly, we're gonna have to delay its release until some day next week in order to be satisfied with it. It's disappointing to us too and we're sorry for the delay, really thought we’d have it nailed today but we wouldn't want to release ARK DX12 without the care it still needs at this point. Hang in there, and when it's ready for public consumption, it should be worth the wait!

Looks like they may have run into similar issues to Oxide (though they mention AMD as well, so I'm wondering what that's about...).

And it looks like I may have caused a lot of bad PR for nVIDIA. I hope they don't hold it against me.

nVIDIA GPUs don't support asynchronous shading without incurring a rather large performance hit. That pretty much means they don't support asynchronous shading (if you nominally support it but your graphics card takes a huge hit rather than receiving a boost... you don't really support it). They'll likely have to invest a lot of time and resources in helping developers optimize shaders for their architectures. I suppose nVIDIA would be better off running these games on a DX11 path instead (they have the best DX11 cards on the market). But nVIDIA also has 12_1 support in DX12, for Conservative Rasterization and ROVs... so they could simply optimize the shaders and still support DX12 with these extra features.
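For reference, those 12_1 features are plain capability bits that an engine can query at startup. Here's a hedged C++ sketch (a standard D3D12 caps query, not tied to any particular game) of how Conservative Rasterization and ROV support would be detected:

```cpp
#include <d3d12.h>

// Returns true if the adapter exposes the two 12_1-level features mentioned above.
// These caps are independent of how well async compute performs on the same hardware.
bool SupportsFeatureLevel121Extras(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &opts, sizeof(opts))))
        return false;

    const bool conservativeRaster =
        opts.ConservativeRasterizationTier != D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
    const bool rovs = (opts.ROVsSupported == TRUE);

    return conservativeRaster && rovs;   // Maxwell 2 reports both; GCN at the time did not
}
```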

As for incoming DX12 titles, we may be looking at the GeForce FX all over again, because of how prevalent async shading is in these incoming titles, especially console ports. The difference is that, at least with the GeForce FX, you could optimize your drivers and perform shader replacements (anyone remember the controversy surrounding the GeForce FX driver compiler?). The "compiler," this time around, takes the form of not running async shading and instead working with developers to optimize shaders to get good performance out of their architectures.

DX12 support appears to be far more complicated than we initially thought.

nVIDIA might need Pascal, and soon.
 
Something doesn't make sense in that Oxide dev's comments. If nV's async units were bottlenecking Maxwell 2's architecture, the Fury X would blow past the 980 Ti, which it doesn't; they are around the same performance, right where they should be. Unless the batch sizes are enough to become a CPU bottleneck for the higher-end cards so that the async shader bottleneck isn't noticeable, but I find that highly unlikely because of the performance hit the 980 Ti and Fury X take when changing resolutions while the batch amounts remain the same.

Something else is going on.
 
Something doesn't make sense in that Oxide dev's comments. If nV's async pipelines were bottlenecking Maxwell 2's architecture, the Fury X would blow past the 980 Ti, which it doesn't; they are around the same performance, right where they should be. Unless the batch sizes are enough to become a CPU bottleneck for the higher-end cards so that the async shader bottleneck isn't noticeable, but I find that highly unlikely because of the performance hit the 980 Ti and Fury X take when changing resolutions.

Something else is going on.

Indeed, we're discussing the Fury-X bottleneck over at overclock.net. We've asked the Oxide dev for some input. We're waiting on a response.

One thing to consider: Hawaii and Fiji are based on a similar overall design, and something else in that design, something Ashes of the Singularity makes use of, might be holding the Fury X back. I'd like to say tessellation, but Fiji improved upon Hawaii on that front. This is what led me to zero in on the GTris/s rate initially.
 
Indeed, we're discussing the Fury-X bottleneck over at overclock.net. We've asked the Oxide dev for some input. We're waiting on a response.


Ask him to do this: run the game through a profiler and post the results. A screenshot should be enough, with the pertinent lighting shader that heavily uses async code. It would be nice to see it on both an AMD and an nV test system, but either one is enough for now, preferably nV.
 
Ask him to do this: run the game through a profiler and post the results. A screenshot should be enough, with the pertinent lighting shader that heavily uses async code. It would be nice to see it on both an AMD and an nV test system, but either one is enough for now, preferably nV.

They don't run async on nVIDIA at all, I think.

When he's talking about Maxwell, he means Maxwell 2 as well, I think. He mentions the GTX 980 Ti in reference to the low CPU overhead.

Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only 'vendor' specific code is for Nvidia, where we had to shut down async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional, but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute, so I don't know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD, which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor specific path, as it's responding to capabilities the driver reports.

If they shut down async by Vendor ID, then they would have done the same for the GTX 980 Ti, AFAIK. That is, unless they targeted a specific Vendor ID for the GPU architecture as well.

What makes me think this is that Maxwell 2 also falls within Tier 2 class binding hardware.
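For clarity, those are two different kinds of checks. A "vendor specific path" is typically keyed off the PCI Vendor ID reported by DXGI, while the resource binding tier is a capability the D3D12 driver reports regardless of vendor. A hedged sketch (my illustration, not Oxide's code):

```cpp
#include <d3d12.h>
#include <dxgi.h>

// Vendor ID check: how an engine would key a vendor-specific path (e.g. disabling
// async compute), as the Oxide dev describes. 0x10DE is NVIDIA, 0x1002 is AMD.
bool IsNvidiaAdapter(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    return desc.VendorId == 0x10DE;
}

// Binding tier check: a reported capability, not a vendor test. Maxwell 2 reports
// TIER_2, GCN reports TIER_3, which is the extra-CPU-overhead difference he mentions.
D3D12_RESOURCE_BINDING_TIER GetBindingTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));
    return opts.ResourceBindingTier;
}
```

So a Vendor ID switch would indeed hit every current nVIDIA card, the 980 Ti included, unless the engine additionally filtered on the adapter's Device ID.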
 
They don't run async on nVIDIA at all, I think.

When he's talking about Maxwell, he means Maxwell 2 as well, I think. He mentions the GTX 980 Ti in reference to the low CPU overhead.



If they shut down async by Vendor ID, then they would have done the same for the GTX 980 Ti, AFAIK.

Well, if that were the case, it would show up in the 980 Ti and Fury X benchmarks, and it doesn't really affect CPU overhead (at least not much, as he stated), since batch counts don't seem to have any greater effect on either of the two IHVs' cards.

The reason the driver was trying to expose it is because the hardware has it. (It could be driver bugs on nV's Maxwell 2, or the way the code was written and nV's compiler having issues; a better way to say it is that the code is optimized better for AMD's hardware and shader compiler.)

If he runs the profiler, which should only take 5 minutes or so per system, it's easy to find out where the issue is and which shaders are slowing down the different IHVs (it doesn't matter if they are using the same path or not; it won't give us a perfect picture, but it should be enough for a competitive analysis).
 
Well, if that were the case, it would show up in the 980 Ti and Fury X benchmarks, and it doesn't really affect CPU overhead (at least not much, as he stated), since batch counts don't seem to have any greater effect on either of the two IHVs' cards.

The reason the driver was trying to expose it is because the hardware has it. (It could be driver bugs on nV's Maxwell 2, or the way the code was written and nV's compiler having issues; a better way to say it is that the code is optimized better for AMD's hardware and shader compiler.)

If he runs the profiler, which should only take 5 minutes or so per system, it's easy to find out where the issue is and which shaders are slowing down the different IHVs (it doesn't matter if they are using the same path or not; it won't give us a perfect picture, but it should be enough for a competitive analysis).

Why would nVIDIA spend the time to optimize the shaders and help Oxide get good performance instead of focusing on fixing the issues in their driver? This leads me to believe that, perhaps, for the reasons we've been discussing (lack of error correction), Maxwell/Maxwell 2 don't actually support async shading that well.

I'll ask him to run a profiler.
 
Why would nVIDIA spend the time to optimize the shaders and help Oxide get good performance instead of focusing on fixing the issues in their driver? This leads me to believe that, perhaps, for the reasons we've been discussing (lack of error correction), Maxwell/Maxwell 2 don't actually support async shading that well.

I'll ask him to run a profiler.


They're two separate teams. Although they work closely together at times, *usually* around game launch times, they have two separate backlogs, two different sprint cycles, and two different focuses from a project-management view. So if, let's say, the driver team doesn't think something is as important to fix, it gets shoved to the back of the backlog.

Thanks for asking; he might not be able to show us anything, depending on whether Oxide management will allow it or not. In any case, gotta go to bed!
 
They're two separate teams. Although they work closely together at times, *usually* around game launch times, they have two separate backlogs, two different sprint cycles, and two different focuses from a project-management view. So if, let's say, the driver team doesn't think something is as important to fix, it gets shoved to the back of the backlog.

Thanks for asking; he might not be able to show us anything, depending on whether Oxide management will allow it or not. In any case, gotta go to bed!

IP :p

So far, however, Oxide has been quite open. After I emailed them, they shared information about their CPU optimizations, and now their GPU optimizations as well.

What a great developer :)
 
Wow, someone fricking takes the time to put some effort into figuring out what's going on, and just because he wants to share it around tech forums, suddenly he has an agenda? People come out swinging for no damn reason and call him out for wanting to share his work. I frickin' hate people sometimes. Without any basis they go after someone just because that person wanted to share what they'd put together on different outlets. If you have nothing productive to say, why the hell hate on someone without reason? It's hard to even have a factual discussion thanks to fanboys.
 
I definitely think the new APIs will be a financial boon for engine developers. Game devs aren't going to want to have to worry about how to get their games running well on every single GPU out there. In comes Epic, DICE, etc. to handle that for them (for a nominal fee, of course).

In the end, the consumer decides where they lay down their money. Making half-working games that run well on one card and are hardly playable on the next would mean developers are shooting themselves in the foot.
Currently they have to figure out what the best path is, and in certain cases feedback is required; this is only good from a consumer perspective...
 
In the end, the consumer decides where they lay down their money. Making half-working games that run well on one card and are hardly playable on the next would mean developers are shooting themselves in the foot.
Currently they have to figure out what the best path is, and in certain cases feedback is required; this is only good from a consumer perspective...

Developers will probably pick the path that runs best on consoles and brute-force the PC implementation, as they normally have done, patching their PC version to kingdom come.
 
Developers will probably pick the path that runs best on consoles and brute-force the PC implementation, as they normally have done, patching their PC version to kingdom come.

You mean the Batman fiasco. I just have to remind you that DICE/EA have done a very good job with their games, and they run on multiple platforms (come to think of it, I'm going to regret writing this at some future point; sorry, EA)...
 
You mean the Batman fiasco. I just have to remind you that DICE/EA have done a very good job with their games, and they run on multiple platforms (come to think of it, I'm going to regret writing this at some future point; sorry, EA)...

I'm not thinking of a specific game, but of the trend. Consoles are normally the machines with the fewest resources, where developers need to optimize the most. On PC, they will spend less time optimizing and instead rely on brute force (more powerful hardware instead of optimizations) and on hardware vendors' shader replacements, etc. Developers will pick the path of the consoles. Even now, several games on PS4 use async shaders.
 