Looking forward to the comparison between the 8370 and similarly priced Intel chips. Looks like the Skylake equivalent price-wise is the i5 6500. The 6600K looks to be about $50.00 more.
As everyone said: read the article. What I mean is, test any dual-GPU setup with multi-adapter and compare CPU-bound scenarios, and also, with a single GPU, compare whether asynchronous shaders give some benefit in CPU-bound scenarios vs. GPU-bound ones.
Article said: It would seem to me that while AotS is being held up as the go-to Async Compute game, it is far from it. However, going back to what the developer said, it would seem that Async Compute advantages might be much better realized under CrossFire and possibly SLI on the PC. We will test that in the future, hopefully.
...This testing has been somewhat harrowing for me over the last couple of weeks. I have spent literally a week dealing with issues with the AotS benchmark. After I had put this article together, I felt as though it would benefit from some added GPUs. Going back and building on my benchmarks, the AotS benchmark started crashing. New driver loads, new OS images, new OS installs, and new game installs would NOT fix my issues. I changed all hardware and still had issues. When I moved the system over to an Intel CPU based system, all my issues went away. I do not have an explanation for this; I am just explaining that I wanted to have more AMD CPU based data on this, but it ended up being impossible for me. I will have a follow-up article using a Haswell-E system with more GPUs as well.
If I remember my DX programming correctly, the game needs to be programmed to follow a separate target rendering path based on the video card's capabilities, but this was with DX11 and earlier. DirectX has "get" functions that will pull a standardized string telling you what the hardware capabilities of the present video card are, and then you branch into the appropriate rendering path, or flag certain methods to be used or not. I have not delved into DX12 to see how it works in this regard.

This is an idle curiosity question triggered by the article, but not one I expect to have been covered in the article: is there any intelligence in the DX12 API that recognizes whether or not a DX12-compliant GPU is present? In other words, does DX12 use different sub-routines/program calls for the CPU-related programming if it sees a DX12 GPU, or do features like async compute rely on program calls alone and ignore any GPU-related information?

As I am pretty ignorant in the world of programming, please excuse me if my terminology is not accurate.
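For what it's worth, DX12 leaves that decision with the application rather than hiding it: device creation simply fails if no DX12-capable GPU is present, and finer-grained capabilities are then queried explicitly so the game can branch its rendering path. A rough sketch (the function name QueryDx12Caps is made up for illustration):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch only: how an application can ask whether a DX12-capable GPU is
// present and what optional features it supports.
bool QueryDx12Caps()
{
    ComPtr<ID3D12Device> device;
    // Passing nullptr uses the default adapter; this call fails outright
    // if that adapter cannot do D3D12 at feature level 11_0.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return false;  // no DX12-compliant GPU

    // Optional capabilities (resource binding tier, conservative
    // rasterization tier, etc.) are reported through CheckFeatureSupport;
    // the game branches its rendering path on these values.
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));
    return true;
}
```

Note that async compute itself is not a flag you can query here; every DX12 driver has to accept compute-queue submissions, and whether they actually overlap with graphics work is up to the hardware, which is part of why the vendors behave so differently.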
Really liked the article, and I agree it has potential; now the question is whether developers will be able to extract the performance from it.
The part where you explain that async isn't the main performance generator for AMD is quite interesting. It makes me wonder whether AMD would have shown us this kind of strong performance on DX11 too with a better driver development budget than they had, but well, hindsight is 20/20.
So AMD had no budget on DX11, but does on DX12. And Nvidia had a huge budget on DX11, but does not on DX12?
Give your head a shake would ya.
How about AMD hardware at this time is better suited to DX12, and Nvidia's better suited to DX11.
Make sense?
It is simple: Nvidia might not want (or be able) to do anything about the current situation, seeing that most games are still DX11. DX12, however, is not the same as DX11. With DX11 you can get a huge difference if you optimize drivers, while under DX12 the game developers are doing the optimizations.
That allows bugs to be fixed by developers instead of having to wait for a new driver that interprets the code better from whichever developer...
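As a hedged illustration of what "the game developers are doing the optimizations" means in practice (sketch only; the helper name is made up): under DX12 the application itself records resource state transitions that a DX11 driver used to track for you, so getting this wrong is a game bug to be patched, not something a new driver can quietly paper over.

```cpp
#include <d3d12.h>

// Sketch: the application, not the driver, declares when a texture moves
// from being a render target to being read in a pixel shader.
void TransitionForShaderRead(ID3D12GraphicsCommandList* cmdList,
                             ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    cmdList->ResourceBarrier(1, &barrier);
}
```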
Yeah, and do the repeated posts in the past that nV pays devs to code for their cards align with this sentiment? Flipping things to fit your needs when you post doesn't make an argument correct.
Dan Baker– @dankbaker
I'd like to go on the record that if you need a new driver for launch day, the driver and API are busted. #SpecialDriverNotNeeded
Dan Baker
Interesting, I have never shipped a game w/o driver workarounds
And your second sentence: this is shown with this game, which shows driver overhead under DX12 when async isn't active on nV drivers. It's not optimally programmed for nV.
But then your third sentence is not true, because when async shaders aren't active we still see the driver (CPU) overhead in both the DX11 and DX12 paths.
First, since we are looking at a semi-close-to-metal API in DX12, driver involvement is far smaller, as the game is communicating much more directly with the hardware.
Second, if it works/runs, that means it was coded for that hardware, but that doesn't necessarily speak to how efficiently or optimally.
Third, that Nvidia driver overhead in DX12 is likely the driver coding to make async work; it is not necessarily an optimization issue, nor does it mean driver involvement is as necessary or as important as it is in DX11.
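For context, here is roughly what "async shaders" look like at the API level (a sketch with my own naming, not code from the game): the application creates a second command queue of type COMPUTE and submits work to it. Every DX12 driver has to accept that submission; whether the GPU actually runs it concurrently with graphics is an architecture/driver question, which is why the same code path can help one vendor and do little or nothing on another.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: "async compute" is expressed as an extra COMPUTE-type queue
// alongside the normal DIRECT (graphics) queue.
ComPtr<ID3D12CommandQueue> MakeComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;  // compute lists submitted here may overlap graphics work
}
```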
Is Nvidia still working on programming the async driver for the 900 series? Nvidia was quick to ask the devs not to enable async when they released the demo, weren't they, or do you need a link for that?
I'm somewhat baffled by the time it takes for Nvidia to get it done...
Then it is more likely proof of architecture limitations, serial as opposed to parallel.
Fewer cores will erase the advantage of DX12, at least the main advantage reported by whoever in that FPS comparison of 42 to 60, a 43% jump.
The one thing that scenario might point out, if tested on a Fury X since those are scaling up with DX12, is how much of the performance is coming from the extra cores, and how much of the performance is coming from elsewhere in DX12.
That actually could be interesting.
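As a rough sketch of where the extra-core scaling in DX12 tends to come from (names and structure here are made up for illustration): command lists can be recorded on many CPU threads in parallel, each with its own allocator, and then submitted together, whereas DX11 funneled most of that work through a single driver thread. Turning cores off, as suggested above, would mostly be measuring this.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

// Sketch: build command lists on several CPU threads, then submit them all.
void RecordInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                      unsigned threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < threadCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&, i] {
            // ...each thread records its own share of draw/dispatch calls...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```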
Since AotS is part of the article, I'll post this here. It can be moved to its own thread if needed.
I think this is an absolutely great set of tweets from Dan Baker on the subject of drivers; I posted a couple of the tweets above. The rest of the big tweet chain:
I'm really interested to see the X99 results, especially at lower clocks. I just put together an E5-2670 system, and DX12 could really give it a longer life than it ever should have had. Shame you have to get on W10 for it, though...
Thanks for linking, that was interesting.
Also worth noting Andrew Lauritzen's responses as well; he is heavily involved with the graphics side at Intel and has great knowledge of DX12.
Cheers
I like how a feature that gives us a boost in FPS at no expense in quality is looked at as trivial...
Why is that scary? That was the point I was trying to make. DX12 is allowing lower-end processors to perform better, much better. These numbers are not much lower than another user's system running a Xeon X5670 with a 280X. So if the FX-4100 @ 4.5 is almost equal to an FX-8120 @ 4.5, which is almost equal to an X5670 @ 4.2... am I wrong/confused, or is that not good?