Fallout 4 optimized & CF fixed drivers

As for DX12 performance and asynchronous compute, it's funny how nVidia wins in AotS the closer we get to when it actually matters. Let's reserve judgment until release time, shall we? No point speculating on alphas/betas.

http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/

We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn't move the engine architecture backward (that is, we are not jeopardizing the future for the present).

This does not mean that Nvidia has better async; it means that the way Oxide implemented it for Nvidia currently works faster. That has nothing to do with the feature itself, which is not implemented the same way as on AMD hardware at this moment.
 
So I tried the AMD beta drivers .... and what a surprise .... Crossfire is still not working.

GPU2 usage shows 0% in crossfire
 
So I tried the AMD beta drivers .... and what a surprise .... Crossfire is still not working.

GPU2 usage shows 0% in crossfire

Have you tried the workaround which has been posted several times? The driver doesn't claim to fix CF on its own.
 
Have you tried the workaround which has been posted several times? The driver doesn't claim to fix CF on its own.

Which workaround?

Many have been posted and the results appear to vary per person. Some people use the Skyrim profile (this actually just disables xfire), some report success with the Crysis 3 or Far Cry 4 profile, yet others report it doesn't work at all, or it works somewhat and then frame rates crash.

All the fixes I have seen posted may increase frame rates, but they also create unreal microstutter, effectively making the "fix" useless.
 
http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/

This does not mean that Nvidia has better async; it means that the way Oxide implemented it for Nvidia currently works faster. That has nothing to do with the feature itself, which is not implemented the same way as on AMD hardware at this moment.
This is incorrect: multi-engine is a DX12 feature, and there is no way a video card can get DX12 certification without it. DX does not stipulate how the hardware has to support that feature, so vendors can implement the support in different ways. As stated many times before, even generational hardware from a single IHV will show differences when running multi-engine code.

Concurrent kernel and instruction execution is different from multi-engine and is not stipulated by DirectX either; this is the difference between AMD and nV hardware when we are talking about async.
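To make that concrete: at the API level, "multi engine" is nothing more exotic than creating more than one command queue, which every DX12 device must accept. A minimal D3D12 sketch (assuming an existing device; error handling omitted):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Any DX12 device must accept both queue types; whether the GPU actually
// runs them concurrently is up to the hardware and driver.
void CreateEngines(ID3D12Device* device,
                   ComPtr<ID3D12CommandQueue>& gfxQueue,
                   ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // 3D/graphics engine
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute engine
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    // Note there is no caps bit to check first: the API guarantees the
    // queues exist, not that they execute in parallel.
}
```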

Andrew over at B3D, an Intel graphics engineer, summed it up the best I have seen so far.

https://forum.beyond3d.com/posts/1869935/

Let me try one more time here:

From an API point of view, async compute is a way to provide an implementation with more potential parallelism to exploit. It is pretty analogous to SMT/hyper-threading: the API (multiple threads) is obviously supported on all hardware, and depending on the workload and architecture it can increase performance in some cases where the different threads are using different hardware resources. However, there is some inherent overhead to multithreading, and an architecture that can get high performance with fewer threads (i.e. high IPC) is always preferable from a performance perspective.

When someone says that an architecture does or doesn't support "async compute/shaders" it is already an ambiguous statement (particularly for the latter). All DX12 implementations must support the API (i.e. there is no caps bit for "async compute", because such a thing doesn't really even make sense), although how they implement it under the hood may differ. This is the same as with many other features in the API.

From an architecture point of view, a more well-formed question is "can a given implementation ever be running 3D and compute workloads simultaneously, and at what granularity in hardware?" Gen9 cannot run 3D and compute simultaneously, as we've referenced in our slides. However what that means in practice is entirely workload dependent, and anyone asking the first question should also be asking questions about "how much execution unit idle time is there in workload X/Y/Z", "what is the granularity and overhead of preemption", etc. All of these things - most of all the workload - are relevant when determining how efficiently a given situation maps to a given architecture.

Without that context you're effectively in the space of making claims like "8 cores are always better than 4 cores" (regardless of architecture) because they can run 8 things simultaneously. Hopefully folks on this site understand why that's not particularly useful.

... and if anyone starts talking about numbers of hardware queues and ACEs and whatever else you can pretty safely ignore that as marketing/fanboy nonsense that is just adding more confusion rather than useful information.
And a follow-up to this:

https://forum.beyond3d.com/posts/1869983/

This is why I have been saying let's not talk about async performance and just throw it out the window: we aren't talking about what is actually important to what Oxide has been doing, nor will it matter that much in the future, since both current applications and hardware are going to change drastically.
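Andrew's SMT analogy maps directly onto how work is actually submitted: the API only lets you hand the GPU independent streams of commands, and a fence expresses the dependency between them; whether the streams overlap is entirely architecture-dependent. A hedged sketch, assuming the queues, command lists, and fence were created elsewhere:

```cpp
#include <d3d12.h>

// Submit one compute stream and one graphics stream. The fence only
// expresses the dependency; nothing here forces concurrent execution.
void SubmitFrame(ID3D12CommandQueue* gfxQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12GraphicsCommandList* gfxList,
                 ID3D12GraphicsCommandList* computeList,
                 ID3D12Fence* fence, UINT64& fenceValue)
{
    // Compute queue: kick off e.g. a particle sim, signal when done.
    ID3D12CommandList* c[] = { computeList };
    computeQueue->ExecuteCommandLists(1, c);
    computeQueue->Signal(fence, ++fenceValue);

    // Graphics queue: work that does NOT depend on the compute results;
    // this is the part that *may* overlap on some architectures.
    ID3D12CommandList* g[] = { gfxList };
    gfxQueue->ExecuteCommandLists(1, g);

    // Anything submitted to the graphics queue after this Wait is held
    // back until the compute fence is signaled.
    gfxQueue->Wait(fence, fenceValue);
}
```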
 
Which workaround?

Many have been posted and the results appear to vary per person. Some people use the Skyrim profile (this actually just disables xfire), some report success with the Crysis 3 or Far Cry 4 profile, yet others report it doesn't work at all, or it works somewhat and then frame rates crash.

All the fixes I have seen posted may increase frame rates, but they also create unreal microstutter, effectively making the "fix" useless.

AFR forced in CCC works with this driver release. Performance across the board was greatly increased for me, both with and without xfire.

It does exhibit some ghosting, but it's a trade-off for me until official xfire drivers drop. This game is a hot mess, and it's unplayable at 4K without both cards.
 
Nvidia and AMD haven't changed the way their mGPU drivers work in a long time.
Game developers still release broken, buggy games that don't work with Crossfire and SLI, because mGPU is too hard to implement.


Why are we still blaming AMD and Nvidia for this situation? Maybe we should go to the developers' forums and ask them why they didn't add support for Crossfire and SLI while they were developing the game?

Just a passing thought.
 
It has to come from both the developers and the IHVs. Many of the issues are due to the fact that developers use certain types of renderers which aren't well suited to multi-GPU configurations (a sketch of a typical offender follows), and the IHVs should take note of this and educate developers on what is good and what is not. Sometimes things are unavoidable due to time constraints; each company has its own goals, and those can't be superseded even for partners.
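To illustrate the kind of renderer that isn't mGPU-friendly: anything with a frame-to-frame dependency (temporal AA, reprojection, etc.) fights alternate-frame rendering, because the previous frame's output lives on the other GPU. A conceptual sketch with hypothetical stand-in types, not any engine's real code:

```cpp
#include <utility>

// Hypothetical stand-ins so the sketch compiles; a real renderer would
// use API texture objects here.
struct Texture { /* GPU surface */ };
void ResolveTemporalAA(Texture* current, Texture* history) {
    // Reads 'history' (written last frame) while writing 'current'.
}

struct FrameResources {
    Texture* currentColor;  // written this frame
    Texture* historyColor;  // written LAST frame; under AFR it sits on the other GPU
};

void RenderFrame(FrameResources& f)
{
    // Under AFR (even frames on GPU0, odd on GPU1), this read crosses
    // GPUs and forces an inter-GPU copy or a stall every frame.
    ResolveTemporalAA(f.currentColor, f.historyColor);
    std::swap(f.currentColor, f.historyColor);
}
```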
 
Since consoles are the primary dev target, and they don't have mGPU, it stands to reason that xfire/SLI support is an afterthought.

Just the way the world works, unfortunately.
 
So this will end up as another Pieter3dNow versus razor1 thread, always arguing about async? It seems async is the only word Pieter3dNow knows how to "debate"... man, this is endless. I wish I had razor1's patience =)
 
Since consoles are the primary dev target, and they don't have mGPU, it stands to reason that xfire/SLI support is an afterthought.

Just the way the world works, unfortunately.

Sad, but true. This is pretty much the reason I stick to single-GPU systems these days. I'm not holding my breath, but I hope DX12 can go some way toward changing this.
 
Sad, but true. This is pretty much the reason I stick to single-GPU systems these days. I'm not holding my breath, but I hope DX12 can go some way toward changing this.

With DX12 I think things will just get worse, as almost everything will be in the developers' hands, so any multi-GPU support will depend on them.
 

The OP's second post in that thread reads as follows:

Hmm, another single card increase I think guys, pmsl.

Seems to have a profile in the Crimson centre, but it still doesn't want to use my other GPU. Just checked with Afterburner.

AMD OWNED

and further down there is this:

Yes... There is no Crossfire profile. Just tried with default mode and the game runs in single-card mode.
 
Why are you so mad? Nobody claimed these drivers would fix CF.

Mad? lol

The post your quote relates to stated (in a quote):
Fallout 4 Profile is finally here, it's absolutely flying on my Crossfire 7970's!!!
So .... yeah, it did claim to fix CF.
 
Mad? lol

The post your quote relates to stated (in a quote): So .... yeah, it did claim to fix CF.


No. Some dude claimed that it did. AMD did not claim it.

I could easily blame the vendors for anything random fools post. :confused:
 
So this will end up as another Pieter3dNow versus razor1 thread, always arguing about async? It seems async is the only word Pieter3dNow knows how to "debate"... man, this is endless. I wish I had razor1's patience =)

Check carefully if the use of a separate compute command queue really is advantageous
Even for compute tasks that can in theory run in parallel with graphics tasks, the actual scheduling details of the parallel work on the GPU may not generate the results you hope for
Be conscious of which asynchronous compute and graphics workloads can be scheduled together

https://developer.nvidia.com/dx12-dos-and-donts

Not my debate; that's how things work in hardware, straight from Nvidia's website ...
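For what that first bullet means in practice: keep the compute path switchable and profile both per GPU. A sketch under assumptions: asyncList is a COMPUTE-type command list and inlineList a DIRECT-type list recording the same dispatches (D3D12 requires the list type to match the queue type); all the names here are hypothetical.

```cpp
#include <d3d12.h>

void SubmitCompute(bool useAsyncCompute,
                   ID3D12CommandQueue* gfxQueue,
                   ID3D12CommandQueue* computeQueue,
                   ID3D12CommandList* asyncList,
                   ID3D12CommandList* inlineList,
                   ID3D12Fence* fence, UINT64& fenceValue)
{
    if (useAsyncCompute) {
        // Separate engine: may overlap with graphics, may gain nothing.
        computeQueue->ExecuteCommandLists(1, &asyncList);
        computeQueue->Signal(fence, ++fenceValue);
        gfxQueue->Wait(fence, fenceValue);  // later gfx work sees results
    } else {
        // Single queue: serialized; the baseline to compare against.
        gfxQueue->ExecuteCommandLists(1, &inlineList);
    }
}
```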
 
https://developer.nvidia.com/dx12-dos-and-donts

Not my debate; that's how things work in hardware, straight from Nvidia's website ...


Yet you have many developers saying that do's and don'ts list applies to all hardware :p

And that document even states async is working; it just says that parallel execution of different kernels and of instructions (which are two different things) should be looked into, because combining the two queues has different effects on performance depending on what is being combined.

Async compute is available even on Kepler-based nV architectures.


If you want to talk about parallel execution we can do that; let's do that in another thread, and let's not talk about async, because that is not what is important to the discussion.
 