Oxide now says that NVIDIA does support Async Compute

Not for nothing.

nVIDIA implements Asynchronous Compute through a software workaround in the driver. In other words... they don't have the hardware for it.

David Kanter just commented on it here (in relation to VR): https://www.reddit.com/r/hardware/comments/3jnoa9/david_kanter_microprocessor_analyst_on/

Who knows how this implementation will work, but according to Oculus Rift, it will probably be catastrophic.

The rage was required in order to get nVIDIA to admit the last piece of the puzzle required. Basically nVIDIA admitted to what I stated in my original post on the topic.
 
Not for nothing.

nVIDIA implements Asynchronous Compute through a software workaround in the driver. In other words... they don't have the hardware for it.

David Kanter just commented on it here (in relation to VR): https://www.reddit.com/r/hardware/comments/3jnoa9/david_kanter_microprocessor_analyst_on/

Who knows how this implementation will work, but according to Oculus Rift, it will probably be catastrophic.

The rage was required in order to get nVIDIA to admit the last piece of the puzzle required. Basically nVIDIA admitted to what I stated in my original post on the topic.

They didn't admit to doing anything in software ;). Some things have to be done in software, like the shader compiler; breaking a shader down into its instructions has to be done that way even on GCN. But it has to be done right, otherwise you will get some major issues using async.
 
They didn't admit to doing anything in software ;). Some things have to be done in software, like the shader compiler; breaking a shader down into its instructions has to be done that way even on GCN. But it has to be done right, otherwise you will get some major issues using async.

What I mean is that a large part of the scheduling on Kepler/Maxwell/Maxwell 2 is done in the driver rather than in hardware. It's one of the big reasons for the power reduction.

Since HyperQ is a combination of software and hardware scheduling (more so on Kepler/Maxwell/Maxwell 2 than on GCN), you can make the argument that, compared to GCN, it is done largely in software, no?
 
What I mean is that a large part of the scheduling on Kepler/Maxwell/Maxwell 2 is done in the driver rather than in hardware. It's one of the big reasons for the power reduction.

Since HyperQ is a combination of software and hardware scheduling (more so on Kepler/Maxwell/Maxwell 2 than on GCN), you can make the argument that, compared to GCN, it is done largely in software, no?


Power usage doesn't seem to fit that, because if Maxwell 2 had more cache and register space allocated to each of its AWS and SMM units, it could hold more queued work and stored instructions, and cache doesn't take up much die area; it packs far more densely than, say, a scalar ALU. What is possible, but still hard to say (we will find out as more DX12 games come out), is how far past the queue limits a workload has to go before it bottlenecks, and how big a performance hit it takes when that happens. There will be a hit, it will be a big one, and GCN will look very good when it happens.
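For anyone wondering what "async compute" even means at the API level: here's a minimal D3D12 sketch (illustrative only, error handling omitted). The application just creates a compute-only queue next to the graphics queue and synchronizes them with fences; whether the two queues actually execute concurrently, and how much of the scheduling happens in hardware versus in the driver, is exactly what's being argued about above. For reference, Maxwell 2's HyperQ reportedly exposes 1 graphics + 31 compute queues, while GCN's ACEs reportedly expose up to 64 compute queues.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create one graphics (direct) queue and one compute-only queue.
// D3D12 only guarantees the queues exist; concurrent execution is
// up to the hardware/driver scheduler.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfx,
                  ComPtr<ID3D12CommandQueue>& compute)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfx));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&compute));
}

// Cross-queue sync is done with fences: the graphics queue waits on the
// GPU timeline until the compute queue signals that its work is done.
void SyncQueues(ID3D12CommandQueue* compute, ID3D12CommandQueue* gfx,
                ID3D12Fence* fence, UINT64 value)
{
    compute->Signal(fence, value); // GPU-side signal after compute work
    gfx->Wait(fence, value);       // graphics stalls until the signal
}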
 
Not for nothing.

nVIDIA implements Asynchronous Compute through a software workaround in the driver. In other words... they don't have the hardware for it.

David Kanter just commented on it here (in relation to VR): https://www.reddit.com/r/hardware/comments/3jnoa9/david_kanter_microprocessor_analyst_on/

Who knows how this implementation will work but according to Occulus Rift, probably catastrophic.

The rage was required in order to get nVIDIA to admit the last piece of the puzzle required. Basically nVIDIA admitted to what I stated in my original post on the topic.

Hey man it was a good run, creating all those new accounts on multiple forums on the same day and spamming the same FUD thread to all of them. You got, what, 10 days of mileage out of it? Not a bad pull. 'Grats

Now there's a nice long weekend to recoup and recover and plan that next brilliant attack.
 
Do you guys remember when Reddit was much more factual than emotional? lol
 
Do you guys remember when Reddit was much more factual than emotional? lol

The whole internet got emotional on this topic: YouTube, tech review site comment sections, forums, you name it.

Even social media (Twitter and Facebook). I think people will calm down now. Some may pick AMD cards for their new builds, others will still pick nVIDIA cards, and others will sit on what they have now (like me) and wait for Pascal and Greenland.

What I do know is that DX12 gaming, for the most part, will be enjoyable with my dual Hawaii configuration. They should hold me over until the new GPUs release. Explicit Multi-Adapter should give me Eyefinity support as well, and if a game uses Asynchronous Compute, that should help even more.

Point being... no upgrade for me in 3-4 years on the GPU front. That's pretty insane.
 
Well, I wasn't talking about just this. In general you have many people judging others very quickly, almost like a playground popularity contest over who gets picked first...
 
So... a software workaround for nVidia. I want to see the limits of this workaround.

Apparently my ol' 280X CrossFire setup still has a lot of performance left in it; next gen it will be.
 
Hey man it was a good run, creating all those new accounts on multiple forums on the same day and spamming the same FUD thread to all of them. You got, what, 10 days of mileage out of it? Not a bad pull. 'Grats

Now there's a nice long weekend to recoup and recover and plan that next brilliant attack.

Someone's angry. You do realize that the lack of Asynchronous Compute wasn't my theory, right? That's something which disproved my theory...

The developer at Oxide claimed that, "afaik, Maxwell doesn't support Asynchronous Compute".

I made a big deal out of it in order to get nVIDIA to acknowledge the issue. It has been my goal from day 1, if you follow my many postings on overclock.net.

It remains to be seen what the final outcome will be. But if what David Kanter has stated is true... don't expect a miracle under high Asynchronous Compute workloads because at that point my theory is valid.

Oxide's Ashes of the Singularity makes mild use of Asynchronous Compute (Glare and Lighting). So no, my theory wasn't FUD.

See here: https://www.youtube.com/watch?v=tTVeZlwn9W8&feature=youtu.be&t=1h21m35s

Go to 1:18:00

Or read here: http://www.dsogaming.com/news/oculu...-is-best-on-amd-nvidia-possibly-catastrophic/

Swallow your anger.
 

The only thing the developer said was:

We actually just chatted with Nvidia about Async Compute, indeed the driver hasn't fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We'll keep everyone posted as we learn more.

I don't know how this can be interpreted as "nVidia does support Async Compute". All it says is that nVidia has acknowledged the problem on their side and is working on it. We don't know when it will become functional or how efficient it will be.
 
Well, I wasn't talking about just this. In general you have many people judging others very quickly, almost like a playground popularity contest over who gets picked first...

Tell me about it.... :(
 
Any flak you get is fully deserved.

You made your own bed...:rolleyes:

In a few months' time... you'll be apologizing. You can't see it now... because you still live in a DX11 world with a predominantly serial GPU architecture.

That's fine... I can wait.

Oh and... in case you weren't aware... most game developers are partnering up with AMD throughout 2015/2016.

Just a coincidence ;)

It's a fact.. go and verify it if you'd like.
 
So basically nvidia has had the source code for a year-plus.

They were aware that Oxide was pushing async technology very hard. Instead of fixing their driver to fully support async, they actively discouraged Oxide from featuring it in their benchmark.

Fast forward 1 year, nvidia still says it supports async, but that it is a driver issue.


So now the question is: when will it not be a driver issue?
 
This issue has never been about Fiji vs. Maxwell 2.


The real question is whether Pascal offers async at the hardware level, and whether AMD went all-in on DX12 when designing Arctic Islands.


Any delay in 16nm TSMC production would be amazing for AMD.
 
Err, I wouldn't take Hallock's word for anything lol, he seems to make a lot of mistakes even on AMD's end ;). Remember, he is a marketer, and marketers will say anything. Very dirty, dirty, unclean. lol.
 
This issue has never been about Fiji vs. Maxwell 2.


The real question is whether Pascal offers async at the hardware level, and whether AMD went all-in on DX12 when designing Arctic Islands.


Any delay in 16nm TSMC production would be amazing for AMD.


Theoretically yes, up to a certain point, and up to that point it should actually give better results. It will all depend on the program.

As for Pascal, it will probably have it in spades, just like tessellation on Fermi; doing different precisions at the same time will require it.
 
Man nVidia has dropped the ball on their drivers a lot it seems.

I like how the title is so misleading. It should say "nVidia enabled async mode in their drivers when it supposedly wasn't ready; Oxide is nice enough to give us an update since nV won't." Why are you trying to paint Oxide as the bad guys? This was entirely a poor driver issue.

Maybe it's even the cause of Ark: Survival Evolved's DX12 hold-off.
 
So basically nvidia has had the source code for a year-plus.

They were aware that Oxide was pushing async technology very hard. Instead of fixing their driver to fully support async, they actively discouraged Oxide from featuring it in their benchmark.

Fast forward 1 year, nvidia still says it supports async, but that it is a driver issue.


So now the question is: when will it not be a driver issue?


Fiji seems to have similar problems to Maxwell 2 (to a lesser degree), and it's probably drivers.
 
Man nVidia has dropped the ball on their drivers a lot it seems.

I like how the title is so misleading. It should say "nVidia enabled async mode in their drivers when it supposedly wasn't ready; Oxide is nice enough to give us an update since nV won't." Why are you trying to paint Oxide as the bad guys? This was entirely a poor driver issue.

Maybe it's even the cause of Ark: Survival Evolved's DX12 hold-off.

Yes, that's what I think. It's something I also discussed over at overclock.net.

So now it is about VR? I am sure there will be a driver for that as well lol.

It's the same issue... see here: https://developer.nvidia.com/sites/...works/vr/GameWorks_VR_2015_Final_handouts.pdf
Page 31
 
So now it is about VR? I am sure there will be a driver for that as well lol.
VR is extremely latency-dependent, more so than gaming on a screen, and asynchronous shaders are important for keeping latency as low as possible. So yes, this is a key feature for the VR experience to be usable: if you can't maintain a high frame rate, typically 90 Hz or above, the experience breaks down and can even make people sick.
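To put numbers on that: at 90 Hz the whole frame budget is only about 11.1 ms, so any delay before a high-priority job (like asynchronous timewarp) can preempt a long-running draw call eats a big slice of it. A back-of-the-envelope sketch (the 5 ms preemption figure is an assumption for illustration, not a measurement):

#include <cstdio>

int main()
{
    const double refresh_hz = 90.0;                // typical HMD refresh rate
    const double frame_ms   = 1000.0 / refresh_hz; // ~11.1 ms per frame
    // Hypothetical delay before a high-priority task (e.g. timewarp)
    // can preempt a long-running draw call -- assumed, not measured:
    const double preempt_ms = 5.0;
    std::printf("frame budget: %.1f ms; left after preemption delay: %.1f ms\n",
                frame_ms, frame_ms - preempt_ms);
    return 0;
}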
 
Yes, when I bought my 980 Ti cards I was thinking about the best possible experience on unreleased VR products, since I am stupid like that.
Also, we have demonstrated proof that Fiji is going to be super fast when VR is released, since it will benefit from async shaders, and AMD is so amazing in terms of their drivers and support, especially when it comes to DX12 and VR.
Somehow it took them 6 months to fix Far Cry 4, but we don't talk about that lol.
 
Yes, when I bought my 980 Ti cards I was thinking about the best possible experience on unreleased VR products, since I am stupid like that.
Also, we have demonstrated proof that Fiji is going to be super fast when VR is released, since it will benefit from async shaders, and AMD is so amazing in terms of their drivers and support, especially when it comes to DX12 and VR.
Somehow it took them 6 months to fix Far Cry 4, but we don't talk about that lol.

It's not about AMD vs. nVIDIA. That's the way you've been conditioned to see things. It's about innovation in a stagnant market. It's about holding companies to account. Doesn't matter which company it is. Do you want monopoly? Monopoly allows companies to get away with a lot of bad conduct. Conduct which hurts consumers in the end. It hurts you whether you see it or not. It's the invisible hand that guides the market turning into a fist pounding on innovation.

You can still be an nVIDIA fan, still buy their products... only their products will be better. You gain from this. I fail to see how you can't see that. This will help you in the end.

As for AMD and Far Cry 4, when I launch my website I will cover that sort of thing with the same dedication I covered this last fiasco. I won't make a simple news posting and let it slide. I'll pick the big issues and work until they're fixed. You can count on it. I'm really serious, you have no idea.
 
Just like any feature, let's let the game performance results tell us how cards line up. At the end of the day, it is all about the gameplay experience; it will all come out in the end which cards are delivering a better experience for the money.

I need to find out if Async is actually in the DX12 spec. Some say yes, others say no. There is too much misinformation at this time.
 
It's not about AMD vs. nVIDIA. That's the way you've been conditioned to see things. It's about innovation in a stagnant market. It's about holding companies to account. Doesn't matter which company it is. Do you want monopoly? Monopoly allows companies to get away with a lot of bad conduct. Conduct which hurts consumers in the end. It hurts you whether you see it or not. It's the invisible hand that guides the market turning into a fist pounding on innovation.

You can still be an nVIDIA fan, still buy their products... only their products will be better. You gain from this. I fail to see how you can't see that. This will help you in the end.

As for AMD and Far Cry 4, when I launch my website I will cover that sort of thing with the same dedication I covered this last fiasco. I won't make a simple news posting and let it slide. I'll pick the big issues and work until they're fixed. You can count on it. I'm really serious, you have no idea.

The only bad conduct around here is fabricating fake controversies based on half-baked benchmark results.
 
The only bad conduct around here is fabricating fake controversies based on half-baked benchmark results.

And misinformation... the DX12 spec and tiers are so muddy right now; numbers and info are being thrown around as fact.
 
Nvidia made only one statement, saying that Oxide doesn't know what they are doing, and that has been clarified by Oxide's recent statement.

The bad conduct I saw was everyone making shit up without any proof, and AMD's marketing guy acting hilariously stupid (as has been the case with all the AMD marketing I have seen recently).

If Nvidia doesn't have DX12-ready drivers right now, that seems OK, since there is no DX12 game out right now.
 
Why are you trying to paint Oxide as the bad guys? This was entirely a poor driver issue.

Nvidia software devs are probably a bit busy at this time as a lot of significant events are all coming together, such as the release of Windows 10 and SteamOS / Steam Machines, DirectX 12 and Vulkan, not to mention writing drivers for their next-gen Geforce and Tegra products, let alone all of the ordinary stuff they have to do like support game releases on Windows and updating OpenGL and working on making their driver compatible with EGL display servers. I guess they figured that working on a pre-alpha of a game that nobody is going to play from a company nobody heard of sort of fell through the cracks.

A game which, if originally benchmarked Fury X vs. 980 Ti, would not have registered a blip in the tech press. But, by "coincidence", it just so happened that the 290X was benchmarked everywhere.
 
Man nVidia has dropped the ball on their drivers a lot it seems.

I like how the title is so misleading. It should say "nVidia enabled async mode in their drivers when it supposedly wasn't ready; Oxide is nice enough to give us an update since nV won't." Why are you trying to paint Oxide as the bad guys? This was entirely a poor driver issue.

Maybe it's even the cause of Ark: Survival Evolved's DX12 hold-off.
Considering exactly zero released games use it, I'd hardly call it "dropping the ball".
 
And misinformation... the DX12 spec and tiers are so muddy right now; numbers and info are being thrown around as fact.

The DirectX definitions are kind of backwards. People are looking at it like MS defines the spec, and then the IHVs have to design the hardware to meet the spec. That's backwards. The hardware engineers create the best GPU designs for their customers' needs. The job of the API is to allow a programming mechanism to access the features of the hardware. The spec simply defines the manner in which the driver exposes the functions of the API.

In theory, a rendering API should expose far more features than what a particular architecture is going to implement. That's just expected. E.g., Kepler has bindless textures but not some of the other crap that's in DX12.

And two years from now there will be a new GPU with a fancy feature that isn't in DX12. Which brings up a problem with DX: it's not extensible. You'll need a DX 12.1 (or whatever) and that means upgrading to Windows 11.
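Which is why, in practice, an engine shouldn't trust spec checklists at all; it should ask the driver at runtime what it actually exposes. A minimal D3D12 sketch (illustrative, error handling omitted); note that async compute itself has no capability bit, the API only guarantees the queue types exist:

#include <cstdio>
#include <d3d12.h>

void PrintCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts))))
    {
        // Tiers describe how much of each optional feature this
        // GPU/driver combination exposes through the API.
        std::printf("Resource binding tier: %d\n", opts.ResourceBindingTier);
        std::printf("Tiled resources tier:  %d\n", opts.TiledResourcesTier);
    }
}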
 
It's amazing how emotional people are getting over a game not yet released, published by the makers of WindowBlinds. This kind of bickering and fanboyism reminds me of when the Xbox 360 and PS3 came out. PC gaming, while on the verge of possibly becoming bigger than ever, is currently in a sad state, especially community-wise.
 
Just like any feature, let's let the game performance results tell us how cards line up. At the end of the day, it is all about the gameplay experience; it will all come out in the end which cards are delivering a better experience for the money.

I need to find out if Async is actually in the DX12 spec. Some say yes, others say no. There is too much misinformation at this time.

That's true... but now there's interest in it. Maybe you could write an article and inform everyone about all the features, which cards support them, etc.

Would get plenty of views.
 
Supported specifications don't equal gaming performance; just keep that in mind.

Plenty of GPUs in the past have supported plenty of specifications, only to be the slower GPU in games.

Don't let spec charts dictate whether a GPU is good or not in a given game.
 
While this currently doesn't make a difference in the games we are playing, I don't see why issues like this should be swept under the rug either, as some of you seem to want. As a hardware enthusiast, I'm pretty interested in this feature, since it seems to be worthwhile on dedicated consoles, and if it runs better on AMD, that might give some of the cards in my house a longer life. I'm happy with that. If it doesn't pan out, or if by the time it matters NVIDIA has a better compute shader engine, then I'll upgrade. But don't give NVIDIA a free pass; at least push them to clarify how this works.
 
While this currently doesn't make a difference in the games we are playing, I don't see why issues like this should be swept under the rug either, as some of you seem to want. As a hardware enthusiast, I'm pretty interested in this feature, since it seems to be worthwhile on dedicated consoles, and if it runs better on AMD, that might give some of the cards in my house a longer life. I'm happy with that. If it doesn't pan out, or if by the time it matters NVIDIA has a better compute shader engine, then I'll upgrade. But don't give NVIDIA a free pass; at least push them to clarify how this works.

How dare you? Didn't you know you are on Nvidiaforums, aka geforceforums? Just kidding. hardforums has become a green circlejerk.
 