Ashes of the Singularity Day 1 Benchmark Preview @ [H]

It does not matter if I confuse the feature set with the API. Feature set 11.1 is part of DX12, and it is also as far as AMD 280 cards are compatible or capable, which makes them sort of half baked. They are not compatible with feature set 12 or 12.1. That's where the argument started. Do we know up to what feature set Ashes of the Singularity exploits DX12? I'm confident that it is beyond feature set 11.1.
Pendragon1 refuses to agree on that.
It was as I stated earlier: you just misspoke.
You both forget that DX12 is not supported by 79xx or 280 cards, which are essentially the same.
I'm going to assume, based on his posts, that Pendragon thought exactly the same as I did: that you were speaking of the API and NOT the feature level. If you're speaking of the feature level rather than the API in the quote above, then everyone is in agreement.

GCN 1.0 supports DX12, but doesn't support feature levels 12_0 or 12_1, it only supports up to feature level 11_1.

EDIT: I'll even go so far as to agree with you: GCN 1.0's DX12 support IS half baked. I agree there also :p
 
It does not matter if I confuse the feature set with the API. Feature set 11.1 is part of DX12, and it is also as far as AMD 280 cards are compatible or capable, which makes them sort of half baked. They are not compatible with feature set 12 or 12.1. That's where the argument started. Do we know up to what feature set Ashes of the Singularity exploits DX12? I'm confident that it is beyond feature set 11.1.
Pendragon1 refuses to agree on that.

You really need to read the documents already linked.
Feature sets at and beyond 12_0 will not become required for years.
 
I don't see CSI's post either. Yes, Oni, you and I are on the same page.

Here's a breakdown (a query sketch follows below):
API: DX11 supports DX11 FLs: 9_1, 9_2, 9_3, 10_0, 10_1, 11_0, 11_1, 12_0, 12_1
API: DX12 supports DX12 FLs: 11_0, 11_1, 12_0, 12_1

edit: quote about DX12
"There are two new feature levels, 12_0 and 12_1, which include some features that are optional on levels 11_0 and 11_1"
 
It does not matter if I confuse the feature set with the API. Feature set 11.1 is part of DX12, and it is also as far as AMD 280 cards are compatible or capable, which makes them sort of half baked. They are not compatible with feature set 12 or 12.1. That's where the argument started. Do we know up to what feature set Ashes of the Singularity exploits DX12? I'm confident that it is beyond feature set 11.1.
Pendragon1 refuses to agree on that.

I did not agree because that was not what I was originally talking about; that is what you tried to change it to.
I was originally (I have stated it several times throughout this back and forth) talking about you claiming that GCN1 is not DX12 compatible!
And like I said before, no, I DO NOT know what DX12 FL version AotS is using, and NEITHER DO YOU!
I do not work for Oxide. I am not a game developer. So I'm not going to speculate.

edit: Oni, yes, GCN1 cards are limited for DX12, but so is every card currently available. ONLY Intel Skylake iGPUs support all features.
 
You really need to read the documents already linked.
Feature sets at and beyond 12_0 will not become required for years.
Of course not. That does not make the AMD 280X capable of feature set 12 or 12.1 :)
 
Of course not. That does not make the AMD 280X capable of feature set 12 or 12.1 :)
It's only you that has a problem with it.
I stated in the post you quoted that there is no need to have FL 12_0 or higher for years.
It doesn't matter.
 
It's only you that has a problem with it.
I stated in the post you quoted that there is no need to have FL 12_0 or higher for years.
It doesn't matter.
I don't have a problem with that. Just pointing it out. Just like we established in this same thread that Nvidia is not so hot at async compute, or at some of feature set 12 of DX12 :D
 
I don't have a problem with that. Just pointing it out. Just like we established in this same thread that Nvidia is not so hot at async compute, or at some of feature set 12 of DX12 :D

You say things that are not correct or don't matter, and you are not humble about any of them when it's pointed out.
Is there something wrong with you?

PS: NVidia doesn't need to be hot at async.
Only AMD needs it to get full performance; finally the playing field is levelling.
The only dilemma is whether programmers use the best methods for each card type.
It's inevitable there will be a performance war based on this kind of difference.
 
PS: NVidia doesn't need to be hot at async.
Only AMD needs it to get full performance; finally the playing field is levelling.
The only dilemma is whether programmers use the best methods for each card type.
It's inevitable there will be a performance war based on this kind of difference.

To be honest, I'm REALLY looking forward to the next line of cards for those exact reasons hahaha. Will AMD come out on top with async, or will it be NV with raw power? Team Red with its technical advantages or Team Green with brute force?! It's like Iron Man vs The Hulk! Hurrah for less-than-perfect (but awesome) analogies! No matter what team wins, we as consumers come out on top :D
 
Async compute is part of the API, not one of the feature levels. Every FL has it.



The reference DirectX 12 API (Feature Level 11_0) has performance-targeted features, while the other two levels deliver graphical improvements, and this is what really matters in improving games visually. Feature level 12_0 comes with Tiled Resources, Typed UAV Access and Bindless Textures support. Feature level 12_1 has Raster Order Views, Conservative Raster and Volume Tiled Resources enabled on the API. We have talked about these features in previous articles detailing the performance improvement, explicit multi-adapter technology and graphical updates. But to enable these new technologies, the hardware built by companies needs to be fully compliant with it.

edit: I've been looking and looking to make sure about the async question, and nowhere that I've found does it say that async is part of the FL. Everything I can find indicates it's part of the API.
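To put the above in code terms, here is a minimal sketch (assuming a D3D12 device has already been created, as in the earlier snippet) of how those optional features are queried as individual caps/tiers rather than as part of a feature level:

// Query the optional-feature caps on an existing ID3D12Device.
// A card can run the DX12 API at FL 11_1 and still report "not supported" here.
D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

// Roughly how the caps map to the features named in the quote:
//   opts.TiledResourcesTier            - Tiled Resources (FL 12_0 needs Tier 2)
//   opts.TypedUAVLoadAdditionalFormats - Typed UAV loads
//   opts.ResourceBindingTier           - "bindless"-style binding (FL 12_0 needs Tier 2)
//   opts.ConservativeRasterizationTier - Conservative Rasterization (FL 12_1)
//   opts.ROVsSupported                 - Rasterizer Ordered Views (FL 12_1)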
 

Attachment: DirectX-12_GeForce-GTX-980-Ti-Support.png
You say things that are not correct or don't matter, and you are not humble about any of them when it's pointed out.
Is there something wrong with you?

PS: NVidia doesn't need to be hot at async.
Only AMD needs it to get full performance; finally the playing field is levelling.
The only dilemma is whether programmers use the best methods for each card type.
It's inevitable there will be a performance war based on this kind of difference.
What does humble have to do with the fact that the AMD 280X does not do feature set 12 of DX12? I guess, according to you, I don't have a right to point this out. That's what's wrong with me :)
 
What does humble have to do with the fact that the AMD 280X does not do feature set 12 of DX12? I guess, according to you, I don't have a right to point this out.
You changed to that stance; it isn't the argument you started.
And even then it's pointless.
 
To be honest, I'm REALLY looking forward to the next line of cards for those exact reasons hahaha. Will AMD come out on top with async, or will it be NV with raw power? Team Red with its technical advantages or Team Green with brute force?! It's like Iron Man vs The Hulk! Hurrah for less-than-perfect (but awesome) analogies! No matter what team wins, we as consumers come out on top :D

It's sure to be interesting :D
[dons winged war helmet]
 
You changed to that stance; it isn't the argument you started.
And even then it's pointless.
If it's pointless, why have you made yourself a part of it?
I don't speak of things that are pointless to me. Since I'm not humble enough for your standards... The idea of the hardware or gaming is pointless to those who are not interested in it. :)
 
If it's pointless, why have you made yourself a part of it?
I don't speak of things that are pointless to me. Since I'm not humble enough for your standards... The idea of the hardware or gaming is pointless to those who are not interested in it. :)
To have some fun with you.

Perhaps you can enlighten us as to why it has value for you?
 
Oye...

The 7970 / 280X cards have 2 ACE units per GPU, so the benefit will be a lot less than on a 290X / 390X card with 8 ACE units! These cards CAN make use of their async scheduler, but having only 2 ACE units means they don't get as much of a benefit (though they don't go negative like Nvidia cards with async enabled). Only by running them in Crossfire, combining 2x 280X cards for 4 ACE units total, do you really start to see the benefits. Which, BTW, is a major reason why 2x 7970/280X cards are able to score in the GREEN on the Valve VR test.

For those needing a DX12 primer, PCGamer has a good, well-broken-down article on the changes involved with DX12. Dan Baker from Oxide (the developer of Ashes) himself helped guide the author of the article on some of the details.
 
To have some fun with you.

Perhaps you can enlighten us as to why it has value for you?
The same reasons it has value for you and many others. How can I enlighten you if I'm not humble? I would not want to step on another toe of yours :)
 
The same reasons it has value for you and many others. How can I enlighten you if I'm not humble? I would not want to step on another toe of yours :)
You have no argument then.

You brought it up as a negative aspect. It is irrelevant to this discussion, as you would know if you had read the linked documents.
This isn't a school playground; I can quote what we have posted to show that you have no point to make and are trolling.
I will make a summary post of your mistakes if you continue.
 
On the plus side, there is one piece of hardware that can do all DX12-related features and options... Intel Skylake.
Now who would have thought they would ever see that :)
And they are involved in pushing Conservative Rasterization/ROV in a couple of games, one being Just Cause 3 and the other F1 2015.
Cheers

Intel integrated graphics on Skylake (and since the Iris models) have really picked their shit up. I have a test rig in at the moment: 6700K, ROG Ranger VIII, no GPU (waiting for Polaris). It's driving 21:9 1440p just fine; even for simple non-gamer stuff like MilkDrop with the settings turned up pretty high, it'll do 25-50 fps no problem... very impressed, along with the new owner.

Let me know if you want me to test anything on it ;) AotS would take me days to download though; that's the only issue.
 
Here's a good video explaining the situation with Nvidia's and AMD's async compute. Simply put, Nvidia is already pretty efficient without async compute. But it does mean that AMD hardware hasn't been fully maxed out in performance.


If you look at his presentation he clearly, but briefly, shows that the nVidia graphics pipeline stalls when asked to perform compute and then picks back up once the compute calculation is finished. Just as clearly, and just as briefly, he shows where GCN performs the compute at the same time, therefore having lower latency and higher fps than the nVidia example. He is absolutely correct that with async compute GCN is using idle shaders. After that, though, all of the theorizing that nVidia doesn't have unused shaders and therefore doesn't gain from async compute because it's already at maximum utilization is unsubstantiated. Just because the 980 Ti is typically faster in DX11 than Fiji doesn't prove this. There are other dissimilarities that can contribute. They don't even run at the same clock speeds. And then the CUs are different designs, different APIs with different overhead altogether, etc... What happens to relative performance when resolution increases? Does AMD all of a sudden gain the usage of the idle CUs?

They are trying to take a negative, that nVidia must stall the graphics pipeline to perform compute, and throw up a smokescreen. Async compute would also improve nVidia's performance if they could use it. Look at nVidia's tessellation performance. They perform tessellation with the CUs. They seem to have plenty of CUs to spare for that task, but we are supposed to believe they don't for compute? This is pure nVidia damage control.
 
If you look at his presentation he clearly, but briefly, shows that the nVidia graphics pipeline stalls when asked to perform compute and then picks back up once the compute calculation is finished. Just as clearly, and just as briefly, he shows where GCN performs the compute at the same time, therefore having lower latency and higher fps than the nVidia example. He is absolutely correct that with async compute GCN is using idle shaders. After that, though, all of the theorizing that nVidia doesn't have unused shaders and therefore doesn't gain from async compute because it's already at maximum utilization is unsubstantiated. Just because the 980 Ti is typically faster in DX11 than Fiji doesn't prove this. There are other dissimilarities that can contribute. They don't even run at the same clock speeds. And then the CUs are different designs, different APIs with different overhead altogether, etc... What happens to relative performance when resolution increases? Does AMD all of a sudden gain the usage of the idle CUs?

They are trying to take a negative, that nVidia must stall the graphics pipeline to perform compute, and throw up a smokescreen. Async compute would also improve nVidia's performance if they could use it. Look at nVidia's tessellation performance. They perform tessellation with the CUs. They seem to have plenty of CUs to spare for that task, but we are supposed to believe they don't for compute? This is pure nVidia damage control.

So what if they don't run at the same clock speeds? You can still batch commands, stall the pipeline, and run them through the compute queue; if you're careful with your batching, the stall time is made up for by the performance gain. This is, of course, assuming Hyper-Q isn't coming back from the dead.

I completely disagree regarding your argument of 'usage'. As I've elucidated earlier, Ashes seems to be a very compute-intensive game, and as a result the in-game performance seems to be a function of fp32 throughput.

In my case, on a 980 Ti, matching the 8.6 TFLOPS of the Fury X results in almost identical performance (1 fps difference at 47 fps, 8.45 vs 8.6 TFLOPS), with the Fury X having async enabled.
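For reference, the back-of-the-envelope math behind that TFLOPS comparison, assuming 2 FLOPs per shader per clock (FMA); the 980 Ti clock here is an assumed overclock chosen to land on the quoted 8.45 figure:

#include <cstdio>

// Theoretical FP32 throughput in TFLOPS: shaders * 2 FLOPs per clock (FMA) * clock.
static double tflops(int shaders, double clock_ghz) {
    return 2.0 * shaders * clock_ghz / 1000.0;
}

int main() {
    std::printf("Fury X: %.2f TFLOPS\n", tflops(4096, 1.05)); // 4096 shaders @ 1.05 GHz -> ~8.60
    std::printf("980 Ti: %.2f TFLOPS\n", tflops(2816, 1.50)); // 2816 cores @ ~1.50 GHz (assumed OC) -> ~8.45
    return 0;
}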

You could just as easily argue that at fp32 parity the 980 Ti is faster, and the Fury X needs async to catch up.

Async shaders are an advantage that translates directly into a 10% performance increase at best; an advantage for AMD is not a disadvantage for Nvidia.

I'm still very doubtful Maxwell/Kepler have as much to gain from multi-engine concurrency; the only working example of async shaders on Nvidia is in Fable Legends, and that was coded statically by Lionhead AFAIK.

Edit: As for the presentation you're referring to, assuming this is Ashes we're talking about, have you seen the DX12 presentation from GDC? It is made eminently clear that you should not, and cannot if you want your shit to perform, distribute work across queues in the same way for AMD vs Nvidia. You can always argue (and I'll agree) that in this respect coding for AMD is easier, but to use an example of code that runs badly on NV hardware by design, and its expected result (stalls!), to conclude that Nvidia hardware is lacking is, well... a crock of shit.

You will not see differences based on the type of game genre; that makes no sense. Graphics workload is greater in FPSs, as you tend to use much higher-poly models, higher texture resolutions, etc. If anything it should favor AMD, not the opposite.


To this fact, the DX11 path on nV is either running very poorly in AotS, or AMD is just optimized; either way, it shows.
When I first read your comments regarding performance and different game genres I was a little lost. I'm used to large-scale RTS games running worse than other games; this made particular sense under DX9 and DX11.

A very long time ago I played Battle for Middle Earth on an old, badly cooled school laptop, and the CPU exploded out of its socket. Fun stuff :p

Under DX12 I'm not so sure, since draw calls are no longer much of a limitation.

Some part of my brain still expects an FPS to perform differently from an RTS (even assuming they run the same shaders), but I can't quite justify my expectation :p
 
I also really want to remind everyone involved in the discussion that we're talking about something WE KNOW WAS WORKING IN DX11.

Leaving CUDA aside (concurrent kernel execution most certainly works), all the games that used GameWorks used CUDA (which is compute, in case you're wondering) and ran it concurrently (possibly even in parallel) with the graphics loads, and Nvidia's driver was multithreading as is.

If DX12 doesn't bring changes in IQ (it doesn't) and it kills off Hyper-Q (it seems to), why the fuck would I ever choose DX12 over 11 in the foreseeable future? Seriously? What do I get from it :p


Async compute is part of the API, not one of the feature levels. Every FL has it.


The reference DirectX 12 API (Feature Level 11_0) has performance-targeted features, while the other two levels deliver graphical improvements, and this is what really matters in improving games visually. Feature level 12_0 comes with Tiled Resources, Typed UAV Access and Bindless Textures support. Feature level 12_1 has Raster Order Views, Conservative Raster and Volume Tiled Resources enabled on the API. We have talked about these features in previous articles detailing the performance improvement, explicit multi-adapter technology and graphical updates. But to enable these new technologies, the hardware built by companies needs to be fully compliant with it.


edit: I've been looking and looking to make sure about the async question, and nowhere that I've found does it say that async is part of the FL. Everything I can find indicates it's part of the API.


Async shaders aren't part of the API, nor any feature level, nor any spec.

The API supports asynchronous execution and multi-engine; combining the two, you get asynchronous, concurrent execution of graphics + compute, which AMD has called 'async' because marketing.
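As a minimal sketch of what 'multi-engine' looks like at the API level (device creation omitted; how concurrently the two queues actually run is up to the hardware and driver):

// Two independent command queues on the same device: one graphics (direct), one compute.
// Work submitted to them MAY overlap; the API only guarantees whatever ordering
// you enforce yourself with fences, not parallel execution.
D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;

D3D12_COMMAND_QUEUE_DESC computeDesc = {};
computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

// Cross-queue synchronisation is explicit, e.g. computeQueue->Wait(fence.Get(), fenceValue);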
 
You have no argument then.

You brought it up as a negative aspect. It is irrelevant to this discussion, as you would know if you had read the linked documents.
This isn't a school playground; I can quote what we have posted to show that you have no point to make and are trolling.
I will make a summary post of your mistakes if you continue.
It's not a negative. The only one who wants to have an argument is you. Need I remind you that this whole comparison is DX11 vs 12, including the tech from both camps? Humor me... :)
 
So what if they don't run at the same clock speeds? You can still batch commands, stall the pipeline, and run them through the compute queue; if you're careful with your batching, the stall time is made up for by the performance gain. This is, of course, assuming Hyper-Q isn't coming back from the dead.

I completely disagree regarding your argument of 'usage'. As I've elucidated earlier, Ashes seems to be a very compute-intensive game, and as a result the in-game performance seems to be a function of fp32 throughput.

In my case, on a 980 Ti, matching the 8.6 TFLOPS of the Fury X results in almost identical performance (1 fps difference at 47 fps, 8.45 vs 8.6 TFLOPS), with the Fury X having async enabled.

You could just as easily argue that at fp32 parity the 980 Ti is faster, and the Fury X needs async to catch up.

Async shaders are an advantage that translates directly into a 10% performance increase at best; an advantage for AMD is not a disadvantage for Nvidia.

I'm still very doubtful Maxwell/Kepler have as much to gain from multi-engine concurrency; the only working example of async shaders on Nvidia is in Fable Legends, and that was coded statically by Lionhead AFAIK.

Edit: As for the presentation you're referring to, assuming this is Ashes we're talking about, have you seen the DX12 presentation from GDC? It is made eminently clear that you should not, and cannot if you want your shit to perform, distribute work across queues in the same way for AMD vs Nvidia. You can always argue (and I'll agree) that in this respect coding for AMD is easier, but to use an example of code that runs badly on NV hardware by design, and its expected result (stalls!), to conclude that Nvidia hardware is lacking is, well... a crock of shit.

If you didn't watch the link in the post I was responding to, I can understand why you seem to have misinterpreted everything I said.
 
If you look at his presentation he clearly, but briefly, shows that the nVidia graphics pipeline stalls when asked to perform compute and then picks back up once the compute calculation is finished. Just as clearly, and just as briefly, he shows where GCN performs the compute at the same time, therefore having lower latency and higher fps than the nVidia example. He is absolutely correct that with async compute GCN is using idle shaders. After that, though, all of the theorizing that nVidia doesn't have unused shaders and therefore doesn't gain from async compute because it's already at maximum utilization is unsubstantiated. Just because the 980 Ti is typically faster in DX11 than Fiji doesn't prove this. There are other dissimilarities that can contribute. They don't even run at the same clock speeds. And then the CUs are different designs, different APIs with different overhead altogether, etc... What happens to relative performance when resolution increases? Does AMD all of a sudden gain the usage of the idle CUs?

They are trying to take a negative, that nVidia must stall the graphics pipeline to perform compute, and throw up a smokescreen. Async compute would also improve nVidia's performance if they could use it. Look at nVidia's tessellation performance. They perform tessellation with the CUs. They seem to have plenty of CUs to spare for that task, but we are supposed to believe they don't for compute? This is pure nVidia damage control.


Actually, the graphs used were made by AMD, lol, to show how a context switch works; they weren't saying that is what is happening. The author used them to show what a context switch would do to kernel execution times.

PS, nV doesn't use compute units to do tessellation!

Where do you guys get your information? They use the compute units to do lighting calculations after tessellation is done.
 
When did the PS4 design, the Xbox One design and DX12 start in development?

To the best of my knowledge, the PS4 and Xbox One were ready at the end of 2012 / early 2013 for their end-of-2013 launch. Now let's see: concurrent shader execution is part of the Xbox One and DX11.x. That timeline is 3 years to get Pascal ready with concurrent execution. Is that enough time for you? What did you say, 2 to 3 years?

See what I'm saying? Don't try to fit your story to a narrative which you have created; it will fit every time.

And yes, concurrent execution of compute and graphics queues works fine when using CUDA for compute and DX for graphics; it is not activated with DirectCompute and DX, which is the problem.

You really don't get it.

NVidia wouldn't have cared about the platforms getting async. None of us even heard about it until late LAST YEAR, which is when NVidia first responded to it, so it stands to reason THAT is when the 2-3 year clock STARTS.

You keep finding reasons to defend team green. It's grasping at straws and a bad look for you.

And before you go there, I don't see AMD as any great shakes either - they may have planned this, but they could have just stumbled into this as a happy accident as well, and if they did, I don't exactly have the utmost faith in them to capitalize on or keep the advantage, let alone press it, for long. But they do finally seem to have some very good leadership in the graphics division, so we'll see.

That said, your attempts are coming off a bit, well, desperate. The clock for Nvidia and async by all reasonable accounts would have started last fall - well past when Pascal was being prepped for mass manufacture. So knock off the revisionist history to win a pissing contest.
 
I stated I can fit a story to your narrative, and I did. Does that make it true? Maybe, maybe not; that is the point. Don't try to fit a story to a narrative like you did; I flipped it to my story, and it will seem true every time. I proved that when I fit my story to your narrative. There is no defending here, it's right down the middle. If you are seeing sides, that's your problem.
 
Actually, the graphs used were made by AMD, lol, to show how a context switch works; they weren't saying that is what is happening. The author used them to show what a context switch would do to kernel execution times.

PS, nV doesn't use compute units to do tessellation!

Where do you guys get your information? They use the compute units to do lighting calculations after tessellation is done.

AMD can make up for having lower tessellation performance by running it as a compute shader (am I the only one who hates the terminology? Distinguishing between 'graphics' and 'shaders', wtf). My point is this is because of multi-engine concurrency; you can do it in a synchronous, concurrent manner as well. The word 'async' should be excised from graphics marketing history; I've never seen such a widespread misuse of a word in a technical context in my life. This is why I'm doubly confused when someone says "maybe Nvidia can make up for lack of async by their usual brute-force methods". Brute-force methods?! If anything, using compute to do CR and tessellation is a brute-force method, or have you all agreed to change the meaning of the term brute force without telling me :p

I'm not saying one method is better/worse than the other; using CR as an example, if AMD can make up for the lack of hw CR by doing it via compute without sacrificing resources that were being used anyway, then they can match hw CR performance. Same with tessellation.

Then I hear "Nvidia makes up for lack of hardware with software, emulated DX12, heehee" (the TOTAL opposite of 'Nvidia uses brute force'). Well, wait a minute, this is totally inconsistent; isn't writing compute routines to do geometry work a software solution to a hardware problem? :p

If you're doing anything but exclusively praising AMD's increased performance, you're taking sides :p

Edit:

It just hit me that we went from distinguishing between pixel and vertex work at the hardware level (before Tesla and unified shaders) to distinguishing between the same two things at the software level. Funny, eh?
 
Well, I agree with you that AMD (or even nV) can run tessellation as a compute shader, but that introduces a whole new set of problems, the same problems that made DX10 and geometry shaders a bust.

I agree, async has got to be the worst word for what we are talking about.

Funny thing is, when did AMD or nV ever use "brute force" when it came to new architectures? The only time is when they had issues and had to up the clocks of their CPU.

I agree there is no such thing as emulation of API routines on the CPU, lol. It would be so easy to see the latency and frame-rate drops, it wouldn't even be funny; I would imagine a 50% drop in frames would be a low estimate when doing something like this.
 
Well, I agree with you that AMD (or even nV) can run tessellation as a compute shader, but that introduces a whole new set of problems, the same problems that made DX10 and geometry shaders a bust.

I agree, async has got to be the worst word for what we are talking about.

Funny thing is, when did AMD or nV ever use "brute force" when it came to new architectures? The only time is when they had issues and had to up the clocks of their CPU.

I agree there is no such thing as emulation of API routines on the CPU, lol. It would be so easy to see the latency and frame-rate drops, it wouldn't even be funny; I would imagine a 50% drop in frames would be a low estimate when doing something like this.
"emulated dx12" is a reference to the software scheduler, most people think the issue with 'async' is that nvidia doesn't have hardware scheduling

Which new set of problems? You mean load balancing? What problems were there in DX10 with geometry shaders?

Let's say in a particular shadowing implementation that uses CR, Nvidia with hw CR can do the calculation for every frame in 1000 clocks, while using fp32 compute AMD can do it in 1300 clocks. Will those 300 extra clocks be "stolen" from something else, thus degrading performance? This is what I mean.
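As a hypothetical sketch of how an engine might make that choice (only the cap query is real D3D12; the branch contents are placeholders):

// Pick a rasterization path based on the reported caps.
D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

const bool hasHwCR =
    opts.ConservativeRasterizationTier != D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;

if (hasHwCR) {
    // Hardware path: set ConservativeRaster = D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON
    // in the PSO's rasterizer state and let the fixed-function hardware do the work.
} else {
    // Fallback path: a compute-shader implementation, whose extra clocks come out of
    // the same FP32 budget the rest of the frame would otherwise use.
}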
 
Yeah, the problem is squarely on the scheduling side for nV, as the GMU is not functional during certain times.

What happened when using GSs in DX10 is that the pipeline would stall when the newly created vertices were needed by another block of shaders that were working on the lighting information. This is why MS created the fixed-function hull shader, so that would never happen again as long as the hull shader isn't overtasked or has to wait on geometry processing in the shader array (this is what is currently happening with AMD and high tessellation: its hull shaders are waiting on the shader array to finish up geometry processing).

Hmm, I haven't looked at CR that much yet, but I don't think it will happen that way; I would presume CR would just be batched into whatever slots are open and scheduled like other compute shaders.
 
Yeah, the problem is squarely on the scheduling side for nV, as the GMU is not functional during certain times.

What happened when using GSs in DX10 is that the pipeline would stall when the newly created vertices were needed by another block of shaders that were working on the lighting information. This is why MS created the fixed-function hull shader, so that would never happen again as long as the hull shader isn't overtasked or has to wait on geometry processing in the shader array (this is what is currently happening with AMD and high tessellation: its hull shaders are waiting on the shader array to finish up geometry processing).

Hmm, I haven't looked at CR that much yet, but I don't think it will happen that way; I would presume CR would just be batched into whatever slots are open and scheduled like other compute shaders.
Yeah, but those open slots would presumably be used by something else otherwise, you know what I mean?

https://devtalk.nvidia.com/default/...?ClickID=bydzvvvqelnu6eeemvzsm1qngqynqz6eqgyg

This thread is pretty interesting, also slightly outdated
 
I stated I can fit a story to your narrative, and I did. Does that make it true? Maybe, maybe not; that is the point. Don't try to fit a story to a narrative like you did; I flipped it to my story, and it will seem true every time. I proved that when I fit my story to your narrative. There is no defending here, it's right down the middle. If you are seeing sides, that's your problem.


Strawman response, huh?

Typical.
 
Yeah, but those open slots would presumably be used by something else otherwise, you know what I mean?

https://devtalk.nvidia.com/default/...?ClickID=bydzvvvqelnu6eeemvzsm1qngqynqz6eqgyg

This thread is pretty interesting, also slightly outdated

It will all depend on how the shader is written
Strawman response, huh?

Typical.


That is not a strawman response; get your words right, it's a narrative paradigm. It's looking at a story from different angles to assert what is real and what is not in the eyes of the storyteller. In this case I looked at it one way, and you looked at it another way. In both our eyes our views and the events appear real, but in fact it's the point of view that skews the reality.
 
It will all depend on how the shader is written



That is not a strawman response; get your words right, it's a narrative paradigm. It's looking at a story from different angles to assert what is real and what is not in the eyes of the storyteller. In this case I looked at it one way, and you looked at it another way. In both our eyes our views and the events appear real, but in fact it's the point of view that skews the reality.


The way I looked at it was from the information available, and I formed an opinion based on what was available, even with the caveat of "we need to wait and see", but everything I put forth was based on logic from what has been given.

You put forth a narrative that disregards a bunch of stuff that was out there "just to prove a point", and it was poorly executed. And it was an attempt to change the argument from NVidia potentially missing the boat on async for a span of a few years to more of a commentary on my commentary, which, yes, would be a strawman, as it is an attempt to take something that, based on what we know, you cannot refute, and change it into an argument you think you can win, while trying to be persuasive about it being the same argument, when it is in fact not. In other words, a CLASSIC strawman.

Stop the Bullcrap.
 
It will all depend on how the shader is written



That is not a strawman response; get your words right, it's a narrative paradigm. It's looking at a story from different angles to assert what is real and what is not in the eyes of the storyteller. In this case I looked at it one way, and you looked at it another way. In both our eyes our views and the events appear real, but in fact it's the point of view that skews the reality.
This is also the same point I'm making regarding 'async' and the use of compute shaders to do tessellation, for example. What matters is performance delivered to the end user, and Nvidia is by no means in a bad position; the moment AMD improved, everyone decided Nvidia is screwed, and it's stupid.
 
Still interested in R7 360 vs R7 370 and GTX 950, to find out how much async closes the gap...

Also R9 380X vs GTX 970...
 