DX12 vs DX11: The Plot Chickenz! HITMAN Lead Dev: DX12 Possible After Ditching DX11

Okay, sure, some random Chinese OEM PSU won't be worth crap, but any normal brand-name PSU would have no issues.

Even branded PSUs aren't up to snuff in many cases, especially in the budget-conscious range. Most PSUs I've seen from Corsair targeted at the low/mid end are just crap. Same with FSP, Cooler Master, Zalman, and so on.
 
The world is completely fucked in DX12 for me; all 2D is OK. The game looks normal and runs better under DX11. I have a 7870.

The 4.11 branch seems to have gotten a bunch of fixes; DX12 isn't a garbled mess there.

Yeah, I hadn't been using 'unrealtournament' in the shortcut.

I've been waiting for UT to update to 4.11 for the DX12 improvements. Not sure what version Ark is currently using, but it's been broken for me since the last patch. :p
 
Add another game to the list of borked DX12 games. Wasn't DX12 going to be the shining white knight for AMD? What's the excuse this time?
I see more nVidia supporters trying to devalue it than AMD supporters touting it. So, what's the big concern then if it's not all that?

Also, comparing over-tessellation to async shaders is ridiculous. One improves performance for the hardware that supports it. The other hurts performance for everyone.

While I have no issue with turning it off for nVidia, since they can't support it, I likewise have no issue with reviewers opening up AMD's control panel and reducing tessellation. Yet even when doing "highest playable settings", where the two cards being compared aren't using the same settings anyway, we never see that happen.
 
I think it was a Hallock interview (it may have been another name) where he mentioned that the issue with tessellation in games is that 64x is overkill, because the "triangles" end up smaller than a pixel at typical resolutions. The whole issue I had with The Witcher 3 was the lack of an in-game tessellation setting. I know AMD has an override, but the point is the game should have had one at release.
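As a rough back-of-the-envelope sketch of why 64x is overkill (illustrative numbers only, not from the interview): the useful tessellation factor is bounded by how many pixels an edge actually covers on screen.

```cpp
// Illustrative sketch only: cap the tessellation factor so the subdivided
// triangles don't drop below roughly one pixel. The helper name and numbers
// are hypothetical, not taken from any game or driver.
#include <algorithm>

float MaxUsefulTessFactor(float edgeLengthPixels, float minTrianglePixels = 1.0f)
{
    // A factor of N splits an edge into roughly N segments, so there is no
    // visual benefit past (edge length in pixels) / (desired triangle size).
    float factor = edgeLengthPixels / minTrianglePixels;
    return std::clamp(factor, 1.0f, 64.0f); // 64 is the D3D hardware maximum
}

// Example: an edge covering ~40 pixels gains nothing past a factor of ~40;
// forcing 64x just produces sub-pixel triangles the rasterizer still has to
// process and, without small-triangle culling, throws away.
```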
 
I see more nVidia supporters trying to devalue it than AMD supporters touting it. So, what's the big concern then if it's not all that?

What's that got to do with the fact that everyone was shouting from the rooftops that AMD will own DX12 games since AotS entered alpha/beta status?

I'm not devaluing anything, I'm stating facts. This happened when we transitioned from DX9 to DX10, and from DX10 to DX11. And yet people believed that well-written, well-optimized games would appear from day one just because one dev house made a publicity stunt out of DX12. A dev house that had plenty of experience with DX11 and lots of experience with Mantle, which translates pretty well to DX12.

And now there have been 5 DX12 games released in total, 1 of which runs pretty well on DX12, while the rest range from underwhelming to downright broken. AotS works pretty well, but Quantum Break (AFAIK, I haven't read much about this game since it sucks), GoW, Hitman and Rise of the Tomb Raider all vary between meh and broken on DX12.
 
While there are always a vocal few, it is far from everyone. You are exaggerating, trying to fabricate a situation that doesn't exist. Sure, there are people, and I'm one of them, who believe that AMD has an advantage in DX12. I never believed that all DX12 games were going to be well optimized from day one, etc., as you are claiming. I think most people expect AotS to be one of the better early DX12 releases. And they are working with both AMD and nVidia to offer all gamers the best experience. That's a bad thing, I guess, because nVidia isn't beating AMD?
 

No, it's not a bad thing because NVIDIA isn't beating AMD; it's a bad thing because DX12 runs worse than DX11. I thought this was made very clear.

As for tessellation and culling small (<1 pixel) triangles, Polaris should support this, so it'll be a non-issue.

I never compared async to tessellation; I just brought up the latter because everyone shits on it these days, case in point Relayer claiming it only exists to hurt performance.
 
Also, comparing over-tessellation to async shaders is ridiculous. One improves performance for the hardware that supports it. The other hurts performance for everyone.

Tessellation is in the DX12 spec; async shaders are not. Which one is what again?
While there are always a vocal few, it is far from everyone. You are exaggerating, trying to fabricate a situation that doesn't exist. Sure, there are people, and I'm one of them, who believe that AMD has an advantage in DX12. I never believed that all DX12 games were going to be well optimized from day one, etc., as you are claiming. I think most people expect AotS to be one of the better early DX12 releases. And they are working with both AMD and nVidia to offer all gamers the best experience. That's a bad thing, I guess, because nVidia isn't beating AMD?

AOTS and Hitman both run much faster in DX11 too, you want to explain that? We haven't seen that from previous DX11 titles, ever.
 
Neither AotS nor Hitman suggests AMD has an advantage over NV in DX12. That leaves GoW, Tomb Raider, Quantum Break and DOOM (Vulkan); which of those do you base your conclusion on, relayer?

Edit:
And tessellation does improve performance; it just depends what you compare it to. Compared to using high-poly static models throughout your project, dynamic tessellation yields impressive performance gains, particularly when you can cull triangles smaller than one pixel. It's not an evil scheme; it's a hardware advantage. In that sense it's perfectly analogous to asynchronous shaders (which aren't part of the DX spec, though): they operate in different parts of the pipeline, but they're both hardware features that can improve performance under certain conditions.

Assuming those conditions are met you can, in theory, also get higher IQ for free. Think async shaders being used to apply post-FX at no performance cost, for example (concurrent with other work).

Similarly, tessellation can give you higher IQ for free if shaders are stalling the pipeline.

The biggest differences between the two are:

A) You can't totally make up for geometry performance via proxy techniques, whereas the utilization gained from async concurrent graphics+compute *can* be matched without it.
B) GCN has finer preemption; combined with the ACEs and queue priority, it gives a responsiveness advantage (think VR).

ACEs on GCN can make life easier for developers, to a certain extent. If your argument is that, because console optimization is the priority, those optimizations will be easier to carry over to AMD hardware on PC, then that's a valid point; it would suck, but it's possible.
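For what it's worth, here's a minimal sketch of how the "async" part is actually expressed in D3D12 (assuming a valid `ID3D12Device* device`; error handling omitted): you create a second command queue of type COMPUTE and submit work to it alongside the direct/graphics queue. Whether the two actually overlap on the GPU is entirely up to the hardware and driver.

```cpp
// Minimal sketch: creating a separate compute queue in D3D12.
// `device` is assumed to be a valid ID3D12Device*; error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute-only queue
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;
    desc.Flags    = D3D12_COMMAND_QUEUE_FLAG_NONE;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    // Compute command lists submitted here via ExecuteCommandLists() run
    // alongside the direct (graphics) queue; whether the GPU overlaps them
    // (e.g. via GCN's ACEs) is a hardware/driver decision, not the API's.
    return queue;
}
```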
 
[Image: HitmanPresentation.png]


From Hitman lead render dev presentation at GDC.
 
AND? Nothing new.

Honestly I am more interested in the entire scope. There is entirely too much attention being wasted on fps metrics.

Entire scope of what? What metrics would you rather use to quantify game performance?
 

Well, number 1, async is just like tessellation. Number 2, it again reiterates my point that we won't see good DX12 implementations for a while yet.

Other than that, yep, nothing new.

Outside of the FPS metric, you mean image quality? So far the only game that has a noticeable IQ improvement over the rest is Quantum Break, which is DX12 only, so we can't really compare. What other metrics would you like to focus on?
 
Quantum Break looks like shit
 

Really? From the limited screens and gameplay videos I've seen on YouTube it looked good. Granted it was probably the console version. I haven't seen any reviews for the PC version.
 
Neither AotS nor Hitman suggest AMD has an advantage over NV in DX12

ACEs on gcn can make life easier for developers, to a certain extent
Looking at the hardware, I would say AMD has at least a slight advantage in DX12, as it appears the ACEs can actually accelerate the synchronization/fences. It may very well account for the few % NVIDIA loses when async is enabled. It also seems likely the hardware acceleration will provide lower CPU overhead. Granted, CPU overhead on DX12 is already tiny compared to DX11, but it's still an advantage.

As for actual concurrent execution from async, I'm not sure we've really seen the benefits start to show up yet. Not until a dev accelerates AI and physics that aren't synced to framerate.
 
I don't think the ACEs can do that. However, because the ACEs can have more queues lined up, they give the hardware more work to choose from when there are gaps in the graphics command queue. So, depending on dependencies, that gives more flexibility, and the result will be lower latencies on AMD hardware, which won't happen on nV hardware (well, only to a much lesser extent).
 

ACEs are definitely an advantage; it just doesn't translate into an effective performance advantage for AMD over NVIDIA. And yeah, both AI and physics are good candidates for potential async workloads. What do you mean, it accelerates fences?
When have we ever had GPU-accelerated AI, though? AFAIK the only game to ever use neural-network AI is Democracy 3, lol.

AFAIK GameWorks/PhysX effects using CUDA run asynchronously with respect to the graphics command processor (using the GMU); it's unclear to me how synchronization is resolved, though.

Edit: I think I got what you mean by accelerating fences; you mean sync resolution is faster because it doesn't have to go back to the CPU as it does on NVIDIA, right?

Really? From the limited screens and gameplay videos I've seen on YouTube it looked good. Granted it was probably the console version. I haven't seen any reviews for the PC version.
[Seven Quantum Break screenshots]

Focus on the edges of geometry: look at the branches in bushes, hair, even larger objects like corners of walls with many objects behind them. It's fucking blurry.

I'm at 1440p, so I'm assuming what I'm playing is actually 1080p with fancy temporal upscaling to 1440p, or maybe 900p if it's 4x.
 
When have we ever had GPU-accelerated AI though ?
Pathfinding would be a likely use, and we haven't really had it, which begs the question as to why. As it could be rather long-running, it's the kind of task you wouldn't want in a serial graphics queue: GPU work that doesn't necessarily need to finish within a single frame.

Edit: I think I got what you mean by accelerating fences; you mean sync resolution is faster because it doesn't have to go back to the CPU as it does on NVIDIA, right?
Yeah. The theory is it just sits there waiting for a condition to be met and starts dispatching ASAP. So it would be quicker, as you wouldn't have the CPU polling or responding to an interrupt. In theory it could react within a single cycle without adding the job to the back of the queue, or it could fill a pipeline that had stalled because there was no work left.
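To make the two paths concrete, a rough sketch (names like `computeQueue`, `graphicsQueue` and `fence` are placeholders created elsewhere; error handling omitted): a GPU-side wait stays on the queue, while the CPU-side wait forces a round trip through the host before the dependent work is even submitted.

```cpp
// Illustrative only: two ways to wait on the same fence in D3D12.
// computeQueue/graphicsQueue (ID3D12CommandQueue*) and fence (ID3D12Fence*)
// are assumed to exist already; error handling omitted.
#include <windows.h>
#include <d3d12.h>

void SyncExamples(ID3D12CommandQueue* computeQueue,
                  ID3D12CommandQueue* graphicsQueue,
                  ID3D12Fence* fence)
{
    const UINT64 kFenceValue = 1;

    // The compute queue signals the fence when its work completes.
    computeQueue->Signal(fence, kFenceValue);

    // (a) GPU-side wait: the graphics queue stalls on the fence itself and
    //     resumes the moment the value is reached, with no CPU involvement.
    graphicsQueue->Wait(fence, kFenceValue);

    // (b) CPU-side wait: block the host thread until the fence completes,
    //     then submit the dependent work, an extra round trip through the CPU.
    HANDLE event = CreateEventW(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(kFenceValue, event);
    WaitForSingleObject(event, INFINITE);
    CloseHandle(event);
}
```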
 
I never compared async to tessellation; I just brought up the latter because everyone shits on it these days, case in point Relayer claiming it only exists to hurt performance.

It wasn't you I quoted, was it? Then you take what I said and exaggerate it (again proving the weakness of your position, and 'exaggerate' is putting it mildly; lying might be closer). I NEVER said it only exists to hurt performance. That would depend on how it's used.

Tessellation is in the DX12 spec; async shaders are not. Which one is what again?


AOTS and Hitman both run much faster in DX11 too, you want to explain that? We haven't seen that from previous DX11 titles, ever.

AOTS runs faster in DX11? Not for AMD, so you can hardly blame the API.

And tessellation does improve performance; it just depends what you compare it to.

ACEs on GCN can make life easier for developers, to a certain extent. If your argument is that, because console optimization is the priority, those optimizations will be easier to carry over to AMD hardware on PC, then that's a valid point; it would suck, but it's possible.

Tessellation improves performance if used properly. If overused, it doesn't improve anything. I think we all know what we are talking about here.

It's too early to be certain exactly where GCN is getting its advantages from. I just said that I believe AMD has an advantage in DX12. Sorry if you think it would suck if porting from console to PC were easier on GCN because it's the wrong brand.

Looks like I've annoyed the nVidia defense force here. Relax guys. You can still brag about market share and how awesome margins are for nVidia. ;)
 
That's about how these threads end anyway here I've noticed. Might get a few pages of good discussion and then the NVidia fanbois come in to trash, threadcrap and derail the thread into another AMD/Nvidia fight...
 
Tessellation improves performance if used properly. If overused, it doesn't improve anything. I think we all know what we are talking about here.

Tessellation doesn't improve performance, unless you're comparing it to doing tessellation with a GS or on the CPU, neither of which should ever be used in real-time applications.
 
That's about how these threads end anyway here I've noticed. Might get a few pages of good discussion and then the NVidia fanbois come in to trash, threadcrap and derail the thread into another AMD/Nvidia fight...

Shouldn't you be busy studying math?
 
That's about how these threads end anyway here I've noticed. Might get a few pages of good discussion and then the NVidia fanbois come in to trash, threadcrap and derail the thread into another AMD/Nvidia fight...

Who started it? Here is the 3rd post in this thread:

Unlike all those developers plugging in GameWorks effects rather than programming it themselves. Right?

You calling Creig an nV fanboy? Yes, selectively picking a post that supports your view, which of course is not what happened.
 
It wasn't you I quoted, was it? Then you take what I said and exaggerate it (again proving the weakness of your position, and 'exaggerate' is putting it mildly; lying might be closer). I NEVER said it only exists to hurt performance. That would depend on how it's used.

AOTS runs faster in DX11? Not for AMD, so you can hardly blame the API.

Tessellation improves performance if used properly. If overused, it doesn't improve anything. I think we all know what we are talking about here.

It's too early to be certain exactly where GCN is getting its advantages from. I just said that I believe AMD has an advantage in DX12. Sorry if you think it would suck if porting from console to PC were easier on GCN because it's the wrong brand.

Looks like I've annoyed the nVidia defense force here. Relax guys. You can still brag about market share and how awesome margins are for nVidia. ;)
Nobody is blaming the API; we're blaming the dev implementation.

I realize you believe AMD has an advantage under DX12; what do you want me to do? Falsify the Hitman and AotS results to better fit your belief?

You absolutely did say tessellation hurts performance. You mentioned 'over-tessellation', but it's unclear what you mean by it. Too much async compute load can also be detrimental to performance; what's your point?

What did I exaggerate exactly?

This is what "AMD has an advantage under DX12" entails:

Two GPUs, identical in terms of hardware capabilities (compute, pixel fill, tex fill, etc.), with the AMD one performing better than the NVIDIA one under DX12.

Please. Present your evidence.

Edit: of course it would suck if console optimizations affected performance on NVIDIA GPUs negatively; why would *you* be happy if that were the case? In my experience, people who make unfounded claims and rely on dismissive tactics like "you're an nV fanboy" are usually both uninformed and fanboys themselves.

Razor wasn't saying DX11 performs better than DX12 for AMD; he was saying AMD's DX11 performance is better than NVIDIA's in those games. That's true for Hitman; the Fury X doesn't even perform any better under DX12. For AotS it's not the case; maybe he got mixed up.
 
That's about how these threads end anyway here I've noticed. Might get a few pages of good discussion and then the NVidia fanbois come in to trash, threadcrap and derail the thread into another AMD/Nvidia fight...


Pot, meet kettle. Nothing more pathetic than a fanboi calling out another fanboi and thinking everyone won't notice they're a hypocrite.
 
Razor wasn't saying DX11 performs better than DX12 for AMD; he was saying AMD's DX11 performance is better than NVIDIA's in those games. That's true for Hitman; the Fury X doesn't even perform any better under DX12. For AotS it's not the case; maybe he got mixed up.


Yeah, that is what I was saying. AotS and Hitman both prefer AMD hardware in DX11; I wasn't even talking about DX12.
 
Pot, meet kettle. Nothing more pathetic than a fanboi calling out another fanboi and thinking everyone won't notice they're a hypocrite.

And who in the blue hell are you? Other than someone looking to pick a fight and start another pissing contest.

Try again. And try harder. I am a merc. Always have been since I started building PCs, and always will be. I have bought, enjoyed, and never regretted purchasing both AMD and Nvidia, or for that matter AMD and Intel, and will always go with what gives me the most value for my money.

I favor competition. And you are full of shit.
 

Ah, guess I hit too close to home. Any unbiased user of these forums knows exactly what you are.........
 
Ah, guess I hit too close to home. Any unbiased user of these forums knows exactly what you are.........

I'm not looking to pick a fight. I just call them as I see them, and you are a hypocrite and a fanboi.


Sorry, I've seen your tired bullshit before, including the "close to home" crack. Hell, I even used it a few times in my youth, enough to know what a bullshit copout it is.

You are looking for a fight, former Titan X owner and NVidia fanboi. But you won't find one here, and you won't troll me any further.
 
That's about how these threads end anyway here I've noticed. Might get a few pages of good discussion and then the NVidia fanbois come in to trash, threadcrap and derail the thread into another AMD/Nvidia fight...


Its like I was a prophet... :/
 
Burn.....

Yeah, you got me. I bleed nvidia.......

I guess the quadfire 7990 rig and the trifire 290 rig prior to that didn't exist. Yep, it was all a Dallas dream and I never owned those. I need to lay off the meth.

Keep up the good fight for the ADF.
 
Just like how I had 2 760s in SLI prior, and 2 unlocked 6950s w 6970 shaders before that, and an NVidia card before that?

All day long, homie. All day long. Your schtick impresses no one. But hey, you got one last response out of me, so at least you "won" that, right? After all, that's all you're really after.

In the meantime, I'll just leave this here.

It applies wonderfully to YOUR posts in this thread, and your history, and by the way, also was posted before your first little "pot kettle" salvo.

Game over.
 
Seriously? Are you related to Zion Halcyon by any chance? A 10% gain means 10%. If you're running at 60 fps it's 6... if you're running at 100 fps it's 10...
 
Seriously, it isn't hard to understand: a percentage does not give the complete story. If I am 10% taller than Dave, how tall is Dave? A percentage gives no real information other than a comparison. Also, 10% at 1080p has a much different end result/impact than at 4K.

So drop the inane discussion and the jabs at comments on percentages.
 
A percentage gives the complete story once you have a reference value. At 4K the absolute difference will be smaller, but it will still be 10%. It doesn't matter that at 1080p the perf advantage translates to 30 fps; it's still 10%.

If you are 10% taller than Dave, you are 10% of Dave taller than Dave. Whether Dave is 60 cm tall or 260 cm tall doesn't matter, because your height (performance with async) is in an 11:10 ratio with his (performance without async).

JustReason, please... just... reason.
 
Seriously, it isn't hard to understand: a percentage does not give the complete story. If I am 10% taller than Dave, how tall is Dave? A percentage gives no real information other than a comparison. Also, 10% at 1080p has a much different end result/impact than at 4K.

So drop the inane discussion and the jabs at comments on percentages.

If you're comparing height between two people, does it matter how tall Dave is? The important thing here is that you're 10% taller than Dave, end of story. And yes, a percentage gives comparative info, which is what is being discussed here, i.e. DX11 vs DX12 performance.

10% at 1080p vs 10% at 4K doesn't matter for comparative purposes. Regardless of the situation/scenario, 10% is 10%. If you're asking whether the 10% will result in something more meaningful than just a comparison, that's a different discussion from the one we're having here.
 
OK, pure maths: a percentage gives a relationship between two known numbers or variables, and that relationship takes on different meanings based on what is being contrasted. The percentage by itself means nothing, but with the context of what the denominator and numerator are, it carries a lot of meaning.

Now, having said that, a 10% gain going from 1 fps to 1.1 fps is just as significant as going from 100 fps to 110 fps. The relationship between the numbers is what matters; the numbers by themselves have no meaning or value.
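A trivial illustration of that point (made-up baseline numbers): the absolute gain scales with the baseline, but the ratio is identical in every case.

```cpp
// Toy example: a fixed 10% uplift applied at different (hypothetical) baselines.
#include <cstdio>

int main()
{
    const double gain = 0.10;                        // a 10% uplift
    const double baselines[] = { 1.0, 60.0, 100.0 }; // fps before the uplift

    for (double fps : baselines)
        std::printf("%6.1f fps -> %6.1f fps (ratio %.2f)\n",
                    fps, fps * (1.0 + gain), 1.0 + gain);

    // The absolute gain differs (0.1, 6 and 10 fps),
    // but the ratio is 1.10 every time.
    return 0;
}
```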
 