Blind Test - RX Vega FreeSync vs. GTX 1080 Ti G-Sync

"Frame Times Mean Shit" - you just can't get over it. HardOCP put frame times in perspective with VR testing, where they are relevant for avoiding reprojection - but even there, the testing is always based on user experience, not a single item looked at in isolation using a canned benchmark with zero actual involvement.

Please keep posting stuff like this- it's so easy to make points when you make them for me!

Since you seem to have trouble with them, some definitions are in order: the word objective

This is what Kyle did not do for this test; of course, he's not claiming to do so, and he's controlling as many variables as possible given actual hardware availability. This doesn't make the test bad or wrong, but it does make it limited. Which, incidentally, is what Kyle is claiming it to be. It's not a full review.

Here's another: the word subjective

This is what Kyle did for this test.

See, you keep claiming that the subjective side is more important, but that's only true for the end user. In reviews, objective testing is more important, because that allows end users to make informed decisions. And for that, frametime analysis is what we can use to break down what's going on, and prove what is seen in subjective testing.

Of course, we know why you do this: you seem to think that subjective YouTube reviews that track frames per second are valid, when they're literally the worst source you could possibly use as a reference for gameplay!

To summarize: your argument is that you'd rather use people claiming they 'feel' a certain level of performance, without having reviewed the data that came from that performance, than have people actually look at the data and provide an analysis.

Is the earth flat? Do vaccinations cause autism?

I can't wait to hear your 'opinion'!
 
I'm sorry, where did you see FCAT measurements posted for this setup?
You claim that frame-time measurements don't mean anything, and this test proves that, yet there haven't been any frame-time measurements posted from it.
It might be meaningful if one setup had especially bad frame-time measurements while the other did not, and yet there were no statistically significant differences based on the blind testing.
However DOOM is known to be a particularly well-optimized game, so I would not expect significantly different frame-time results for either GPU.

I don't think this was a particularly well-conceived test, since both setups will have had the game running at or near 100 FPS the majority of the time anyway, rather than ensuring that the fastest setup stayed below 100 FPS at all times.
I had previously said that it would have been fine to continue using DOOM if Kyle had increased the resolution to achieve that, but it turns out that AMD's VSR apparently doesn't work with ultrawide displays, so that would not have been possible.

Can you point to any meaningful examples of this in the last decade?
Outside of rare bugs - which largely affected ATi/AMD cards - I can't think of anything.


That is why I would consider it to be a bad test - especially with the responses from people like noko.
While I agree that there is some merit to subjective testing, I think this could have been set up better.
Selecting displays that use the same panel, ensuring that they were properly calibrated, and ensuring that the fastest system never hit the maximum refresh rate would have been huge improvements to this test.

If you create a bottleneck somewhere in the system that affects your results, it's not a good test - just like when AMD requested that reviewers test Ryzen CPUs at 4K instead of 720p/1080p. Create a GPU bottleneck and the CPU results don't look so bad.
What I am saying is that frame times by themselves, without context, have little meaning. You could have great frame times at 640x480, and that would not indicate at all whether you have a great gaming experience. By themselves = shit. In context they have value.

The test was outstanding! The only differences were the video card and monitor! Yet the better-performing card, the 1080 Ti, did not exceed the experience of the lesser card, RX Vega. For the ultimate gaming experience, a well-rounded look is better than bowing to a single approach.

Sorry that my thoughts are very unclear to you. Will work on it just for you. ;)
 
Please keep posting stuff like this- it's so easy to make points when you make them for me!

Since you seem to have trouble with them, some definitions are in order: the word objective

This is what Kyle did not do for this test; of course, he's not claiming to do so, and he's controlling as many variables as possible given actual hardware availability. This doesn't make the test bad or wrong, but it does make it limited. Which, incidentally, is what Kyle is claiming it to be. It's not a full review.

Here's another: the word subjective

This is what Kyle did for this test.

See, you keep claiming that the subjective side is more important, but that's only true for the end user. In reviews, objective testing is more important, because that allows end users to make informed decisions. And for that, frametime analysis is what we can use to break down what's going on, and prove what is seen in subjective testing.

Of course, we know why you do this: you seem to think that subjective YouTube reviews that track frames per second are valid, when they're literally the worst source you could possibly use as a reference for gameplay!

To summarize: your argument is that you'd rather use people claiming they 'feel' a certain level of performance, without having reviewed the data that came from that performance, than have people actually look at the data and provide an analysis.

Is the earth flat? Do vaccinations cause autism?

I can't wait to hear your 'opinion'!
Not at all. You fail to understand any point I have made. Maybe, just maybe, I am in left field and don't know it. Anyways, it's totally pointless from my view to deal with you.

Many things can be measured besides frame times and could be objective tests:
  • Color
  • Sharpness
  • Texture resolution
  • Loading times
  • Texture loading times
  • Responsiveness as in input delays
  • Rendering artifacts
  • AA (which could be somewhat subjective come to think about it)
  • Synching issues with monitor
I just see HardOCP reflecting all of the above more accurately than anyone else testing. Many sites just run a canned benchmark - not even played or even watched - yet data is extracted into frame times with a "here, here - e-peens go here."

Anyways all objective observations are observed by someone.
 
Not at all. You fail to understand any point I have made.

Oh, we understand them. They're just irrelevant to the discussion at hand. These are things you look for as anomalies, not something that you worry about. Hell, if a good reviewer runs into these kinds of issues, they check against other reviewers' results and get with vendors before publishing- to make sure it's not something on their end first. Because this stuff isn't common for video cards.

So you're literally saying, "don't look at the stuff that matters, this other stuff is more important guyz!", and we're pointing it out.
 
Oh, we understand them. They're just irrelevant to the discussion at hand. These are things you look for as anomalies, not something that you worry about. Hell, if a good reviewer runs into these kinds of issues, they check against other reviewers' results and get with vendors before publishing- to make sure it's not something on their end first. Because this stuff isn't common for video cards.

So you're literally saying, "don't look at the stuff that matters, this other stuff is more important guyz!", and we're pointing it out.
I don't think you understand, actually.

He isn't advocating a single metric, but the addition of one, and explaining why its weight could be considered higher. Frame times outside of context mean little. FPS outside of context means little. That doesn't mean they mean nothing. This test ADDED CONTEXT. It is another variable to be considered. So far neither Kyle nor any other rational being has given any credence to any kind of hanky-panky going on. Monitor differences may have some impact, but guess what... welcome to the real world, where few comparable monitors exist in screen, refresh rate, and size in the context of FreeSync/G-Sync.

Here is where I agree with Noko. A blind test gives real-world feedback: an insight into what another person - and in many blind tests, other PERSONS - sees, and how they then might quantify that experience. In this case the biggest takeaway isn't "AMD WON"; it is how a much stronger card's normal FPS advantage, shown in many a benchmark, became null in the end. This is good news for the majority of the market. The market tends greatly toward the middle of the stack, so this means buyers can get adequate performance on a scale that equals a much more expensive card.

Of course this will not apply to every person out there. Some just have to know what those numbers are, it helps justify their purchase in their eyes. It gives a sense of pride, be it in the hardware or their years of hard work to attain the means by which to pay for such.

So having all the data helps, including tests like these. Your disdain for it is noted, and rightfully so for you. For Noko and myself it adds another data point which helps further with future purchasing decisions. Brent's reviews are great, but I NEVER use Radial Blur or DoF, so does that mean his reviews are crap/pointless/irrelevant? No. It is just another data point to consider, just as this test has shown.
 
So having all the data helps, including tests like these. Your disdain for it is noted, and rightfully so for you.

Subjective testing without objective data is 'interesting'. That's what this is.

Objective data can stand on its own, and is always preferable if you can only have one.

My 'disdain' is your projection. I appreciate Kyle's work and understand the limitations.
 
Subjective testing without objective data is 'interesting'. That's what this is.

Objective data can stand on its own, and is always preferable if you can only have one.

My 'disdain' is your projection. I appreciate Kyle's work and understand the limitations.
Your disdain for these tests in general was stated by you; it's not my projection. You keep projecting Noko's point as being that this test is the only test, as a way to discredit his point, when in fact he only weights this test higher than you do and you disagree - which is fine, it's what debate is for. However, skewing the point so far as to lose his original point, as a way to paint him as a biased observer serving his own agenda, is disingenuous (I'm getting tired of using this word lately, but it seems too appropriate).

Do you wonder how you would choose if you were part of the test? Some guys are cheating by knowing which monitor types exist for each, giving them bias up front. But I wouldn't know them, so I would likely be a great candidate, and I am very curious how I would choose. Hell, this isn't like before where it may be obvious, i.e. screen tearing, stutter and such. This time we have an equalizer, so we have a whole new game. So I get your point that it isn't scientific, but this test did just make all those scientific numbers irrelevant. No one picked a clear winner, so it is safe to assume all things LOOKED virtually equal. So how do you explain the numbers and graphs when in context of this test? They could have picked the best of both, no matter the size or resolution, and we may have wound up with the same "virtually the same" conclusion - albeit maybe the choice would favor the other, or maybe not.
 
Your point? Then explain why six did not see a difference, three thought the AMD setup was better, and only one thought the NV G-Sync rig was better? Frame times, which you describe so well, totally failed to predict the outcome or even the significance of the results. So in other words there is more than just frame times to a good gaming experience for real humans. The answer is more complex than timing variations between frames, in other words. As it was only one test, it does open up good questions. Now repeat "Frame Times mean shit" over and over again :)
Possibly because without training one does not necessarily look at the right variables.
Case in point: it is/was known that the latest id engine with Doom has delayed texture loading on Nvidia GPUs that is noticeable when moving; previous iterations of the engine had the same issue on AMD GPUs instead, to the point where there were complaints about the game.
Now, if one is not trained, this effect can lead to a preference for AMD with Doom.
It would have been interesting to see the preference if they had also used a game with the earlier engine, with G-Sync and FreeSync.
One area where I would have expected the twitch gamer to note a difference is input lag, so I'm not sure what happened there (maybe the settings Nvidia officially recommended *shrug*).

Caveat being I'm not sure if the texture loading behaviour was ever resolved with Doom and Nvidia GPUs, but that is just one clear example of why training is important.
Cheers
 
However, skewing the point so far as to lose his original point, as a way to paint him as a biased observer serving his own agenda, is disingenuous (I'm getting tired of using this word lately, but it seems too appropriate).

It's not disingenuous when we quote him directly ;)
 
Possibly because without training one does not necessarily look at the right variables.
Case in point: it is/was known that the latest id engine with Doom has delayed texture loading on Nvidia GPUs that is noticeable when moving; previous iterations of the engine had the same issue on AMD GPUs instead, to the point where there were complaints about the game.
Now, if one is not trained, this effect can lead to a preference for AMD with Doom.
It would have been interesting to see the preference if they had also used a game with the earlier engine, with G-Sync and FreeSync.
One area where I would have expected the twitch gamer to note a difference is input lag, so I'm not sure what happened there (maybe the settings Nvidia officially recommended *shrug*).

Caveat being I'm not sure if the texture loading behaviour was ever resolved with Doom and Nvidia GPUs, but that is just one clear example of why training is important.
Cheers

I don't think Kyle wanted anyone "trained" to look for certain variables. The gist of what I think he was going for was a semi-blind test with gamers of different experience levels, to see which platform they preferred. If Kyle had only chosen one game, maybe some training might be necessary to ensure that any obvious bugs are overlooked. And to your point about input lag, I think that's one of the most overcooked bunches of BS that people get crazy about, but unless it's beyond a certain threshold it's generally not noticeable. I've seen a guy write up a whole guide about disabling lots of motherboard features, claiming they could increase input lag. I did my own testing and found most of it to be total nonsense.
 
I don't think Kyle wanted anyone "trained" to look for certain variables. The gist of what I think he was going for was a semi-blind test with gamers of different experience levels, to see which platform they preferred. If Kyle had only chosen one game, maybe some training might be necessary to ensure that any obvious bugs are overlooked. And to your point about input lag, I think that's one of the most overcooked bunches of BS that people get crazy about, but unless it's beyond a certain threshold it's generally not noticeable. I've seen a guy write up a whole guide about disabling lots of motherboard features, claiming they could increase input lag. I did my own testing and found most of it to be total nonsense.
My point has nothing to do with Kyle; like he says, it was done for fun and with what he could do for now.
My point is about those trying to draw conclusions from this; I have explained several times now why training is important for blind testing relating to perception and preference - I know because I was involved in some international format standards many years ago for certain technologies, while also following many journals/papers and co-workers on the subject.

But the point regarding Doom that you ignored: if that delayed texture loading behaviour still exists on Nvidia GPUs, it would skew perceptions and make the result meaningless from a conclusion point of view (which is not what Kyle was doing).
That is just one example of why training is required (I hinted about some biases before) if you wanted to draw some conclusion from what Kyle did and the choices made by those involved.
Anyway, to reiterate: blind testing requires training because the participants need to understand how and what to look/listen/perceive for.
As an example, a study into AA would also need to train the participants, because most would look at this wrongly when blind testing, and the perception results could be generated from a mixture of biases and other perceived quality variables.
I am not making it up; it is part of any project doing this that is looking for conclusive data, but like I have said several times, Kyle was doing this for fun and did not put much emphasis on any conclusions.
Another case, like I mentioned much earlier: Nvidia GPUs seem to suffer for some reason with low-level APIs on the AMD platform, and this was all done on the AMD platform, while AMD GPUs seem to do better on the AMD platform than on both AMD and Intel - potentially another weighted variable.

Cheers
 
My point has nothing to do with Kyle; like he says, it was done for fun and with what he could do for now.
My point is about those trying to draw conclusions from this; I have explained several times now why training is important for blind testing relating to perception and preference - I know because I was involved in some international format standards many years ago for certain technologies, while also following many journals/papers and co-workers on the subject.

But the point regarding Doom that you ignored: if that delayed texture loading behaviour still exists on Nvidia GPUs, it would skew perceptions and make the result meaningless from a conclusion point of view (which is not what Kyle was doing).
That is just one example of why training is required (I hinted about some biases before) if you wanted to draw some conclusion from what Kyle did and the choices made by those involved.
Anyway, to reiterate: blind testing requires training because the participants need to understand how and what to look/listen/perceive for.
As an example, a study into AA would also need to train the participants, because most would look at this wrongly when blind testing, and the perception results could be generated from a mixture of biases and other perceived quality variables.
I am not making it up; it is part of any project doing this that is looking for conclusive data, but like I have said several times, Kyle was doing this for fun and did not put much emphasis on any conclusions.
Another case, like I mentioned much earlier: Nvidia GPUs seem to suffer for some reason with low-level APIs on the AMD platform, and this was all done on the AMD platform, while AMD GPUs seem to do better on the AMD platform than on both AMD and Intel - potentially another weighted variable.

Cheers
How about we just enjoy the game and let the experience express itself, instead of some sort of training as in "look for this, ignore this" or whatever? Do you want to take the person out of it to determine what is the best gaming card?

I was going to capture the 1080 Ti and then the Nano using a 240 fps iPhone, looking for texture loading, LOD, etc. in Doom. I knew of the slower loading times for the higher-res textures on Nvidia - is that by game design or driver related? Anyways, while in motion, if you are exposed to lower detail/textures, that can take away from your response time since the environment will not be as sharp. Does this carry over to other games with Nvidia hardware? I do not know. If Nvidia does this to increase frame rate or lower frame times (loading fewer higher-resolution mipmapped textures to save bandwidth, since where you end up may well be way different from where you start, could be an optimization), it would be interesting to explore. Is that the main reason in this test that slightly tilted the results towards RX Vega?
 
Kyle,

One very interesting, fun test and a well-done video! Do you find that this may lead into similar testing in the future, or into refining the HardOCP approach more?
 
How about we just enjoy the game and let the experience express itself, instead of some sort of training as in "look for this, ignore this" or whatever? Do you want to take the person out of it to determine what is the best gaming card?

I was going to capture the 1080 Ti and then the Nano using a 240 fps iPhone, looking for texture loading, LOD, etc. in Doom. I knew of the slower loading times for the higher-res textures on Nvidia - is that by game design or driver related? Anyways, while in motion, if you are exposed to lower detail/textures, that can take away from your response time since the environment will not be as sharp. Does this carry over to other games with Nvidia hardware? I do not know. If Nvidia does this to increase frame rate or lower frame times (loading fewer higher-resolution mipmapped textures to save bandwidth, since where you end up may well be way different from where you start, could be an optimization), it would be interesting to explore. Is that the main reason in this test that slightly tilted the results towards RX Vega?
Engine related.
The issue in previous versions affected AMD, so it looks like they resolved that but have now moved the issue onto Nvidia :)
Although from what I remember it was a lot worse historically for AMD and did need to be resolved; the issue as now introduced on the Nvidia cards is subtle, so it is not a nightmare situation, but it does skew things when doing perception blind tests such as these.
Cheers
 
Engine related.
The issue in previous versions affected AMD, so it looks like they resolved that but have now moved the issue onto Nvidia :)
Although from what I remember it was a lot worse historically for AMD and did need to be resolved; the issue as now introduced on the Nvidia cards is subtle, so it is not a nightmare situation, but it does skew things when doing perception blind tests such as these.
Cheers
Which may explain the results. I think I will do some crude testing and, if reasonable, post the video. Need to include other games if possible. Three monitors to pick from, all IPS 60 Hz: a FreeSync 4K, a 3440x1440, and a 2560x1440. On the 2560x1440 the Nano should be able to maintain 60 fps without issue on Ultra using vertical sync, maybe adaptive sync - wait, that won't work since that monitor is DVI only and the Nano does not have DVI. Looks like the non-FreeSync 3440x1440 is the best option. Basically looking at texture loading going from a lower mipmap resolution to a higher one. Other items such as geometry could be looked at as well. So basically capture it at 240 fps and slow it down to 30 fps, 1/8 speed (so the monitor's 60 fps would play back at an effective 7.5 fps). A real high-speed camera and faster monitor would be much better, but that is all I have.
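Rough math behind the capture idea, just as a sanity check - a sketch using the same numbers as above (240 fps capture, 30 fps playback, 60 Hz monitor), nothing measured:

```python
# Back-of-the-envelope math for the slow-motion capture idea above.
# Assumed numbers from the post, not measurements: 240 fps phone capture,
# 30 fps playback, 60 Hz monitor.

capture_fps = 240   # iPhone slow-motion capture rate
playback_fps = 30   # rate the captured clip is played back at
monitor_fps = 60    # what the monitor is actually refreshing at

slowdown = capture_fps / playback_fps           # 240 / 30 = 8x slow motion
effective_game_fps = monitor_fps / slowdown     # 60 / 8 = 7.5 fps as seen on playback
frames_per_refresh = capture_fps / monitor_fps  # 4 captured frames per displayed frame

print(slowdown, effective_game_fps, frames_per_refresh)  # 8.0 7.5 4.0
```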

One person in the video indicated, if I remember correctly, that he had more clarity, or the gameplay was sharper at larger angles while playing, allowing for better reaction - that was the gist.
 
Which may explain the results. I think I will do some crude testing and, if reasonable, post the video. Need to include other games if possible. Three monitors to pick from, all IPS 60 Hz: a FreeSync 4K, a 3440x1440, and a 2560x1440. On the 2560x1440 the Nano should be able to maintain 60 fps without issue on Ultra using vertical sync, maybe adaptive sync - wait, that won't work since that monitor is DVI only and the Nano does not have DVI. Looks like the non-FreeSync 3440x1440 is the best option. Basically looking at texture loading going from a lower mipmap resolution to a higher one. Other items such as geometry could be looked at as well. So basically capture it at 240 fps and slow it down to 30 fps, 1/8 speed (so the monitor's 60 fps would play back at an effective 7.5 fps). A real high-speed camera and faster monitor would be much better, but that is all I have.

One person in the video indicated, if I remember correctly, that he had more clarity, or the gameplay was sharper at larger angles while playing, allowing for better reaction - that was the gist.

If you are looking for historical games with the earlier engine, there is Rage (notoriously bad on AMD for texture streaming) and others such as Wolfenstein: The New Order.
Basically it comes down to the texture streaming aspect of the engine and possibly the level of settings (considering every test/review/comparison with Doom from the various publications is at maximum settings, you need to reflect that with these earlier games).
Cheers
 
If you are looking for historical games with the earlier engine, there is Rage (notoriously bad on AMD for texture streaming) and others such as Wolfenstein: The New Order.
Basically it comes down to the texture streaming aspect of the engine and possibly the level of settings (considering every test/review/comparison with Doom from the various publications is at maximum settings, you need to reflect that with these earlier games).
Cheers

Yeah, I had the texture pop-in with Wolfenstein: The New Order among other issues, and I know lots of others didn't. I just don't think any training should be done in a blind test of this kind. If the game has those issues/bugs then they will (possibly) affect the player's opinion, and they should - it's part of the experience. Playing multiple games should minimize that variable so that it's not a deal-breaker in their choice of the Nvidia or AMD system. All the in-game settings should be kept as close to the same as possible, though.
 
Yeah, I had the texture pop-in with Wolfenstein: The New Order among other issues, and I know lots of others didn't. I just don't think any training should be done in a blind test of this kind. If the game has those issues/bugs then they will (possibly) affect the player's opinion, and they should - it's part of the experience. Playing multiple games should minimize that variable so that it's not a deal-breaker in their choice of the Nvidia or AMD system. All the in-game settings should be kept as close to the same as possible, though.

If you are testing specifically for G-Sync and FreeSync then you must be trained, because otherwise your perception and preference will be skewed; you need to be trained to be able to use certain techniques and understand how to evaluate the specific preferences/factors relating to the scope of the test - in this case variable refresh rate and input lag.
If people do not appreciate this, then they will base it upon image quality, exacerbated by the texture streaming happening when moving, as image quality is subtly 'distorted'...
Reminds me how audiophiles think they can do blind tests and pass them, then are shocked when they cannot; without training you're fooked, and that is why some of the best studies out there spend a lot of time and resources training participants - look at the Harman Group, whose work is more publicly available than some of the scientific papers I followed.
While it relates to audio, it is very specific as well to this discussion (albeit with different core factors) for the very reasons I have outlined multiple times: http://web.arch.usyd.edu.au/~wmar0109/DESC9090/old/BechZach_doc/115_Tutorials/5 Listening Test Workshop_Listener Training (Olive).pdf
Notice the difference between trained and untrained perception in some of those charts, while page 19 summarises why training is needed and its goals when doing subjective blind testing.
So it is quite probable in this situation that trained participants would identify anomalies/factors that should not be part of their perceived quality preference for variable framerate technology.

Sure, you do not need training, but then one cannot draw any conclusions - especially when the test is weighted with one game that impacts visual quality on one competitor, and also weighted by using one platform that benefits one manufacturer.
Cheers
 
If you are testing specifically for G-Sync and FreeSync then you must be trained, because otherwise your perception and preference will be skewed; you need to be trained to be able to use certain techniques and understand how to evaluate the specific preferences/factors relating to the scope of the test - in this case variable refresh rate and input lag.
If people do not appreciate this, then they will base it upon image quality, exacerbated by the texture streaming happening when moving, as image quality is subtly 'distorted'...
Reminds me how audiophiles think they can do blind tests and pass them, then are shocked when they cannot; without training you're fooked, and that is why some of the best studies out there spend a lot of time and resources training participants - look at the Harman Group, whose work is more publicly available than some of the scientific papers I followed.
While it relates to audio, it is very specific as well to this discussion (albeit with different core factors) for the very reasons I have outlined multiple times: http://web.arch.usyd.edu.au/~wmar0109/DESC9090/old/BechZach_doc/115_Tutorials/5 Listening Test Workshop_Listener Training (Olive).pdf
Notice the difference between trained and untrained perception in some of those charts, while page 19 summarises why training is needed and its goals when doing subjective blind testing.
So it is quite probable in this situation that trained participants would identify anomalies/factors that should not be part of their perceived quality preference for variable framerate technology.

Sure, you do not need training, but then one cannot draw any conclusions - especially when the test is weighted with one game that impacts visual quality on one competitor, and also weighted by using one platform that benefits one manufacturer.
Cheers

I see what you're saying and I'm guessing you just hate subjective tests right? :p
 
I never saw any compelling evidence that AMD and NVIDIA handled the texture streaming in DOOM in a different way - though the engine does handle some things a bit differently on either vendor. (no async compute for NVIDIA)
What I did find from my testing is that the texture streaming speed seemed directly linked to framerate, so if I capped the framerate to 30 FPS the textures would load in much slower than if the game was running at 200 FPS.
None of the comparisons I saw seemed to control for things like framerate, and a lot of them were set up with older NVIDIA cards (780 / 970) trying to run the game at 4K, so it could have been framerate differences or VRAM limitations causing the textures to load in slower.
You need to control every variable possible, except the one that you are testing.

No one picked a clear winner, so it is safe to assume all things LOOKED virtually equal. So how do you explain the numbers and graphs when in context of this test? They could have picked the best of both, no matter the size or resolution, and we may have wound up with the same "virtually the same" conclusion - albeit maybe the choice would favor the other, or maybe not.
Am I missing something?
You keep saying that frame-time testing doesn't mean anything, and now you're asking "how do you explain the numbers and graphs when in context of this test".
Has there been any objective data published for this test? I thought the only information released was a video of the subjective testing.

If I set up Quake 3 running on a 650Ti Boost and a 1080Ti, both using the same 100Hz ultrawide G-Sync monitors, would you expect there to be a difference?
Or would you expect things to be the same since both GPUs are powerful enough to keep the framerate locked to 100 FPS at all times during testing?
We know that there are meaningful performance differences between those two cards, but that test would not show them.

And that is the fundamental issue with this blind test.
Even if there are differences in performance between the 1080Ti and RX Vega when running DOOM, using a game where either card is able to hit the 100 FPS cap invalidates the test.

Let's say that the 1080Ti stays at 100 FPS 100% of the time during the test, and the RX Vega drops to 95 FPS at some points.
Does that mean the RX Vega is within 5% of the 1080Ti? No. Without more data, we have no way of determining the difference between them from this test.
The 1080Ti could be at 50% GPU load or 90% GPU load when it's hitting that 100 FPS cap for all we know.

A proper test requires that the GPUs are able to stay at or near 100% usage throughout (i.e. not CPU bottlenecked) and neither card is able to reach the 100 FPS cap. Then you can draw some meaningful conclusions from the result.
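To make the 50% vs. 90% load point concrete, here's a toy sketch - every number in it is made up, since no GPU utilization data was published for this test:

```python
# Toy illustration of why a framerate cap hides GPU headroom.
# All values are hypothetical; no GPU load figures exist for this blind test.

def rough_uncapped_fps(capped_fps: float, gpu_load: float) -> float:
    """Crude estimate: a GPU holding the cap at partial load could roughly
    reach capped_fps / gpu_load if uncapped (assumes near-linear scaling)."""
    return capped_fps / gpu_load

CAP = 100  # FPS ceiling of the 100 Hz ultrawide monitors

# Both scenarios look identical on screen: a flat 100 FPS.
print(rough_uncapped_fps(CAP, 0.50))  # could do ~200 FPS uncapped
print(rough_uncapped_fps(CAP, 0.90))  # could do ~111 FPS uncapped
```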
 
I never saw any compelling evidence that AMD and NVIDIA handled the texture streaming in DOOM in a different way... snip.
There have been plenty of articles out there showing the effect.
It has even been discussed here on Hardforum, with some noticing it while others did not: https://hardforum.com/threads/texture-loading-slower-in-doom-on-nvidia-vs-amd-cards.1905395/
However the condition is triggered by, and relates to, the engine/options and, who knows, possibly the platform; in other words it does exist, and so it makes it impossible to draw any conclusion even if it is not consistent for everyone with Nvidia GPUs - some with the latest Pascal GPUs have noticed it, not just the 970.
The exact same issue existed in the previous ID Tech engine version but back then affected AMD rather than Nvidia.
But I raised it not just because of that, but also as an example of why training is really required (this is an example just like AA/shadow-lighting behaviour and how that is studied) and, importantly, why you need other games and another platform, not just AMD Ryzen; it is about training and game/platform specifics.
Anyway the effect is subtle and not an issue generally, but will skew blind test perception if not accounted for.

Cheers
 
I see what you're saying and I'm guessing you just hate subjective tests right? :p
Well that may be your subjective opinion :)
More seriously, they are fine; I'm just not keen when conclusions are then drawn from them beyond the scope/context possible, and yeah, I am a nightmare with audiophiles and 'audio objectivists' who want to take shortcuts with their conclusions :)
Cheers
 
There have been plenty of articles out there showing the effect.
It has even been discussed here on Hardforum, with some noticing it while others did not: https://hardforum.com/threads/texture-loading-slower-in-doom-on-nvidia-vs-amd-cards.1905395/
However the condition is triggered by, and relates to, the engine/options and, who knows, possibly the platform; in other words it does exist, and so it makes it impossible to draw any conclusion even if it is not consistent for everyone with Nvidia GPUs - some with the latest Pascal GPUs have noticed it, not just the 970.
Like I said: the tests were not properly controlled, and the NVIDIA cards tested in that video were running the game at 4K and well below 60 FPS.
Low framerates cause slower texture loading in DOOM, and he was testing 4K with cards that only have 3/3.5 GB VRAM.
I'm not saying the issue does not exist, but that the testing done was insufficient. I never saw it mentioned again after that initial video was posted about it.

The exact same issue existed in the previous ID Tech engine version but back then affected AMD rather than Nvidia.
I believe you are referring to the issue with RAGE which was caused by AMD putting out the wrong driver on the release day, breaking the virtual texturing. That was quickly fixed.
These things happen. NVIDIA did the same thing recently by putting out a bad driver for Prey on the release day which caused it to stutter with the higher quality texture settings.
 
Like I said: the tests were not properly controlled, and the NVIDIA cards tested in that video were running the game at 4K and well below 60 FPS.
Low framerates cause slower texture loading in DOOM, and he was testing 4K with cards that only have 3/3.5 GB VRAM.
I'm not saying the issue does not exist, but that the testing done was insufficient. I never saw it mentioned again after that initial video was posted about it.

I believe you are referring to the issue with RAGE which was caused by AMD putting out the wrong driver on the release day, breaking the virtual texturing. That was quickly fixed.
These things happen. NVIDIA did the same thing recently by putting out a bad driver for Prey on the release day which caused it to stutter with the higher quality texture settings.
Nah, I am talking specifically about texture streaming and it being delayed, which is what I have been going on about (look, I mentioned it is the same as what is experienced now in Doom on Nvidia, and that is not stutter); the problem also exists in other, earlier id Tech games on the same engine, specifically games such as Wolfenstein: The New Order, as someone earlier mentioned remembering seeing themselves.
The stutter is a separate, unfortunate issue; also it is fair to say the issue is not related to VRAM, and it is probably worth noting a 1080 Ti does not run below 60 fps or even near it at full settings in nearly every area (it seems only one segment runs lower, and that was back with the original drivers at the time of the 1080 Ti launch).

Cheers
 
No, I am talking specifically about texture streaming and it being delayed, which is what I have been going on about (look, I mentioned it is the same as what is experienced now in Doom on Nvidia, and that is not stutter); the problem also exists in other, earlier id Tech games on the same engine, specifically games such as Wolfenstein: The New Order, as someone earlier mentioned remembering seeing themselves.
As I said, I've not seen any in-depth testing on the subject, only that one video you linked to which has a number of issues with the test setup.
The YouTuber who made it said he was going to follow up with further testing and does not appear to have posted anything more on the subject.
I'm not saying that the issue doesn't exist, only that there is insufficient testing/evidence to say anything definitive about it being a general problem across all NVIDIA GPUs.

The stutter is not the same issue
No, my point was that both AMD and NVIDIA sometimes push out a bad driver near a game's release specifically for that game, which is actually worse than the previous driver without those optimizations.
AMD released a completely broken driver for RAGE's release, and that seems to have stuck with them.
Unless you're saying that AMD cards still have issues with the virtual texturing used in id Tech 5 games today?

worth noting a 1080 Ti does not run below 60 fps or even near it at full settings in nearly every area (it seems only one segment runs lower, and that was back with the original drivers at the time of the 1080 Ti launch).
I'm not sure what that is in reference to, since the testing was done with a GTX 970 and a GTX 780 at sub-60 FPS, not a 1080Ti.
I would be interested in seeing the results if someone were to compare both a 1080Ti and an RX Vega in DOOM under controlled conditions. (including framerate)
 
As I said, I've not seen any in-depth testing on the subject, only that one video you linked to which has a number of issues with the test setup.
The YouTuber who made it said he was going to follow up with further testing and does not appear to have posted anything more on the subject.
I'm not saying that the issue doesn't exist, only that there is insufficient testing/evidence to say anything definitive about it being a general problem across all NVIDIA GPUs.

No, my point was that both AMD and NVIDIA sometimes push out a bad driver near a game's release specifically for that game, which is actually worse than the previous driver without those optimizations.
AMD released a completely broken driver for RAGE's release, and that seems to have stuck with them.
Unless you're saying that AMD cards still have issues with the virtual texturing used in id Tech 5 games today?

I'm not sure what that is in reference to, since the testing was done with a GTX 970 and a GTX 780 at sub-60 FPS, not a 1080Ti.
I would be interested in seeing the results if someone were to compare both a 1080Ti and an RX Vega in DOOM under controlled conditions. (including framerate)
Well, you said I was going on about the stutter and it being driver related; I had to clarify that I really am not.
Others have also mentioned the delayed texture streaming issue, and so have other articles, and with other Nvidia cards, both Maxwell and Pascal...
The point is there is an inherent, known texture streaming issue with the id Tech engine that has been in all generations going back to 2011; it does not matter if you are below or above 60 fps, as it is the optimised streaming rendering engine's interaction with the GPU.
Others had the texture streaming problem in Doom with a 1070 - I did mention that earlier.
In the link I gave earlier, along with the video, one member here even said they experience the delayed loading (not stutter, but the same issue that plagued earlier iterations of the engine when it affected AMD - separate from the stutter issue you think I am talking about, but I really am not).
This is what one said with a 1070 in that Hardforum topic:
"Running a 1070 on Nightmare graphics I've seen this a few times and never considered this to be an issue. The game is loading textures for a second or two, nothing I considered noteworthy. I would suspect that my enjoyment of the game would increase by zero of this was "fixed", I am happy the game runs so damn smooth, even from the HDD I have it loaded on."
Anyway, I am dropping this because it is not going to change things and all my points and context are being lost; the issue is not game-breaking or an issue generally, but it would skew perception-related blind tests if not accounted for.
 
I never saw any compelling evidence that AMD and NVIDIA handled the texture streaming in DOOM in a different way - though the engine does handle some things a bit differently on either vendor. (no async compute for NVIDIA)
What I did find from my testing is that the texture streaming speed seemed directly linked to framerate, so if I capped the framerate to 30 FPS the textures would load in much slower than if the game was running at 200 FPS.
None of the comparisons I saw seemed to control for things like framerate, and a lot of them were set up with older NVIDIA cards (780 / 970) trying to run the game at 4K, so it could have been framerate differences or VRAM limitations causing the textures to load in slower.
You need to control every variable possible, except the one that you are testing.


Am I missing something?
You keep saying that frame-time testing doesn't mean anything, and now you're asking "how do you explain the numbers and graphs when in context of this test".
Has there been any objective data published for this test? I thought the only information released was a video of the subjective testing.

If I set up Quake 3 running on a 650Ti Boost and a 1080Ti, both using the same 100Hz ultrawide G-Sync monitors, would you expect there to be a difference?
Or would you expect things to be the same since both GPUs are powerful enough to keep the framerate locked to 100 FPS at all times during testing?
We know that there are meaningful performance differences between those two cards, but that test would not show them.

And that is the fundamental issue with this blind test.
Even if there are differences in performance between the 1080Ti and RX Vega when running DOOM, using a game where either card is able to hit the 100 FPS cap invalidates the test.

Let's say that the 1080Ti stays at 100 FPS 100% of the time during the test, and the RX Vega drops to 95 FPS at some points.
Does that mean the RX Vega is within 5% of the 1080Ti? No. Without more data, we have no way of determining the difference between them from this test.
The 1080Ti could be at 50% GPU load or 90% GPU load when it's hitting that 100 FPS cap for all we know.

A proper test requires that the GPUs are able to stay at or near 100% usage throughout (i.e. not CPU bottlenecked) and neither card is able to reach the 100 FPS cap. Then you can draw some meaningful conclusions from the result.
Let's make it a bit more simple then. Is there any outcome of this test that is false? Any evident bias? If a tree falls in the woods and no one hears it... see how this is going. This test is only representative of this game, not at all indicative of all other games, just as those benchmark numbers some of you just can't be separated from don't indicate performance from one game to another.

But even more than that: if the given output is the visual on screen, then if the experience is equal, does the frame rate or GPU load matter at all? Again, the point is that the way reviews have been done will likely change a bit as these sync technologies gain greater traction. It doesn't in any way nullify the way it has been done, because that is still necessary for the base facts of performance level. It seems too many of you are debating an either/or approach when it is more about addition.
 
The irony is most here, including myself, would prefer to buy the GPU and CPU platform as the first priority and then just go with the variable refresh solution that GPU manufacturer uses.
I doubt anyone is buying primarily for G-Sync or FreeSync, monitor first, and then worrying about the GPU and CPU platform second - it is a bit of a nightmare though if you have to upgrade to a competitor after buying the monitor, and I can see why some want to switch back to Vega after, say, upgrading from a 390 to Pascal.
In the scheme of things the choice around VRR is pretty academic, because our choices are driven primarily by our concerns regarding GPU performance/traits and CPU platform - especially for members here.
Cheers
 
Let's make it a bit more simple then. Is there any outcome of this test that is false? Any evident bias? If a tree falls in the woods and no one hears it... see how this is going. This test is only representative of this game, not at all indicative of all other games, just as those benchmark numbers some of you just can't be separated from don't indicate performance from one game to another.

But even more than that: if the given output is the visual on screen, then if the experience is equal, does the frame rate or GPU load matter at all? Again, the point is that the way reviews have been done will likely change a bit as these sync technologies gain greater traction. It doesn't in any way nullify the way it has been done, because that is still necessary for the base facts of performance level. It seems too many of you are debating an either/or approach when it is more about addition.

Since the [H] benchmark of a 1080Ti running DOOM completely maxed-out at 3840x2160 results in an 84.8 FPS average, that means the average framerate should be around 142 FPS at 3440x1440 if the R7-1800X is not bottlenecking it.
So I would be surprised if the 1080Ti setup was not locked to 100 FPS the majority of the time in this test, since the card has considerable performance to spare at that resolution.
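For reference, here's the rough pixel-count arithmetic behind that ~142 FPS estimate (and the ~48 FPS Vega projection further down) - only an approximation, since framerate rarely scales perfectly linearly with resolution:

```python
# Rough pixel-count scaling; an approximation, not measured data.

uhd_pixels = 3840 * 2160   # 8,294,400 pixels at 4K
uw_pixels = 3440 * 1440    # 4,953,600 pixels on the ultrawide

ti_fps_4k = 84.8                                    # [H] 1080 Ti DOOM average at 4K
ti_fps_uw = ti_fps_4k * uhd_pixels / uw_pixels      # ~142 FPS estimated at 3440x1440

vega_fps_uw = 80                                    # hypothetical Vega figure used below
vega_fps_4k = vega_fps_uw * uw_pixels / uhd_pixels  # ~48 FPS at 4K under the same assumption

print(round(ti_fps_uw), round(vega_fps_4k))         # 142 48
```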

So there are two possible outcomes for what this test can show us:

1) Card A running at or near 100 FPS with its VRR solution ≈ Card B at or near 100 FPS with its VRR solution.
Which is a largely meaningless result if the goal was to compare the value of two separate GPU + VRR display setups.

2) Perhaps the RX Vega was not able to hold 100 FPS as often as the 1080Ti, so the comparison might be showing us that ~80 FPS with VRR is nearly indistinguishable from ~100 FPS for the majority of people taking part.

If that is the outcome of this test, I would consider that to be a victory in favor of VRR rather than RX Vega though, since it would mean that an RX Vega is dropping frames while a 1080Ti is coasting along at ~70% GPU load on average.

But we don't have any numbers to tell us what sort of performance was being compared in both setups.
We can only make assumptions, which is always a good way to draw conclusions from a test.

And that's why I don't like how the test was set up.
Neither of the conclusions that we can draw from it tells us anything useful about RX Vega.
It is, however, a good way to skew perceptions of RX Vega vs a 1080Ti by setting up an intentionally misleading test - if that was your goal. Capping the performance of the faster GPU makes the slower GPU look better.

That's why you need to set up a test where the fastest setup can never hit 100 FPS at ~100% GPU load.
Then you can see the actual difference between the two cards.

Let's say that the 1080Ti was 100 FPS average and the RX Vega was 80 FPS average, at 3440x1440.
If we compared the two at 4K where the average framerate of the 1080Ti drops to 84.8 FPS, RX Vega would drop from 80 FPS to 48 FPS.
Do you think people would still be unable to tell the difference between 85 FPS and 48 FPS, even with a VRR display?
Now I don't think RX Vega is going to be that far behind, but can you see why this test is misleading?

I have no issue with setting up a blind test on two 3440x1440 100Hz VRR displays, but there are other factors which had to be considered to make this a fair test.
 
OK, I went TL;DR on the comments, and will just throw in my opinion.

As far as I can determine, adaptive sync technology is much better than V-Sync technology. My problem is going from "System 1" (Nvidia) to "System 2" (AMD); how about the reverse, System 2 to System 1? Also, more games: Doom is a very optimized game, and there are other, more challenging games out there.
[H] acknowledged that this comparison isn't statistically significant. Not to read too much into it.

There is far more that could be done to improve the analysis. I do find it interesting that overall the two systems (and you can't finger the cards specifically) playing this game perform similar enough that there was no consensus that one was better. Although one was definitely more expensive. But without any analysis it's not possible to determine why.
 
The irony is most here, including myself, would prefer to buy the GPU and CPU platform as the first priority and then just go with the variable refresh solution that GPU manufacturer uses.
I doubt anyone is buying primarily for G-Sync or FreeSync, monitor first, and then worrying about the GPU and CPU platform second - it is a bit of a nightmare though if you have to upgrade to a competitor after buying the monitor, and I can see why some want to switch back to Vega after, say, upgrading from a 390 to Pascal.
In the scheme of things the choice around VRR is pretty academic, because our choices are driven primarily by our concerns regarding GPU performance/traits and CPU platform - especially for members here.
Cheers
I actually think including the monitor in the original decision is better than doing it ahead of time or making it an afterthought. There's no denying though that you can get a more powerful PC with a "high-end" FreeSync monitor than with a similar G-Sync one.
 
Let's make it a bit more simple then. Is there any outcome of this test that is false? Any evident bias? If a tree falls in the woods and no one hears it... see how this is going. This test is only representative of this game, not at all indicative of all other games, just as those benchmark numbers some of you just can't be separated from don't indicate performance from one game to another.

But even more than that: if the given output is the visual on screen, then if the experience is equal, does the frame rate or GPU load matter at all? Again, the point is that the way reviews have been done will likely change a bit as these sync technologies gain greater traction. It doesn't in any way nullify the way it has been done, because that is still necessary for the base facts of performance level. It seems too many of you are debating an either/or approach when it is more about addition.
It is as simple as you state. Some do look beyond what is obvious, it seems. We now need special training, lol, to see what we don't see. Yes, in some things that may be true, but it's way overboard in my view here.

Anyways, I was playing Doom with the 1080 Ti and forgot why I was playing it :LOL: I was having too much fun. Using adaptive sync and it is utterly fluid.

Anyways, capturing with a phone or camera at a higher frame rate than the data being delivered (as in 60 fps) is pointless. So I was going to just capture the frames using Nvidia GeForce Experience (which I had to install to capture video with). Anyways, the part of GeForce Experience used to capture video will not open - POS and pointless for the end user, at least for me that is. Just tried to open it again and it is going through an update? Just installed it with the newest driver.

Looks like now it works or can be used. Be back later.
 
You should hang out on audio forums to get a window into the whole "subjective" vs "objective" viewpoints.

"The tube amp displayed silky midrange!"

"Yeah, because it was running at 10% harmonic distortion".

That said, tube amps are still popular ;)
And you should try listening to a good tube amp.
 
I actually think including the monitor in the original decision is better than doing it ahead of time or making it an afterthought. There's no denying though that you can get a more powerful PC with a "high-end" FreeSync monitor than with a similar G-Sync one.
Apart from the people who went FreeSync + 390 and ended up upgrading to a GTX 1080 with their FreeSync monitor - some have done that here to enjoy the performance and visual quality, even if it does not feel as smooth with said monitor (although input lag would be less with the better GPU).
Performance does outweigh it for many here; if all were equal in performance then yeah, what you say has a lot of merit for many.
I still think it is better to decide on what GPU you will be going with and what it gives you and when you are next likely to upgrade again.

High-end G-Sync monitors are silly priced IMO, forever chasing freaking ever-higher refresh rates, and that sucks; it would make sense for consumers if they had the option of a more sensible refresh rate and quality. I do not want a 165Hz or higher refresh monitor, and that is how they are justifying prices on those ones.
As an example, the FreeSync 1440p 27-inch 144Hz from Asus is £443 here, while they do the G-Sync version at 165Hz for £589; but yeah, even ignoring that, the difference is still generally £120 in favour of FreeSync, even coming down to 24", when comparing like for like from the same manufacturer.
But that is possibly oversimplifying it, as one needs to look at the actual panel components used in both, as they can be subtly different in performance spec (you may find in some cases FreeSync is better and in others G-Sync).

Personally I am more interested if HDR is going to take off by end of year and possibly want a monitor supporting that (only if it does take off), rather than focusing on either Freesync or G-Sync for now.
Cheers
 
Currently running G-Sync, and I wouldn't trade it for another non-adaptive sync monitor. Freesync is certainly good enough, but AMD's hardware isn't fast enough yet, so I'll probably be sticking with what I have.

And the reason? The hallmark 'smoothness' of adaptive sync technologies is one thing, but being able to have no tearing and no (extra) input lag at the same time? Priceless.
 
Apart from the people who went FreeSync + 390 and ended up upgrading to a GTX 1080 with their FreeSync monitor - some have done that here to enjoy the performance and visual quality, even if it does not feel as smooth with said monitor (although input lag would be less with the better GPU).
Performance does outweigh it for many here; if all were equal in performance then yeah, what you say has a lot of merit for many.
I still think it is better to decide on what GPU you will be going with and what it gives you and when you are next likely to upgrade again.

High-end G-Sync monitors are silly priced IMO, forever chasing freaking ever-higher refresh rates, and that sucks; it would make sense for consumers if they had the option of a more sensible refresh rate and quality. I do not want a 165Hz or higher refresh monitor, and that is how they are justifying prices on those ones.
As an example, the FreeSync 1440p 27-inch 144Hz from Asus is £443 here, while they do the G-Sync version at 165Hz for £589; but yeah, even ignoring that, the difference is still generally £120 in favour of FreeSync, even coming down to 24", when comparing like for like from the same manufacturer.
But that is possibly oversimplifying it, as one needs to look at the actual panel components used in both, as they can be subtly different in performance spec (you may find in some cases FreeSync is better and in others G-Sync).

Personally I am more interested if HDR is going to take off by end of year and possibly want a monitor supporting that (only if it does take off), rather than focusing on either Freesync or G-Sync for now.
Cheers
HDR is part of the Freesync 2 spec. AMD didn't want to put too many conditions on the original spec that might have hindered adoption. Now that the spec is widely supported though they can require higher performance to get the Freesync endorsement.
 
HDR is part of the Freesync 2 spec. AMD didn't want to put too many conditions on the original spec that might have hindered adoption. Now that the spec is widely supported though they can require higher performance to get the Freesync endorsement.
Think you missed my point.
My point is that you will still need to upgrade your monitor.
You cannot buy a FreeSync monitor now and expect HDR support in the future from that monitor; meaning you would need to replace it again, therefore it may be costly to buy now rather than waiting.

So if I am waiting for HDR, this also brings me closer in line with Volta - factors that may change the decision landscape - along with getting to see how HDR performs on both FreeSync and G-Sync if the standard is more broadly accepted and coded well for (although that then raises arguments on forums about what the right contrast/brightness level is, as this can vary even between televisions/Blu-ray with the HDR format).

Cheers
 
Well, you said I was going on about the stutter and it being driver related; I had to clarify that I really am not.
Others have also mentioned the delayed texture streaming issue, and so have other articles, and with other Nvidia cards, both Maxwell and Pascal...
The point is there is an inherent, known texture streaming issue with the id Tech engine that has been in all generations going back to 2011; it does not matter if you are below or above 60 fps, as it is the optimised streaming rendering engine's interaction with the GPU.
Others had the texture streaming problem in Doom with a 1070 - I did mention that earlier.
In the link I gave earlier, along with the video, one member here even said they experience the delayed loading (not stutter, but the same issue that plagued earlier iterations of the engine when it affected AMD - separate from the stutter issue you think I am talking about, but I really am not).
This is what one said with a 1070 in that Hardforum topic:
Anyway, I am dropping this because it is not going to change things and all my points and context are being lost; the issue is not game-breaking or an issue generally, but it would skew perception-related blind tests if not accounted for.

I've found zero issues with the 1080 Ti streaming textures; the gameplay below is at 3440x1440 (video encoded to 4K so folks, if remotely interested, can view it). Settings are Nightmare, minus motion blur (which is a terrible setting for this type of game), no depth of field, and no chromatic aberration, so texture loading can be looked at better. Adaptive sync is used, which came through extremely well in the video with the very consistent 60 fps gameplay - smooth.

I ran up to different objects at times and the 1080 Ti maintained perfect texture sharpness at all times. Slowing 60 fps to 20 fps in video software (not in this video) was very interesting; the game design is utterly fantastic with the depth of detail in the animations, and the textures were always sharp, no mip-map lines. Something you may not appreciate at normal speed, but it is there. Gameplay is at Extreme-Violence. For me the 1080 Ti was utterly flawless in this title.

So I'm really not sure where the slight advantage came from in Kyle's video - it would be nice if it could be nailed down a little better so we know the most important aspect to get the most bang per buck.

 
I've found zero issues with the 1080 Ti streaming textures; the gameplay below is at 3440x1440 (video encoded to 4K so folks, if remotely interested, can view it). Settings are Nightmare, minus motion blur (which is a terrible setting for this type of game), no depth of field, and no chromatic aberration, so texture loading can be looked at better. Adaptive sync is used, which came through extremely well in the video with the very consistent 60 fps gameplay - smooth.

I ran up to different objects at times and the 1080 Ti maintained perfect texture sharpness at all times. Slowing 60 fps to 20 fps in video software (not in this video) was very interesting; the game design is utterly fantastic with the depth of detail in the animations, and the textures were always sharp, no mip-map lines. Something you may not appreciate at normal speed, but it is there. Gameplay is at Extreme-Violence. For me the 1080 Ti was utterly flawless in this title.

So I'm really not sure where the slight advantage came from in Kyle's video - it would be nice if it could be nailed down a little better so we know the most important aspect to get the most bang per buck.


And like I said, others have seen it with decent Pascal cards; also, if you are really that adamant, did you use the same map as the test?
Cheers
 
I've found zero issues with the 1080 Ti streaming textures; the gameplay below is at 3440x1440 (video encoded to 4K so folks, if remotely interested, can view it). Settings are Nightmare, minus motion blur (which is a terrible setting for this type of game), no depth of field, and no chromatic aberration, so texture loading can be looked at better. Adaptive sync is used, which came through extremely well in the video with the very consistent 60 fps gameplay - smooth.

I ran up to different objects at times and the 1080 Ti maintained perfect texture sharpness at all times. Slowing 60 fps to 20 fps in video software (not in this video) was very interesting; the game design is utterly fantastic with the depth of detail in the animations, and the textures were always sharp, no mip-map lines. Something you may not appreciate at normal speed, but it is there. Gameplay is at Extreme-Violence. For me the 1080 Ti was utterly flawless in this title.

So I'm really not sure where the slight advantage came from in Kyle's video - it would be nice if it could be nailed down a little better so we know the most important aspect to get the most bang per buck.



Noko, I notice you locked the fps.
I notice it is locking to 60 fps nearly all of the time and, very briefly, to 59 fps sometimes.
Also, like I said, it may come down to options; in the previous engine it was worse with higher visual options.
So your setup is not exactly how the engine is used by everyone, tbh; that said, the point is that even if you cannot replicate the issue, others have - meaning no conclusion can be made.
Cheers
 