AMD & NVIDIA GPU VR Performance: Island 359 @ [H]

I don't really consider the current batch of "VR" games much more than cheap arcade games. Sure better examples are coming, but they are still hacking together the tools that inherently come with DX12/Vulkan. We are talking about ATW everywhere with APIs that don't currently support the functionality directly.

OK, I take back my previous comment. This quote of yours is the silliest thing ever.

It is clear you have no idea what you are talking about. Do you even own a VIVE/Rift or have you tried any VR games in depth?
 
So if the devs want to perform temporal AA to avoid shader aliasing, reprojection is a bit of a repeat of the step just taken. A knowledgeable dev should be able to do a more performant job than leaving it up to the SDK.

I don't really understand what the hell you're trying to say here. Nobody uses TAA with VR except UE4 based games, and that's only because it doesn't have a forward renderer and can't use the recommended 8xMSAA.

ATW is not trivial and I don't think many devs, particularly the garage indie studios who are making the most content right now, could do it better than Oculus.


I guess I haven't checked specifically, but I'd have to imagine they have some implementation for ATW. It begs the question of why use 45fps and even intervals for async operation. If you are already adjusting for time and motion, a little bit more shouldn't be that big of a deal. That framerate should be flexible with lower rates having more artifacts.

A flexible framerate IS an artifact, and that is exactly why valve chose to only have "90 fps no reprojection" and "45 fps with reprojection" settings. SteamVR does not have ATW at all. They have given GDC talks on the subject, check them out if you want.


With foveated rendering I'm not sure how helpful that stenciling would be. The console approach of scaling resolution or compositing tiles would seem a far better approach than a fixed resolution and stencil. Do the VR kits even support DX12/Vulkan paths yet or are we arguing over DX11/OGL where these techniques require IHV workarounds? Last I checked the async behavior was only available through vendor libraries for the kits as DX12/Vulkan were lacking to provide it more directly.

Fixed foveated rendering on current headsets would reduce the cost of drawing areas outside the lens FOV, but it is less than free. Valve claims the perf gain of the stenciling is around 15% vs drawing the full frame. Valve promotes a dynamic res/IQ solution to maintain 90fps and has implemented it in Source 2 as well as a Unity plugin. It will scale rendering resolution and MSAA levels to maintain 90fps as much as it can.
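For a sense of scale, here's a crude geometric sketch (assuming the lens-visible region is roughly a circle inscribed in a square render target, which overstates the real mask):

#include <cstdio>

// Crude upper bound on how much of a square render target lies outside a
// circular lens-visible region. Real hidden-area meshes aren't perfect
// circles and targets aren't square, which is one reason the practical gain
// (Valve's ~15% figure) is lower than this idealized number.
int main() {
    const double pi = 3.14159265358979;
    double visible_fraction = pi / 4.0;               // inscribed circle vs bounding square, ~0.785
    double masked_fraction  = 1.0 - visible_fraction; // ~0.215

    std::printf("Up to ~%.0f%% of pixels could be masked in this idealized case\n",
                masked_fraction * 100.0);
    return 0;
}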

You can only really put an image on a rift or vive by going through their runtime. Oculus supports dx12, I'm not sure about steamvr. Nobody seems to be rushing in that direction in any case.


That's the point of reprojecting quickly. The goal isn't to reproject as early as possible, but as close as possible to when the frame is actually displayed. So if it took 2ms to reproject, schedule it 3ms prior to presenting a frame to keep latency low. Even a minor reprojection should be more accurate than direct rendering with a higher frametime. If the frame can be rendered in a few ms, direct is fine, but if reprojection is sufficient to hide a lower framerate, that opens up additional performance. Point being, the dev could do a better job with that reprojection than just handing off a frame to the SDK. At the very least you could remove a memcpy from the chain.
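A minimal sketch of that scheduling idea (the costs and margins are illustrative assumptions, not numbers from any runtime):

#include <chrono>
#include <cstdio>

// Hypothetical timing sketch: kick off a late reprojection pass so that it
// finishes just before the frame has to be handed to the display.
int main() {
    const double vsync_period_ms   = 1000.0 / 90.0; // ~11.1 ms per refresh at 90 Hz
    const double reproject_cost_ms = 2.0;           // assumed cost of the reprojection pass
    const double safety_margin_ms  = 1.0;           // slack for scheduling jitter

    // Start the reprojection pass this long after the previous vsync so it
    // completes just before the next one (3 ms before present in this example).
    double start_offset_ms = vsync_period_ms - (reproject_cost_ms + safety_margin_ms);

    std::printf("Kick off reprojection %.1f ms into the %.1f ms frame (%.1f ms before present)\n",
                start_offset_ms, vsync_period_ms, reproject_cost_ms + safety_margin_ms);
    return 0;
}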


Safety net sure, but I'd think reprojecting frequently to get an effective 4ms frame would be superior to >14ms reprojected to 14ms or even just less than 14ms. I'm sure future kits could push those refresh rates a bit. Even at 0ms there will still be a delay on the display.

The oculus runtime does this. It is called synchronous timewarp. You hand off the frame and it reprojects with updated headset info. PSVR also does it. SteamVR does not. I have used PSVR and Vive and did not notice any difference in responsiveness.

Lower motion to photon latency is of course better, but the current methods already put it below the 20ms "good enough" rule of thumb that has been around for a few years. Maybe there will be incentive to lower it in the future when we have hundreds of games running on headsets with 200hz 16k screens with 200 degree FOV and no cables. Maybe then latency will be the weak link in the chain that needs to be addressed further. Until then there are bigger fish to fry.

It is a tradeoff, oculus decided that accuracy is more important than framerate. Valve decided that framerate is the most important. If a developer really wanted to they could modify the default runtime behavior but why? The market is so small that it's far more important for devs to put their limited resources toward content. I don't think it should be the job of a 3 man indie team to fix AMD's shitty VR performance. Check reddit's vr related subs. The biggest complaint people have by orders of magnitude is lack of content. Not lack of scheduled reprojection.


I don't really consider the current batch of "VR" games much more than cheap arcade games. Sure better examples are coming, but they are still hacking together the tools that inherently come with DX12/Vulkan. We are talking about ATW everywhere with APIs that don't currently support the functionality directly.

What tools, specifically? Games in development now are not being held back by reprojection-related limitations of DX11. They use what is provided to them by the Oculus runtime and SteamVR. Even when VR games start switching over to DX12/Vulkan, they will likely still rely on the API to do reprojection (both async and sync) because the built-in methods are beyond "good enough" and they will be able to focus on creating content.
 
'Async' operation? You seem to forget that displays require syncing, especially in VR. You render at refresh, or you drop to intervals - and you don't want to go 45Hz unless you want to make someone sick, but sometimes it has to happen due to the absolutely dynamic and time-sensitive nature of the operation.
Where did I forget that? What's worse, intervals or reprojection from 89Hz? You should never be at 45Hz with VR. You should be close to 90Hz and reprojecting with a temporal method to cover any discrepancies.

Generates intermediate frames to make up for the missing 45hz. Do you *want* the system generating fake frames at all times?
What would you define a fake frame? Any sort of motion blur or temporal effect and your frame is fake by your definition. There should never be 45Hz period. 60Hz and reprojection maybe. To have a true frame would require performance far in excess of what could be "faked" using ATW. The real question here is if 90Hz is sufficient or rendering 90FPS and reprojecting towards 100+ yields a better experience.

I'd like to understand this statement more. For non-VR... Do you feel that the only real games are AoTS and Doom?
No, just that the tech doesn't allow the performance and detail in the scene to quite progress to where it would be fully immersive without a ridiculously overpriced system.

OK, I take back my previous comment. This quote of yours is the silliest thing ever.

It is clear you have no idea what you are talking about. Do you even own a VIVE/Rift or have you tried any VR games in depth?
You're one to talk. I don't own a Vive or Rift because I don't see the use yet. The tech just isn't where I'd want to bother with it yet.

I don't really understand what the hell you're trying to say here. Nobody uses TAA with VR except UE4 based games, and that's only because it doesn't have a forward renderer and can't use the recommended 8xMSAA.

ATW is not trivial and I don't think many devs, particularly the garage indie studios who are making the most content right now, could do it better than Oculus.
Which begs the question why aren't games using temporal AA in place of MSAA within a compute pass? It removes shader aliasing and provides a significant performance boost from not having the MSAA. I'd agree it's not trivial, but more performant methods of interfacing with the kits are available.

It will scale rendering resolution and MSAA levels to maintain 90fps as much as it can.
Again, cutting the MSAA in favor of superior quality and performance options seems like a win to me. My ideal is much closer to what Doom was doing with Vulkan and they do have a VR release in the works.

The biggest complaint people have by orders of magnitude is lack of content. Not lack of scheduled reprojection.
Which gets back to why exactly? Why can't I just fire up fallout with a VR headset and run around? What I'm trying to get at here is that there are alternatives that should increase performance significantly so it's possible. I get that there are some issues with moving the camera for VR, but faster more reliable framerates addresses a huge part of that.
 
Again, cutting the MSAA in favor of superior quality and performance options seems like a win to me. My ideal is much closer to what Doom was doing with Vulkan and they do have a VR release in the works.
Exactly what HMD hardware do you own and what games do you have experience with for more than a few minutes?
 
Which begs the question why aren't games using temporal AA in place of MSAA within a compute pass? It removes shader aliasing and provides a significant performance boost from not having the MSAA. I'd agree it's not trivial, but more performant methods of interfacing with the kits are available.

Again, cutting the MSAA in favor of superior quality and performance options seems like a win to me. My ideal is much closer to what Doom was doing with Vulkan and they do have a VR release in the works.

Which gets back to why exactly? Why can't I just fire up fallout with a VR headset and run around? What I'm trying to get at here is that there are alternatives that should increase performance significantly so it's possible. I get that there are some issues with moving the camera for VR, but faster more reliable framerates addresses a huge part of that.

Years of experimentation says that a default 1.4x native render target (more if performance allows) + 4xMSAA (more if performance allows) provides the best quality experience. Do you really believe nobody thought to try TAA?
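For scale, here's a rough sketch of the pixel throughput those recommended settings imply (assuming "1.4x" is a per-axis multiplier and a 1080x1200-per-eye panel; both are assumptions for illustration):

#include <cstdio>

// Rough pixel-throughput sketch, assuming 1.4x is applied per axis to a
// 1080x1200-per-eye panel and both eyes are rendered at 90 fps.
// Illustrative only; actual recommended targets vary by runtime and headset.
int main() {
    const double panel_w = 1080, panel_h = 1200; // per eye
    const double scale = 1.4;                    // per-axis supersample factor
    const double fps = 90.0;

    double eye_pixels   = (panel_w * scale) * (panel_h * scale); // ~2.54 MP per eye
    double total_pixels = eye_pixels * 2.0;                      // ~5.08 MP per frame
    double per_second   = total_pixels * fps;                    // ~457 MP/s, before MSAA

    std::printf("%.2f MP per frame, %.0f MP/s shaded (4xMSAA adds further cost)\n",
                total_pixels / 1e6, per_second / 1e6);
    return 0;
}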

Fire up Pool Nation VR and then The Lab. That's all the experience you will need to know why MSAA is preferred. TAA looks like hot garbage when there are so few pixels so close to your face.

Most VR games have resolution and quality settings. Super sampling can be forced in most steamvr games with a config file change. What exactly are you asking for that you're not getting? MSAA is really the only viable option because all others look crappy or are even more taxing. Image quality is extremely important in VR. Nobody who regularly uses VR would be in favor of degrading image quality in favor of fancier effects.

You can't just fire up fallout in VR and run around because you would puke everywhere, even if the game was running at 300fps and you were doing 8 different kinds of reprojection. There are wrapper applications that allow this but most games are not at all suited to a 0 effort VR implementation. People have also modded games directly to support VR and the experience, even at 90fps+, is not pleasant for the vast majority of people. Only mutants who never experience motion sickness can handle it for more than a few minutes.

The issues with moving the camera in vr have nothing to do with framerate. If a movement causes discomfort at 90fps, it's going to cause the exact same amount of discomfort at 800fps. The issue is a mismatch between what your eyes see (your "head" moving), and what your brain feels (no movement). It is an issue of game design, not performance. Poor performance itself can cause issues, but that is independent from slapping a VR mode onto existing games.
 
No, just that the tech doesn't allow the performance and detail in the scene to quite progress to where it would be fully immersive without a ridiculously overpriced system.

How the fuck would you know? You don't even own a HMD. I run my Vive just fine on a 980ti.

Also overpriced is a relative term. If you are honestly expecting to run a $600-800 display with a $250 GPU you are insanely naive.


You're one to talk. I don't own a Vive or Rift because I don't see the use yet. The tech just isn't where I'd want to bother with it yet.

I am one to talk because I actually own a Vive. I am not commenting out of my ass like you.

So because you don't see the use yet that automatically means there are no VR games? But once you buy a HMD the tech and games will finally exist? Kind of a myopic approach don't you think?

Perhaps you are not in the right place to make any comments on VR, since it is painfully obvious you have done little research on what is actually available and what is coming.
 
I'm sure they tried it, but found it problematic for one reason or another. I have however seen a lot of devs stating that MSAA is now useless compared to shader AA and not worth the performance hit. Devs I would have expected to try all the available options.

MSAA is not possible with modern deferred lighting engines, that is why most games are using Unity's forward renderer with VR, specifically so they CAN use MSAA. That is how important IQ is and how good MSAA is for VR. Oculus wrote a forward renderer for UE4 because it natively does not have one.


Framerate is largely irrelevant compared to latency.

Wrong. Both are very important. Latency on my HTC Vive without any sort of reprojection is perfectly fine. It technically could be better, but it is not a primary concern to me. Maintaining a high framerate, to avoid judder or falling into reprojection, is very important and is a good part of why I bought a new GPU.

Point of the reprojection I keep arguing for is to minimize the perceived latency even further. Render 90FPS then reproject to respond to head movement more quickly with a very fast pass.

Yes, that is fine, but it is not necessary. It is already more than good enough. Reducing it further may have subtle effects on "presence", or some other things, but it will not be noticeable to 95% of the population.

My understanding was it's more the turn head left and camera continues panning right briefly.

This could have an effect but it is not a primary or even common cause of motion sickness. I don't think anyone has reported latency-based sickness with either major headset. Any sickness can be attributed to either artificial locomotion or too low of a framerate.

An effect current hardware attempts to handle, but what if second generation HMDs had 144Hz refresh rates? Would 90fps reprojected up towards 144Hz help? Rendering 144FPS directly I could see being problematic for any sort of mass market without curtailing the quality of the game.

PSVR has a 120hz screen and supports 120fps input. Most games are being rendered at 60fps with 120hz reprojection. This is a good compromise due to the limited power of PS4, but 60fps -> 120hz is inferior to 90fps -> 90hz. Second gen headsets will likely be 120hz and support 120fps native, 90fps native, or 60fps with reprojection to 120. In any case you want a stable framerate, not just "as good as you can do, and then let ATW fill in the gaps".
 
I am one to talk because I actually own a Vive. I am not commenting out of my ass like you.
I heard the same thing from a buddy that stayed at a Holiday Inn Express.

Also overpriced is a relative term. If you are honestly expecting to run a $600-800 display with a $250 GPU you are insanely naive.
Or marketing is extremely effective in some instances. You should go explain that to Microsoft and Sony.

So because you don't see the use yet that automatically means there are no VR games?
I never said there were no VR games. Just that most that I've seen are still a bit lacking in depth. I commend the progress, but there aren't a whole lot of AAA VR games out there despite all the money companies are dumping into displays.

Wrong. Both are very important.
Framerate is a function of latency. Frames per time vs time.

PSVR has a 120hz screen and supports 120fps input. Most games are being rendered at 60fps with 120hz reprojection. This is a good compromise due to the limited power of PS4, but 60fps -> 120hz is inferior to 90fps -> 90hz. Second gen headsets will likely be 120hz and support 120fps native, 90fps native, or 60fps with reprojection to 120. In any case you want a stable framerate, not just "as good as you can do, and then let ATW fill in the gaps".
Would not adaptive sync/gsync be better? Why drop to 60fps and reproject when you can output 80fps?
 
The argument that VR games lack depth is not entirely unfounded...several are still early access and some are basically just tech demos at this point, but in most cases the prices reflect that (several are free or very cheap).

You have to understand that these HMDs only released to the public in late March and early April of this year...so like, 5 months ago. How deep of an experience would you expect from a VR-only game from the ground up in 5 months?

The games are coming...Obduction is going to be a big one, as well as other experiences like Fallout VR. It's just a matter of time now that the hardware is available.
 
Framerate is a function of latency. Frames per time vs time.

It's not that simple, especially with VR. High framerate allows for a lower latency but does not guarantee it. You can easily have a high framerate game with high motion to photon latency.
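A toy example of how the two can diverge (the pipeline depth and scanout delay are assumptions for illustration, not measurements of any headset):

#include <cstdio>

// Illustrative sketch: a deeply pipelined renderer can hit a high framerate
// while still having high motion-to-photon latency, because several frames
// are in flight between sampling the head pose and the photons leaving the
// panel.
int main() {
    const double frame_time_ms    = 11.1; // 90 fps
    const int    frames_in_flight = 3;    // CPU sim -> GPU render -> compositor/scanout (assumed)
    const double scanout_ms       = 2.0;  // assumed display/persistence delay

    double framerate           = 1000.0 / frame_time_ms;
    double motion_to_photon_ms = frames_in_flight * frame_time_ms + scanout_ms;

    std::printf("%.0f fps, but roughly %.0f ms motion-to-photon latency\n",
                framerate, motion_to_photon_ms);
    return 0;
}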

Would not adaptive sync/gsync be better? Why drop to 60fps and reproject when you can output 80fps?

No. VR systems use low persistence OLED screens. The pixels are only lit for a couple milliseconds of the 11ms between new frames. This has big benefits in image quality, particularly by reducing blur. You can't have variable refresh on a low persistence screen because you would also get variable brightness. For example if you were getting 90 fps, your nominal brightness would be "1", if for whatever reason your framerate dropped to 60 for a few seconds, your brightness would become "0.67" because the screen is off for longer while waiting for the next frame.
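Here's that brightness relationship as a minimal sketch, assuming a fixed ~2 ms lit pulse per refresh (the pulse length is an assumption; perceived brightness scales with the fraction of time the panel is lit):

#include <cstdio>

// Rough model of the perceived-brightness problem described above.
int main() {
    const double pulse_ms = 2.0;                 // panel lit ~2 ms per refresh (assumed)
    const double periods_ms[] = {1000.0 / 90.0,  // 90 Hz (~11.1 ms)
                                 1000.0 / 60.0}; // 60 Hz (~16.7 ms)

    double base = pulse_ms / periods_ms[0];      // brightness at 90 Hz, taken as "1"
    for (double period : periods_ms) {
        double duty = pulse_ms / period;
        std::printf("%.1f ms period: relative brightness %.2f\n",
                    period, duty / base);        // 1.00 at 90 Hz, ~0.67 at 60 Hz
    }
    return 0;
}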
 
Maybe he is still waiting for this to happen?

[AMD "Polaris Uprising" promo banner image]
I'm embarrassed for them
 
If you are looking for VR only you need only 1 Titan. VR does not support SLI yet.
There is absolutely mGPU support in VR, just no games implementing it yet, you will see more soon in UE4 games for sure.
 
For that I can't wait. I voted it as not "the next big thing" simply because I don't think it's ready yet. Games picking up SLi support might make me change my vote.

Why? Is SLI going to make the games better? The screens are low res enough that a single Titan X will be viable for at least the entire first VR generation. There are also inescapable drawbacks to VR SLI. Since you're rendering almost the exact same scene twice, using separate GPUs entirely removes any redundancy-reducing methods like single pass stereo. Additionally, since the first gen headsets all have 1 video input, GPU 2 will have to send its half of the image to GPU 1 before sending the final image to the headset. This adds a non-trivial amount of latency.
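A ballpark of that extra hop, assuming one eye at roughly panel resolution and a typical effective PCIe 3.0 x16 throughput (both assumptions, not measurements):

#include <cstdio>

// Ballpark estimate: GPU 2's half of the stereo image has to cross PCIe to
// GPU 1 before scanout. Numbers are assumptions for illustration only; a
// 1.4x supersampled render target would roughly double the transfer.
int main() {
    const double width = 1080, height = 1200, bytes_per_pixel = 4; // one eye at panel resolution
    const double pcie_gb_per_s = 12.0; // rough effective PCIe 3.0 x16 throughput

    double megabytes   = width * height * bytes_per_pixel / 1e6;       // ~5.2 MB
    double transfer_ms = megabytes / (pcie_gb_per_s * 1e3) * 1e3;      // ~0.4 ms

    std::printf("~%.1f MB per eye, ~%.2f ms transfer time out of an ~11 ms frame budget\n",
                megabytes, transfer_ms);
    return 0;
}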

VR SLI is not like desktop SLI where a well made game can achieve near perfect scaling (SFR vs AFR). Valve has said SLI gives you a 30% boost in performance over a single GPU in VR. Single pass stereo and SMP are said to give 20-30% boosts in single GPU performance. Until we have multiple GPUs on a single card with a shared memory pool, or we have VR headsets with a video connection for each eye, I think mGPU is a waste of time and money.
 
Where did I forget that? What's worse, intervals or reprojection from 89Hz? You should never be at 45Hz with VR. You should be close to 90Hz and reprojecting with a temporal method to cover any discrepancies.

You must be confused - both reprojection and ATW ensure that you do *not* go below 90Hz in terms of effective frames - it keeps the headset fed with frames to always fill the 90Hz sync interval. You still do *not* want to be relying on them any more than necessary. Which brings us to your next point of cluelessness...

What would you define a fake frame? Any sort of motion blur or temporal effect and your frame is fake by your definition. There should never be 45Hz period. 60Hz and reprojection maybe. To have a true frame would require performance far in excess of what could be "faked" using ATW. The real question here is if 90Hz is sufficient or rendering 90FPS and reprojecting towards 100+ yields a better experience.

Any frames generated by reprojection and ATW are 'interpolated' frames, thus are intermediate frames created out of existing ones. This works, but only to an extent - artifacts are inevitable, which breaks realism. Both Valve and Oculus only see it as a safety net and not a 'free performance pass'. Go see what SVP does to videos - it does create interpolated intermediary frames, but in many scenarios, it creates artifacts - ones that you may not mind when watching a video on a flat screen, but in VR, it's going to be jarring AF.

The fact you're implying that 100+fps is better than 90fps for a time-sensitive 90hz display shows gross ignorance, as you either render over refresh (introducing tearing), or you fall victim to Vsync.
 
If no games implement it yet how is mGPU supported and how is what you said different than what I said?

Dunno if you count NV's VR Funhouse as a game, I guess you can, but it's so far the only title that supports VR SLI at this time. Perhaps Kyle doesn't consider it?

It's also UE4, just so you know.
 
Yet there are no AAA VR games publicly available, despite hardware companies investing billions in the technology, so you do have experience with them? I'm all for indie developers, but the current techniques they are employing aren't what I'd consider a good indicator of future VR performance. Nor are they the type of games I'd see myself playing for any significant period of time. The games are coming, but not here yet IMHO.

All I can say is after two minutes of demoing it at a Microsoft Store I saw enough to know I am buying this for Christmas. There's enough content, games and tech demos that I believe I'll get my use out of it based on what's out right now. There are AAA titles in the works but it will take some time.

One thing we might agree on - is if you're happy with what you see now it's only going to get better. I was very impressed with the system as is and will purchase one Cyber Monday.

I am not sure if you're trying to devalue [H]'s VR reviews by saying the tech isn't mature yet... but considering Rifts/Vives sold are already 100k+ and [H] is reviewing some of the best/popular games I don't think that would be a valid opinion. They are reviewing games you can play today with the hardware available. I don't see any other point to your posts. [H]'s reviews helped me an incredible amount in feeling comfortable with my purchase and what to expect for performance.

Only worried about the wife aggro now.
 
Yet there are no AAA VR games publicly available, despite hardware companies investing billions in the technology, so you do have experience with them? I'm all for indie developers, but the current techniques they are employing aren't what I'd consider a good indicator of future VR performance. Nor are they the type of games I'd see myself playing for any significant period of time. The games are coming, but not here yet IMHO.

Well then why is AMD pushing TAM to the masses? You have to understand there must be a reason for this, if AAA VR games are coming out much later.... Now the game industry as a whole is a slow-moving bunch, much like the movie industry but even more so: tried and proven is better than trying something new and failing, because of the upfront costs involved that will never be recouped if it fails. This is why we are seeing many more indie devs working on VR right now; for them the risk is less and they are more nimble. (Remember I stated it was about a quarter or two too early for this type of marketing.)

Just depends what quality of title you consider an actual game.

Indie game devs, there are many good ones out there, and some of them even better than AAA games. It's not about the budget in the end, it's about the gameplay.


That's the point I was getting at. You could render 200fps and the experience would still suck because the frames arrive 10s later. Low latency implies you can render enough frames.

Latency doesn't equal performance. The chance of having higher performance is there with lower latency, but it doesn't imply it either.
 
Yet there are no AAA VR games publicly available, despite hardware companies investing billions in the technology, so you do have experience with them? I'm all for indie developers, but the current techniques they are employing aren't what I'd consider a good indicator of future VR performance. Nor are they the type of games I'd see myself playing for any significant period of time. The games are coming, but not here yet IMHO.

Well, I guess we know what you think of Indie game devs and their content.

I am not sure how long you have been around PC hardware and development, but we have this thing called a "Chicken and Egg" scenario. In the last 20 years, I have not seen a lot of software developed that did not have hardware to support it yet. When you have some actual experience under your belt in VR rather than what you have read, I would be a lot more inclined to accept any of your arguments, but currently those are fallacious at best. I truly thought you were just trolling us at first. I have played a few games and feel good about my opinions on the subject.



As for your thoughts on the hardware side, you seem a bit confused about the current limitations and overall direction of the market. But then again, maybe you know better than all the hardware tech companies behind it. But I doubt it. I really thought you were just trolling us at first....
 
Sounds like they might need to vary the brightness of the display a bit there. Just need a method to differentiate a pixel between white and black now.

I feel like I'm having a stroke every time I read your posts. This is straight up gibberish.

If the argument framerate matters is accurate, would not 80fps rendered a bit dimmer be better? Would seem a better solution than doubling 45fps and shifting the image slightly.

No. A fair number of people don't even know when their Vive has dropped into 45fps with reprojection. We are very sensitive to changes in brightness. It would be obvious and annoying every time. A continually varying brightness would be far worse than a constant low framerate. Assume you have a VR headset with gsync and your game is running at 90fps. Then suddenly there is a CPU spike that causes the next frame to take 50ms. You'd be looking at blackness for 48ms.

Why do you keep throwing out theories without thinking about them for even 15 seconds? People have been working on VR for quite some time, things are the way they are for a reason. Do you think you're the first person to suggest variable refresh rates?
 
There are definitely good indie devs out there, but my problem with this argument is I don't feel the tools currently exist for them to do what they want. They also won't have the cash to develop those tools on their own. Within a year I could see that situation changing substantially though. Unity, UE, and CryEngine are only just getting to the point of having rewritten their engines for the newer APIs. I'm doubtful indie devs, that largely use those engines, have made more progress than that.


Unless you're multi-tasking, being able to supply a frame in a given interval is a pretty good indicator of performance. Low latency and low performance would seem to imply some giant gaps in execution. Gaps that could be removed to increase framerate.


All of the engines you listed have EXCELLENT tools for VR development, I can definitely attest to the three. And they don't need to fully rewrite their graphics core, as all three have done quite a few changes for LLAPIs already. Tweaking here and there is all they have to do now.

Latency and frame rates are only interlinked when you are able to translate latency shifts based on rasterization throughput. Having latency in the shader pipeline doesn't always affect frame rates; it's highly dependent on what you are doing and how you can hide it. So you can have low frame rates and low latency, high frame rates and low latency, low frame rates and high latency, or high frame rates and high latency. The more important thing is to be able to even out your latency and frame rates so visually it doesn't disturb the viewer. Lower latency is always better, though. But again that doesn't translate to automatically having high frame rates. This is what you are seeing with AMD vs. nV cards right now, and we can see even with the potential of lower latency for AMD cards, they can't achieve the frame rates that nV cards can, and this causes increased reprojection and dropped frames.
 
All of the engines you listed have EXCELLENT tools for VR development, I can definitely attest to the three. And they don't need to fully rewrite their graphics core, as all three have done quite a few changes for LLAPIs already. Tweaking here and there is all they have to do now.
Tools maybe, but the software stack has only recently started to become available. DX12 has started landing and Vulkan is generally coming in the next few months from what I've seen. That's not nearly enough time for an indie dev to pick up the engine and put something out there. The key is how many VR games are currently running on DX12/Vulkan?
 
Any game that is in the works with VR or already delivered on those engines should be a simple recompile with some minor tweaks to get it to work on those APIs. I haven't had too many changes to our game with UE4 when using DX12, coming off of coding with C++ on the DX11 version of the UE4 engine. Also, porting game code from one engine to another, as long as we stick to C++, is not much of a problem either. Blueprints don't come across, but we use those mostly for game mechanics and simple surface effects; most of the game logic, AI, etc., are all coded in C++.

The only time I can see developers having issues is if they make changes to the engine code, and most indie developers won't do that if it's avoidable, as it really increases cost and time. Support is also reduced, since you have branched the engine and updated versions of the engine will take more time to integrate the code changes.

VR components are ancillary to the engine code, just like the game code is. Everything is fairly well packaged, so making changes in one area should not affect another unless the developer is looking for a specific thing the engine is incapable of doing.
 
Not if you understand basic electronic fundamentals. Duty cycle and voltage.

I'm a software engineer and I took many physics classes in college. You are talking gibberish.

"Just need a method to differentiate a pixel between white and black now." This is a meaningless combination of words.


I never said anything about varying the effective brightness. I indicated a rather simple method to keep the brightness consistent while varying refresh. Most display circuits buffer what is being displayed and continue scanning it. It should only be an issue if the refresh rate changes prior to receiving the next frame.

????????????????????????????

The whole point of VRR is that the refresh rate can and will change prior to receiving the next frame. You don't know exactly how long the next frame will take to render until it's rendered.

It is literally impossible to have a constant brightness with a variable refresh rate low persistence screen. It's possible on sample and hold LCD screens because you just display the current frame until the next one is ready. This is incompatible with VR because of the need for low persistence. This is basically the same reason that monitors which support ULMB and Gsync don't support both simultaneously.

You have no idea what you're talking about.


Doubt it, but just because it's not currently available doesn't mean it won't be in the near future. If it did exist, the displays would all end up with freesync and only work well on AMD cards, or increase the HMD cost by $100 to get a chip for gsync and only support Nvidia, like all the monitors. That would seem to be a problem on current hardware.

It's possible now, but it's never going to happen with VR because it's a terrible idea.
 
So here's a thought, have a very predictable refresh rate that you can still change. ATW techniques run asynchronously and complete very quickly. They also don't need to occur once per frame. So if you can guarantee output of a frame in <2-3ms, regardless of the rate you're driving the game, why is it so hard to vary that refresh rate? Although under these conditions you'd likely never need to change the rate as you could practically guarantee any reasonable framerate. There's already been one AAA game doing this that I've seen, but I'm not 100% on their VR demo. It's also extremely likely all VR games developed on console will do something similar and then get ported to PC. Microsoft appears to have designed the next shader version to do exactly that. So if the entire industry, save the indie developers, are moving in the direction I suggest, why do you think it's that outlandish?


I need you to explain this. I don't see that at all with SM 6.0. What I see is that using intrinsics makes it easier for certain GPUs, but not all GCN; same problem as before, you still need to hand-optimize within generations and make code changes because not all GCN versions have the same functionality or extensions.

You keep repeating this, but that is NOT true. If you look at the GPUOpen intrinsics they even say you have to make sure which GPUs support what before you go ahead with things. First off, for DX12 those intrinsic extensions aren't even active at a driver level....

So I don't even know where you are getting your information from, because you can't even test that out, nor even see what GCN version supports what.

There are changes outside of the shaders that need to be done too..... The graphics pipeline isn't all shaders! And the other parts of the graphics pipeline affect the shaders being used......

Now if you have been following the SM 6.0 presentations, its target hardware is DX12.1, so what does that mean? I don't know yet because I can't test it out......


About the variable refresh rate, Anarchist, do you know how the human eye works and how the brain processes info, and why variable refresh rates in VR would cause havoc with the way our brain and eyes process the visual data? This is also a reason it's not being done.
 
I need you to explain this. I don't see that at all with SM 6.0. What I see is that using intrinsics makes it easier for certain GPUs, but not all GCN; same problem as before, you still need to hand-optimize within generations and make code changes because not all GCN versions have the same functionality or extensions.
Well then I guess those cards won't support SM6.0 along with Pascal. I doubt all the intrinsics are being made available, but the functionality provided should be fairly common for most cards. Nvidia can't support some of the functions directly, but they could emulate them with acceptable performance to be compliant. In just about all cases I'd expect performance to be better than the alternative and aid GCN console code in being ported to PC. The biggest porting concern I'd expect to be the different warp sizes.

You keep repeating this, but that is NOT true, if you look at OpenGPU intrinsic they even say you have to make sure which GPU's supports what before you go ahead with things. First off intrinsic for DX12 those extensions aren't even active at a driver level....
They will be; that seems to be the point of the update. What exactly do they say about intrinsics within SM6? Just because it's not true now doesn't mean it won't be with an update. Things change. XB1 is literally the testbed for SM6 and I'm sure they planned on supporting more than just 7000 series AMD cards with the release. Some of those intrinsics must have been re-added to Polaris if that's what's being used for Scorpio and your argument is accurate.

About the variable refresh rate, Anarchist, do you know how the human eye works and how the brain processes info and why variable refresh rates in VR would cause havoc with the way our brain and eyes process the visual data? This is also a reason its not being done.
It aggregates pulses over time. Next gen HMDs are apparently pushing 120Hz, so there is some room for different refresh rates, and higher is apparently better. Maybe these games should be tested to see which cards maintain 120fps? Bottom line, it will be simpler without a variable framerate because of the whole ATW thing. That's still reliant on being able to run an asynchronous operation effectively.
 
Well then I guess those cards won't support SM6.0 along with Pascal. I doubt all the intrinsics are being made available, but the functionality provided should be fairly common for most cards. Nvidia can't support some of the functions directly, but they could emulate them with acceptable performance to be compliant. In just about all cases I'd expect performance to be better than the alternative and aid GCN console code in being ported to PC. The biggest porting concern I'd expect to be the different warp sizes.

Sorry updated my post a little bit with more info.

No, AMD's GPUOpen was specific to its own hardware too. Pascal will support all of it because it's a DX12.1 card; no card or GPU on AMD's side will support the full feature set on release.

They will be; that seems to be the point of the update. What exactly do they say about intrinsics within SM6? Just because it's not true now doesn't mean it won't be with an update. Things change. XB1 is literally the testbed for SM6 and I'm sure they planned on supporting more than just 7000 series AMD cards with the release. Some of those intrinsics must have been re-added to Polaris if that's what's being used for Scorpio and your argument is accurate.

We don't know at this point; it was never talked about as of yet by MS. Yeah, they will support more than the 7000 series, I agree, but they did go into deprecating features and whatnot, nothing specific though, so some older features might not work.


It aggregates pulses over time. Next gen HMDs are apparently pushing 120Hz, so there is some room for different refresh rates, and higher is apparently better. Maybe these games should be tested to see which cards maintain 120fps? Bottom line, it will be simpler without a variable framerate because of the whole ATW thing. That's still reliant on being able to run an asynchronous operation effectively.

The eye can't process anything over 24 FPS right? That is a visual limit of our eyes, but the brain on the other hand can. Now we might not think it can, but the problem is the visual data coming from our eyes is not one sensor, it's 3 total sensors, but some of them are limited, and this is why 24FPS is pretty much good for TV, monitors, etc. With VR you need even more, because now there is no outside reference to help set the processing on the brain's side. This topic is not going to be easy to talk about without a deep understanding of the anatomy and physiology of how our eyes and nervous system (brain and optic nerve) function. I did a cursory look over VR and could see it would cause problems, hence why people still get sick/dizzy after too long of use.

Now if you want to add variable frame rates in, just imagine what that would do to our brain processing that information, or trying to, with a set locale?
 
That won't confuse people (435, 430). That's like nVidia naming the pascal Titan the Titan X. Oh waaiiit....
Curious what the differences between a R5 435 and R9 435 are.

Pascal will support all of it because it's a DX12.1 card; no other card or GPU on AMD's side will support the full feature set on release.
The presentations I saw I thought indicated it would be a requirement for DX12.0, although nothing is official to the best of my knowledge. Should be features already largely supported through the compute APIs. I'd expect all cards to work, but it likely maps a bit better to GCN. The difference would be minimal at best though. The cross-lane stuff would seem to be the big addition, and all the hardware should support that. My understanding was it brings GCN intrinsics to PC without requiring the explicit checks and IHV-specific libraries. Bottom line, it should be better for everyone and likely only affects compute shaders.

The eye can't process anything over 24 FPS right? That is a visual limit of our eyes, but the brain on the other hand can. This topic is not going to be easy to talk about without a deep understanding of the anatomy and physiology of how our eyes and nervous system (brain and optic nerve) function. I did a cursory look over VR and could see it would cause problems, hence why people still get sick/dizzy after too long of use.
Not in a meaningful way but should be distinguishable up to around 50Hz. Higher than that and it's likely a more autonomic response which could still affect sickness. For instance you can blink without thinking about it if something approaches your eye. Get a bright light and you likely see spots. Running average is the best analogy I can come up with, but still sensitive to higher frequencies.
 
The presentations I saw I thought indicated it would be a requirement for DX12.0, although nothing is official to the best of my knowledge. Should be features already largely supported through the compute APIs. I'd expect all cards to work, but it likely maps a bit better to GCN. The difference would be minimal at best though. The cross-lane stuff would seem to be the big addition, and all the hardware should support that. My understanding was it brings GCN intrinsics to PC without requiring the explicit checks and IHV-specific libraries. Bottom line, it should be better for everyone and likely only affects compute shaders.


MS GDC talks this year stated 12.1.

Not in a meaningful way but should be distinguishable up to around 50Hz. Higher than that and it's likely a more autonomic response which could still affect sickness. For instance you can blink without thinking about it if something approaches your eye. Get a bright light and you likely see spots. Running average is the best analogy I can come up with, but still sensitive to higher frequencies.

Right, but the brain can do more; there is a lot more data the brain can process intuitively without "real" sight. And this is where the problems with VR come in. The way we grow up, the way the human body is, is not something that can be easily "duped" lol by VR. Our eyes and brain are expecting data and references from the outside world, without us even knowing about it. It's like a person that is blind in one eye and has no depth perception: the brain still processes it, or tries to, but yeah, it fails lol.

In the real world, there is no such thing as refresh rates, frame rates, etc., but the brain can pick those up without us knowing it. Why do you think some people get headaches when the FPS is lower than 60? Most don't, but it happens.

So adding things like variable refresh rates, although it sounds great since it works on monitors: monitors have a set locale the brain can process, so it can figure out what is going on. With VR you don't have that anymore......
 
MS GDC talks this year stated 12.1.
Maybe, but that's gotta be awfully difficult to test on XB1. Just seems odd MS would update to code that neither of their consoles currently support when trying to bridge PC and XB.

The fact that future HMDs are looking at 120Hz would seem telling and that's down around 8ms frame times. So without some interesting adaptations we won't see any progress on VR hardware as they simply spend all their time running the same content faster.
 
Pretty sure razor was joking when he said 24Hz. A while back [H] linked an article where it said the center of your viewing and in full color it's something around 60Hz or a little higher. Black and white is higher and your peripheral can sense over 150Hz. All "if I recall correctly" but yeah, refresh rate is not black and white.
 
Well obviously not from a very good school. Being an electrical engineer and having actually designed circuits to drive a display, you can vary the effective brightness by changing the duty cycle (refresh rate) or voltage (color/intensity). It's pretty simple stuff in just about any display technology you will find. It's no different than alternating between black and white pixels to make gray. You indicated changing the refresh rate (duty cycle) would affect the brightness and I suggested dimming the screen to counteract the effect. Changing the brightness of a pixel should be a pretty simple step for most graphics and display hardware. Most monitors I've seen even have an option to do it. It's as simple as scaling an 8-bit value. Just because you don't understand this doesn't mean it's gibberish, and you've demonstrated a lot of ignorance here.

"Just need a method to differentiate a pixel between white and black now."? That sentence still makes no sense in that context. Black and white pixels by definition are differentiated.

Your suggestion still doesn't fix the problem. How would dimming the screen counteract the unpredictable brightness of low persistence VRR?

Here's a simplified timeline of a hypothetical VRR low persistence screen, '-' means 1ms of no image displayed, '+' means 1ms of displaying a frame

---------++---------++---------++---------++-------------------------------------------++

How does your suggestion fix the large gap between the last two frames (the last frame took longer to render because a big explosion occurred on screen or whatever)? The screen will be off for way longer while waiting for the next frame, so the perceived brightness will drop.

When do you dim the screen brightness? For how long? When do you bring it back up? How is dimming the screen fixing the problem of variable brightness???

How is any of this better than just maintaining a framerate above 90?
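To illustrate the question being asked here, a hypothetical sketch of what "compensating" would actually require (all numbers are assumptions): to hold perceived brightness constant under VRR, the lit pulse would have to scale with the frame period, but the period of the current frame isn't known until the next frame actually arrives, so there is nothing to scale against in time.

#include <cstdio>

// Hypothetical compensation sketch. The frame periods below are only known
// after the fact, and a longer pulse means more persistence blur anyway.
int main() {
    const double target_duty = 2.0 / 11.1;   // duty cycle you'd want (2 ms pulse at 90 Hz)
    const double frame_periods_ms[] = {11.1, 11.1, 16.7, 45.0};

    for (double period : frame_periods_ms) {
        double pulse_needed_ms = target_duty * period; // what you'd have needed to show
        std::printf("%.1f ms frame: would have needed a %.1f ms pulse (longer pulse = more persistence blur)\n",
                    period, pulse_needed_ms);
    }
    return 0;
}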


So here's a thought, have a very predictable refresh rate that you can still change. ATW techniques run asynchronously and complete very quickly. They also don't need to occur once per frame. So if you can guarantee output of a frame in <2-3ms, regardless of the rate you're driving the game, why is it so hard to vary that refresh rate? Although under these conditions you'd likely never need to change the rate as you could practically guarantee any reasonable framerate. There's already been one AAA game doing this that I've seen, but I'm not 100% on their VR demo. It's also extremely likely all VR games developed on console will do something similar and then get ported to PC. Microsoft appears to have designed the next shader version to do exactly that. So if the entire industry, save the indie developers, are moving in the direction I suggest, why do you think it's that outlandish?

Yet another thought that makes no sense. How is the refresh rate predictable if you can change it?

If the refresh rate changes on a low persistence screen, the brightness will fluctuate, and that is bad, and nobody is ever going to do that. I don't understand what is so hard to accept about that.

I don't at all understand the rest of this or how it is beneficial to anyone. You want to guarantee a frame in <2-3ms? Meaning what, you're going to be doing ATW 350-500 times per second? How does that help? How does that make it less bad to vary the refresh rate on the displays?

I'm not sure what AAA game you're referring to and have no idea what you mean in regards to console VR games. PSVR's refresh rate is fixed in a game just like every other headset. The only difference is that it can run 90hz or 120hz depending on the game.
 
The eye can't process anything over 24 FPS right? That is a visual limit of our eyes, but the brain on the other hand can.

You got that backwards. 24 is the LOWEST fps where the brain will see it as continual motion rather than a series of pictures. We can process way higher rates than that.
 