Oculus Rift Unveils New Prototype at CES

Bullshit.

Actually not bullshit.

They are taking advantage of a special type of display that Samsung has the rights to. It's used in the Oculus to solve the latency issues that were giving people motion sickness. Using these displays, they've solved the motion sickness issue.
 
I thought it required two eyes though? In reality, everything is not 4 inches away from your face.
 
The rift still works, even if you have vision impairment.

It won't suddenly grant you the power of sight like Geordi La Forge's visor, but you should be able to see the game world as well as you see reality.

The above post was directed at this.
 
Doctor botched my eye when I was an infant, severed the retina so I've no vision in my left eye.
If it wasn't for that, I'd LOVE to have a rift.
 
I am much more interested in blur elimination. They said blur was actually inducing nausea, which is why they couldn't sit back on the issue and had to make it a priority.
A large factor in any strobing is the rate at which it strobes. A real-world strobe light strobes slowly to purposefully show a staggered effect, which is why you can see the blackout state of a real-world strobe light very noticeably and why they are annoying to your eyes. They are a bad example - that's like comparing a flip-book animation to a movie. Strobing to eliminate blur on a display demands high refresh rates, 100 Hz+. Hopefully these are 120 Hz refresh displays. The best-case scenario is that you have 100-120+ fps fed to a 100-120 Hz display. A bunch of people already do this with LightBoost and Eizo FG2421 backlight-strobing displays. Perhaps in VR the framerate becomes more important, but then again I don't know what their testing setup's refresh rate and framerate were. Low refresh rates would be bad and would produce bad stroboscopic effects like a real-world strobe light. High refresh rates are a must.
The same look-at-the-floor-and-spin-your-head-up-to-the-wall scenario without strobing would make a blur-smear at 60 Hz, which would make a lot of people nauseous from a VR perspective. Would you rather the whole world "slush"-smear every time you move your head, losing all detail, making your eyes strain to focus, and inducing nausea, than use high-Hz strobing, which has already been shown to be effective in certain desktop monitors?
I was very disappointed when I had been looking into the 120 Hz backlight-strobing monitor tech that has been coming out for desktops and found that everything I heard about the Oculus pointed to no strobing and only 60 Hz. These upgrades to OLED, "high Hz" (120 Hz??), and backlight strobing/screen blanking at high refresh rates are great news to me and many others. They are also working on getting the input lag down under 20 ms.
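To put rough numbers on the blur-smear claim above, here's a quick back-of-envelope sketch (the head speed, field of view, and pixels-per-degree figures are my own assumptions, not Oculus specs):

```python
# Back-of-envelope sketch: eye-tracking motion blur on a head-mounted display.
# All numbers are illustrative assumptions, not Oculus specs.

def smear_pixels(head_speed_deg_s, pixels_per_deg, persistence_s):
    """Blur smear (in pixels) while the eye tracks a moving scene:
    smear = angular speed * pixel density * time each frame stays lit."""
    return head_speed_deg_s * pixels_per_deg * persistence_s

head_speed = 120.0   # deg/s, a quick floor-to-wall head turn (assumed)
ppd = 960 / 90.0     # ~10.7 pixels/deg (960 px over a ~90 deg FOV per eye)

# Full persistence (sample-and-hold): each frame stays lit the whole interval.
for hz in (60, 120):
    print(f"{hz} Hz sample-and-hold: {smear_pixels(head_speed, ppd, 1/hz):.1f} px of smear")

# Low persistence / strobing: frame lit only ~2 ms, then blanked.
print(f"120 Hz strobed (2 ms flash): {smear_pixels(head_speed, ppd, 0.002):.1f} px of smear")
```

Even at 120 Hz, sample-and-hold leaves several times more smear than a short strobe flash, so refresh rate alone isn't the whole story.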
This shows that the PC will blow away the consoles in the VR segment going forward during a long console life cycle. Consoles can only output 60 Hz. There will surely be future revisions of the Rift, and competitors, at much higher resolution as well, since VR really demands it, and consoles won't be able to do high res (they can barely do 1080p now, even with muddier textures, lower view distances, and fewer objects viewable in the distance than the PC). PC mod support and communities promise a much more open authoring environment too.
 
Doctor botched my eye when I was an infant, severed the retina so I've no vision in my left eye.
If it wasn't for that, I'd LOVE to have a rift.

Haha, you're in the same boat as me. Sucks, right? I've been blind in my right eye since birth.
 
Haha, you're in the same boat as me. Sucks, right? I've been blind in my right eye since birth.

REALLY sucks. All I ever wanted in life was to be a pilot but you've got to have vision in both eyes for it.
Helped develop an almost inhuman sense of hearing though. Probably why I love music so much, I can hear all the little things most people miss.
Not really worth the tradeoff IMHO.
 
Reading that Valve article, it is from July.. I'm not sure he was even testing his "game prototype" on the current "Crystal Cove" very-high-refresh-rate OLED version of the Rift. He does mention that they were experimenting with backlight strobing. He gives no details as to the refresh rate his prototype game is capable of being set to, or what refresh rate the iteration of the Rift he is using is capable of.
He also says, "It's unclear whether the visual instability effect is a significant problem, since in our experiments it's less pronounced or undetectable with normal game content."
 
REALLY sucks. All I ever wanted in life was to be a pilot but you've got to have vision in both eyes for it.
Helped develop an almost inhuman sense of hearing though. Probably why I love music so much, I can hear all the little things most people miss.
Not really worth the tradeoff IMHO.

Interesting. I didn't get anything. :p I was thinking about going into the Army for a few years to earn those awesome benefits, but you also need both eyes for that.
 
REALLY sucks. All I ever wanted in life was to be a pilot but you've got to have vision in both eyes for it.
Helped develop an almost inhuman sense of hearing though. Probably why I love music so much, I can hear all the little things most people miss.
Not really worth the tradeoff IMHO.

Considered becoming a sonar operator for the Navy? :D
 
I thought it required two eyes though? In reality, everything is not 4 inches away from your face.
Does seeing reality require two eyes? Nope, you just don't get stereoscopic depth perception. Your brain can still get some semblance of depth perception using angular diameter and motion parallax.

Same thing that works for seeing the real world works for seeing the virtual world inside the Oculus Rift.
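If anyone's curious how much distance information angular size alone carries, here's a tiny sketch (the object size and distances are just illustrative):

```python
import math

def angular_size_deg(width_m, distance_m):
    """Angle subtended at the eye by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# A 1.8 m tall figure shrinks predictably with distance - one of the
# monocular depth cues (motion parallax is the other big one).
for d in (2, 5, 10, 20):
    print(f"at {d:2d} m: {angular_size_deg(1.8, d):5.1f} degrees")
```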
 
Strobing to eliminate blur on a display demands high refresh rates, 100 Hz+. Hopefully these are 120 Hz refresh displays. The best-case scenario is that you have 100-120+ fps fed to a 100-120 Hz display. A bunch of people already do this with LightBoost and Eizo FG2421 backlight-strobing displays.
100 Hz isn't nearly high enough for LEDs...

Phosphor and CCFL take a few moments to fade out, so they can get away with slow cycle rates (the bulb never goes entirely dark). LEDs are practically instant-on, instant-off; they need MUCH higher frequencies (closer to 1000 Hz) if you want to totally eliminate stroboscopic artifacts.
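A toy model of why that is, if anyone wants to play with the numbers - phosphor keeps some light on screen between flashes, while an LED pulse goes fully dark (the decay constant and pulse width below are assumptions for illustration, not measured values):

```python
import math

def phosphor_level(t_s, refresh_hz, decay_ms=4.0):
    """Phosphor/CCFL-style output: excited each refresh, then exponential decay."""
    period = 1.0 / refresh_hz
    t_since_flash = t_s % period
    return math.exp(-t_since_flash / (decay_ms / 1000.0))

def led_level(t_s, refresh_hz, pulse_ms=1.0):
    """LED-style output: full-on pulse, then hard off until the next refresh."""
    period = 1.0 / refresh_hz
    return 1.0 if (t_s % period) < pulse_ms / 1000.0 else 0.0

# Sample one 100 Hz refresh interval: the phosphor never hits zero, while
# the LED spends most of the interval fully dark - a deeper light/dark
# swing, hence more visible strobing at the same frequency.
samples = [i * 0.001 for i in range(10)]  # 0..9 ms
print("phosphor:", [round(phosphor_level(t, 100), 2) for t in samples])
print("led:     ", [round(led_level(t, 100), 2) for t in samples])
```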
 
Valve article from last year:
Not long ago, I wrote a simple prototype two-player VR game that was set in a virtual box room. For the walls, ceiling, and floor of the room, I used factory wall textures, which were okay, but didn’t add much to the experience. Then Aaron Nicholls suggested that it would be better if the room was more Tron-like, so I changed the texture to a grid of bright, thin green lines on black, as if the players were in a cage made of a glowing green coarse mesh.

This is the worst scenario for many screen aberrations (including ghosting and overshoot) - extreme bright lines on black. I remember playing Rock Band on a VA TV, and the bright glowing lyrics on the black background in the karaoke-style gameplay of the singer would look terrible, ghosting and blurring. When your eyes see bright on black you get some weird effects besides - like viewing a black skull on white paper or vice versa, then looking away to a white ceiling shows the skull burned into your retinas.
Again, the article is from July? The testing scenario from when? And it doesn't say if he is using the Crystal Cove sub-millisecond-pixel-switching OLED + very-high-refresh-rate and strobing version of the Rift (which I seriously doubt). Setting up some "prototype" Lite-Brite neon-grid-on-black sample room game environment, then staring into the corner and quickly to the middle of the bright-on-black grid wall, is grasping at straws and nit-picking.
Valve article from last year:
It’s unclear whether the visual instability effect is a significant problem, since in our experiments it’s less pronounced or undetectable with normal game content.

From the CES 2014 interview with Palmer Luckey and Nate Mitchell:
Interviewer (Ben): Why don't you start out with a quick explanation of what low persistence is, why you are using it, and why it is better.

Oculus Devs: "I'll start back to front. We are using low persistence because it allows us to eliminate motion blur, reduce latency, and make the scene appear very stable for the user. The best way to think about it is.. with a full persistence frame, you render a frame, you put it on the screen, it shows it on the screen, and then it stays on the screen until the next frame comes. Then it starts all over again. The problem with that is a frame is only correct, in the right place, when it's right here <motions with hands together to indicate a short middle period>. For the rest of the scene, it's kind of like garbage data. It's like a broken clock - you know how a broken clock is right occasionally, when the hands move to the right place - most of the time it's showing an old image, an old piece of data. What we're doing with our low persistence display is rendering the image, sending it to the screen, and we show it for a tiny period of time - then we blank the display. So it's black until we have another image. So we're only showing the image when we have a correct, up-to-date frame from the computer to show. If you do that at a high enough frame rate, you don't perceive it as multiple discrete frames, you perceive it as continuous motion, but because you have no garbage data - you know, nothing for your retina to try to focus on except correct data - you end up with a crystal clear image.
And part of that - one of the missing features that was required to do low persistence is pixel switching time. We needed a sub-millisecond pixel switching time, which we get from OLED technology, to allow us to do all of this.
- And to be clear, pixel switching time is a big factor in motion blur. In fact, we used to think it was an even bigger factor, and we drove pixel switching time down, down, down.. and.. once we started experimenting with displays that allowed us to switch almost instantly, getting completely rid of the pixel switching time, it turns out that there are a lot more artifacts, like judder, that look like motion blur even when the panel is perfect. When you put our panel under a high-speed camera, every single frame would be perfectly crystal clear, whereas on an LCD you would see a smeared, blurry image because the pixels are switching. For us, it's always crystal clear.. this motion blur is all in your brain.

That's probably the biggest update we've made to this prototype. It's a major breakthrough in terms of immersion, comfort, and actual visual stability of the scene. Now you can actually read text - and not only because of the high resolution. With text in the world before, even if you were moving your head just a little bit, which most of us naturally are as we look around a scene, the text would just smear - very heavily. Now with low persistence, all of the objects feel a lot more visually stable and locked in place.
It's worth noting that this technology will continue to be important for VR for a very long time. It's not a hack that gets around some issue we have right now. Until we get to displays and engines that can render at 1000 frames a second and display at 1000 Hz - basically displaying full persistence frames that are as short as our low persistence frames - there's going to be no other way to get a good VR experience. It's really the only way that's known. And Valve's Michael Abrash has a blog post from about a year ago talking about the potential for low persistence to solve these issues. Right now there is no other way that we know of.
Interviewer: "And although - everyone is talking about the positional tracking, and that's awesome, and everybody's been looking forward to that, you guys were telling us earlier that you think low persistence is perhaps a bigger, more important breakthrough for now that positional tracking".

A: "I mean, position tracking it's really good and it's important but it's something we've always known we needed to have and so we were going to have to build it, it was an expected. That's obvious for any VR system. Any VR system where you're trying to simulate reality, you want to simulate motion as accurately as possible. We weren't able to do it in the past, but we knew it was going to happen for consumers. So low persistence is a breakthrough. In that it was unexpected..it was, we did not expect to see the kind of jump in quality that we saw - where we said this isn't just one of those "every little bit helps" , it is a killer - it completely changes the way that, it completely changes the experience.. fundamentally."
Interviewer: "Now I want to backtrack slightly to low persistence. Earlier I think you guys had mentioned that lower persistence, in addition to bringing up the visual fidelity, reduces latency is that correct?"

A: "Kindof. Well, it that they all work together. You can't do low persistence without really fast pixel switching time, and fast pixel switching time also allows us to have really low latency. Um, because as soon as the panel gets the frame, we're displaying it and it's instantly showing the correct image."
"So I think if you look at the motion to photons latency pipeline that we've talked about alot, pixel switching time has always been one of the key elements in there, in that there is this major delay as the pixels change color. Now that we've eliminated the pixel switching time because of the oled technology - it's not that low persistence is getting us even lower latency, but all together - I think what's interesting is that at E3 when we showed the HD prototypes, those demos were running between you know, 50 to 70 ms of latency for the ue4 elemental demo. Here at ces 2014 we're showing the epic strategy VR demo and E valkyrie and both of those demos are running between 30 and 40ms of latency. So that's a pretty dramatic reduction, you know, in terms of the target goal which is really delivering consumer V1 under 20ms of latency. "
"That is a goal we'll be able to pull off."
 
Does seeing reality require two eyes? Nope, you just don't get stereoscopic depth perception. Your brain can still get some semblance of depth perception using angular diameter and motion parallax.

Same thing that works for seeing the real world works for seeing the virtual world inside the Oculus Rift.

Problem is, when you see out of one eye, you're really missing a decent chunk (just over 1/3, due to the center overlap from your eyes), so there's going to be things a one-eyed person isn't going to see, whereas we can get the whole image on a PC monitor or TV.

It'd be money better spent for me to upgrade to a larger monitor than get a Rift.
 
Problem is, when you see out of one eye, you're really missing a decent chunk (just over 1/3, due to the center overlap from your eyes), so there's going to be things a one-eyed person isn't going to see, whereas we can get the whole image on a PC monitor or TV.
Right, just like how they see the real world...

It'd be money better spent for me to upgrade to a larger monitor than get a Rift.
How so? A rift allows a MUCH higher FOV than a monitor.

Instead of sticking everything in a tiny square in the middle of your central vision, you get to use every last degree of vision that your eye(s) allow.

Sounds like a rift would be a much better investment for the vision-impaired. You get to squeeze out every last bit of value from that one good eye.
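For comparison, a quick sketch of how little of your field of view a desktop monitor actually covers (the monitor width and viewing distance are assumptions for a typical 27-inch setup):

```python
import math

def monitor_hfov_deg(width_m, distance_m):
    """Horizontal field of view a flat screen subtends at the eye."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# A 27-inch 16:9 monitor is ~0.60 m wide; at a typical 0.6 m viewing
# distance it fills only ~53 degrees, versus roughly 90 degrees per eye
# in the Rift.
print(f"{monitor_hfov_deg(0.60, 0.6):.0f} degrees")
```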
 
I don't think it would work fully for him since it uses a separate image for each eye and combines them.

http://www.roadtovr.com/what-does-it-look-like-in-the-oculus-rift/

The Oculus Rift has a horizontal field of view (HFoV) of approximately 90 degrees in each lens. Total horizontal field of view for humans is close to 180 degrees. The view inside the Rift appears to significantly surround the user (though not completely).

The panel's resolution is expected to be 1920 x 1080 (effectively 960 x 1080 for each eye) when the consumer version launches. Each eye will have its own screen, which simulates normal human vision to give the user a true 3D stereoscopic experience.

It is because of the fovea that it makes sense to use variable acuity resolution (VAR) on the Oculus Rift.

VAR means that the resolution is not consistent across the display. Technically, the Rift's display has consistent resolution, but the lenses warp the image, causing a compression of pixels toward the center. This doesn't noticeably warp the image in the Rift because the corresponding virtual scene is warped equally in the opposite direction so that it is projected correctly.

The result is a higher pixel density in the center of the screen and a lower pixel density as you approach the edges of the screen. VAR helps prevent ‘wasted’ detail in your peripheral vision by bunching pixels closer in the area seen by your fovea.
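For the curious, here's a minimal sketch of that pre-warp idea - a radial barrel distortion the renderer applies so the lens's pincushion distortion undoes it. The coefficients are made up for illustration, not the Rift SDK's calibrated values:

```python
# Minimal sketch of radial pre-warp: the renderer barrel-distorts the image
# so the lens's opposite (pincushion) distortion cancels it out, leaving
# more of the panel's pixels concentrated where the fovea looks.

def barrel_warp(x, y, k1=0.22, k2=0.24):
    """Map a point in normalized image space (origin at the lens center)
    to its pre-distorted position; k1/k2 are illustrative coefficients."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Points near the center barely move; points near the edge get pushed
# out much further - i.e. fewer panel pixels per degree at the edges.
for r in (0.0, 0.25, 0.5, 0.75, 1.0):
    wx, _ = barrel_warp(r, 0.0)
    print(f"r = {r:.2f} -> {wx:.3f}")
```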

There are some single-display VR-style headsets coming out, and competitors to the Rift eventually, so there might still be some other headset options for him at some point that don't use a different image for each eye relying on combining both eyes' vision.

http://www.kotaku.com.au/2013/12/oculus-rift-your-competition-has-arrived-and-will-cost-500/

http://gamerant.com/gameface-vr-headset-prototype-ces-2014/

http://www.geek.com/games/sonys-new-oculus-rift-competitor-has-a-750-inch-virtual-screen-1581537/

Steven Spielberg recently said, “We’re never going to be totally immersive as long as we’re looking at a square, whether it’s a movie screen or a computer screen. We’ve got to get rid of that and put the player inside the experience, where no matter where you look you’re surrounded by a three-dimensional experience. That’s the future.”

http://www.nytimes.com/2013/06/23/opinion/sunday/movies-of-the-future.html?_r=0

http://conditionone.com/blog/zero-point-coming-soon/
 