So when are we going to see Iron Man-level hologram computer interfaces?

thegreywizard

So VR technology has finally arrived, and it now has the ability to be wireless and 4K (the pinnacle is probably true 8-10K, at least over the next 5 years).

When are we going to see Iron Man-level 3D hologram computer interfaces and technology?

And when will we see Minority Report-style gesture interfacing? I know there are a few devices out there, like Perception Neuron, which is super small and tracks movement, and things like VRGluv, which allow full VR interaction. When are these types of devices going to be perfected for use within VR and made available to the average-to-high-end computer consumer for productivity and multitasking?
 

Probably never.

It's not that the *technology* to deliver these types of interfaces won't exist; I think our current trajectory clearly indicates that we'll have the capability to do these types of VR interfaces in the near-ish future.

The problem is your last bit: when will it be used for consumer productivity and multitasking. Here is where it stumbles, because without tactile feedback, muscle memory is significantly harder to develop. Think how fast and precisely you can type (or, if you're a slow typist, how fast your friends who aren't can go), and now compare that to how fast you can use a touchscreen interface. And a touchscreen still gives *SOME* tactile feedback - you are tapping against a device, one that at least sometimes vibrates as your inputs are received - and even with that, most folks are far slower. The compensating method cell phones use is of course autocorrect, and oh boy, that's its own ball of wax.

Imagine you were just waving your hand in the air trying to click a button; what exactly is the feedback mechanism for when you've successfully pressed it? It'd be an audio cue, a visual indicator, or, if you were wearing some kind of VR interactivity gloves, maybe some simulated tactile buzzing on the tips of your fingers - but nothing as immediately recognizable as the actual click of a button. Think of your cellphone in your pocket and the phantom-vibration syndrome that seems to collectively attack everyone, where occasionally you feel your phone vibrate but when you check, it doesn't look like anything actually happened. Now imagine that level of false feedback across your entire computing experience. Ugh.
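Just to illustrate the substitute feedback loop being described, here's a minimal sketch of how a mid-air "button press" might have to be confirmed with bundled audio/visual/haptic cues instead of a real click. All names, thresholds, and the depth-based trigger are hypothetical - this is not any real glove's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackEvent:
    """The stand-in cues a gesture UI fires because there is no physical click."""
    audio_cue: str        # e.g. a click sound played on press
    visual_flash: bool    # highlight the virtual button
    haptic_pulse_ms: int  # buzz duration on the fingertip actuator

def confirm_press(finger_depth_mm: float, threshold_mm: float = 8.0) -> Optional[FeedbackEvent]:
    """Once the tracked fingertip has pushed 'through' the virtual button
    past some depth threshold, emit the feedback bundle; otherwise nothing.
    The user only knows the press registered if these cues arrive."""
    if finger_depth_mm >= threshold_mm:
        return FeedbackEvent(audio_cue="click.wav", visual_flash=True, haptic_pulse_ms=20)
    return None
```

The point of the sketch is the asymmetry: a physical button confirms itself, while here every confirmation is a separate signal the system has to synthesize - and can get wrong.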

The most impressive part of the Iron Man and Minority Report interfaces isn't the mechanics of it; it's that the computers are able to interpret and process the inputs they receive from their users at an almost intuitive or anticipatory level. The display tech is near to us; the intuitive, interpretive portion of the interface is much farther away, if not complete fiction. And it's this last bit that overcomes the inherently imprecise nature of these input formats. Unless we can figure out an equivalent, I think the old standbys of hardware interfaces (buttons, mice, keyboards, joysticks, etc.) will continue to reign supreme.
 
If I was typing up a letter or using a word processor, I think I'd rather just sit down. In fact, standing at the computer for hours, hitting or grasping air, may become very tedious. Still, being able to interact with information and walk around it would be very useful and powerful. Agreed that the interactivity from a thinking computer would be extraordinary.
 
Oh, I think 3D hologram-type interfaces, augmented reality, or just room-scale VR implementations can usher in a wave of *new* ways to interact with data and information; I just don't think they'll *replace* many of the existing ways we use data.

An architect would probably love the ability to virtually walk through his designs in a fully immersive way, but I'll bet he'll get his damn keyboard out when it's time to write his emails.
 
Until something like "Hard Light" is actually discovered or created - and I don't mean a weapon from Destiny or anything like that; I mean a physical representation of light that can exist in 3D reality without a directly attributable source, more akin to what was used in Halo and also in Overwatch - I can't imagine this kind of stuff will ever exist without the need to wear some kind of visual augmentation hardware to "see" the information being presented, similar to how Microsoft's HoloLens works.
 