Handing Over VR’s Toughest Challenge to GPUs

HardOCP News

In the real world, our hands are our guides. We feel with them, we manipulate with them, we explore with them. We use them to eat, dress and primp ourselves, make a living, and connect with others. And yet, in the virtual world, we’re lucky if we can use them at all. A team of researchers at Purdue University hopes to change that with DeepHand, a deep learning-powered system for interpreting hand movements in virtual environments. By combining depth-sensing cameras and a convolutional neural network trained on GPUs to interpret 2.5 million hand poses and configurations, the team has taken us a large step closer to being able to use our dexterity while interacting with 3D virtual objects.
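The article doesn't detail DeepHand's actual network, but as a rough illustration of the general idea, a convolutional network that maps a single-channel depth image to a set of 3D hand-joint coordinates might look something like the PyTorch sketch below. The layer sizes, the 21-joint output, and the HandPoseCNN name are all hypothetical placeholders, not the Purdue team's design:

```python
# A minimal sketch, assuming a PyTorch-style setup. DeepHand's real
# architecture isn't described in the article; this only illustrates
# the shape of the problem: depth image in, 3D joint positions out.
import torch
import torch.nn as nn

class HandPoseCNN(nn.Module):  # hypothetical name and layer sizes
    def __init__(self, num_joints=21):
        super().__init__()
        # Convolutional feature extractor over the depth image
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Regress (x, y, z) for each joint from the pooled features
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 512), nn.ReLU(),
            nn.Linear(512, num_joints * 3),
        )

    def forward(self, depth):  # depth: (batch, 1, H, W)
        return self.regressor(self.features(depth))
```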

DeepHand fulfills the long-time vision of its lead researcher, Karthik Ramani, the Donald W. Feddersen Professor of Mechanical Engineering at Purdue. GPUs are helping the cause by speeding up the training of convolutional neural networks such as the one created for DeepHand. Ramani and his two graduate student researchers, Ayan Sinha and Chiho Choi, used NVIDIA GPUs to train their network, and Ramani said they were able to complete the process two to three times faster than if they'd used CPUs.
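For a sense of where that speedup comes from: in a framework like PyTorch, the same training step runs on either processor, and the GPU/CPU switch is a one-line device choice. Again a hedged sketch, reusing the hypothetical HandPoseCNN above with a synthetic batch standing in for real training data:

```python
# Rough sketch of the GPU/CPU switch involved, assuming PyTorch;
# the researchers' actual training code is not shown in the article.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = HandPoseCNN().to(device)  # hypothetical model from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

# Synthetic stand-in batch: 8 random "depth images" and joint targets
depth = torch.rand(8, 1, 96, 96, device=device)
target = torch.rand(8, 21 * 3, device=device)

# One training step; on a GPU, the convolutions dominate and parallelize well
optimizer.zero_grad()
loss = criterion(model(depth), target)
loss.backward()
optimizer.step()
```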
 
Combine that with some sort of haptic feedback system and that would be a real double-take "Wow!" moment. I think we may be looking at a very big transition period here, maybe as big as the move from typewriters to PCs, or from landline phones to cell phones. Maybe interacting with computers through a rather bright AI, like Tony Stark does in Iron Man, isn't really that far off in the future.
 