Microsoft HoloLens hasn't gotten off to the fastest start, with the device largely relegated to researchers, developers, and limited industrial uses. At the Conference on Computer Vision and Pattern Recognition, Microsoft demonstrated the recently released Research Mode, which gives developers access to the device's live sensor data feed.

During the tutorial, the researchers showed the audience a preview of the depth sensor feed from Project Kinect for Azure, which Microsoft unveiled earlier this year as the sensor for the next version of HoloLens. Video from that presentation has now been made public, and the difference between the upcoming Project Kinect for Azure sensor and the current-generation HoloLens sensor is tremendous. The old sensor could barely output a usable image while tracking a single pair of hands. The new sensor rendered a detailed point cloud of an entire audience raising their hands, with even lanyards and wrinkles in clothing visible in the data feed, all without slowdown or jerky video. I can't wait to see what becomes of HoloLens 2.0.