Qualcomm Demonstrates New Depth-Sensing Camera Tech for Android

Megalith (Staff member, joined Aug 20, 2006)
Qualcomm is bringing depth sensing and biometric authentication to Android through its Spectra Module camera program, which lets manufacturers swap in different sorts of camera functionality during the manufacturing process. In addition to better face detection and VR/AR opportunities, the premium version of Qualcomm’s depth-sensing cameras is sophisticated enough to detect and accurately recreate complex actions, such as someone playing a piano.

...Qualcomm’s timing on this is very deliberate. This news isn’t tied to any specific product, but the company is looking to get out in front of the iPhone 8 launch. That device is widely expected to feature its own depth-sensing camera as part of Apple’s big push on photo effects, facial scanning, and AR, and the chipmaker really wants to be part of that conversation. By revealing its plans a month ahead of Apple, Qualcomm will undoubtedly get name-dropped in a number of pieces about the iPhone and perhaps take some wind out of the company’s sails.
 
What's interesting here is their video shows full 3D recreation of fingers.

With depth-sensing technology up to this point, you could tell how far away the front of an object was, but you couldn't see behind it. You see a similar effect when you view satellite imagery in 3D and a bridge crosses a river/dam/gorge/valley: the bridge seems to drop off all the way to the floor of the valley because there is no information for the underside of the bridge.

So is Qualcomm seeing behind the surface of the object to find where it ends? Or are they applying an algorithm that generates the fingers based on what it sees up front? I'm thinking the latter. Cameras can't be placed far enough apart on a tablet to get enough difference in angle, and any see-through imaging would require high-energy radiation like X-rays. And no way would that pass safety inspections.

So this tech would essentially fail for objects it doesn't recognize.
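The "front surface only" point is easy to see if you back-project a depth map into a point cloud. A minimal sketch, using toy depth values and made-up pinhole intrinsics (nothing from Qualcomm's actual module):

```python
import numpy as np

# Toy 4x4 depth map (meters): a "finger" 0.3 m away in front of a wall at 1.0 m.
depth = np.full((4, 4), 1.0)
depth[1:3, 1:3] = 0.3

# Assumed pinhole intrinsics -- hypothetical values for illustration only.
fx = fy = 2.0
cx = cy = 1.5

# Back-project every pixel into 3D camera coordinates.
v, u = np.indices(depth.shape)
z = depth
x = (u - cx) / fx * z
y = (v - cy) / fy * z
cloud = np.stack([x, y, z], axis=-1).reshape(-1, 3)

# One point per pixel: only the nearest surface along each ray is recorded,
# so whatever lies behind it is simply absent from the cloud.
print(len(cloud))             # 16
print(np.unique(z).tolist())  # [0.3, 1.0]
```

Each pixel contributes exactly one point, at the first surface its ray hits, which is why the underside of a bridge (or the back of a finger) never shows up unless software fills it in.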
 
Qualcomm’s timing on this is very deliberate. This news isn’t tied to any specific product, but the company is looking to get out in front of the iPhone 8 launch. That device is widely expected to feature its own depth sensing camera, as part of Apple’s own big push on photo effects, facial scanning and AR, and the chipmaker really wants to be a part of that conversation. By revealing its plans a month ahead of Apple, Qualcomm will undoubtedly get name dropped in a number of pieces about the iPhone and perhaps gets to take some wind out of the company’s sails.

How dare they copy something Apple will invent in a few months!


(Apple Invention translation: Something Apple releases on a device that someone else created many years ago.)
 
Apple wasn't the first to put email and web on a mobile device, but they did catapult it into mainstream.

I don't care who makes AR mainstream - just somebody do it already!
 
What's interesting here is their video shows full 3D recreation of fingers.

With depth-sensing technology up to this point, you could tell how far away the front of an object was, but you couldn't see behind it. You see a similar effect when you view satellite imagery in 3D and a bridge crosses a river/dam/gorge/valley: the bridge seems to drop off all the way to the floor of the valley because there is no information for the underside of the bridge.

So is Qualcomm seeing behind the surface of the object to find where it ends? Or are they applying an algorithm that generates the fingers based on what it sees up front? I'm thinking the latter. Cameras can't be placed far enough apart on a tablet to get enough difference in angle, and any see-through imaging would require high-energy radiation like X-rays. And no way would that pass safety inspections.

So this tech would essentially fail for objects it doesn't recognize.

That's not what I saw. Look at the 34-second mark: you see the tops of the fingers, hands, and arms, but anything beyond the view of the camera (the bottoms of the fingers and hands) simply isn't filled in.
 
That's not what I saw. Look at the 34-second mark: you see the tops of the fingers, hands, and arms, but anything beyond the view of the camera (the bottoms of the fingers and hands) simply isn't filled in.

Watch the thumb from 55 seconds onward. It seems to exist behind the palmar base of the hand (the part that extends out toward your thumb).
 
Watch the thumb from 55 seconds onward. It seems to exist behind the palmar base of the hand (the part that extends out toward your thumb).
You're misinterpreting it: the point of view is not the same as the camera location.

Look at where the shadows are, as well as the point density across the scene. The camera generating the point-cloud is somewhere off to the right of, and in front of, the PoV.
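That reading can be sanity-checked with a toy z-buffer: a single-viewpoint sensor keeps only the nearest hit along each ray, so anything behind that hit becomes a hole (a "shadow") once you orbit the point cloud to a different PoV. A sketch with made-up intrinsics and two hand-picked points, purely illustrative:

```python
import numpy as np

# Two points along (nearly) the same ray from an assumed sensor at the origin:
# a thumb tip at z = 0.4 m and a patch of palm behind it at z = 0.5 m.
points = np.array([[0.100, 0.0, 0.4],
                   [0.125, 0.0, 0.5]])

# Hypothetical pinhole intrinsics, for illustration only.
fx = fy = 100.0
cx = cy = 50.0

# Project into the sensor's image plane.
u = np.round(points[:, 0] / points[:, 2] * fx + cx).astype(int)
v = np.round(points[:, 1] / points[:, 2] * fy + cy).astype(int)

# Keep only the nearest hit per pixel (z-buffering), as a depth sensor
# effectively does: both points land on the same pixel, and only the
# thumb tip survives. The palm patch behind it is a hole from any other PoV.
zbuf = {}
for ui, vi, zi in zip(u, v, points[:, 2]):
    zbuf[(ui, vi)] = min(zbuf.get((ui, vi), np.inf), zi)

print(len(zbuf))  # 1 -- one pixel survives, at the thumb-tip depth
```

So a region of the cloud can look like it "exists behind" a nearer surface only when the rendering PoV differs from the sensor location; from the sensor's own ray, the occluded patch was never captured at all.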
 