A camera looking around a corner - sort of

seanreisk

Remember when Deckard analyzed a photograph in the original Blade Runner? (Btw, R.I.P. Rutger Hauer.) In the movie, the scanner uses the picture to enter a 3D space that feels intriguing but is not explained.

Now scientists at Stanford are using lasers, plotting, and some heavy calculations to look around a corner (so to speak). It's not the same thing as Blade Runner, but all the same it's a very clever idea that builds on current technology.

As a concept it's interesting. As a proof-of-concept it's a little mind-blowing, because this could be refined and enhanced for uses that will make even the most level-headed person want a tinfoil hat!

YAY!
 
Seen it on Voyager. Holodeck sorcery, I thought, and yet here we are.
 
It's cool... but can it do it with a changing field of vision? I mean, in the experiment you can count on the wall being a stable white wall; that's not so everywhere else.
 
It's cool... but can it do it with a changing field of vision?

That's the part that really stretches your brain - you should be able to do a lot of things, but it's going to be applied learning, not theoretical.

Since the invention of the camera we've known that our eyes and brain interpret things and give us a 'perspective', not an image. It was the impressionists (Monet et al.) who demonstrated that the brain does a lot of editing to smooth and calibrate colors. Equally, we have known the basic principles of light and the way light reflects off of surfaces, but it wasn't until we had software that could model light and computers powerful enough to render those models that we found out how complicated the recipe is. It's taken 30 years, with lots of baby steps, bigger computers, and the accumulation of a monstrous library of filters, textures, and sheens, to get to a point where we can build realistic models (The Third & The Seventh is a ferocious lesson in applied modeling).

But the idea that we might be able to do this in reverse (that is, examine slices of reflected light and educe what has been reflected) is a new thought. It should be possible. As humans we've always done it - if you look at the bottom of a closed door and see the shadow of someone walking by, your brain will interpret that and learn quite a bit. But this trick with the lasers is taking a slice of light over time and seeing how it interacts with the environment. It's rudimentary, but it's early days, and if you're picking apart the problem you realize that the scientists have to have a starting point, and it has to be something simple. It's primitive now, but so was Dire Straits' Money For Nothing in 1985.
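
If you want a feel for what "taking a slice of light over time" buys you, here's a toy sketch in Python. To be clear, this is not the Stanford team's actual algorithm (their confocal method uses a far more sophisticated inversion); it just simulates one hidden point scatterer, records round-trip arrival times at a line of points on the visible wall, and back-projects those timings into the hidden volume. Every name and number in it is made up for illustration.

```python
# Toy time-of-flight "around the corner" reconstruction: pulse a laser at
# points on a visible wall, time the returning echoes, then back-project
# those timings into a hidden volume. Illustration only, not the real method.
import numpy as np

C = 1.0  # speed of light in arbitrary units (1 unit distance per unit time)

# Scan points on the visible wall (a line of points at y = 0 for simplicity).
wall_x = np.linspace(-1.0, 1.0, 64)
wall_pts = np.stack([wall_x, np.zeros_like(wall_x)], axis=1)

# One hidden point scatterer "around the corner" at (0.3, 0.7).
hidden_pt = np.array([0.3, 0.7])

# Simulate the transient histogram for each wall point: a photon travels
# wall -> hidden point -> wall, so its echo arrives at t = 2 * distance / C.
n_bins, bin_dt = 256, 0.02
transients = np.zeros((len(wall_pts), n_bins))
for i, w in enumerate(wall_pts):
    d = np.linalg.norm(hidden_pt - w)        # one-way distance
    t_bin = int(round(2 * d / C / bin_dt))   # round-trip arrival bin
    if t_bin < n_bins:
        transients[i, t_bin] += 1.0

# Backprojection: each candidate voxel accumulates the histogram values at
# the time bin its round-trip distance predicts; the hidden point lights up
# where all the spherical shells intersect.
xs = np.linspace(-1.0, 1.0, 100)
ys = np.linspace(0.1, 1.2, 100)
volume = np.zeros((len(ys), len(xs)))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        d = np.linalg.norm(wall_pts - np.array([x, y]), axis=1)
        bins = np.round(2 * d / C / bin_dt).astype(int)
        ok = bins < n_bins
        volume[iy, ix] = transients[np.arange(len(wall_pts))[ok], bins[ok]].sum()

peak = np.unravel_index(volume.argmax(), volume.shape)
print(f"recovered hidden point near x={xs[peak[1]]:.2f}, y={ys[peak[0]]:.2f}")
```

Run it and the printed peak lands near the hidden point at (0.3, 0.7). The real systems have to cope with noise, multiple bounces, and non-ideal walls, which is where the heavy calculations come in.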
 
I'm sure the white wall issue can be solved given time and experience.
That's the part that really stretches your brain - you should be able to do a lot of things, but it's going to be applied learning, not theoretical.
I guess it's true...
Given enough data, a system could discard irrelevant information. Having a baseline state and then a changed state (laser light added), the differences can potentially still be "seen" (see the sketch below); just how much processing that takes, I suppose, is the issue.
I think even if it only detects moving blobs it's still pretty incredible, and it would be useful just to know there is a moving blob outside your field of vision.
Very cool overall... very cool.
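
As a toy illustration of that baseline-vs-changed-state idea, here's a short Python sketch that subtracts a stored baseline frame from a new reading and flags any blob of change above the noise floor. The frames are synthetic numpy arrays rather than real sensor data, and the threshold is an arbitrary made-up number.

```python
# Minimal "state vs. changed state" differencing: keep a baseline reading,
# subtract new readings from it, and flag any blob of change above the
# noise floor. Frames here are synthetic; a real system would read a sensor.
import numpy as np

rng = np.random.default_rng(0)

def make_frame(blob_at=None, shape=(48, 48), noise=0.02):
    """Synthesize a sensor frame: flat noisy background, optional bright blob."""
    frame = rng.normal(0.5, noise, shape)
    if blob_at is not None:
        r, c = blob_at
        frame[r - 2:r + 3, c - 2:c + 3] += 0.3  # 5x5 "moving blob"
    return frame

baseline = make_frame()                  # stable state, no extra return
current = make_frame(blob_at=(20, 30))   # changed state

diff = np.abs(current - baseline)
mask = diff > 0.15                        # arbitrary threshold above noise

if mask.any():
    rows, cols = np.nonzero(mask)
    print(f"change detected, blob centered near ({rows.mean():.0f}, {cols.mean():.0f})")
else:
    print("no change above noise floor")
```

Even this crude differencing finds the blob reliably; the hard part in the real experiment is that the "frames" are photon timing histograms, not clean images, so the processing cost climbs fast.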
 