Researchers Use Deep Learning to See in the Dark

AlphaAtlas

Researchers from MIT have developed a technique that can reconstruct images of objects photographed in near-total darkness. The scientists trained a deep neural network on "more than 10,000 transparent glass-like etchings, based on extremely grainy images of those patterns." They claim those grainy images were taken with about one photon per pixel, and that they used a "light modulator" to display the vast number of images they needed to reproduce. Alexandre Goy, one of the co-authors of the study, said that "We have shown that deep learning can reveal invisible objects in the dark," and that "This result is of practical importance for medical imaging to lower the exposure of the patient to harmful radiation, and for astronomical imaging."
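
For concreteness, here's a minimal sketch of that kind of setup, assuming PyTorch: simulate photon-starved captures by drawing a Poisson photon count per pixel at roughly one photon per pixel on average, then train a small CNN on noisy/clean pairs. The function names, toy architecture, and random training patterns are illustrative stand-ins, not the paper's actual model or data.

Code:
import torch
import torch.nn as nn

def photon_starved(clean, mean_photons=1.0):
    # Scale the clean image so the average pixel receives ~mean_photons,
    # then draw a Poisson photon count per pixel and rescale back.
    rate = clean * mean_photons / clean.mean().clamp(min=1e-8)
    return torch.poisson(rate) / mean_photons

class TinyDenoiser(nn.Module):
    # Toy image-to-image CNN; a stand-in for whatever network the
    # researchers actually used.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    # In the paper's setting the clean targets were the known etched
    # patterns shown on the light modulator; random tensors stand in here.
    clean = torch.rand(8, 1, 64, 64)
    noisy = photon_starved(clean)   # ~1 photon/pixel measurement
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()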

From an original transparent etching (far right), engineers produced a photograph in the dark (top left), then attempted to reconstruct the object using first a physics-based algorithm (top right), then a trained neural network (bottom left), before combining the neural network with the physics-based algorithm to produce the clearest, most accurate reproduction (bottom right) of the original object.
 
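The caption's combined step can be read as feeding a physics-based estimate into the trained network instead of the raw photon counts. Here's a hedged sketch of that idea, reusing the TinyDenoiser from the sketch above; the box-filter "physics" stage is a crude stand-in for the paper's actual light-propagation model, which isn't shown.

Code:
import torch
import torch.nn.functional as F

def physics_estimate(noisy):
    # Crude stand-in for a physics-based reconstruction: a fixed 5x5
    # box filter tames the shot noise but blurs detail, mimicking the
    # "physics only" panel in the figure.
    kernel = torch.full((1, 1, 5, 5), 1.0 / 25.0)
    return F.conv2d(noisy, kernel, padding=2)

@torch.no_grad()
def hybrid_reconstruct(model, noisy):
    # model: any image-to-image network, e.g. the TinyDenoiser above.
    # The network refines the physics-based estimate rather than the
    # raw measurement, combining both approaches as in the caption.
    return model(physics_estimate(noisy))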
 
So take a blurry as hell picture, apply a shitload of AA, repeat using a different type of AA, then merge the two images and sharpen the piss out of it.
 
Gotta wonder how much false detail is added; it's a prevalent issue in photography, and one of the reasons larger sensors are still used.
 
Quite a lot I'd imagine. If the information isn't there, it simply isn't there.
 
I think the point is that you lose some clarity but can get the result faster and with less power if you have a baseline of what the end result could be.
 