While many of us carry incredibly capable smartphones that take beautiful pictures, there are still times when bad lighting leaves us with less than favorable shots. The folks at Aalto University, MIT, and NVIDIA have been working on a solution that touches up those photos without ever needing to see "clean" ones for comparison: the technology was trained by looking only at corrupted photographs, which is a bit wild when you think about it. It's not available to us yet, and likely won't be for a while. And while everyone loves a great photo, these techniques are also being eyed for the medical field, where of course the results have to be exactly right.
Check out the video.
“There are several real-world situations where obtaining clean training data is difficult: low-light photography (e.g., astronomical imaging), physically-based rendering, and magnetic resonance imaging,” the team said. “Our proof-of-concept demonstrations point the way to significant potential benefits in these applications by removing the need for potentially strenuous collection of clean data. Of course, there is no free lunch – we cannot learn to pick up features that are not there in the input data – but this applies equally to training with clean targets.”
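The core trick behind learning from noisy data alone is a statistical one: a network trained to minimize mean-squared error converges to the average of its targets, and for zero-mean noise that average is the clean signal. The toy sketch below (my own illustration, not the team's code) shows the principle for a single pixel value:

```python
import numpy as np

rng = np.random.default_rng(0)

clean = 0.7  # a hypothetical "clean" pixel value we never show the learner
# Many independent corrupted observations of the same pixel (zero-mean noise)
noisy_targets = clean + rng.normal(loc=0.0, scale=0.3, size=100_000)

# Minimizing mean-squared error against targets yields their mean, so
# training on noisy targets converges to the same answer as training on
# clean ones -- the noise averages out.
estimate = noisy_targets.mean()
print(round(float(estimate), 2))  # lands very close to 0.7
```

This is why, as the researchers note, there's "no free lunch": the method can only recover structure that is actually present (on average) in the corrupted inputs.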