It’ll Take More Than AI to Get DSLR-quality Photos From Your Phone

DooKey

Scientists at ETH Zurich have trained an AI to improve photos taken with any phone camera, with the goal of attaining DSLR-quality output. Unfortunately, it's still not good enough to make a phone camera photo as good as a DSLR photo. However, the AI-enhanced photos are still pretty good, and you can check out the results of the research here. I really like the improvement.

So while DSLR-quality pictures may not yet be a click away on your phone, hardware makers are working to improve their devices in the fields that matter to most people – suggesting appropriate settings, increasing dynamic range, performing automatic edits, developing dual-camera systems for more depth of field, and selecting the best frame out of a burst of shots, all for better results.
 
At least to my shitty eyes, it looks better, aside from the blown-out bright areas.
 
Looks like shitty Instagram filters.

It's clipping highlights, and despite their claims that one of the tests is that they be able to reconstruct the original image from the output, I suspect that portion of the test is failing.
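
For what it's worth, clipping is genuinely non-invertible, which is exactly why a reconstruction test would fail on blown highlights. A toy illustration with made-up pixel values:

```python
# Toy numbers: once a value clips to 255, no inverse transform gets it back.
import numpy as np

orig = np.array([200.0, 230.0, 250.0])   # hypothetical bright-sky pixels
enhanced = np.clip(orig * 1.2, 0, 255)   # "enhance": [240., 255., 255.]
recovered = enhanced / 1.2               # [200., 212.5, 212.5] -- detail gone
print(orig, enhanced, recovered)
```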
 
Take every picture - add brightness + saturation.

They do look better, but perhaps some of those original pictures were accurate in their darkness.
 
I'm happy to see there's still some improvement happening with crappy cell phone cameras. I admit I like mine on the go; I can't always carry around my gear.

That said, there's still no replacement for a good lens. I'll be rocking my DSLR for years to come.
 
It's not the camera hardware that makes a great photo; it's the skill behind the camera. Granted, there are some things that can be done better with good equipment, but buying $25,000 of camera and lenses won't make your crappy photos better if you don't know how to frame a good shot. I took photography in college and watched a few documentaries on some really well-known photographers using really crappy toy cameras and still getting decent photos.
 
You want DSLR quality, then get a DSLR and lens. People are always amazed at how much better the pictures are from my shitty 8-year-old DSLR than from their new, expensive phone, especially when they find out their phone camera has almost twice the "megapixels" of my DSLR. And that's without me even putting much effort into it, just using Auto mode.
 
Take every picture - add brightness + saturation.

What I like to call the "SweetFX" effect, which any self-respecting photographer knows is garbage. It just makes pictures subjectively look better to the average person, which is basically the visual equivalent of music's "loudness wars."
 
Open PS, brightness +10, save.

Advanced AI.

I find it's better to use the Curves/Levels tool. Quicker and produces better quality (just add a droop to it so it looks kinda like an exponential curve). Shall I script it and announce it as AI?
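
Sure, here's the whole product. A minimal sketch in Pillow/NumPy (the gamma value, the +10 bump, and the file names are all made up; this is the gag, not the paper's actual method):

```python
# "Advanced AI", the script version: a brightness bump plus a droopy
# gamma curve. Pillow and NumPy only; all values are made up.
import numpy as np
from PIL import Image

def advanced_ai(path_in, path_out, gamma=0.7, brightness=10):
    img = Image.open(path_in).convert("RGB")
    arr = np.asarray(img).astype(np.float32)

    arr = np.clip(arr + brightness, 0, 255)   # brightness +10
    arr = 255.0 * (arr / 255.0) ** gamma      # gamma < 1 = the "droop"

    Image.fromarray(arr.astype(np.uint8)).save(path_out)

advanced_ai("input.jpg", "enhanced.jpg")
```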


What I like to call the "SweetFX" effect, which any self-respecting photographer knows is garbage. It just makes pictures subjectively look better to the average person, which is basically the visual equivalent of music's "loudness wars."

It'll always be an ongoing battle between creatives, with their art, and general consumers. Creatives want to use color (or the lack of it), shadows, blur, and framing to tell a story. Many people just want everything vibrant and over-saturated, or with weird color gradients.

Could you imagine Schindler's List without the artistic vision?
 
Nope, totally and utterly trash. Looks to me like all they are doing is increasing the brightness levels, which only helps if you have a crappy or uncalibrated monitor.

It might look better on a crappy monitor, but go ahead and print that out with a real photo printer and you will see how crappy it is.

On top of that, if you are not taking photos in RAW format, you really can't do a lot of proper repair/adjustment, because the data just isn't there in a compressed JPEG.
 

My $0.02 on what is going on:

1) If you believe the setup described on the page for the AI:

They are increasing brightness or gamma until edges within shadows are maximally detectable within the reduced dynamic range a human sees on a monitor, increasing saturation, applying some kind of cross-processing-look filter so there are more colors than in the original photo, and possibly, before that last bit, doing some local contrast enhancement so that the contrast reduction of the fake-HDR adjustment that brings up the shadow levels doesn't reduce edge contrast, which it scores highly (roughly the chain sketched below).

2) If you believe they did something much lazier in terms of AI:

They set up a neural net that trains to match popular photos and has the ability to adjust all the sliders of a filter. They then fed it popular images from Instagram, or other popular feeds, as the training set. It adjusted the sliders until it came up with a filter that is the nearest approximation of the multiple popular filters.
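
If option (1) is anywhere near right, the whole pipeline is only a few lines of filter chain anyway. A rough sketch with stock Pillow calls (the gamma, saturation, and unsharp-mask numbers are guesses on my part, not anything taken from the paper):

```python
# Rough sketch of hypothesis (1) as a filter chain, using standard
# Pillow calls. All parameter values are guesses.
from PIL import Image, ImageEnhance, ImageFilter

def hypothesized_chain(path_in, path_out):
    img = Image.open(path_in).convert("RGB")

    # 1) Brightness/gamma lift to pull detail out of the shadows.
    img = img.point(lambda v: int(255 * (v / 255) ** 0.75))

    # 2) Saturation boost.
    img = ImageEnhance.Color(img).enhance(1.3)

    # 3) Local contrast enhancement, so the shadow lift doesn't flatten
    #    edges: a large-radius, low-strength unsharp mask approximates it.
    img = img.filter(ImageFilter.UnsharpMask(radius=30, percent=40, threshold=2))

    img.save(path_out)

hypothesized_chain("phone_shot.jpg", "enhanced.jpg")
```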
 

So they are using crappy quality pictures to try to make other crappy quality pictures better? Makes perfect sense.

The blown-out highlights and invisible clouds in the "improved" pictures are proof enough for me that it has major flaws. And those flaws can't be fixed unless they are editing RAW images, where you can dodge/burn down at the single-pixel level like you can with actual photo editing software.

Applying multiple filters to a crappy-quality picture is not going to do much to actually fix it unless the whole picture only needs a very slight increase in brightness and possibly contrast.

When you have a compressed image, the black areas are just that: absolute black. With a RAW image, there is usually a whole lot more data in the apparently black areas that you can brighten up to make what is there visible.

Blown-out areas, of course, can't be recovered, because there is no data there to recover.
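
You can even see the asymmetry in code. A sketch contrasting a shadow push during RAW development (rawpy is a LibRaw wrapper) with the same push on a JPEG; the file names and the 4x push are hypothetical:

```python
# Sketch: pushing shadows from RAW vs. from JPEG.
import numpy as np
import rawpy
from PIL import Image

# RAW: the sensor data still holds detail below the JPEG's black point,
# so a brightness push during demosaicing can actually reveal it.
with rawpy.imread("shot.dng") as raw:
    rgb = raw.postprocess(use_camera_wb=True, no_auto_bright=True, bright=4.0)
Image.fromarray(rgb).save("raw_pushed.jpg")

# JPEG: a pixel stored as 0 stays featureless no matter the multiplier.
jpg = np.asarray(Image.open("shot.jpg"), dtype=np.float32)
pushed = np.clip(jpg * 4.0, 0, 255).astype(np.uint8)
Image.fromarray(pushed).save("jpeg_pushed.jpg")
```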
 
So if the AI makes cell phone photos look like DSLR shots, what happens when it's applied to DSLR photos? If it truly improves the worse one, it would also improve the better one, which leaves the worse one still bad in comparison.
 
The original versions of most of the photos look better to me.
 
...........If AI can do this:


then it's an understandable path toward getting better pictures.
I've been waiting a couple of years for Adobe to create a plugin based on it; there are other solutions today, but this demonstration was amazing.
 
My Nikon d3300 still murders my v30 for photo quality.

Better cameras only widen the gulf between tiny baby cameras and real cameras.

Without gobs of post-production work, phones just are not studio quality.
 
I wonder if that demo could be used to upscale images, too.
 
We've already had phone cameras that can take pictures as good as a DSLR under the right conditions (i.e., great lighting). Photography is all about how to harness light. I'm not even joking that I've seen pictures we've pulled off of a Lumia 1020 that I could have told you came from an APS-C DSLR, and you would probably have believed me. (I'm not suggesting every picture can, because most pictures won't make the cut.)

The biggest and most noticeable issue is sharpness and resolution: even with 20 MP on a smartphone, they are not picking up the fine details. You can sharpen a picture all you want, but if the details were not there to begin with, there is nothing to sharpen. On the 1020, on the other hand, those details can be present under the right conditions. For the detail to be present you HAVE to use the RAW, because JPEG just kills all the detail, and it also tends to kill softness. (The images become flat, like they were drawn with a box of crayons, so even if they are vivid and saturated, they lack depth and complexity.)

The thing to remember is that the 1020, although a camera phone, has a sensor that is about 4x the size of the one in an iPhone. That makes a difference for light collection that I've yet to see made up. You simply can't fake light collection with software; you need to come up with ways to gather more light. (You can keep going up the chain: APS-C and full frame (FF) will be similar under certain conditions where the APS-C can gather as much light as the FF, but it won't hold a candle to it when light is lacking. Further still, the medium format guys will complain about FF for the same reasons.)
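
Back-of-the-envelope, using nominal published sensor dimensions (2/3-type for the 1020, roughly 1/3-type for iPhones of that era, so treat the ratios as approximations):

```python
# Nominal sensor dimensions in mm (published type sizes; approximate).
lumia_1020 = 8.8 * 6.6     # 2/3-type sensor area, mm^2
iphone     = 4.8 * 3.6     # 1/3-type sensor area, mm^2
aps_c      = 23.5 * 15.6   # typical APS-C area, mm^2

print(f"1020 vs iPhone: {lumia_1020 / iphone:.1f}x the light-gathering area")  # ~3.4x
print(f"APS-C vs 1020:  {aps_c / lumia_1020:.1f}x the light-gathering area")   # ~6.3x
```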

It doesn't matter which side of that original picture you look at, you can tell it was run through the JPEG-o-matic (both sides look terrible). The shadows on the left are completely dark (the tree and the dark parts of the building); then on the right, where they brightened it up, you can clearly see there is no fine detail in those dark parts of the building, and they basically killed the contrast of the image by trying to brighten it up too much. Like Cyclone said, once the information is gone, you're not getting it back, and I've definitely not seen anything that can accurately recreate it either.


TL;DR: Sensor size is always the end-all, be-all, because light matters.
 
The reality is...no one cares about your pictures.

Digital photography is merely a visual means to prove something did or didn't happen or the state of something at that time.
 