WinDAS White Point Balance guide for Sony Trinitron CRTs

k here's the procedure.

i've read through your post a couple times, but brain's still not getting the logic. Lemme think out aloud as usual :)

2. shrink the screen's geometry as small as possible. this maximizes the size of the unscanned region.
by scanned and unscanned regions, i'm referring to where the electron beam does and doesn't hit. the actual displayed image is smaller than the scanned region.
3. fix the camera somewhat close to the screen so that a good chunk of both the scanned and unscanned regions is captured. i recommend the bottom edge, as there's little difference there between the boundary of the image and the boundary of the scanned region.

But what test pattern do you show here? Based on the rest of the post, I'm guessing (0,0,0)?

5. show (0,0,0) fullscreen. after waiting for the phosphor glow to go away, take a picture with the same settings as above.

So now you have two images - one with (0,0,0) full screen, and another with (0,0,0) "half screen". Am I following correctly so far?

Let me know, so I can continue trying to digest the logic.
 
Ok, I've re-read your post after the edits, and I still don't get the logic. Why bother with the unscanned region stuff?

If you know luminance at level 20, and you know average pixel value (APV) at level 20, and you know APV at level 0, then you can just do an easy scaling calculation.

You're gonna need to spell it out in more detail - sorry, my brain just sucks at this stuff.

I vaguely sense ur using the unscanned APV as some sort of offset? But what for? What does the unscanned region represent to you? Are you assuming it represents true black or something?
 
Here's my latest data for estimating black level.

DSLR: 15 second exposure, f/6.3, ISO 100

Six test patterns:

25,22,21,20,10,0

For each of these, I took 15 images and took concurrent measurements with the colorimeter, which was placed very close to the camera (with the exception of level 0, which couldn't be read by the colorimeter).

Here are plots of luminance vs average pixel value (APV), relative to their means, for all the test patterns except level 0 (and of course, APV is calculated after black frame subtraction).

[image: 20jmufs.png]


For each of these five sets of measurements, I calculated the scaling factor that relates luminance to APV. Here is a plot of test pattern vs scaling factor:

[image: 2dbkrgi.png]


I then calculated the mean APV for the 15 images taken of test pattern 0, and divided this number by the range of scaling factors from the previous step. Estimated black level is between 0.0027 and 0.0029 cd/m^2.
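
(For reference, the calculation boils down to something like this - a rough numpy sketch with my own variable names, not my actual code:)

```python
import numpy as np

def black_level_range(apv, lum, apv_black):
    """apv: mean APV per test pattern (dark frame already subtracted).
    lum: concurrent colorimeter luminance per pattern, in cd/m^2.
    apv_black: mean APV of the level-0 images.
    Returns low/high black level estimates, bracketed by the spread
    of scaling factors across the patterns."""
    scale = np.asarray(apv, float) / np.asarray(lum, float)  # APV per cd/m^2
    return apv_black / scale.max(), apv_black / scale.min()
```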

In the near future, I'm gonna repeat these measurements with more test patterns closer to (0,0,0) that can still be read by the colorimeter. I'll also use a 30 second exposure and a larger aperture.
 
here's what i mean by scanned and unscanned. this is just a picture with the brightness wayyyy up
http://i.imgur.com/bIYNVav.jpg

blue = unscanned
red = scanned
green = display area

i have two images, one completely black i.e. (0,0,0) and one completely (20,20,20)
I vaguely sense ur using the unscanned APV as some sort of offset? But what for? What does the unscanned region represent to you? Are you assuming it represents true black or something?

yes the unscanned region represents true black. the only light coming from there is the reflections of ambient light. the scanned region has light from ambient reflections and from the electrons hitting the phosphors. subtracting the two gives the black level that would be measured in a completely dark room.
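
roughly, in code, the idea is this (just a sketch, not my actual script; it assumes you also have a colorimeter reading of the (20,20,20) pattern to set the scale, as per the earlier steps):

```python
import numpy as np

def estimate_black(img20, img0, scanned, unscanned, lum20):
    """img20, img0: linear raw images of the (20,20,20) and (0,0,0) patterns.
    scanned, unscanned: index slices selecting the scanned / unscanned regions.
    lum20: colorimeter luminance of the (20,20,20) pattern (cd/m^2)."""
    # ambient reflections show up in both regions, so the subtraction cancels
    # them out, leaving only the light coming from the phosphors
    sig20 = img20[scanned].mean() - img20[unscanned].mean()
    sig0  = img0[scanned].mean()  - img0[unscanned].mean()
    # scale camera units to cd/m^2 using the level-20 colorimeter reading
    return lum20 * sig0 / sig20
```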

the reason I'm doing it this way, instead of subtracting a dark frame, is that (haven't actually tried, but i'm pretty sure) the average value for a dark frame isn't even stable from picture to picture.
 
For each of these five sets of measurements, I calculated the scaling factor that relates luminance to APV. Here is a plot of test pattern vs scaling factor:
hm this is weird
could be that the camera is capturing more ambient light

you subtracted a dark frame for these right?
try plotting mean apv vs mean luminance for the different images, and doing a linear fit. ideally the line would intersect zero, but given that the scaling factor decreases with more light, there appears to be a positive y intercept.
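
i.e. something like this (numpy sketch):

```python
import numpy as np

def fit_apv_vs_lum(lum, apv):
    """lum, apv: mean colorimeter luminance and mean APV per test pattern."""
    slope, intercept = np.polyfit(lum, apv, 1)  # APV ~ slope*lum + intercept
    # a positive intercept would explain the trend: the per-pattern scaling
    # factor APV/lum = slope + intercept/lum falls as luminance goes up
    return slope, intercept
```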

I then calculated the mean APV for the 15 images taken of test pattern 0, and divided this number by the range of scaling factors from the previous step. Estimated black level is between 0.0027 and 0.0029 cd/m^2.
try the same with pictures of the monitor turned off to estimate how much ambient light there is.
 
here's what i mean by scanned and unscanned. this is just a picture with the brightness wayyyy up
http://i.imgur.com/bIYNVav.jpg

blue = unscanned
red = scanned
green = display area

oh you're right, I didn't realize that there were three areas. Yea, the unscanned region is pretty dark - I don't think I could make it out against my own bezel :)


yes the unscanned region represents true black. the only light coming from there is the reflections of ambient light. the scanned region has light from ambient reflections and from the electrons hitting the phosphors. subtracting the two gives the black level that would be measured in a completely dark room.

There may be a small region of veiling glare from non-ambient sources on the unscanned region, such as diffuse transmission through the phosphor layer (especially if it's a thicker layer), electron backscatter, and internal reflections of light off the inner glass surface. But at such low luminances, the area of this glare is probably much smaller than the exposed unscanned region you were capturing. Still, it would be interesting to compare your APV when you take an image of the unscanned region only, versus with the lens cap on under layers of blankets :)


the reason I'm doing it this way, instead of subtracting a dark frame, is that (haven't actually tried, but i'm pretty sure) the average value for a dark frame isn't even stable from picture to picture.

Yes, it does fluctuate, but at least on my camera the amount is tiny. Here is APV data from 50 lens-capped images, taken 5 seconds apart with a shutter speed of 0.1 seconds.

[image: 2mocohz.png]
 
hm this is weird
could be that the camera is capturing more ambient light

Possible. This time I had the lens about 5 mm away from the screen, though the entire rig was hooded with a blanket, and the room was very dark (I use blackout cloth on my window).

you subtracted a dark frame for these right?

yep.

try plotting mean apv vs mean luminance for the different images, and doing a linear fit. ideally the line would intersect zero, but given that the scaling factor decreases with more light, there appears to be a positive y intercept.

Will do this tomorrow. For what it's worth, after black frame subtraction, the APV for (0,0,0) was around 10. Perhaps using a longer exposure and increasing the aperture by one more f-stop could help. A last resort might be to bump up the ISO a bit (apparently the EOS 450D is very noise-free up to 600, and I can do some noise tests).

try the same with pictures of the monitor turned off to estimate how much ambient light there is.

Yea good idea - should have done this as part of the procedure.


Another possible source of this inconsistent scaling factor is the different areas of the screen being measured (i.e. maybe the patch of screen that the camera was on darkened at a lower rate than the patch that the colorimeter was on). I could test by reversing the positions and seeing if the scaling factor trend reverses.

Another idea would be to extrapolate the trend and adjust the scaling factor (assuming the trend is significant and linear across the range being measured). It's probably not linear though (the plot I showed was scaling factor vs gray level).
 
Sorry to barge in, but I haven't been able to understand the last few pages of this thread. Are you guys experimenting with gamma or some way to calibrate for the best gamma?
 
barge away :)

basically we're seeing if we can use a camera to measure luminances that are too low to be read by our colorimeters. This could be used to adjust gamma, but it has other interesting applications such as measuring screen uniformity and contrast modulation. For now, much of this is aimed at validating the approach.
 
Ok, did another experiment to see whether the two areas of the screen darkened at different rates (which would explain the change in scaling factor). I used HCFR to take a series of measurements with the same series of test patterns, and I moved the colorimeter from one part of the screen to the other every five measurements (so five measurements on each of the two parts of the screen for each test pattern). I then calculated the scaling factor separating the two patches and plotted it as a function of test pattern.

Results are shown below (top plot). So it's probably not the different areas of the screen that contributed to the scaling factor trend seen earlier.

The bottom plots show a comparison of APV with the lens cap on (left) and with the monitor off (right). It's clear that ambient light was not a factor.

[image: 2iw5xdg.png]


I can think of two other possibilities - one is that the white balance wasn't consistent across the images, and since the green channel isn't a perfect match for the luminance function, the relative luminance results were compromised.

The other is that the sensor bottomed out with my exposure settings. The 0 test pattern had an APV of only 10 (after black frame subtraction). When I have time, I'll repeat the experiment with a higher exposure. But I'm not sure this explains the trend seen across the entire range of patterns.

edit: just found this and this- will see if I learn anything useful.
 
one is that the white balance wasn't consistent across the images
yesssssssssss that makes sense now, especially as the wpb for the very dark colors is not set very well by the procedure

a value of 10 isn't anywhere near bottoming out (if you mean affected by quantization). mine was 0.1 :p
 
yesssssssssss that makes sense now, especially as the wpb for the very dark colors is not set very well by the procedure

Tomorrow I'll try to get a sense of how different the green filter is from a proper luminance filter, by seeing how consistent the scaling factor (between DSLR and colorimeter) is between red and blue test patterns (so measure a bright red pattern, then a darker blue pattern).

I should also mention that I did those tests without loading the Argyll LUT (so black level was really deep). As such, the WPB wasn't as tight.
 
omgoodness I just realized that my green indexer was backwards - I'd been extracting the red and blue channel instead of the green!

:eek: :eek: :eek: :eek:

Hopefully not an issue, assuming a consistent spectral signature in all my comparisons so far. Also about to post new data testing the viability of the green channel as a luminance filter.
 
LOL

shouldn't matter much, but you should be getting even less noise now.
 
hm, I think I misspoke - I thought I got the indexer wrong, but apparently I didn't. Confused, looking into it. Yep, I had a brainfart just now - all was well all along lol :)
 
Ok, new data.

First, I hunted down two patches on my screen that had the same luminance (and chromaticity). I then made a series of alignment patterns, centered on these patches, as follows:

[image: v5ij4n.png]


The gray level of the circles differed from pattern to pattern. I then made sure that the luminances were very similar in both patches, across seven different gray levels.

I then created three test patterns: a full red field, a green field, and a blue field. I adjusted the level of each pattern so that my colorimeter read the same luminance to within 1 cd/m^2 or so (the luminance was around 17 cd/m^2). So these were equiluminant patterns that differed in color, and therefore had different spectral signatures.

Using the alignment pattern, I placed the camera against one patch, and the colorimeter against the other, and took concurrent measurements (camera and colorimeter) for each of the three test patterns (red, green, blue). I took 8 images for each pattern.

For each pattern, I calculated the mean APV, and I also calculated the mean luminance (as measured with colorimeter). I then plotted the relative values for each of these measures. If the camera is indeed using a luminance filter, then relative APV should track relative luminance across the three patterns.
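
(The comparison itself is simple - roughly this, in numpy, for whichever channel selection is being tested:)

```python
import numpy as np

def relative_tracking(apv, lum):
    """apv, lum: mean APV and mean colorimeter luminance for the
    red, green, and blue test patterns (in that order)."""
    rel_apv = np.asarray(apv, float) / np.mean(apv)
    rel_lum = np.asarray(lum, float) / np.mean(lum)
    # if the chosen channel behaved like a true luminance filter,
    # rel_apv would track rel_lum closely
    return rel_apv, rel_lum, np.max(np.abs(rel_apv - rel_lum))
```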

Here are the results. In all four figures, the x axis represents the color of the test pattern: 1 = red, 2 = green, 3 = blue.

Top plot (blue) shows relative luminance as measured with the colorimeter.

Green plot shows relative APV when using the green channel.
Magenta plot shows relative APV when using the red+blue channels.
Black plot shows relative APV when using all channels.

[image: 5cau50.png]


As you can see, the actual measured luminance varied about 3% across the test patterns, but the APV varied about 18%. Interestingly, the APV tracking was virtually identical across all three channel conditions (green, red+blue, green+red+blue).

The 18% variation is sizeable, but the test is fairly challenging, as the spectral signatures varied quite dramatically. See below, which shows my own measurements of each primary with an i1 pro (from top to bottom: red, green, blue). For each primary, I've overlaid the spectral sensitivity function of the human eye (the scale of the y axis is meaningless, so just pay attention to spread).

[image: scgk6p.png]


As is clear, the spectral signature of the red primary is particularly different from the green and blue, relative to the spectral sensitivity function. So you'd need considerably more radiant energy from the red primary wavelengths to achieve a luminance equal to that produced by radiant energy from the green primary wavelengths.

If you look carefully at the plotted data, you can see that APV overestimates the luminance of the blue primary, and underestimates the luminance of the red primary. This is exactly what one would expect given that the green Bayer filter is actually shifted a bit to the left, relative to the spectral sensitivity function. See figure (d) on this page, where they tried to reverse-engineer the spectral transmission function of the green filter. You can see that the peak is about 520 nm, whereas the peak of the human spectral sensitivity function is around 550 nm. I'm assuming that the green channel on my sensor is similar.

Perhaps the red and blue filter combination in the camera works well with these primaries, because they somehow average out to reduce error. Not sure, my brain starts to hurt at this point.

One thing should be clear though: if the red filter in the camera is centered around the red phosphor, then I'd definitely expect to see very poor APV-luminance tracking between a red and green test pattern when using only the red filter. Might try that out in the near future, if I can figure out which photosites are the red channel. Would be nice to verify the relative importance of the green channel.

Tomorrow, if I have time, I'll repeat this experiment, but take more images per test pattern to reduce noise, and I'll also include a fourth, white, pattern.
 
Been doing a bit more reading - a weighted sum approach might approximate the luminance function better than just using the green channel. Will think more on this.
 
nice work ;d

a (correctly) weighted sum would perfectly reproduce the luminance function for the output of a crt, assuming that the crt's spectrum is always some weighted sum of its primaries' spectra (and it better be)

to figure out the weights... shit brain's dead, i needa use pen and paper for this...
but it's definitely possible.

k

afaik this is how colorimeters work anyway.

basically you need to measure the contribution from red/green/blue to luminance using a colorimeter or spectro<something>meter. for example (255,0,0) may measure as 15cd/m^2, (0,255,0) may measure as 50cd/m^2 and (0,0,255) may measure as 10cd/m^2 with the colorimeter. represent these as a vector: [15 50 10]

and then you need to find the 3x3 matrix corresponding to the response of the camera sensor to each primary

for example if matrix is [3 4 7; 2 6 8; 1 5 9], that means when you display red, the camera measures an average of 3 in the red pixels, 2 in the green pixels, and 1 in the blue pixels. when you display green, the camera measures 4, 6 and 5 for the red green and blue pixels respectively

invert the matrix, left-multiply by the luminance contribution vector ([15 50 10] in my example) and you get the weighting factors for the camera pixels
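
something like this, using the example numbers above (numpy sketch; in reality you'd plug in your own measurements):

```python
import numpy as np

# camera response matrix: column j = mean R/G/B photosite values when
# displaying primary j (red, green, blue), per the example above
M = np.array([[3.0, 4.0, 7.0],    # red photosites
              [2.0, 6.0, 8.0],    # green photosites
              [1.0, 5.0, 9.0]])   # blue photosites

# colorimeter luminance of each full-field primary, cd/m^2
Y = np.array([15.0, 50.0, 10.0])

# weights such that w . (camera RGB response) = luminance
w = Y @ np.linalg.inv(M)

# sanity check: applying the weights to each primary's response
# recovers the measured luminances
assert np.allclose(w @ M, Y)

# luminance estimate for any other image of the same display:
# Y_est = w @ [mean_R, mean_G, mean_B]
```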

great now we completely turned our cameras into colorimeters :D (repeat first part for X and Z in place of luminance)
 
very nice - at first I was thinking we couldn't do something like this, for the reasons outlined here. But since we're only dealing with a singular SPD (as you say, the CRT will always be a weighted sum of the primary spectra), we're essentially creating a custom calibration matrix to turn our cameras into a colorimeter (which is only accurate on our particular display).

In fact, this is exactly what is described here:

In all the cases above we have been discussing expensive instruments that are designed to measure "any" light source. In the unique case of display calibration we often know exactly what kinds of spectra we are going to measure. As an example, let's say we know we are going to measure a Sony CRT. All Sony CRTs use the same three phosphors (one red, one green and one blue) known collectively as the P-22 phosphor set. Variation of the spectra from these phosphors is slight. If we use these spectra at the factory when we calibrate our colorimeter, the colorimeter will have incredible accuracy measuring a Sony display, far better than any spectroradiometer at 10X the price

An interesting advantage of this kind of "purpose built" colorimeter is it does not need to have filters that are near perfect XYZ simulators. All you need are simple RGB filters that can discriminate very well between the three primary phosphors. If you know the source primaries you are going to measure, you can build a very accurate device for an extremely small amount of money. I have worked with colorimeters that cost less than $100 to build that rival the accuracy of a $20,000 spectroradiometer when used on the display they were designed for. That same spectroradiometer could not provide any measurement of black on the display. The $100 colorimeter could measure black with a high degree of accuracy.

right on man, I know what I'm doing tomorrow night :)

Also, now that I've found two matched spots on my screen and I have the raw data from my last luminance experiment, once I've figured out the red from the blue filtered photosites and taken the measurements of the primaries to build the calibration matrix, it will be a matter of seconds to directly compare our previous approach to this new approach :D
 
wait, about the whole spot matching thing - do you just mean white balance, or do you mean that the luminance vs input curves in different spots are different (i.e. resulting in different gamma in different areas of the screen)?

btw how accurate is an i1 pro compared to the much more expensive stuff?
 
well what I did was first find two patches that matched at one gray level, and then tested to make sure they had the same gamma (across 7 gray levels). Wasn't a super thorough test, and I did find much better correspondence at the lower levels.

i1 pro has an optical (spectral) resolution of 10 nm (while lab grade instruments may be 1 nm, although they have lower signal to noise ratios, see here).

So it's not going to be able to resolve narrow band stuff very well.

btw check out this

One thing I can't figure out is that, if I'm understanding right, the highest resolution grating offered is 1800 lines per mm, which corresponds to a groove spacing of about 560 nm. Yet the plots are resolving peaks at a much higher resolution.
 
haha the spectral resolution isn't determined just from that...

http://h2physics.org/wp-content/uploads/2009/08/coloredspectrum.jpg

the grating density determines the amount of separation between the colors. the resolution is related to the width of the slit, the resolution of the detector and the distance between the grating and the detector. i'd imagine that the i1pro is limited by the resolution of the detector

btw not sure if you've noticed, but in your primaries spectra the red peaks appear in the green and blue...
 
the grating density determines the amount of separation between the colors. the resolution is related to the width of the slit, the resolution of the detector and the distance between the grating and the detector. i'd imagine that the i1pro is limited by the resolution of the detector

The i1 pro samples the spectrum at 3.5 nm I think, but I always thought the optical resolution was the most important thing. Let's take an extreme example - what would happen if you sampled light through a grating that had an optical resolution of 100 nm, but the detector array sampled at 1 nm intervals? edit: oh, I think I get what you mean - optical resolution is not limited by grating density.

btw not sure if you've noticed, but in your primaries spectra the red peaks appear in the green and blue...

You mean in the red primary, there are peaks in the green and blue regions? If that's what you meant, that doesn't mean the red primary will look blue or green - what matters is how the SPD is integrated over the standard observer functions. (but maybe I misunderstood you here)
 
http://www.argyllcms.com/doc/i1proHiRes.html
never mind, it's not limited by the sensor density.

By measuring a very narrow band source such as a laser, using the default 10nm resolution indicates a FWHM (Full width at half maximum) of about 25nm. Doing a measurement at 3.3nm resolution reveals that the optical limit seems to be about 15nm, so there is some hope of improvement from that perspective.

i meant in the green and blue spectra, you can see peaks around 600nm and 670nm. so it appears that when you're displaying green, you're also seeing a little bit of the red phosphors. could be due to landing, purity, video signal interference, ... who knows. or maybe just internal light reflections off the red phosphors?
 
ah! right, good catch. Color purity isn't ideal. I took those measurements a while back - it could be the case that my black level wasn't as low. I'll remeasure at some point.
 
Ok, so using those two patches on my screen, I took concurrent measurements with camera and colorimeter for the three primaries (level 255 for each). I took 10 pics for each.

First order of business was to figure out the red and blue channels. Here are cropped images of the top left corners of the red, blue, and green images:

[image: qpp7iw.png]


So it was simple to extract these channels.
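
(In case anyone else wants to do this: once you know the mosaic layout, it's just array slicing. The sketch below assumes an RGGB pattern, so check your own sensor's layout first.)

```python
import numpy as np

def bayer_channels(raw):
    """raw: 2D array of raw sensor values, assumed to be laid out RGGB."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    g  = np.concatenate([g1.ravel(), g2.ravel()])
    return r.ravel(), g, b.ravel()
```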

Then, to see whether each of these channels could distinguish the three primaries, I calculated the mean APV (across the 10 images per primary) for each of the three channels. Below are the results, along with the mean X Y Z that I measured concurrently for each primary.

[image: 2l92nfo.png]


Looks like there's clear separation between the primaries for each of the channels, so the next order of business is to create the calibration matrix. I think I want to double-check everything - I might have made an error in some of the code. Gotta go for a bit, but will work on it in an hour or so. Stay tuned.
 
thanks, will look into the -o option.

Figured out the problem, after extensive debugging. I think I saturated the damn sensors. I figured a shutter speed of half a second would be fast enough not to saturate, but I was wrong. Will work on it tomorrow if I have time.
 
I'm getting an "array" error when I search this thread, so I apologize if my question has already been asked. Has anyone tried to use a virtual machine in place of the laptop with the virtual machine being output to a separate display?

Closest thing I have to a second computer at the moment is a FreeNAS box which won't really fly, but I have a 24" LCD that I could hook up to the video card's second DVI output in addition to the CRT I have. Thanks in advance.
 
I don't see a problem with that. So long as you're able to interact with the WinDAS and HCFR GUIs on a separate display, you're good.

If you were really desperate, you could have the WinDAS and HCFR windows displayed on the same screen as the test pattern, but this would not be ideal, as the presence of these other windows may have an effect on the measurements (especially on darker test patterns).
 
Ok, so histograms are really important to avoid floor/ceiling effects in the sensor.

Here is some data. Camera right up close against a full white pattern (guessing about 85 nits). f/5.0, ISO 100, and 14 different shutter speeds, ranging from 0.00025 seconds to 30 seconds (although the first two shutter speeds ended up using the same setting - either I made an error, or there was a bug).

I took one image for each setting, and I calculated APV for each image, along with a histogram. Data is from all photosites, not just one channel.

Here are results:

[image: 2a4xusx.png]


The central plot shows the shutter conditions vs. APV. Keep in mind that the shutter conditions are plotted nominally, rather than with their actual duration values.

Notice how only three of the images (green highlight) had no ceiling/floor effects (6, 7, and 8, corresponding to 0.05, 0.125, and 0.25 seconds).

Even though the 9th value appears not to be saturated based on APV, the histogram shows that there was a large degree of saturation. Also notice that even though these images are nominally 14 bit, saturation occurs at a value of around 14,500.

Conservatively, this means that the actual available range of values is between around 1000 (black offset) and 14000.
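
(Something like this should do for screening an image - a sketch; the thresholds are just the rough numbers above:)

```python
import numpy as np

def clipped_fractions(raw, floor=1100, ceiling=14000):
    """Fraction of photosites piled up near the black offset (~1000)
    or near saturation (~14500). Non-trivial fractions at either end
    mean the exposure is floor- or ceiling-limited."""
    raw = np.asarray(raw).ravel()
    return float(np.mean(raw <= floor)), float(np.mean(raw >= ceiling))
```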

Here is a plot of shutter speed duration vs APV for conditions 5 to 9. Notice how the line going through 6,7,8 is linear.

[image: vy426q.png]


So in all future experiments, I'm going to screen my settings with histograms! Apparently with CHDK you get a real-time histogram, although I'm not sure if this is calculated for the preset shutter speed or whatnot. I'm also curious to see how the available range compares on the Casio (I know it's 12 bit for starters).


edit: just found this - could learn a thing or two!
 
So I finally had time to play some games. I created the LUT file in .cal format using the dispcal command as per the guide.

But in some games, my LUT just resets. How do I enforce it? I tried CPKeeper but it asks for .icm or .icc files.
 
I've never used cpkeeper, but i just downloaded it and read the readme.txt:

Run CPKeeper.exe then click the [...] buttons to choose a .icc or .icm profile file for your monitor(s). If you do not have a .icc or .icm file, click the [*] button to grab and save your current gamma ramp to file.

see if that works.
 
Ok, I repeated the XYZ approach, this time using R G B test patterns that didn't cause ceiling/floor effects in the sensor. This was tricky with the blue pattern, as it was hard to find a level for blue that didn't top out the blue photosites yet didn't bottom out, say, the red photosites. So I didn't use the full level primaries for this.

[image: 2w6dlja.png]


Before I continue, need to think about a few things:

1) I'm assuming it's ok that I didn't use (255,0,0), (0,255,0), (0,0,255) as the test patterns, right?

2) Whatever calibration matrix comes out will only be valid for the current camera settings (exposure, ISO, etc.), right?

3) The red channel doesn't do a very good job of distinguishing the red and green primaries. This may partially be because the red and green test patterns had almost identical luminances. But does this present a problem in light of:

An interesting advantage of this kind of "purpose built" colorimeter is it does not need to have filters that are near perfect XYZ simulators. All you need are simple RGB filters that can discriminate very well between the three primary phosphors. If you know the source primaries you are going to measure, you can build a very accurate device for an extremely small amount of money. I have worked with colorimeters that cost less than $100 to build that rival the accuracy of a $20,000 spectroradiometer when used on the display they were designed for. That same spectroradiometer could not provide any measurement of black on the display. The $100 colorimeter could measure black with a high degree of accuracy.

My brain's starting to hurt, gonna take a break, and then will try to sort through the math - I get the feeling that by working through it I'll figure the answers out (I hope).
 
K, so for luminance, here is my weighting matrix, based on those three primary test patterns (of which, btw, I had taken 10 images each)

[-12.2909 24.5951 -12.2983]

To verify this, I went ahead and took 10 images of a 30 IRE pattern, while concurrently measuring with the colorimeter. The average Y value during this period was 7.5946 cd/m^2.

Multiplying the filter responses for the 30 IRE test pattern by the Y-weighting matrix yielded a value of 7.7991 - that's accurate to within about 2.7 percent :D

I think I can use the same weighting matrix for different exposure durations if I just scale it to compensate - so if I use a shutter speed twice as long, just divide the result by 2. And I can do this for X Y and Z. I'll verify this later :)

Worst case scenario, I have to derive custom matrices from appropriate primaries for each shutter speed, but I don't see why this would be necessary given the physics of the situation (assuming the shutter speed is reported accurately in the metadata).
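
(So in practice, estimating Y from any image would look something like this - a sketch, assuming the sensor response really is linear in exposure time:)

```python
import numpy as np

def estimate_Y(mean_rgb, w, t_shot, t_cal):
    """mean_rgb: mean R/G/B photosite values for the image, processed the
    same way as the primary shots the weights were derived from.
    w: Y-weighting vector derived from primaries shot at shutter speed t_cal.
    t_shot: shutter speed of the image being measured.
    Twice the exposure gives twice the camera signal, so divide it back out."""
    return float(np.dot(w, mean_rgb)) * (t_cal / t_shot)
```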

Ok, I've created some code that automatically screens images for cutoff values and generates nice histograms. I've also created code that generates calibration matrices for all three trichromatic quantities X Y Z, so if I need to re-create any with different camera settings/primaries it'll be a breeze. I've stored the calibration matrices for one set of camera parameters, and I've also written code that will take any set of images and calculate the mean X Y Z values. Tomorrow I'll do some more verification at lower luminances and then will hopefully be able to measure black level!
 
Well, I wanted an exposure that would be high enough to capture meaningful data at lower luminances, although I could always use a faster shutter speed for matrix generation and then scale the results accordingly when using slower shutter speeds. edit: oh, you're talking about my previous data where I did a number of different shutter speeds, and the 85 Hz refresh issues - I see.

Yea, if I right click on the tiff files and choose details, the shutter speed is reported differently - like 1/1000 secs on the camera = 1/1024 secs in the tiff metadata.
 
Also, this may be relevant:

From here

Some cameras clip the low end, while others have an offset to the digital data so when there is no light on the sensor, the data properly records the noise. If the camera clips the low end data, single pairs of fast exposures in a dark room to determine read noise is not adequate. Pairs of exposures just above zero level are needed and the pairs subtracted, noise statistics measured and the statistics modeled, projecting to zero light level. Thus for each ISO, pairs of exposures that give light levels from zero to a few percent of maximum signal are needed. It is not necessary to go all the way up to camera saturation. This sequence is a must for Nikon cameras because all Nikon cameras tested so far clip the low end.

and from here

Figure 3. Read noise per pixel for various sensors. Data from Table 2. Note older cameras (e.g. Canon 10D, S60) have higher read noise than newer models. Currently Canon's technology leads in read noise performance. Lower read noise values = better performance. Nikon currently clips the average read noise at zero, losing some data. Canon includes an offset, so processing by some raw converters can preserve the low end noise, which can be important for averaging multiple frames to detect very low intensity subjects (as in astrophotography).

I'm guessing that the 1024 value is the offset for Canon cameras :)


edit: yep

Here are a couple of histograms - left with the lens cap on, right with a low luminance pattern. Both with the same exposure & ISO:

[image: 2ujjndl.png]
 
hm, tested with a lower luminance pattern.

Here are actual X Y Z values:

X: 0.167
Y: 0.175
Z: 0.208

and here are the values my code gave:

X: 0.5316
Y: 0.1474
Z: 0.5383

Here is a histogram from each channel from one of the ten images I took (R G B from left to right):

[image: sm5d14.png]


Maybe the red channel is bottoming out a bit... not sure. Will take a quick image of a 2 second exposure (instead of 1 second) and see what happens.
 