WinDAS White Point Balance guide for Sony Trinitron CRTs

In an attempt to figure out which is the green channel, I took a pic of a green screen and a red screen (blue was too dark with my exposure settings). Here are cropped images of the top left corners of each respective image:

[attached image: 2wo8ykw.png]


So the green channel starts on the second column on the first row and is every alternating pixel thereon. Gonna do some luminance tests now.

edit: the alternating pixels for green don't hold from the end of one row to the next. Gonna try to figure out a simple algorithm to extract the channel properly.
 
According to here

you can extract each channel independently:

You can do that just by extracting one of the RGGB RAW channels and ignoring the other 3. The image will be half the size in both axes, sharp, and should have fewer aliasing artifacts than when demosaicing. The problem is it will be a monochrome image

You can do that by extracting the RAW channels in DCRAW:

dcraw -v -d -r 1 1 1 1 -4 -T file.cr2 (use -D here for us)

Rescale the resulting TIFF to 50% using nearest neighbour (no idea the name nearest neighbour is given in the English version of PS). Add two extra rows/columns of pixels before rescaling to pick up the preferred RAW channel.

One sample (no sharpening at all applied, 100% crop):


Aliasing still happens (see nearly horizontal hairs on top of the girl's head). I guess the gaps between the photocells in the sub-grid used to form the image make them behave closer to delta samplers than to the more desired light-averaging samplers, therefore easily producing aliasing. Acutance is higher this way though.
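The extraction described above amounts to 2x2 subsampling of the mosaic. A minimal stdlib sketch, assuming the RAW data has already been decoded into rows of pixel values (the function name and toy data here are illustrative, not from the actual files in this thread):

```python
# Sketch: extract one Bayer sub-channel from a RAW mosaic by 2x2 slicing,
# assuming the mosaic is already loaded as a list of rows of pixel values
# (e.g. decoded from the TIFF that dcraw writes out).

def extract_channel(mosaic, row_off, col_off):
    """Take every second pixel starting at (row_off, col_off).
    (row_off, col_off) selects which of the 4 sub-grids you get;
    for an RGGB sensor, (0, 1) and (1, 0) are the two green grids."""
    return [row[col_off::2] for row in mosaic[row_off::2]]

# Toy 4x4 RGGB mosaic: R G R G / G B G B / ...
mosaic = [
    [10, 90, 11, 91],
    [92, 20, 93, 21],
    [12, 94, 13, 95],
    [96, 22, 97, 23],
]
green1 = extract_channel(mosaic, 0, 1)  # one of the two green sub-grids
```

The result is half the size in both axes, exactly as the quoted post describes.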



gonna try to figure this out.
 
also found this:

http://users.soe.ucsc.edu/~rcsumner/rawguide/RAWguide.pdf

Note that though this pattern is fairly standard, sensors from different camera manufacturers may have a different "phase." That is, the "starting" color on the top left corner pixel may be different. The four options, typically referred to as `RGGB', `BGGR', `GBRG', and `GRBG', indicate the raster-wise orientation of the first "four cluster" of the image. It is necessary for a demosaicing algorithm to know the correct phase of the array.
 
Yep, the pixels don't alternate when going from the right edge back to left edge one row down.

Here is data from a raw file that was captured using a green test screen (the image is 4290 columns wide).

Fig 1 shows the pixel values for the last few pixels of the first row, plus two pixels from the second row.

Fig 2 shows the same for the last few pixels of second row, plus two from third row.

Well, at least now I know the Bayer layout of this camera, so extracting the green channel should be easy.
[attached image: oa3hwh.png]
 
Looks like we're on the right track. 0.999 correlation between measured luminance and averaged green channel:

I ran a linear regression (not shown here), but the best-fitting equation is giving me negative luminance numbers for my black level (whose mean pixel value was 1025.6). Will work on this more tomorrow, assuming I can keep the camera for a while longer. Will use a number of successive video values that are just within measuring range of my instrument and see what kind of relationship emerges.

What could be going on is that there is some baseline signal level in the sensor (even though I was using the lowest ISO of 100). I'll experiment by taking a shot with the lens cap on and seeing what the average pixel value is. I can then subtract this from the rest of the data.
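That correction is just a per-frame mean subtraction; a trivial sketch (the 500 and 1234 figures are hypothetical round numbers, not measurements):

```python
# Sketch: subtract the mean of a lens-cap (dark) frame from the mean of
# a measurement frame to remove the sensor's baseline signal level.

def frame_mean(mosaic):
    vals = [v for row in mosaic for v in row]
    return sum(vals) / len(vals)

def offset_corrected(measured_mean, dark_mean):
    return measured_mean - dark_mean

dark = frame_mean([[499, 501], [500, 500]])  # toy lens-cap frame
signal = offset_corrected(1234.0, 500.0)     # usable signal above baseline
```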

[attached image: 2ngx8w6.png]
 
Ok took a pic with the lens cap on, and yep there is a black offset. Here are results with black offset removed:

Rough estimate of my black level based on these values is 0.002 cd/m2

I might buy a camera soon so I can do more experiments at my leisure. Might get even tighter results with longer exposures and by taking multiple images per test pattern. Might also be worth examining the average value of all pixels - the red and blue pixels do carry luminance information also, as the radiant flux of every wavelength should rise linearly with luminance, so long as the spectral signature is the same (and if we're using grayscale patterns, the spectral signature should be the same if we have a good white balance. hmm, but at low black levels the white balance might be quite off - probably better to just stick with the green channel, as I believe green filters are usually a good match for the spectral luminosity function). Also noticed that luminance readings are much more stable at the low end with the i1Display Pro - not sure if this is instrument or CRT based.

[attached image: 1zebmte.png]
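For reference, the fit itself needs nothing beyond the stdlib. A sketch with made-up, perfectly linear data (real readings would be noisy, with a nonzero intercept from any residual offset):

```python
# Sketch: least-squares fit of colorimeter luminance against mean green
# pixel value (offset-corrected). Data points are made up and exactly
# linear for illustration.

def least_squares(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx  # intercept ~ luminance at zero signal

green_means = [100.0, 200.0, 400.0, 800.0]  # mean green values, offset removed
luminances = [0.1, 0.2, 0.4, 0.8]           # cd/m2 from the colorimeter

slope, intercept = least_squares(green_means, luminances)
```

A negative intercept on real data is the "negative luminance at black" symptom from a few posts back: it points at a leftover baseline that wasn't fully subtracted.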
 
nice work

Might also be worth examining the average value of all pixels - the red and blue pixels do carry luminance information also, as the radiant flux of every wavelength should rise linearly with luminance
yes if you include the red and the blue, you'll get a slightly lower signal-to-noise ratio.

are you sure about 0.002? for me 0.01 nits was a very visible glow.
 
yes if you include the red and the blue, you'll get a slightly lower signal-to-noise ratio.

I think we should stick with the green channel only, as including the red and blue channels only works if the white balance is perfect across the entire grayscale. This is actually why a debayered sensor would probably suck for this with anything less than a perfect grayscale (unless you were interested in radiance rather than luminance).


are you sure about 0.002? for me 0.01 nits was a very visible glow.

I think it's lower than 0.002.

edit: Yes! Get to borrow the camera for a few more days :)

tonight I'll do some repeatability measurements both with test patterns and with lens cap on. edit2: will have to wait until tomorrow to borrow the camera but I'll be able to keep it over wknd.
 
well i'm only interested in using the camera for relative luminance measurements, and independently for each channel. for that, the camera's sensitivity spectrums are not important since luminance will always be proportional to whatever the camera measures. since the camera's spectrum doesn't match the luminance spectrum, the constant of proportionality varies for green red and blue, but that's not an issue

this is what i plan to do:

use colorimeter to measure (20,20,20). adjust via LUT to make the white balance and luminance at (20,20,20) perfect.

use camera to measure (20,0,0).
suppose i measure 1234 with the camera for (20,0,0).
then i'd adjust the lut for (19,0,0) so that the camera measures 1234*(19/20)^2.4
and adjust the lut for (18,0,0) to make it measure 1234*(18/20)^2.4
repeat to (1,0,0)

repeat above with (0,20,0)
repeat above with (0,0,20)

this should work so long as the channels in the crt are perfectly additive, which they should be...
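A sketch of the per-level target computation in that plan, using the example reading of 1234 at level 20 and an ideal gamma of 2.4:

```python
# Sketch: camera-reading targets for levels 19..1 of one channel, given
# the camera reading at video level 20, for an ideal power-law gamma.

GAMMA = 2.4

def channel_targets(reading_at_top, top=20):
    return {v: reading_at_top * (v / top) ** GAMMA
            for v in range(top - 1, 0, -1)}

t = channel_targets(1234.0)
# t[19] is the camera value to aim for at (19,0,0), t[18] at (18,0,0), ...
```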



the main difficulties would be drift... and phosphor persistence/lag for low levels. i can see my mouse cursor trail for 20 some seconds on a completely black image.

the psychophysical method previously discussed has the advantage of being robust against drift and also being much faster than taking pictures and processing them. probably i'll first approximately calibrate using my eyes, then make final adjustments with camera measurements
 
...since luminance will always be proportional to whatever the camera measures.

Nope. Radiance will always be proportional to whatever the camera measures, not necessarily luminance. This is why even a lab-grade photon counter will not do the trick here.

The only way that counting photons will be proportional to luminance is if either:

1) the photons are first spectrally weighted with the luminosity function.

OR

2) the set of measured signals (in our case, the different intensities of white/gray) all have the same relative amount of radiant energy across the spectrum. One caveat to this condition is that our scaling factor will only be useful for this particular spectral signature.

If the white balance is consistent (doesn't have to be D65 or whatever) across all gray intensities, then condition 2 will be met. This is probably unrealistic though at very low black levels, which is why we should rely on assuming the first condition is met. Based on what I've looked at briefly, the green filter comes pretty close to the luminosity function.

We can even test whether my thinking is correct here, by looking only at the red and blue channels, and seeing if their pixel value is consistent across different measured luminances (though be sure to compare different colored test patterns to ensure different spectral signatures as a "stress test"). My prediction is that it won't be as highly correlated compared to using only the green channel.

this is what i plan to do:

Ok trying to follow - I'm a slow thinker so bear with me while I think out aloud here :)

use colorimeter to measure (20,20,20). adjust via LUT to make the white balance and luminance at (20,20,20) perfect.

By perfect, you mean where it should be given the measured peak luminance and a perfect gamma of 2.4?

use camera to measure (20,0,0).
suppose i measure 1234 with the camera for (20,0,0).
then i'd adjust the lut for (19,0,0) so that the camera measures 1234*(19/20)^2.4
and adjust the lut for (18,0,0) to make it measure 1234*(18/20)^2.4
repeat to (1,0,0)

repeat above with (0,20,0)
repeat above with (0,0,20)

this should work so long as the channels in the crt are perfectly additive, which they should be...

very interesting idea - you'd get your white balance and gamma sorted from levels 0-20, and you'd never have to explicitly measure luminance or chromaticity (except at that first (20 20 20) step). And verifying the luminance and chromaticity of the measurable levels will validate the technique.

The one (probably minor) issue is the non-zero black level combined with the L = V^2.4 (no offset) approach. With a non-zero black level there are three approaches to deal with it (as I understand it):

1) use an input offset (BT.1886)
2) use an output offset
3) just force the function to a no offset function. Doing this will introduce a bit of crush at the first few video levels, but given that our black levels are so low, this crush will probably be minor.
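The three options can be written out explicitly; a sketch with normalized video level v in [0, 1], white luminance Lw and black level Lb (the input-offset form follows the BT.1886 EOTF's a and b coefficients):

```python
# Sketch: three ways to fold a nonzero black level Lb into the target
# curve, for normalized video level v in [0, 1] and white luminance Lw.

GAMMA = 2.4

def target_no_offset(v, Lw, Lb):
    # 3) ignore Lb entirely: crushes the first few levels when Lb > 0
    return Lw * v ** GAMMA

def target_output_offset(v, Lw, Lb):
    # 2) scale the curve and add Lb on the output side
    return Lb + (Lw - Lb) * v ** GAMMA

def target_input_offset(v, Lw, Lb):
    # 1) BT.1886-style: offset applied to the input before the power
    a = (Lw ** (1 / GAMMA) - Lb ** (1 / GAMMA)) ** GAMMA
    b = Lb ** (1 / GAMMA) / (Lw ** (1 / GAMMA) - Lb ** (1 / GAMMA))
    return a * (v + b) ** GAMMA
```

All three agree at v = 1 (except the no-offset curve only reaches Lw - Lb of headroom above black); they differ most in how the first few levels come up out of black.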
 
luminance will always be proportional to whatever the camera measures.
what i meant was that this is true for each individual channel, since the crt phosphor's spectrum doesn't change with intensity.

for example if the luminance of (0,255,0) is twice that of (0,190,0), then the camera will measure the ratio to be exactly 2

2) the set of measured signals (in our case, the different intensities of white/gray) all have the same relative amount of radiant energy across the spectrum. One caveat to this condition is that our scaling factor will only be useful for this particular spectral signature.

yup exactly, and by handling each channel independently we achieve exactly that
 
the main difficulties would be drift... and phosphor persistence/lag for low levels. i can see my mouse cursor trail for 20 some seconds on a completely black image.

Well adjusting the LUT at low levels will not produce any significant phosphor persistence unless you're suddenly switching from a much brighter pattern to the darker one (assuming I've understood your concern here).

also being much faster than taking pictures and processing them.

would be really sweet to automate the entire process. Matlab can interface with many instruments, but it might be tricky with a DSLR.

If you could at least somehow connect your camera to PC while operating it, and have live access to the camera storage, then at least the extraction and processing of the image could be automated.
 
what i meant was that this is true for each individual channel, since the crt phosphor's spectrum doesn't change with intensity.

Ah, yes, correct. Although to be ridiculously nitpicky this depends on a perfect black level :p (but yea, with our black level not an issue)
 
btw, your approach could be modified to target any luminance function, even BT.1886. That's pretty impressive. And the single channel idea is brilliant - might tighten things up even more than relying on the green filter alone :)

Cannot wait until tomorrow to do the repeatability measurements. Once we have standard error data, we'll be in a good position to figure out what the reasonable number of images to take per "measurement" is (well, for the camera I'm using for now at least).
 
Well adjusting the LUT at low levels will not produce any significant phosphor persistence unless you're suddenly switching from a much brighter pattern to the darker one (assuming I've understood your concern here).
yea... but i think that even going from 10 to 0 could be significant.
yesterday i took three pictures, one of a blank 0,0,0 image, one with monitor off. for the last one i turned on the monitor and only showed the blank image on it for a minute or so before taking the picture.

haven't processed the files yet but to my eyes the last state was darker than the first.

it could be more than just phosphor lag though. the electronics could cool down significantly after displaying a dark image for a while. and probably a cold cathode has a different response characteristic from a warm cathode.

i think we're being a bit too ocd about these dark colors but at least it's good practice for when oled's become mainstream :D
 
i think we're being a bit too ocd about these dark colors but at least it's good practice for when oled's become mainstream :D

my thoughts exactly. btw a cool thing about this is that the very quality of the display that renders conventional instruments useless (low black levels) actually helps produce purer individual RGB channels.
 
my friend from the lab forgot to bring in his camera today so I'll have to wait until Monday. Thought about renting a DSLR but it would have cost close to $100 for the wknd, also considered buying a second hand nikon d50 from a second hand store for $175, but then just went ahead and ordered a Casio Exilim EX-ZR700 :)

they really do seem to be the cheapest raw solution out there, and the wide angle and high speed will be great for analyzing my tennis strokes.

also strange that the zr1000 model is more expensive yet seems worse:

http://www.digitalversus.com/digital-camera/face-off/16023-14497-versus-table.html
 
Anyone noticed that the i1 Display Pro has a magnet? I noticed it because the full white screen got a bit greenish in the middle.
 
wow, never noticed this, but you are right. Here is an explanation of its function.

fortunately it doesn't seem to have an effect on the CRT unless you purposely angle the magnet towards the screen. Did you get the green tint even when the instrument was placed properly on screen?
 
After i realized, i searched for it and also found that video with this guy explaining what this magnet is doing. ^^

When turning this clip with the magnet away from the screen it does not affect it.

But it is still an unpleasant feeling when a magnet is so close to your CRT.
 
yea you can set iso exposure and aperture

note that the max exposure time depends on iso, for iso 100, it's 15s and it scales roughly so that iso*exposure is constant.
 
I don't understand why it doesn't let you adjust them independently. The camera I borrowed from lab allows that. I wonder if anything like CHDK will be released for casio models...
 
i have no idea either, but yea dslr's are understandably more flexible, but from what i know this point&shoot has a lot of manual control available (even focus :D)
 
should be arriving in about a week - free shipping from Japan, not bad! Looking forward to experimenting. Will be interesting to compare repeatability measurements between the Canon EOS XSi Rebel (which I'm borrowing again tomorrow) and the Casio.
 
this is what i plan to do:

use colorimeter to measure (20,20,20). adjust via LUT to make the white balance and luminance at (20,20,20) perfect.

use camera to measure (20,0,0).
suppose i measure 1234 with the camera for (20,0,0).
then i'd adjust the lut for (19,0,0) so that the camera measures 1234*(19/20)^2.4
and adjust the lut for (18,0,0) to make it measure 1234*(18/20)^2.4
repeat to (1,0,0)

repeat above with (0,20,0)
repeat above with (0,0,20)

just so we're on the same page here, this would only work if you first subtracted the baseline level (with lens cap on), right? So if you measured an average of 500 with lens cap on, then in your example, you'd work with 734 as your multiplier.
 
Here are some repeatability measurements with lens cap on:

(mean pixel value for each of 13 images that I took).

[attached image: 35n9gfc.png]


mean value = 1024.9, standard error of the mean = 0.0313!
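The mean/SEM computation is stdlib-only; a sketch (the five frame means below are made up, not the 13 actual ones):

```python
# Sketch: mean and standard error of the mean (SEM) over repeated
# lens-cap frames. The per-frame means here are hypothetical.
from math import sqrt
from statistics import mean, stdev

def sem(values):
    return stdev(values) / sqrt(len(values))

frame_means = [1024.9, 1025.0, 1024.8, 1024.9, 1025.1]
m = mean(frame_means)
e = sem(frame_means)
```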

Now gonna test with actual luminance patterns.
 
just so we're on the same page here, this would only work if you first subtracted the baseline level (with lens cap on), right? So if you measured an average of 500 with lens cap on, then in your example, you'd work with 734 as your multiplier.
yes that's the right logic

but it gets more complicated if for example you measure an average of 550 with the monitor turned off, due to a tiny bit of ambient light. then it would probably make more sense to subtract off 550.

btw i think i mentioned this before but make sure to take the lens cap pictures with the same exposure time as the actual measurements.
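Putting those two points together (baseline subtraction plus the gamma targets), the raw camera value to aim for at each level would be something like this sketch, using the hypothetical 1234 reading and 550 baseline from the example above:

```python
# Sketch: combine baseline subtraction with the gamma-2.4 targets. The
# target is computed on the offset-corrected signal, then the baseline
# is added back to get the raw camera value to match.

GAMMA = 2.4

def raw_target(level, reading_at_top, baseline, top=20):
    signal = reading_at_top - baseline             # e.g. 1234 - 550 = 684
    return baseline + signal * (level / top) ** GAMMA
```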
 
Ok, so the main issue is there is drift, either in the CRT or in the camera.

Here is the typical trend, from trial to trial. This was at video level 21, which had a measured luminance of 0.082 cd/m2 (btw all these measurements were taken with default LUT, so gamma is very high).

[attached image: 2cik6mb.png]


And for level 20 (0.073 cd/m2):

[attached image: 1z4zlah.png]


For lower luminances, not as bad:

level 10 (0.018 cd/m2):

[attached image: 2ldip81.png]


level 0:

[attached image: 2lo6akp.png]


Here is a plot of luminance vs RAW pixel value:

[attached image: 2ikchna.png]


Using the average scaling factor between luminance and pixel value, I get a value of 0.0036 for my black level. That may not be too far off. I took these measurements (both with colorimeter and camera) more towards bottom edge of screen where there is a bit of "bleed". When I tried measuring black level with colorimeter, it briefly hovered at 0.002 before returning to 0.000.

I should also say that I had variance in the colorimeter readings - I did some measurements before the camera, and after, and there were differences - not sure if it was due to time, or me loading up other screens in between, or the colorimeter itself.

So, first order of business is to establish the source of the systematic error. Will go about that tomorrow.

(and yep, I took lens cap image with exact same settings as the regular images).
 
Ok, so the main issue is there is drift

crap......

for lower luminances it is just as bad but the noise hides it more. in each case the luminance increases by 1-2% from 0 to 10.

I'd guess it's the crt

how far away is the camera? there could be heat from the crt affecting the sensor
 
had the camera about half a foot away, could be an issue - might test that theory tomorrow too.

I'm gonna test around vid level 20. Will let the monitor warm up at that level for half an hour, and then will take about 50 successive images. It's mind-numbing stuff, as you have to sit still and press the button every 15 seconds. Doesn't work if you hold down the button unfortunately :mad:

I'll also aim camera at center of screen this time (also, I had zoomed in completely to a small area on the screen, would be interesting to see if same drift occurs when imaging the entire screen).

I've largely automated the dcraw and pixel value extraction, so once images are taken it's a breeze.
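For what it's worth, the batch step can look something like this sketch (the dcraw flags mirror the ones quoted earlier in the thread: -D for document-mode RAW, -4 for 16-bit linear, -T for TIFF output; the command builder itself is just an illustration):

```python
# Sketch: run dcraw over a batch of captures. Reading the resulting
# TIFFs and averaging the green channel is left to whatever image
# library you use.
import subprocess

def dcraw_command(raw_path):
    return ["dcraw", "-v", "-D", "-4", "-T", raw_path]

def convert_all(raw_paths):
    for p in raw_paths:
        # writes a TIFF next to each original RAW file
        subprocess.run(dcraw_command(p), check=True)
```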
 
not sure if that camera supports it but you may be able to control the camera from a computer via a usb cable.

googled a little; I think it's called "canon eos utility"
 
wow thanks that looks awesome:

from here:

If you want to capture a series of stills taken at precise intervals and then run them together as a timelapse sequence, you could spend money on a programmable remote control timer device. Alternatively, you can tether your camera to a laptop, fire up the free EOS Utility application and click the Remote Shooting label. A little stopwatch icon launches the Timer Shooting window and from here you can set your camera to shoot at regular intervals, from one shot every five seconds to a single shot every 99 minutes and 59 seconds. EOS Utility will record your series of shots straight into your Picture folder, so you don’t need to worry about the camera’s memory card filling up.
 
but it gets more complicated if for example you measure an average of 550 with the monitor turned off, due to a tiny bit of ambient light. then it would probably make more sense to subtract off 550.

good call. I suspect though that the main source of ambient light is reflections off the surfaces in my room coming from the CRT. My room can be made pretty damn lightproof in a minute or 2.

If I get this Canon EOS software to work, I'll enclose the monitor and camera with a blanket, but I'll definitely take some baseline measurements of the turned off monitor with lens cap off.

edit: got software working. Running low on battery though so don't want to push my luck tonight.
 
hah, finally had an idea that you hadn't thought of :)

btw, anything similar to the Canon EOS utility software for the casio?
 
Just had another thought (and this is probably not as relevant when using your gamma calibration approach which doesn't require knowledge of raw luminance information).

even with a blanket, there will likely be some ambient reflections that may affect camera readings (this itself is worth testing). If so, then this would affect the camera more than the colorimeter which is in contact with the screen (and thus more protected from ambient interference). This would mean that linearity between measured luminance and measured pixel values is compromised.

Solution is to just have camera right up against screen - this would also make measurements more repeatable from session to session.
 