24" Widescreen CRT (FW900) From Ebay arrived,Comments.

Oh and by the way - I just got an AMD video card for one of my LAN boxes and HOLY FUCK!!! The analog out is stupid clear! I'm going to have to do some side-by-sides, but let's just say that 1920x1440 on the Artisan is USEABLE (for text)! Guess my next card is going to be a used HD-7970 or R9-280x or something like that.

What was the video card you used before the AMD? And it wasn't clear before @ 1920x1440?
 
GTX-560. It was clear enough on the F520, but the Artisan? Nope. The AMD card is clearer on both screens.

I refuse to believe that the rumors about AMD being clearer are true. :)

Kind of a moot point anyway now if AMD doesn't include analog out.

Clarity was one thing I did worry about when getting a GTX 980, because it's a ZOTAC and I wasn't familiar with the brand. And in ancient times analog was one thing some manufacturers skimped on. After some follow-up though, the best I could figure out was that they used good quality parts and that the output was indeed clear and such...
 

That's good to know! It could very well be MY GTX-560, after all. But my AMD card is simply clearer. I'd need more cards to verify, but I don't have the time or the money. And for the record, the ATI chipset on my laptop is blurrier than the AMD card in my LAN box - so as with everything analog, your mileage is going to vary. :)
 
on my gtx 970, i (used to) see a lot of overshoot that makes the image look a bit sharper than it really is

i made this image to test...
YBxLqi9.png

do you see any overshoot for any of the lines? for me the only significant one is the white line on the green background... i can see a little shadow to the right (not convergence related)
 
Assuming by overshoot you mean bleeding? Isn't that highly dependent on resolution/refresh rate?

But yeah, at 1440x1080 @ 85 Hz I'm getting lots of "overshoot" on my Dell P992. But at 640x480 @ 60 Hz I get zero ;)
 

LAN box is not on the internet and it's put away, so I can't just whip it out and give you an answer. I have your Artisan measurements, by the way. Does Hardforum allow attachments in PMs?

EDIT: None of my boxes are out and ready to go, btw.
 
intentionally slightly out of focus so that there's no moire (due to the aperture grille + camera sensor array). these pictures show pretty well how I actually see it

http://i.imgur.com/Vmigmwm.jpg
http://i.imgur.com/OctXolj.jpg
see how it has an embossed look? there's a darker shadow to the right

i thought it was related to impedance matching at a connector and reflections somewhere, but last time i calculated, it didn't really make sense as the cable was too long for the shadow to be displaced by so little. plus it only shows up with certain color combinations, so perhaps it's related to crosstalk? i've no idea really
 
flod, might be a silly question, but are you sure those aren't chroma subsampling artifacts - i.e. that the artifacts aren't in the image pixel data itself?
 
yea it's a lossless png with full information at every pixel

plus chroma subsampling (when performed correctly) doesn't lead to consistently displaced shadows

also it's not convergence related; i can move the image to any part of my screen and see the same thing
 
methinks a loupe + a few diff video cards, or even a video signal generator might prove a fertile source of data :)
 
fuck it, gonna take a break from work and do some goddamn loupe experiments!

edit: this document is useful; in particular, look at page 85 (section 303-9). Our DSLR setup could achieve the same measurements in place of the CCD array they use.

flod, before I create test patterns, can you explain how overshoot would produce the embossing effect?
 
I don't understand overshoot - based on the wiki, it seems to happen when the voltage of the video signal goes too high (and doesn't fall back down fast enough?) - wouldn't that cause a bit of bleeding?

Can you explain overshoot in layman's terms, and how it could cause a shadow?
 
ah! thank you, that's a very clear illustration. I love your drawings btw, you should publish a book with them or something. They're like super instructive, but hilarious at the same time :)

I have an idea about how to measure this with the DSLR. The main challenge will be ensuring that all the patches in the test pattern don't saturate the sensor. Might have to combine diff exposures if that's the case.
 
I notice it on the red on green and magenta on green (dark shadow on right edge of red). At first I thought it might be a mach band effect, somehow interacting with red-green color opponent mechanisms, but the loupe tells me that at least part of the effect is really there. My first guess is that it has something to do with the order of color striping in the phosphor layer.

Let's see, they're ordered (left to right) as red, green, blue

So if you have a red patch on left, and adjacent to it on the immediate right is a green patch, that means that the only phosphor stripes in the rightmost vertical column of pixels in the red patch that are being "innervated" are the leftmost ones. And in the leftmost vertical column of pixels in the green patch, the middle stripe is being innervated. Which means that there is a dark gap of three "subpixels" between the edges of the actual color information.
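To picture it (each bracket below is one pixel's R-G-B stripe triad; letters are lit stripes, dots are dark ones - just my schematic, not to scale):

[R . .] [R . .] | [. G .] [. G .]
   red patch    |   green patch

The last lit stripe on the red side and the first lit stripe on the green side are separated by three dark stripes: the G and B of the last red pixel, plus the R of the first green pixel.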

Probably better to test for quality of signal using black and white contrast.
 
i see the same dark stripes. but red on green could be obscured by convergence

the weird thing is that i can also see a darker region to the right of white lines on a green background

if you think about the phosphors, for white lines on pure green, the green signal should be completely flat as the transitions are between (0,255,0) and (255,255,255). the only things changing are the red and blue signals and those shouldn't affect the green signal.
 
But even with white lines on green, there will be a dark subpixel between them, right?

btw, tried capturing some loupe photos - tricky without the rig set up, but here's the best I could do for now:

(see this image for the full version, though it's not full res like the excerpt below.)

14wysyt.png
 
ya but what i'm seeing isn't a dark subpixel. it's a grey shadow displaced by 2 pixels or so
 
yea that's weird, I just looked again at your other photo - the one where you wrote in white on green. I'm in the middle of a lot of reading and data analysis; if I get done with energy to spare, I'll develop the DSLR protocol.

Do you get that same displaced shadow when you have a relatively high luminance patch on a darker luminance patch? And does the embossing only occur with thinner lines, or would it occur with a larger patch? (so in that white-on-green example, instead of thin white writing, have a large square patch of white)
 
i can check when i get back tonight but i honestly think it's not worth investigating without an oscilloscope
 
yea an oscilloscope would be great for directly comparing the quality of the analog out of various video cards.
 
k did this quick and dirty, here we go:

Used matlab to create a test pattern, based on page 85 of this document.

Note that they specify the two patches to be a 90% max luminance square on a 10% peak luminance background, and say that this will be equivalent to gray levels 229 and 25, respectively. This is incorrect unless your gamma is 1.0, so I created patches whose luminances differed roughly by a factor of 9, with the lighter one around 78 nits and the darker one around 8.5. I used video levels 85 and 229.
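As a quick sanity check in matlab (gamma 2.2 here is an assumption on my part; the point is just the ballpark):

% luminance ratio of video levels 229 and 85, assuming gamma = 2.2
g = 2.2;
ratio = (229/85)^g       % about 8.85, i.e. roughly the intended 9:1
levels = 255*[0.9 0.1]   % the document's 229 and 25 only hold at gamma 1.0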

Here is what the pattern looked like, although I created it to be offset from the center of my screen as the convergence isn't great there.

note, this is a resampled version at 25% full size, so don't use this image for your own testing.

outv78.png
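For anyone who wants to regenerate it rather than scale up the image above, a minimal matlab sketch (the patch size and offset here are placeholders, not the values I actually used):

% light square (video level 229) on a dark background (video level 85)
img = uint8(85 * ones(1200, 1920));    % dark background
sq = 400; r0 = 150; c0 = 250;          % square side and top-left corner (placeholders)
img(r0:r0+sq-1, c0:c0+sq-1) = 229;     % light square
imwrite(img, 'pattern.png');           % lossless png, full info at every pixel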


Next, I took an image with the DSLR of some test patches with those gray levels, and screened them in matlab to ensure they weren't bottoming out or saturating. ISO 100, f/5.7, 0.5 second exposure was able to handle both patches without any ceiling or floor effects.

I also used the calibration matrix code to see whether the correct luminances were being calculated, and indeed all was good. However, I later realized that our approach to turning the camera into a colorimeter averages all pixels in a single image and calculates the luminance based on that. In order to combine that approach with spatial information, I'd need to integrate the matrixing code with a subsampling of the raw images (which would be accurate, even though it halves the spatial resolution in each axis relative to the original demosaiced image).

I wasn't prepared to work on that at this point, so I decided to just use the raw, undemosaiced pixel values. I knew this would result in a high frequency periodic pattern, as neighbouring pixels belong to diff color channels with diff sensitivities to light, but I reckoned that this pattern would be insignificant relative to the global pattern of luminance change (a good assumption, as I'll show at the end).
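The subsampling I have in mind would look something like this in matlab (just a sketch; rawBayer stands for the undemosaiced sensor data, and the RGGB layout is an assumption about my sensor):

% collapse each 2x2 Bayer quad (assumed RGGB) into one camera-RGB sample,
% halving the spatial resolution in each axis
raw = double(rawBayer);                 % undemosaiced sensor values
R  = raw(1:2:end, 1:2:end);
G1 = raw(1:2:end, 2:2:end);
G2 = raw(2:2:end, 1:2:end);
B  = raw(2:2:end, 2:2:end);
superpix = cat(3, R, (G1+G2)/2, B);     % one RGB triple per superpixel, ready for the matrixing code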

I set up the camera and took 20 successive shots, with the right edge of the lighter square roughly in the center of the frame, and quickly examined them in an image viewer to make sure they looked right. To my dismay, I noticed some discolorations, as seen in the cropped excerpt below (see middle of image):

2qkldsi.jpg


I had cleaned my screen before taking the shots, so I looked at the DSLR lens and sure enough, some nice smudges.

Luckily, there were only two smudges, and I was able to crop a (smudge-free) horizontal strip (about 4200x1050 pixels worth, which represents, at a very rough estimate, a strip about 1.25 inches wide and 0.3 inches tall).

I then flattened these strips along the vertical dimension (essentially calculating, at each horizontal position, the average pixel value across all the vertical pixels in the strip), resulting in a 4200-element vector, averaged all 20 of these vectors together, and plotted. Result below:

15fszky.png


The right plot is a zoom of the top knee of the left graph (where the pixel value starts to drop). As you can see, there is a lot of high frequency energy due to the use of the raw, undemosaiced pixels. If I'd used the subsampled approach, the plot would be a single thin curve.

Anyway, this is more of a proof of concept. Ideally, there'd be a sharp transition at the boundary between the square and the background. It's also interesting that there's a shallow rise in luminance towards the right edge of the square.
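In case anyone wants to replicate this, the flatten-and-average step amounts to something like the following (a sketch; filenames and crop coordinates are placeholders, and I'm treating each frame as single-channel raw data):

% average each smudge-free strip down its columns, then across the 20 shots
nShots = 20;
profiles = zeros(nShots, 4200);
for k = 1:nShots
    img = double(imread(sprintf('shot%02d.png', k)));  % one frame (single-channel assumed)
    strip = img(1001:2050, 501:4700);                  % smudge-free strip (placeholder coords)
    profiles(k,:) = mean(strip, 1);                    % flatten: mean down each column
end
profile = mean(profiles, 1);                           % average across shots
plot(profile), xlabel('horizontal position (px)'), ylabel('mean pixel value')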
 
just had a cool thought...

if, after subsampling, the resolution of the image is still high enough, it may be possible to pick up, in a white patch, the individual phosphor stripes. The code doesn't just do luminance, but also chromaticity. If it were able to pick up the phosphor pattern, that would be a good demonstration of the method's sensitivity.

And, to get precise distance data, you could put a couple pieces of tape on the screen, measure the distance between their inner edges, capture that with the camera, and convert pixels to absolute distance units. Hell, if the above idea works, you could directly measure the phosphor pitch this way :)
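In numbers, something like this (all values made up for illustration):

% convert image pixels to physical distance, using the tape gap as a ruler
tape_gap_mm = 50;                         % measured gap between the tape's inner edges
tape_gap_px = 3100;                       % the same gap counted in image pixels
mm_per_px = tape_gap_mm / tape_gap_px;    % scale factor
pitch_mm = 14 * mm_per_px                 % e.g. if one stripe period spans 14 pixels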
 
raw "undemosaiced" pixels

not too sure where you're trying to go with this... but you should check the focus of the camera: if you want to see any pixel-sized details, your camera should probably be able to resolve individual phosphors
 
right, undemosaiced - it's like a double negative in one word :)

yea, focus is something I should experiment with too ... hmm, the way I set it up, the lens frame is literally flush with the screen. Not sure if it's possible to get a sharp image with my stock 18-55 mm lens. And if it's out of focus, that's gonna naturally blur boundaries, isn't it? damn. At least it's good for giving a sense of how uniform a patch is.
 
yea, just looked through the viewfinder, and the lack of focus definitely adds huge amounts of blur, so it's completely useless for measuring how sharp the transitions between patches or lines actually are. I could move the camera further away, but then you sacrifice spatial scale, and it's harder to standardize the setup when it's not flush against the screen.

Might start thinking about investing in a macro lens now. If I could get a sharp focus, then this thing could do wonders. Something tells me, however, that a macro lens capable of producing a sharp image when flush against the screen might be expensive. I'm guessing the actual phosphor layer is under a pretty thick layer of glass - close to an inch?
 
This approach looks cheapest - you literally reverse the stock lens and you can capture 1:1 images. The only disadvantage is that the aperture ends up wide open, so you're stuck with a very narrow depth of field. But that's perfect for this purpose, as the phosphor layer is a plane :)

I just took out my lens and looked through it the other way. I could resolve the phosphor stripes with stunning clarity, although I could only get that image with the lens a tiny bit away from the screen, so not fully flush. No matter, I can easily live with that, and since I'm more interested in relative luminance than absolute when it comes to measuring these particular things, I'm not gonna be too concerned about replicating the exact distance from the screen each and every time.
 
So for cards like the R9 290X, which don't natively support CRTs, is there any way to get a FW900 working on them?

I had a 290X for a little while, and used the HD Fury adapter with it. Problem was, the Fury had a 165 MHz pixel clock limit, so I couldn't run anything over 1600x1200 or 1080p at 60 Hz.

So, there are ways to do it, but there isn't an ideal solution yet. Waiting on the HD Fury guys to design something with a much higher pixel clock limit. The analog limit on most video cards is 400 MHz.
 

How much pixel clock does 1920x1200 @ 85 Hz need? I found a few Asus mobo manuals for H87 boards. Apparently the VGA ports on many mobos are limited to 165 MHz as well. But their HDMI ports are 300 MHz. Could those ports be converted?

Also, how much work would it take to make our own 400 MHz+ adapters? I'm thinking down the line this might be something we may want to explore.
 
the FW900 prime mode (1920x1200 @ 85 Hz) requires just over 282 MHz (see image below).

I think HDFury may be making an adaptor that would be able to support this, but I think it adds a frame of input lag (about 12 ms), which is unacceptable to many. Interesting idea to make our own. I know next to nothing about the issues involved, but it would be interesting to find out! Maybe there'd be a way to cannibalize the RAMDAC off another video card?

33becyp.png
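For reference, the pixel clock is just total pixels per frame (including blanking) times the refresh rate. Running the GTF formula for 1920x1200 @ 85 Hz gives totals of roughly 2640x1260 (my own arithmetic - the exact totals in the image may differ slightly):

% pixel clock = horizontal total x vertical total x refresh rate
h_total = 2640; v_total = 1260; refresh = 85;    % approximate GTF totals
pclk_mhz = h_total * v_total * refresh / 1e6     % ~282.7 MHz, i.e. just over 282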
 

Is such input lag unavoidable?
 