Discussion in 'Video Cards' started by erek, Apr 11, 2018.
Yeah, I don't see how they'll ever come back from this.
Seriously, though. Well done video. Gets to all the points about what went wrong with the FX series without being long winded.
They've got it going on right now, though. Interesting video.
Yeah, I wasn't aware the 128-bit floating-point shader pipes used 32 bits each for R, G, B, and alpha, or that the resulting register pressure forced a fallback to 16 bits per channel and reduced image quality.
3 cents a share during that time. Umph
Yeah, it's long been known. Those 32 bits per-channel cut the available register storage space in half. It also doubles the bandwidth required for each read/write operation (when the data enters or leaves the register), putting a strain on that (originally 128-bit) memory bus.
ATI did things right by limiting the R300 accuracy to 24 bits per-channel. It meant they had just the right balance of register space and accuracy for a first-gen DX9 part.
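The register-pressure tradeoff described above can be sketched with some simple arithmetic. Note this is just an illustration: the register-file size below is an assumed round number, not an actual NV30 or R300 specification.

```python
# Hypothetical illustration of shader register pressure on early DX9 GPUs.
# REGISTER_FILE_BITS is an assumed round number for per-pipe temporary
# register storage, not a real hardware spec.
REGISTER_FILE_BITS = 8192

def temp_registers(bits_per_channel: int, channels: int = 4) -> int:
    """How many full RGBA temporaries fit in a fixed register file."""
    return REGISTER_FILE_BITS // (bits_per_channel * channels)

for precision in (16, 24, 32):
    regs = temp_registers(precision)
    print(f"FP{precision}: {regs} RGBA temporaries "
          f"({precision * 4} bits each)")
```

With these assumed numbers, FP32 yields exactly half as many usable temporaries as FP16, while FP24 sits between the two, which is the balance the post above credits to R300.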
Now if only ATI had actually upgraded those registers to 32-bits per-channel sometime before late 2005, Nvidia would have stayed down there at the bottom.
Just to be clear, that was 3 cents Earnings Per Share, not share price.
Thanks for the info.
wow, 128 MB of memory goodness ...
totally the same, right?
I remember that debacle. Running the 9500np that unlocked to a full 9700 was fun times.