NVIDIA's Huge Misstep

Seriously, though. Well-done video. It gets to all the points about what went wrong with the FX series without being long-winded.
 
Yah, I wasn't aware the 128-bit floating-point shader pipes were 32-bit R, 32-bit G, 32-bit B, and 32-bit alpha, or that the register pressure forced a fallback to 16 bits per channel with reduced image quality.
 
Yeah, it's long been known. Those 32 bits per channel cut the available register storage space in half, and they double the bandwidth required for each read/write operation (when the data enters or leaves the register), putting a strain on that (originally 128-bit) memory bus.
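
To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The register file size is a made-up round number for illustration, not a documented NV30 figure:

# Rough sketch: how channel width eats into temp register space and
# per-access traffic. REGISTER_FILE_BYTES is an assumed, illustrative
# size, not an actual NV30 spec.
REGISTER_FILE_BYTES = 1024  # hypothetical per-pipe temporary register file

def vec4_bytes(bits_per_channel: int) -> int:
    # Bytes moved per read/write of one 4-channel (RGBA) register.
    return 4 * bits_per_channel // 8

for bits in (16, 32):
    per_reg = vec4_bytes(bits)
    slots = REGISTER_FILE_BYTES // per_reg
    print(f"FP{bits}: {per_reg} bytes per vec4 -> {slots} temp registers")

Same storage either way, but FP32 gets half the temporaries and moves twice the bytes on every access, which is exactly the squeeze that forced the FP16 fallback.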

ATI did things right by limiting R300's precision to 24 bits per channel. That gave them just the right balance of register space and accuracy for a first-gen DX9 part.
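
To put rough numbers on the accuracy side: FP16 is s10e5, R300's FP24 is s16e7, and FP32 is s23e8. The quantize helper below is a hypothetical illustration that mimics only the significand width, ignoring exponent range, denormals, and real rounding modes:

import math

def quantize(x: float, significand_bits: int) -> float:
    # Round x to a significand of the given total width (implicit
    # leading bit included). Illustrative only.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)  # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** significand_bits
    return math.ldexp(round(m * scale) / scale, e)

x = 1.0 / 3.0
for name, bits in [("FP16", 11), ("FP24", 17), ("FP32", 24)]:
    q = quantize(x, bits)  # bits = explicit mantissa + implicit bit
    print(f"{name}: relative error {abs(q - x) / x:.1e}")

FP24's rounding error lands roughly two orders of magnitude below FP16's, while a vec4 register costs 12 bytes instead of FP32's 16, which is the balance R300 struck.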

Now if only ATI had actually upgraded those registers to 32 bits per channel sometime before late 2005, Nvidia would have stayed down there at the bottom.
 