Stream processors

http://www.overclock.net/graphics-c...s-vs-stream-processing-units.html#post4258197

NVIDIA's stream processors do more work per clock, and are clocked quite a bit higher.

ATI's equivalents do less work per clock and are clocked slower, but are apparently small enough to pack onto the die in huge numbers.

They both do very similar things, and as benchmarks show, both strategies are rather comparable in final performance.

Also, stream processors usually have nothing to do with AA (except in cases of deferred shading). AA performance is limited by the number and clock of the ROPs, and to a lesser extent, on most cards, by memory bandwidth.
Essentially, they do similar things.

ATI has stream processors arranged in groups of five to make up one 5-issue shader (800 stream processors / 5 = 160 shaders). IIRC each one of those stream processors can do 1 operation per clock. Nvidia has a more straightforward approach, with 1 shader = 1 shader. However, you can't compare ATI's and Nvidia's numbers directly.
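To see why the raw numbers don't line up, here's a quick back-of-the-envelope sketch in Python. The card figures are my own ballpark assumptions (roughly an HD 4870 versus a GTX 280, check real spec sheets), and `peak_gflops` is just an illustrative helper that counts a multiply-add as two operations per ALU per clock:

```python
# Rough sketch of why raw "stream processor" counts can't be compared.
# Figures are ballpark assumptions for an HD 4870 and a GTX 280,
# counting a multiply-add as 2 FLOPs per ALU per clock.

def peak_gflops(alus, clock_ghz, flops_per_alu_per_clock=2):
    """Theoretical peak shader throughput in GFLOPS."""
    return alus * clock_ghz * flops_per_alu_per_clock

ati_sps, ati_clock_ghz = 800, 0.750   # ATI SPs run at the core clock
nv_sps, nv_clock_ghz = 240, 1.296     # Nvidia SPs run on a faster shader clock

print("ATI 5-wide shader units:", ati_sps // 5)                        # 160
print("ATI peak: %.0f GFLOPS" % peak_gflops(ati_sps, ati_clock_ghz))   # ~1200
print("NV  peak: %.0f GFLOPS" % peak_gflops(nv_sps, nv_clock_ghz))     # ~622
```

On paper ATI looks way ahead, but those 5-wide units only hit their peak when all five slots can be filled every clock, which is a big part of why actual game benchmarks land much closer together than the raw numbers suggest.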

Though in Nvidia CUDA, they also call them stream processors.
 
I wonder how fast ATI GPUs would be if you could somehow clock the stream processors up to 1600 MHz.
 
Or can you not OC the stream processors yet?

I don't know much about AMD GPUs.

No, the clock speed is global for the entire core. In fact, Nvidia has also ditched the separate clocks with Fermi - all Fermi devices have the stream processors running at twice the ROP clock. This ratio is fixed, so there is no way to alter just one of them.

But back to your question about ATI:

If someone managed to clock just the stream processors twice as fast, you would see the following:

At lower resolutions:

Up to 100% improvement in shader-limited games.
Less or no improvement if the game is texture-, geometry-, memory-, or CPU-limited.

At higher resolutions:

Up to 100% improvement in shader-limited games.
Less or no improvement if the game is ROP-, texture-, geometry-, memory-, or CPU-limited.

The point here is this: ATI and Nvidia have relatively flexible GPUs compared to a decade ago, but there are still many fixed attributes when it comes to 3D processing. These fixed attributes are designed to "optimally" handle your average game workload, such that no one portion of the GPU processing path is holding up the rest. If you suddenly double the power of the shaders (without altering anything else), you may find yourself held back by any number of things.
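To put a toy number on that (this is just an illustration, not a real performance model): treat a frame as shader work plus "everything else" (ROPs, textures, geometry, memory, CPU), and see what doubling only the shader speed actually buys you.

```python
# Toy bottleneck model (illustrative only): frame time = shader time plus
# everything else. Doubling shader speed only shrinks the shader portion,
# so the overall speedup depends on how the frame time is split.

def speedup_from_doubling_shaders(shader_fraction):
    """Speedup if the shader part of the frame runs twice as fast."""
    other_fraction = 1.0 - shader_fraction
    return 1.0 / (other_fraction + shader_fraction / 2.0)

for frac in (0.9, 0.5, 0.2):   # shader-limited, balanced, limited elsewhere
    print("shader share %.0f%% -> %.2fx faster"
          % (frac * 100, speedup_from_doubling_shaders(frac)))
# shader share 90% -> 1.82x, 50% -> 1.33x, 20% -> 1.11x
```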
 
So, core clock speed is the most important aspect when comparing cards? Or are they rated differently also.
 

They're rated differently. One card may have more processors, or one card may have a higher clock speed, and it just gets messy comparing such things. If you want to get into a deep architectural discussion like that we can, but I'm betting you just want to find the faster card, right?

The best way to compare two cards is to see how they perform in gaming benchmarks (even better, in games you actually play). You can break down video cards into numbers all you want, but in reality the benchmark results are the only numbers that matter.

Once you know the relative performance between a few Nvidia and ATI models, you can make educated guesses about how other cards will perform based on their configuration.
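For example, a very rough rule of thumb (my own sketch, not an exact method) is to scale a known benchmark result by shader count times clock when guessing at another card in the same family:

```python
# Rough extrapolation within one GPU family. Assumption: performance scales
# roughly with shader count x clock, which only holds when nothing else
# (ROPs, memory bandwidth, CPU) becomes the limit.

def estimate_fps(known_fps, known_shaders, known_clock_mhz,
                 new_shaders, new_clock_mhz):
    """Guess a sibling card's FPS from a measured card's benchmark result."""
    scale = (new_shaders * new_clock_mhz) / (known_shaders * known_clock_mhz)
    return known_fps * scale

# Hypothetical numbers: a card benchmarked at 60 FPS, and a cut-down sibling
# with 80% of the shaders at a slightly higher clock.
print("%.1f FPS (estimate)" % estimate_fps(60, 1600, 800, 1280, 850))   # 51.0
```

Treat that as a sanity check only; it only makes sense within one architecture, and real benchmark numbers always trump the estimate.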
 
Can't seem to find the 6970 review I saw a while ago, but it had some interesting numbers from a benchmark. Not one of the regular 'gaming' benchmarks, but rather one of those benchmarking programs which tries out tons of different stuff. It basically showed that ATI cards had a huge advantage for some specific 'effects' or instructions, while Nvidia cards had advantages in others. There were nearly none in which they were even. Sort of strange, considering how close the cards are performance-wise in gaming, but I guess it explains why some games run much better on Nvidia cards while others run better on ATI cards.
 