Next gen cards delayed until Q1 2012

Cayman is 40nm; perhaps the 6800/6900-series design/process will be the basis of the new mid/low-end die, with some tweaks here and there and a downclock, hence PowerColor's statement. The 7900 parts would still use 28nm.
 
So with this delay, should I sell my two GTX 570s and grab two Toxic 6850s, or one 6990? I game at 2560x1600 and am mainly looking for the VRAM upgrade. I was offered $500 for the 570s, so if I go the 6850 route I'll be spending barely anything.
 
Dammit, if the 7900 series is 40nm I'll be waiting for the 8000 series, no doubt. Trifire 6900s crushes everything at 1600p atm anyway. BF3 on Ultra is smooth as butter if I turn off the AA.
 

Heck, it had better run BF3 at max settings, since you spent close to $1000 on your GPUs alone.
 
"Delayed". I see this less as an announcement of delay, more 'correction of false rumours'.
 
Firstly, throwing tons of power at it won't work. They can put a million more cores on there and you won't see a performance increase. You can fit it with RAM that's faster than Superman on speed and you'll still see no performance increase.

Look past the shaders, past the VRAM bandwidth. Remember, the amount of VRAM these GPUs got was solely for DX11, nothing else. First off, they were forced to go for the more expensive GDDR5 on the old cards because they struggled to sort out the bus without it. They added the two tessellation shaders, whereas the older cards were doing tessellation with the vertex shaders. Yes, DX9 and DX10 are also capable of tessellation, but they use the vertex shaders for it. Cayman got two shaders added specifically for this. That's why it's only under DX11 that it starts to pull away from the older generation.

The GPU doesn't even have to go through the CPU to access system RAM; it has a DMA engine specifically for that function. That's also part of the two schedulers, and I think it's one of the problem areas with the current design. The job of these schedulers is to prioritise the data: old to new, date/time, etc. If a problem arises, the scheduler will reschedule the work, but in some cases not directly after the thread that took its place; it gets pushed to the back of the queue.
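To make the claim concrete, here's a toy model of that "pushed to the back of the queue" behaviour. This is not AMD's actual scheduler (which, as noted below, is undocumented); it's just a sketch of how a retried job ends up completing out of order:

```python
from collections import deque

def run_queue(jobs, fails_once):
    """Run jobs in order; a job that hits a problem once is appended
    to the BACK of the queue rather than retried immediately."""
    queue = deque(jobs)       # jobs in submission order
    retried = set()
    completed = []
    while queue:
        job = queue.popleft()
        if job in fails_once and job not in retried:
            retried.add(job)
            queue.append(job)  # rescheduled behind everything else
        else:
            completed.append(job)
    return completed

# "B" hits a problem, so it finishes after C and D -- out of order:
print(run_queue(["A", "B", "C", "D"], fails_once={"B"}))
# → ['A', 'C', 'D', 'B']
```

That out-of-order completion is exactly what the next paragraph argues DirectX dislikes.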

Now, I don't know if you guys are familiar with how DX wants its work submitted, but DX wants its threads running neatly in the right order, in the correct sequence. If that doesn't happen, we see it on screen and start to swear at AMD. The scheduler is actually programmable, but only by AMD, and AMD hasn't said or released much about it. So the drivers might set new arbitration rules for the scheduler to use. I'm not saying that's the cause; I'm just saying it's one of the suspects.

Then if we move on to the RAM: it uses EDC. That's fine, but when the RAM goes a bit over spec, EDC starts correcting errors like crazy, which drops memory performance by a big margin. I think that's why GDDR5 was such a fight for Nvidia, except in their case with ECC as well.

Now, Cayman got some tweaks over the older stock, but the L2 cache stayed the same as on the old cards. They added a rasteriser which theoretically is supposed to increase GPU performance to 1.7 Gtris/s, but in reality it only gets half of that. Why?
It turns out that although the AMD GPU has a rasteriser boosted to almost double the speed of the previous GPUs, the GPU can still only set up one triangle every clock cycle. This is where Nvidia pulls back their crappy RAM connection and outdoes AMD on core performance: they manage multiple triangles per clock cycle.
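The "half of that" figure checks out on the back of an envelope, assuming the 880 MHz reference clock of the HD 6970 (Cayman XT) and two rasterisers versus a one-triangle-per-clock setup limit:

```python
clock_ghz = 0.88   # assumed: HD 6970 reference clock, 880 MHz
rasterisers = 2    # Cayman's dual rasteriser front end

theoretical = rasterisers * clock_ghz  # 1.76 Gtris/s, near the quoted 1.7
setup_limit = 1 * clock_ghz            # 0.88 Gtris/s at 1 triangle/clock

print(theoretical, setup_limit)
print(setup_limit / theoretical)  # 0.5 -- exactly half of theoretical
```

So if triangle setup really is capped at one per clock, the dual rasterisers can only ever deliver half their paper throughput, which matches the complaint above.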

Another fact is that the Cayman GPU is not sensitive to pushing the RAM speed up or down. Because the scheduler prioritises and then pairs the SIMDs, the threads can run in a sort of hyperthreaded way, but it still draws one triangle every clock cycle.

Then there are still the cache hits and misses. When the data is not in the caches or anywhere on the GPU, it has to sit and wait for a gap to go fetch it from system RAM. Don't forget, DX likes its calculations in order, neatly like a book in its place.
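The cost of that waiting can be sketched with the standard average-memory-access-time formula, AMAT = hit time + miss rate x miss penalty. The cycle counts below are illustrative assumptions, not Cayman's real latencies:

```python
def amat(hit_cycles, miss_rate, miss_penalty_cycles):
    """Average memory access time in cycles."""
    return hit_cycles + miss_rate * miss_penalty_cycles

# Assumed numbers: 4-cycle cache hit, ~400 cycles to reach system RAM.
print(amat(4, 0.02, 400))  # 2% miss rate:  12.0 cycles on average
print(amat(4, 0.10, 400))  # 10% miss rate: 44.0 cycles -- misses dominate fast
```

Even a small miss rate multiplies the average access time, which is why a stall waiting on system RAM hurts so much more than the raw hit latency suggests.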

The last thing is the underflow and overflow protection on the card. When that triggers, the card will just signal a 0 or a NaN or whatever they're using. I think this is one of the main reasons AMD GPUs don't like bad coding. Really, we all know a game is buggy when it's released; it's natural that the heavily GPU-intensive games give a bit of trouble until a driver or two later. You can imagine what software will do when it gets a 0 or something else instead of the calculation it expected.
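What that "signal a 0 or NaN" does downstream is easy to demonstrate with ordinary IEEE-754 floats, which GPUs also use: an overflow becomes infinity, infinities can turn into NaN, and NaN then poisons every later calculation that touches it:

```python
import math

x = 1e308 * 10   # overflow -> inf
y = x - x        # inf - inf -> nan
print(x, y)
print(math.isnan(y * 2 + 1))  # True: NaN propagates through everything
print(y == y)                 # False: NaN isn't even equal to itself
```

Once one bad value like this enters the pipeline, everything computed from it is garbage, which is a plausible way sloppy game code ends up as visible artifacts on screen.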

Nvidia has some problems of their own. I'm not going to go into theirs, but their design around the core is a bit better than AMD's; that's why they do a little better with a piece of bad code. They still need to learn to connect that big old bus up, though, because that's where AMD has it worked out.
 