Could this be true?.. GTX 880 rumored details

It looks like nvidia is doing the 680 thing again. What I mean by that is releasing the 104-class (680) part as their high end and saving the 100/110-class (780) part for the refresh high end.

To me those specs don't look that impressive. Granted, you can't compare generation to generation CUDA core to CUDA core.
 
Can large amounts of cache really offset a small bus width? I know Intel does it with the higher-end Iris Pro 5200, and the Xbox One does it with 32MB of eSRAM.
The eDRAM is where the Iris Pro gets most of its performance advantage, yeah. From a cost perspective, I don't think there's any compellingly good reason to use extremely large caches versus having a wide bus, but more cache can pretty effectively mitigate the impact. It's just an issue of balancing the characteristics of the cache against the narrower bus.

I think GM104 being on a 256-bit bus is a "probably". How far NVIDIA is going to go to mitigate the pressure of that is really anyone's guess, but I don't think it's an intractable problem.
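To put rough numbers on the cache-versus-bus trade-off, here is a minimal back-of-envelope sketch. The 300 GB/s of request traffic and the hit rates are made-up illustrative values, not figures for any real GPU; the point is just that every request served from a large on-die cache is a request the narrower external bus never sees.

```python
# Back-of-envelope model of how a large on-die cache offsets a narrow memory bus.
# The traffic figure and hit rates below are hypothetical, purely for illustration.

def dram_bandwidth_needed(total_traffic_gbs: float, cache_hit_rate: float) -> float:
    """Only cache misses have to cross the external memory bus."""
    return total_traffic_gbs * (1.0 - cache_hit_rate)

total_traffic = 300.0  # GB/s of requests generated by the GPU cores (hypothetical)

for hit_rate in (0.0, 0.3, 0.5):
    needed = dram_bandwidth_needed(total_traffic, hit_rate)
    print(f"cache hit rate {hit_rate:.0%}: DRAM must supply ~{needed:.0f} GB/s")

# A 50% hit rate halves the DRAM traffic, which is roughly the difference
# between a 256-bit and a 512-bit bus at the same memory clock.
```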
 
The last time ATI or AMD stomped nvidia was approximately...yeah, uh, never.

Radeon 9000 series up against the GeForce 5000 series. Yes, that was quite some time ago. The 9000 series was ATI's Opteron/Athlon64 and the 5000 series was NV's Pentium 4. Basically NV messed up like Intel did with the P4. It seemed like a good idea at the time, but it didn't work out so well. Thankfully lately they've both been doing a good job of trading blows, giving us two good options for GPUs.

It sounds like NV is doing the 680 thing again. Make a new base design, launch the 256-bit bus part with a modest speed increase first, then launch the 384 (or 512?) bus part with more cores later. They'll just keep the big chip in their back pocket until they need it to beat or keep up with AMD. At the rate things are going I'll probably stick with my 680s until the 900 series comes out. I have this RPG backlog to clear up, and so far 680 SLI has handled everything I've thrown at it as long as I turn surround off.
 
The eDRAM is where the Iris Pro gets most of its performance advantage, yeah. From a cost perspective, I don't think there's any compellingly good reason to use extremely large caches versus having a wide bus, but more cache can pretty effectively mitigate the impact. It's just an issue of balancing the characteristics of the cache against the narrower bus.

I think GM104 being on a 256-bit bus is a "probably". How far NVIDIA is going to go to mitigate the pressure of that is really anyone's guess, but I don't think it's an intractable problem.

Hmmm, GM104 on a 256-bit bus. I suppose it could be; at the moment there is no compelling reason why not. If it's aimed at the mid-range market then 256-bit is fine.

Still, I hope they bring out a 384-bit bus for the GM104 in 3 and 6 GB varieties, with the GM100 coming in at 512-bit and 4/8GB. Unless of course they save the 512-bit bus for the refresh, the GM110.

Better yet, I hope things go back to the good old days where games really pushed graphics cards hard.
 
That'd be crazy to get the 880 before a 790. Then again, it seems like the 790 might be vaporware as no real news has come out since the 'leaked' specs in January.

The 690 I bought in May of 2012 has been the single best video card purchase for me. I'm excited if there is to be a 790, as my 690 would get replaced in a heartbeat knowing that the 790 would last just as long as the 690 has.
 
Or unless you are actually going to say there was nothing wrong with the 480 at launch?

Just to be clear here, I did not like the GTX 480 one bit. It was hot and loud, and I'm not a big fan of hot and loud cards.

The main thing I wanted to convey was that both camps have done all of the following at one time or another:

1) been the more expensive option
2) made a hot card
3) made a loud card

So you really can't pinpoint either nvidia or AMD as the only side that has done these things. I saw a post here conveying the sense that NV is always expensive, Fermi being "hot and loud", and then the whole thing with the GTX 680 and so on. I appreciate that AMD is really bringing the competition now, I really do. I do prefer nvidia, but I do not like their pricing, so anything AMD does to mitigate that and bring us better competition, I'm 100% for. In fact, GPU performance hasn't been this close in years. I can definitely appreciate that as a consumer.

But the main thing I wanted to convey is what I said above, in addition to the fact that NV has held the performance crown more often than not, although the ATI 9700 Pro remains one of the greatest GPUs of all time. The 9700 was a true stomping to which NV didn't have a good answer. But, for the most part, NV has held the performance crown over the years. Maybe I was a bit over the top in stating it, and certainly I'm not the only one guilty of that, but anyway: both camps have had hot, loud, and expensive cards. Period. I have not liked any hot and loud card from either camp; I did not like the reference GTX 480 and I do not like the reference 290 cards. But the aftermarket cards fix that problem, and those are good buys now that they're closer to MSRP.
 
Well, I don't even know how to begin to believe something like this.. the Titan Z is still waiting to launch, there isn't even a rumored 790, and then PLOP.. out of nowhere, a big puff of smoke.. a GTX 880 rumor?..

Share your thoughts..

First thought: I like the leaked specs of the R9 390X a lot more, but if this is true the power efficiency is amazing. I disagree with the tight bus and ROP count.. for me? a fail at ultra-high resolutions. Time will tell..

maxwell.png

specs(4).png

TechPowerUP!

Seems very disappointing, to be honest.
 
Fixed that for you!
Was about to say the same; the 9800 was just a refresh. ATI's investment in the ArtX development crew really paid off. I still have my 9700 Pro in my mother's system plugging away :D

Lest we also forget how horrible the NVIDIA 5800 Ultra debacle was..

blower2.jpg


Even though I prefer NVIDIA these days, I will always have a soft spot for the 9700 :cool:
 
Was about to say the same; the 9800 was just a refresh. ATI's investment in the ArtX development crew really paid off. I still have my 9700 Pro in my mother's system plugging away :D

Lest we also forget how horrible the NVIDIA 5800 Ultra debacle was..

blower2.jpg


Even though I prefer NVIDIA these days, I will always have a soft spot for the 9700 :cool:
The 9700/9800 was a helluva card back in the day. I think the 8800 GTX has the same sort of majesty on the NVIDIA side. I'm seriously considering mounting the thing in a display case now that it no longer works :cool:.
 
Gotta be fake. 256-bit bus? Makes no sense with the GTX 780 being 384-bit.

That's what I was thinking also. 256-bit bus? Maybe for the GTX 860 or 850, definitely not for the GTX 880. Heck, GTX 860 should be close to what the GTX 780 is.
 
Call me stupid, but isn't the GeForce GTX 580 384-bit? And what about the GTX 680? Oh yeah - 256-bit. NVIDIA's proven time and time again that bus width is expendable. :)


They didn't really give up the bandwidth, though; they just didn't change it. You can do two things to make traffic go faster: add more lanes (a wider bus) or speed up the data (higher memory clocks). They chose the easier design and higher clocks.

GDDR5 can't go much higher; the limits are being reached, and GDDR6, from the sound of both AMD and NVIDIA, is lackluster, which is why the focus is now on stacked memory.
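For what it's worth, the 580-to-680 case is easy to sanity-check, since bandwidth is just bus width times effective data rate. The clocks below are the reference memory specs as I recall them, so treat the exact figures as approximate.

```python
# Memory bandwidth = (bus width in bytes) x (effective transfer rate).
# Reference memory clocks as recalled; exact figures are approximate.

def bandwidth_gbs(bus_width_bits: int, effective_rate_gtps: float) -> float:
    """GB/s = (bits / 8) * billions of transfers per second."""
    return (bus_width_bits / 8) * effective_rate_gtps

print(f"GTX 580 (384-bit @ ~4.0 GT/s): {bandwidth_gbs(384, 4.008):.1f} GB/s")  # ~192.4
print(f"GTX 680 (256-bit @ ~6.0 GT/s): {bandwidth_gbs(256, 6.008):.1f} GB/s")  # ~192.3

# Near-identical bandwidth: the 680 trades a third of the bus width
# for roughly 50% higher memory clocks.
```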
 
They didn't really give up the bandwidth, though; they just didn't change it. You can do two things to make traffic go faster: add more lanes (a wider bus) or speed up the data (higher memory clocks). They chose the easier design and higher clocks.

GDDR5 can't go much higher; the limits are being reached, and GDDR6, from the sound of both AMD and NVIDIA, is lackluster, which is why the focus is now on stacked memory.

L2 cache also mitigates memory bandwidth pressure. Besides which, as bus width goes up, memory clock speeds go down. If you compare the memory bandwidth of the 780 Ti and the 290X, the 780 Ti actually has more: the 290X has 320 GB/s, while the 780 Ti has 336 GB/s if I'm not mistaken.

This underscores the fact that you can't simply look at an arbitrary bus width number and draw sweeping conclusions from it. Number one, L2 cache mitigates memory bandwidth pressure to a great extent; this is precisely what happened with the 750 Ti. The 750 Ti is a 60W TDP part, and there is no way a part should perform that well at only 60W, yet the L2 cache mitigates the 128-bit bus to a great extent. Number two, memory clock speeds affect memory bandwidth (and you did mention this). Finally, as I said, bus width is an arbitrary number which isn't always indicative of final performance. If it were, the ATI 2900 XT would not have performed so poorly despite its 512-bit memory bus.
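Running the same bus-width-times-data-rate arithmetic on those two cards, using their reference memory clocks of 5 GT/s and 7 GT/s effective (so the figures are approximate):

```python
# Bus width x effective data rate for the two cards discussed above,
# using their reference memory clocks.

def bandwidth_gbs(bus_width_bits: int, effective_rate_gtps: float) -> float:
    return (bus_width_bits / 8) * effective_rate_gtps

print(f"R9 290X (512-bit @ 5 GT/s): {bandwidth_gbs(512, 5.0):.0f} GB/s")  # 320
print(f"780 Ti  (384-bit @ 7 GT/s): {bandwidth_gbs(384, 7.0):.0f} GB/s")  # 336

# The 780 Ti's narrower bus is more than offset by its higher memory clock.
```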

All in all, though, I feel the point is moot. The slides here were verified as fake, and the numbers just really don't add up anyway.
 
L2 cache also mitigates memory bandwidth pressure. Besides which, as bus width goes up, memory clock speeds go down. If you compare the memory bandwidth of the 780 Ti and the 290X, the 780 Ti actually has more: the 290X has 320 GB/s, while the 780 Ti has 336 GB/s if I'm not mistaken.

That isn't necessarily true. There are trade-offs to be made when designing the memory interface/controllers in regard to performance, power and area density.
 
Lest we also forget how horrible the NVIDIA 5800 Ultra debacle was..

blower2.jpg

Not that the FX 5800 was that bad of a card... the cooler design just REALLY sucked.

The FX 5800 only draws 189 watts under full load. That's not difficult to keep cool and quiet with modern cooler designs.
 
Not that the FX 5800 was that bad of a card... the cooler design just REALLY sucked.

The FX 5800 only draws 189 watts under full load. That's not difficult to keep cool and quiet with modern cooler designs.
In all fairness, I assumed he was referencing how it didn't have full DX9 support or the kind of performance its cooler design would have suggested.
 
Not that the FX 5800 was that bad of a card... the cooler design just REALLY sucked.

The FX 5800 only draws 189 watts under full load. That's not difficult to keep cool and quiet with modern cooler designs.

??? Pretty sure most cards back then were drawing <100W.

Edit: 75W seemed to be the ultra high end at the time; the FX 5800 Ultra had a 75W TDP.
50-60W was considered high end.

Edit 2: Yeah, you were looking at the TDP for the Quadro FX 5800.
 
Not that the FX 5800 was that bad of a card... the cooler design just REALLY sucked.

The FX 5800 only draws 189 watts under full load. That's not difficult to keep cool and quiet with modern cooler designs.
You're right in that it wouldn't have been so bad if it weren't for the cooler, but it was an overglorified DX8 part, just a desperate attempt at keeping up. Thank goodness NVIDIA survived, because I've been hating AMD's drivers and don't care for bitcoins these days.
 
That isn't necessarily true. There are trade-offs to be made when designing the memory interface/controllers in regard to performance, power and area density.

It's somewhat true; a wider bus means more complicated PCB wiring, which results in more crosstalk/interference, so a reduced frequency is often a must. The GPU's memory controller is also a factor in that.
 
You're right in that it wouldn't have been so bad if it weren't for the cooler, but it was an overglorified DX8 part, just a desperate attempt at keeping up. Thank goodness NVIDIA survived, because I've been hating AMD's drivers and don't care for bitcoins these days.

Except it was late to the party, slower than a 9700 Pro, and had fewer features.

Remember the "crouching lion, NO TIGER" cheats in 3DM2001?
 
Except it was late to the party, slower than a 9700 Pro, and had fewer features.

Remember the "crouching lion, NO TIGER" cheats in 3DM2001?

Oh yeah, that's what I meant by NVIDIA being desperate, and it kept on through defaulting to quality LOD whereas ATI defaulted to HQ LOD, but NVIDIA hit back hard with the 6800 GT. The GPU competition has always been more interesting than CPUs..
 
In all fairness, I assumed he was referencing how it didn't have full DX9 support or the kind of performance its cooler design would have suggested.

Pretty sure it did have full DX9 support. Only thing was that it had no intermediary between half and full precision. ATI's R300 met DX9's minimum spec of 24-bit precision, which the FX series couldn't do. When FX needed to do 24-bit precision, the card went into 32-bit mode, which killed performance. So technically - it's a DX9 part. But in practice, it couldn't really handle it.
 
It's somewhat true; a wider bus means more complicated PCB wiring, which results in more crosstalk/interference, so a reduced frequency is often a must. The GPU's memory controller is also a factor in that.
Again, that isn't completely accurate. While what you are saying is true, it doesn't make it impossible; more traces means more interference, which can be solved with more PCB layers.
The fact of the matter is, the power-efficiency and area-density cost of making memory controllers complex enough to drive the memory ICs at what are technically overclocked frequencies for GDDR5, while supporting that with a more complex (i.e. expensive) PCB, just isn't worthwhile.
Transferring data is expensive, and the farther it has to travel the higher that cost (not sure if it grows exponentially, but it certainly isn't linear). That is why you see GM107 with a massive increase in cache, and why you see all the R&D going into HBM/HMC and shortening those interconnects.

Great, so now NVIDIA is charging $500 for a midrange card?

They were able to pass GK104 off as a $499 product because GK100 had to be redesigned and AMD was content to price Tahiti at $549.
 