Could this be true? GTX 880 rumored details

Araxie

Supreme [H]ardness
Joined
Feb 11, 2013
Messages
6,463
Well, I don't even know how to start believing something like this. The Titan Z is still waiting for launch, there isn't even a rumored 790, and then, PLOP, out of the big smoke pot, a GTX 880 rumor?

share your thoughts..

TechPowerUP! said:
20 nm GM204 silicon
7.9 billion transistors
3,200 CUDA cores
200 TMUs
32 ROPs
5.7 TFLOP/s single-precision floating-point throughput
256-bit wide GDDR5 memory interface
4 GB standard memory amount
238 GB/s memory bandwidth
Clock speeds of 900 MHz core, 950 MHz GPU Boost, 7.40 GHz memory
230W board power

First thought: I like the leaked specs of the R9 390X way more. However, if this is true, the power efficiency is amazing. I disagree with the tight bus and ROP count, though. For me? A fail at ultra-high resolutions. Time will tell.
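For what it's worth, the rumored numbers do at least hang together internally. A quick back-of-the-envelope check (my own arithmetic, not from the article):

```python
# FP32 throughput: CUDA cores x 2 ops/clock (fused multiply-add) x clock
cores = 3200
base_clock_ghz = 0.900
tflops = cores * 2 * base_clock_ghz / 1000   # 5.76, matching the "5.7 TFLOP/s" figure

# Memory bandwidth: bus width in bytes x effective data rate per pin
bus_bytes = 256 / 8
data_rate_gbps = 7.40
bandwidth_gbs = bus_bytes * data_rate_gbps   # 236.8, close to the quoted "238 GB/s"

print(f"{tflops:.2f} TFLOP/s, {bandwidth_gbs:.1f} GB/s")
```

So whoever wrote these up did their arithmetic, which of course says nothing about whether they're real.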

TechPowerUP!
 
Power consumption seems a bit high; I would expect 180-200 W.
7.9B transistors on 20nm would mean a ~300-330 mm² die.

Edit- Everything seems a bit off with these but not outrageously so.
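That ~300-330 mm² figure falls out of simple density scaling. A sketch using GK110 as the 28nm reference point; the 1.9x density gain for 20nm is my assumed value, not a TSMC number:

```python
# GK110 (28 nm): ~7.1B transistors in ~561 mm^2
gk110_density = 7100 / 561              # ~12.7 MTransistors/mm^2
density_gain_20nm = 1.9                 # assumed 28nm -> 20nm density scaling
density_20nm = gk110_density * density_gain_20nm

die_area_mm2 = 7900 / density_20nm      # 7.9B transistors -> roughly 330 mm^2
print(f"{die_area_mm2:.0f} mm^2")
```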
 
If this is true, it's like the GTX 680 - using the smaller die for the high end part until the "big die" product is ready. GM204, small ROP count, 256-bit bus - definitely not the full Maxwell product. And I wouldn't be surprised by this, because 20nm is supposedly not very mature yet. Better to test the waters with a smaller product.
 
There are four issues I see:

A) I don't expect it to be known internally as "GM204", but as GM104.
B) I expect lower power consumption than 230W from the GM104 part.
C) 5.7 TFLOPS would be low-balling it for NVIDIA on this part. I expect them to aim somewhat higher.
D) I expect NVIDIA to be more aggressive with their boost clocks.
 
5.7 TFLOPS? So it's basically a Titan Black at its boost clock, lol. Although I guess it's believable if NVIDIA wants to be first to 20nm, but that just means when the 390X comes out a few months later it's gonna stomp NVIDIA.
 
The last time ATI or AMD stomped nvidia was approximately... yeah, uh, never. Competitive? Sure. Stomped? Wishful thinking, especially considering AMD's significant disadvantage in R&D. In fact, I believe there was a DigiTimes article citing which companies have secured 20nm wafers: NV (along with Apple, Qualcomm, and others) has secured 20nm wafers while AMD has not. I'll try to check that out, but I'm not even sure AMD has bought 20nm wafers as of yet. I'm pretty sure that was on DigiTimes. Nvidia has almost always had the performance crown, with a few rare exceptions. The ATI 9700 Pro days maybe being one? The 5870 had a 4-month lead, but then the 480 beat it in performance by 15%.

Now, certainly we can't discount AMD, and I'm sure they won't take things lying down; they have a competitive product currently and a drive to compete. But I think both of the purported specs are fake, for the 390X and the 880. The numbers just don't add up on either side, really.

Anyway, these were confirmed as fake. I'll hit a link up later, but, yeah, the numbers just don't add up. I think NV stated they were aiming for 7-8 TFLOPS on their high-end next-gen parts, and this is supposedly less than 6. I'd say both the supposed 390X rumors and the 880 rumors are fake. Makes for great clickbait for the sites hosting them, though, that's for sure.
 
The last time ATI or AMD stomped nvidia was approximately... yeah, uh, never. Competitive? Sure. Stomped? Wishful thinking, especially considering AMD's significant disadvantage in R&D. In fact, I believe there was a DigiTimes article citing which companies have secured 20nm wafers: NV (along with Apple, Qualcomm, and others) has secured 20nm wafers while AMD has not. I'll try to check that out, but I'm not even sure AMD has bought 20nm wafers as of yet. I'm pretty sure that was on DigiTimes. Nvidia has almost always had the performance crown, with a few rare exceptions. The ATI 9700 Pro days maybe being one? The 5870 had a 4-month lead, but then the 480 beat it in performance by 15%.

You certainly have a short memory. Recently there has been no stomping; the last true stomping was G80 vs. R600, but ATI had a couple of good stomps back in the day.
Getting 20nm wafer allocation info from DigiTimes and taking it as gospel is pretty amusing, especially since Nvidia has no long-term contract deals with TSMC, and TSMC can re-allocate wafers promised to Nvidia at its whim.

Edit- Nice edit.
 
Yes, because dropping 50-100 GB/s in memory bandwidth would work out so well (we've seen the bus-width limitations in the 750/750 Ti). That L2 cache boost Maxwell is promising will only take you so far; it won't make up for raw bandwidth. That's pretty huge and kind of makes the rumor laughable on that point alone.

Playing the SP game, we've seen how adding a few hundred cores doesn't really cause any life-changing results (780 > Titan > 780 Ti). If they're going to continue playing that game, we'll need to start seeing gains of 1,000+ cores to see much of an impact now. Add to that a few hundred extra cores that (although more efficient than Kepler's) are only 90% as powerful per core, and those stats are again a joke.

Manufacture on 20nm (worth ~30% power savings or a performance boost on its own), add Maxwell's architecture with another ~30% savings in power consumption, and now the TDP becomes a joke. Plus, Maxwell's architectural changes alone would allow for more cores. Back to the core count looking like a joke.
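Treating the two savings as multiplicative (the optimistic read), the compounding looks like this; the 250 W baseline is a hypothetical Kepler-class board power I picked for illustration, not a number from the rumor:

```python
process_factor = 0.70                     # ~30% savings from the 20nm node
arch_factor = 0.70                        # ~30% savings from the Maxwell architecture
combined = process_factor * arch_factor   # 0.49 -> roughly half the power for the same work

baseline_tdp_w = 250                      # hypothetical 28nm Kepler-class board power
print(f"{baseline_tdp_w * combined:.0f} W at equal performance")
```

By that math, a 230 W board power only makes sense if NVIDIA spent the whole budget on extra performance rather than lower draw.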

I hope this is true so I can laugh my ass off into the 900 Series.
 
That L2 cache boost Maxwell is promising will only take you so far and not make up for raw bandwidth.
That is exactly the purpose of cache: to make up for bandwidth (though, more typically, the culprit is latency). Why else do you think half of a single Haswell core is cache?
 
That is exactly the purpose of cache: to make up for bandwidth (though, more typically, the culprit is latency). Why else do you think half of a single Haswell core is cache?

It's about locality with regard to compute.
It certainly has some performance advantages in gaming situations, but that's highly dependent on other variables.
 
Yes, because dropping 50-100 GB/s in memory bandwidth would work out so well (we've seen the bus-width limitations in the 750/750 Ti). That L2 cache boost Maxwell is promising will only take you so far; it won't make up for raw bandwidth. That's pretty huge and kind of makes the rumor laughable on that point alone.

Playing the SP game, we've seen how adding a few hundred cores doesn't really cause any life-changing results (780 > Titan > 780 Ti). If they're going to continue playing that game, we'll need to start seeing gains of 1,000+ cores to see much of an impact now. Add to that a few hundred extra cores that (although more efficient than Kepler's) are only 90% as powerful per core, and those stats are again a joke.

Manufacture on 20nm (worth ~30% power savings or a performance boost on its own), add Maxwell's architecture with another ~30% savings in power consumption, and now the TDP becomes a joke. Plus, Maxwell's architectural changes alone would allow for more cores. Back to the core count looking like a joke.

I hope this is true so I can laugh my ass off into the 900 Series.
Already confirmed as fake. There are so many issues in those specs that you would have to be some sort of moron or AMD fanboy to believe they were true. Especially with the push for 4K, there is no way this would be NVIDIA's top-of-the-line card in the next generation. A 256-bit memory bus? Only 4 GB of VRAM? Only 32 ROPs? 200 TMUs? A 900 MHz base clock? Etc. It is to laugh.
 
4GB would actually be ample in the vast majority of situations. You certainly don't have to be a moron to believe that the reference GM104 card could very well be 4GB.
 
Already confirmed as fake. There are so many issues in those specs that you would have to be some sort of moron or AMD fanboy to believe they were true. Especially with the push for 4K, there is no way this would be NVIDIA's top-of-the-line card in the next generation. A 256-bit memory bus? Only 4 GB of VRAM? Only 32 ROPs? 200 TMUs? A 900 MHz base clock? Etc. It is to laugh.



I can easily fake specs too: just triple everything on the 750 Ti.
 
I can play too


GTX 880 Ti GPU Engine Specs:
1920 CUDA Cores
1100 MHz Base Clock
1300 MHz Boost Clock

GTX 880 Ti Memory Specs:
7.5 Gbps Memory Clock
6 GB Standard Memory Config
GDDR5 Memory Interface
384-bit Memory Interface Width
360 GB/sec Memory Bandwidth

GTX 880 Ti Support:
OpenGL 4.4
PCI Express 3.0 Bus Support
Certified for Windows 7, Windows 8, Windows Vista or Windows XP: Yes
Supported Technologies: NVIDIA GameStream, GPU Boost 2.0, 3D Vision, CUDA, DirectX 11, PhysX, TXAA, Adaptive VSync, FXAA, NVIDIA Surround, G-SYNC-ready
3D Vision Ready: Yes
Microsoft DirectX 12 API
Blu-ray 3D: Yes
3D Gaming: Yes
3D Vision Live (Photos and Videos): Yes

Display Support:
Multi Monitor: 4 displays
Maximum Digital Resolution: 4096x2160
Maximum VGA Resolution: 2048x1536
HDCP: Yes
HDMI: Yes
Standard Display Connectors: One Dual Link DVI-I, One Dual Link DVI-D, One mini-HDMI
Audio Input for HDMI: Internal
Width: Double-slot

Thermal and Power Specs:
Maximum GPU Temperature: 95 C
Graphics Card Power: 60 W
Minimum System Power Requirement: 500 W
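Funnily enough, the one line in these invented specs that does check out is the bandwidth math:

```python
# Bandwidth = bus width in bytes x effective GDDR5 data rate per pin
bus_bits = 384
data_rate_gbps = 7.5
bandwidth_gbs = bus_bits / 8 * data_rate_gbps
print(bandwidth_gbs)  # 360.0, exactly as listed
```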
 
The last time ATI or AMD stomped nvidia was approximately... yeah, uh, never. Competitive? Sure. Stomped? Wishful thinking, especially considering AMD's significant disadvantage in R&D. In fact, I believe there was a DigiTimes article citing which companies have secured 20nm wafers: NV (along with Apple, Qualcomm, and others) has secured 20nm wafers while AMD has not. I'll try to check that out, but I'm not even sure AMD has bought 20nm wafers as of yet. I'm pretty sure that was on DigiTimes. Nvidia has almost always had the performance crown, with a few rare exceptions. The ATI 9700 Pro days maybe being one? The 5870 had a 4-month lead, but then the 480 beat it in performance by 15%.

Now, certainly we can't discount AMD, and I'm sure they won't take things lying down; they have a competitive product currently and a drive to compete. But I think both of the purported specs are fake, for the 390X and the 880. The numbers just don't add up on either side, really.

Anyway, these were confirmed as fake. I'll hit a link up later, but, yeah, the numbers just don't add up. I think NV stated they were aiming for 7-8 TFLOPS on their high-end next-gen parts, and this is supposedly less than 6. I'd say both the supposed 390X rumors and the 880 rumors are fake. Makes for great clickbait for the sites hosting them, though, that's for sure.


ATI Radeon 9800 Pro...never forget.
 
The last time ATI or AMD stomped nvidia was approximately... yeah, uh, never. Competitive? Sure. Stomped? Wishful thinking, especially considering AMD's significant disadvantage in R&D.
Let's crack open our history books, all the way back to 2009.

First 40nm cards, guess who won:
http://www.techpowerup.com/reviews/ATI/Radeon_HD_5870/30.html
GTX 480 came out 6 months later.

First 28nm cards, guess who won again:
http://www.techpowerup.com/reviews/AMD/HD_7970/28.html
GTX 680 came out 4 months later.

If you were a betting man, you'd be putting your money on AMD right now.
 
Let's crack open our history books, all the way back to 2009.

First 40nm cards, guess who won:
http://www.techpowerup.com/reviews/ATI/Radeon_HD_5870/30.html
GTX 480 came out 6 months later.

First 28nm cards, guess who won again:
http://www.techpowerup.com/reviews/AMD/HD_7970/28.html
GTX 680 came out 4 months later.

If you were a betting man, you'd be putting your money on AMD right now.

With Nvidia, it's all about the refresh. The first gen GPU always has some kind of issue or shortcoming.
 
Specs on a piece of paper don't mean crap for actual performance, especially when it comes to a brand-new architecture.
 
I'm pretty sure they wouldn't step the bit interface down from the 700 series. That's clue #1 it's fake right there.
 
I'm pretty sure they wouldn't step the bit interface down from the 700 series. That's clue #1 it's fake right there.

Yeah, it doesn't make sense to me either to drop from the 384-bit bus of the higher-end 780 series down to a 256-bit bus for the 880 series.

We've seen reviews where even the 128-bit bus of the Maxwell 750/750 Ti is limiting in certain scenarios. Unless Nvidia knows how to squeeze more bandwidth out of a 256-bit bus without compromising overall performance, this leaked news is probably fake. At 4K or other high resolutions with maxed settings, that 256-bit bus is going to be a problem unless Nvidia did some magic to this card we don't know about.

And I highly doubt NVLink is going to help here, since it won't be released until 2016 with Pascal.
 
It's the Nvidia forum; are we allowed to talk bad about the 290 and 290X?
I tried that on the AMD side, and apparently, since the cards are about $50 less than the competition, that absolves AMD of releasing gigantic turds.

It took them ~6 months to release Fermi with a red sticker, which is actually slower than the competition (at least Fermi was faster).
 
GTX 480:
Hot, expensive, 6 months behind the competition.


GTX 680:
6 months behind the competition.

Better check your facts there. GTX 680 was released in March 2012. 7970 was soft launched in Dec 2011, but didn't hit shelves until January 2012. The hard launch was in January, I remember this because I bought 2x 7970s on launch day as soon as they hit newegg. That turned out to be a purchase I regretted many months later after trying AMD's version of surround and dealing with AMD software hell (it only took two years to fix frame pacing), but nonetheless, 2 months is not 6 months.

GTX 680 was also faster than the launch 7970. Go look at the GTX 680 launch reviews. It was the single GPU crown winner following its launch, but I imagine AMD fans will yell something about overclocking or some other nonsense.

The "hot and expensive" comment is quite comical as well, considering the 290X reference set new standards for how hot and loud a GPU can get, and the 7970 was more expensive than the GTX 680. The 7970 was $550 at launch, while the GTX 680 launched at $499 and was faster. But everyone forgets when AMD overcharges. Fact of the matter is, AMD charges what the market will bear; generally speaking, AMD is considered a budget brand, and for good reason. Perhaps when they catch up to NV on all fronts (software, out-of-box experience, etc.) they can charge similarly. They certainly tried and failed with the 7970; the market forced them to simply lower the price after the 680 launched.

Now, I think the 290 and 290X are great GPUs now that the "hot and loud" aspect is fixed. Custom 290 and 290X cards are great, and they're priced very well these days on top of that; you can't say anything bad about that. I also firmly believe AMD is a fiercer competitor these days than in prior years; they have a compelling product in their 290 cards. However, both sides can play this little game of throwing the terms "late", "hot", "loud", and "expensive" around. Fact of the matter is, both sides have done one or another of these at various points in time. It's just comical to bring it up given the recent 290X situation and the price situation of the 7970 at launch. So to bring that up? Yeah. Okay. Like that argument can't go the other way around. I remember the 7970 collecting dust on shelves for some 8 months after the GTX 680's launch while the 680 was sold out for months. Why? I guess the 7970 was too "expensive", with customers unwilling to pay a premium for the AMD brand for whatever reason. But that situation fixed itself when AMD charged what the market would bear: in other words, they cut it to a lower price than the GTX 680, and then followed up with their overclocked GHz Edition 7970.
 
Yeah, it doesn't make sense to me either to drop from the 384-bit bus of the higher-end 780 series down to a 256-bit bus for the 880 series.

We've seen reviews where even the 128-bit bus of the Maxwell 750/750 Ti is limiting in certain scenarios. Unless Nvidia knows how to squeeze more bandwidth out of a 256-bit bus without compromising overall performance, this leaked news is probably fake. At 4K or other high resolutions with maxed settings, that 256-bit bus is going to be a problem unless Nvidia did some magic to this card we don't know about.

And I highly doubt NVLink is going to help here, since it won't be released until 2016 with Pascal.

How do you propose they put a 384-bit interface on a ~300-330 mm² die?
They could do what AMD did with Hawaii and trade memory speed for a more area-efficient memory controller.

Not much point in speculating based on these rumors, though; both the Maxwell and Pirate Islands info from this site is obviously very poor speculation.
 
Yeah, it doesn't make sense to me either to drop from the 384-bit bus of the higher-end 780 series down to a 256-bit bus for the 880 series.
I generally agree, but truth told, we don't know what angle NVIDIA is going to play this round. They can — and may — mitigate the narrower bus by slathering cache on the die. Intel got around memory bandwidth and latency limitations on the Iris Pro by shoving in 128MB of eDRAM as an LLC, and NVIDIA may end up doing something similar on high-end Maxwell parts. It ends up being a pretty large chunk of the die, but it was a successful-enough strategy for Intel.

Then again, GM107 is already kind of a large chip for what it does, so NVIDIA may be facing a lot of inflexibility in trying to scale that up.
 
GTX 680 was also faster than the launch 7970. Go look at the GTX 680 launch reviews. It was the single GPU crown winner following its launch, but I imagine AMD fans will yell something about overclocking or some other nonsense.

Kinda funny: every time an AMD card beat an Nvidia card, you've been in the AMD forum boasting that Nvidia cards overclock. Look in a mirror; all the whining about what AMD fanboys do, you've done yourself for the past year. What a joke.

Stop being pedantic children.

The 680 was released months later and introduced dynamic clocking. That didn't stop the 7970 from overclocking and matching its gameplay by [H] review standards. If you can't see that, then your problem is rooted deeper.

Play games, stop measuring sticks.
 
Not the "dynamic overclocking" shit again. I thought we dispelled this notion years ago?
 
Who's behind whom is kind of arbitrary. It's a matter of perspective and of what you consider needs to happen to earn the title "ahead" or "behind". With this cat-and-mouse game, I don't see how you could use either of those two words in this market whatsoever. It's simply called one-upping each other.

However, AMD does deserve to be shit on lightly for delaying their refresh for an ungodly long time. Nearly 2 years from the 7000 series launch? What were they thinking? Oh, right: they decided milking the Never Settle bundle was making them more cash than they could dream of, and that bringing out a new generation would cut into those profits. Like I said after that silly announcement back then, it would come back and bite them in the ass. Now what happens? They have to respond to Nvidia, who is too busy milking the market. Not only that, they can't even produce enough cards to keep up with "unforeseen" demand, which has been ripping through the markets for the past 4 months, hyper-inflating their "competitive" pricing structure.

So yeah, AMD deserves a little bit more shit. As much as I hate them [Nvidia] for doing it, they have every right to milk the market just as Intel does with AMD's absence. When your opponent screws up you don't offer a hand to help them up, you make them pay for it.
 
Who's behind whom is kind of arbitrary. It's a matter of perspective and of what you consider needs to happen to earn the title "ahead" or "behind". With this cat-and-mouse game, I don't see how you could use either of those two words in this market whatsoever. It's simply called one-upping each other.

However, AMD does deserve to be shit on lightly for delaying their refresh for an ungodly long time. Nearly 2 years from the 7000 series launch? What were they thinking? Oh, right: they decided milking the Never Settle bundle was making them more cash than they could dream of, and that bringing out a new generation would cut into those profits. Like I said after that silly announcement back then, it would come back and bite them in the ass. Now what happens? They have to respond to Nvidia, who is too busy milking the market. Not only that, they can't even produce enough cards to keep up with "unforeseen" demand, which has been ripping through the markets for the past 4 months, hyper-inflating their "competitive" pricing structure.

So yeah, AMD deserves a little bit more shit. As much as I hate them [Nvidia] for doing it, they have every right to milk the market just as Intel does with AMD's absence. When your opponent screws up you don't offer a hand to help them up, you make them pay for it.
Nvidia milks the 600 series for 2 years and... what? Nvidia rebrands their 680 as a 770 and that's cool, but heaven forbid we call the 7970 a 280X.
AMD released the 290X 5 months after the GTX 780, which is the same thing Nvidia has done before (cat and mouse, like you said).

You're being a hypocrite.
AMD deserves shit for releasing shit cards, their rebranding and release schedule notwithstanding.
 
How do you propose they put a 384-bit interface on a ~300-330 mm² die?
They could do what AMD did with Hawaii and trade memory speed for a more area-efficient memory controller.

Not much point in speculating based on these rumors, though; both the Maxwell and Pirate Islands info from this site is obviously very poor speculation.
You're right, can't speculate much over these rumors.
I generally agree, but truth told, we don't know what angle NVIDIA is going to play this round. They can — and may — mitigate the narrower bus by slathering cache on the die. Intel got around memory bandwidth and latency limitations on the Iris Pro by shoving in 128MB of eDRAM as an LLC, and NVIDIA may end up doing something similar on high-end Maxwell parts. It ends up being a pretty large chunk of the die, but it was a successful-enough strategy for Intel.

Then again, GM107 is already kind of a large chip for what it does, so NVIDIA may be facing a lot of inflexibility in trying to scale that up.
Can large amounts of cache really offset a small bus width?

I know Intel does it with the higher-end Iris Pro 5200, and the Xbox One does it with 32 MB of ESRAM. Does it really help, though, especially when we're talking about gaming at resolutions higher than 1080p with maxed settings?
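As a toy model only (ignoring latency, write traffic, and what hit rates are actually achievable at high resolutions, all of which matter): if a fraction h of memory requests hit on-die cache, DRAM only sees the misses, so the bus behaves as if it were 1/(1-h) times wider. The 40% hit rate below is an invented illustration, not a measured number:

```python
def effective_bandwidth(dram_gbs: float, hit_rate: float) -> float:
    # Cache absorbs hit_rate of the traffic; only misses touch DRAM.
    return dram_gbs / (1.0 - hit_rate)

# A 256-bit bus at ~238 GB/s with a 40% cache hit rate:
print(f"{effective_bandwidth(238, 0.40):.0f} GB/s apparent")  # ~397 GB/s
```

The catch is that hit rates tend to fall as resolution and working set grow, which is exactly the 4K case being argued about here.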
 
Nvidia milks the 600 series for 2 years and... what? Nvidia rebrands their 680 as a 770 and that's cool, but heaven forbid we call the 7970 a 280X.
AMD released the 290X 5 months after the GTX 780, which is the same thing Nvidia has done before (cat and mouse, like you said).

That's the point. Nvidia technically "released" two generations in the span of one from AMD. Had AMD released the R-200 series first and Nvidia followed with the 700 series, the rebranding might not have been so hard to swallow. Competition would also have prevented stupid pricing on Nvidia's side, like launching the GTX 780 at $650 only to drop it by $150 months later and then release a GTX 780 Ti at another joke price of $650+. Not only that, but it would have allowed AMD's cards to saturate the market before the mining craze, letting them capitalize even further instead of screwing everyone.

None of the rebranding Nvidia has done is cool, and it could have been prevented had AMD released its refresh before Nvidia's. AMD making a PUBLIC statement that they felt no need to release a refresh, a year after the 7000 series came out, was a mistake.

You're being a hypocrite.
AMD deserves shit for releasing shit cards, their rebranding and release schedule notwithstanding.


How so? Unless what I'm saying is lost in translation, you're making my point. The GPU market has had a hidden handshake of roughly 12-month release cycles for a long time now; prior to that it was about 6 months in the "good ole days". AMD broke that cycle going from the 7000 series to the R-200. And no, I don't want to hear this GHz Edition bullshit from people either; that wasn't a refresh. The R-200 was, and only one new card came out in that span, similar to Nvidia's Titan/780 for the 700 series, which pushed everything down. A lot of this pricing war could at the very least have been dampened. Then we could all just go back to bitching about whose dick is better when it comes to Red vs. Green. Pricing is now a factor going forward. Both sides have added to fucking up the structure, and they deserve to be called out for it.

Nothing about AMD's cards is shit other than their lack of R&D into noise and cooling, something Nvidia looked into after the 400 series debacle. If they had that shit in order, you'd have no grounds for that statement whatsoever.
 
You are clearly an AMD fanboy, and you have nothing to add to our discussion other than flaunting the red flag.

You asked what was wrong with the GTX 480 and the GTX 680. And he is right about the 480: it was released 7 months after the 5870 and was hot, loud, and used a ton of power.

The 680, well, it wasn't really that late, but most people were expecting it to be a lot more powerful.
 
Better check your facts there. GTX 680 was released in March 2012. 7970 was soft launched in Dec 2011, but didn't hit shelves until January 2012. The hard launch was in January, I remember this because I bought 2x 7970s on launch day as soon as they hit newegg. That turned out to be a purchase I regretted many months later after trying AMD's version of surround and dealing with AMD software hell (it only took two years to fix frame pacing), but nonetheless, 2 months is not 6 months.

GTX 680 was also faster than the launch 7970. Go look at the GTX 680 launch reviews. It was the single GPU crown winner following its launch, but I imagine AMD fans will yell something about overclocking or some other nonsense.

The "hot and expensive" comment is quite comical as well, considering the 290X reference set new standards for how hot and loud a GPU can get, and the 7970 was more expensive than the GTX 680. The 7970 was $550 at launch, while the GTX 680 launched at $499 and was faster. But everyone forgets when AMD overcharges. Fact of the matter is, AMD charges what the market will bear; generally speaking, AMD is considered a budget brand, and for good reason. Perhaps when they catch up to NV on all fronts (software, out-of-box experience, etc.) they can charge similarly. They certainly tried and failed with the 7970; the market forced them to simply lower the price after the 680 launched.

Now, I think the 290 and 290X are great GPUs now that the "hot and loud" aspect is fixed. Custom 290 and 290X cards are great, and they're priced very well these days on top of that; you can't say anything bad about that. I also firmly believe AMD is a fiercer competitor these days than in prior years; they have a compelling product in their 290 cards. However, both sides can play this little game of throwing the terms "late", "hot", "loud", and "expensive" around. Fact of the matter is, both sides have done one or another of these at various points in time. It's just comical to bring it up given the recent 290X situation and the price situation of the 7970 at launch. So to bring that up? Yeah. Okay. Like that argument can't go the other way around. I remember the 7970 collecting dust on shelves for some 8 months after the GTX 680's launch while the 680 was sold out for months. Why? I guess the 7970 was too "expensive", with customers unwilling to pay a premium for the AMD brand for whatever reason. But that situation fixed itself when AMD charged what the market would bear: in other words, they cut it to a lower price than the GTX 680, and then followed up with their overclocked GHz Edition 7970.

Another off-topic, anti-AMD rant by Xoleras. The question was what was wrong with the 480 and the 680 on release. NOTHING to do with AMD at all, NOTHING. He answered, and correctly about the 480, but was way off with the 680.

So why bring your silly AMD ranting into this thread as well? You could have just said that the 680 wasn't 6 months late, only 2. Or are you actually going to say there was nothing wrong with the 480 at launch?
 