GTX 780Ti SLI will be faster than GTX 980 SLI

And yes, the GTX 680 was a mid-range part marketed as a high-end product because AMD flopped with their 7970. At $500+ MSRP, it was still around 20% slower than the GTX 680 (same MSRP) on pretty much all fronts.

Yes and no. That situation was artificially created by Nvidia.

Nvidia purposefully held off on releasing GK110-based cards because there was no decent competition from AMD. Made sense to wait.


Honestly, the 680 was mid-range from the get-go. The GK110 taped out right after the 680 launched, and could have been used in consumer cards.

The 780, 780 Ti, and Titan were just SERIOUSLY delayed members of the GeForce 600 series.

LOL Are you both for real? Nvidia have never held back cards before and haven't done so since, yet you both think that Nvidia held back that one time because of the performance of the 7970. Not a chance, they didn't have anything else. There was nothing else ready. They were damn lucky that AMD dropped the ball with the 7xxx series cards.

Do you really think Nvidia gave up millions in revenue by holding back the GK110 chip? Do you think any shareholder would have been happy with that situation? They had a gap in the Quadro market that they were filling by putting two GK104s on the same card, they had a huge order from Oak Ridge waiting to be filled, lol, and you think they held back because AMD's 7xxx series wasn't that good?

Nvidia didn't artificially create any such situation.

Nvidia don't like being in second place. They have shown that they won't wait long before releasing a better card if they have to. The 680 and the 7970 were close in performance, especially with the release of the GHz Edition. Don't you think Nvidia would have released a more powerful card then, if they had it? But they didn't.

You want proof of that? Look at the 780 Ti, the Titan Black, the Titan Z. Look at earlier cards like the 8800 GT, 8800 GTX, and 8800 Ultra. Nvidia have shown time and time again that they will release more and more powerful cards, and that they will release cards even when there is no competition; look at the 9800 cards. AMD had nothing out to even touch the 8800 cards.

And yet somehow you believe that for the GK110 that they waited for the one time in their history. LOL. No they didn't.
 
"GK110 wasn't ready at the time (~40,000 went to oak ridge lab but those were the earliest ones they had and GK100 was a scrap-job they never released) in any mass quantity until much later than GK104 was, so I think it's a combo of AMD being worse out of the gate than expected on the 7970, and what nvidia had prepared. "

I had already posted that :). GK110 taped out (which is months from production) after GK104 had launched, yes, but the first batch, ready even months later, was sold for use in a supercomputer before launching at the prosumer level, long after the GTX 680 came out.

In short, as I said, "GK110 wasn't ready at the time".

Again, nothing to do with AMD's performance. The Oak Ridge supercomputer was supposed to be finished in December 2011; that's when they switched everything over, but they had to use Fermi GPUs because Kepler compute cards weren't available. I am sure the GK100 you mention should have been ready by then. Probably related to the problems that both AMD and Nvidia had moving to 28nm.
 
LOL Are you both for real? Nvidia have never held back cards before and haven't done so since, yet you both think that Nvidia held back that one time because of the performance of the 7970. Not a chance, they didn't have anything else. There was nothing else ready. They were damn lucky that AMD dropped the ball with the 7xxx series cards.
What are you on about? The GK110 was being designed in parallel and had already taped out, with sample silicon in existence, when the GTX 680 launched.

The EXACT same GPU used on the Titan was already in existence, so yes, they held off on releasing it. The 780 and Titan could have come out FAR sooner than they did (even after factoring in supply issues, they would have launched after the GTX 680, but not as late as they did).

Do you really think Nvidia gave up millions in revenue by holding back the GK110 chip?
Nope, they made MORE money by waiting to release the long-completed GPU.

It allowed them to double-dip with no additional R&D on their part. Users bought a GTX 680 (because it was the fastest single-GPU card), then they bought a GTX 780 or Titan (Because it was the NEW fastest single-GPU card).

Waiting was an incredibly smart move. AMD dropping the ball on the 7970 enabled it in grand fashion.

Nvidia didn't artificially create any such situation.
They had a faster GPU design, they purposefully waited to release it in any consumer products. That artificially made the 680 their fastest single-GPU solution for an extended period of time.

Nvidia don't like been in second place, They have shown that they won't wait long before releasing a better card if they have to. The 680 and the 7970 were close in performance, especially with the release of the Ghz edition. Don't you think Nvidia would have released a more powerful card then, if they had it? But they didn't.
There was no need to do so, and also still too close to the launch of the 680. Have to wait as long as possible to get the most out of the double-dip they pulled.

In short, as I said, "GK110 wasn't ready at the time".
Just because the GK110 wasn't ready to launch alongside the rest of the GeForce 600 series doesn't really impact my point, which was that GK110 could have been released in a consumer product sooner than it was (and therefore was held back intentionally).

In short, as I said, "The 780, 780 Ti, and Titan were just SERIOUSLY delayed members of the GeForce 600 series."
 
Totally agree with you Unknown, Nvidia held back on the technology because they could. Hence this update won't be as big in performance; rather, it will allow the devices to run cooler. Now on the mobile side, I am hearing that the GTX 980M will be around 25-30% faster than the 880M and run cooler.
 
What are you on about? The GK110 was being designed in parallel and had already taped out, with sample silicon in existence, when the GTX 680 launched.

The EXACT same GPU used on the Titan was already in existence, so yes, they held off on releasing it. The 780 and Titan could have come out FAR sooner than they did.


Nope, they made MORE money by waiting to release the long-completed GPU.

It allowed them to double-dip with no additional R&D on their part. Users bought a GTX 680 (because it was the fastest single-GPU card), then they bought a GTX 780 or Titan (Because it was the NEW fastest single-GPU card).

Waiting was an incredibly smart move. AMD dropping the ball on the 7970 enabled it.


They had a faster GPU design, they purposefully waited to release it in any consumer products. That artificially made the 680 their fastest single-GPU solution for an extended period of time.


There was no need to do so, and also still too close to the launch of the 680. Have to wait as long as possible to get the most out of the double-dip they pulled.



You are so wrong. NVIDIA did not have anything ready, nothing. Lol, I can't believe you believe the rubbish you write. They didn't hold back; the GK110 chip wasn't ready and the first batches didn't make it to Oak Ridge until the end of September, and it wasn't released officially until November.
 
You are so wrong. NVIDIA did not have anything ready, nothing. Lol, I can't believe you believe the rubbish you write. They didn't hold back; the GK110 chip wasn't ready and the first batches didn't make it to Oak Ridge until the end of September, and it wasn't released officially until November.



And please, you are way out of your depth here.



This argument was 2 years ago, and everyone saw leaked slides stating that GK104 was the middle card, aka the 660 Ti. And shut up about OAK RIDGE! God. That is the server card; they did not use GTX 680s!
 
You are so wrong. NVIDIA did not have anything ready, nothing.
Please learn to read. I didn't say it was ready. In fact, I said it would have to launch AFTER the GTX 680, but not as late as it did.

Seriously. Learn to read...

Lol, I can't believe you believe the rubbish you write.
I didn't write any rubbish. I didn't actually write anything to the effect of what you're responding to. You've misread, and are now going off on a tangent.

They didn't hold back
Yup, they did: the GK110 was ready, finalized, and being produced in mass quantity well before the Titan was launched as a consumer GPU. Ergo, it was held back.

the GK110 chip wasn't ready and the first batches didn't make it to Oak Ridge until the end of September
Again, I never said it was "ready" for the launch of the 600 series. I've said from the beginning that it would have to come out AFTER the launch of the GTX 680.

Just not as late as it did.
 
Again, that's Nvidia limiting how far you can overclock. I get 1250MHz on mine, but I know it could go further without the power limit and voltage lockdown.
Nope, that is NOT Nvidia limiting my overclock. I can set whatever clockspeed I damn well please, and the card will maintain it right up until it starts artifacting because the core isn't that good.

I've had this GTX 780 up to 1300MHz before, but the core simply isn't 100% stable there no matter what voltage I feed it, or how cool I keep it. This isn't an artificial limitation, it's simply where this core craps out :rolleyes:

For reference, you can push a reference model GTX 780 to 1.3V using MSI Afterburner. This voltage is already high enough to make blowing up the voltage regulators kinda easy, though... so disable the power target at your own risk.
 
This thread was a fun read. I do look forward to 750ti like efficiency numbers, then I might not have to watercool my gpu just to keep a heavy gaming box in my room.
 
Yes and no. That situation was artificially created by Nvidia.

Nvidia purposefully held off on releasing GK110-based cards because there was no decent competition from AMD. Made sense to wait.

Stop spouting this BS.
If Nvidia could have released GK100/GK110 on-time and not paid penalties on their contracts, they would have.


Honestly, the 680 was mid-range from the get-go. The GK110 taped out right after the 680 launched, and could have been used in consumer cards.

The 780, 780 Ti, and Titan were just SERIOUSLY delayed members of the GeForce 6 series.
Wow... delusional much?

Nvidia released the best they had as soon as it was ready.
Fortunately for them GK104 had some substantial headroom in it.
 
This thread was a fun read. I do look forward to 750ti like efficiency numbers, then I might not have to watercool my gpu just to keep a heavy gaming box in my room.

I wouldn't expect 750Ti efficiency numbers for the top dog parts. It's a lot easier to do in the lower performance segment. Nvidia typically avoids making parts like those... they always slash the hardware resources and then crank the clock, which of course is bad for efficiency numbers. E.g., check how fast the 650 Ti Boost disappeared from their line-up. So they float a single 750Ti arranged to display nice efficiency for good word of mouth. I'll be surprised if the entire line-up follows so nicely.
 
Stop spouting this BS.
If Nvidia could have released GK100/GK110 on-time and not paid penalties on their contracts, they would have.
What BS, exactly?

They could have launched the Titan quite a bit sooner than they did. The core was done and being mass-produced. The core was already being used on workstation/server cards. Contractual obligations to the Titan supercomputer had long since been fulfilled. They could have launched the GTX Titan months earlier without a problem, especially at the ludicrous $1000 price point they started out at.

Wow... delusional much?
Delusional about what, exactly? You go on to agree with a large portion of what I said in your next breath...

Nvidia released the best they had as soon as it was ready.
Which is exactly in line with what I said: that anything GK110-based would have come out AFTER the launch of the GTX 680 (but not AS FAR after as they chose to launch it).

All I'm saying is that they delayed the launch of the GTX Titan farther than was absolutely necessary, by a significant margin.
 
Yes and no. That situation was artificially created by Nvidia.

Nvidia purposefully held off on releasing GK110-based cards because there was no decent competition from AMD. Made sense to wait.

Does it make sense though? If I was Nvidia, I would have released the GK110 cards out of the gate and priced them accordingly. At $500, a GTX-780 would have been a no-brainer for anyone. This probably would have shut AMD out altogether, or at least forced them to lower the cost of the 7970 to like $250 or something.
 
Does it make sense though? If I was Nvidia, I would have released the GK110 cards out of the gate and priced them accordingly.
Why would you do that, though?

They already had the GTX 680, which they could sell for top-tier prices (because it kept up with the fastest thing AMD had to offer). Releasing the GTX 780 too early would have simply cut into GTX 680 sales.

If the GTX 780 had launched at the same time as the GTX 680, a good portion of people would have only bought the 780, rather than buying the 680 and then upgrading to the 780. By holding off on releasing a GK110-based consumer graphics card, Nvidia was able to double-dip people's wallets without expending any additional R&D.

At $500, a GTX-780 would have been a no-brainer for anyone.
That's exactly the problem. lol
 
What BS, exactly?

They could have launched the Titan quite a bit sooner than they did. The core was done and being mass-produced. The core was already being used on workstation/server cards. Contractual obligations to the Titan supercomputer had long since been fulfilled. They could have launched the GTX Titan months earlier without a problem, especially at the ludicrous $1000 price point they started out at.


Delusional about what, exactly? You go on to agree with a large portion of what I said in your next breath...


Which is exactly in line with what I said: that anything GK110-based would have come out AFTER the launch of the GTX 680 (but not AS FAR after as they chose to launch it).

All I'm saying is that they delayed the launch of the GTX Titan farther than was absolutely necessary, by a significant margin.

Remind me of how many weeks are in a year...
[attached image: c136_GK110-400-A1.jpg]


They were not able to launch Titan until they did.
There is a bigger story there but I'm not going to get into that.

Edit- Actually, I guess I need to, since you clearly have no idea what you are talking about.
There was a respin done after they fulfilled their contract to Oak Ridge in Oct '12.
 
They were not able to launch Titan until they did.
So you think it couldn't have been released even a single DAY earlier?

Please, get real. It could have come out earlier (they had the core finished and contractual obligations wrapped up). They simply had no reason to release it until they did.
 
Just because the GK110 wasn't ready to launch alongside the rest of the GeForce 600 series doesn't really impact my point, which was that GK110 could have been released in a consumer product sooner than it was (and therefore was held back intentionally).

Kyle has debunked this myth many times. His contacts in the industry told him it was not true.
 
Why would you do that, though?

They already had the GTX 680, which they could sell for top-tier prices (because it kept up with the fastest thing AMD had to offer). Releasing the GTX 780 too early would have simply cut into GTX 680 sales.

Interesting perspective. I guess the point I'm trying to make is, if they had priced their top-tier cards to compete with AMD's top-tier cards, I believe Nvidia would have shut them out completely. There'd be no reason to ever purchase an AMD card if Nvidia let their "top dog" into the ring with AMD's. At every competing price point, Nvidia would have decimated AMD. With AMD's financial shape not being quite what Nvidia's is (especially after the Bulldozer fiasco), I would assume that it would be advantageous for Nvidia to take out one of its competitors. Instead, it looks like Nvidia's letting AMD live... for some reason. I mean, in the end, releasing the GTX 680 as a high-end part made them TONS of money, sure. But I think that releasing their top dog and destroying AMD could have had the same effect, no?
 
So you think it couldn't have been released even a single DAY earlier?

Please, get real. It could have come out earlier (they had the core finished and contractual obligations wrapped up). They simply had no reason to release it until they did.

I'm sure they could have released the same version as they shipped to Oak Ridge, but I don't think consumers would have been very happy with it.

Why would they do a respin if it wasn't necessary?
If you knew how much silicon they went through to supply Oak Ridge, we wouldn't be discussing this.
 
I wouldn't expect 750Ti efficiency numbers for the top dog parts.

I agree. We probably won't see 750Ti efficiency for flagships until 1-3 generations after Pascal.

Just think: in a mere 10+ years, we'll have an entry-level budget $650 (adjusted for inflation) single GPU that offers 8-way Titan-Z SLI performance with 256GB VRAM while being passively cooled because of very low power draw and heat output and will struggle with the newly released CoD 47: Derp Bought It Again Edition on an outdated single 8640p 240Hz monitor!
 
Why would they do a respin if it wasn't necessary?
Because AMD gave them ample time to do so (smoke 'em if you got 'em) :rolleyes:

Just because they already had the thing done doesn't mean they totally sat on their hands for months. Might as well make a few tweaks if you've got the time.
 
Because AMD gave them ample time to do so (smoke 'em if you got 'em) :rolleyes:

Just because they already had the thing done doesn't mean they totally sat on their hands for months. Might as well make a few tweaks if you've got the time.

So they spent the money on a respin and lost the revenue from sales for almost 2 quarters because "they had ample time."
This is the BS I'm talking about.

Edit- You don't do a respin simply because you feel like it...
 
So they spent the money on a respin and lost the revenue from sales for almost 2 quarters because "they had ample time."
Nope, they simply had no reason to release it 2 quarters earlier. No competition. They could keep milking the GTX 680, tweak the GK110 in their off-hours, and release it when sales of the 680 dried up / AMD had new cards on the horizon.

This is a strategy that rakes in additional revenue; it doesn't lose it...

This is the BS I'm talking about.
Again, what BS?
 
I remember it a little differently than others... the GTX 680 was Nvidia's second try to outrun the 7970 when the GTX 580 couldn't...

Then we have all these Titans / Black / Z / GTX 780 / 780 Ti trying to hold off the Hawaii Pro and XT... well, what about that big chip AMD taped out with Tonga, code name (295x), with unknown specs but more than an overclocked Hawaii part, as its die size is bigger than Hawaii's...
 
Nope, they simply had no reason to release it 2 quarters earlier. No competition. They could keep milking the GTX 680, tweak the GK110 in their off-hours, and release it when sales of the 680 dried up / AMD had new cards on the horizon.

This is a strategy that rakes in additional revenue; it doesn't lose it...


Again, what BS?

Then why did they release it?
What was it competing with?
Why release Titan?

Your story has no basis in reality.
 
I remember it a little differently than others... the GTX 680 was Nvidia's second try to outrun the 7970 when the GTX 580 couldn't...

The GTX 580 came out long before the 7970 (Nov 2010). Nvidia launched the GTX 680 three months after the 7970, and it was faster than the 7970. AMD later released the 7970 GHz Edition to "outrun" the GTX 680.
 
The GTX 580 came out long before the 7970 (Nov 2010). Nvidia launched the GTX 680 three months after the 7970, and it was faster than the 7970. AMD later released the 7970 GHz Edition to "outrun" the GTX 680.

Correct. Then Nvidia got the lead with the 780 in May 2013, AMD beat it with the 290X in Oct 2013, so Nvidia put out the 780 Ti in Nov 2013 to beat that.
 
I agree. We probably won't see 750Ti efficiency for flagships until 1-3 generations after Pascal.

Just think: in a mere 10+ years, we'll have an entry-level budget $650 (adjusted for inflation) single GPU that offers 8-way Titan-Z SLI performance with 256GB VRAM while being passively cooled because of very low power draw and heat output and will struggle with the newly released CoD 47: Derp Bought It Again Edition on an outdated single 8640p 240Hz monitor!

1920x1200 displays have been available for under $600 since 2004...we still can't drive a 4K display at reasonable framerates with a single flagship GPU. Hell plenty of games can't even hit 60fps with four $1,000 Titan Blacks.

And manufacturing process advancements are significantly slower than they were 10 years ago. Back then we would transition into a new process within 12-18 months. We have been on 28 nm since Fall of 2011...

In 10 years we've improved our pixel count by 3.6 times (1200p -> 2160p). You predict in the next 10 we'll increase it by 16 times?! In 10+ years we'll be lucky if a pair of cards in SLI can drive 4320p at 40-45fps.
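For what it's worth, the 3.6x figure checks out as a ratio of total pixels; here's the arithmetic as a quick, self-contained sketch (nothing below comes from the thread beyond the resolutions themselves):

```python
# Quick check of the pixel-count arithmetic above.
# 3.6x is the ratio of total pixels between 1920x1200 and 3840x2160.

def pixels(width, height):
    """Total pixel count for a given resolution."""
    return width * height

wuxga = pixels(1920, 1200)   # "1200p" (1920x1200), sub-$600 since ~2004
uhd   = pixels(3840, 2160)   # "2160p" / 4K UHD
uhd8k = pixels(7680, 4320)   # "4320p" / 8K UHD

print(f"1200p -> 2160p: {uhd / wuxga:.1f}x the pixels")   # 3.6x
print(f"2160p -> 4320p: {uhd8k / uhd:.1f}x the pixels")   # 4.0x
```

Going from 4K to 8K is only another 4x on top of that, which is why a 16x jump in a single decade looks so aggressive.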
 
1920x1200 displays have been available for under $600 since 2004...we still can't drive a 4K display at reasonable framerates with a single flagship GPU. Hell plenty of games can't even hit 60fps with four $1,000 Titan Blacks.

And manufacturing process advancements are significantly slower than they were 10 years ago. Back then we would transition into a new process within 12-18 months. We have been on 28 nm since Fall of 2011...

In 10 years we've improved our pixel count by 3.6 times (1200p -> 2160p). You predict in the next 10 we'll increase it by 16 times?! In 10+ years we'll be lucky if a pair of cards in SLI can drive 4320p at 40-45fps.

lol, I think there was some humor in the OP's post. Anyway, while GPUs are getting more powerful, new titles gobble up the power with extra programming (which makes sense, of course). So 2560x1600 on a single GPU will always be out of reach unless 2560 somehow becomes mainstream, which it won't for a very long time, if ever.
 
1920x1200 displays have been available for under $600 since 2004...we still can't drive a 4K display at reasonable framerates with a single flagship GPU. Hell plenty of games can't even hit 60fps with four $1,000 Titan Blacks.

And manufacturing process advancements are significantly slower than they were 10 years ago. Back then we would transition into a new process within 12-18 months. We have been on 28 nm since Fall of 2011...

In 10 years we've improved our pixel count by 3.6 times (1200p -> 2160p). You predict in the next 10 we'll increase it by 16 times?! In 10+ years we'll be lucky if a pair of cards in SLI can drive 4320p at 40-45fps.

Check your inbox, there's probably a message regarding a recall on your model sarcasmeter. :p
 
It'll be interesting to see if any Maxwell parts make it into Steam Machines (hello? is Valve still doing those, or did they go back to making hats?), mobile or otherwise. I wonder if they could do some kind of special chip that's low-power and super-quiet with a low profile, like in a laptop.

For almost all consumers they'd only need to drive a 1080p TV.
 
LOL Are you both for real? Nvidia have never held back cards before and haven't done so since, yet you both think that Nvidia held back that one time because of the performance of the 7970.

Easy now... don't lump me in with that. I simply stated that the 680 (GK104) was the mid-range chip - as every other GX##4 chip has been, where 'X' is the architecture and '#' refers to the generation of architecture - but was sold as the high-end video card in the product stack, which is normally the GX100 or GX110. The reasons I stated were that AMD didn't have a decent card to compete with GK104 (much less GK110) AND that the GK110 part did not have the yields necessary for consumer marketing.

It's easy on the company financially to sell Oak Ridge GK110 chips at several thousand dollars per chip and meet their limited demand when your early fabrications are netting maybe a 10-20% yield per wafer. It's a lot harder financially to sell that same GK110 to ten or twenty times the number of potential consumers (in terms of the number of chips needed) at $500-650 per chip - at least until your fabrication process yields are much, much better than 20% per wafer.
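The yield argument above can be made concrete with a toy calculation. Every number below (wafer cost, dies per wafer, prices, yield) is invented purely for illustration; none are real Nvidia or TSMC figures:

```python
# Toy illustration of the yield argument: at low yields, a few
# multi-thousand-dollar compute parts can cover a wafer, while
# $500-650 consumer parts barely can. ALL numbers here are
# hypothetical, chosen only to show the shape of the trade-off.

WAFER_COST = 5000        # hypothetical cost of one 28 nm wafer, in dollars
DIES_PER_WAFER = 100     # hypothetical GK110-sized die candidates per wafer

def revenue_per_wafer(yield_rate, price_per_chip):
    """Revenue from one wafer given a yield fraction and a per-chip price."""
    good_dies = DIES_PER_WAFER * yield_rate
    return good_dies * price_per_chip

compute  = revenue_per_wafer(0.15, 3000)  # 15% yield, ~$3000 compute part
consumer = revenue_per_wafer(0.15, 550)   # same yield, $550 consumer part

print(f"compute parts:  ${compute:,.0f} per ${WAFER_COST:,} wafer")   # $45,000
print(f"consumer parts: ${consumer:,.0f} per ${WAFER_COST:,} wafer")  # $8,250
```

At these made-up numbers a wafer of consumer parts only barely clears its own cost, while the same wafer of compute parts clears it many times over - which is the financial case the post is making.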

Do you really think Nvidia gave up millions in revenue by holding back the GK110 chip? Do you think any shareholder would have been happy with that situation? They had a gap in the quadro market that they were filling by putting two GK104's on the same gpu, they had a huge order from Oakridge waiting to be filled, lol and you think they held back because AMD's 7xxx series wasn't that good?

I'm not deluded into thinking GK110 was purposefully being held back; if anything, AMD's flop with the 7970 allowed a new series of chips to come to market without there being a 2.5-year gap between the releases of subsequent generations of architectures. It took a year after the GT200 (280) release to come out with GF100 (480) and only 6 months after that to release GF110 (580). Waiting 1.5 years after GF110 for GK104 was a pretty long wait by historical standards, much less delaying it another 6 months in order to align the product stack like Fermi's was and do a similar 6-month transition.

In fact, if they did wait until the Fall of 2012, we'd have had the *new* 660 (GK104's 680) in October, then waited until May of 2013 for the *new* 680 (GK110's 780) and been sitting around for the past year and a half with no refresh to the Kepler architecture and a backwards release schedule (low end released before high end). Option two would have been to wait 2.5 years between the end of an architecture (GF110 in Nov. 2010) and the start of a new one (GK110 in May 2013) and be waiting around for a year and a half with no architecture refresh on the table as historically expected to occur within 6-12 months - instead moving into a new architecture altogether - leaving the potential for a 4 year period between graphics card updates for the end user.

While that might not necessarily have happened due to the huge performance gains one would see going from GF110 to GK110, it is something financially not worth risking with competition in the market present as market share will inevitably be lost in that time frame.

Either way you dissect it, there is only 1 generation of Kepler cards - and the necessity of having to fiddle with the product stack due to yields/obligations and change up the conventional progression of X00-series cards and code names helped the consumer as well as ensured Nvidia retained its market share. Whether Kepler is the anomaly in terms of code names and a single generation is yet to be seen, but I have a feeling it will be.
 
I still don't understand how people think Tahiti flopped... It did exactly what it was supposed to do and did it well.
If Tahiti flopped, then GK104 was very slightly less of a flop.
 
It flopped for the initial price point compared to the competition's. And I don't see how GK104 was a flop at all compared to its predecessor, the GF110-based 580. It was dominating it in gaming benchmarks and performance tests by 30% or better in everything. The only thing it was worse at was compute and raytrace rendering. The 7970 in regards to its predecessor was about as good of an improvement as the 680 was to the 580, but if the 680 was that much better than the 7970... what reason would you have to go from a 7870 to anything other than the 680 if the 680 and 7970 shared the same price?
 
It flopped for the initial price point compared to the competition's.
They priced it at what the market could bear.
At launch it offered ~20-30% more performance than a GTX 580 and was only 10% more expensive ($550 vs. the GTX 580's $500).

And I don't see how GK104 was a flop at all compared to its predecessor, the GF110-based 580.
I'm not sure what you are getting at here... Maybe I'm misunderstanding, but it sounds like, in the previous quote, you are coming to the conclusion that the 7970 was a flop because the GTX 680 came out 3+ months after the 7970 at a slightly lower price while offering ~10% more performance.

Yet you say the GTX 680 wasn't a flop because it handily beat the GTX 580, while barely beating the 7970 and being 9% less expensive; a win, but nothing crazy, especially since AMD adjusted their prices accordingly.

It was dominating it in gaming benchmarks and performance tests by 30% or better in everything. The only thing it was worse at was compute and raytrace rendering. The 7970 in regards to its predecessor was about as good of an improvement as the 680 was to the 580, but if the 680 was that much better than the 7970... what reason would you have to go from a 7870 to anything other than the 680 if the 680 and 7970 shared the same price?
It was also tested with boost, which was new, and testing/benching procedures at the time didn't really account for it. While it was obviously innovative for GPUs and allowed for higher performance in typical applications, it also introduced extra variability that wasn't accounted for.

Within about a year of the GTX 680 launching, the reference 7970 was neck and neck with it.
 
Then why did they release it?
With GTX 680 sales going dry and new AMD cards on the horizon, there was no longer any reason to hold it back.

What was it competing with?
That's the brilliance of Nvidia's timing.

They had sold pretty much all the 680s they were going to sell, so it wouldn't hurt them to release a faster card.
They also got a bit of a jump on AMD, which made them look great from a PR standpoint (and gave them a longer sales run before AMD had a competing product).

Why release Titan?
Because the timing was damn-near perfect for them, as explained above.

Your story has no basis in reality.
Not seeing the problem here, makes perfect sense.
 
I guess we can expect Nvidia to release a cut-down version of their chip first (the 480/680), then release full versions of the cards (580/780/Titan) after the initial release.
 