NVIDIA Delays Fermi to March 2010

I didn't know nVidia had anything to do with the fab issues at TSMC. :rolleyes:
Maybe I'm missing some news tidbit here, but AFAIK TSMC is having low-yield problems, which is pushing everything back; that's not nV's fault.

Designing a highly complex, large GPU on a new process (40nm) is a very risky decision, and when you make that kind of decision you are responsible for the consequences. ATI went with a simpler design and it paid off.
Low yield issues are common with new processes, so there's no surprise there.
I've always liked NVIDIA, but the Fermi debacle is the result of their arrogance.
Of course, if Fermi proves to be worthy then I will buy one!
 
Now would be a fantastic time for ATI to lower their bloody prices and lock in most of the GPU market, rather than deal with this scarcity. This ATI generation is great, but FAR too expensive. If they were smart, they would lower prices to 4xxx-series levels, which would have everyone with post-Xmas spending money buying the 5xxx series instead of holding out. Nvidia would then be hard pressed to win back people who already own an excellent DX11/OpenGL3 card, no matter how good Fermi is.

C'mon, ATI. Don't screw this up. Don't let momentary greed get the best of you. Oh, and great job with Linux drivers so far, but KEEP GOING!
 
I wouldn't count on it. By the end of 2010 AMD's next gen, which they are 100% certain to be developing already, will most likely be out, while nVidia has to wait until March, six months late, just to release its current gen. They will be behind; that's all there is to it.

I was thinking this too. If it gets pushed back later than March 2010, then AMD/ATi may have a 2nd-generation DX11 card ready for release by the time Fermi ships. I wonder at what point nVidia thinks about scrapping Fermi, taking what they've learned from the whole process, and coming out with a NEW new card in order to maintain competitiveness.

Assuming, of course, that AMD/ATi doesn't just sit on its hands now that it has a technology lead.
 
Keep in mind that Nvidia has several development teams. So even though Fermi is late, we could still see an Nvidia 2nd-gen DX11 card by mid-2010.
 
Mannnnnnnn... the 5850s really need to come down in price. ATI has nothing between $200 and $300 right now in the 5000 series.
 
Now would be a fantastic time for ATI to lower their bloody prices and lock in most of the GPU market, rather than deal with this scarcity. This ATI generation is great, but FAR too expensive. If they were smart, they would lower prices to 4xxx-series levels, which would have everyone with post-Xmas spending money buying the 5xxx series instead of holding out. Nvidia would then be hard pressed to win back people who already own an excellent DX11/OpenGL3 card, no matter how good Fermi is.

C'mon, ATI. Don't screw this up. Don't let momentary greed get the best of you. Oh, and great job with Linux drivers so far, but KEEP GOING!

This, this, this. They have an excellent opportunity to grab a lot of market share right now, if only they're willing to take the steps to do so. How many times do we see the cost per unit vs. the number of units sold argument? If AMD/ATi have any brains at all they will do as Xaenos has suggested and drop their prices now. This would make it very difficult for a lot of those who have been holding out for Fermi to keep holding out for Fermi.
 
It actually leaves me wondering (and this is pure speculation)... AMD has been second to Intel in the CPU market forever, and ATi has been second to nVidia in the GPU market forever... I wonder if they actually want the market lead, or if they are so used to being second that they are content to stay there and be "the underdog".
 
They have no need to lower prices. People are still buying the 5000 series at current prices. They are gaining market share and profits for their stockholders.
This is what happens when there is no competition.
 
Hell, nvidia has a hell of a lot more problems than their difficulty with Fermi right now. How about the fact that they haven't released a half-decent driver in the last 2 years? I don't care how awesome their next-generation vid cards are; if their drivers continue to be utter and complete garbage, I'd rather go ATI.
 
Now would be a fantastic time for ATI to lower their bloody prices and lock in most of the GPU market, rather than deal with this scarcity. This ATI generation is great, but FAR too expensive. If they were smart, they would lower prices to 4xxx-series levels, which would have everyone with post-Xmas spending money buying the 5xxx series instead of holding out. Nvidia would then be hard pressed to win back people who already own an excellent DX11/OpenGL3 card, no matter how good Fermi is.

C'mon, ATI. Don't screw this up. Don't let momentary greed get the best of you. Oh, and great job with Linux drivers so far, but KEEP GOING!

The prices are already reasonable and in line with market conditions. With the 285 and 295 still costing as much as or more than cards that beat or tie them, and demand meeting or exceeding supply, there is little reason to drop prices. In fact, we are lucky they are not higher: rising RAM prices and TSMC's issues are as much to blame as anything for the rise in MSRP of AMD's cards.
Another thing to note is that AMD's control of pricing slips a little once the chips/cards leave for the AIBs, and all but disappears once the cards leave the AIBs for the e-tailers/retailers. At that point supply and demand take over.

I'm not saying AMD could not reduce the price of their GPUs, but they really have little reason to, and I doubt it would make much difference in the prices we are seeing at retail even if they did. Lowering the manufacturer's price of a supply-constrained item only changes who takes how much of the profit; it does not usually lead to reduced retail prices, at least not until supply is no longer so limited.
 
The prices are already reasonable and in line with market conditions. With the 285 and 295 still costing as much as or more than cards that beat or tie them, and demand meeting or exceeding supply, there is little reason to drop prices. In fact, we are lucky they are not higher: rising RAM prices and TSMC's issues are as much to blame as anything for the rise in MSRP of AMD's cards.
Another thing to note is that AMD's control of pricing slips a little once the chips/cards leave for the AIBs, and all but disappears once the cards leave the AIBs for the e-tailers/retailers. At that point supply and demand take over.

I'm not saying AMD could not reduce the price of their GPUs, but they really have little reason to, and I doubt it would make much difference in the prices we are seeing at retail even if they did. Lowering the manufacturer's price of a supply-constrained item only changes who takes how much of the profit; it does not usually lead to reduced retail prices, at least not until supply is no longer so limited.

Very true
 
Nvidia has serious problems with Fermi:

From SemiAccurate (Dec 21, '09):

"(Nvidia) designed a chip that was more or less unmanufacturable, and we have been saying as much for more than six months now. It is big, 530mm^2 at the minimum, likely 10+mm^2 more, and way too hot. The 448 SP version is listed at 225W TDP, 190W 'typical', and that is with two of the 16 shader clusters clusters fused off. With them on, it would likely be well above 250W, far too hot for a single GPU card."

"Then there is the whole problem of 512 SPs promised versus 448 SPs delivered. Fermi is arranged as 16 clusters of 32 shaders, and given that it is turning off 64 shaders, it looks like the minimum granularity it can fuse off is a single cluster of 32. This means it is having problems getting less than two unrecoverable errors per die, not a good sign."

"The architecture is broken, badly designed, and badly thought out. Nvidia does not understand the basics of modern semiconductor design, and is architecting its chips based on ego, not science. The era of massive GPUs is long over, but Nvidia (mis-)management doesn't seem to want to move their egos out of the way and do the right thing. Now the company is left with a flagship part it can't make."

"If Nvidia has a saving grace, it is the upcoming 28nm process node from TSMC, but that is for all intents a 2011 process for any sort of volume. Given that ATI tends to beat Nvidia to any new process node, and does it better, that makes it likely that this "light at the end of the tunnel" will be ATI's Northern Islands chips running over it, rather than anything that fixes Nvidia's problems."

Entire article: http://www.semiaccurate.com/2009/12/21/nvidia-castrates-fermi-448sps/
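
If you want to sanity-check the shader math in that quote, it's simple arithmetic. Here's a quick back-of-the-envelope sketch in Python; the cluster counts come straight from the article, and the yield reasoning at the end is just its argument restated, not anything confirmed by Nvidia:

```python
# Shader figures as reported by SemiAccurate, not official Nvidia specs.
CLUSTERS = 16          # shader clusters per Fermi die
SPS_PER_CLUSTER = 32   # stream processors per cluster

full_part = CLUSTERS * SPS_PER_CLUSTER            # 512 SPs, as promised
shipping_part = (CLUSTERS - 2) * SPS_PER_CLUSTER  # 448 SPs, as delivered

print(f"Full die: {full_part} SPs")                    # 512
print(f"Two clusters fused off: {shipping_part} SPs")  # 448

# If the smallest unit that can be disabled is one 32-SP cluster, then a
# 448-SP SKU implies dice are routinely coming back with defects in at
# least two separate clusters: the "two unrecoverable errors per die"
# the article is worried about.
```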
 
Maybe nvidia will learn something from this.


1. Stop with the rebranding game. Quit taking old chips, throwing in a die shrink, and calling the result a new chip.
2. Bigger is not better in this day and age. Brute force has a chance of getting you the performance lead, but at a high cost: it usually means high power usage, low yields, delays, and a price disadvantage against the competition.
3. Lose the attitude. You just end up annoying your customers.
 
I just wish it was like the old days, when we had a lot of games that could really push these cards. Now it seems the only way that happens is because of poor programming.
 
I just wish it was like the old days, when we had a lot of games that could really push these cards. Now it seems the only way that happens is because of poor programming.

Yep. That's why I have no intention of replacing my 285s any time soon. At least not until the Radeon 6xxx series or GeForce 4xx series. There are no games right now that really push these cards.
 
-snip-

...While I doubt Fermi will be replacing my 5850 any time soon, I do hope nVidia is smart and its cheaper variants (ie $109-$150) perform around GTX275 level, as THAT would be a steal.

I'm wondering why the G104 (mentioned in the VERY SAME ARTICLE) isn't getting more attention. According to the article it's launching in Q2 '10 and will replace the former high-end generation. Cutting Fermi in half and giving it a 256-bit bus and 4-5GHz GDDR5 looks like a feasible scenario. Yeah, that's a long time... but considering the main market is sub-$200, most people want DX11, and the 5700 series is not exactly winning hearts and minds, it's worth talking about.

Now, while full Fermi could be anywhere from slightly to considerably faster, a half part with some decent clocks (right under 150W TDP... one 6-pin?) could very well be a price/performance alternative to the 5850 and a winner compared to the 5700 series. The potential is there to fill a very important gap.

Now, granted, by that time the 5890 will be out (~1GHz/~6GHz/$400 MSRP?) with pricing likely stabilized, and the 5870/5850 will likely be in the ~$200/300 range. But still, in that window before Northern Islands and the 28nm Fermis, it could be an interesting part to look forward to. If nothing else, depending on performance, it could shake up the 5850's and 5770's price points.

Then again...Maybe not. :p
 
I was just about to ask if I read that right, amdturtle. Don't know if it's a victory for AMD, or Nv scoring an own goal by tripping over their huge balls, sending them rolling into the net on their half of the field.


No mention of NVIDIA having Eyefinity or their own form of it?

PhysX and CUDA plastered over everything will make up for it, totally!
 
Nvidia has serious problems with Fermi:

From SemiAccurate (Dec 21, '09):

"(Nvidia) designed a chip that was more or less unmanufacturable, and we have been saying as much for more than six months now. It is big, 530mm^2 at the minimum, likely 10+mm^2 more, and way too hot. The 448 SP version is listed at 225W TDP, 190W 'typical', and that is with two of the 16 shader clusters clusters fused off. With them on, it would likely be well above 250W, far too hot for a single GPU card."

"Then there is the whole problem of 512 SPs promised versus 448 SPs delivered. Fermi is arranged as 16 clusters of 32 shaders, and given that it is turning off 64 shaders, it looks like the minimum granularity it can fuse off is a single cluster of 32. This means it is having problems getting less than two unrecoverable errors per die, not a good sign."

"The architecture is broken, badly designed, and badly thought out. Nvidia does not understand the basics of modern semiconductor design, and is architecting its chips based on ego, not science. The era of massive GPUs is long over, but Nvidia (mis-)management doesn't seem to want to move their egos out of the way and do the right thing. Now the company is left with a flagship part it can't make."

"If Nvidia has a saving grace, it is the upcoming 28nm process node from TSMC, but that is for all intents a 2011 process for any sort of volume. Given that ATI tends to beat Nvidia to any new process node, and does it better, that makes it likely that this "light at the end of the tunnel" will be ATI's Northern Islands chips running over it, rather than anything that fixes Nvidia's problems."

Entire article: http://www.semiaccurate.com/2009/12/21/nvidia-castrates-fermi-448sps/


Damn, it's gonna have over 300GB/sec of memory bandwidth, right? 7200MHz (effective) GDDR5 x (384/8) bytes = 345.6 GB/sec!
 
Hell, nvidia has a hell of a lot more problems than their difficulty with Fermi right now. How about the fact that they haven't released a half-decent driver in the last 2 years? I don't care how awesome their next-generation vid cards are; if their drivers continue to be utter and complete garbage, I'd rather go ATI.

Funny, my experience has been exactly the opposite. :cool:
 
Funny, my experience has been exactly the opposite. :cool:

To be honest, same here. Though I do not have an Nv 2xx card, only 8800/9800 and AMD 4xxx and 5xxx cards. As long as you don't hop on Nv's beta parade, both companies' drivers have been pretty solid for me.
 
Damn, it's gonna have over 300GB/sec of memory bandwidth, right? 7200MHz (effective) GDDR5 x (384/8) bytes = 345.6 GB/sec!

Not sure what these cards will show up with; I remember reading this at MadShrimps back in September:

"A 512-bit GPU connected to 1-2 GB of GDDR5 memory at 1000 MHz QDR [Quad Data Rate]... We are looking at memory bandwidth of 256GB/s per single GPU, more than double that of the GeForce GTX295 [896-bit x 1GHz DDR = 224 GB/s]. That would be a conservative estimate, but given the development of GDDR5 memory, if the GT300 chip ends up connected to GDDR5 memory at 1050 or 1150 [alleged 5870/5870X2 clocks], we are looking at memory bandwidth anywhere between 268.8-294.4 GB/s."

"If overclocking-friendly vendors such as EVGA, BFG, Gainward or XFX overclock these cards to let's say 1180 or 1200 MHz QDR [4.7-4.8 GigaTransfers/second, GT/s or "GHz"], the first DirectX 11 GPUs will exceed 300GB/s of bandwidth per single GPU !"
 
Well, for me, I was ready to plunk down for a whole new system for MW2, then discovered it has no dedicated servers.

That means my 8800GTs in SLI are just fine for COD4 (MW1).

So Intel loses a sale: no i7 920 2.66GHz, no Antec 900 Two, no ASRock X58, no XFX 5870, no Kingston HyperX 8GB memory, no Zalman cooler.

Look at all the money I'm saving!

I'm sure a game will come out that I have to upgrade for, but by that time Fermi will be out.
 
I think a lot of people are waiting for Nvidia to release Fermi before they go DX11... firstly to see how it compares, and secondly in anticipation of price reductions.
When I bought my 3870X2 it cost about £300, while currently the 5970 is listed at £550... why the huge difference, when both were king of the hill at the time of release?
ATi should drop their prices to a more reasonable level and secure this part of the market before the Fermi release, whilst they are still the only game in town.
 
I was hoping to compare performance between Fermi and the current batch of kickass ATI cards before purchasing an upgrade but seeing how nvidia is dragging its heels I may just go with a 5870.
 
And I don't get why anyone thinks ATI will drop prices. Selling the high-end GeForce 300 cards anywhere below $500-600 will not make financial sense for NVIDIA and its partners, due to the costs of a much larger chip and worse yields. Now, if NVIDIA's GF360 is 10% quicker than the HD5870 (I can't really believe they can do better than they did with the 9800GX2 -> GTX280 transition, where the dual-chip solution from the previous gen and the single-chip solution from the new gen performed essentially the same) and carries a $500 label, or a GTX380 (+20%, for example) carries a $600 label vs $420 for the HD5870, there is no reason for ATI to lower the price of anything except the HD5970.
 
I'm happy with what I'm running, even if they released it tomorrow, lol.
As for worrying about DX11: when more games I care about start using it, then I'll upgrade. Until then it's just like it was with DX10, I'll wait.

zero2dash said:
Funny, my experience has been exactly the opposite. :cool:
Same here.
 
Any more delays and they'll have to rename the chip the GTX787.

I lol'd.

No problem here waiting on Fermi. I hadn't planned to rebuild until i9 anyway, and my 280s are still just destroying whatever I throw at them, so it's whatever until March. At least it's been fun laughing at all the doomsday asshats.
 
While I understand the desire for cheaper video cards, it's unlikely to happen when you have 280s and 285s in the $300-400 price range right now.
 
ATI overpricing? Take that issue up with your favorite e-tailers.

Prices overseas haven't moved one bit since launch; if anything, they've gotten lower.

ATI cannot deliver enough cards. That's the issue!

Nvidia cannot provide more competition to bring prices down even further. For you Nvidia fans: be glad ATI isn't dropping prices even more, because do the math (rough numbers below):

Performance per mm^2 @ 40nm > performance per mm^2 @ 55nm, meaning NV wouldn't be able to sell a single card.
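
To put very rough numbers on that claim, here's an illustrative Python sketch. The die sizes are approximate public figures (Cypress around 334mm^2 at 40nm, GT200b around 470mm^2 at 55nm), and the relative performance number is a placeholder assumption picked for illustration, not a benchmark result:

```python
# Approximate die sizes from public reporting; treat as ballpark only.
cypress_mm2 = 334.0   # HD 5870 (40nm)
gt200b_mm2 = 470.0    # GTX 285 (55nm)

# Assumption: the 5870 is ~30% faster than a GTX 285 overall.
rel_perf_5870 = 1.3
rel_perf_285 = 1.0

print(rel_perf_5870 / cypress_mm2)  # ~0.0039 perf per mm^2
print(rel_perf_285 / gt200b_mm2)    # ~0.0021 perf per mm^2

# Under these assumptions AMD extracts nearly twice the performance from
# each mm^2 of silicon, which is why a price war at current die sizes
# would hurt Nvidia far more than ATI.
```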
 
While I understand the desire for cheaper video cards, it's unlikely to happen when you have 280s and 285s in the $300-400 price range right now.

The 5850 is 25% cheaper than the GTX285.
It's still faster, with lower power consumption and more features: bitstreamed HDMI, triple-head output, DisplayPort, an onboard audio controller, DX11... where to stop?

Nvidia's production costs and the design of their GTX lineup are too expensive to compete, and yet another expensive card won't cut prices unless it's VERY efficient.
 
The 5850 is 25% cheaper than the GTX285.
It's still faster, with lower power consumption and more features: bitstreamed HDMI, triple-head output, DisplayPort, an onboard audio controller, DX11... where to stop?

Nvidia's production costs and the design of their GTX lineup are too expensive to compete, and yet another expensive card won't cut prices unless it's VERY efficient.

Oh, we are in total agreement. If it weren't for ATI's drivers on Linux I would have purchased one by now. I have a 260 that's barely working and I would like to replace it, though it seems incredibly stupid to pay $300-400 for a last-gen part.
 
What Nvidia originally wanted to do with Fermi now seems possible only with the upcoming 28nm process from TSMC. But they've got another problem looming on the horizon: in Oct '09 ATI unveiled on its roadmap the new "Northern Islands" series of GPUs, which it said would be based on 32nm. Now ATI is saying they'll skip 32 and go straight to 28 (perhaps due to Nvidia's delay?), which TSMC thinks it can have ready as early as Q4 2010.

I've no doubt that Nvidia is going to drop the bomb with some new, next-gen card; we just might have to wait till 2011 to see it. But, as I said, ATI will be poised to quell the rebellion.

It's been a good run for Nvidia, taking the crown from ATI back in '06 when AMD bought 'er out, but it seems ATI has won the title back and will keep it at least until 2011.

Competition is the greatest driver of the low prices we as consumers enjoy in a free market, so this is actually a good thing, Nvidia fans. And I'm sure ATI fans are jumping up and down...
 
Yeah, I will never buy a GPU with a TDP of 250W+... unless its idle power is in the 10-20W range. That's absurd...

Well, I guess if the performance gain were twice the power increase (ie the ~70 extra watts over the 5870 is roughly a 38% power increase, so instead of a mere 38% performance bump it delivers more like 76%) it wouldn't be such a downer; perf-per-watt ratio and all... But barring that, no thanks, nVidia.
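
Those 38%/76% figures fall out of simple arithmetic if you assume the HD 5870 draws roughly 188W (its approximate board-power rating; the exact number is an assumption here). A quick Python sketch:

```python
# Assumed HD 5870 board power; the real TDP is in this neighborhood.
hd5870_watts = 188.0
extra_watts = 70.0   # the hypothetical Fermi's additional draw

power_increase = extra_watts / hd5870_watts
print(f"Power increase:        {power_increase:.0%}")      # ~37%

# Merely matching the 5870's perf/watt needs the same ~37-38% more
# performance; the "twice as high" bar from the post needs double that.
print(f"Break-even perf gain:  {power_increase:.0%}")
print(f"2x perf-per-watt gain: {2 * power_increase:.0%}")  # ~74%
```

So the post's round numbers are in the right ballpark, give or take the exact TDP you assume for the 5870.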
 