Nvidia's real problem - the next generation

Can't wait to read it as soon as Anandtech gets their malware attack sorted out. :)

Man, I was wondering why Google was telling me that going to Anandtech could harm my computer! It was blocked as malicious!

google report on anandtech said:
Of the 58 pages that we tested on the site over the past 90 days, 7 page(s) resulted in malicious software being downloaded and installed without user consent. The last time that Google visited this site was on 2010-03-27, and the last time that suspicious content was found on this site was on 2010-03-27.

Malicious software includes 13 trojan(s). Successful infection resulted in an average of 2 new process(es) on the target machine.

Malicious software is hosted on 4 domain(s), including googleanalyticsz.com/, mjgjo.com/, green-fast.net/.

1 domain(s) appear to be functioning as intermediaries for distributing malware to visitors of this site, including whoiz.shit.la/.

This site was hosted on 1 network(s) including AS36643 (EICOMM).
 
I thought part of the reason Fermi has such ridiculously high thermals is some leakage problem they had on the 40nm process. At least, isn't that what Charlie has been spouting for the last five months? So NVIDIA could reduce the thermals considerably by fixing whatever they broke at 40nm, and then even more with a die-shrink.

I definitely agree with OP that NVIDIA is doing a "brute force" approach in order to get the GTX 480 out the door right now, but I'm fairly certain the architecture itself has a lot of room for improvement in both performance and efficiency.

I'm intrigued by the technology the GF100 brings to the table, but not enough to go out and grab one just yet. There's no question that NVIDIA is scrambling to get a respin done, and no doubt in my mind that the respin will be a much better product. The GTX 480 is really just a stop-gap product for PR damage-control IMO. I'm very much looking forward to seeing their next card.

This!!!!
 
I suspect nVidia is basically waiting for 28nm. That also gives them time to tweak the architecture and remove the worst of its problems. I suspect that chip will be much better. Remember, the Radeon 58xx is based on the 2900XT, which was ATi's greatest ever failure.

ATi will respond, but I suspect they've got to be looking first at the GPU compute market (where Fermi is still a 100% success - there is nothing that can compete with it), not consumer graphics. For all the disappointment with the GeForce Fermi, you've got to remember that the Tesla version is going to make them silly amounts of money - very likely a lot more than the Radeon 58xx will (GPU compute card mark-ups are just massive).
 
gpu compute market

That keeps coming up - wouldn't the Quadro line be better for GPGPU applications? Isn't that the whole purpose of the Quadro line? These cards are aimed at gamers.
 
That keeps coming up - wouldn't the Quadro line be better for GPGPU applications? Isn't that the whole purpose of the Quadro line? These cards are aimed at gamers.

Quadro is for CAD, graphic design, computer animation, etc. - not really GPGPU. I guess it could be adapted, but I don't think it's really designed for that. That's why Nvidia built Tesla, but that was a reflex reaction to Intel making Larrabee.
 
Please forgive my ignorance, I come here to learn how any card will improve my gameplay experience. What apps besides Folding are "GPGPU" apps that a GPGPU enthusiast would want this card for?
 
Please forgive my ignorance, I come here to learn how any card will improve my gameplay experience. What apps besides Folding are "GPGPU" apps that a GPGPU enthusiast would want this card for?

While a lot of GPGPU apps are in-house or bespoke, there are several available, such as the Badaboom encoder - nV have a CUDA page on their website, which is a good place to start. While it is certainly a growing phenomenon, there's nothing out there of relevance to my interests as of yet. I do a bit of tinkering with it myself though, mainly just porting code that is easily parallelized or is already using OpenMP (I'm too lazy to do 'normal' multi-threading unless I'm being paid to).
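
To give a flavour of what that porting looks like, here's a minimal sketch - a hypothetical saxpy-style loop, not from any real app - where the OpenMP pragma on a CPU loop becomes a CUDA kernel launched with one thread per element:

Code:
// Hypothetical port of an OpenMP loop to CUDA. The CPU original is just:
//   #pragma omp parallel for
//   for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);    // host data

    float *x, *y;                                   // device buffers
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemcpy(x, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(y, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;      // round up to cover all n
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);

    cudaMemcpy(hy.data(), y, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);                   // expect 4.0 = 2*1 + 2

    cudaFree(x);
    cudaFree(y);
    return 0;
}

The kernel itself is the easy part; the tedium is the explicit copies to and from device memory, which is exactly why code that's already cleanly parallelized under OpenMP is the low-hanging fruit.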
 
So Fermi is here, and its performance is in the same ballpark as ATi's current parts. But the engineering at its heart is radically different, going for brute-force "MOAR POWAH"!

So while they now have a card to compete with ATi - can they produce a next generation with the same philosophy? I say they cannot and I present you with what I consider the end of the brute force approach.

Code:
Core   Card                Fab   Die     Core    Mem Bus  Transistors  Watts
                           (nm)  (mm^2)  (MHz)   (bits)   (millions)
G80    GeForce 8800 Ultra  90    484     612     384      681          ?
GT200  GTX 280             65    576     602     512      1400         ?
GF100  GTX 480             40    529     700     384      3200         ?

Now if we look at the history of transistor counts from the G80 onward, we see each generation more than doubling the previous one. So let's make some future predictions...

If we extrapolate, the next-generation NV product will have:

6,400,000,000 transistors
32nm fab process

The transistor count follows the trend, and let's even give them the benefit of the doubt and assume they can manufacture it on the same process node Intel currently uses, 32nm. That leaves us with a die size of approximately:

655 mm^2
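
As a quick sanity check on that number, here's the back-of-envelope scaling behind it (a sketch, assuming die area grows linearly with transistor count and shrinks with the square of the feature size - ideal scaling that real processes rarely achieve):

\[
A_{\text{next}} \approx 529\,\text{mm}^2 \times \frac{6400}{3200} \times \left(\frac{32}{40}\right)^2 = 529 \times 2 \times 0.64 \approx 677\,\text{mm}^2
\]

So if anything, ~655 mm^2 is giving them the benefit of the doubt a second time.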

It just doesn't seem feasible for them to continue down the current path. They are going to be forced to take a different approach, be it mimicking ATi's more modular design or some other path. The power requirements for a 655mm^2 die would be astronomical.

Ummm, ATi's transistor count is doubling every generation as well. What's your point? It's not like 3200M is that much more than 2200M, and ATi's design is no more modular than nVidia's - it's just better designed overall than GF100.

If nVidia designed the chip properly instead of rearranging the pipeline for tessellation and parallel-threaded performance, the higher transistor count would be more effective.
 
vick1000 said:
Ummm, ATi's transistor count is doubling every generation as well. What's your point?

If nVidia designed the chip properly instead of rearranging the pipeline for tessellation and parallel-threaded performance, the higher transistor count would be more effective.

Dude, you're like a child who walks into the middle of a conversation. Please read the entire thread first. TSMC's problems have been documented in Anandtech's ATI article.

JayteeBates said:
Please forgive my ignorance, I come here to learn how any card will improve my gameplay experience. What apps besides Folding are "GPGPU" apps that a GPGPU enthusiast would want this card for?

There used to be a big push for GPUs to offload video processing from the CPU. I think ATI and nVidia have both beaten that issue by now, and I don't know what else they can do there. Encoders running on the GPU are nice, but that's not the reason I buy high-end graphics.

nVidia is trying to sell me something I don't care about and they aren't giving me any reasons to care.
 
That keeps coming up - wouldn't the Quadro line be better for GPGPU applications? Isn't that the whole purpose of the Quadro line? These cards are aimed at gamers.

The GeForce, Quadro, and Tesla lines all use the same chip inside, so you have to build one chip to cover all of them. The Fermi chip suffers a bit as a GeForce chip in order to rock as a Tesla one. The Radeon 58xx is basically a great consumer graphics chip, but it has no hope vs Fermi in the GPU compute market.
The reason ATi cares is that while nVidia can sell a GTX 480 for $500, they can sell a Tesla version for several thousand dollars. It's the same chip with almost the same board and cooling, so all that extra $$$ is profit. I'm sure ATi wants in on that market.
 
Dude, you're like a child who walks into the middle of a conversation. Please read the entire thread first. TSMC's problems have been documented in Anandtech's ATI article.
...

Edited for clarity. I was responding to the original post.
 