Fermi Working Samples for CES?

Realistically, the 5870 runs on par with the 295. I know people argue about one GPU versus two on a card, but it's one card vs. one card (and if you scan reviews, the 295 is actually a little faster). So as a single-card solution NV still has the top spot. Is it DX11 compatible? No, but for the games out now and over the next few months that doesn't really matter. Let's be honest: all NV has to do is drop prices on their 2xx line (260 and above) to battle the shortage of 58xx cards. I would SLI two 285s if they were $200 a piece ;)...

Not that I am an NV or ATI fanboy, I have both. Just from a marketing perspective, NV has the product line out now to compete with the new ATI cards; all they have to do is shift focus from performance to price (a 5870 for $400, or a 295 for $300?)...

What NV is going to have to do when the 300 line does come out is smoke the competition and do it at a decent price point. Say the GT380 is 100% faster than a 5870; even at $600, I don't think a lot of people will care.
 
Yeah, and the GTX 295 is also a multi-GPU setup, with much higher power draw, SLI problems, etc. And how much do you think Nvidia can afford to reduce prices? (Not that ATI can't answer it with its 4000 line.)
 

At $100 a die for the 295, it's already $200+ just in die cost; then add everything else on top of that. There's no way they can sell a 295 at $300.
 
*snorts*

Okay, that was funny :D I guess all those scientists, hospitals and businesses running high-performance tasks on GPUs are just confused :p

It's a real shame that nVidia is taking so long this time around. Fermi looks like the long-awaited shift towards the more generic GPU/vector-processor design Intel has been hyping for years with its Larrabee. I have no doubt that Fermi as a cGPU heralds the future of GPUs; it's just annoying that the future has been postponed for a few more months :(

Yeah...and how is that potential market working out for Nvidia so far?
 
Yeah, and the GTX 295 is also a multi-GPU setup, with much higher power draw, SLI problems, etc. And how much do you think Nvidia can afford to reduce prices? (Not that ATI can't answer it with its 4000 line.)


They were examples - and it's not like they would be the first company to sell a product at or below cost (hi Sony!! Hiya Microsoft!). I'm not knocking ATI; the 58xx cards are damn impressive. But honestly, are they light-years beyond the 2xx or even the 48xx cards?
 
Erm... Juniper is the 5750 and 5770... Whatever happened to our NVDA stock argument? And Q1 vs. Q2 Fermi release... Vengeance, you're normally sharper than this... :D
Yeah, see the post above yours. I brain farted on that one.

I thought that the GTS 250 was the same as the G92-based 8800 GTS 512, 9800 GTX and 9800 GTX+, not the G80-based 8800 GTX.
Die shrunk and clocked higher, but pretty much.

Realistically, the 5870 runs on par with the 295. I know people argue about one GPU versus two on a card, but it's one card vs. one card (and if you scan reviews, the 295 is actually a little faster). So as a single-card solution NV still has the top spot. Is it DX11 compatible? No, but for the games out now and over the next few months that does not really matter. Let's be honest: all NV has to do is drop prices on their 2xx line (260 and above) to battle the shortage of 58xx cards. I would SLI two 285s if they were $200 a piece ;)...
Two 285s would be faster, and at $200 each that wouldn't be a bad deal.

The reason they aren't dropping their prices is the zero supply on the 5870 front.

Not that I am an NV or ATI fanboy, I have both. Just from a marketing perspective, NV has the product line out now to compete with the new ATI cards; all they have to do is shift focus from performance to price (a 5870 for $400, or a 295 for $300?)...
$300 would be a loss leader for sure. I'd go snag one for that price. If they gave them away for free, they would take all focus off ATI's new chips too. ;)

What NV is going to have to do when the 300 line does come out is smoke the competition and do it at a decent price point. Say the GT380 is 100% faster than a 5870; even at $600, I don't think a lot of people will care.
You're reaching into hyperbole, but yeah. If Nvidia comes out with Fermi, clocks it at 650 MHz, and sells it at $400, they will have a shot, depending on what the 5890 refresh cards look like. That would put it, on paper, at 20% faster than a 5870 (about 2.4x a 280). However, that's a high clock for Nvidia, and it assumes a CUDA core is as powerful as a GT200 core. I don't think it's a bad assumption that a GT200 core is at least equal to a CUDA core, clock for clock, on paper. I suspect there will be a LOT of driver optimization left to do. I don't think there is nearly as much room for the 5870 to mature, given how similar its architecture is to the 4870, in terms of performance gains post-launch.
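For what it's worth, here is the back-of-envelope scaling that estimate implies, as a minimal sketch. It assumes the rumored 512 CUDA cores, a hot clock derived from a 650 MHz core clock at roughly the GT200 shader-to-core ratio (~2.15), and that one CUDA core equals one GT200 SP clock for clock; every Fermi number in it is an assumption, not a confirmed spec.

```python
# Back-of-envelope shader-throughput scaling, per the assumption that one
# CUDA core ~= one GT200 SP, clock for clock. All Fermi numbers here are
# rumored/hypothetical placeholders, not confirmed specs.

gtx280 = {"cores": 240, "shader_mhz": 1296}        # known GTX 280 shader clock
fermi  = {"cores": 512, "shader_mhz": 650 * 2.15}  # assumed ~GT200 shader:core ratio

def throughput(card):
    """Relative shader throughput ~ cores x shader clock."""
    return card["cores"] * card["shader_mhz"]

ratio = throughput(fermi) / throughput(gtx280)
print(f"Hypothetical Fermi vs. GTX 280: {ratio:.2f}x")  # ~2.30x with these inputs
```

Getting from there to "20% faster than a 5870" then depends on where you place the 5870 relative to a GTX 280 (the estimate above implicitly treats it as roughly 2x) and on whether theoretical throughput actually translates into frames.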

Yeah, and the GTX 295 is also a multi-GPU setup, with much higher power draw, SLI problems, etc. And how much do you think Nvidia can afford to reduce prices? (Not that ATI can't answer it with its 4000 line.)
It IS much more power, so much so that the power draw is a real issue. I've never been a fan of people arguing over a 20 watt difference in power draw. This, however, is an order of magnitude different.

At $100 a die for the 295, it's already $200+ just in die cost; then add everything else on top of that. There's no way they can sell a 295 at $300.
The die cost is probably closer to $50-60. However, I agree they aren't going to sell it at $300.
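Neither die-cost figure is verifiable from the outside, but the shape of the estimate is easy to sketch. In the rough model below, every input (wafer price, die area, defect density) is an illustrative guess, not anything TSMC or Nvidia have disclosed:

```python
import math

# Rough cost-per-good-die model. All inputs are illustrative guesses,
# not disclosed figures.
WAFER_COST_USD = 5000.0  # assumed 300 mm wafer price
WAFER_DIAMETER = 300.0   # mm
DIE_AREA_MM2   = 470.0   # assumed GT200b-class die size
DEFECT_DENSITY = 0.3     # defects per cm^2, assumed

def dies_per_wafer(die_area, diameter=WAFER_DIAMETER):
    """Classic gross-die estimate that accounts for edge loss."""
    r = diameter / 2.0
    return math.pi * r**2 / die_area - math.pi * diameter / math.sqrt(2.0 * die_area)

def yield_fraction(die_area_mm2, d0_per_cm2=DEFECT_DENSITY):
    """Simple Poisson yield model."""
    return math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)

gross = dies_per_wafer(DIE_AREA_MM2)
good = gross * yield_fraction(DIE_AREA_MM2)
print(f"gross dies: {gross:.0f}, good dies: {good:.0f}, "
      f"cost per good die: ${WAFER_COST_USD / good:.0f}")
```

Plug in different wafer prices and defect densities and the answer swings from well under $100 per die to well over it, which is probably why the two estimates above disagree.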
 
Yeah...and how is that potential market working out for Nvidia so far?

There's nothing "potential" about that market. Professional GPUs like the Quadro have been selling well for years, and now Tesla cards are selling like hotcakes, insofar as that's possible in a high-margin professional market. Hospitals, clinics, research facilities, oil companies and HPC shops are the biggest GPGPU market for nVidia at this point.
 
So ATI will be there with a new high-end chip when Fermi arrives? Does no one think the Fermi delay will also shorten the time until Nvidia's next high end after that arrives?!
That is of course the case, too.
 
A bonus for those people waiting for hard benchmark numbers on the GT300: if it doesn't come out until early next year (March/April) or maybe a bit later, prices on the 5870 and possibly the 5870x2 will have come down.

Why should prices drop with no competition??
 
Yes, because Kyle and Charlie are both renowned authorities on the inner workings of Nvidia and know what they are thinking and doing on a day-to-day basis.

Anything concerning Fermi should be considered FUD, unless we see nothing by Christmas. It seems a lot of people have forgotten that Nvidia was very, very quiet about G80 until about four weeks before launch.

Charlie is a douche; he claims Fermi (G300) won't be more than 10-20% faster than a 5870, and has claimed performance wins for ATI over Nvidia for competing generations in the past. He is a paid AMD tool and should be ignored.

Yeah, Kyle and Charlie are kind of renowned for their knowledge of the inner workings of Nvidia and the AIBs. ;)

Well, they're at least pounding the pavement to try to figure out what the hell is going on.

While we didn't know jack about G80 until a couple weeks before launch, we knew WHEN it was launching.

Finally, the common consensus is 20% or so, with which I agree. Someone in the know at B3D was saying the word was 70% of 5870 CrossFire, IIRC (Trini may recall better than I), which would be ~25%.

I don't know what speed you expect 48 ROPs / 128 TMUs / 1024 flops per cycle to be running at, but barring some Dawn dust, it's going to be a slower core speed than the 5870, since it's supposedly a ~60% larger die, so it starts to average out.

My personal theory puts it at 700/1750 for the full part and 640/1600 for the 448 SP (360) part, with a 2.5:1 shader-to-core ratio, which keeps everything at the same pixel/tex/flop ratio as the GTX 285, and the same performance differential between the two parts as between the GTX 280 and GTX 260, respectively. At those speeds we're talking 70% and 35% faster theoretical than the GTX 285, not counting the scheduler enhancements and all that jazz in the new architecture that could improve efficiency. Using simple math:

GTX285 was ~10% faster than 4890
GTX380 may be ~70% faster theoretical than GTX285.
5870 is ~60% faster than 4890.

10 + 70 - 60 = 20% faster. Efficiency enhancements to the SFUs and such could be anything, but ~5% sounds reasonable... So there, that's a realistic theory to back the Charlie/B3D guy up.

Yes, that's just an idea of what it COULD be, based on what we know and think we know, and it could very well be very wrong. The point is, though, don't expect miracles that are unrealistic. Look at all the facts before you call him nuts: the specs (white paper), the number of transistors and therefore the size compared to former nVidia generations and chips on the same process, the speeds achievable per TDP on that process, and there you go. Charlie doesn't just 'guess'... well, not ALL the time. :D I feel he usually has a grip on the situation one way or another, through sources or by looking at the facts himself. That doesn't mean I don't think he's sensationalist in his writing, but you have to give credit where it's due when he has a source or breaks/publicizes a story, e.g. the AIBs being told not to expect Fermi until April/May.
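To make the "simple math" concrete, here is the kind of ratio calculation that theory boils down to: a sketch using the known GTX 285 configuration against the purely speculative 48 ROP / 128 TMU / 512 SP part at 700/1750 MHz from the post above (the Fermi column is guesswork, not a spec):

```python
# Theoretical throughput ratios: the known GTX 285 configuration vs. the
# speculated Fermi config from the post above. The Fermi column is rumor
# and guesswork, not a spec sheet.

gtx285 = {"rops": 32, "tmus": 80,  "sps": 240, "core_mhz": 648, "shader_mhz": 1476}
fermi  = {"rops": 48, "tmus": 128, "sps": 512, "core_mhz": 700, "shader_mhz": 1750}

def rates(c):
    return {
        "pixel fill":  c["rops"] * c["core_mhz"],    # ROPs x core clock
        "texel fill":  c["tmus"] * c["core_mhz"],    # TMUs x core clock
        "shader rate": c["sps"]  * c["shader_mhz"],  # ignores MAD/FMA/dual-issue details
    }

for key, base in rates(gtx285).items():
    print(f"{key:11s} ratio: {rates(fermi)[key] / base:.2f}x")
```

With those inputs the fixed-function rates land in the ballpark of "roughly 70% faster theoretical", while raw shader throughput comes out much higher; where the real number lands depends on which of those is actually the bottleneck.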
 
There's nothing "potential" about that market. Professional GPUs like the Quadro have been selling well for years, and now Tesla cards are selling like hotcakes, insofar as that's possible in a high-margin professional market. Hospitals, clinics, research facilities, oil companies and HPC shops are the biggest GPGPU market for nVidia at this point.

Read the financials. The professional Quadro business at nVidia is 0.2% of its revenue. The margins are nice, but the volume is tiny.

Unless that market segment starts growing explosively in Q1 of 2010, and keeps exploding for several quarters in a row, it will be irrelevant to Fermi's success.

Maybe a Fermi successor in 2012-2015 starts to bring in significant revenue. I guess that's what's keeping nVidia investing in those extra transistors.

Since graphics card generations become obsolete every 18-24 months, nVidia would never recover the cost of designing Fermi if it had to come from professional-segment revenue during 2010-2011. The professional segment's starting size is too small, and the life of a graphics generation too short.

It was a good try at distracting investors on presentation day, but every professional investor could see through it the next day, when they were back at their offices and ran the revenue spreadsheet projections.
 
Since September, NVDA has lost 27%.
Since the middle of October, AMD has lost 27% of its value. It looks like AMD's investors are sweating even more! Oh, wait, it's just cherry-picking bullshit that means nothing.
Actually, it's not cherry-picking bullshit. Graphics make up what percentage of AMD's sales, and what percentage of NVDA's? AMD investors have always been sweating because AMD as a company sucks; it bleeds money everywhere. However, if Cypress failed, AMD probably would not. If Fermi failed, NVDA could easily kick the bucket and be up for sale at a cheap price. Your statement is like comparing Intel with ATI; it's apples and oranges. If anything fruitful were coming in the next few months for either AMD or NVDA, their stock prices would reflect it. Obviously there isn't, despite the success of the 5800s.
 
What, are you on crack or something? 20 watts, lol: http://www.anandtech.com/video/showdoc.aspx?i=3643&p=26 - it's 60 watts at best; I saw an 80 watt difference in some reviews.
/sigh. Stop internet road raging for a min.
I was not saying that the 5870 draws 20 watts more than a 295. I was referring to the past, where people argued over the power draw between cards like a 4870 vs. a GTX 275, which from your graph is 31 watts apart. I was talking about the difference between a pair in SLI, like 2x 275s, and a 5870, which is arguably close in performance. Yet there is a 243 watt difference in power, i.e. an order of magnitude beyond the 20 watts for similar performance I had mentioned earlier in the post.
 
If I do not see a product from Nvidia by January, I will definitely be purchasing a 5870 or 5870x2 (if and when they do come out). Very disappointed by Nvidia this time around....

It's gonna be, what, at least a 3-5 month gap after the 58x0 architecture was released before Nvidia puts something new out? Forget January; think of it this way: there's gonna be a crapload of people who will take their holiday money and buy a 58x0 card, and the majority of those people probably will not make another video card purchase for another six months after... Nvidia definitely loses this round.
 
I don't mind waiting till March for Nvidia's new cards, if that's the case. While nice, the 5870 isn't a big enough boost for me - however, for 4850 and 8800 GT owners I see it as a very enticing upgrade. I would consider getting the 5870x2 (or 5900 series, whatever the final name ends up being), but it looks like it won't fit in my case.
 
It's gonna be, what, at least a 3-5 month gap after the 58x0 architecture was released before Nvidia puts something new out? Forget January; think of it this way: there's gonna be a crapload of people who will take their holiday money and buy a 58x0 card, and the majority of those people probably will not make another video card purchase for another six months after... Nvidia definitely loses this round.

That is if they can find 58xx cards before the holidays... Sigh... Want one more... ;)
 
That is if they can find 58xx cards before the holidays... Sigh... Want one more... ;)

Something a lot of people seem to be missing: buying a 58xx right now is like playing whack-a-mole. And judging by what TSMC is saying, it's not going to get any easier to find a 5870 any time soon.
 
Well, I have no idea what Fermi is going to do.

Oh, and this math was odd; you can't just add the percentages together, since one is a superset of the other. It's 70% on top of x + 10%.

GTX285 was ~10% faster than 4890
GTX380 may be ~70% faster theoretical than GTX285.
5870 is ~60% faster than 4890.

I used 2 as the base because 1 does screwy things to math.

4890 (A2) = 2
GTX 285 (N2) = 2 x 1.1 = 2.2
GTX 380 (N3) = 2.2 x 1.7 = 2.2 + 1.54 = 3.74
5870 (A3) = 2 x 1.6 = 2 + 1.2 = 3.2

So the numbers are close; I'm just not sure if they have any meaning.
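Put as a tiny script, the same arithmetic compounds the claimed gains multiplicatively instead of adding the percentages; the input percentages are still just the thread's estimates, not measurements:

```python
# Compound the claimed relative-performance estimates multiplicatively.
# The input percentages are thread estimates, not measurements.

hd4890 = 1.00
gtx285 = hd4890 * 1.10  # "GTX 285 ~10% faster than 4890"
gtx380 = gtx285 * 1.70  # "GTX 380 maybe ~70% faster (theoretical) than GTX 285"
hd5870 = hd4890 * 1.60  # "5870 ~60% faster than 4890"

print(f"GTX 380 vs. 5870: {gtx380 / hd5870:.3f}x")  # ~1.169, i.e. ~17% faster
```

That matches the 3.74 vs. 3.2 figures above: about 17% faster, close to but not exactly the "10 + 70 - 60 = 20%" shortcut.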

But one of the markets Nvidia wants to exploit is the companies using hundreds of chips, who would love to cut that number in half or even to a quarter, which from the white papers looks possible. They could be blowing smoke up our butts, but hey, no one knows. That market lives on high-margin gains. Tesla cards are making money, and Fermi should be better at the same game.

Day traders make life interesting... I don't trade stock, and even I understand market caps, stock splits and such. Nvidia has more non-preferred stock out there than AMD; they are also in the black, as opposed to AMD, which is in major debt. I understand that from an accounting point of view it is better to have people owe you money than to have cash on hand (I still don't think this makes sense), but owing people money you don't have is probably not a good thing. AMD's graphics division is making most of AMD's money right now, so they need hits. Nvidia has enough money that it could afford to lose money for two quarters in a row before the OEMs start hurting Nvidia's bottom line.

Most people at HardOCP want new hardware now, but I'd prefer that Nvidia wait until they have an 8800 rather than a 5800. What's interesting is that prior to the 8800, Nvidia was in really bad shape; they were thought to have no chance, and everyone laughed at the few specs that got leaked, saying there was no way even a quarter of that would make a good card. The specs were right, and we got an insane card that ATI had to spend several generations, and several engineers from AMD's processor division, to beat.

Now the specs we have are:
512 CUDA cores = 16 SMs x 32 cores

six 64-bit memory partitions (384-bit memory interface)
up to 6 GB of GDDR5 DRAM

GT200 vs. Fermi:
30 FMA ops/clock vs. 256 FMA ops/clock (double precision)
240 MAD ops/clock vs. 512 FMA ops/clock (single precision)

What these numbers mean when translated to game performance is totally up in the air. I'm guessing that the first card is going to be held back by memory speed. Is Nvidia going to try to save power by using fewer memory chips and less bandwidth on the gamer cards, and more on the Fermi compute cards where power is not an issue? Samsung's new GDDR5 is 1.3 V, which is cheaper (in power and heat) than the existing 1.5 V parts and much cheaper than 1.8 V GDDR3, which means they can run it faster. The new chips are supposed to achieve 7 Gbps, which on the 384-bit bus works out to 336 GB/s of peak bandwidth, compared to the 256-bit and 512-bit configurations quoted in the article. And that's on a 40 nm process. Does Samsung have their own foundry, or are they also at TSMC, which is getting bogged down with its 40 nm process?
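Since peak-bandwidth numbers keep coming up, the arithmetic itself is simple; a quick sketch follows, where the 7 Gbps rows lean on the rumored 384-bit Fermi bus and Samsung's announced parts, so treat those rows as assumptions rather than board specs:

```python
# Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
# The 7 Gbps rows assume the rumored 384-bit Fermi bus and Samsung's announced
# parts; the 4.8 and 2.484 Gbps rows roughly match a 5870 and a GTX 285.

def peak_bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

configs = [
    (256, 4.8),    # ~HD 5870 (GDDR5)
    (512, 2.484),  # ~GTX 285 (GDDR3)
    (384, 7.0),    # rumored Fermi bus width at the 7 Gbps parts
    (256, 7.0),    # narrower bus at the same data rate
]
for bus, rate in configs:
    print(f"{bus:3d}-bit @ {rate:.3f} Gbps -> {peak_bandwidth_gbs(bus, rate):6.1f} GB/s")
```

The exact "GB/s peak" figure therefore swings a lot with the bus width assumed, which is probably where the different numbers floating around come from.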
 
With Fermi, we may be looking at the "Playstation 3 of video cards".
 
Something like that. Essentially, it seems like a product that is trying to do too much, and doesn't really have a niche.
 
Something a lot of people seem to be missing: buying a 58xx right now is like playing whack-a-mole. And judging by what TSMC is saying, it's not going to get any easier to find a 5870 any time soon.

And how will this be any easier for nVidia, with their larger die sizes?
 
Why should prices drop with no competition??

Not only that, but right now AMD cards are, for the most part, priced appropriately against their competition. The 5870 is $100 less, give or take, and a bit slower than the 295 in most games. The 5850 competes well against the 285 for quite a bit less money. The only pricing glitch AMD seems to have right now is the 5770, which is a bit too close in price to the 260 and performs on par or a bit slower. If anything, they have room to raise prices on the 5850 and 5870.
 
What I think is:

Nobody realized how starved the (gaming-relevant) graphics market was. Nvidia's parts were old; I mean, rebadging and shrinking them didn't actually make them much faster. And AMD was just about catching up with them.

On the other hand, we have more taxing (or let's say graphically interesting) games than there were 18 months ago.

As soon as somebody pushes out a card that really shows better performance than previous generations, everybody wants one. What a gamer wants is to play games with full eye candy, at the highest possible resolution, and as fluid as water.
That is exactly what ATI concentrated on (and delivered).

Fermi is something planned for "computing graphics and physics". Nice for industry and science, and I expect the full-on monster (the 380) to be way more expensive than my complete high-end computer. No need to go that far for a gamer, even with a 30-inch monitor, or three of them for that matter, lol.
But if you are a scientist, you might get the same computational power as a server cluster for half its price; that might be worth it, indeed.
For it to work, it'll need its own toolbox, and that is something Nvidia is really good at. Their success is due in no small part to their drivers and tools.
The question is: is the market asking for it? And is it big enough to feed Nvidia?
 
Wow!!!! March or May, does this mean that the Fermi threads will stop? Seriously? How can anyone say with a straight face that Fermi is good for Nvidia?
 
Would you like to ask Nvidia a question?
Latest questions, one with an answer from Jen-Hsun. Remember, if you want to keep fielding questions, use the original post in this thread.


Q: With AMD's acquisition of ATI and Intel becoming more involved in graphics, what will NVIDIA do to remain competitive in the years to come?


Jen-Hsun Huang, CEO and founder of NVIDIA: The central question is whether computer graphics is maturing or entering a period of rapid innovation. If you believe computer graphics is maturing, then slowing investment and “integration” is the right strategy. But if you believe graphics can still experience revolutionary advancement, then innovation and specialization is the best strategy.

We believe we are in the midst of a giant leap in computer graphics, and that the GPU will revolutionize computing by making parallel computing mainstream. This is the time to innovate, not integrate.

The last discontinuity in our field occurred eight years ago with the introduction of programmable shading and led to the transformation of the GPU from a fixed-pipeline ASIC to a programmable processor. This required GPU design methodology to include the best of general-purpose processors and special-purpose accelerators. Graphics drivers added the complexity of shader compilers for Cg, HLSL, and GLSL shading languages.

We are now in the midst of a major discontinuity that started three years ago with the introduction of CUDA. We call this the era of GPU computing. We will advance graphics beyond “programmable shading” to add even more artistic flexibility and ever more power to simulate photo-realistic worlds. Combining highly specialized graphics pipelines, programmable shading, and GPU computing, “computational graphics” will make possible stunning new looks with ray tracing, global illumination, and other computational techniques that look incredible. “Computational graphics” requires the GPU to have two personalities – one that is highly specialized for graphics, and the other a completely general purpose parallel processor with massive computational power.

While the parallel processing architecture can simulate light rays and photons, it is also great at physics simulation. Our vision is to enable games that can simulate the interaction between game characters and the physical world, and then render the images with film-like realism. This is surely in the future since films like Harry Potter and Transformers already use GPUs to simulate many of the special effects. Games will once again be surprising and magical, in a way that is simply not possible with pre-canned art.

To enable game developers to create the next generation of amazing games, we’ve created compilers for CUDA, OpenCL, and DirectCompute so that developers can choose any GPU computing approach. We’ve created a tool platform called Nexus, which integrates into Visual Studio and is the world’s first unified programming environment for a heterogeneous computing architecture with the CPU and GPU in a “co-processing” configuration. And we’ve encapsulated our algorithm expertise into engines, such as the OptiX ray-tracing engine and the PhysX physics engine, so that developers can easily integrate these capabilities into their applications. And finally, we have a team of 300 world-class graphics and parallel computing experts in our Content Technology group whose passion is to inspire and collaborate with developers to make their games and applications better.

Some have argued that diversifying from visual computing is a growth strategy. I happen to believe that focusing on the right thing is the best growth strategy.

NVIDIA’s growth strategy is simple and singular: be the absolute best in the world in visual computing – to expand the reach of GPUs to transform our computing experience. We believe that the GPU will be incorporated into all kinds of computing platforms beyond PCs. By focusing our significant R&D budget on advancing visual computing, we are creating breakthrough solutions to address some of the most important challenges in computing today. We build GeForce for gamers and enthusiasts; Quadro for digital designers and artists; Tesla for researchers and engineers needing supercomputing performance; and Tegra for mobile users who want a great computing experience anywhere. A simple view of our business is that we build GeForce for PCs, Quadro for workstations, Tesla for servers and cloud computing, and Tegra for mobile devices. Each of these targets different users, and thus each requires a very different solution, but all are focused on visual computing.

For all of the gamers, there should be no doubt: you can count on the thousands of visual computing engineers at NVIDIA to create the absolute best graphics technology for you. Because of their passion, focus, and craftsmanship, the NVIDIA GPU will be state-of-the-art and exquisitely engineered. And you should be delighted to know that the GPU, a technology that was created for you, is also able to help discover new sources of clean energy, help detect cancer early, or just make your computer interaction lively. It surely gives me great joy to know that what started out as “the essential gear of gamers for universal domination” is now off to really save the world.

Keep in touch.

Jensen


Q: How do you expect PhysX to compete in a DirectX 11/OpenCL world? Will PhysX become open-source?

Tom Petersen, Director of Technical Marketing: NVIDIA supports and encourages any technology that enables our customers to more fully experience the benefits of our GPUs. This applies to things like CUDA, DirectCompute and OpenCL—APIs where NVIDIA has been an early proponent of the technology and contributed to the specification development. If someday a GPU physics infrastructure evolves that takes advantage of those or even a newer API, we will support it.



For now, the only working solution for GPU accelerated physics is PhysX. NVIDIA works hard to make sure this technology delivers compelling benefits to our users. Our investments right now are focused on making those effects more compelling and easier to use in games. But the APIs that we do that on is not the most important part of the story to developers, who are mostly concerned with features, cost, cross-platform capabilities, toolsets, debuggers and generally anything that helps complete their development cycles.




Q: How is NVIDIA approaching the tessellation requirements for DX11, given that none of the previous and current generation cards have any hardware specific to this technology?



Jason Paul, Product Manager, GeForce: Fermi has dedicated hardware for tessellation (sorry Rys :-P). We’ll share more details when we introduce Fermi’s graphics architecture shortly!
 
So, as I see it, this TSMC situation is very good for Nvidia. If they can get enough product for testing, then by the time Nvidia's engineers work out the hardware, TSMC will be ready to pump them out. Along with AMD's refresh, that sets up a nice price war, and we win!
 
Yes, but please less spandex.

That quote you posted from the Ask Nvidia A Question thread is quite interesting. If nothing else, nVidia as a company has a great marketing team, and honestly their answers made it sound like nVidia was still relevant? competitive? and planning something really 'special' with Fermi. Especially when they vaguely discussed that Fermi has a... method to calculate tessellation that they'd be 'announcing soon!'. I guess maybe they will try to use some of those CUDA cores to do tessellation? Who knows.

Personally, I think it was really great marketing spin, and great marketing is as valuable as fertilizer... to someone who isn't a farmer... or a gardener... or, well... you get the point. I think someone needs to ask in that thread, 'Why have we not seen a single benchmark of Fermi? Does Fermi play games?'
 
ChrisRay and Sean asked nVidia to address the "software-only tessellator" rumor.

We got Nvidia to loosen up a bit in their resolve about Fermi graphics. With our latest questions we pushed to have the Fermi tessellator rumor put to rest for good.

Yes, Fermi has dedicated hardware for tessellation. So glad I can stop biting my tongue about this one :p

Nvidia is keeping the "graphics" hardware discussion of Fermi close to their chest. Nobody knows its pipeline structure, TMUs, etc. And the tessellation remarks came from rumors.

Sean and I convinced Nvidia that the rumor needed to be taken seriously, hence why it appeared in our latest round of questions.

Fermi's shader units are not "emulating" it; there is dedicated circuitry for hardware tessellation. Yes, you can do tessellation via shaders, but that doesn't mean it's the best way to go about it.
 
Better than the 'hardware-assisted' vertex shaders on the MX440 (Go, in my case :p) that were finally dropped in the post-90.xx drivers (IIRC; it was too long ago).
 
Read the financials. The professional Quadro business at nVidia is 0.2% of its revenue. The margins are nice, but the volume is tiny.

Unless that market segment starts growing explosively in Q1 of 2010, and keeps exploding for several quarters in a row, it will be irrelevant to Fermi's success.

Maybe a Fermi successor in 2012-2015 starts to bring in significant revenue. I guess that's what's keeping nVidia investing in those extra transistors.

Since graphics card generations become obsolete every 18-24 months, nVidia would never recover the cost of designing Fermi if it had to come from professional-segment revenue during 2010-2011. The professional segment's starting size is too small, and the life of a graphics generation too short.

It was a good try at distracting investors on presentation day, but every professional investor could see through it the next day, when they were back at their offices and ran the revenue spreadsheet projections.

The margins are nice indeed. Forget revenue; look at the percent profit per unit sold, and look at the further potential in the rapidly growing GPGPU market. Most of the profit at this point for nVidia and ATI is in mainstream and budget GPUs, whereas high-end units are loss leaders. The GPGPU market could potentially reverse, or at least equalize, this situation. Future supercomputers could be constructed out of thousands of cGPUs like Fermi instead of racks of quad-core computers. This has been done once already, and so far it's a huge success. Making GPUs more general purpose only makes this a more attractive option in the future.

I'm just saying :)
 
All this talk about the Fermi delay... if you can't actually buy a 5870, what's the difference?
 