Fermi Working Samples for CES?

Yeah, I figured they either had to be n00bs or trolls. I just felt like ranting :D

BTW, I'm a dudette ;)

This really was their best strategy: scalability and market placement. Still, there was a point there; they had most of the pieces needed for DX11 and an easily scalable design. It was pretty slick. Now why can't AMD take this approach from ATI and move it over to the CPU division?
 
I kind of miss the K7 days when AMD managed to completely catch Intel by surprise :( The K8 was a nice evolutionary step from there, but after that things just kind of... didn't go anywhere, other than AMD slipping back into Intel's shadow. Really a shame; I wish Intel got some real competition from AMD instead of the latter merely nibbling away at the mainstream and budget markets.
 
It's always been a give-and-take market. AMD and Intel have traded the top spot at times in the recent past, not just with the K7, and these lulls have pushed both companies to innovate on their ideas and technology. It's hard to set goals when you are on top; it's easier to set goals when you can see what the competition is capable of and what they are targeting in the near future.
 
Neither a noob nor a troll. I just don't respect anything at all coming out of Nintendo lately, especially not a 2001-ish era polished turd with fancy gimmick controls. Hollywood and Broadway are nothing more than clock increased, process shrunk Gekko and Flipper, respectively -- which is fine considering Nintendo's market. The market that the Wii caters to is just fine with having to rebuy SNES games on their GBAs, N64 games on their DS and through VC for ridiculous prices.

I stand by my assertion that Nintendo's R&D was minimal in comparison with the other console manufacturers, especially given

1) the fact that the console was sold at a profit from launch
2) 100% backwards compatibility with previous generation hardware AND leaked press reports indicate an evolution rather than a new design
 
I think Intel's adeptness at developing technologies in parallel was part of what kept it on top during the past couple of years. Granted, it gave us a confusing product line for quite some time (some of it is still confusing: i7 on LGA1366, LGA1156, etc...), but for the most part Intel had the resources to have several teams in different parts of the world working on different types of chips for different markets.

That in conjunction with "fab" ability...

All I'm really seeing from AMD is taking the same chips and either sizing them up or cutting them down. There's nothing wrong with that, but you're not going to take any performance crowns that way.

Either way, I'm getting way off topic now! hahahaha....
 
Neither a noob nor a troll. I just don't respect anything at all coming out of Nintendo lately, especially not a 2001-ish era polished turd with fancy gimmick controls. Hollywood and Broadway are nothing more than clock increased, process shrunk Gekko and Flipper, respectively -- which is fine considering Nintendo's market. The market that the Wii caters to is just fine with having to rebuy SNES games on their GBAs, N64 games on their DS and through VC for ridiculous prices.

I stand by my assertion that Nintendo's R&D was minimal in comparison with the other console manufacturers, especially given

1) the fact that the console was sold at a profit from launch
2) 100% backwards compatibility with previous generation hardware AND leaked press reports indicate an evolution rather than a new design

Less R&D than the X360 and PS3? Probably. "little to no R&D expenditure was required" is quite far from the truth if you had read the interviews with the actual people involved with the R&D process. There is the entire brainstorming process, mockups, piles of concepts and ideas which get half worked out to then be scrapped until eventually a more concrete idea surfaces. It's not like Nintendo started with the 'Let's make a supercharged GameCube!' and slapped some PCBs and cases together.

Hindsight makes things look too bloody obvious at times, but I can assure you that at the beginning of an R&D cycle it is anything but.
 
Wait, so if they didn't change anything then there should be forward compatibility too, right? :D

I do have a GameCube lying around, so you mean I can throw Wii discs in it? But the GameCube disc tray is too small! I can't jam the 5" disc in there. What do I do? ;)

Off topic, but the numbers speak for themselves. Who cares if it's the same shit, different day. I don't own one, but the Wii won... Nintendo should get props for this.
 
Less R&D than the X360 and PS3? Probably. "little to no R&D expenditure was required" is quite far from the truth if you had read the interviews with the actual people involved with the R&D process. There is the entire brainstorming process, mockups, piles of concepts and ideas which get half worked out to then be scrapped until eventually a more concrete idea surfaces. It's not like Nintendo started with the 'Let's make a supercharged GameCube!' and slapped some PCBs and cases together.

Hindsight makes things look too bloody obvious at times, but I can assure you that at the beginning of an R&D cycle it is anything but.


Very true
 
I'm surprised ATech isn't jumping in on this like a rotten diaper on a baby.

Here's what will happen (my guess): They will have an actual card at CES. No one will be allowed to touch it. No one will be able to see it run benchmarks. In other words, it will be a highly detuned Fermi beta engineering sample until they get the last of the kinks in hardware/software out. Wouldn't be the first time a hardware vendor did this.

He's prob in his green pajamas right now dreaming about JHH... he'll be here soon. And lol @ that blind fan earlier in this thread talking about a dual-GPU x2 version of Fermi... sorry, this round it's going to be impossible due to power restrictions. :p
 
While I'd tend to agree with you on this, I can't help but recall many people saying the same thing about the GTX 200 series as well, and they were obviously able to pull that off...
 
Looks like Pharaoh beat me to it. I recall several articles claiming the GTX 295 would be impossible to make.
 
Can just imagine Jen-Hsun in a ninja suit sabotaging TSMC machinery.

Secret spy pic hacked off a security camera server database:

[image: cardfactoryninja.jpg]
 
Looks like Pharaoh beat me to it. I recall several articles claiming the GTX 295 would be impossible to make.


It was impossible, or at least not reasonably feasible, until the die shrink many months later, and even then it was a pair of 275s and not 285s. Not saying they can't do it without another die shrink, but this chip is supposed to be significantly larger than the 200 series. We will need to see what sort of wattage it will actually have to dissipate before we can really make a guess at whether or not it's really doable. I don't know about you guys, but I never want to see triple-slot coolers, or water cooling as a requirement.
 
Care to fill us in on your insider info on Fermi power consumption? :)

No insider info, just speculating. If it has more transistors than a 5870 X2, which is rumored to be near 300W, then it's logical to assume it won't stay within the PCI-E power restrictions.

OK, playing devil's advocate for Fermi, I can see them taking the shrink down to 32nm to make this happen, just like they shrank the GT200 process to make the GTX 295. With TSMC's current problems on 40nm... I wouldn't bet on it.

Of course, feel free to hope, and by the time it comes out I'm sure AMD's single-GPU 6 series will come around a week later to beat it by at least 10-15%. A repeat of the current GTX 295 vs. 5870 situation.
 
It was impossible, or at least not reasonably feasible, until the die shrink many months later, and even then it was a pair of 275s and not 285s. Not saying they can't do it without another die shrink, but this chip is supposed to be significantly larger than the 200 series. We will need to see what sort of wattage it will actually have to dissipate before we can really make a guess at whether or not it's really doable. I don't know about you guys, but I never want to see triple-slot coolers, or water cooling as a requirement.

Imagine what a 1024-bit combined memory bus would have done... :eek:

But then, on the other hand, ATi has managed to get away time after time with "smaller" memory bandwidth, so...

Dunno.

Just gotta see what nVidia brings to the table.
 
No insider info, just speculating. If it has more transistors than a 5870 X2, which is rumored to be near 300W, then it's logical to assume it won't stay within the PCI-E power restrictions.

OK, playing devil's advocate for Fermi, I can see them taking the shrink down to 32nm to make this happen, just like they shrank the GT200 process to make the GTX 295. With TSMC's current problems on 40nm... I wouldn't bet on it.

Of course, feel free to hope, and by the time it comes out I'm sure AMD's single-GPU 6 series will come around a week later to beat it by at least 10-15%.

I thought TSMC was just going to skip over 32nm and shoot straight for 28nm?

Would be cool if nVidia targets this with Fermi, or ATi does with the 5000 series refresh (for the top-end cards; the lower end doesn't count as much).
 
Last I read, AMD was going for 32nm for their 6 series with GlobalFoundries, who is targeting that process for them, which is why TSMC promised 28nm to counter. http://xtreview.com/addcomment-id-10092-view-Globalfoundries-32nm-reported-for-later-period.html All reports so far say neither TSMC nor GlobalFoundries will be ready for anything smaller than 40nm until 2010. Hey, that's probably when they will release the GT395! Sweet.
 
No insider info, just speculating. If it has more transistors than a 5870 X2, which is rumored to be near 300W, then it's logical to assume it won't stay within the PCI-E power restrictions.

Umm... that's simply not true. The power doesn't have to come through the PCI-E slot, so that limit doesn't matter. There is nothing stopping Nvidia or ATI from putting ten 8-pin power connectors on a card.

What DOES matter is heat. It is harder (not impossible) to put out a card with a higher TDP.
 
About 75W can, but most cards now draw less than that from the PCIe slot, just in case of poor motherboard power distribution / multi-GPU madness.

IIRC, it's 75W for a 6-pin and 150W for an 8-pin (the connectors on the GPU, not the mobo).
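
Just to make the arithmetic explicit, here's a tiny sketch of that power budget, assuming the commonly quoted figures (75W from the slot, 75W per 6-pin, 150W per 8-pin); the connector layouts are purely illustrative, not the spec of any particular card:

```python
# Rough PCIe power-budget math, assuming the commonly quoted limits:
# 75 W from the slot, 75 W per 6-pin connector, 150 W per 8-pin connector.
SLOT_W = 75
CONNECTOR_W = {"6pin": 75, "8pin": 150}

def board_power_budget(connectors):
    """Nominal power available to a card with the given aux connectors."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

# Illustrative layouts only (not official specs for any particular card):
for layout in ([], ["6pin"], ["6pin", "6pin"], ["6pin", "8pin"], ["8pin", "8pin"]):
    print(layout or "slot only", "->", board_power_budget(layout), "W")
```

On those numbers a 6-pin + 8-pin card tops out around 300W on paper, which is presumably the "300W per device" figure that comes up later in the thread.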
 
Last I read, AMD was going for 32nm for their 6 series with GlobalFoundries, who is targeting that process for them, which is why TSMC promised 28nm to counter. http://xtreview.com/addcomment-id-10092-view-Globalfoundries-32nm-reported-for-later-period.html All reports so far say neither TSMC nor GlobalFoundries will be ready for anything smaller than 40nm until 2010. Hey, that's probably when they will release the GT395! Sweet.

That is sweet.

Now imagine the competition that will stem from this!
 
That is for CPUs, not GPUs, I believe...

Maybe... the link is around somewhere; even if this one is for CPUs, it also coincides with their GPU process. I'm 100% sure TSMC said they won't be ready for 28nm till 2010-2011. GlobalFoundries can't do 40nm yet, hence AMD is at TSMC's mercy for this round.
So do the logical thinking here: 40nm GPUs for a dual-GPU Fermi card? 32nm sounds more realistic.

And yes, it's 75W from the PCI-E slot on the motherboard, then 150W and 75W for the 8- and 6-pin connectors. /facepalm at "ten 8-pin connectors"; this guy's imagination never ceases to entertain me (see sig :D)! I'll alert all PSU makers to be on standby just in case AMD/Nvidia moves to those connections this year... you never know!
 
About 75W can, but most cards now draw less than that from the PCIe slot, just in case of poor motherboard power distribution / multi-GPU madness.

IIRC, it's 75W for a 6-pin and 150W for an 8-pin (the connectors on the GPU, not the mobo).

Yes, they can draw 75 watts through the PCI-E slot. I was referring to them not being limited by the PCI-E slot for power draw. Most of the power comes from the 6/8-pin connectors. And tbh, I'm not sure how much they actually draw through the PCI-E slot currently. I know they can draw some, because if you boot without attaching the other connectors you will get a video output telling you to connect them.
 

I'm giving you this pity reply since everyone else rightfully ignored you. Oh, and by the way, what you did has a name already: it is called flamebait, which is a subset of trolling. Your attempt at sounding witty has failed.

I took a glance at the author and stopped reading. I remember scali (the author of this piece) getting banned for being a raging, frothing-at-the-mouth Intel fanboy. These days he hides at beyond3d, coming out to bash ATI/AMD whenever he gets the chance. He was run out of there as well, since he barely posts anymore. Thanks for letting us know where he ended up. There is a reason that site is often referred to as BullShitNews.

Now for some real breaking news!!

Nvidia has announced that Fermi videocards will come bundled with a Duke Nukem Forever coupon. The real reason for the card's delays is that they are running around everywhere trying to put the developer team back together to finish the game in time for the release of the Fermi videocards. :D
 
Yes, they can draw 75 watts through the PCI-E slot. I was referring to them not being limited by the PCI-E slot for power draw. Most of the power comes from the 6/8-pin connectors. And tbh, I'm not sure how much they actually draw through the PCI-E slot currently. I know they can draw some, because if you boot without attaching the other connectors you will get a video output telling you to connect them.

The PCI-E standard allows a max of 300W per device.

 
Max... as in everything added up?

The slot only gives 75W, IIRC. Unless 3.0 changed this?
It could be that 3.0 offers 300W (this is just me speculating).

PCIe 1.1 = 75W max
PCIe 2.0 = 150W max
PCIe 3.0 = 300W max now?

Edit - Yep, just checked; as part of the specification for 3.0, it should provide 225/300W of power from just the slot itself.
 
Nvidia.com_GTX295 said:
Maximum Graphics Card Power (W) 289 W
Check under specifications. Last time I checked, the 295 isn't a PCIe 3.0 card, so I guess it can't exist.
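
For what it's worth, that 300W figure usually gets read as a per-card total (slot plus the aux connectors), not power from the slot alone. A quick back-of-envelope check, assuming the 75/75/150W budgets mentioned above and the GTX 295's stock 6-pin + 8-pin layout:

```python
# Sanity check: does a 289 W card fit under a 300 W per-card ceiling on PCIe 2.0,
# assuming 75 W from the slot, 75 W from a 6-pin and 150 W from an 8-pin connector?
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

gtx295_tdp = 289                            # figure quoted from Nvidia's spec page above
budget = SLOT_W + SIX_PIN_W + EIGHT_PIN_W   # 300 W total deliverable on paper

print(f"budget = {budget} W, TDP = {gtx295_tdp} W, headroom = {budget - gtx295_tdp} W")
# -> budget = 300 W, TDP = 289 W, headroom = 11 W -- no PCIe 3.0 slot required.
```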
 
No insider info, just speculating. If it has more transistors than a 5870 X2, which is rumored to be near 300W, then it's logical to assume it won't stay within the PCI-E power restrictions.

OK, playing devil's advocate for Fermi, I can see them taking the shrink down to 32nm to make this happen, just like they shrank the GT200 process to make the GTX 295. With TSMC's current problems on 40nm... I wouldn't bet on it.

Of course, feel free to hope, and by the time it comes out I'm sure AMD's single-GPU 6 series will come around a week later to beat it by at least 10-15%. A repeat of the current GTX 295 vs. 5870 situation.


Fermi's die size is around the same as the GT200b's, so the power usage should also be similar to the GT200b. Fermi is also supposed to be at 3 billion transistors, which is only about 40% higher in transistor count.
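
For context on that percentage, here's a quick check against commonly cited (approximate) transistor counts; the GT200 and Cypress figures below are assumptions on my part, not from this thread:

```python
# Rough transistor-count comparison using commonly cited approximate figures.
fermi   = 3.00e9   # claimed for Fermi in this thread
gt200   = 1.40e9   # GT200 (GTX 280), approximate
cypress = 2.15e9   # Cypress / Radeon HD 5870, approximate

print(f"Fermi vs Cypress: {fermi / cypress - 1:+.0%}")  # roughly +40%
print(f"Fermi vs GT200:   {fermi / gt200 - 1:+.0%}")    # roughly +114%
```

So the "only 40% higher" reading lines up with a comparison against Cypress rather than against the GT200b.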
 
Fermi mentions in yesterday's Q3 earnings call--
http://seekingalpha.com/article/171...-earnings-call-transcript?source=yahoo&page=6

Jen-Hsun is the CEO.

q3 call said:
Jen-Hsun Huang

So let me see if I can break that down a bit -- I expect GeForce to grow next year, not only because the market will be healthier next year. There is real evidence that GPU adoption is increasing. There is -- and the enthusiasm behind FIRMY, our next generation GPU architecture, is just out of this world. I mean, it’s just way over the top. And the reason for that is this is because the first brand new architecture we have created in four years and instead of an incremental change to DX11, this is a fundamentally new architecture and the performance is fabulous. And so we are expecting to be very successful with FIRMY and all of its derivatives.

Glen Yeung - Citigroup

Jen-Hsun, I just need you to clarify one thing, if you might -- when you say performance of FIRMY is great, are you saying that relative to your previous architecture or also relative to the current competition?

Jen-Hsun Huang

Both.

Glen Yeung - Citigroup

Great, thanks.

David Wu - Global Crown Capital

Okay, that’s good. Jen-Hsun, you showed -- I was at the show when FIRMY was previewed and you showed very good numbers on the computing side. Relative to your competition that is shipping products, you didn’t talk anything about the graphics side -- should I assume that the graphics performance is equally superior to the competition that is shipping right now?

Jen-Hsun Huang

We didn’t announce anything on graphics because it wasn’t graphics day. When we announce GeForce and Quadro, we are going to talk about the revolutionary graphics ideas that are designed into FIRMY and so we are looking forward to do that in the near future. And so please be patient with us -- the market is anxiously waiting and we have enthusiasts all over the world that are waiting for us to ship it. You know, every four years or so, we revolutionize the GPU with a brand new architecture. If you remember the G80, what became the GeForce 8800, was probably one of the most successful products in the history of our company and we’ve been doing incremental changes to G80 since then.

And now with FIRMY, it’s another revolutionary architecture and I’m expecting to take the GPU market up another notch.
 
The transcript is ten pages and gets into plenty of other details on NVDA, but that's what I saw as relevant to Fermi. Still good reading. How much of it is the CEO blowing sunshine up the analysts' is an exercise left to the reader ;)
 
From the same article:

And relative to the competition, the market has really spoken -- although it’s a fast chip, it’s not that fast and it is basically an RV770 with DX11. And I think that you could incrementally make changes for a number of years but certainly not forever and we are going to really change the marketplace going forward with FIRMY and so that’s our focus and we are trying to get it shipped as soon as possible. The demand is really, really strong for it and we will tell you about all the great graphics features when we launch.
 
Yeah, that's a nice snipe at AMD. There's a lot of Intel love in there too; it's like a giant "BUY ME" sign.
 