G80 specs revealed

loafer87gt

Looks like both the GTX and GTS cards are going to be monsters! Can't wait till November...

8800GTX
575MHz Core Clock
900MHz Mem Clock
768MB GDDR3 memory
384-bit memory interface (86GB/s)
128 unified shaders clocked at 1350 MHz
38.4 billion pixels per second theoretical texture fill-rate
450W PSU Recommended
Hard launch in second week of November

8800GTS
500MHz Core Clock
900MHz Mem Clock
640MB GDDR3 memory
320-bit memory interface (64GB/s)
96 unified shaders clocked at 1200 MHz
?? billion pixels per second theoretical texture fill-rate
400W PSU Recommended
Hard launch in second week of November

Info taken from Daily Tech

http://www.dailytech.com/article.aspx?newsid=4441
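The quoted bandwidth figures are easy to sanity-check from the bus width and the effective data rate (GDDR3 is double data rate, so a 900MHz memory clock moves data at 1800 MT/s). A quick sketch of that arithmetic, my own and not from the article:

```python
def bandwidth_gb_s(bus_width_bits: int, effective_mt_s: float) -> float:
    """Theoretical peak bandwidth = (bus width in bytes) x (transfer rate in MT/s)."""
    return (bus_width_bits / 8) * effective_mt_s / 1000

print(bandwidth_gb_s(384, 1800))  # 8800GTX: 86.4 GB/s, matching the quoted ~86GB/s
print(bandwidth_gb_s(320, 1800))  # 8800GTS at the quoted 900MHz: 72.0 GB/s
print(bandwidth_gb_s(320, 1600))  # ...the quoted 64GB/s would instead imply ~800MHz memory
```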
 
I would have thought they would have moved on to GDDR4 for this, but i haven't kept up with G80 progress. The rest of the specs look positively insane, but does that memory controller have the ability to support GDDR4?
 
Well, with the GDDR3 clocked at 900MHz (1800MHz effective), I see little need for GDDR4.

450W PSU recommended, which means it's no 300W monster. Good news.
 
450W for one. Wonder what it is for 2. Don't think my 535W can cut it :D They should be kickers :D

EDIT: Isn't it meant to have GDDR4?
 
phide said:
450W PSU recommended, which means it's no 300W monster. Good news.

indeed, as i won't have to shell out extra cash for a new power supply.
 
glad i've been resisting the urge to upgrade recently, i look forward to the reviews!
 
Ricey said:
128 unified shaders?! Is that even possible on 65nm?

I thought g80 was still a 90nm chip?


also those specs are insane, finally some interesting news in the hardware area.
 
Less than a month away! And a hard launch... awesome. If those rumors about R600 not shipping this year are true, looks like green will have a few months of glory all to themselves. Now MS just needs to hurry up and get DX10 and Vista out so we can see what she can really do.

I wonder how long this info will stay up... DailyTech is definitely breaking NDA by posting it... hmm.
 
I think we're all breathing a collective sigh of relief. "Only" a 450W PSU recommended. I'm not going high end again. $650 in one lifetime is enough, that is until the itch hits me again. I'm settling for a GTS, and with a 650W PSU I'm happy to be ready.
 
Funny, the crazy memory interface that the Inq was harping on a few weeks ago is actually correct (though IIRC they were merely reporting on what someone else had said). I'm still wondering how G80 is going to work since the core runs at 575MHz and the shaders run at 1350MHz. Should be interesting to say the least. It's also interesting that NVIDIA has apparently done a complete about-face and is going unified after all. Now all I have to do is cross my fingers and hope for a Nov-Dec launch, just in time for me to take advantage of my Step-Up :)
 
Ardrid said:
Funny, the crazy memory interface that the Inq was harping on a few weeks ago is actually correct (though IIRC they were merely reporting on what someone else had said). I'm still wondering how G80 is going to work since the core runs at 575MHz and the shaders run at 1350MHz. Should be interesting to say the least. It's also interesting that NVIDIA has apparently done a complete about-face and is going unified after all. Now all I have to do is cross my fingers and hope for a Nov-Dec launch, just in time for me to take advantage of my Step-Up :)

Well, this article did say early Nov hard launch for sure. It's easy enough to imagine how the core could move data around in larger bundles than an individual shader uses. Stick some FIFOs in between the clock domains, and tadaa... works like a charm. Also, Nvidia never actually said they weren't going unified. They said there were some issues with unified, and that they would go when they were ready. Apparently they are ready now...
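A toy sketch of that FIFO idea, purely illustrative (nothing below reflects the actual G80 internals; the clock ratios, bundle size and FIFO depth are made up for the example):

```python
from collections import deque

# A FIFO between two clock domains lets a slower "core" side hand work to a
# faster "shader" side. Whichever side gets ahead simply waits on the FIFO,
# so neither clock has to be a multiple of the other.
CORE_PERIOD, SHADER_PERIOD = 1350, 575   # tick periods ~ inverse of the 575/1350MHz clocks
BUNDLE = 2                               # core pushes a wide bundle per core tick
fifo = deque(maxlen=16)

pushed = popped = core_stalls = shader_idles = 0
for t in range(1_000_000):
    if t % CORE_PERIOD == 0:                       # core-clock edge
        if len(fifo) + BUNDLE <= fifo.maxlen:
            fifo.extend([t] * BUNDLE); pushed += BUNDLE
        else:
            core_stalls += 1                       # FIFO full: core side waits
    if t % SHADER_PERIOD == 0:                     # shader-clock edge
        if fifo:
            fifo.popleft(); popped += 1
        else:
            shader_idles += 1                      # FIFO empty: shader side waits

print(pushed, popped, core_stalls, shader_idles)
```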
 
Man, I love the Step-Up program. I just got my GX2, and in a few months, G80 GTX here I come :)
 
eXzite said:
Well, this article did say early Nov hard launch for sure. It's easy enough to imagine how the core could move data around in larger bundles than an individual shader uses. Stick some FIFOs in between the clock domains, and tadaa... works like a charm. Also, Nvidia never actually said they weren't going unified. They said there were some issues with unified, and that they would go when they were ready. Apparently they are ready now...

I think Nvidia was misdirecting all that time. Today's FiringSquad article basically said as much in plain English.

Also:

Speaking of a DX10 ASIC such as G80: with its numerous arrays of MIMD 1D ALUs, it could be seen as

128 x 1 (MIMD 1D) x 2 (double-clocked) = 256

while R600 would be

(4+1)D: 5 x 64 = 320

So R600 has better raw performance, while G80 has the better-arranged arrays of ALUs, allowing better utilization and hence a more future-proof design.


Seems like "stream processor" means processing one component at a time... double-pumped. Some say this could improve efficiency a lot (say, if your shader only uses one or two components, then a 5-component ALU is overkill).
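Putting rough numbers on that quote (these are the rumored figures from the snippet above, not confirmed specs), the per-clock component counts and the utilization argument work out like this:

```python
# Rumored per-clock component throughput implied by the quote above:
g80  = 128 * 1 * 2     # 128 scalar (1D) ALUs, double-pumped -> 256 components/clock
r600 = 64 * 5          # 64 ALUs, (4+1)-wide vector each     -> 320 components/clock
print(g80, r600)

# The utilization argument: a wide vector ALU is only fully busy when the shader
# instruction actually has that many components; a scalar ALU never idles its lanes.
for components in (1, 2, 4, 5):
    print(f"{components}-component op -> scalar ALU: 100%, 5-wide ALU: {components / 5:.0%}")
```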

I take it [H] won't review this card till the hard launch (mid-Nov), btw?

I certainly hope they don't stoop to reviewing a paper-launched card in mid-October or something, since they have promised not to review any more paper-launched cards.
 
Holy cow :D
128 unified shaders, hahaha... compared to the X360's 48 unified shaders... who'd have thought these new-gen consoles would get beaten so fast :p

banned_user said:
I would have thought they would have moved on to GDDR4 for this, but i haven't kept up with G80 progress. The rest of the specs look positively insane, but does that memory controller have the ability to support GDDR4?
They finally increased the memory data bus to 384-bit, so I guess for now, that'll be the means of improving memory bandwidth. At such a high frequency, those extra bits could actually translate to a sufficient increase in bandwidth...
 
I'm thinking it could be a dual-GPU thing. 64x2 sounds more feasible than 128 on a single GPU.
 
Hornet said:
Holy cow :D
128 unified shaders, hahaha... compared to the X360's 48 unified shaders... who'd have thought these new-gen consoles would get beaten so fast :p


They finally increased the memory data bus to 384-bit, so I guess for now, that'll be the means of improving memory bandwidth. At such a high frequency, those extra bits could actually translate to a sufficient increase in bandwidth...

First off, yeah, next-gen GPUs are going to kill 360/PS3. That always happens. Consoles use current high-end GPUs and, guess what, PC cards double every 12 months.

At least the 360 has an excuse, it's a year old. What about the PS3? :eek:

But you people aren't understanding; it's very early, but it appears these are like MINI ALUs. This is why they call them stream processors and can clock them so high.

So it's not, probably, anything like a 128-pipeline or even 128-"shader" card. Call them mini-shaders.

It's likely, IMO, that the G80's 128 ALUs can only crunch ONE component per clock, while the X360's can do as many as FIVE. So it's NOT as big a mismatch as you assume on paper.

OTOH, you're looking at 128 of the suckers at 1350MHz, which still gives it a significant power edge over the Xbox 360, as you'd expect. Plus, I'm kind of thinking Nvidia did this to get huge efficiency gains, so the difference could be even bigger in favor of G80 than it looks on paper.

But again, anybody with a brain already knew G80/R600 would trounce 360/PS3... hell, the X1950XTX (48 shader pipes), if programmed for, trounces them NOW (the 7900 GTX, not so much).
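A hedged back-of-envelope on that comparison (the Xenos figures, 48 ALUs x 5 components at roughly 500MHz, are my assumption here; the G80 numbers are the rumored ones from this thread):

```python
g80_components_per_sec   = 128 * 1 * 1.35e9   # 128 scalar ALUs @ 1350MHz ~ 173 G components/s
xenos_components_per_sec = 48 * 5 * 0.5e9     # 48 ALUs x 5-wide @ ~500MHz ~ 120 G components/s

# roughly 1.4x on paper, before any of the utilization advantage mentioned above
print(g80_components_per_sec / xenos_components_per_sec)
```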
 
i do prefer a unified approach because i approve of GP-GPU usage, and the more generic the processor the better.

i wonder if the variable extra memory (128MB + 64-bit vs 256MB + 128-bit, leading to 640MB + 320-bit vs 768MB + 384-bit) is a result of separating IQ functions like AA from general resource usage? (rough arithmetic sketched below)

with the GTS you get 8x super-duper AA for FREE
with the GTX you get 16x super-duper AA for FREE

doesn't make a lot of sense if it's for something like the Geometry Shader portion of the GPU, because you would basically be saying to the GTS user that they should really only consider their new card as a very fast DX9 card, as it's a bit crippled in DX10............

regardless, i will await the 65nm refresh for a less power-hungry version,
and hopefully a return to lower-clocked but fully functional GT versions, as i hate buying something with crippled hardware.
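The rough arithmetic behind that AA-pool guess, purely speculative, assuming a common 512MB / 256-bit "general" pool as the baseline:

```python
gts_extra_mb, gts_extra_bits = 640 - 512, 320 - 256   # 128MB hanging off a 64-bit slice
gtx_extra_mb, gtx_extra_bits = 768 - 512, 384 - 256   # 256MB hanging off a 128-bit slice
print(gts_extra_mb, gts_extra_bits, gtx_extra_mb, gtx_extra_bits)
```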
 
R3MF said:
i do prefer a unified approach because i approve of GP-GPU usage, and the more generic the processor the better.

I want my GPU to do one thing only, and to do it as well as possible: make graphics appear on my screen. The better it is at that, the more value it has for me. Whether it can then be hacked to become a really fast FP processor is a different story.
 
These specs are really insane. At least when compared to the existing cards in the market.
When the first rumors about the G80 specs came out, I joked and said that, with that card in your system, when you entered a game, the first thing you would see was "GAME OVER", because the card is simply too fast.
Though this was of course a joke, the G80 seems to be the killer card for any game out there. With so much raw power, shaders/pixel pipelines, plus the dedicated memory for AA purposes alone, we should be able to max any game's settings @ extreme resolutions without much hassle. Let's see if this is true, when actual benchmarks come out.
 
physics on GPU FTW:

http://www.dailytech.com/article.aspx?newsid=4444

With the release of the G80, NVIDIA will also release a new engine dubbed Quantum physics engine. Quantum Effects Technology is similar (at least in spirit) to NVIDIA's PureVideo Technology -- a dedicated layer on the GPU for physics calculations. A few documents alluding to this new engine appeared on public FTP mirrors late last week.

Quantum utilizes some of the shaders from NVIDIA's G80 processor specifically for physics calculations. Physics calculations on GPUs are nothing new; ATI touts similar technology for its Stream Computing initiative and for its Triple Play physics.
 
Hawk said:
450W for one. Wonder what it is for 2. Don't think my 535W can cut it :D They should be kickers :D

EDIT: Isn't it meant to have GDDR4?

LOL, 2? How about Quad SLI? I'm thinking 200W each? Looks like I'll have to spring for a 1KW PSU in the future.

Ohh, second thought: on top of all that, the overclocking headroom! 650/1100? Maybe even better? Mmmm, drool!

Sunin
 
Silus said:
These specs are really insane. At least when compared to the existing cards in the market.
When the first rumors about the G80 specs came out, I joked and said that, with that card in your system, when you entered a game, the first thing you would see was "GAME OVER", because the card is simply too fast.
Though this was of course a joke, the G80 seems to be the killer card for any game out there. With so much raw power, shaders/pixel pipelines, plus the dedicated memory for AA purposes alone, we should be able to max any game's settings @ extreme resolutions without much hassle. Let's see if this is true, when actual benchmarks come out.
i'm only guessing that the variable memory/bandwidth is for AA............
if you are quoting my post just above yours that is?
 
It sounds like the other memory may be physics - see my link above.
 
R3MF said:
i'm only guessing that the variable memory/bandwidth is for AA............
if you are quoting my post just above yours that is?

No, I was actually referring to the rumors, when the first specs came out. One of them said the extra 256MB, with a 128-bit interface, would be used for AA only. So if you didn't know about this rumor, your guess might be correct :)
 
Big Fat Duck said:
god i love dailytech

i never imagined the cards would be out so soon though

Soon???????
They were supposed to come out before October!!!!!! *beats the livin' crap outta duck*
 
R3MF said:
i'm only guessing that the variable memory/bandwidth is for AA............
if you are quoting my post just above yours that is?


The G80 is like a Cell chip, or vice versa: each "mini GPU" has its own bandwidth. This is why, if a portion of the chip is not used, like in the GTS version, the bandwidth will decrease. This will in turn give nV a lot of flexibility for future generations; they just have to add more "mini GPUs" to the architecture and not worry too much about bandwidth feeding the GPU, since it will already be there.

The 7950 GX2 was recommended to have a 400 watt power supply for a single card, so the G80 needing 450W for a single card is not much of an increase.
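A small sketch of that scaling idea. The 64-bit / 128MB-per-partition split is my inference from 384/6 and 320/5, not a confirmed detail of the chip:

```python
def config(partitions: int, bits_per_partition: int = 64,
           mb_per_partition: int = 128, effective_mt_s: int = 1800):
    """Bus width, memory size and peak bandwidth if each 'mini GPU' brings its own channel."""
    bus_bits = partitions * bits_per_partition
    return bus_bits, partitions * mb_per_partition, (bus_bits / 8) * effective_mt_s / 1000

print(config(6))  # (384, 768, 86.4)  -> GTX-like
print(config(5))  # (320, 640, 72.0)  -> GTS-like, at the quoted 900MHz memory clock
```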
 
Silus said:
No, I was actually referring to the rumors, when the first specs came out. One of them said the extra 256MB, with a 128-bit interface, would be used for AA only. So if you didn't know about this rumor, your guess might be correct :)
cool cheers.
 
Ok,

Does anyone know how much these monsters are going to cost? It's right about the time I'll be looking to build my new rig :D
 
Jester1550 said:
Ok,

Does anyone know how much these monsters are going to cost? It's right about the time I'll be looking to build my new rig :D

The rumors said something like $600-650 for the GTX and $450-550 for the GT model (now known as the GTS).
But those were just rumors, and even though the specs posted in this thread from DailyTech seem real enough, we still need official word on the final prices.
 
this is what i have been waiting for..... finally a card that will make my new PCI-E upgrade worth the money. Though those specs are very impressive, a benchmark says a thousand words. I need a 3DMark05 score now!!
 
Warrior said:
this is what i have been waiting for..... finally a card that will make my new PCI-E upgrade worth the money. Though those specs are very impressive, a benchmark says a thousand words. I need a 3DMark05 score now!!
Why a 3DMark05 benchmark? Why not a 3DMark06 one instead?
 
The REAL question is............according to the specs how well will the GTX version perform playing CRYSIS at 2560x1600?? :D

................No really, I need to know. :|
 
Ricey said:
128 unified shaders?! Is that even possible on 65nm?

Read the DailyTech comments; the author posts about them not being true unified shaders, but Nvidia is calling them that, or something.
 
StalkerZER0 said:
The REAL question is............according to the specs how well will the GTX version perform playing CRYSIS at 2560x1600?? :D

................No really, I need to know. :|

No one knows at this point, of course, but my guess is that it will handle it in DX9. In DX10, however, I won't even make a comment yet.
 