G80 8800GTX 30% faster than X1950XTX, 8800GTS only even with it

Hmmm, 30% is a bit disappointing tbh, I was hoping for a bigger leap. But I guess with a new set of drivers it will end up faster than 30%, and since I'm still on a GeForce 6800 Ultra I guess I'll see a big change in my next upgrade anyway, which includes a Core 2 Duo E6700 (got an A64 3200+ atm).
 
30%?? That's not good.. it should be close to being twice as fast. Or at least twice as fast as the 7800GTX.
 
That's horrible... I was expecting a huge performance increase given the price, too :eek:
 
Take it with a grain of salt; some of the specs aren't real, and everything they stated in the article and in the table on the last page is a carbon copy of what we already know.
 
joeview said:
30%?? That's not good.. it should be close to being twice as fast. Or at least twice as fast as the 7800GTX.

Agreed. The Inquirer is now saying we should expect 3DMark06 scores around 10,500, which is close to what I'd expect from SLI'ed GTX's.
 
The GTS is supposed to be 'just' as fast as the X1950XTX - according to the article, that is... hmm, I call BS. Wasn't ATI publicly claiming double the performance of current-generation cards with their R600?
 
Scyles said:
Agreed. The Inquirer is now saying we should expect 3DMark06 scores around 10,500, which is close to what I'd expect from SLI'ed GTX's.


The Inq's numbers are wrong too; the GTX is definitely getting 12,000 with some kind of processor, and the CPU doesn't have much effect on the overall score in 3DMark.

Here are TechReport's 3DMark06 benchmarks of Conroe vs. Kentsfield; note that Kentsfield's CPU score is 1,500 points higher, but the overall 3DMark score is only 600 higher (the rough formula sketch below shows why the CPU score barely moves the total).

http://techreport.com/etc/2006q3/kentsfield/index.x?pg=1

The Inq says it has reliable information, but in the end they just made up numbers. And we all know nV doesn't talk to the Inq, not to mention the Inq doesn't deny it; they even state that they know they can't get anything out of nV.
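For what it's worth, 3DMark06's overall score already weights the CPU result far less than the graphics result, which is why a big CPU jump barely moves the total. Here's a minimal sketch; the weighting constants are Futuremark's published ones as I remember them, so treat them as approximate, and the input scores are made up:

```python
def overall_3dmark06(sm2_score, hdr_sm3_score, cpu_score):
    """Approximate 3DMark06 overall score.

    The 1.7 / 0.3 weights and the 2.5 scale factor are quoted from memory
    of Futuremark's formula; treat this as an illustration, not gospel.
    """
    gs = 0.5 * (sm2_score + hdr_sm3_score)  # combined graphics score
    return 2.5 / ((1.7 / gs + 0.3 / cpu_score) / 2.0)

# Made-up scores: same graphics card, CPU score 1,500 points higher.
print(overall_3dmark06(2500, 2500, 2300))  # ~6170 with a dual core
print(overall_3dmark06(2500, 2500, 3800))  # ~6590 with a quad core
```

Same card, 1,500 more CPU points, only about 400 more overall, which lines up with the Kentsfield numbers above.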
 
I'd expect the performance to go up considerably once nVidia gets the drivers for the 8x00 series sorted out.
 
Once Nvidia starts messing with the drivers, I'm sure you'll start seeing more performance out of the 8800GTX. That is, if this preview holds true. It could be full of hot air, but Nvidia, as we all know, usually pulls a magic trick out of their drivers. I remember when the GeForce2 GTS 32MB came out and it was only slightly faster than the Voodoo5. The next Detonator drivers increased the chip's performance by some ungodly amount in Quake 3, and after that there was simply no competition.

Just give it time, and remember, only the real reviews hold any water.
 
Magic drivers from Puff the Magic Dragon... :rolleyes: They've been working on this for a few years now... I already knew it wasn't going to be as fast as everybody thought it was... I remember speculation a couple of months back that it would be as fast as a GX2...
 
Marvelous said:
Magic drivers from Puff the Magic Dragon... :rolleyes: They've been working on this for a few years now... I already knew it wasn't going to be as fast as everybody thought it was... I remember speculation a couple of months back that it would be as fast as a GX2...


Not really magic drivers; it's just going to take time for them to mature, and so far they are doing well. And this card is faster than a GX2, much faster.
 
How can one draw conclusions from a preview of a new architecture? Of course, if it were a refresh of an existing lineup of graphics cards, then one could go along with a preview...
 
Faster is all relative. Just for the sake of argument, let's say 30% is close to the truth. Now bust out Crysis and watch it be 100%. It will depend on the application.
 
Big Fat Duck said:
Why would such a hot and massive card perform only 30% better than last gen?

While I don't believe a word of that article, I have one word for you:

GeForce FX


Major screwups are possible, and power is no indication of performance.
 
If there are drivers out in the wild that have native 8800 support, where are they? I mean, surely someone would have leaked them onto the web by now. I think the fact that the German site doesn't mention what driver version they used should make it obvious.

I'll wait for the shipping drivers before I start worrying, TYVM.
 
jacuzz1 said:
Faster is all relative. Just for the sake of argument, let's say 30% is close to the truth. Now bust out Crysis and watch it be 100%. It will depend on the application.


That's a possibility. Take a game like Oblivion, which is highly CPU bound; there I can see the marginal 30% increase.
 
That article is BS.

G80 has 700+ million transistors and 128 pipes.
G70 has around 300 million transistors and 24 pipes.

Like, c'mon, how in the world could it only be 30% faster?
 
I also agree that this is not true. :eek:
How about we all wait and see once it's out, instead of going by 'rumors'? :D
 
You say 30%, I say...

What application?
What settings?
How much higher can you crank the settings while maintaining acceptable performance?

It's very possible that it could be +30% in one application, and much higher in another, or that it could be more than +30% at higher settings. Or that it only gives +30% FPS at given settings, but the settings can be cranked up with a much smaller performance hit. It could even be true that the performance lead in current games is small and will grow much wider for new games.

It's very possible for this figure to be 100% accurate, and yet 100% misleading. Then again, it's also possible (though maybe not likely) that the new card really is only 30% faster than a 1950 XTX across the board.
 
Shottah_king said:
That article is BS.

G80 has 700+ million transistors and 128 pipes.
G70 has around 300 million transistors and 24 pipes.

Like, c'mon, how in the world could it only be 30% faster?


The G80 doesn't have 128 pipes. Maybe 128 shader processors, but not 128 pipes.

And that's a lot of transistors. :eek:
 
razor1 said:
That's a possibility. Take a game like Oblivion, which is highly CPU bound; there I can see the marginal 30% increase.

Huh? Majorly CPU bound? I tend to think not. I'll give you a game that's majorly CPU bound: Quake 3.
 
Since DX10 isn't out, I think it's safe to assume that they only tested the card with DX9 applications...

All those transistors are for DX10... so passing judgment on the card without knowing its DX10 performance is pointless...

Not to mention, this thing has an absolutely sick amount of memory bandwidth... I bet higher resolutions, AA, and AF see a much-better-than-30% boost based on the memory performance alone...

This 30% figure is probably just from running 3DMark or something stupid like that...

 
Erasmus354 said:
While I don't believe a word of that article, I have one word for you:

GeForce FX


Major screwups are possible, and power is no indication of performance.

lol, this guy makes a good point

Also, ROFL, 128 pipes. That's one huge gain, from 24/48 to 128 pipes, huh? lol. Everyone, stop getting your panties in a wad right now and wait for a more reliable source to come out. Anyone could go online, find a picture of a G80 (beta), Photoshop some pictures to seem authentic, and OC a 7900GTX to get benchmarks. Wait for AnandTech, X-bit Labs, etc... reputable sources.
 
If Oblivion were CPU bound, you'd expect framerates between ATI and Nvidia to be almost identical. That doesn't really seem to be the case with Oblivion. I can't imagine grass blades casting shadows being too huge a burden on the CPU. I'm sure it takes more CPU power than other games for the graphics, but none of the evidence I've seen seems to point to the CPU being the limiting factor.

Also keep in mind that a lot of G80's transistors were likely used up adding all the features required for DX10, and Nvidia probably ate up a few more adding HDR+AA. DX10 is also fairly precise in its requirements, so image quality should be fairly standard; if the AA isn't done properly it won't be a DX10-compliant part, for example.

Branching support, I'd guess, is by far what's taking up most of the transistors. I'd still expect roughly a 50-75% performance increase over a 7900GTX, though.
 
Apple740 said:
I have a feeling that with 4xAA (+8x/16x HQ AF) added, the CPU-boundness totally disappears.


Being bottlenecked doesn't mean the program is bottlenecked in the same place all the time; each frame that is rendered can be bottlenecked at different points. For example, 50% of the frame may be CPU bound while the other 50% isn't, and that is what is occurring in Oblivion. It takes 2 or 3 passes to render each frame: the first pass is where most of SpeedTree's functions are run and the basic rendering is done (more CPU intensive), then the other passes are where you get the special effects like water distortion, reflections of cube maps, etc. On top of that, SpeedTree's animation is calculated frame by frame. (The back-of-the-envelope sketch at the end of this post shows how that kind of split caps the visible gain.)

Also, at the settings you mentioned you can't compare the G80 to today's cards; today's cards won't be able to run Oblivion at those settings, except maybe at very low resolutions like 800x600.

Sorry, also: the G80 would be very CPU bound in this game if it's even 30% faster than the X1950XTX, according to that article I posted.

Of course, we don't know what game or games they used; this is just a hypothesis based on what I've heard so far.
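To put rough numbers on that hypothesis: if part of each frame is spent on CPU-side work (SpeedTree animation, the first geometry pass) and the rest on the GPU passes, even a GPU that is twice as fast only shrinks the GPU portion of the frame. A minimal sketch with a completely made-up 50/50 frame-time split:

```python
def avg_fps(cpu_ms, gpu_ms, gpu_speedup=1.0):
    """FPS when each frame has a serialized CPU part and GPU part.

    The frame-time split is hypothetical; this only illustrates the
    Amdahl's-law-style ceiling a partly CPU-bound game puts on a faster GPU.
    """
    frame_ms = cpu_ms + gpu_ms / gpu_speedup
    return 1000.0 / frame_ms

baseline = avg_fps(12.0, 12.0)                   # today's card: 12 ms CPU + 12 ms GPU (made up)
doubled = avg_fps(12.0, 12.0, gpu_speedup=2.0)   # a GPU twice as fast on its half of the frame
print(f"{baseline:.1f} fps -> {doubled:.1f} fps "
      f"(+{100 * (doubled / baseline - 1):.0f}%)")  # about +33%, not +100%
```

With that split, doubling GPU speed only buys about a third more frames per second, which is how a much faster chip could show up as "only 30% faster" in a CPU-heavy game.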
 
Anarchist4000 said:
If Oblivion were CPU bound, you'd expect framerates between ATI and Nvidia to be almost identical. That doesn't really seem to be the case with Oblivion. I can't imagine grass blades casting shadows being too huge a burden on the CPU. I'm sure it takes more CPU power than other games for the graphics, but none of the evidence I've seen seems to point to the CPU being the limiting factor.

Also keep in mind that a lot of G80's transistors were likely used up adding all the features required for DX10, and Nvidia probably ate up a few more adding HDR+AA. DX10 is also fairly precise in its requirements, so image quality should be fairly standard; if the AA isn't done properly it won't be a DX10-compliant part, for example.

Branching support, I'd guess, is by far what's taking up most of the transistors. I'd still expect roughly a 50-75% performance increase over a 7900GTX, though.

No, nV's and ATi's drivers have different CPU overhead. And the GX2 especially has higher overhead than nV's or ATi's single-card solutions.

True, I agree with you that the DX10 feature list would add a good deal of transistors, but not more than 25% of the entire chip.

The shader performance per clock alone is around 2 times higher than that of the 7900GTX from what I've heard, so the only way that figure makes sense is if the drivers they used were very early versions, which is a possibility but highly unlikely given the date of the article and where they got the information from.
 
Actually, after thinking about this, why should it be twice as fast as ATI's last offering?
From the release of the 8800 they won't have any competition until ATI releases their card 2-3 months later, and by that time they can just release a faster version to counter ATI's new chip.

Sounds logical to me, although I don't think it justifies the price of 650 euros.
 
razor1 said:
Hmm, yes, Oblivion is hugely CPU bound, especially the outdoor scenes; the grass and trees from SpeedTree tend to take up a good deal of CPU cycles ;)

And if you don't believe me

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2747&p=4

So you take a 1280x1024 benchmark with no AA and tell me that it's CPU bound?

Ok then...

How about we try some HDR+AA with 1600x1200 or 1920x1200?

Edit: I read your reply to what others had to say, and yes, I guess I can partially agree with you and partially not, so let's leave it at that.
 
arthur_tuxedo said:
You say 30%, I say...

What application?
What settings?
How much higher can you crank the settings while maintaining acceptable performance?

It's very possible that it could be +30% in one application, and much higher in another, or that it could be more than +30% at higher settings. Or that it only gives +30% FPS at given settings, but the settings can be cranked up with a much smaller performance hit. It could even be true that the performance lead in current games is small and will grow much wider for new games.

Great questions to ask - this card is closely guarded by NVIDIA in terms of performance metrics and specs. I highly suggest you guys wait until NDAs are lifted before coming to conclusions...
 
wizzackr said:
Wasn't ATI publicly claiming double the performance of current-generation cards with their R600?
Assume that anything that nVidia or ATi says is about as far from the truth as possible. R600 may be twice as fast, but perhaps only in vertex processing.

^eMpTy^ said:
All those transistors are for DX10
700 million transistors are not dedicated solely to DX10. To assume such is nonsense. The 128 shader processors will function regardless of what Shader Model a particular application is using, and the shader processors are the significant transistor meat of G80.

Apple740 said:
I have a feeling that with 4xAA (+8x/16x HQ AF) added, the CPU-boundness totally disappears.
Unfortunately not. There's simply so much processing going on behind the scenes that there will always be instances in which the video card is waiting for data to chew. Not much has changed since Morrowind, unfortunately.
 