9800 GTX compared to 8800 Ultra

If those specs hold true, then these pictures are faked. The G92-420-A2 text on that GPU indicates that the performance of the card in these pictures falls between the 8800 GTS 512 and the 9800 GX2.
 
20080304_0aeab8a96ab29979926bk6odl8873JvS.jpg

Close-up of the GeForce 9800 GTX. It reveals the card uses G92-420-A2. The first part tells us which GPU this is, the second part its performance level, and the third part its revision.

8800 GTS 512: G92-400-A2
9800 GTX 512: G92-420-A2
9800 GX2: G92-450-A2

IF those pics are genuine, of course. That may not be the 9800 GTX, and we are still lacking specs.

I highly doubt the 256 Stream Processors. I'm still betting on 160 or 192.
 
If those specs hold true, then these pictures are faked. The G92-420-A2 text on that GPU indicates that the performance of the card in these pictures falls between the 8800 GTS 512 and the 9800 GX2.

So, either the picture is faked and the 9800GTX is indeed the new flagship card (according to the specs in that link), or the stats are wrong and it's really... just basically another 8800GTX (if that), just with a smaller, cooler chip?

How would they even be privy to that information anyway, when no one else can seem to get a hold of the official specs?

If it's accurate, I'd certainly go with upgrading, as was planned when Nvidia supposedly stated a long time ago that the 9800GTX would indeed be their next flagship. Otherwise, guess it's a wait for the true next gen flagship to come along.
 
So, either the picture is faked and the 9800GTX is indeed the new flagship card (according to the stats), or the stats are wrong and it's really... just basically another 8800GTX (if that), just with a smaller, cooler chip?

How would they even be privy to that information anyway, when no one else can seem to get a hold of the official specs?

If it's accurate, I'd certainly go with upgrading, as was planned when Nvidia supposedly stated a long time ago that the 9800GTX would indeed be their next flagship. Otherwise, guess it's a wait for the true next gen flagship to come along.

The problem is that the card depicted in the pic is not even another 8800 GTX. Only 512 MB of VRAM and only a 256-bit memory interface, which already represents a downgrade. The pics may show the new GTS or even the new GT, but I doubt they show the new GTX.

Also, the GX2 should be the fastest card, which is why I doubt the new GTX has 256 SPs (that's left for G100). I'm betting on 160 or 192.
 
The problem is that the card depicted in the pic is not even another 8800 GTX. Only 512 MB of VRAM and only a 256-bit memory interface, which already represents a downgrade. The pics may show the new GTS or even the new GT, but I doubt they show the new GTX.

Ah, ok. I didn't study the pic all that hard.

Also, the GX2 should be the fastest card, which is why I doubt the new GTX has 256 SPs (that's left for G100). I'm betting on 160 or 192.

That's what I would think: that the GX2 would be faster, which is why I can't imagine the specs on that site being accurate, though it doesn't seem like a "shady" site to me. I've never been there before, though, so I don't know. On the other hand, like I said, I'd almost think they were accurate due to Nvidia's claims a while back about what the next GTX flagship would be capable of, but I suppose there's no way to verify the specs on that site until [H] gets a hold of them/the GPU and confirms it.
 
That card in that picture is weird anyway. Since it says G92-420-A2, it means it will have exactly the same GPU as the 8800 GTS, which basically dictates that it will have 128SP and a 256-bit memory bus. That means it can only gain a performance difference through higher clocks, and they can't go too far with G92-A2; those factory-OC'd cards with ~740MHz cores already have poor yields. Basically, that PCB simply seems to be overkill for the GPU they are using on it.
 
Besides, if Nvidia had a 160SP/192SP card ready, they wouldn't be releasing the GeForce 9800 GX2, which is much more expensive to make and would lose to that 160SP/192SP card.
 
9600GT -
Core: 650 Mhz
Memory: 1.8 Ghz
Stream Processors: 64

HD 3870 -
Core: 775 Mhz
Memory: 2.25 Ghz
Stream Processors: 320
(I'm aware that these clocks may vary depending on the manufacturer and overclocking.)

What I'm also saying is that the 9600GT performs better in games, even with lower numbers on paper.

Facts mean something, but should be taken with a grain of salt. There is often more than meets the eye.

But, none of this is even the main reason why I posted. The main reason is that we DON'T know the exact specs of the card, so we can't even deduce how it will perform!
...That's because for some reason you decided to exclude all the other factors that aren't Stream Processors or MHz. You do realize there are other factors in play, right?

Nvidia GeForce 9600GT:
ROPS: 16
Texture Units: 32

ATI HD3870:
ROPS: 16
Texture Units: 16

The 9600GT has two times the texture units of the HD3870. Of course there are other factors in play too, but I'm not going to get too deep into that.
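Peak texture fill rate is just core clock times texture units; a quick Python sketch with the numbers quoted above (real-world performance obviously depends on far more than this one figure: drivers, ROPs, bandwidth, shader throughput):

```python
# Peak texture fill rate = core clock (MHz) x texture units.
# Inputs are the clocks and TMU counts quoted in this thread.

def texture_fillrate_gtexels(core_mhz, texture_units):
    """Peak texture fill rate in GTexels/s."""
    return core_mhz * texture_units / 1000.0

cards = {
    "9600GT":  (650, 32),   # 650 MHz core, 32 texture units
    "HD 3870": (775, 16),   # 775 MHz core, 16 texture units
}

for name, (clock, tmus) in cards.items():
    print(f"{name}: {texture_fillrate_gtexels(clock, tmus):.1f} GTexels/s")
# 9600GT: 20.8 GTexels/s
# HD 3870: 12.4 GTexels/s
```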
 
Besides if Nvidia would have 160SP/192SP card ready they wouldn't be releasing Geforce 9800 GX2 which is much more expensive to make and should lose to that 160SP/192SP-card.

A 160/192 SP GTX would not be faster than two 8800 GTS 512 in SLI (which is basically what the GX2 is).
Plus, a 160/192 SP card with at least a 384-bit memory interface would come close to a one-billion-transistor chip. That alone makes it much more expensive than each PCB on the GX2. Also, they couldn't use this billion-transistor chip in the GX2, due to thermal reasons.
There's also the fact that NVIDIA wants to tout their Quad-SLI tech, which has been dead ever since the 7950 GX2 days.
 
Pics from this thread re-uploaded and re-posted in 3 separate spots by me; just Google "9800 GTX vs 8800 GTX".


All mirror this thread's incredibly awesome pics. Thx guys for such wonderful leakage! XD This is ONE reason why I love the [H].
 
A 160/192 SP GTX would not be faster than two 8800 GTS 512 in SLI (which is basically what the GX2 is).
Plus, a 160/192 SP card with at least a 384-bit memory interface would come close to a one-billion-transistor chip. That alone makes it much more expensive than each PCB on the GX2. Also, they couldn't use this billion-transistor chip in the GX2, due to thermal reasons.
There's also the fact that NVIDIA wants to tout their Quad-SLI tech, which has been dead ever since the 7950 GX2 days.
1) G92 already has 754 million transistors and is more complex than G80. The GeForce 9800 GX2 has two of them, bringing the total transistor count to 1.5 billion.
2) Two G92s combined are 648 mm^2. A 192SP card at 65nm should be around the same size as G80 was, and smaller if NVIO weren't integrated. And yes, it would be cheaper than two G92s.
3) The GeForce 9800 GX2 has two PCBs, which raises costs.
4) 192SP is already 50% up from the 8800 GTS 512.
5) This card wouldn't have the scaling problems that SLI has.
6) This card would have more usable VRAM than the 9800 GX2 (GX2: 512MB, this card: 768MB).
7) This card wouldn't be bottlenecked by its memory subsystem like the GeForce 9800 GX2 is (384-bit vs 256-bit.. that's 50% up from the GX2).
8) Having two G92s onboard already creates thermal problems, and since it's an SLI system, its power consumption/performance ratio shouldn't be good. Why would this one GPU create more heat?
 
1) G92 already has 754 million transistors and is more complex than G80. The GeForce 9800 GX2 has two of them, bringing the total transistor count to 1.5 billion.
2) Two G92s combined are 648 mm^2. A 192SP card at 65nm should be around the same size as G80 was, and smaller if NVIO weren't integrated. And yes, it would be cheaper than two G92s.
3) The GeForce 9800 GX2 has two PCBs, which raises costs.
4) 192SP is already 50% up from the 8800 GTS 512.
5) This card wouldn't have the scaling problems that SLI has.
6) This card would have more usable VRAM than the 9800 GX2 (GX2: 512MB, this card: 768MB).
7) This card wouldn't be bottlenecked by its memory subsystem like the GeForce 9800 GX2 is (384-bit vs 256-bit.. that's 50% up from the GX2).
8) Having two G92s onboard already creates thermal problems, and since it's an SLI system, its power consumption/performance ratio shouldn't be good. Why would this one GPU create more heat?

You are confusing the size of the chip with the number of transistors it uses.
Using a smaller fab process is not the same as reducing the number of transistors. The transistor count is the same, be it 90 nm or 65 nm.
So, imagine the G80 in an 8800 GTX, shrunk to 65 nm. The chip would be smaller, but it would still have 681 million transistors.

Now, IF the 9800 GTX packs 160 Stream Processors and a 384-bit memory interface, the transistor count goes up to almost 1 billion. Even at 65 nm, that means it will consume more (or the same) power and create more (or the same) heat as an 8800 GTX. Not to mention, a 1-billion-transistor chip should be very expensive.

As for the GX2, since it's basically two 8800 GTS 512 in SLI, NVIDIA doesn't really need to do much more than use existing stock of 8800 GTS 512 and apply the necessary changes. That means production costs are not that high, compared to a 1-billion-transistor chip.

And there's also the possibility that the extra 32 SPs (128 to 160) already exist in the G92 core, but were disabled for future use. I'm not sure how many transistors are required for the video decoding capabilities, but do they account for the entire difference between G80 and G92 (754 million - 681 million = 73 million), or do those 73 million include something else, i.e. disabled Stream Processors?
 
UPDATE: BENCHMARKS + GPU-Z SHOTS!! CORE CLOCKS and OFFICIAL REVISION REVEALED!

Lonely GPU-Z Screenshot
Here is our 9800GTX GPU-Z screenshot. As you can see, the card's core clock is only 25MHz higher than the G92-based 8800GTS 512MB's. But the memory clock is as high as 1100MHz, which is why we said it would have 0.8ns memory modules. Because the core clock was raised, the shader clock also went up, to 1688MHz.


First set of benchmarks
As expected, here is the card's 3DMark06 performance. Nothing special; the score is just around 14K, not one of those über-high scores. But considering 9600GT SLI scales to new heights, maybe the 9800GTX is also good at SLI and 3-way SLI.

HARDWARE USED
9800GTX w/ X9650@3GHz


BENCHMARK UPDATE!
HARDWARE USED
9800GTX w/ X9650@4GHz


Again, thanks to Expreview for the pics. ENJOY, [H], you deserve it!!!
 
Consider yourself lucky. Back in the day, most video cards were as big as, or bigger than, the 8800 GTX, and most other add-in cards were just as big. Working at a computer store, people drop off their old systems, sometimes very, very old ones, with hardware from companies no one remembers, like Cirrus Logic or AOpen (although they still make dial-up modems), and you learn things used to be BIG. 5.25" HDDs, anyone?

Those VLB and even some ISA cards were freaking HUGE! I remember many 486 and earlier PC cases had guides AT THE FRONT OF THE CASE to hold the boards in place!

customserverspic.gif
 
If those results are real, I might just use my step-up from my 8800GTS 512... Assuming the price is $350 on the card, which is where it should be at that performance level.
 
You are confusing the size of the chip with the number of transistors it uses.
Using a smaller fab process is not the same as reducing the number of transistors. The transistor count is the same, be it 90 nm or 65 nm.
So, imagine the G80 in an 8800 GTX, shrunk to 65 nm. The chip would be smaller, but it would still have 681 million transistors.

Now, IF the 9800 GTX packs 160 Stream Processors and a 384-bit memory interface, the transistor count goes up to almost 1 billion. Even at 65 nm, that means it will consume more (or the same) power and create more (or the same) heat as an 8800 GTX. Not to mention, a 1-billion-transistor chip should be very expensive.

As for the GX2, since it's basically two 8800 GTS 512 in SLI, NVIDIA doesn't really need to do much more than use existing stock of 8800 GTS 512 and apply the necessary changes. That means production costs are not that high, compared to a 1-billion-transistor chip.

And there's also the possibility that the extra 32 SPs (128 to 160) already exist in the G92 core, but were disabled for future use. I'm not sure how many transistors are required for the video decoding capabilities, but do they account for the entire difference between G80 and G92 (754 million - 681 million = 73 million), or do those 73 million include something else, i.e. disabled Stream Processors?
G92 doesn't have any disabled units.. that's the problem with it. And if it did, that would mean they wouldn't release the 9800 GX2, since it would be cheaper to just unlock those disabled units on one G92 than to put two G92s on a card.

If it consumes as much as the 8800 GTX, it will still consume a lot less than the GeForce 9800 GX2 will. Besides, since it would have only one PCB and one GPU, it would have a simpler cooling solution.

I just don't get how it can be cheaper to build a card with two PCBs, two 754M chips, and the extra chip needed for internal SLI, compared to a card with one PCB and one 1000M chip. If we apply simple mathematics (not necessarily the right way, but..), this 1000M chip would be around 430mm^2, assuming the transistors-per-mm^2 ratio stays constant. Compare that to 324mm^2 for a single G92 and 648mm^2 for two of them.

Btw, where do you think I confused the size of the chip with the number of transistors?
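The ~430mm^2 figure above follows from simple proportional scaling; a quick sketch of that back-of-the-envelope math (constant transistor density on the same process is an assumption, not how real layouts actually scale):

```python
# Back-of-the-envelope die size estimate, assuming transistor density
# (transistors per mm^2) stays constant on the same 65nm process.
# The G92 figures (754M transistors, 324 mm^2) come from this thread.

G92_TRANSISTORS = 754e6
G92_AREA_MM2 = 324.0

def scaled_area(transistors):
    """Estimated die area (mm^2) if density matches G92."""
    return transistors / G92_TRANSISTORS * G92_AREA_MM2

print(f"1000M-transistor chip: ~{scaled_area(1000e6):.0f} mm^2")  # ~430 mm^2
print(f"Two G92s: {2 * G92_AREA_MM2:.0f} mm^2")                   # 648 mm^2
```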
 
UPDATE: BENCHMARKS + GPU-Z SHOTS!! CORE CLOCKS and OFFICIAL REVISION REVEALED!

Lonely GPU-Z Screenshot
Here is our 9800GTX GPU-Z screen shot.

It does seem that most aspects of the 9800GTX are actually a step down from the 8800GTX.

Bus Width (256 vs 384)
Bandwidth (70.4 vs 86.4)
ROPs (16 vs 24)
Mem (512 vs 768)
...are all lower than the 8800GTX.

Shaders are the same (128), and both are GDDR3 (I thought the 9800 was supposed to be GDDR4?), though the fill rate (it seems) and clocks are a bit higher.

So overall, it doesn't even match the 8800GTX in most aspects.

Not really expected, and sort of confusing, given the amount of hype and "suggested" specs from Nvidia a while back about what the 9800GTX would offer.
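Those bandwidth numbers fall straight out of bus width times effective (double-data-rate) memory clock; a quick sketch, taking the 8800GTX's commonly cited 900MHz GDDR3 clock as an assumption alongside the 1100MHz quoted earlier in the thread:

```python
# Peak memory bandwidth = (bus width in bytes) x effective memory clock.
# GDDR3 is double data rate, so effective clock = 2 x memory clock.

def bandwidth_gb_s(bus_bits, mem_clock_mhz):
    """Peak memory bandwidth in GB/s for a double-data-rate memory type."""
    effective_mhz = 2 * mem_clock_mhz           # DDR: two transfers per cycle
    return bus_bits / 8 * effective_mhz / 1000  # bytes per transfer x GT/s

print(f"9800GTX: {bandwidth_gb_s(256, 1100):.1f} GB/s")  # 70.4
print(f"8800GTX: {bandwidth_gb_s(384, 900):.1f} GB/s")   # 86.4
```

So even with faster memory chips, the narrower 256-bit bus leaves the 9800GTX short of the 8800GTX on paper.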
 
Maybe it'll OC a lot better than the 8800GTS G92? I can't imagine nVidia using such an insane PCB without any reason.
 
9600GT -
Core: 650 Mhz
Memory: 1.8 Ghz
Stream Processors: 64

HD 3870 -
Core: 775 Mhz
Memory: 2.25 Ghz
Stream Processors: 320
(I'm aware that these clocks may vary depending on the manufacturer and overclocking.)

What I'm also saying is that the 9600GT performs better in games, even with lower numbers on paper.

Facts mean something, but should be taken with a grain of salt. There is often more than meets the eye.

But, none of this is even the main reason why I posted. The main reason is that we DON'T know the exact specs of the card, so we can't even deduce how it will perform!
For some reason I thought a comparison between two NVIDIA GPUs, as opposed to a comparison between GPUs of two completely separate designers would have more credibility. But clearly you're in the right with your comparison. I mean, why would anyone assume two derivatives of the G80 architecture would be any more similar than, say, the G80 architecture and the R600? Why stop there though, why not compare it to the Voodoo 2?

Silly me!
 
Just looking at the GPU-Z screenie of the 9800GTX, it appears on paper to definitely be inferior... I hope it performs better in real life though...

Just to show the differences:

My Ultra: (first one is stock, second is my overclock)
ultragpuz2.jpg
ultragpuz1.jpg


The 9800GTX:
gpuzexphg1.jpg
 
It does seem that most aspects of the 9800GTX are actually a step down from the 8800GTX.

Bus Width (256 vs 384)
Bandwidth (70.4 vs 86.4)
ROPs (16 vs 24)
Mem (512 vs 768)
...are all lower than the 8800GTX.

Shaders are the same (128), and both are GDDR3 (I thought the 9800 was supposed to be GDDR4?), though the fill rate (it seems) and clocks are a bit higher.

So overall, it doesn't even match the 8800GTX in most aspects.

Not really expected, and sort of confusing, given the amount of hype and "suggested" specs from Nvidia a while back about what the 9800GTX would offer.
From those specs, even if NVidia decided to call it the '8900GTX' it would have seemed like a stretch; but naming it something that gives the consumer the impression it's a next-gen part is misleading. Hoping the performance is up to levels indicative of the nomenclature.

Maybe it'll OC a lot better than the 8800GTS G92? I can't imagine nVidia using such an insane PCB without any reason.
What's so insane about it? Looks like a run-of-the-mill PCB to me.
 
What's so insane about it? Looks like a run-of-the-mill PCB to me.


The core does look huge, though.
Maybe it's just to make a better connection with the heatsink and there's a tiny core underneath, but I feel there is more silicon under there.

This would also explain why a beefed-up power section is required.
 
The question remains: why would NV use an entirely new and complex PCB with another power connector for another 100-something MHz on the memory and 25 on the core, when the 8800GT/GTS PCBs are clearly capable of those speeds (minus the memory) on a less complicated design?

Also, I thought 9800GTX = G92-450 and the GX2 = G92-420. Did someone screw that info up? And what is up with the SLI connector? It looks like it has extra contacts in the middle of the two slots.
 
Also, I thought 9800GTX = G92-450 and the GX2 = G92-420. Did someone screw that info up? And what is up with the SLI connector? It looks like it has extra contacts in the middle of the two slots.
Well, it's been clear since January (when the first 'naked' photos arrived) that the GX2 is G92-450.
 
What's so insane about it? Looks like a run-of-the-mill PCB to me.

Maybe I should've said "complex PCB design". Just look at that card (specifically the power circuitry) compared to the 8800GTS 512. For what is essentially the same card with faster memory and tri-SLI, I don't know why it has to be so different.
 
Maybe I should've said "complex PCB design". Just look at that card (specifically the power circuitry) compared to the 8800GTS 512. For what is essentially the same card with faster memory and tri-SLI, I don't know why it has to be so different.
Indeed, you are correct, and I was wondering the same thing. Both cards have two power connectors, so that can't be it. The new GPU is manufactured on a smaller die process, theoretically making it less power-hungry, so that can't be the reason either. The memory isn't really any different, but there is less of it, which basically brings us to frequency/stability.

Maybe NVidia would like its partners to release stable, very highly OC'd versions of the card? I really can't see any other explanation except a heavily redesigned GPU architecture. However, no indications to that effect have been circulating around the web, except for a few early unfounded rumors that have since been dispelled. Conclusively, it must be related to frequency headroom, IMHO. What else can it be?
 
:) That makes two of us! The better setup for the money/performance/power requirements!

And I don't like Crysis at all, so.... :p
 
I was surprised to see TechArp, updated Feb. 24, still showing the rumored specs from several months ago. Usually the info you see a few weeks before the release is correct. All other sources are showing 128 shaders at a 675MHz core, and the screenshots of the GPU seem to confirm that it's not a 55nm chip with 256 shaders. In my opinion, this should be called the 8900 GTX. It's not a next-generation chip with twice the performance of the 8800 GTX, so why bother even making it? The GTX is supposed to be the flagship product, and we don't even know if its performance will be as good as the 8800 GTX's. It looks like the 9800 GTX is just an overclocked 8800 GTS. Unless the price drops from $450 to $350, it's a complete waste of time.
 
For some reason I thought a comparison between two NVIDIA GPUs, as opposed to a comparison between GPUs of two completely separate designers would have more credibility. But clearly you're in the right with your comparison. I mean, why would anyone assume two derivatives of the G80 architecture would be any more similar than, say, the G80 architecture and the R600? Why stop there though, why not compare it to the Voodoo 2?

Silly me!

You're right, I wasn't comparing apples to apples. Therefore, your point is well taken, and it nullifies any point I was attempting (poorly) to make.
 
If it's truly a 256-bit bus and just some clock increases, it looks like it will be a bit slower than the 8800 Ultra, at best on par.
 
Looks good. I think the 9800GTX will perform much better than a lot of you guys are expecting.

It better, because I REALLY need a new card; my 8800GTX just can't handle all of the new games that just came out..................................... it's like a slideshow.................
 
I was hoping the 9800GTX would be killer at high res, but with a 256-bit bus and 512MB of RAM, no.

I'd like to see how the 9800GX2 does. Other than that, I'm going to use my step-up on an 8800GTX. I'm at 1920x1200 and want some AA. I can play Crysis at 1920x1200, all high settings, no AA, at 30FPS with the rig in the sig. 2x AA is never noticeable; does anyone know if a single OC'd 8800GTX can do Crysis all high, 1920x1200, 4x AA? If not, I'm not going to waste $200 on a step-up that sucks. I'ma keep that money for when the cards after these come out. This is pathetic.

9600GT - 8800GT with a newer core
9800GT - rebadged 8800GT, OC'd some
9800GTX - 8800GTS 512 oc'd some
9800GX2 - two 8800GT's stacked into one card.
GTS?
9850?
9900?
9950?
 