BFGTech GeForce 8800 GTX and 8800 GTS

Advil,

That's a marketing myth. Clock a pci-e vid card's GPU down to the same speed as the AGP equivalent (same pipelines, same onboard mem bandwidth, etc), and they perform within 1% of each other. That's one reason why nvidia cripples AGP cards by disabling pipelines, then releases them with the same name as pci-e cards with all pipelines running. It's a marketing trick to get people to switch away from agp.

I'm 100% in agreement that AGP needs to die, but that's because the AGP spec itself has limitations and needs to be put down. But it has nothing to do with the performance you'll get with single identical cards on a PCI-e or AGP bus.

Most memory accesses are on the local on-card memory. Even pci-e transfers are too slow to be useful in gaming. Everyone knows that. In reality, the agp vs. pci-e debate has nothing to do with single-card speeds. It has to do with pci-e being able to run more full-speed devices. It's a better and more useful technology overall. But single AGP card vs. single pci-e card comparisons come down to marketing tricks like nvidia's insistence on crippling their AGP offerings. Look it up yourself... Start with comparing the AGP 6800GT with any pci-e 6800 with the same specs (gpu and mem clocks, mem type, and same number of pipes) and you'll see that they're pretty much identical.
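
For a sense of scale, here's the back-of-the-envelope math (a rough sketch; the memory numbers are for a 6800 GT-class card and are approximate):

```python
# Ballpark bus bandwidth vs. on-card memory bandwidth.
agp_8x_gbs   = 2133 / 1000       # AGP 8x peak: ~2.1 GB/s
pcie_x16_gbs = 16 * 250 / 1000   # PCIe 1.x x16 peak: ~4.0 GB/s per direction

# A 6800 GT-class card: 256-bit bus, ~1000 MHz effective GDDR3 data rate.
oncard_gbs = (256 / 8) * 1000e6 / 1e9   # ~32 GB/s

print(f"AGP 8x bus:    {agp_8x_gbs:.1f} GB/s")
print(f"PCIe x16 bus:  {pcie_x16_gbs:.1f} GB/s")
print(f"On-card VRAM:  {oncard_gbs:.1f} GB/s")
# The GPU does the overwhelming majority of its work out of local VRAM,
# which is why the ~2x gap between the two bus standards rarely shows up
# in single-card game benchmarks.
```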
 
Granted. I was trying to keep it simple for the original poster of the question.
 
I thought I was the original poster of the question.

I have a nice and quick A64 system with an AGP mobo, and I find that I'm much more often vid card limited than cpu limited. There is no technical reason why the newer video cards with the appropriate agp/pci-e bridge chip (and corresponding slightly higher price) can't be made in AGP versions. It's about price and marketing. I'm personally willing to pay a bit extra so I don't have to go through the expense of a full system refresh that I don't need, when all I need/want is a faster video card.

I understand that the demand for AGP cards is plummeting, but dammit if they're going to release a vid card like the 7900GS, why did they deliberately cripple it with reduced pipelines? How can that possibly help anything? I'm not going to upgrade my entire system because they've crippled the vid card. Instead, I'm going to buy their damn crippled video card and be pissed at nvidia because the only video card they'll sell me is one that is slower than comparably priced offerings using pci-e.

It's all marketing BS, and it's generated an awful lot of frustration and even hatred against nvidia because of it. It makes the release of a new video card yet another slap in the face to anyone who doesn't need a new system but wants a nice video card. If they're going to kill off AGP, fine, don't make any more AGP cards. Just don't let the marketing droids pee in my cheerios by recognizing that there still is a market for AGP cards, but deliberately making those products slower than the pci-e versions. Why kick a guy when he's down? Jerks.
 
EpedemiX said:
I'm kind of disappointed...

The 8800 series on paper should be 3x faster than the 7950 series, and they are lucky to get 2x in certain situations.

Having 3x the shaders and a wider bus, with the shaders supposedly running at a higher clock rate, along with having more than 2x the transistor count, you'd easily get 3x or higher due to the hardware's efficiency over the old architecture.

Why didn't anyone bother to test Splinter Cell: Double Agent? A lot of companies are going to be using Unreal Engine 3, so it makes sense for the new GPUs to be tested on new games that have more of the future features in them.


It's not really 3x the shaders.
There are 128 FPUs in the thing, and all of them can be pixel, vertex, or geometry shaders.

But in the real world it's divided up, so you will have some as one and some as the other. If you totally balance it, you really have 64 and 64 with no geometry units.

See now?
 
Advil said:
I think it's safe to say there's never going to be an AGP 8800. At least not a GTS or GTX. AGP doesn't have the bandwidth to support these cards. Sorry, but AGP's days are over and buried.

Advil, show me some proof of this. I've read around and I believe AGP was to go to x16 and have a much higher bandwidth than PCI-e, but servers and such use PCI-e for many other cards, and this benefited board makers because they no longer had to have a single slot that was solely usable by a graphics card and nothing else.

Correct me if I am wrong with some hard facts.
 
Elios said:
It's not really 3x the shaders.
There are 128 FPUs in the thing, and all of them can be pixel, vertex, or geometry shaders.

But in the real world it's divided up, so you will have some as one and some as the other. If you totally balance it, you really have 64 and 64 with no geometry units.

See now?

That still doesn't explain the lack of performance.

If you had dedicated shaders before and now have a unified architecture, which is more efficient, and again we're just talking about FPUs, that alone should speed things up. Not to mention the added bus and transistor count, along with the supposedly higher clocked FPUs, or shader units if you want to refer to them that way.

Would a revised 7900 with 3x the shaders and 2x the transistors have done just as well?

The performance gap would be that of a 7600 GT to a 7900 GTX, which is slightly more than double. So performance should at least be 2x to get a good idea of the scalability of the G71 architecture, and you get a disappointment in the 8800 series given that everything in it is supposed to be more efficient and has 2x the transistors and 3x the shaders running on a wider bus and a supposedly higher internal clock.
 
Think of the new setup as a bunch of really fast general-use FPUs.

128 in the case of the GTX. Now, in the old way you had pixel shaders and vertex shaders.

If, say, something was pixel shader heavy, you had parts of the GPU that would sit idle.
With the new way you can allot more FPUs to working as pixel shaders,
so out of the 128 you might have 100 pixel shaders and 28 vertex shaders, but if the scene changes to be more vertex heavy, you'll drop pixel shaders and pick up more vertex shaders.

So you almost never have them all being used for one thing at a time.
In a totally balanced scene you would have 64 and 64, so if you think about it in that way,
it's performing how it should.
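
A toy sketch of that load-balancing idea (made-up workload numbers, with a G71-style 24/8 fixed split for comparison; illustrative only):

```python
# Same total number of units: fixed 24 pixel + 8 vertex vs. one unified pool of 32.
def utilization_fixed(pixel_work, vertex_work, pixel_units=24, vertex_units=8):
    # Fixed pipelines: pixel units can't help with vertex work, and vice versa.
    busy = min(pixel_work, pixel_units) + min(vertex_work, vertex_units)
    return busy / (pixel_units + vertex_units)

def utilization_unified(pixel_work, vertex_work, units=32):
    # Unified pool: any unit picks up whichever work is queued.
    return min(pixel_work + vertex_work, units) / units

# A vertex-heavy moment in a frame: little pixel work, lots of vertex work.
print(utilization_fixed(pixel_work=6, vertex_work=60))    # ~0.44 -- pixel units sit idle
print(utilization_unified(pixel_work=6, vertex_work=60))  # 1.00 -- nothing sits idle
```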

Also note these are still early drivers; it'll take time, but I think there's more speed in the card yet.

Edit:
As to the transistor count, well, remember these are FULL general-use FPUs. Hell, with CUDA you can now write full apps to run on the card; think of what it could do for folding, or stuff like Photoshop and After Effects for complex filters and rendering.
They are much more complex than fixed shader units.

Think of it as having 128 CPU-class FPUs in there.
 
EpedemiX said:
That still doesn't explain the lack of performance.

If you had dedicated shaders before and now have a unified architecture, which is more efficient, and again we're just talking about FPUs, that alone should speed things up. Not to mention the added bus and transistor count, along with the supposedly higher clocked FPUs, or shader units if you want to refer to them that way.

Would a revised 7900 with 3x the shaders and 2x the transistors have done just as well?

The performance gap would be that of a 7600 GT to a 7900 GTX, which is slightly more than double. So performance should at least be 2x to get a good idea of the scalability of the G71 architecture, and you get a disappointment in the 8800 series given that everything in it is supposed to be more efficient and has 2x the transistors and 3x the shaders running on a wider bus and a supposedly higher internal clock.
If this thinking was valid, the X1900 series would have completely destroyed the 7900 series with its 48 shaders. In the end, it had a measurable performance advantage in most situations, but not 2X. Probably averaged closer to 10%.

Shaders are only one part of the overall picture, and NO tech improvement scales 100%, not shader units, not CPU cores, not clock speeds, nothing. Software has to be optimized to take advantage of the new capacity for one thing--and it's a HUGE thing, just look at the challenge of multithreading apps for multicore CPUs. Another thing is the inherent bottlenecks throughout the process that have nothing to do with shader operations.

Still another thing is that the transistor count you are fixating on was not dedicated purely to boosting throughput. A lot was spent on enabling new capabilities, including providing local memory cache for the stream processors so they can run shader programs on the data without having to write it out to frame buffer memory and back. Cache (like any other memory) eats up a lot of transistors, just look at ATi's GPU for the Xbox 360.

Picking numbers out of the G80 spec and using them to claim that overall performance should triple is simplistic and unrealistic. It's far more likely that the power of all these new hardware bits will be used to enable better effects, realism, and visual quality than to boost raw frame rates at existing quality levels. And that's as it should be. We don't hunger for Quake 2 at 500FPS, we hunger for Oblivion at 60FPS, and after that, Crysis.

The last few new GPU series have done well to boost performance by 30-50%. To have one come along that boosts performance by 70-90% (while revolutionizing image quality) is nothing short of historic, and if that fails to meet your arbitrary expectations, you aren't going to find much sympathy around here.
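
To put a rough number on that: if only part of the frame time is shader-bound, the overall gain is capped no matter how much faster the shaders get. A minimal Amdahl-style sketch with assumed (purely illustrative) splits:

```python
def overall_speedup(shader_fraction, shader_speedup):
    # Only the shader-bound share of frame time benefits from faster shader hardware.
    return 1.0 / ((1.0 - shader_fraction) + shader_fraction / shader_speedup)

# If 60% of frame time is shader-bound and shader throughput triples:
print(overall_speedup(0.60, 3.0))   # ~1.67x overall, nowhere near 3x
# Even at 80% shader-bound, tripling the shaders only gets you ~2.1x:
print(overall_speedup(0.80, 3.0))
```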
 
No one's talking about ATi here.

Like I said, if you look at the 7600 GT to the 7900 GTX, it did scale more than 100%. So if you're content to obscure or deny the facts, that's fine.

Although you do point out something interesting in your thought process: that somehow the 7900 GTX couldn't possibly scale 100% over the 7600 GT. Yet it did. How one can conclude that a more efficient architecture being, well, less efficient in its scalability is acceptable, when that scaling was achieved with the old architecture, is downright odd...

You might want to check out other sites that have more technical information about the G80 and how it does and does not scale well compared to the 7900 GTX.

Historic :) would have been a price tag of under $300, not more image filters. :)
 
Texxxxx said:
So, I've been planning this new build for my son for Christmas...I ordered a case last week, and today ordered the BFG 8800GTX...then I started wondering if it would fit in the case I bought...an Aprevia server-size.. http://www.newegg.com/Product/Product.asp?Item=N82E16811144006 . I am dumping the power supply and already bought a OCZ 700W with 4 12V rails. I wanted to get him a 'cool' looking case because no matter what is in it, an 11 year old needs a flashy case...lol. Anyway I was reading the article and looking for the size of the card...the article states:

"The BFGTech GeForce 8800 GTX has the same height and width but exceeds the length of both at almost 10.5 inches long. The 8800 GTX is the same length as a RatpadzGS, which is roughly 1” shorter than a UPS Next Day Air envelope if you are looking for something to wedge into your case and see if your chassis is suitable for the GTX."

Well...lo and behold...I got my Ratpadz GS out from one of my other comps (4 gaming rigs in LAN), and it is not 10.5" long...it is a whopping 11.5" long. It will be tight in this case, but I think it will work...I'll find out when the card arrives...lol.

Anyway...great article...double check your size, and anyone have any case suggestions that this card will fit in?


i have that same case layout (but mine is a koolance PC3-720SL, all the same in terms of fitting a 8800GTX in there). i measured it and there is 10.75" from the PCI slot on the back of the case to the hard drive rack, excluding the edge (closest to the window) where the aluminum is coiled over to prevent cuts and scratches.

i honestly cannot say whether or not the GTX will fit, but i have a feeling it will be just perfect enough to make it hard as hell to get in and out (but still fit!)
 
Commander Suzdal said:
If this thinking was valid, the X1900 series would have completely destroyed the 7900 series with its 48 shaders. In the end, it had a measurable performance advantage in most situations, but not 2X. Probably averaged closer to 10%.

Shaders are only one part of the overall picture, and NO tech improvement scales 100%, not shader units, not CPU cores, not clock speeds, nothing. Software has to be optimized to take advantage of the new capacity for one thing--and it's a HUGE thing, just look at the challenge of multithreading apps for multicore CPUs. Another thing is the inherent bottlenecks throughout the process that have nothing to do with shader operations.

Still another thing is that the transistor count you are fixating on was not dedicated purely to boosting throughput. A lot was spent on enabling new capabilities, including providing local memory cache for the stream processors so they can run shader programs on the data without having to write it out to frame buffer memory and back. Cache (like any other memory) eats up a lot of transistors, just look at ATi's GPU for the Xbox 360.

Picking numbers out of the G80 spec and using them to claim that overall performance should triple is simplistic and unrealistic. It's far more likely that the power of all these new hardware bits will be used to enable better effects, realism, and visual quality than to boost raw frame rates at existing quality levels. And that's as it should be. We don't hunger for Quake 2 at 500FPS, we hunger for Oblivion at 60FPS, and after that, Crysis.

The last few new GPU series have done well to boost performance by 30-50%. To have one come along that boosts performance by 70-90% (while revolutionizing image quality) is nothing short of historic, and if that fails to meet your arbitrary expectations, you aren't going to find much sympathy around here.

great post! :)

PS:
Sorry if this has been beaten to death... but can you really only OC these cards with nTune??
 
Nice post Suzdal.

I'm sorry for stirring the hornet's nest on AGP hatred. I know the industry could have pushed it farther, but it didn't. The unfortunate truth is that it's aging rapidly, and even if we throw out all the bus issues, there's still the little problem of AGP-based systems using older CPUs that leave the 8800 series hopelessly CPU limited.

I'm kind of playing devil's advocate here, I'm not saying they shouldn't make a card for the AGP folks, it's just that it would be such a narrow niche product now that no one is going to develop it. And do you really want to buy an 8800GTS or GTX in AGP now, knowing that it won't work on your next upgrade and won't have any resale value a year from now?
 
For some reason, the latest nTune software (ver. 5.05) does not allow OCing, although the nVidia website states that ver. 5.05 supports all MB for temperature monitoring and OCing the GPU. The sliders are greyed out even though the "Custom Clock Frequencies" is clicked. Clicking on the "Find Optimal" button results in a blue screen (not BSOD). I have to reset my computer.

Does anyone who has a non-nForce MB have the same issue?

Here are my computer specs;

MB: Gigabyte GA-965P-DQ6
CPU: E6600
VID: BFG 8800GTS
PSU: OCZ GameXtreme 700W

Thanks,
 
I don't even think the 8800gtx would have maxed out AGP 8X bandwidth...

AGP was killed off early... and it was done because Nvidia and ATI both had interest in you guys all going out to buy more new motherboards... motherboards with their branded chipsets.

Dual GPU is another one of those marketing moves, ironically connected to PCIe... why buy one when you can buy two at twice the price? PCIe has been a cash cow for Nvidia certainly... and ATI too as they ramped up their performance chipset production.

I would have liked to see the full 7x00 line and the full X1x00 line released also on AGP... In the ideal world, the phase-out would be happening now (with the simultaneous release of Conroe and AM2)... rather than when it did. Requiring you to buy a new motherboard... esp. when the chipsets are made by the same people that are refusing to release AGP variants... I think it was unethical. But then again... how many corporations these days *are* ethical?
 
jedicri said:
For some reason, the latest nTune software (ver. 5.05) does not allow OCing, although the nVidia website states that ver. 5.05 supports all MB for temperature monitoring and OCing the GPU. The sliders are greyed out even though the "Custom Clock Frequencies" is clicked. Clicking on the "Find Optimal" button results in a blue screen (not BSOD). I have to reset my computer.
Does anyone who has a non-nForce MB have the same issue?
[...]
MB: Gigabyte GA-965P-DQ6
CPU: E6600
VID: BFG 8800GTS
PSU: OCZ GameXtreme 700W

Hi Jedicri,

I'm having the exact same problem, and so far you're the only person I've seen who is as well - welcome to the club! We have totally different hardware, the only thing we have in common is we're using non-nVidia motherboards:

M/B: Asus A8R32-MVP Deluxe
CPU: AMD A64 X2 3800+ (Stock)
V/C: Asus 8800 GTS
PSU: Antec Neo HE 550W
O/S: Windows XP Pro SP2
Other: SB Audigy 2ZS, 3x HD, 1x DVD-RW

This is a brand new Windows install, except that I originally had an X1900 XT card installed that I exchanged for the 8800 GTS when I heard how loud the fan was on the ATI. So, the ATI driver was originally installed but it seemed to uninstall properly before I installed the Forceware driver.

When I try to click "Custom" I actually end up with both radio buttons selected, the "Factory" one stays in place as well. The sliders are greyed out, and if I choose "Find Optimal" it starts the bar but then almost immediately the screen turns light blue. I think the overclock is actually still in progress (my case has an outboard temp sensor that was crawling upward from 51°C to 56°C over about 15 minutes), but when it stalled at 56°C for about 5 minutes and I still had a light blue screen, I gave up. It's funny, they are actually both selected, it's not just a display glitch - when I leave the Control Panel and go back in they're STILL both selected.

I also can't get into the "Adjust Motherboard Settings" screen, I get a loading hourglass for a second or so and then I'm back at the desktop.

Let me know if you come up with a solution! I suspect we're stuck until the next revision of nTune comes out.

Ray
 
RaySmith said:
Hi Jedicri,

I'm having the exact same problem, and so far you're the only person I've seen who is as well - welcome to the club! We have totally different hardware, the only thing we have in common is we're using non-nVidia motherboards:

M/B: Asus A8R32-MVP Deluxe
CPU: AMD A64 X2 3800+ (Stock)
V/C: Asus 8800 GTS
PSU: Antec Neo HE 550W
O/S: Windows XP Pro SP2
Other: SB Audigy 2ZS, 3x HD, 1x DVD-RW

This is a brand new Windows install, except that I originally had an X1900 XT card installed that I exchanged for the 8800 GTS when I heard how loud the fan was on the ATI. So, the ATI driver was originally installed but it seemed to uninstall properly before I installed the Forceware driver.

When I try to click "Custom" I actually end up with both radio buttons selected, the "Factory" one stays in place as well. The sliders are greyed out, and if I choose "Find Optimal" it starts the bar but then almost immediately the screen turns light blue. I think the overclock is actually still in progress (my case has an outboard temp sensor that was crawling upward from 51°C to 56°C over about 15 minutes), but when it stalled at 56°C for about 5 minutes and I still had a light blue screen, I gave up. It's funny, they are actually both selected, it's not just a display glitch - when I leave the Control Panel and go back in they're STILL both selected.

I also can't get into the "Adjust Motherboard Settings" screen, I get a loading hourglass for a second or so and then I'm back at the desktop.

Let me know if you come up with a solution! I suspect we're stuck until the next revision of nTune comes out.

Ray

Yup, same results as you had described. Some have suggested to use ATITool for OCing this card, since it also works on nVidia cards. I've used it on my Geforce FX card. I don't know if it will work on the G80's though. I guess I will give it a go.

fci
 
EpedemiX said:
No one's talking about ATi here.

Like I said, if you look at the 7600 GT to the 7900 GTX, it did scale more than 100%. So if you're content to obscure or deny the facts, that's fine.

Although you do point out something interesting in your thought process: that somehow the 7900 GTX couldn't possibly scale 100% over the 7600 GT. Yet it did. How one can conclude that a more efficient architecture being, well, less efficient in its scalability is acceptable, when that scaling was achieved with the old architecture, is downright odd...

You might want to check out other sites that have more technical information about the G80 and how it does and does not scale well compared to the 7900 GTX.

Historic :) would have been a price tag of under $300, not more image filters. :)

Yes, but you're comparing GPUs within the same architectural family. The 7600 GT is literally 1/2 of the 7900 GTX (12 pipes vs. 24 pipes), so you should naturally see a 100% boost in performance. G80 is running on a completely different architecture. You're not going to see the kind of performance you're expecting because you can't directly compare G71's pipes to G80's shaders. That being said, there are many occasions where G80 can exceed the performance of a 7900 GTX SLI. Give the drivers some time to mature, throw in a little DX10, and I'm sure you'll see what G80 can really do.
 
the 8800gtx is faster than the 7 series, so what if it hit the expected multiplier or not, it owns in all benchmarks :eek:
 
Brent_Justice said:
I didn't have time for this evaluation, I can give it a shot now though, let me see... installing now...


patiently waiting for this, i can't wait to see if this card will finally move that damn slider!
 
Sorry to interrupt the discussions, but is that a Zalman CNPS9500 AM2 heatsink used on the 680i mobo? Checking Google and Zalman's site doesn't turn up anything about using that heatsink on a C2D. Very interested to know how it got there... seems nice when paired with the 680i mobo :D
 
jedicri said:
Yup, same results as you had described. Some have suggested to use ATITool for OCing this card, since it also works on nVidia cards. I've used it on my Geforce FX card. I don't know if it will work on the G80's though. I guess I will give it a go.

fci

Good call - I tried the latest ATITool at the time, pre7 (pre8 is now available at http://www.techpowerup.com/wizzard/ATITool_0.25b16pre8.exe, haven't tried it yet) and I've had generally good results. Most interesting, though, after messing around with ATIT for a while (I'd removed nTune first to minimize potential conflicts) I reinstalled nTune and, with the OC already in place from ATIT, the custom clock option was now available in nTune and the sliders are no longer greyed out.

However, my overclocking results have been mixed. I'm getting fantastic actual overclock results (630/1000 runs perfectly stable), but the performance increase isn't really there... for a 15%+ overclock I'm only seeing about a 5% performance increase in 3DMark '06 and none (as far as I can tell) in Oblivion. Having said that, I'm running Oblivion at 1280x1024 @ 16x (!) AA and 16x AF w/ HDR and every single game video setting on max and it's smooth as butter (frame rates are averaging 70fps in an outdoor Oblivion gate region), so it's not like I'm missing that extra 5% - 15%. I'm only running on an X2 3800+, I think the game may actually be CPU limited for me at this point. Didn't see that coming.
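
For what it's worth, a bit of Amdahl-style napkin math (a rough model, assuming the run is a simple mix of GPU-clock-bound time and everything else) matches what I'm seeing:

```python
# Given a known clock increase and the observed overall speedup, estimate the
# fraction of the run that actually scales with GPU clock.
def clock_bound_fraction(clock_speedup, observed_speedup):
    # Solves 1 / ((1 - f) + f / clock_speedup) = observed_speedup for f.
    return (1.0 - 1.0 / observed_speedup) / (1.0 - 1.0 / clock_speedup)

print(clock_bound_fraction(1.15, 1.05))  # ~0.37: only ~37% of the 3DMark run scales with GPU clock
print(clock_bound_fraction(1.15, 1.00))  # 0.0: Oblivion looks fully CPU/engine limited for me
```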

Thanks for the ATI Tool suggestion!

Now, time to focus on my incredibly lousy CPU overclock on my new A8R32-MVP Deluxe motherboard... used to get an easy bump to 2.4GHz (from 2.0GHz... 2.5GHz was stable but I scaled back a bit for everyday use) on my A8V Deluxe board, with this one I can barely hit 2.2GHz and that's only when using the automatic 10% overclock option, I can't get ANY increase manually. I'm not looking for suggestions at this point, it's still too soon for that and this is the wrong forum, I'm just venting a bit. :)

Ray
 
EverQuest II results:

It plays better on the 8800 GTX than on any other video card I've ever played it on. There are still very bad framerate dips. For some reason the game will drop down to 20, then up to 70, as you move through it. This happened with all previous-generation video cards as well, so it is a game problem. The problem is that it forces you to drop settings so you get smooth performance without dips below 30 FPS; the framerate falls and rises quickly.

Extreme Quality - Almost playable, in some places 70 FPS, in others it dips down to the teens. Like mentioned above it varies so much it is practically rendered unplayable.

Very High Quality - This is actually pretty playable, framerate is all over the place, but the lowest downspikes are in the mid to upper 20's.

High Quality - Much like Very High Quality, slightly higher min fps. A lot of people may find High Quality playable.

Balanced - Very playable all around, still framerate spikes, but the min ones are in the 30's to 40's.

This is all at 1600x1200 NoAA/16XAF

Pretty much it seems like the framerate problems stem from the game itself.
 
brinox said:
i have that same case layout (but mine is a koolance PC3-720SL, all the same in terms of fitting a 8800GTX in there). i measured it and there is 10.75" from the PCI slot on the back of the case to the hard drive rack, excluding the edge (closest to the window) where the aluminum is coiled over to prevent cuts and scratches.

i honestly cannot say whether or not the GTX will fit, but i have a feeling it will be just perfect enough to make it hard as hell to get in and out (but still fit!)

Well, just got my card today, and you are correct, brinox...it barely fits!...but it fits and that is good news...thx for your input.
 
Well, [H] is always the first place where I go to get my reviews, and GJ once again on the 8800 review. Those settings are unbelievable!! Oblivion MAXED at 1600x1200 w/ EIGHTxAA? In-freaking-credible.

This video card is the real deal for dx9 games, that's for sure. And it is my guess that dx10 will look even better. The numbers might not seem so, but I'm willing to bet when you compare image quality the DX10 will look even better than what we are seeing now. I mean, that's the whole point of DX10 right? Eliminate the old guys so you can have all the doodads mandatory and running *efficiently* on a standard method (unified pipes).

It would be fairly disappointing if you had to lower the overall image quality below what it is in these games for the DX10 games.

Also, price. I think they are doing an excellent job on pricing. I'm looking at the BFG 8800GTX @ $680 right now. The 7800GTX's were $600 when they first came out, and that was with 256mb of ram. Remember those 7800GTX 512mb? $750 if it was a penny. I'm confident the price will drop down into at least the $500's in a few weeks, maybe back towards the ~$500 pricing we saw on the 7800GTX after a month or two (I paid $530 for mine, and before I had my computer up and running they were $480-500 :( ). It's always a good thing for the next gen to end up at about the same price as last gen, I don't want to be paying $2k for my video card in a few years!

Anyway, GJ nVidia and GJ [H]ardocp for another great review!
 
Brent_Justice said:
EverQuest II results:

It plays better on the 8800 GTX than on any other video card I've ever played it on. There are still very bad framerate dips. For some reason the game will drop down to 20, then up to 70, as you move through it. This happened with all previous-generation video cards as well, so it is a game problem. The problem is that it forces you to drop settings so you get smooth performance without dips below 30 FPS; the framerate falls and rises quickly.

Extreme Quality - Almost playable, in some places 70 FPS, in others it dips down to the teens. Like mentioned above it varies so much it is practically rendered unplayable.

Very High Quality - This is actually pretty playable, framerate is all over the place, but the lowest downspikes are in the mid to upper 20's.

High Quality - Much like Very High Quality, slightly higher min fps. A lot of people may find High Quality playable.

Balanced - Very playable all around, still framerate spikes, but the min ones are in the 30's to 40's.

This is all at 1600x1200 NoAA/16XAF

Pretty much it seems like the framerate problems stem from the game itself.
Thanks for the extra work, Brent. Looks like the EQ people could maybe do something to straighten this out if they've a mind to. I suppose NV could even tweak a few things with drivers to help, but I don't know if they could do anything about an issue that seems so tied to the game code itself. Maybe a patch that takes advantage of the G80's streaming would loosen up the bottlenecks, but who knows if they'd want to do that much work? Anyway, thanks again for looking into it.
 
Hey Brent -

Any chance we might get some GTX vs. OC'd GTS results posted?

After seeing FiringSquad.com's review I opted for the EVGA 8800 GTS and hope to push 620 core / 900+ memory to close the gap between the performance of a $480 card and a $650 card. They posted some results showing the OC'd GTS beating the GTX but only used HL2 and CoH - I'd like to see some Oblivion or FEAR EP results with the OC'd GTS vs. the GTX.

You guys had luck with overclocking the GTS, but I didn't see any real performance numbers that resulted from that overclock.

Thanks.
 
Vielgus-Kutas said:
pretty good review from the gaming side of things ,

completely worthless from the multimedia side ,

what a ridiculous choice for the test system, was it donated by nvidia?

i reserve my judgment about this card until i read what the guys from *** have to say about it ,


Viral marketing at its best. :mad: If you want to advertise your site, how about you buy a banner?
 
Okay, props to HardOCP, the inquiring minds that post here with helpful and positive information, and myself for reading it all. ;)

My big question is power. My power supply died in my Apple G5 case on Christmas Day. Knowing I have a DFI LanParty MB, AMD 4400+ X2 CPU, 2GB Crucial RAM, 256MB 7800GTX video card, etc... I know the next item I could see myself upgrading is the video card. A buddy bought a 7950, and has had trouble keeping the PC running either because of that card or MB issues... but I never want to have trouble with power.

So instead of just getting another Enermax 600W PS, I decided to read the reviews and go with something beefier. Ended up going with the Corsair 620W modular PS, as the reviews for the DFI motherboard are all top notch. Three +12V rails running 18A each. Is that enough to run an 8800GTS? Suppose I'd be tempted to go with the GTX, but I don't consider it a must, especially when the reviews say overclocks on the GTS are great.
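
For what it's worth, here's my napkin math (rough, and the combined +12V rating on the label is the real limit, not the per-rail sum; the 26A figure is NVIDIA's published GTS guidance as I recall):

```python
# Naive +12V headroom check; the PSU's combined +12V rating is usually lower
# than the per-rail sum and is what actually matters.
rails, amps_per_rail = 3, 18
print(rails * amps_per_rail, "A summed across rails")        # 54 A
print(rails * amps_per_rail * 12, "W naive +12V ceiling")    # 648 W

gts_recommended_amps = 26   # roughly NVIDIA's launch guidance for the 8800 GTS (~400 W PSU)
print("headroom vs. recommendation:", rails * amps_per_rail - gts_recommended_amps, "A")
```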

Long story short, I don't see any mention of the power supply used in the reviews (it's not in the test rig list)... so if we could get that info, that would be GREAT!
 
Skids1 said:
The thing that I was wondering was what about running dual monitors in SLI

http://www.theinquirer.net/default.aspx?article=35594

If the article is correct, this makes me vomit. A DX10-based card that won't handle dual screens without a damn add-on card, wtf over? Supreme Commander is due to USE DUAL SCREENS for gameplay if you have them; no way am I gonna want a 3rd 8800 series card around so I can run dual display in that game... Some FIX nvidia... This makes me seriously wait to see if the R600 corrects this...

Now I don't mean to poop on the great review the [H] did of the 8800 series... seriously nice job guys, if I weren't so worked up about the above issue I might even be trying to order a card today. Keep up the great work guys, not your fault Nvidia can't implement SLI + dual display correctly.

I couldn't agree more. I knew of this "issue" (or more accurately, intended feature) going into purchasing my new 680i-based system with two 8800GTX's, and I have to tell you these things smoke on maximum settings, even on my new BenQ FP241W 24" monitor (I'm surprised all the lights in my house don't dim when I'm playing CounterStrike Source). It's just plain sweet.

But, the inability to use dual monitors while in SLI mode is quite annoying, and for the primary reason you mentioned: future games designed to use dual monitors. Supreme Commander will no doubt require heavy 3D artillery (pun intended), and being able to utilize SLI would certainly be preferred. Although I have a feeling a single 8800GTX will still handle even the likes of Supreme Commander quite nicely, it ticks me off that a limitation like this exists as a part of its design, motivated by what is most likely a marketing ploy.
 
One question about the review. Does anyone know if the standard stock speed benchmarks were done with linkboost enabled on the 680i motherboard that was used for testing?
 
Does anyone know if the faceplate on the cooler is removable, and if so, how? My friend likes to make custom faceplates for his graphics cards (he's made them since the 6800s) and would like to know. If anyone can post a picture with the faceplate removed, that would help too ;)
 