6800GT or X800 Pro?

wilson502

Hey guys, it's good to see that the forums are back up! I've been wanting to ask you guys this, though. I'm debating between the 6800GT and the X800 Pro; I'm in a deadlock. The 6800GT has 16 pipes, the X800 Pro has 12; the 6800 has PS 3.0, the X800 Pro doesn't. The X800 seems to get better performance in some cases, however, and the 6800GT can be made into an Ultra by means of overclocking. I try to be unbiased toward either brand, but I do give a slight edge to Nvidia because of OpenGL performance, since I run HL and its mods. What do you guys think?
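For what it's worth, here's a quick back-of-envelope on the pipe-count question: raw pixel fill rate is roughly pipes x core clock. A rough sketch, assuming the commonly quoted stock clocks (350 MHz for the GT, 475 MHz for the Pro):

    # Rough fill-rate comparison; clocks are the commonly quoted stock speeds
    cards = {
        "6800GT (16 pipes @ 350 MHz)": 16 * 350e6,
        "X800 Pro (12 pipes @ 475 MHz)": 12 * 475e6,
    }
    for name, fill in cards.items():
        print(f"{name}: {fill / 1e9:.2f} Gpixels/s")
    # ~5.60 vs ~5.70 Gpixels/s -- basically a wash on paper,
    # which is why the benchmarks trade blows from game to game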
 
The Pro keeps up with the 6800 Ultra in some spots, but wait until the benchmarks come out on the GT.
 
1. If 16 pipes is what you're looking for and you have the extra money, go for an X800XT. 2. PS 3.0 isn't really necessary on the X800, so ATI didn't add it. 3. The 6800 cards you will find are slightly faster in OpenGL applications than the X800, but it's somewhat insignificant. Personally, I'm planning to get an X800XT this summer. I don't mean to knock the 6800 cards; Nvidia has made a major comeback from the FX series, but I just feel that the X800 cards have a good lead over the GeForce 6800.
 
I don't think I'm gonna spend the extra money for the X800XT. Just can't see myself doing it, plus I can't really afford it. Heck, I paid $337 for this card when it was top of the line back in Dec of '01.
 
This seems to be the debate of the summer. Personally, I think I am going to wait for everything to hit retail and the prices to shake down before I make a real decision. These are also the cards I am looking at getting; it seems that getting either card is not a bad decision. I just want to see how far my video dollar will go.
 
The GT has been benchmarked by Tom's Hardware and Anandtech.

It's cheaper than the Pro by about £20 and has more features. According to Anand it performs slower in newer PS2-heavy games, but FarCry is the only one where it's actually slower (out of X2, Halo, and FarCry, ignoring Homeworld 2).

I would recommend waiting to see how everything looks after Nvidia releases the next set of drivers and FarCry goes PS 3.0.

The GT requires a lot less power...
 
Catsonar said:
Hey,

I'd say go with the X800 Pro; the picture quality is better. Plus the X800 Pro can be modded to an XT, very simple. Here's a link:

www.xtremesystems.org/forums/showthread.php?threadid=36003&perpage=25&pagenumber=1

The 6800s consume a lot of power as well. Do you at least have a 480-watt PSU? 'Cause that's what you need in order to run the 6800GT.

So far only the Sapphire cards are successful with the 16-pipe mod. The power ratings were dropped, and that 480W figure was for the Ultra version. I haven't seen anything showing definitive IQ for one or the other; they both do very well from the screenies I have seen.
 
I'd say give it a month or two after both cards are available on store shelves and then make the decision. Last-minute production changes could make notable differences in the performance of the retail cards. I am going to compare the 6800 Ultra to the XT PE before I buy.
 
I'd say just get the X800 Pro and enjoy a blisteringly fast card that is available right now. And if and when the paper-launched 6800 finally sees the light of day, the benchies are real, and SM3 is the stuff it's being hyped to be, then sell the X800 and buy one.
 
Don't forget that there is another issue to deal with here....
Linux driver support with nVidia has been superior to ATI's recently. To be honest, if you only want pure brute force in a Win32 (or 64) environment, then ATI might be the best choice. However, if you frequently get your UT2004 frag on in Linux as I do, a 6800GT or Ultra with an updated nvidia-kernel might cause you fewer headaches.

The bottom line is that it's your choice. Pick the one that suits your needs the best. :cool:
 
wilson502 said:
I wonder if my CPU will create too much of a bottleneck... :rolleyes:
To be honest, I don't think you'd notice the slightest difference between an X800 Pro and an X800XT because of that CPU bottleneck. Unless you plan on upgrading your CPU when you get your new gfx card, I'd say get a 9800 Pro-level card. Otherwise, I say get an X800.
 
Catsonar said:
Hey,

I'd say go with the X800 Pro; the picture quality is better. Plus the X800 Pro can be modded to an XT, very simple. Here's a link:

www.xtremesystems.org/forums/showthread.php?threadid=36003&perpage=25&pagenumber=1

The 6800s consume a lot of power as well. Do you at least have a 480-watt PSU? 'Cause that's what you need in order to run the 6800GT.

That's just not true. Nvidia has changed the minimum power requirements across the whole line: 350W for the Ultra and 300W for the GT. Maybe that's what's delaying the release? BTW, picture quality is a tough call; I'd call it a very slight edge to ATI, but nowhere near anything to make your decision by, like it was last generation.

Personally, I am waiting to see how 1. Nvidia's drivers mature, and 2. the GT is priced. The GT looks to be the very best card out there from either camp, especially if Nvidia spanks ATI in the DOOM3 benches like I anticipate it will; that will be the extra incentive I need to get the GT.

Yes, Shader 2.0 games run SLIGHTLY better on ATI hardware, but nothing I'll lose sleep over. My money is on DOOM3-based games anyway; that's the way I've always been, and I want my DOOM-based games to run as fast as possible, thank you. Plus, it seems to me the main advantage of PS 3.0, AFAICT, will be in speed optimizations, if there is a difference, less so in feature set. So those optimizations mean the card has room to improve much more than the ATI cards, with their already mature drivers (basically the same as the 9800 series) and no 3.0 support.
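To make that speed-optimization point concrete, here's a toy cost model (made-up numbers, not real shader code) of why SM3.0 dynamic branching can save work that an SM2.0 part pays for either way:

    # Toy model: SM2.0-style shaders evaluate both sides of a branch and
    # select the result; SM3.0 dynamic branching can skip the unused side.
    COST_LIT, COST_SHADOW = 20, 5  # made-up per-pixel instruction costs

    def cost_sm2(in_shadow, total):
        return total * (COST_LIT + COST_SHADOW)  # every pixel pays for both paths

    def cost_sm3(in_shadow, total):
        lit = total - in_shadow
        return lit * COST_LIT + in_shadow * COST_SHADOW  # pay only for the path taken

    print(cost_sm2(600_000, 1_000_000))  # 25000000
    print(cost_sm3(600_000, 1_000_000))  # 11000000 -- same image, less work

Real hardware branches over blocks of pixels, so the actual win depends on how coherent the branches are, but that's the basic idea.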

Great to have the forums back up; I've really been hurting without the HardForum! :)
 
oozish said:
My money is on DOOM3-based games anyway; that's the way I've always been, and I want my DOOM-based games to run as fast as possible, thank you. Plus, it seems to me the main advantage of PS 3.0, AFAICT, will be in speed optimizations, if there is a difference, less so in feature set.

So the smart thing would be to just wait until DOOM III comes out and see what is best for it then. Until then you're just babbling. babble babble
 
Liam said:
To be honest, I don't think you'd notice the slightest difference between an X800 Pro and an X800XT because of that CPU bottleneck. Unless you plan on upgrading your CPU when you get your new gfx card, I'd say get a 9800 Pro-level card. Otherwise, I say get an X800.

What CPU ISN'T creating a bottleneck on these cards right now, excluding uber-clocked EEs and FXs, which most of us don't have? I would recommend getting the X800 Pro over the 9800 Pro, unless cost is a concern.
 
emorphien said:
So the smart thing would be to just wait until DOOM III comes out and see what is best for it then. Until then you're just babbling. babble babble

Hey jerk, STFU. Like I didn't summarize a bunch of info that is known out there in my post, unlike your ignorant flame. Why even post if you're just going to be an A-hole, unless that's all you're good for?
 
Heheheh... I find it mildly humorous that you are leaning towards one supercard over the other because of... Half-Life and its mods.

Your GF3 can run it beautifully, can't it? :D

Anyway, best of luck choosing the card you want. It's a tough choice hehe.
 
Between those two specific choices I'm inclined to lean towards the 6800GT; that's the choice I'd make. Of course, the 6800GT isn't available yet, and by the time it is, the X800XT monster might be there to tempt you.
 
Hmm, when the 6800GT reaches the stores at $400, the X800 Pro will probably cost around $300 and the X800XT (probably 475 MHz core) will be around $400-430.
All this hype around the GT is useless IMO; when it reaches the stores you will have the 16-pipe X800XT priced the same.
I'm talking here about the X800XT, not the X800XT PE edition.
Also, if anybody thinks the GT will be an overclocking beast, that's just wrong. The GT will have single-slot cooling and one molex connector; in the best case we can hope for some 380-400 MHz core, and that's only if you add extra cooling, because this mofo runs really hot.
I really like the GT, an awesome card with big hardware features. If I had to choose between a Pro and a GT priced the same, I would take the GT without hesitation, but for a $100 difference it's just not worth it.
 
I doubt that the X800 Pro price will have dropped to $300 from its launch MSRP of $400 in the little time it will take NV to bring the 6800GT to market, especially if demand and the number of parts shipped keep availability as low as it is now.
 
Merlin45 said:
I doubt that the X800 Pro price will have dropped to $300 from its launch MSRP of $400 in the little time it will take NV to bring the 6800GT to market, especially if demand and the number of parts shipped keep availability as low as it is now.

The X800 Pro can already be preordered for $350 from Best Buy (if I remember right); the cheapest pre-order I've seen for the GT is $460. The GT will probably reach the stores in limited quantities in over a month.
I personally think that when both cards reach full production there will always be a $100 difference between them.
I may be wrong, but this is what I think.
 
IIRC there is a place selling preorders for the 6800U for $610; I doubt they are a good reflection of the final price. I expect the 6800GT to launch at the $400 price point and then drop to be price-competitive with the X800 Pro, which probably won't be too much of a drop.
 
I don't think I would do the hard mod to an XT, way too risky. If they come out with a softmod, I'd do it. You probably won't see prices shake down for both cards until next month or so. BTW, my GeForce 3 runs HL and its mods just fine. :)
 
An X800 Pro @ 530/1120 is pretty sweet. I don't think I would have gotten this kind of an overclock out of the 6800GT, so I'm happy. The Pro kicks all kinds of ass.
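Quick sanity check on what that memory overclock buys, assuming the Pro's 256-bit bus and reading 1120 as the effective DDR rate:

    # Memory bandwidth = effective rate x bus width (256-bit bus assumed)
    def bandwidth_gbs(effective_mhz, bus_bits=256):
        return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

    print(bandwidth_gbs(1120))  # ~35.8 GB/s at 530/1120
    print(bandwidth_gbs(900))   # ~28.8 GB/s at the stock 475/900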
 
I've got my X800 Pro to 540/540 with some volt modding. It can do more with better cooling.

Remember, the 6800GT is not available.

So drawing comparisons between a card that has been out for more than half a month and a card that is not available is not fair.

Speculate all you want about the 6800GT, but the fact is that none of you have it in your computer right now.
 
eraser_16 said:
1. If 16 pipes is what you're looking for and you have the extra money, go for an X800XT. 2. PS 3.0 isn't really necessary on the X800, so ATI didn't add it. 3. ...
ATI didn't add SM3.0 because they didn't know how to implement it. Now they are trying to make it look unimportant, and you fell for it. :p

Just trying to stop the spread of lies on this forum. It's enough that everybody here thinks Corsair XMS3200C2 is BH-5... :mad:
 
ATI didn't add SM3.0 because they didn't know how to implement it. Now they are trying to make it look unimportant, and you fell for it.

Just trying to stop the spread of lies on this forum.

Ahahahahahahaha!!!!

The second sentence is the most ironic thing I've seen written in light of the first sentence. If you really believe ATI didn't implement SM3 because they didn't know how, you are regurgitating PR (from nvidia in this case), which is a lie almost by definition.

Unless you have some internal ATI documentation that says "WTF guys, how do you do teh SM3.0, it's teh win! Oh screw it, we'll just settle for the fastest SM2.0+ around.", you're spreading a lie.
 
Think about it. First they say it's unimportant, and then they announce plans to add it in R450... Something fishy here.
 
The amusing thing to me here is that the "reviews" of the nVidia product are really previews. We won't know the truth until we get multiple retail cards in the channel for all four cards (6800 Ultra, 6800GT, X800 Pro, X800XT) and see lots of reviews. Until then most of this is conjecture, and the penis wagging about who does what better is mostly fanboy talk.

PS: To celebrate the re-opening of the forums I will eagerly await a flaming from some asshole who doesn't like my opinion.
 
Yes, there is something fishy.

But it smells a lot like:
"Do we really need PS3.0 at this point given the number of transistors it uses and the lack of additional function when we could get more clock speed and usability out of an SM2.0 part and really stick it to nvidia in the games they want people to test?"

Instead of:
"OMG we have no idea how to make an SM3.0 part even though the specifications are publicly availible and have been given to us long ago by MS! Screw it, let's just design this incredibly complex SM2.0+ part that runs faster in current games."

ATI made a bet with SM3.0 similar to what nvidia did with SM2.0.
Only SM3.0 appears to be less special or unique, in FarCry at least, than nvidia would have us believe. But we already knew that from the original discussion months ago about that infamous presentation where only the "PS3.0" part had clear, shiny water, despite the fact that even a GeForce3 or Radeon 8500 can do that effect (it's PS1.1, as a matter of fact).

Recognize those offset mapping floor shots?
They look shockingly like what nvidia said was "only possible with SM3.0".
Only those were taken with a 9800 Pro.
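For anyone wondering what offset mapping actually involves: the core of it is just a per-pixel UV shift along the view direction, scaled by a height sample, which is PS1.4/2.0-level math. A rough sketch (the function name and scale factor are mine, purely for illustration):

    # Offset (parallax) mapping in a nutshell: shift the texture lookup
    # toward the eye in proportion to the sampled surface height.
    def parallax_uv(u, v, view_ts, height, scale=0.04):
        vx, vy, vz = view_ts  # normalized view vector in tangent space
        return (u + height * scale * vx / vz,
                v + height * scale * vy / vz)

    print(parallax_uv(0.5, 0.5, (0.0, 0.0, 1.0), 0.8))  # head-on view: no shift
    print(parallax_uv(0.5, 0.5, (0.6, 0.0, 0.8), 0.8))  # grazing view: UV shifts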

edit to fix link
 
The thing is that in most interviews, they said they didn't want to implement it because it requires FP32 precision among other things, which would have required far more substantial architectural changes and would have ballooned the die size beyond what they deem acceptable (IIRC, they said 33% bigger). It isn't so much that they couldn't do it; it's that they didn't want to chance trying, failing, and having to start again. (NV, on the other hand, went all out and it paid off.) Although perhaps low-k would make it much harder to get acceptable yields out of a 200+ million transistor chip.
Besides, SM 3.0 is far less important IMHO than FP filtering and blending, which have real IQ and performance implications. (FP blending is the only way to efficiently implement HDR rendering; otherwise you have to use MRTs and multiple passes on the targets, which gets very messy and inefficient compared to using FP blending and filtering.)
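A quick numeric illustration of that HDR point (toy Python, not a real pipeline): a fixed-point framebuffer clamps every blend to [0, 1], so bright contributions saturate, while an FP buffer keeps the true sum for tone mapping later:

    # Additive blending into a clamped 8-bit-style buffer vs. a float buffer
    def blend_fixed(dst, src):
        return min(1.0, dst + src)  # clamps, like a fixed-point framebuffer

    def blend_float(dst, src):
        return dst + src            # FP blending keeps values above 1.0

    # Two light passes of 0.8 each:
    print(blend_fixed(blend_fixed(0.0, 0.8), 0.8))  # 1.0 -- highlight clipped
    print(blend_float(blend_float(0.0, 0.8), 0.8))  # 1.6 -- tone-map it later

Without FP blending you end up faking this with MRTs and extra passes, which is exactly the mess I mean.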
 
NV, on the other hand, went all out and it paid off
Well, we'll see what kind of boards we can actually buy, and at what prices, when they actually reach retail, eh?

Unfortunately, the news regarding nvidia's yields at IBM has all been bad so far. There's even a suggestion that nvidia will rush the NV45 out the door and use TSMC to fab it, leaving the 6800 as something of a repeat of the 5800U, only crippled by fabrication issues rather than a fudged design and too-aggressive clock speed targets.

I'm hoping the news is overstating the problem :(
 
I sincerely doubt that the NV45 will be fabbed at TSMC; they are very heavily backlogged with orders, and there's no way NV could get anything fabbed there any time soon. Plus, I read lately that NV's problems aren't yields at IBM, it's fitting into IBM's very tight schedule. The NV45 is simply an NV40 with PCI-E and, according to Anandtech, a slightly higher clock; no real boost there, not enough to make the NV40 disappear. Also, switching fabs to TSMC this late in the game, even if TSMC had space, would be a bad idea: it would cost a ton, take a long time, and the final product wouldn't be as good as it would be on the IBM .13 process it was designed for. The NV45 will be rushed out the door because the R423 will need competition; neither will offer anything substantial performance-wise over their predecessors, unless PCI-E provides some serious performance benefits. (The only way the NV45 would show up on AGP is with nvidia's HSI two-way bridge chip, and that would be ridiculous considering they already have an AGP-native chip with identical capabilities in the NV40.)
 
IBM has been having well-documented allocation and yield issues with its line. These are apparently so bad that they have now breached common practice and only require their customers to pay per functional die rather than per wafer put in.

Credit Suisse First Boston believes that NVIDIA’s ability to rapidly increase production of complex graphics processors may improve once the firm inks contracts with companies who have similar manufacturing technologies with IBM, such as Chartered Semiconductor.

NV45 will indeed be native PCI-E with the HSI chip working "backwards". Presumably clock speed targets will be higher as befits a speed-bump.

Whether you doubt nvidia will use TSMC or not is irrelevant to the fact of the matter :)
Santa Clara, CA and Hsin-Chu, Taiwan – February 24, 2004 – NVIDIA Corporation (Nasdaq: NVDA) today confirmed that it will be one of the first semiconductor companies to manufacture select up-coming graphics processing units (GPUs) at Taiwan Semiconductor Manufacturing Company’s (TSMC’s) (TAIEX: 2330, NYSE: TSMC) 0.11 µm (micron) process technology. NVIDIA will combine TSMC’s 0.11 micron process with its own innovative engineering designs, to deliver high-performance and low-power consumption in a graphics processor.

“The decision to move to this new process underscores our long-standing relationship between TSMC and NVIDIA,” stated Di Ma, vice president of operations at NVIDIA. “This new manufacturing technology, along with numerous architectural enhancements, enables us to continue delivering products that allow end users to interact with a wide variety of digital devices. We look forward to the new opportunities this advancement will allow us.”

TSMC’s 0.11 micron process technology is fundamentally a photolithographic shrink of its industry-leading 0.13 micron process. The process will be available in both high-performance and general-purpose versions using FSG-based dielectrics. Though actual results are design-dependent, TSMC’s 0.11 micron high-performance process also includes transistor enhancements that improve speed and reduce power consumption relative to its 0.13 micron FSG-based technology.

TSMC began 0.11 micron high-performance technology development in 2002 and product qualified the process in December of 2003. Design rules, design guidelines, SPICE and SRAM models have been developed, and third-party compilers are expected to be available in March. Yields have already reached production-worthy levels and the low-voltage version has already ramped into volume production. The 0.11 micron general-purpose technology is expected to enter risk production in the first quarter of next year.

And:
http://www.theinquirer.net/?article=15419

TODAY'S Taiwan Economic News is claiming that some large customers of IBM Micro, including Nvidia, Qualcomm, Cisco and Xilinx, are fleeing the Big Blue coop and giving their business to TSMC and UMC.

This is odd. IBM Microelectronics, to the best of our knowledge, is building Nvidia's NV40 graphics processor.

The article quotes IBM's CFO John Joyce as saying that the Microelectronics unit lost $150 million in its latest financial quarter because of defects with the 90 nano and 130 nano chips it's building at its 12-inch fab in New York state.

The same report claims that IBM "lured" the big names away from local foundries TSMC and UMC when it bounced into that market last year.

But there's another problem. Recent reports in other Taiwanese news outlets claimed that TSMC was chock a block on the foundry plant, and if those reports are true, that means the big customers will have no place to go.

As always, we will see in the end.
 
Hmm, do you know if the NV40 is fabbed on 12- or 8-inch wafers at East Fishkill? Because supposedly .13 on 8-inch wafers is one of the only processes that IBM gets really good yields on, especially compared to 12-inch wafers. If it is fabbed on 12-inch wafers, then they will probably switch to 8-inchers and get much better yields at lower overall volume. Even if they are unsatisfied with IBM, as the INQ said, they have nowhere else to go, as TSMC and UMC are booked solid.
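To put rough numbers on the 8-inch vs. 12-inch point (the die size is my assumption, roughly 290 mm^2 for a 220M-transistor NV40, and this ignores yield entirely):

    # Classic gross-dies-per-wafer approximation: wafer area / die area,
    # minus an edge-loss correction term.
    import math

    def gross_dies(wafer_diameter_mm, die_area_mm2):
        r = wafer_diameter_mm / 2
        return int(math.pi * r * r / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    print(gross_dies(200, 290))  # ~82 candidate dies per 8" wafer
    print(gross_dies(300, 290))  # ~204 per 12" wafer, before yield

So a 12-inch wafer holds roughly 2.5x the dies; sticking with 8-inch only makes sense if the yield there really is that much better.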
 