8800 GTX released too early??

|CR|Constantine said:

Once again, do you have any sources that show Nvidia will in fact have a refresh by the time the R600 is released? Moreover, what kind of refresh could be done within the confines of 90 nanometer? Surely, if they are to get more power, they would need to move to 80 nm, which, I contend, they cannot do within 3 months.




Hold onto rumors? You're the one claiming Nvidia will have a refresh by the time the R600 comes out, with absolutely no evidence or even hearsay to back it up. The R600 is not a "rumor" and will be out early next year.

Secondly, how on Earth does the 8800GTX make the ATI R600 look crappy, when on paper the R600 is a more advanced card and will be released when it can actually be used?

Moreover, I would not be so quick to brag about Nvidia and their apparent DX10 goodness when NO ONE has benchmarked the bloody thing in an actual DX10 environment, especially against another competitor.

Do I believe the Nvidia 8800GTX is a great card? Sure, of course it is. Was it rushed? No. Do I think it will beat the R600 (its competitor)? No.

Throw all the jokes and crap you want, but when the R600 is released I will be sure to look you up in the forums and gloat happily.

Well, you are taking this seriously...
What's with the "I'll defend the still nonexistent card to the death" thing?
Where are your sources confirming it will have a true 512-bit memory interface? And where does it say that that rumor alone will grant it anything more than memory bandwidth compared to the G80? We know absolutely nothing about the R600 as fact. All you have are rumors about it.
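For the bandwidth half of that question, the arithmetic is simple. Below is a rough sketch (the helper function is purely illustrative): the 8800 GTX figures are its published 384-bit, 1800 MT/s GDDR3 configuration, while the R600 line pairs the rumored 512-bit bus with a guessed GDDR4 data rate, not a confirmed spec.

```python
def bandwidth_gb_s(bus_width_bits, data_rate_mt_s):
    """Theoretical peak memory bandwidth in GB/s.

    bus_width_bits: width of the memory interface in bits
    data_rate_mt_s: effective transfer rate in MT/s (DDR "effective MHz")
    """
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * data_rate_mt_s / 1000  # MB/s -> GB/s

# 8800 GTX: 384-bit bus, GDDR3 at 900 MHz (1800 MT/s effective)
print(bandwidth_gb_s(384, 1800))   # ~86.4 GB/s

# Rumored R600: 512-bit bus; 2000 MT/s GDDR4 is a guess, not a spec
print(bandwidth_gb_s(512, 2000))   # ~128 GB/s, if the rumor pans out
```

Even if the rumor holds, that extra bandwidth says nothing about shader throughput, which is the point being made here.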

The 8800 GTX/GTS are out there, waiting to be bought, and they have proved to be the best of the best at what they do: render graphics.

At this point, all I can say is (as I already did): the R600 had better beat the G80, or ATI will have a hard time selling it, because it will come much later.

About NVIDIA moving forward with their 80 nm fab process, read this:

http://techreport.com/onearticle.x/11175
 
|CR|Constantine said:
First, I was not talking to you.

If you weren't talking to me, then you shouldn't have quoted me and then asked "What Sources..."

|CR|Constantine said:
Silus said:
It better, because it won't be competing with the 8800 GTX anyway. It will be competing with the refreshed version.

What *sources* do you have about this phantom NVIDIA refresh part?

Moreover, if ATi releases this card in the January/February time frame, you think Nvidia is going to have a refresh ready (which would have to be an 80 nm part) in 3 months ..... RIIIIIIGGGGGGHHHHHTTTT.

|CR|Constantine said:
Second, the person I was talking to claims that the R600 will be competing against a refreshed G80, not the current model, to which I called bullsh1t.

I never said the G80 was not impressive, but with ZERO evidence on how it does in a DX10 environment against another competitor, I believe waving the OH MY GOD WE WIN TEH DX10 GOD SPOT flag is a bit premature, especially when there are NO BLOODY DX10 GAMES OUT NOW.

So if people with ZERO evidence are going to claim the G80 is the best at DX10 with no proof, then I will happily submit that the R600, with its SECOND GENERATION UNIFIED ARCHITECTURE, TRUE 512BIT bus, 1GB of GDDR4 and 80 NANOMETER process, will do better than the G80 in DX10.

Can I prove that? Of course not, because I have no hard data, and neither do you.

If you actually read what I typed, you would notice I also said the R600 will not compete with the G80, but with a refreshed version of it.

You can say all you want. The 8800 GTX/GTS are real; you can buy one of them (or two). The R600 is worth nothing on paper until it proves it really is better.
 
i still want to see where Constantine got the "G80 runs DX10 through a software emulation"

i can point to threads where people that should know have said Nvidia may very well have a refresh ready, but it's pointless

my point is, everything about R600 is speculation right now, nothing about G80 is speculative other than DX10 performance, but the fact that M$ gave it the "DX10" compliant blessing tells me one thing, that it's done right
 
JDAdams said:
Personally I'm not interested in the 8800 since it uses a software layer to emulate the DX10 unified shader architecture instead of actually having unified shader support in hardware; this is a little too similar to the Voodoo5 emulating DX7 hardware transform and lighting for my liking.

I'll wait for a true DX10 part before I buy, whether that be from nVidia or ATi. In the meantime my dual X1900s can take care of Oblivion or anything else at 1600x1200 with decent AA/AF on.

It seems I mistakenly thought |CR|Constantine said that, so the question I asked him goes to you:

Where are you getting this "software layer to emulate DX10" stuff? Have you even read the reviews of the G80?

I suggest you do. The [H] review is very thorough on the new architecture. Please give it a try.

@|CR|Constantine

Sorry, that question was for JDAdams, not you.
 
nobody_here said:
i still want to see where Constantine got the "G80 runs DX10 through a software emulation"

i can point to threads where people that should know have said Nvidia may very well have a refresh ready, but it's pointless

my point is, everything about R600 is speculation right now, nothing about G80 is speculative other than DX10 performance, but the fact that M$ gave it the "DX10" compliant blessing tells me one thing, that it's done right

I made the same mistake. It wasn't him that said that. It was JDAdams.
 
Silus said:
I made the same mistake. It wasn't him that said that. It was JDAdams.


they both said it, Constantine said it here

http://www.hardforum.com/showpost.php?p=1030207758&postcount=70

i suggest the OP read this article after reading the [H] article

http://www.beyond3d.com/reviews/nvidia/g80-arch/

Any D3D10 accelerator must support all base API features without any exceptions, developers requiring a level playing field in order to further the PC as a platform and fruitful playground for graphics application development, be they games or GPGPU or the next generation of DCC tools, or whatever the world feels like building. Performance is the differentiator with D3D10, not the feature check list. There's no caps bit minefield to navigate, nor a base feature to be waived. So the big new inflection point probably should have been a big hint.

goes into waayyyy more technical detail; if it needed a software emulation layer to run DX10, these guys would know it
 
I said it, then he said it - I thought it was common knowledge that the G80 didn't have unified shader hardware, but having had a look, it now seems there's no definitive information on whether it does or doesn't :confused:

Edit: Ah - that B3D article would seem to indicate it does, not that we'll really know until a DX10 app tries to use the shaders...
 
I bet what I am about to say has been repeated about 10 times in the past 4 pages...

---
IMO: Nvidia's rush to release the new GTX card is basically a 'holiday' rush deal; they'll probably limit supply and keep the price at a standstill until the very end of December (start of Jan.).

Why? The new ATI Pwnijiiiii Card will probably launch in short supply (ATI...so sly...ha) and at around 600 bucks retail.

It seems the best bargain for an 8800 GTX is around 450 - 480 bones...

Early? Yes...DX10 releases? Soon? I don't know exactly when....ArmA will be out by Dec. in the USA...so I am looking forward to that...(I think it will be DX10 compatible)
 
SX2233 said:
.......nVidia could've perfected the card better instead of rushing it out... .


please explain what is not perfect about it and what needs to be improved, assuming you own one of course...
 
In my opinion, and this is only what I think, yes, the G80 was rushed. If the G80 did not support DX10, no one would make such a big deal out of it. What is the point of having a video card that supports DX10 when there are no games that support it? Sure, you could play Oblivion at INSANE resolutions, but why did they have to stick DX10 in it and charge extra for that?

Just think about it for a second. People who bought the G80 for $650 are also the people who are going to be buying the next Nvidia DX10 card, because chances are that when the first DX10 game comes out, the G80 will be too weak to play it.
 
The bottom line is this: the 8800GTX is a fantastic card and in no way was it rushed out into the marketplace. It has arrived in the appropriate time frame for Vista's release. ATi (once again) is late to the party. However, since DX10 games have also been delayed, ATi will not be suffering as much as one would think.

Secondly, for the record, NO ONE HERE has any clue how the 8800 will work under Vista, let alone DX10. Odds are it should work fine, but in the world of high-end parts, "just working" does not cut it. The real proof of the pudding is which card, the 8800 or the R600, will be faster and better looking in DX10 titles. Until then, all we have is that the 8800 is a monster of a DX9 card with DX10 capability.

It's up to each individual consumer to decide what is best for them. Personally, I am tickled pink that Nvidia FINALLY got their act together in terms of IQ. Now that you can get uber quality IQ from both camps, and any top-end card can play games at playable frame rates, all that is really left is availability, price and driver support.

But make no mistake about it: when Crysis was demo'd the other day, it ran like ass on an 8800 and well below 60FPS. Anyone thinking that they will be kicking some serious ass in Crysis with this card had better think again.

What has always held true is that whoever comes out with the latest card just before a monster resource-hogging title will win the day. So far it looks as though the R600 will be released when Crysis is released, and I DOUBT Nvidia will have something ready (in volume) at that time.
 
|CR|Constantine said:
The bottom line is this: the 8800GTX is a fantastic card and in no way was it rushed out into the marketplace. It has arrived in the appropriate time frame for Vista's release. ATi (once again) is late to the party. However, since DX10 games have also been delayed, ATi will not be suffering as much as one would think.

Secondly, for the record, NO ONE HERE has any clue how the 8800 will work under Vista, let alone DX10. Odds are it should work fine, but in the world of high-end parts, "just working" does not cut it. The real proof of the pudding is which card, the 8800 or the R600, will be faster and better looking in DX10 titles. Until then, all we have is that the 8800 is a monster of a DX9 card with DX10 capability.

It's up to each individual consumer to decide what is best for them. Personally, I am tickled pink that Nvidia FINALLY got their act together in terms of IQ. Now that you can get uber quality IQ from both camps, and any top-end card can play games at playable frame rates, all that is really left is availability, price and driver support.

But make no mistake about it: when Crysis was demo'd the other day, it ran like ass on an 8800 and well below 60FPS. Anyone thinking that they will be kicking some serious ass in Crysis with this card had better think again.

What has always held true is that whoever comes out with the latest card just before a monster resource-hogging title will win the day. So far it looks as though the R600 will be released when Crysis is released, and I DOUBT Nvidia will have something ready (in volume) at that time.


Uber quality on NVIDIA's side, not ATI's.
You have to give them the edge now. ATI's IQ is now inferior to NVIDIA's. There were too many threads praising ATI's AF, and I was one to praise it, but now NVIDIA's on "top", in both AA and AF, so you need to acknowledge that, if you are being fair.

Crysis didn't run like "ass". Plus, the game is not finished and not even slightly optimized, and it still ran pretty well. Now that Crytek has a card to work with, you'll see that the G80 will be the recommended card to run it.
 
|CR|Constantine said:
The bottom line is this: the 8800GTX is a fantastic card and in no way was it rushed out into the marketplace. It has arrived in the appropriate time frame for Vista's release. ATi (once again) is late to the party. However, since DX10 games have also been delayed, ATi will not be suffering as much as one would think.

Secondly, for the record, NO ONE HERE has any clue how the 8800 will work under Vista, let alone DX10. Odds are it should work fine, but in the world of high-end parts, "just working" does not cut it. The real proof of the pudding is which card, the 8800 or the R600, will be faster and better looking in DX10 titles. Until then, all we have is that the 8800 is a monster of a DX9 card with DX10 capability.

It's up to each individual consumer to decide what is best for them. Personally, I am tickled pink that Nvidia FINALLY got their act together in terms of IQ. Now that you can get uber quality IQ from both camps, and any top-end card can play games at playable frame rates, all that is really left is availability, price and driver support.

But make no mistake about it: when Crysis was demo'd the other day, it ran like ass on an 8800 and well below 60FPS. Anyone thinking that they will be kicking some serious ass in Crysis with this card had better think again.

What has always held true is that whoever comes out with the latest card just before a monster resource-hogging title will win the day. So far it looks as though the R600 will be released when Crysis is released, and I DOUBT Nvidia will have something ready (in volume) at that time.


still waiting for the "G80 runs DX10 in software emulation" linkage
 
JDAdams said:
Personally I'm not interested in the 8800 since it uses a software layer to emulate the DX10 unified shader architecture instead of actually having unified shader support in hardware;

WTF :confused: :confused:
 
I don't think it was rushed at all. Vista and DX10 were just delayed. Microsoft is severely hurting the PC gaming market.
 
MastA said:


don't fret, these two obviously misinterpreted something somewhere, perhaps an article on the development of Crysis where the devs mentioned having to run the game engine through a software emulator for DX10 processing and that it was dog slow (as it should be), but this was obviously not because DX10 hardware doesn't exist, probably more because they needed to start development of the game long before hardware was available, and DX10 software emulation was all anyone could provide them, and switching so near completion would only complicate things

my take on it is they had to run it this way to make sure they stayed true to the DX10 spec without catering to a specific IHV's "version" of being DX10 compliant, this way when it is released, it will run on both GPUs the same, or from the standpoint of the software, it won't care if you are running it on an NV or ATI GPU, it will be running pure true DX10 spec, so any advantages seen when running Crysis on an NV or ATI part will be purely because of a more efficient design

but that's just me thinking out loud, since neither of the two offenders cares to back up their claim with anything at all.... :rolleyes:
 
Chernobyl1 said:
So come on then.
How has the 8800 been rushed out like you claim in post #1?

I never claimed anything; it was simply a question, a hypothesis. It's too bad you can't see that.
 
Silus said:
It better, because it won't be competing with the 8800 GTX anyway. It will be competing with the refreshed version.

What specs are you referring to anyway? 80 nm fab, 512 bit memory interface? That's absolutely no indication of final performance, and the only thing where it surpasses the G80 (if true) is memory bandwidth. Speculation is fine. I myself started a thread last week about the refreshed G80 - the G81 - but at this point, there's simply nothing that can even be compared to the G80.

and nVidia is guaranteed to have a refresh by the time the X2900 is released? :rolleyes:
 
nobody_here said:
there's nothing else to compare it to, both companies' highest-end products always get compared, even when there's a generation leap on one side but not the other

remember A64 vs. P4?

same thing happens every cycle with video cards, the newest and fastest from one side can only be compared/contrasted to the newest and the fastest from the other side

so now, why don't you address the points i made in my post instead of taking the cheap way out, little boy, here, i will repost it for you since you seem to have overlooked it before
:rolleyes:

what specs? actually, the only things i have seen so far that differ are the use of GDDR4 and a supposed 512bit memory bus

GDDR4 is not anything to write home about (remember when DDR2 system memory first came out, it sucked), it's still not polished, GDDR3 is mature and easy to get in large quantities, i think NV did the right thing with that choice

as far as a 512bit bus goes, i think it is overkill honestly, i don't think there is a game or CPU that is going to be able to push a card so hard that a 512bit bus will provide any advantage over a 384bit bus, time will tell

So based on your theory, GDDR4 > GDDR3? A 512bit bus is not overkill; look at Crysis and many other DX10 games such as Turok.
 
SX2233 said:
So based on your theory, GDDR4 > GDDR3? A 512bit bus is not overkill; look at Crysis and many other DX10 games such as Turok.


Just give up already. You're obviously not going to win this battle.
 
MrGuvernment said:
please explain what is not perfect about it and what needs to be improved, assuming you own one of course...

how about giving it 1GB of memory and a faster GPU clock
 
Why give up? I find this whole nonsensical debate very amusing.

EDIT: Hell, might as well join the fray tonight.

SX2233 said:
how about giving [the 8800 GTX] 1GB of memory and a faster GPU clock
This means you've confirmed that Crysis uses ~one gigabyte of surface data?

Do tell, sir!
 
Sigh. I can't understand how there could be so much ignorance and misinformation about something that is covered in detail all over the internet. Remember, folks: reading is fundamental!

As for the "early" bit: Jen-Hsun would have liked the G80 architecture to have been released last year. So far from being early, it's actually quite late.
 
SX2233 said:
So based on your theory, GDDR4 > GDDR3? A 512bit bus is not overkill; look at Crysis and many other DX10 games such as Turok.

you know, it's really hard to take you seriously man, you start a thread that is as worthless as every other post you have made, simple one-liner posts, avoiding the issues, and not providing any significant info, so i tell you what, i'm gonna put on my tinfoil hat just for you since we can't seem to talk reason with you and you obviously don't understand things as well as you wish you did...k? ;)


to answer the GDDR4 vs. GDDR3 question, um, no, not now, GDDR3 is much more mature, cheaper, easier to get at high speeds with stability and low latencies

GDDR4 is new, harder to get in quantities to satisfy a hard card launch i would think, has higher latency than GDDR3 (which negates the higher clockspeed), more expensive, etc.....

just like the move from DDR to DDR2 system memory, DDR2 sucked bad and was more expensive at first

eventually GDDR4 will replace GDDR3 in video cards, but by then Nvidia will be using it too i am sure, they just decided to go with "what we know" instead of banking on GDDR4 being available and cheap enough to not affect a hard launch
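The latency point above comes down to one ratio: absolute access time is roughly CAS cycles divided by memory clock, so a higher-clocked part with proportionally more CAS cycles gains little in real access time. A small sketch with made-up, illustrative cycle counts (not published GDDR3/GDDR4 timings):

```python
def access_latency_ns(cas_cycles, memory_clock_mhz):
    """Absolute CAS latency in nanoseconds."""
    return cas_cycles / memory_clock_mhz * 1000

# Hypothetical numbers for illustration only
print(access_latency_ns(9, 900))    # GDDR3-ish: 10.0 ns
print(access_latency_ns(12, 1100))  # GDDR4-ish: ~10.9 ns despite the higher clock
```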

here we go with the black helicopter theories....

if you believe that R600 is going to come out with GDDR4 and a 512bit bus and is not going to be too expensive or too hard to get or both, then you have to be open to the idea that Nvidia will also have a card ready to launch at the same time with GDDR4 memory and a 512bit bus.......did you ever stop to think that Nvidia might have released this card at the perfect time to soak up the profits that ATI won't be getting, which means more money to pour into the next big thing, and already has a non-neutered 8900GTX with GDDR4 and a 512bit memory bus waiting....just waiting for ATI to launch, then bam, steal their thunder....and be able to do it for cheaper because they reaped the benefits of the holiday shopping craze.....so they don't have to charge an arm and a leg for the newer tech......

i hope ATI does just that....slams the G80 with a $500 stunner of a card with massive availability, because it would drive G80 prices down and force Nvidia to show their cards, all of which is good for us, the consumers, but it just isn't going to happen unless AMD wants to bankrupt that part of their business, and trust me, with the state of affairs AMD is in right now in the desktop market, they don't want to lose any more mainstream market share

my point is, all of this is speculative until it is released, right now the R600 can't beat anything because it cannot be bought by anyone.....do you see?

also, we don't know if a 512bit memory bus will be of any use at any time in the near future.....remember SM3.0? the NV 6 series featured SM3.0, but was it powerful enough to use it fully and really take advantage of it? NO, that didn't come until the 7 series

to be honest, a 512bit bus and GDDR4 could be a really expensive waste in first gen DX10 cards, we may not see a 512bit bus or GDDR4 provide any advantage at all until say the end of 2007, early 2008, so in the meantime, ATI will be passing on all of the costs associated with going with an expensive PCB to accommodate a 512bit bus, all of the increased costs involved in going with GDDR4 memory, and all of the associated availability issues.......and i am going out on a limb here.....

i bet if an ATI X2900XTX with a 512bit bus and 1GB of GDDR4 memory comes out and can barely be found and is selling for $800+ but is no faster than the G80, you will be one of the ones in line waiting to get it just because "it has a 512bit bus and GDDR4 memory, so it has to be better right", i mean.......512 is more than 384, and GDDR4 is newer than GDDR3 right???.....so it has to be better right...??

why don't you go start a thread over at some other forum board where a bunch of "bigger is better" types hang out so you can find some people who believe the crap you believe with nothing to base it on other than, it's newer and bigger so it has to be faster.... :rolleyes:
 
nobody_here said:
why don't you go start a thread over at some other forum board where a bunch of "bigger is better" types hang out so you can find some people who believe the crap you believe
For reference, these can be found here and, to some extent, here.
 
phide said:
For reference, these can be found here and, to some extent, here.

actually, some of the most knowledgeable people are over at Beyond 3D, now, i don't know about their forum membership, which is quite uncontrollable, but they sure know their GPU programming.....did you read their G80 architecture article?

if you have a site that gives an even more thorough and technical insight into the G80 GPU, please, linkage now..
;)
 
nobody_here said:
did you read their G80 architecture article?
At your earlier request, yessir. There's no question that Beyond3D's staff writers are some of the savviest out there. Unfortunately, there are plenty of forum members who don't subscribe to reality.

It depends on the sub-forum, but you're liable to run into some of these strange folk at the B3D forums.
 
phide said:
At your earlier request, yessir. There's no question that Beyond3D's staff writers are some of the savviest out there. Unfortunately, there are plenty of forum members who don't subscribe to reality.

It depends on the sub-forum, but you're liable to run into some of these strange folk at the B3D forums.


this is true, but they are easy to spot everywhere, they start threads like this one.....ROFL
 
Brent_Justice said:
Do you have any facts, or even opinions, to back up why you think the card is not "perfected" as you put it? What is it missing in your opinion?

You made a very blanket statement without any proof to back it up.


They could've spent more time figuring out how to make it shorter for a start. Second they could've waited a bit longer so they could build it on a smaller process as well. It really is pointless to get an 8800 GTX right now if you already have a 7900/X1900 class card because there really isn't any game the latter cards can't handle. Having an uber high end DX10 card right now is just for the sake of a bigger e-penis and not much else. Who knows how well it will perform when actual games that demand its power like Crysis, Quake Wars, Bioshock etc surface?
 
5150Joker said:
They could've spent more time figuring out how to make it shorter for a start. Second they could've waited a bit longer so they could build it on a smaller process as well. It really is pointless to get an 8800 GTX right now if you already have a 7900/X1900 class card because there really isn't any game the latter cards can't handle. Having an uber high end DX10 card right now is just for the sake of a bigger e-penis and not much else. Who knows how well it will perform when actual games that demand its power like Crysis, Quake Wars, Bioshock etc surface?


there are many games out there that cannot be played at the high resolutions demanded by large-widescreen gamers, and now they can play these games at these insane resolutions with unheard-of AA/AF levels at playable framerates; remember, the 8800 also upped the ante in the IQ dept and is offering basically 16xAA at 4xAA cost
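The "16xAA at 4xAA cost" claim refers to G80's coverage sampling AA (CSAA), which stores only 4 full color/depth samples per pixel and treats the remaining samples as coverage-only. A rough per-pixel storage sketch, assuming 32-bit color, 32-bit depth and a few bits per coverage sample (the helper and byte counts are illustrative; the actual on-chip layout isn't public):

```python
def aa_bytes_per_pixel(color_depth_samples, coverage_samples,
                       color_bytes=4, depth_bytes=4, coverage_bits=4):
    """Rough per-pixel framebuffer cost of a multisampling mode."""
    stored = color_depth_samples * (color_bytes + depth_bytes)
    coverage = coverage_samples * coverage_bits / 8  # coverage is just a few bits
    return stored + coverage

print(aa_bytes_per_pixel(4, 4))     # 4x MSAA:  ~34 bytes
print(aa_bytes_per_pixel(4, 16))    # 16x CSAA: ~40 bytes, close to 4x MSAA
print(aa_bytes_per_pixel(16, 16))   # 16x MSAA: ~136 bytes, four times the storage
```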
 
5150Joker said:
Who knows how well it will perform when actual games that demand its power like Crysis, Quake Wars, Bioshock etc surface?
Why would you assume that performance in these games will be sub-par? I assume that performance in these titles you've listed will be quite excellent going from what I've seen. G80 is more than two times faster than the already capable G71 in a number of scenarios, especially those nasty pixel shader intensive scenarios we're going to become very familiar with.

5150Joker said:
Having an uber high end DX10 card right now is just for the sake of a bigger e-penis and not much else.
It couldn't possibly be for unparalleled performance and image quality in current games, of course. Why, buying a $650 graphics card for anything other than increasing the size of your so-called "e-penis" would be quite pointless.

5150Joker said:
Second they could've waited a bit longer so they could build it on a smaller process as well.
Had they built it on a smaller process, clock speeds would be higher, and we could shrug off this totally pointless 2x+ performance boost over the previous architecture. Only two times faster? What a sham indeed!

There will never be a "perfect" graphics card, but having been a graphics card buyer for over eight years, I'd say this is as close as anyone has ever come. I don't think anyone would really disagree with that, and if they do, certainly they'd have a better reason than "it's too long" (or so I would hope).
 
5150Joker said:
Having an uber high end DX10 card right now is just for the sake of a bigger e-penis and not much else.
You sound like someone who just got out of a cold pool.
 
SX2233 said:
how about giving it 1GB of memory and a faster GPU clock

First of all, it would have to go to 1.5GB, not 1GB.
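The 1.5GB figure follows from the bus width: a 384-bit interface is normally built as twelve 32-bit channels with one memory chip on each, so total capacity comes in multiples of twelve times the chip density. A quick sketch using the DRAM densities common at the time (the helper and per-chip sizes are assumptions for illustration):

```python
CHANNEL_WIDTH_BITS = 32           # one GDDR3 chip per 32-bit channel

def framebuffer_mb(bus_width_bits, chip_megabytes):
    """Total memory when every channel gets one chip of the given size."""
    chips = bus_width_bits // CHANNEL_WIDTH_BITS
    return chips, chips * chip_megabytes

print(framebuffer_mb(384, 64))    # (12, 768)  -> the shipping 8800 GTX config
print(framebuffer_mb(384, 128))   # (12, 1536) -> the next step up is 1.5GB
print(framebuffer_mb(320, 64))    # (10, 640)  -> the 8800 GTS, for comparison
# An even 1GB would need 1024/12 MB per chip, which isn't a real part.
```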

Second, why strap more expensive memory to it when no game is going to make use of it?

And exactly how fast do you think you're going to get 681 million transistors running on a 90nm process with acceptable yields? If you have some magical insight into the situation, you should give nVidia a call...they're always looking for self-proclaimed experts to tell them how to do their jobs.

nVidia is running a business, let's not forget that.
 
5150Joker said:
They could've spent more time figuring out how to make it shorter for a start. Second they could've waited a bit longer so they could build it on a smaller process as well. It really is pointless to get an 8800 GTX right now if you already have a 7900/X1900 class card because there really isn't any game the latter cards can't handle. Having an uber high end DX10 card right now is just for the sake of a bigger e-penis and not much else. Who knows how well it will perform when actual games that demand its power like Crysis, Quake Wars, Bioshock etc surface?

First of all, if you're complaining about the size of the card, you're on the wrong forum.

If they had waited longer to build it on a smaller process (as ATi is doing with R600), they'd be missing this extremely lucrative holiday season where they have no competition. Staying at 90nm and conserving die space by offloading some functions onto the NVIO chip was pure genius.

There's a hell of a lot you can do with an 8800GTX that you can't do with any other card. It has the best AA and AF on the planet, and can run games at resolutions that no other card in the world can. Faster is better, and if you don't understand that...once again...you're on the wrong forum.

As for its performance in future games...in the very, very unlikely case that the card doesn't play your latest game of choice as fast as some other younger, more attractive card, you can always eBay it and buy something better.

You, my friend, are what we call a naysayer. When every review site on the face of the planet gives a new video card a thumbs up, and you start whining about "it's too big", that's when you know it's time to watch C-SPAN, or upgrade to the latest version of AOL...or whatever it is you people do.
 
SX2233 said:
I never claimed anything; it was simply a question, a hypothesis. It's too bad you can't see that.

rofl,
You made this statement
"nVidia could've perfected the card better instead of rushing it out"
No hypothesis there!

It seems you evade the truth even when the facts are indisputable.
It's too bad you can't see that ;)
 