No R600 for X-mas :(

Silus said:
Funny how you talk about FUD in this thread, but you yourself say something like this. I know this is the ATI board, but don't show your fanboyism so blatantly.

What exactly in my statement is incorrect? Nvidia's G80 will not be a unified architecture. DX10 is built around that principle, which is why ATi has been working so closely with Microsoft (funny how this is exactly what we went through with DX9 and the utter dominance of the R300).

The ATi/AMD R600 is rumored to be a beast, with the latest RUMOR being that it is a 64-piped monster.

As for my statement about ATi's IQ, they have shown beyond a shadow of a doubt their leadership in that area, and there is no reason to believe they won't continue it with the R600.
 
Excuse me? But what the freaking frickity frack about DX10 dang nabbit to heck!!!!!! :mad:
I had B E T T A be able to run Crysis at 2560x1600 with DX10 at acceptable frames with whatever card either of them comes out with!!
Or I swear.....there will be heck to pay!!! :mad: :mad:
 
Marvelous said:
That's the thing, we don't know that yet... :confused:

Anything is better than nothing, I suppose...

R600 looks to be not much faster than the current crop of cards... I'm estimating it will be as fast as a 7950... But we'll see... ;)


Thank you for being the "I spit coffee all over my keyboard" guy in this thread. It was sorely lacking one.
 
|CR|Constantine said:
What exactly in my statement is incorrect? Nvidia's G80 will not be a unified architecture. DX10 is built around that principle, which is why ATi has been working so closely with Microsoft (funny how this is exactly what we went through with DX9 and the utter dominance of the R300).

The ATi/AMD R600 is rumored to be a beast, with the latest RUMOR being that it is a 64-piped monster.

As for my statement about ATi's IQ, they have shown beyond a shadow of a doubt their leadership in that area, and there is no reason to believe they won't continue it with the R600.

Nothing in your statement is established as true. Everything is based on rumors and your blatant fanboyism toward ATI.

And please do not start an IQ war again. I think it's been discussed to death: ATI has an advantage in AF, with its angle-independent filtering, and NVIDIA has an advantage in AA, with TSAA. The rest is subjective, and everyone will make their own choice about which card to get.
If you want to post in a thread that is fueled only by rumors, at least don't use words like "garbage" when referring to the G80, which obviously isn't out yet, so you're in no position to say something like that.
 
mashie said:
Lol, so in other words ATI/AMD will keep on doing high-end graphics as always.

AMD's comments made it sound like they might drop ATI from high-end CHIPSETS, chipsets meaning motherboard chipsets like the RD600, RD580, etc., NOT high-end graphics. Two totally separate markets.
 
Marvelous said:
That's the thing, we don't know that yet... :confused:

Anything is better than nothing, I suppose...

R600 looks to be not much faster than the current crop of cards... I'm estimating it will be as fast as a 7950... But we'll see... ;)

How can you say it "looks to be not much faster than the current crop of cards" if it's not even out yet?

And if your guess ends up being true, I would say it's a good thing. A card with performance similar to a GX2, but with DX10 support, is great IMHO.
 
Marvelous said:
That's the thing, we don't know that yet... :confused:

Anything is better than nothing, I suppose...

R600 looks to be not much faster than the current crop of cards... I'm estimating it will be as fast as a 7950... But we'll see... ;)


Well, it all depends on the way the R600 is set up. It can have 2x ALUs per array, which could end up as 128 shader ALUs, and I'm guessing it would land at the 650 million transistors that were rumored a long time ago. It could be more than 650M, though, since we really don't have any concrete information on the Xenos transistor count per array ;) The closest I've gotten is 4.6 million transistors per ALU, which works out to around 300 million for a 64-ALU chip, +150 million for Avivo, +100 million for DX10 functions, which is ~550 million. That's why this round I've pretty much given up on trying to figure out who will end up faster: look at it one way and nV has a huge lead (64 shader ALUs for ATi vs. 32 pipelines for nV), look at it another and ATi has a huge lead (multi-ALU arrays against a 32-pipeline G80).
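To lay the transistor math out (a back-of-the-envelope sketch only; the 4.6M-per-ALU figure is Xenos guesswork and every other number is rumor):

```python
# Rough R600 transistor estimate from the rumored figures above.
per_alu = 4.6e6       # guessed transistors per shader ALU (Xenos-derived)
avivo = 150e6         # rumored budget for the Avivo video block
dx10_extras = 100e6   # rumored budget for the new DX10 functions

for alus in (64, 128):
    total = alus * per_alu + avivo + dx10_extras
    print(f"{alus} ALUs -> ~{total / 1e6:.0f}M transistors")
```

64 ALUs lands at ~544M, right around the ~550M above; 128 ALUs at the same per-ALU cost would be ~839M, well past the rumored 650M, so a 2x-ALUs-per-array chip would need a leaner per-ALU budget than the Xenos guess.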

Edit: new rumor says the G80 has 48 pixel shaders. This is from a Microsoft developer day document; no idea if they're talking about pipelines or ALUs, since the G70 was quoted as having 24 pixel shaders when it was released, and the Microsoft document is from around or just before that time.

But common sense would dictate that the G80 and R600 will end up with similar performance, since both companies are targeting similar things: the games that will be played on these cards and the limits of the manufacturing processes being used. Neither company is going to overextend itself, since that would hurt its bottom line in the short run, and in the long run as well.
 
Marvelous said:
That's the thing, we don't know that yet... :confused:

Anything is better than nothing, I suppose...

R600 looks to be not much faster than the current crop of cards... I'm estimating it will be as fast as a 7950... But we'll see... ;)

HAHAHAHAHAHHAHAHAHA
HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHHA
HAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAHAHAHAHA

I mean, yea, sure, you're right, definitely.
 
psychot|K said:
HAHAHAHAHAHHAHAHAHA
HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHHA
HAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAHAHAHAHA

I mean, yea, sure, you're right, definitely.


Actually, if it only has 64 ALUs he is correct: it will be around the same performance as an X1900 XTX. So for all you guys bitching about it, he was on the money with the current rumors. It's just that the rumors are so generic it's hard to speculate. And that's if it only has 64 ALUs; I think it will have more.
 
HighTest said:
Seriously, regardless of who releases first, it will have less of an impact this time around than in the past. The most significant reason is that the less-informed, non-hardcore crowd (which is not us) will most likely purchase their card around the same time they purchase Vista. Major forecasts for computer sales this Christmas are significantly down due to the delay of Vista, hence the industry's angst toward MS at this point in time.
Marketing is way ahead of you...... "Vista Ready" ;)
 
So, if the RD600 is out in limited supply, meaning it will cost $$$, what mobo/chipset should we be going for to overclock our Conroes?

-jcl
 
|CR|Constantine said:
What exactly in my statement is incorrect? Nvidia's G80 will not be a unified architecture. DX10 is built around that principle, which is why ATi has been working so closely with Microsoft (funny how this is exactly what we went through with DX9 and the utter dominance of the R300).

Thought there should be some clarification here. DX10 is built around SM 4.0, a unified programming API (HLSL, or High-Level Shading Language). The implementation of SM 4.0 in hardware can be discrete shaders (pixel, vertex, and the new geometry) or unified shaders, and the driver needs to ensure that all programming calls are handled seamlessly. Hardware unified shaders are not a "requirement"; ATI is simply choosing that route. nVIDIA believes that specialized shaders (especially a greater number of pixel shaders) still have a performance benefit, and will use the same unified programming API that all DirectX 10 cards use to access its discrete SM 4.0 hardware. It's all semantics, and irrelevant to the DX10 game developer.

One possible debate is what the real, important need in games will be. Currently, in high-end DirectX 9.0c games, you need a large number of high-performance pixel shaders for HDR in FP16 and the newer FP32 modes (HL2: Episode One uses integer-based HDR, which all DirectX 9 cards can do along with AA). nVIDIA is hedging its bet that, even in an SM 4.0 universe, users will still need more "pixel-shader-like" performance to produce those results. Microsoft wanted a unified programming method for shaders to simplify the complexity of programming these features, to expose the hardware as one resource and allow effectively unlimited shader instruction counts, which opens possibilities that the previous programming methods limited. The reality is that this "pixel processing", even under a new unified programming method, is still likely to be the highest demand in the rendering pipeline for the effects we all want. If nVIDIA's gamble pays off, it's possible this results in a stronger SM 4.0 showing for nVIDIA. If ATI's gamble pays off, then nVIDIA will need yet another generation to catch up. What does the loser of the gamble get? A card that is technically DirectX 10 but underperforms on SM 4.0 results in comparison to its competitor. Similar to the FX: before ATI developed its cards that supported SM 2.0, there wasn't any comparison to make, and people were excited about the FX. Once ATI showed it had a card that provided significantly better SM 2.0 results, with full-speed FP24 precision while the FX had to fall back from FP32 to FP16 partial precision to stay competitive, they dominated. Someone's going to be the FX card this round, and someone's not. Until we have both products in hand and shipping DirectX 10 titles to compare (along with a 3DMark07), we won't know which is which.

The big part of the problem: when the cards are released this fall, before Vista/DirectX 10 is available, you'll only be able to test them for DirectX 9.0c results, and they'll both perform well there. It's only the SM 4.0 tests, later, that will show the reality.

One last possibility: they both got the design right and, like the current gen, perform within 5 FPS of each other.

So until product is in hand, let's stop this :confused: guessing game and the speculation about how one feature implementation is the roxorz and the other the suxorz, cause it's anyone's guess. :rolleyes:
 
ivzk said:
Thank you for being the "I spit coffee all over my keyboard" guy in this thread. It was sorely lacking one.

And you for being the "I smoke crack when I have nothing to contribute" guy... It was clearly conceived... :rolleyes:


Silus said:
How can you say it "looks to be not much faster than the current crop of cards" if it's not even out yet?

And if your guess ends up being true, I would say it's a good thing. A card with performance similar to a GX2, but with DX10 support, is great IMHO.

Of course I'm guessing from what's available to us... But hey, I'm a great fortune teller... ;)



razor1 said:
Actually, if it only has 64 ALUs he is correct: it will be around the same performance as an X1900 XTX. So for all you guys bitching about it, he was on the money with the current rumors. It's just that the rumors are so generic it's hard to speculate. And that's if it only has 64 ALUs; I think it will have more.

Thank you, Razor... Some guys know what the hell they're talking about while everyone else just follows... :cool:
 
razor1 said:
Well, it all depends on the way the R600 is set up. It can have 2x ALUs per array, which could end up as 128 shader ALUs, and I'm guessing it would land at the 650 million transistors that were rumored a long time ago. It could be more than 650M, though, since we really don't have any concrete information on the Xenos transistor count per array ;) The closest I've gotten is 4.6 million transistors per ALU, which works out to around 300 million for a 64-ALU chip, +150 million for Avivo, +100 million for DX10 functions, which is ~550 million. That's why this round I've pretty much given up on trying to figure out who will end up faster: look at it one way and nV has a huge lead (64 shader ALUs for ATi vs. 32 pipelines for nV), look at it another and ATi has a huge lead (multi-ALU arrays against a 32-pipeline G80).

Edit: new rumor says the G80 has 48 pixel shaders. This is from a Microsoft developer day document; no idea if they're talking about pipelines or ALUs, since the G70 was quoted as having 24 pixel shaders when it was released, and the Microsoft document is from around or just before that time.

But common sense would dictate that the G80 and R600 will end up with similar performance, since both companies are targeting similar things: the games that will be played on these cards and the limits of the manufacturing processes being used. Neither company is going to overextend itself, since that would hurt its bottom line in the short run, and in the long run as well.

I don't have all the info on what was released... But what I do know is that Nvidia was rumored at 48 pixel pipes and the R600 at 32 pipes... As for shader math units, I don't have anything concrete... except the R600 rumored at 64 shaders and the G80 at 48... Hmmm... maybe we have different sources...
 
razor1 said:
Actually, if it only has 64 ALUs he is correct: it will be around the same performance as an X1900 XTX. So for all you guys bitching about it, he was on the money with the current rumors. It's just that the rumors are so generic it's hard to speculate. And that's if it only has 64 ALUs; I think it will have more.

And people were saying the X1800 XT would just be an OC'd X850 XT, which turned out to be false as well.

Like others are saying, wait till it's out. I doubt ATI would release a next-gen product only 15% faster than previous cards.
 
Trimlock said:
And people were saying the X1800 XT would just be an OC'd X850 XT, which turned out to be false as well.

Like others are saying, wait till it's out. I doubt ATI would release a next-gen product only 15% faster than previous cards.

Last time I checked, a 7950 is not 15% faster than a 1900....

I'm estimating 50% faster, or lower...
 
Trimlock said:
And people were saying the X1800 XT would just be an OC'd X850 XT, which turned out to be false as well.

Like others are saying, wait till it's out. I doubt ATI would release a next-gen product only 15% faster than previous cards.


Well, the X1800 XT wasn't the card other people were expecting either. It could barely beat a 7800 GTX; in many cases it was pretty much a tie, and with no AA and AF it lost almost every benchmark. I don't think ATi will go down that road again with a very unbalanced chip; they wasted quite a bit of resources doing it.

I agree the R600 will most likely come out around 50-100% faster than the current gen; that puts it a good 20%-50%+ faster than the GX2. I also expect this for the G80, and possibly a tad more, just because older games will most likely already be written for the G80's non-unified pipelines. ATi will see the extra boost when particular shaders bottleneck the G80, and will take the lead in those situations (when either vertex shaders or pixel shaders become the bottleneck, which most likely won't show up till we have some serious DX10 games). I think this is where the Inq got the idea of 3DMark05 being much faster on an R600, because it's quite vertex-shader limited in some instances.
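A quick sketch of how those percentages hang together (the GX2-over-single-card ratio here is my own round assumption, not a benchmark):

```python
# Map "50-100% faster than current gen" onto a lead over the 7950 GX2.
# Assumed, not measured: the GX2 averages ~1.3x a single X1900 XTX.
gx2_over_single = 1.3

for r600_over_single in (1.5, 2.0):   # 50% and 100% over current gen
    lead = r600_over_single / gx2_over_single - 1
    print(f"{r600_over_single:.1f}x current gen -> {lead:.0%} over the GX2")
```

That prints ~15% and ~54%, the same ballpark as the 20%-50%+ range.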

ATi might be more forward-looking with their design, but whether that's necessary at this point is the question.
 
Marvelous said:
Last time I checked, a 7950 [GX2] is not 15% faster than a 1900 [XTX]....
Thereabouts in some instances, though I'm not sure I'm following this debate and how it pertains to... well, whatever.

razor1 said:
ATi might be more forward-looking with their design, but whether that's necessary at this point is the question.
And a good question. We'll have to see. I must say I'm getting tired of waiting, though. We need ourselves a good leak.
 
phide said:
Thereabouts in some instances, though I'm not sure I'm following this debate and how it pertains to... well, whatever.

Just say it, man... don't worry, I can take it... :eek:
 
Something to ponder: ATI has the X1950XT, which in CrossFire kills any game available right now while maintaining superior IQ. Why release the R600 when it isn't even needed at this point? Now NV needs the G80 so they have a single-GPU solution that doesn't get trounced by ATI.
 
Marvelous said:
And you for being the "I smoke crack when I have nothing to contribute" guy... It was clearly conceived... :rolleyes:

I say coffee, you say crack. Must've hit a nerve. Sorry about that.
 
ivzk said:
I say coffee, you say crack. Must've hit a nerve. Sorry about that.

Must be your day off to collect rocks...

edit:
"Tall goofy Chinese guy's voice when Jean-Claude Van Damme enters the Kumite in Bloodsport"

"O kay USA!" middle finger up...
 
R1ckCa1n said:
Something to ponder: ATI has the X1950XT, which in CrossFire kills any game available right now while maintaining superior IQ. Why release the R600 when it isn't even needed at this point? Now NV needs the G80 so they have a single-GPU solution that doesn't get trounced by ATI.


It's not about performance leadership at this point; it's about getting it out for Vista. Vista is going to push mid-level to high-end graphics cards to the masses. At least, that's how nV and ATi are going to market their next-gen cards.
 
razor1 said:
ATi might be more forward-looking with their design, but whether that's necessary at this point is the question.

That's what I'm saying too. ATI is taking a lot of chances with this; they are radically changing their GPU, and it seems it will either be a GREAT thing or a really BAD thing.

At the same time, it's exciting to see whether it works or flops. I, for one, am willing to wait to see it before speculating that it won't.
 
R1ckCa1n said:
Something to ponder: ATI has the X1950XT, which in CrossFire kills any game available right now while maintaining superior IQ. Why release the R600 when it isn't even needed at this point? Now NV needs the G80 so they have a single-GPU solution that doesn't get trounced by ATI.

And what game isn't killed by 7900 GTX SLI? Not to mention Quad SLI, which still needs to mature a bit, but it will happen.
When you compare the X1950 XTX vs the 7900 GTX, the X1950 XTX wins in some games and the 7900 GTX wins in some others. So I wouldn't say ATI leads the single-GPU "war" by much.

Now slap two GPUs into one card and NVIDIA holds the crown, with no competition. I have to say that, although I'm not loyal to any brand, I'm very impressed with what NVIDIA did with the GX2. It doesn't take any more space than a 7900 GTX or an X1900 XTX, consumes less power, and gives you almost double the performance... I think it's quite an achievement with this architecture.

NVIDIA doesn't need to release the G80 because they're "losing". Not at all. They need to release it because of the upcoming launch of Vista, plus the fact that they've already milked the G70 architecture to death (though they will still launch a 7900 GTO), and it's time for something new.
The same goes for ATI. They need a new product, and even though their current architecture is not as "old" as NVIDIA's, they don't have anything that is DX10-compliant, so they NEED to launch the R600.
 
R1ckCa1n said:
Something to ponder: ATI has the X1950XT, which in CrossFire kills any game available right now while maintaining superior IQ. Why release the R600 when it isn't even needed at this point? Now NV needs the G80 so they have a single-GPU solution that doesn't get trounced by ATI.
They already have the fastest single card with the GX2, which from what I've seen around the forums is selling well. Even the 1950 refresh did not top it, so I doubt they're all too worried about it.
 
Personally, I think the R600 could surprise all of us. But then again, so could the G80.

In any situation that favors a non-unified architecture (a balanced vertex and pixel shader load), the G80 will almost undoubtedly win. At the other extreme (a heavily skewed load, where one fixed shader type becomes the bottleneck while the other sits idle), ATI gets the win.
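A toy model of that trade-off (the unit counts and workloads are purely illustrative, not from either company):

```python
# Fixed vertex/pixel pools vs. one unified pool of shader units.
def split_time(v_work, p_work, v_units, p_units):
    # Each pool only works its own job; the slower pool sets the frame pace.
    return max(v_work / v_units, p_work / p_units)

def unified_time(v_work, p_work, units):
    # One pool chews through the combined workload, rebalancing as needed.
    return (v_work + p_work) / units

# Balanced frame: the split design's fixed ratio matches the work mix.
print(split_time(8, 24, 8, 24), unified_time(8, 24, 32))   # 1.0 vs 1.0

# Pixel-heavy frame: the split design's vertex units idle while its pixel
# pool bottlenecks; the unified pool just shifts units over.
print(split_time(2, 30, 8, 24), unified_time(2, 30, 32))   # 1.25 vs 1.0
```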

Which is best remains to be seen.
 
Soparik2 said:
only DFI will be making the RD600 mobo as of now
When is it supposed to come out? And for what price?

What kind of FSB can we see from it? Also, I need 3 PCI slots, and I'm worried that with the 3x PCIe slots I won't be able to get that. (I can't put a PCI card in a PCIe slot, right?)
 
PRIME1 said:
They already have the fastest single card with the GX2, which from what I've seen around the forums is selling well. Even the 1950 refresh did not top it, so I doubt they're all too worried about it.
Take the dual-PCB/GPU solution away from NV and they get killed. The 7950GTX2 is nothing more than a "we are getting dominated this generation, so what can we do but release a dual-GPU/PCB card to contend with a single GPU/PCB".
 
jcll2002 said:
When is it supposed to come out? And for what price?

What kind of FSB can we see from it? Also, I need 3 PCI slots, and I'm worried that with the 3x PCIe slots I won't be able to get that. (I can't put a PCI card in a PCIe slot, right?)
Sorry, but I think there is only 1 PCI slot on the RD600 from DFI.
FSBs won't be as high as expected cuz DFI is screwing with the boards.
High = 2000 MHz QDR (a 500 MHz actual clock, quad-pumped).
 
R1ckCa1n said:
Take the dual-PCB/GPU solution away from NV and they get killed. The 7950GTX2 is nothing more than a "we are getting dominated this generation, so what can we do but release a dual-GPU/PCB card to contend with a single GPU/PCB".
Hell, they could slap 20 PCBs and 40 GPUs on it. As long as it fits in one PCIe slot, it's still one card, and the fastest single card at that. You could say that ATI was losing, so they had to crank up the heat, noise, and power draw just to compete.

Either way...... DirectX 10 cards may be coming soon, but the large majority of games released in the next 2 years or more will be DirectX 9. Only a very few games this year were 9.0c titles using SM 3.0, and there have been cards supporting that for over 2 years.
 
R1ckCa1n said:
Take the dual-PCB/GPU solution away from NV and they get killed. The 7950GTX2 is nothing more than a "we are getting dominated this generation, so what can we do but release a dual-GPU/PCB card to contend with a single GPU/PCB".
A make-believe car has two engines (I recall one Ford concept supercar having two V10s, for instance, so this is a reasonable analogy). A competitor's car has one very large V12. The former car has two engines, but it is indeed faster than the competitor's car. Faster is faster is faster is, well, faster. Is the fact that Car A has two engines, rather than one, a real issue? Sure, two engines guzzle more fuel (but don't tie this to power consumption, because the 7950 GX2 consumes less than a single-GPU "half-memory" X1900 XTX), perhaps the car weighs more ("sluggish" IQ), perhaps it can't be driven in some countries (damn, no vertical sync!). And perhaps the way to add more engines (Quad SLi) doesn't work so great, so it misses out on a potential market. Okay, great - whatever.

Is it a case of the manufacturer of Car A frantically trying to compete with Car B, or is it a case of the manufacturer of Car A taking a different road? This is something I think you need to consider here. I've already talked about this before, and it's not as clear-cut as you seem to believe it is. How long do you think it takes to design and launch a product like the 7950 GX2? The release gap between the X1900s and the GX2 is a hair over four months. Personally, I don't see the GX2 being on the design table for only three months and change. I believe the GX2 was in the pipeline before G70 became a reality, as it was an interesting market nVidia wanted to pursue. We can be pretty confident this was the case because of the affiliation with Dell, and because Quad SLi (and thus dual-PCB cards) was a public reality before the X1900 dropped into our laps, and a consumer reality shortly thereafter. A cost-effective SLi card without the bother of making people buy two cards, having to match them and bridge them and so on. Cost-effective because A) the core components were already being produced and B) the design was not a radical departure from the norm.

I look at the GX2 and see a video card. Two PCBs, two GPUs. Great, whatever, so long as it works. It's a card designed for a market. Whether or not it could be called a "catch-up" part is irrelevant. Was the X1900, released only three months after the X1800, a "catch-up" part from ATi, or was it just a product designed to bridge any gaps?

I suppose what really bothers me is that you don't even know the name of the card you're slamming on such technical merits. What exactly is a 7950 GTX2?
 
phide said:
I suppose what really bothers me is that you don't even know the name of the card you're slamming on such technical merits. What exactly is a 7950 GTX2?

You can see what an impact that card made on me, beyond providing a good laugh. :eek:
 