NVIDIA to Launch GeForce 8600 Series on April 17th

Marvelous

NVIDIA is set to launch the mainstream 8600GTS (G84-400) and 8600GT (G84-300), as well as the 8500GT (G86-300), on April 17th. The GeForce 8600GTS and 8600GT will have 256MB of GDDR3 memory onboard and sport a 128-bit memory interface, but no HDMI yet. The GeForce 8600GTS is meant to replace the 7950GT and 7900GS, while the 8600GT replaces the 7600GT and the 8500GT replaces the 7600GS.

The 8600GTS will be clocked at 700MHz core / 2GHz memory and comes with dual D-DVI, HDTV and HDCP support, but requires external power. The 8600GTS will be priced between US$199 and US$249. Another mainstream model, the 8600GT, will be clocked at 600MHz core / 1.4GHz memory and has two variants: one with HDCP (G84-300) and the other without (G84-305). This model doesn't require external power. The 8600GT will be priced between US$149 and US$169.

The last model, meant for the budget segment, is actually a G86 core cut down to meet the value segment pricing structure. The 8500GT will be clocked at 450MHz core / 800MHz memory, with 256MB of DDR2 onboard, and comes in two variants: one with HDCP (G86-300) and the other without (G86-305). The 8500GT will be priced between US$79 and US$99. Towards the end of April, we can expect NVIDIA to release the GeForce 8300GS for the budget market to replace the 7300 series.

The NVIDIA 80nm G84 and G86 line-up will meet head-on with ATi's 65nm DX10 offerings, where the mainstream RV630 is slated to arrive in May and the value RV610 earlier, in April.

http://www.vr-zone.com/?i=4723
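
For what it's worth, those quoted clocks pin down the theoretical memory bandwidth. A quick back-of-the-envelope sketch (assuming the 2GHz and 1.4GHz figures are effective GDDR3 data rates, and taking the 7950GT at its stock 1.4GHz effective on a 256-bit bus):

[code]
# Peak memory bandwidth = (bus width in bits / 8) bytes x effective data rate
def bandwidth_gb_s(bus_bits, effective_mt_s):
    # effective_mt_s: effective memory data rate in MT/s
    return bus_bits / 8 * effective_mt_s / 1000

print(bandwidth_gb_s(128, 2000))  # 8600GTS: 32.0 GB/s
print(bandwidth_gb_s(128, 1400))  # 8600GT:  22.4 GB/s
print(bandwidth_gb_s(256, 1400))  # 7950GT it replaces: 44.8 GB/s
[/code]

So on paper the 8600GTS has noticeably less bandwidth than the 7950GT it is slotted to replace.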
 
Looks like a 128-bit memory bus. Hmmm... It doesn't mention how many pipes, but I'm guessing 12 pixels per clock and 16 textures per clock for the 8600GTS.

That works out to a peak fill rate of 11200 Mtexels/s. Note the 7950GT has 13200 Mtexels/s. It should be about as fast as an X1900XT, I'm thinking.
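
That guess is just core clock times texture units. A quick sketch of the arithmetic behind both figures (the 16-TMU count for the 8600GTS is pure speculation; the 7950GT's 24 TMUs at 550MHz are its stock configuration):

[code]
# Peak texel fill rate = core clock (MHz) x texture units -> Mtexels/s
def fillrate_mtexels(core_mhz, tmus):
    return core_mhz * tmus

print(fillrate_mtexels(700, 16))  # speculated 8600GTS: 11200 Mtexels/s
print(fillrate_mtexels(550, 24))  # stock 7950GT:       13200 Mtexels/s
[/code]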
 
It seems their price points and performance aren't as exciting as the earlier reports gave us hope for... damn.
 
Anyone else a bit disappointed with the 8600GTS? 128-bit and only 256MB, WTF!
 
I think you guys should wait for an 8900GS alternative in the fall. These aren't really worth it if you already have a mainstream card like a 7900GS. Unless, of course, DX10 games come out and give you extra eye candy, there's no point.
 
Hmmm, looks like I may shell out some extra money now for the 8800GTS. I want at least 512MB of memory. Unless benchmarks beg to differ and these turn out to be pretty powerful even with a crippled memory interface and memory size.
 
Well, I guess we can take this disappointment to the bank, and in exchange a lesson should be learned...

If we were disappointed by how the myth fell short of the truth, then you can bet that if NVIDIA does release an 89xx card, it won't be what you are expecting...

Sucks.. :rolleyes:
 
Looks like a 128-bit memory bus. Hmmm... It doesn't mention how many pipes, but I'm guessing 12 pixels per clock and 16 textures per clock for the 8600GTS.

That works out to a peak fill rate of 11200 Mtexels/s. Note the 7950GT has 13200 Mtexels/s. It should be about as fast as an X1900XT, I'm thinking.

That is not how the G80 works.
 
I wouldn't believe this garbage just yet...

Rumors usually create chaos. Wait for the official press release.
 
You guys are looking at this all wrong. The cards in the $150-$250 bracket have to come in at a certain cost - they can't put 256-bit buses on them just for kicks. With G80, a 256-bit bus means 16 ROPs, which is probably way too much transistor budget for the part. R600 might have 16 ROPs for all we know.

Compare a 7600GT to a 6800GS if you want to get a feel for how relevant a 256-bit bus is at that price point. It's pretty naive to look at a single spec like bus width and hope to know how a card performs.
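
To put rough numbers on that comparison (stock clocks quoted from memory, so treat them as approximate):

[code]
def bandwidth_gb_s(bus_bits, effective_mt_s):
    # Peak bandwidth in GB/s from bus width and effective memory data rate
    return bus_bits / 8 * effective_mt_s / 1000

print(bandwidth_gb_s(128, 1400))  # 7600GT, 128-bit @ 1.4GHz: ~22.4 GB/s
print(bandwidth_gb_s(256, 1000))  # 6800GS, 256-bit @ 1.0GHz: ~32.0 GB/s
[/code]

Despite roughly 30% less bandwidth, the 7600GT generally matches or beats the 6800GS in games, which is exactly why bus width by itself tells you so little.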
 
Would you care to elaborate instead of just saying "that's not how it works"?

The G80 uses stream processors, not pixel pipes and shader units ;)

They're fully programmable, and can do more functions than previous ROPs or pixel pipes.

At least, that's what the current G80s use; who knows if NVIDIA is changing things up for the lesser variants (although it would be stupid to do so).
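
A toy model of why that decoupling matters (purely illustrative, made-up numbers, not how the hardware actually schedules anything): with fixed-function pipes, one side of the chip sits idle whenever a frame is skewed toward vertex or pixel work, while a unified pool just retasks its units.

[code]
# Illustrative only: fixed vertex/pixel split vs. a unified shader pool.

def fixed_frame_cycles(vertex_units, pixel_units, vertex_work, pixel_work):
    # Each unit handles one job per cycle; the busier side bottlenecks the frame.
    return max(vertex_work / vertex_units, pixel_work / pixel_units)

def unified_frame_cycles(total_units, vertex_work, pixel_work):
    # Units retask on the fly, so only the total workload matters.
    return (vertex_work + pixel_work) / total_units

# A pixel-heavy frame: 100 vertex jobs, 900 pixel jobs, 24 units either way.
print(fixed_frame_cycles(8, 16, 100, 900))  # 56.25 cycles (pixel side bottlenecks)
print(unified_frame_cycles(24, 100, 900))   # ~41.7 cycles
[/code]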
 
The G80 uses stream processors, not pixel pipes and shader units ;)

They're fully programmable, and can do more functions than previous ROPs or pixel pipes.

At least, that's what the current G80s use; who knows if NVIDIA is changing things up for the lesser variants (although it would be stupid to do so).

Yes, I know that, but don't tell me pipes and bandwidth don't affect performance, because they do.
 
I hate that we are still getting new cards without HDCP.

These cards are supposed to be the end-all/be-all for HD viewing, but you CAN'T properly watch Blu-ray and HD DVD WITHOUT HDCP. Ridiculous.
 
Damn, I thought there was going to be some kind of 8600 Ultra... fawk.

I need SOMETHING to replace this X850XT.
 
Damn, I thought there was going to be some kind of 8600 Ultra... fawk.

I need SOMETHING to replace this X850XT.

Actually, an 8600GTS or GT is going to replace that X850XT pretty damn well. I know there are no reviews, but hell, even a 7600GT is a little faster than an X850XT, or equal in some scenarios. An 8600 will no doubt be faster than the 7600, or there is hardly any point in releasing it except that it supports DX10.

I'm just worried it's not going to be that much faster than my X1900XT. I'm sure it's going to be, but is it that noticeable? DX10 is not a big deal to me; I'm just building a new computer soon and giving my sig computer to my brother, and I want a card that is equivalent to an X1900XT.
 
Damn, I thought there was going to be some kind of 8600 Ultra... fawk.

I need SOMETHING to replace this X850XT.

I found this article on http://www.gpureview.com/

This news comes straight from China via the Google translator. Luckily, the specs themselves require no translation: 8600 Ultra and 8600 GT specifications, plus 8300 GT and 8300 GS specifications. Apparently the 8600 Ultra has an MSRP of $179 and will launch in Q3 of 2007. The 8600 GT has an MSRP of $149 and will launch this May, along with the 8300 GT, which will have an MSRP of between $89 and $99. The 8300 GS is reported to have an MSRP of $69 to $79 and has no launch date specified.

Last but not least, enjoy this XFX 8600 (could be GT or Ultra) pic.

[Image: photo of the XFX 8600 card]

[Image: DX10 mid-range spec chart]
 
Looks nice. The new cards need unified shaders to run DX10, I believe, so they will have them.
 

Thank you. The page you linked provides several useful examples for illustrating my point. These are the kind of numbers that you have insisted on using to predict and compare real-world performance, both in this thread and in others. Raw fillrate and bandwidth numbers lined up on a chart seem to hold great attraction for you.

Let's start with the table at the top and compare the 8800GTS to the 7900GTX. On that chart, every theoretical number favors the 7900GTX except for the memory bandwidth, where the 7900GTX has 80% of the power of the 8800GTS. Based on these numbers, someone adhering to your philosophy would expect the 7900GTX to be a superior performer except in highly memory-bandwidth-dependent situations. In the real world, the 8800GTS is clearly superior across the board.

Let's go on down to the 3DMark charts. This time, let's look at the graphs for the X1950XTX compared to the 7900GTX. On every chart, except for at the lowest resolution in the top one, the 7900GTX outperforms the X1950XTX. But in real-world testing the X1950XTX wins almost every time, often by the kind of margins seen in the charts in favor of the 7900GTX.

One more notable item: In the fourth text paragraph of the article, we see this statement--"The G80's texturing abilities are also superior to the G71's in a way that our results above don't show." Hmmm, so even after calculating raw theoretical numbers and running some synthetic benches, both of which have yielded different results than real-world testing, there's still more hidden architectural mojo that we haven't accounted for.

Can you see it yet? The interrelated factors that determine the real-world results of a given GPU architecture are too complex to be predicted by old-school theoretical calculations. This was true in the G7X/R5XX generation and you failed to acknowledge it then. It is even more true in the G80 era and you still won't acknowledge it. Time for a paradigm shift.
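
(As an aside, that 80% figure is easy to verify from the stock specs; a quick sketch, using each card's stock effective memory data rate:

[code]
def bandwidth_gb_s(bus_bits, effective_mt_s):
    # Peak bandwidth in GB/s from bus width and effective memory data rate
    return bus_bits / 8 * effective_mt_s / 1000

gtx_7900 = bandwidth_gb_s(256, 1600)  # 51.2 GB/s
gts_8800 = bandwidth_gb_s(320, 1600)  # 64.0 GB/s
print(gtx_7900 / gts_8800)            # 0.8 -> the 80% figure above
[/code]

And yet, as argued above, the one theoretical number the 8800GTS wins doesn't begin to explain how thoroughly it wins overall.)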
 
I'm not gonna upgrade from a card with a 256-bit memory bus to one with a 128-bit memory bus. Even if the raw bandwidth is more, it doesn't feel like an upgrade. :(
 
Thank you. The page you linked provides several useful examples for illustrating my point. These are the kind of numbers that you have insisted on using to predict and compare real-world performance, both in this thread and in others. Raw fillrate and bandwidth numbers lined up on a chart seem to hold great attraction for you.

Let's start with the table at the top and compare the 8800GTS to the 7900GTX. On that chart, every theoretical number favors the 7900GTX except for the memory bandwidth, where the 7900GTX has 80% of the power of the 8800GTS. Based on these numbers, someone adhering to your philosophy would expect the 7900GTX to be a superior performer except in highly memory-bandwidth-dependent situations. In the real world, the 8800GTS is clearly superior across the board.

Let's go on down to the 3DMark charts. This time, let's look at the graphs for the X1950XTX compared to the 7900GTX. On every chart, except for at the lowest resolution in the top one, the 7900GTX outperforms the X1950XTX. But in real-world testing the X1950XTX wins almost every time, often by the kind of margins seen in the charts in favor of the 7900GTX.

One more notable item: In the fourth text paragraph of the article, we see this statement--"The G80's texturing abilities are also superior to the G71's in a way that our results above don't show." Hmmm, so even after calculating raw theoretical numbers and running some synthetic benches, both of which have yielded different results than real-world testing, there's still more hidden architectural mojo that we haven't accounted for.

Can you see it yet? The interrelated factors that determine the real-world results of a given GPU architecture are too complex to be predicted by old-school theoretical calculations. This was true in the G7X/R5XX generation and you failed to acknowledge it then. It is even more true in the G80 era and you still won't acknowledge it. Time for a paradigm shift.

Hey, smarty pants. Yes, I know about stream processors, and they make a difference in pixel and vertex shading. Yes, new tech beats old, and a more robust card beats fill-rate monsters. The fact that a 256-bit memory bus beats 128-bit is obvious. The fact is the G80 still has pipes that determine performance. If fill rate and bandwidth didn't matter, why are people complaining about the 8600 series' 128-bit bus? Because it makes a difference. :rolleyes:
 
Actually, an 8600GTS or GT is going to replace that X850XT pretty damn well. I know there are no reviews, but hell, even a 7600GT is a little faster than an X850XT, or equal in some scenarios. An 8600 will no doubt be faster than the 7600, or there is hardly any point in releasing it except that it supports DX10.

I'm just worried it's not going to be that much faster than my X1900XT. I'm sure it's going to be, but is it that noticeable? DX10 is not a big deal to me; I'm just building a new computer soon and giving my sig computer to my brother, and I want a card that is equivalent to an X1900XT.

And it probably won't be. Rule of thumb: Generation n's mid-range is approximately equal to Generation (n-1)'s high-end. If you want to upgrade from a last-gen high-end, upgrade to a current-gen high-end (8800 GTS/GTX), otherwise you will be wasting your $$.
 
The fact is the G80 still has pipes that determine performance. If fill rate and bandwidth didn't matter, why are people complaining about the 8600 series' 128-bit bus? Because it makes a difference. :rolleyes:

And you can't quantify that difference until you have real hardware and software (both drivers and applications) in your hands. And no, the page you linked doesn't do anything to "prove" that the G80 has "pipes," so that's an issue still up in the air. Why do you insist that unified-architecture cards have pipes? What do you define as a pipe?

If you mean that units are clustered into groups that tend to operate on one set of data at a time, I suppose that's still true. But when the functionality of the units can change on the fly, and when the data no longer flows end-to-end down a single path until output, and when the GPU has the ability to mix and load-balance, we are stretching the definition of a pipe into something much different than it was two years ago--different enough that making predictions about performance based on a handful of specs is a meaningless exercise.

Brent gave you the short version, I gave you the long. You wouldn't listen to someone of his caliber, so I'm not surprised that you won't listen to me. Perhaps he'll be kind enough to step back in at some point and let us know if I did an accurate job of representing his position. Until then, me and my smart pants will do just fine, thank you ever so much.
 
Hmm, the 8600GTS looks tempting... but I will wait for the reviews/battles to come out with it competing against the 8800GTS before I make a move...

I hope that Ultra comes out... yeah, the prices aren't confirmed, but maybe it will somehow sit between the 8600GTS and 8800GTS... and have a minimum 256-bit bus.

... Well, let's all hope for the 8600 Ultra... but wouldn't that make it somewhat the same as the 320MB 8800GTS?

Only benchmarks in the end will clear the smoke! :D
 
Hmm, the 8600GTS looks tempting... but I will wait for the reviews/battles to come out with it competing against the 8800GTS before I make a move...

I hope that Ultra comes out... yeah, the prices aren't confirmed, but maybe it will somehow sit between the 8600GTS and 8800GTS... and have a minimum 256-bit bus.

... Well, let's all hope for the 8600 Ultra... but wouldn't that make it somewhat the same as the 320MB 8800GTS?

Only benchmarks in the end will clear the smoke! :D

If the 64 stream processors vs. 96 for the 8800 GTS is true, then the 8600 Ultra won't even come close. 50% more is a lot more. There is nothing to say other than: if the architectures are comparable and the *speculated* specs are right, the 8600 Ultra will be good, but not even close.
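
The gap in plain numbers, assuming the rumored count is right and the shader clocks end up similar (the equal-clock part is an assumption on my part):

[code]
gts_8800_sps = 96    # known 8800 GTS stream processor count
ultra_8600_sps = 64  # rumored count from this thread

# At equal shader clocks, shader throughput scales with unit count.
print(gts_8800_sps / ultra_8600_sps)  # 1.5 -> 50% more shader throughput
[/code]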
 
And you can't quantify that difference until you have real hardware and software (both drivers and applications) in your hands. And no, the page you linked doesn't do anything to "prove" that the G80 has "pipes," so that's an issue still up in the air. Why do you insist that unified-architecture cards have pipes? What do you define as a pipe?

If you mean that units are clustered into groups that tend to operate on one set of data at a time, I suppose that's still true. But when the functionality of the units can change on the fly, and when the data no longer flows end-to-end down a single path until output, and when the GPU has the ability to mix and load-balance, we are stretching the definition of a pipe into something much different than it was two years ago--different enough that making predictions about performance based on a handful of specs is a meaningless exercise.

Brent gave you the short version, I gave you the long. You wouldn't listen to someone of his caliber, so I'm not surprised that you won't listen to me. Perhaps he'll be kind enough to step back in at some point and let us know if I did an accurate job of representing his position. Until then, me and my smart pants will do just fine, thank you ever so much.

Wow, that is the best summary rundown of this I've ever seen. Major reps :)


You got it right: the architecture has changed, and the word "pipeline" is no longer relevant to the current architecture since everything is now decoupled. Current GPUs are more like CPUs these days; the 8800 GTX is like having a 128-core CPU.


My question for Marvelous is: please define your use of the word "pipeline". Perhaps there is simply some misunderstanding about definitions going on.
 