GT200 55nm rumored to rival 4870x2

The 4870X2 has fixed most of these issues. They have fixed microstuttering and many other multi-GPU issues. This is the concept that I don't understand: many people assume that the technology behind the X2 and the direction AMD is heading are the same as what nVidia has done. It's not. Ever think about why AMD chose GDDR5 over GDDR3?



GDDR5 has the ability to overcome a huge hurdle in multi-GPU solutions: a shared frame buffer. This wasn't in place in the preview; my hope is that they will have it in place for the release.

The fact about AMD is that they are designing their cards to be multi-GPU capable and to scale together. nVidia isn't. They make a card where you can have two of them do AFR, but they aren't trying to go the same route as AMD. This isn't bad, and I'm not saying one is better than the other. What I am saying is this: the route that AMD is going with its multi-GPU technology is not the same as nVidia's. Because of this, you cannot say 'AMD loses because it's 2 dies'. AMD is betting on their multi-GPU strategy, and it's working! I don't understand how people can bash ATi for going this route and think ATi has done something 'wrong'.

I do think that the X2 and a GT200b will compete very closely, but to rule out the X2 because it's 'multi-GPU' is wrong.


When they start putting multiple GPUs on a single package, it may mean something. As it is, the R700 is still just Xfire on a single card regardless of a few tweaks, and power draw/heat will reflect this. A 55nm or 40nm GT200 part would undoubtedly run cooler and draw less power while performing similarly.

What makes the multi-GPU strategy a negative is that it reduces the requirement for R&D on new architecture and stagnates new silicon growth. RV770 is not much different from RV670, just a few more SPs and die tweaks, while GT200 is a whole new animal (albeit an inefficient beast), as I hope GT300 will be as well (just not a beast).
 
Then we get a company line about how the game is being pirated and they are abandoning support for Crysis. That is fishy. You also have to consider that they intentionally and knowingly disabled image quality settings for DX9, and used the game as a DX10 marketing tool while the DX9 mode of the game runs considerably faster.
I would seriously just blame EA on that one. I remember when BF 2142 was coming out and DICE all of a sudden dropped update support for BF2 despite the numerous problems and glitches that still existed. Despite no support, BF2 still draws more players than 2142, but alas, the last update you can find for BF2 on the EA site is from 2006. Maybe EA forces its development studios to drop support for already-released products and instead concentrate on soon-to-be-released projects, just to make sure they make money.

I understand the outdoor nature of Crysis makes for very steep system requirements, and I do like what I see from the game, but something is not right about it. Am I supposed to believe that I need three GTX 280 cards to run this game as intended, and also accept that they are arbitrarily locking me out of settings in DX9 mode?
There is a lot of tweaking you can do in the Crysis graphics cfg that will make the game run smoother and faster on the cards available today and still look just as good. A lot of the GPU power in Crysis gets used up on things you wouldn't care too much about, like full shader detail on the mountains far away. Or you can just be lazy like me and download an optimized cfg installer.
 
The major problem with multi-GPU solutions is that they are more complex, especially from a software perspective. As SLI and CF have matured this is becoming less and less of an issue, but really, let's say that a 4870X2 and a 55nm GTX 280 perform similarly. From both a hardware and software perspective, the single GPU is more attractive because it's simpler.

Now for the fanboy disclaimer, as this is becoming necessary around here: the 4870X2 looks to be awesome. I'm tempted to get one at launch, but I'm going to have to wait and see what the GT200b looks like, and hopefully the 4870X2 will be a little cheaper by then anyway.

But I don't see how AMD can claim that two 1-billion-transistor chips, a bridge chip, and CrossFire drivers are a simpler solution than a single 1.5-billion-transistor chip with no bridge chip and no need for multi-GPU software support, unless AMD can package more and more of the simpler GPUs together, get good scaling and performance, and keep power and thermals in check.
 
If someone says something demonstrably true about a video card and your initial inclination is to actually get angry (about video cards!) and call that person a fanboy... take a look in the mirror... because the fanboy bell... it tolls for thee.
 
What makes the multi-GPU strategy a negative is that it reduces the requirement for R&D on new architecture and stagnates new silicon growth. RV770 is not much different from RV670, just a few more SPs and die tweaks, while GT200 is a whole new animal (albeit an inefficient beast), as I hope GT300 will be as well (just not a beast).

GT200 is very much like G80.
I really don't see what you're talking about, though; it doesn't make sense to me.
 
The tone of the article is much more favorable towards Nvidia than that other site FUD used to write for.
 
GT200 is very much like G80.
I really don't see what you're talking about, though; it doesn't make sense to me.

It's simple. If you as a GPU designer are dedicating your resources to making chips work together, those resources are not being used to design new and better chips. It's just as if you upgraded by buying another identical card to go Xfire/SLI instead of buying a new card with a better chip.

The GT200 is a far cry from the G80, compared to how the RV770 relates to the RV670.
 
And the Core 2 Duo is really just a tweaked Pentium III sandwiched onto a die with another core.

*Yawn* that argument is tired.
 
They are making new chips... just smaller chips that are easier to use in multi-GPU setups with good scaling. I still don't get your argument.
Also, the 4800s are a far cry from the 3800s if the GTX 280 is a far cry from the 8800.
 
GT200 is very much like G80.
I really don't see what you're talking about, though; it doesn't make sense to me.

Same here, I have no idea what he was trying to say. GT200 is not a whole new beast. If he read up on the RV770 architecture he would know that ATi did a whole lot more than just give it more SPs and die tweaks; the AA performance and the texture fill rate with 40 TMUs are a prime example of how hard they worked. And yeah, I have a GTX 280, lol, but it is never wrong to praise a company for bringing on the competition.
 
Same here, I have no idea what he was trying to say. GT200 is not a whole new beast. If he read up on the RV770 architecture he would know that ATi did a whole lot more than just give it more SPs and die tweaks; the AA performance and the texture fill rate with 40 TMUs are a prime example of how hard they worked. And yeah, I have a GTX 280, lol, but it is never wrong to praise a company for bringing on the competition.


nVidia expanded every facet of G80, added better floating point accuracy, and widened the memory bus; the thread management gains alone are a major upgrade.

ATi slapped 6 more SIMDs on the chip, gave them some more cache, and did away with the silly ass ringbus.
 
The major problem with multi-GPU solutions is that they are more complex, especially from a software perspective. As SLI and CF have matured this is becoming less and less of an issue, but really, let's say that a 4870X2 and a 55nm GTX 280 perform similarly. From both a hardware and software perspective, the single GPU is more attractive because it's simpler.

Now for the fanboy disclaimer, as this is becoming necessary around here: the 4870X2 looks to be awesome. I'm tempted to get one at launch, but I'm going to have to wait and see what the GT200b looks like, and hopefully the 4870X2 will be a little cheaper by then anyway.

But I don't see how AMD can claim that two 1-billion-transistor chips, a bridge chip, and CrossFire drivers are a simpler solution than a single 1.5-billion-transistor chip with no bridge chip and no need for multi-GPU software support, unless AMD can package more and more of the simpler GPUs together, get good scaling and performance, and keep power and thermals in check.

the multicore CPU and GPU era is here.
 
The major problem with multi-GPU solutions is that they are more complex, especially from a software perspective. As SLI and CF have matured this is becoming less and less of an issue, but really, let's say that a 4870X2 and a 55nm GTX 280 perform similarly. From both a hardware and software perspective, the single GPU is more attractive because it's simpler.

Now for the fanboy disclaimer, as this is becoming necessary around here: the 4870X2 looks to be awesome. I'm tempted to get one at launch, but I'm going to have to wait and see what the GT200b looks like, and hopefully the 4870X2 will be a little cheaper by then anyway.

But I don't see how AMD can claim that two 1-billion-transistor chips, a bridge chip, and CrossFire drivers are a simpler solution than a single 1.5-billion-transistor chip with no bridge chip and no need for multi-GPU software support, unless AMD can package more and more of the simpler GPUs together, get good scaling and performance, and keep power and thermals in check.
Whenever competition returns to the GPU market, fanboyism becomes a problem. I will tell ya though that I used to be a pretty huge nVidia fanboy, and I still love their graphics cards for the most part. That said, the 680i motherboards pretty much ruined my computing experience over the past 2 years. I love the idea behind SLI and Crossfire, but my experience going "high end" with SLI has left a HORRIBLE taste in my mouth, to the point that I will pretty much never purchase another nVidia chipset-based motherboard ever again. I am much more willing to try Crossfire than SLI simply because, since my debacle with the 680i boards, I have been using this Asus P5K board and it has been smooth sailing, and I can actually overclock without killing my memory. If I can get legitimate performance gains with Crossfire I am definitely gonna take that path rather than take my chances going SLI. Now that will COMPLETELY change if I can get SLI support on an Intel-based motherboard.

Anyways... I will close by saying that nVidia and ATi both should be worried about the prospect of Intel turning Larrabee into a success, because a rock solid, highly overclockable, multi-GPU, all-Intel machine would be the savior that a lot of gamers and PC enthusiasts have been dreaming of.
 
nVidia expanded every facet of G80, added better floating point accuracy, and widened the memory bus; the thread management gains alone are a major upgrade.

ATi slapped 6 more SIMDs on the chip, gave them some more cache, and did away with the silly ass ringbus.

^ Ok :rolleyes:
 
The 4870X2 has fixed most of these issues. They have fixed microstuttering and many other multi-GPU issues.
The microstuttering is not fixed, just reduced compared to the 3870X2; it's still just 4870 CF, compared to the rumors that it would be a "revolution".
 
nVidia expanded every facet of G80, added better floating point accuracy, and widened the memory bus; the thread management gains alone are a major upgrade.

ATi slapped 6 more SIMDs on the chip, gave them some more cache, and did away with the silly ass ringbus.

False.

If anything, Nvidia simply added features that ATI has had since R600 - not to mention they still don't have tessellation, which DX11 requires :rolleyes:

Besides, if you read this: http://www.rage3d.com/reviews/video/atirv770/architecture

Sorry, but RV770 has some significant architectural improvements that weren't covered in detail by a lot of sites, because the number of changes was WAY more than most sites could cover.
 
The microstuttering is not fixed, just reduced compared to the 3870X2; it's still just 4870 CF, compared to the rumors that it would be a "revolution".

I'll agree with that, but ATi has the technology to fix it. All they have to do is take advantage of the full feature set GDDR5 has to offer, mainly the shared frame buffer.

Not to mention, the fact that AMD is focusing on reducing, if not eliminating, micro-stuttering just goes to show how focused they are on multi-GPU technology. Again, it has its downsides, but it still doesn't mean it's a bad technology.
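
For anyone unclear on what the micro-stuttering complaint actually is: it shows up as uneven gaps between consecutive frames even when the average FPS looks fine. Here's a rough sketch with made-up frame timestamps (nothing here is measured from an actual 4870X2; it's just to illustrate the pattern):

```python
# Hypothetical frame timestamps in milliseconds -- illustrative numbers only.
# An AFR pair tends to finish frames in an alternating short/long pattern,
# while a single GPU spaces them out evenly over the same total time.
afr_timestamps    = [0, 8, 33, 41, 66, 74, 99, 107]
single_timestamps = [0, 15, 31, 46, 61, 77, 92, 107]

def frame_stats(timestamps):
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg_ms = sum(deltas) / len(deltas)
    return 1000.0 / avg_ms, min(deltas), max(deltas)

for name, ts in [("AFR", afr_timestamps), ("single GPU", single_timestamps)]:
    fps, shortest, longest = frame_stats(ts)
    # Both average the same ~65 FPS, but the AFR run swings between ~8 ms
    # and ~25 ms frames -- that swing is what people call microstutter.
    print(f"{name}: ~{fps:.0f} FPS avg, frame times {shortest}-{longest} ms")
```

Same average framerate on paper, but the AFR case alternates short and long frames, which is what you actually feel in-game.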
 
False.

If anything, Nvidia simply added features that ATI has had since R600 - not to mention they still don't have tessellation, which DX11 requires :rolleyes:

Besides, if you read this: http://www.rage3d.com/reviews/video/atirv770/architecture

Sorry, but RV770 has some significant architectural improvements that weren't covered in detail by a lot of sites, because the number of changes was WAY more than most sites could cover.

What's false about it? If anything, an RV770 SIMD looks like a G80 SIMD.
 
No, it doesn't operate like that at all, not to mention that G80 worked on MADD + MUL while RV770 is like R600 in that it's MADD, with all 5 units working together to do double precision.

As others summed up in the Larrabee discussion, Nvidia's "cores" are actually the simplest, Intel's Larrabee cores would be the most complex, and ATI's are somewhere in between.
 
No, it doesn't operate like that at all, not to mention that G80 worked on MADD + MUL while RV770 is like R600 in that it's MADD, with all 5 units working together to do double precision.

As others summed up in the Larrabee discussion, Nvidia's "cores" are actually the simplest, Intel's Larrabee cores would be the most complex, and ATI's are somewhere in between.

RV770 is a VLIW design, so each SP is "wider" than a G80/GT200 SP, but without a large increase in complexity. Complexity comes from having more control logic for things like scheduling and reordering, not from simply having more execution units.

So G80/GT200 may have the simplest "core", but that's just a basic block that requires outside control logic, and G80/GT200 has more control logic encapsulating each SP than RV770 does. RV770's ability to use multiple execution units in an SP to do double precision is very cool, though.

Simple overview of VLIW here.
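
To make the scalar-vs-VLIW distinction concrete, here's a very rough sketch (my own simplification, not taken from any architecture doc) of why an RV770-style 5-wide SP leans on the compiler in a way a G80/GT200-style scalar SP doesn't:

```python
# Toy model: each operation below is one independent multiply-add (MAD),
# e.g. evaluating r = a*b + c*d + e*f + g*h with no dependencies.
independent_mads = [("a", "b"), ("c", "d"), ("e", "f"), ("g", "h")]

# G80/GT200-style scalar SP: one MAD issued per cycle.
scalar_cycles = len(independent_mads)

# RV770-style VLIW-5 SP: up to 5 slots per cycle, but the compiler has to
# find independent operations to fill them; dependent ops can't share a bundle.
VLIW_WIDTH = 5
bundles = [independent_mads[i:i + VLIW_WIDTH]
           for i in range(0, len(independent_mads), VLIW_WIDTH)]
vliw_cycles = len(bundles)

print(f"scalar SP: {scalar_cycles} cycles, VLIW-5 SP: {vliw_cycles} cycle(s)")
# With a dependency chain like r = ((a*b)*c)*d the extra slots sit idle,
# so real throughput depends heavily on how well the compiler packs bundles.
```

The point being that a VLIW design moves scheduling work out of the hardware and into the compiler, which is part of why the per-SP control logic can stay small.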
 
nVidia expanded every facet of G80, added better floating point accuracy, and widened the memory bus; the thread management gains alone are a major upgrade.

ATi slapped 6 more SIMDs on the chip, gave them some more cache, and did away with the silly ass ringbus.

Yeah, exactly, they did expand everything. Honestly, I have a GTX 280 (I can post pictures), and I just hate when people try to defend a company like it is their family, sorry. The GTX 280 is a kick-ass card; Nvidia added to what G80 already could do, and ATi actually fixed a lot. I really don't care, because the card is faster, but putting another company down is not my thing, as I believe in going with what is true. RV770 had much more improvement than GT200: look at the die size, look how well the 16 ROPs work compared to Nvidia's 32, and the 40 TMUs compared to Nvidia's 80. Yes, the GTX 280 is faster and that is why I got it, but ATi actually made a chip that has half the ROPs and TMUs, kicks ass when AA is turned on, and I believe has a higher texture fill rate in Vantage than the GTX 280.
 
Indeed, for such a small die, I'm totally floored by the performance numbers of the RV770.
 
The 280 is not great, but it is still a good card.
This video card had the following problems when it launched initially:
1) Too expensive. $650! Come on!
2) Loud and hot.
3) Performance should have been a little better. I think Nvidia was targeting 700 MHz instead of the actual 600 MHz.

It can now be found for $400 (and even lower), which makes much more sense to me.
The 55nm re-spin should be a very interesting card (assuming that the performance is there!).
 
Yeah, exactly, they did expand everything. Honestly, I have a GTX 280 (I can post pictures), and I just hate when people try to defend a company like it is their family, sorry. The GTX 280 is a kick-ass card; Nvidia added to what G80 already could do, and ATi actually fixed a lot. I really don't care, because the card is faster, but putting another company down is not my thing, as I believe in going with what is true. RV770 had much more improvement than GT200: look at the die size, look how well the 16 ROPs work compared to Nvidia's 32, and the 40 TMUs compared to Nvidia's 80. Yes, the GTX 280 is faster and that is why I got it, but ATi actually made a chip that has half the ROPs and TMUs, kicks ass when AA is turned on, and I believe has a higher texture fill rate in Vantage than the GTX 280.

It may be 'faster', but does it give you better visual quality? The thing that most surprised me about the 4800 is how AA is almost free.

AMD's 4800 series and NVIDIA's GT200 series maintain very competitive antialiasing image quality. However, AA framerate performance is a bit of a different story. In real-world gaming scenarios, with enthusiast-level 4870 X2 and GTX 280 cards, both in single GPU and SLI/CrossFireX configurations, AMD's high-end antialiasing techniques are more likely to be usable than NVIDIA's.

I think the term 'faster', the way you used it, is a bit misleading without saying what settings and what game.
 
It may be 'faster', but does it give you better visual quality? The thing that most surprised me about the 4800 is how AA is almost free.



I think the term 'faster', the way you used it, is a bit misleading without saying what settings and what game.

Lol, I was actually praising the HD 4800 series as a GTX 280 owner, and maybe you thought it was the other way around. By 'faster' I meant what the word actually means. I have been giving credit to ATi, and yet I fall victim to my own comments.
 
It may be 'faster', but does it give you better visual quality? The thing that most surprised me about the 4800 is how AA is almost free.

Look at the [H] review of the MSI 4850 and take a look at the Crysis screenshots. The 4850's texture quality is a bit higher than that of the 9800 GTX and 8800 GT.
 
Yeah, exactly, they did expand everything. Honestly, I have a GTX 280 (I can post pictures), and I just hate when people try to defend a company like it is their family, sorry. The GTX 280 is a kick-ass card; Nvidia added to what G80 already could do, and ATi actually fixed a lot. I really don't care, because the card is faster, but putting another company down is not my thing, as I believe in going with what is true. RV770 had much more improvement than GT200: look at the die size, look how well the 16 ROPs work compared to Nvidia's 32, and the 40 TMUs compared to Nvidia's 80. Yes, the GTX 280 is faster and that is why I got it, but ATi actually made a chip that has half the ROPs and TMUs, kicks ass when AA is turned on, and I believe has a higher texture fill rate in Vantage than the GTX 280.

The only problem with the 280 was the price. It's a great card at $399, not so great at $650.

That said, I don't think you will have any problems playing games for a long time, so why worry?
 
Nvidia is really going to have problems with the 4850X2. If it's on par with or faster than the 280, and it looks to be, then NV has lost the high end, upper midrange, and midrange to ATi.

It will be like this:

1. 4870X2 (crown holder)
2. 4850X2
3. GTX 280
4. 4870
5. GTX 260
6. 4850
7. 9800 GTX+
 
I find the whole doom-and-gloom, "Nvidia is getting crushed, oh lord, their company is going to fall apart" rhetoric sort of funny. Everyone said the same thing about AMD when they released the 2900 series and it was a turd.

Good on AMD for releasing a sweet series of cards, but it's only a matter of time before Nvidia counter-punches, then AMD counter-punches back, ad nauseam, and the cycle continues.
 
I've not seen much about the GT200b for a while. If some rumors don't start leaking by next week, only one week away from nVision 08, we may not be in for much. The 4870X2 looks mighty impressive. I'm waiting to see some end-user posts, benchmarks, and experiences before I take the plunge.
 
Same here, the rumors should start leaking by next week. If we have nothing, then either Nvidia has a big surprise or they are just concentrating on their next product down the road. They had better drop the prices on the GTX 280 further; I will either SLI it or sell it to get an HD 4870 X2 if the prices don't go down, lol.
 
Hmm... GT200b sometime around Christmas this fall? Certainly that would be rather interesting.
 
I'm pretty sure you'll see the GT200b announced pretty soon, as nVidia needs to get its costs down pretty quickly. As for the performance, I have no idea. It'll probably be a little slower than a 4870X2 and cost around $400. Just a guess on my part, but a reasonable one I think.
 
Again, it has its downsides, but it still doesn't mean it's a bad technology.

The new card only needs to get close to the FPS numbers of the 4870X2 to be the winner, because it is a single GPU; that is what they were saying. Only one guy implied that it was "bad" technology.
 
The new card only needs to get close to the FPS numbers of the 4870X2 to be the winner, because it is a single GPU.

I agree.
I would be more inclined to buy the GT200b over the 4870X2 if the latter is only 10-15% faster in CrossFireX-supported games.
 
Is there any date for these GT200b cards? Any proof? Is there a point in waiting for them, or should I just get one GTX 280 now and a second for SLI later when I save some money? If the GT200b is coming in the next few months, then I would rather wait for the new version and buy one then.
 