4870 X2 reviews beating GTX 280 SLI

Impressive, but why didn't they benchmark World in Conflict? Lame. I'm so sick of seeing Crysis benchmarks; it's a broken game anyway, and IMO a badly coded game has no relevance when it comes to benchmarking performance. Now, if they had gotten an advance demo of Far Cry 2, that would have been something to shake a stick at :p

You think Crysis is broken? You should go look at the FlightSimX benchmarks at Tom's Hardware. The 8800 Ultra beats the GTX 280 and the 4850 beats the 4870. Now, that is seriously broken.
 
You think Crysis is broken? You should go look at the FlightSimX benchmarks at Tom's Hardware. The 8800 Ultra beats the GTX 280 and the 4850 beats the 4870. Now, that is seriously broken.

Nay, that's just Tom's being Tom's.
 
You know, the CrossFire 4870 X2 numbers are impressive, but at nearly 700W at peak... Jesus! You'd better be using one hell of a water-cooling solution or have one hell of a cold room to keep that thing going without it bursting into flames. I think it's time for them to get a shrink to 45nm, pronto.

Not to mention the electric bill you'll be slapped with every month :D
 
Isn't it a little unfair to compare the 4870 X2 CF to a GTX 280 SLI setup? Isn't that comparing 4 GPUs to only 2?

I agree the 4870 X2 looks promising and seems like the best price/performance solution, but the GTX 280 is still the best single GPU out.
 
Isn't it a little unfair to compare the 4870 X2 CF to a GTX 280 SLI setup? Isn't that comparing 4 GPUs to only 2?

I agree the 4870 X2 looks promising and seems like the best price/performance solution, but the GTX 280 is still the best single GPU out.

Who cares? It's still dual card vs dual card. At similar price points. Nothing else matters.
 
Who cares? It's still dual card vs dual card. At similar price points. Nothing else matters.

Honestly, as far as I'm concerned, similar price points are what matter most. After that comes how many slots a card takes up, not how many GPUs fill that one slot. If they put a big shroud over the two 4870s in a 4870 X2 to make it look like only one GPU, would that make some of you guys happier? I only have one GPU slot on my motherboard, and any card that fits in that slot is what I'd consider one GPU.
 
All I'm saying is that it took ATI two GPUs to do what Nvidia did with one. I'm no fanboy or anything, just looking at the facts. If Nvidia decides to put out a GTX 280 X2 at the same price point, that's pretty much it for the 4870 X2. I'm sure Nvidia has something in the works to make the 4870 X2 look silly anyway...

no need for hostilities...
 
All I'm saying is that it took ATI two GPUs to do what Nvidia did with one. I'm no fanboy or anything, just looking at the facts. If Nvidia decides to put out a GTX 280 X2 at the same price point, that's pretty much it for the 4870 X2. I'm sure Nvidia has something in the works to make the 4870 X2 look silly anyway...

no need for hostilities...

KILL IT WITH FIRE


...jk ;) I see what you're saying, and it's true, but the reality is that the 4870 X2 by itself competes pretty well with the GTX 280 at a lower price point. They're about the same size, and you can have two 4870 X2s for a lot less than two GTX 280s; of course this is an assumption until pricing is confirmed, but it will most likely be the case. It doesn't really matter if it's one, two, or even three GPUs on a single board. What matters is cost and performance.
 
All I'm saying is that it took ATI two GPUs to do what Nvidia did with one. I'm no fanboy or anything, just looking at the facts. If Nvidia decides to put out a GTX 280 X2 at the same price point, that's pretty much it for the 4870 X2. I'm sure Nvidia has something in the works to make the 4870 X2 look silly anyway...

no need for hostilities...

And nVidia attempted the same thing with the 7950 GX2 and 9800 GX2... oh wait, those were two PCBs. :rolleyes:
People need to stop nitpicking the two-GPU solution. Clearly this is what ATI has chosen for its high end; it works, and it beats nVidia's $500 card by a good margin.
Sure, theoretically a 280 GX2 is possible, but imagining the cost and power consumption of such a thing, I doubt nVidia is going to bother with one.
 
All I'm saying is that it took ATI two GPUs to do what Nvidia did with one. I'm no fanboy or anything, just looking at the facts. If Nvidia decides to put out a GTX 280 X2 at the same price point, that's pretty much it for the 4870 X2. I'm sure Nvidia has something in the works to make the 4870 X2 look silly anyway...
You're claiming to be "neutral" yet you sound like a typical fanboy.
The fact is that BOTH companies have something in the works.
That's the nature of competition.
Now that ATI has got its shit together, Nvidia will have no choice but to start moving again instead of sticking to the same old tech wrapped in a different shell.
It's not a question of IF they decide.
They're in a real hurry now to come up with a cost-effective answer to ATI's newest crop of chips.
And you can bet that ATI isn't exactly sitting around either...
 
Obviously multiple GPUs are the way to go; look what happened with CPUs. We pretty much gave up on making them faster and started just putting more of them on a single chip. The GTX 280 GPU is better and more advanced than a 4870. Right or wrong? ATI is just taking two inferior chips and slapping them together, but isn't that also AMD's approach? To their credit, they are improving how multiple GPUs on a single card work together.

Haha, and I like that I'm now a fanboy because I claimed that ATI hasn't been able to compete with Nvidia chip for chip. I thought it was obvious to everyone; apparently not.
 
The GTX 280 GPU is better and more advanced than a 4870. Right or wrong?
Wrong. The 4870 is more advanced in features, performance per transistor, value, you name it. The 280 is the same tech NV has been using for years, just more of it.

ATI is just taking two inferior chips and slapping them together, but isn't that also AMD's approach?
They are taking two superior chips and teaming them up. It takes GTX 280s in SLI to beat it, but at the cost of major heat and power draw, not to mention price.

Nvidia needs to update its GPU with a refreshed design. AMD doesn't; all it needs for its refresh is more of the exact same tech.

edit - don't forget to bring up Crysis or some unreleased Nvidia card to counter my argument. ;)
 
I like how it surpasses the GTX 280 by letting you turn AA up even higher. I just need one of these and I'm set. :)
 
Right now the 4870 X2 is proof that ATi's tech path is superior to nVidia's. Starting with a big, hot flagship chip and later cutting it down into mid-range and low-end parts doesn't make sense, and ATi realized that. Now they put most of their effort into the mid/high range and then slap two chips together to get their flagship card. It seems to produce better products and will hopefully be more profitable.

And multicore/multiprocessing is always going to be superior to a fast single processor. CPUs went this route, and it's good to see video cards follow. Honestly, how many people stayed with the last single-core Pentium 4s once the dual cores came out? Performance is performance; it doesn't matter how many parts are doing it.
 
Right now the 4870 X2 is proof that ATi's tech path is superior to nVidia's. Starting with a big, hot flagship chip and later cutting it down into mid-range and low-end parts doesn't make sense, and ATi realized that. Now they put most of their effort into the mid/high range and then slap two chips together to get their flagship card. It seems to produce better products and will hopefully be more profitable.

I think this is a little overstated. We're talking about a 55nm part vs. a 65nm part, and looking at the thermal and power numbers in the reviews, the 4800 series isn't particularly cool or low-power.

I think nVidia was just playing it safe. The GT200 on 55nm with GDDR5 would probably have been a much better and cheaper part.

Don't underestimate the GT200. As it gets smaller, I think it has potential.
 
All I'm saying is that it took ATI two GPUs to do what Nvidia did with one.

One thing to note is that two RV770 dies combined are still smaller than a single GT200 die. Could we have made a giant monolithic die? Absolutely. But that would severely impact the yield, and as a result we wouldn't be able to sell the cards at the prices you see today.

But you ask, if two RV770 dies add up to close to the same size as a GT200 die, doesn't that mean there is no difference? Ah, but there is a huge difference. Keep in mind that yield drops exponentially with die size. There are a number of models used to estimate yield; for example, one of the common ones is the Poisson model:

Y = e^(-AD)

where Y is the yield, A is the die area, and D is the defect density. Since TSMC's 65nm and 55nm processes should be pretty mature by now, for simplicity's sake we assume that the defect density is the same for both (of course TSMC will never release that kind of info). You can quickly see how the yield drops as die area increases.

There are other models as well (e.g. Murphy, Seeds), but die area plays an exponential role in all of them.
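To make the exponent concrete, here's a quick back-of-the-envelope sketch of that Poisson model in Python. The defect density is a made-up placeholder and the die areas are rough public estimates (~576 mm² for GT200, ~256 mm² for RV770), so only the shape of the result matters, not the exact numbers:

```python
import math

def poisson_yield(area_mm2, defect_density):
    """Poisson yield model from above: Y = e^(-A*D)."""
    return math.exp(-area_mm2 * defect_density)

D = 0.002            # defects per mm^2 -- hypothetical, foundries don't publish this
GT200_AREA = 576.0   # mm^2, rough public estimate
RV770_AREA = 256.0   # mm^2, rough public estimate

y_big   = poisson_yield(GT200_AREA, D)   # ~32%
y_small = poisson_yield(RV770_AREA, D)   # ~60%
print(f"one GT200-sized die: {y_big:.0%} yield")
print(f"one RV770-sized die: {y_small:.0%} yield")

# Even a dual-die card that needs TWO good small dies comes out ahead of the big die
# in good cards per mm^2 of wafer, because yield falls off exponentially with area.
dual_cards_per_mm2 = (y_small / RV770_AREA) / 2
big_cards_per_mm2  = y_big / GT200_AREA
print(f"dual-die vs big-die cards per wafer area: {dual_cards_per_mm2 / big_cards_per_mm2:.1f}x")
```

With these illustrative numbers the dual-die approach produces roughly twice as many sellable cards per unit of wafer area, which is the whole argument for building two mid-size chips instead of one monster.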
 
All I'm saying is that it took ATI two GPUs to do what Nvidia did with one. I'm no fanboy or anything, just looking at the facts. If Nvidia decides to put out a GTX 280 X2 at the same price point, that's pretty much it for the 4870 X2. I'm sure Nvidia has something in the works to make the 4870 X2 look silly anyway...

no need for hostilities...

Zzzz... so what, are you still using a single-core Athlon or P4 in your system? Is everything multi-core bad?!
 
One thing to note is that two RV770 dies combined are still smaller than a single GT200 die. Could we have made a giant monolithic die? Absolutely. But that would severely impact the yield, and as a result we wouldn't be able to sell the cards at the prices you see today.

But you ask, if two RV770 dies add up to close to the same size as a GT200 die, doesn't that mean there is no difference? Ah, but there is a huge difference. Keep in mind that yield drops exponentially with die size. There are a number of models used to estimate yield; for example, one of the common ones is the Poisson model:

Y = e^(-AD)

where Y is the yield, A is the die area, and D is the defect density. Since TSMC's 65nm and 55nm processes should be pretty mature by now, for simplicity's sake we assume that the defect density is the same for both (of course TSMC will never release that kind of info). You can quickly see how the yield drops as die area increases.

There are other models as well (e.g. Murphy, Seeds), but die area plays an exponential role in all of them.

Who is this "we"? Just curious, do you work for AMD?
 
All I'm saying is that it took ATI two GPUs to do what Nvidia did with one. I'm no fanboy or anything, just looking at the facts. If Nvidia decides to put out a GTX 280 X2 at the same price point, that's pretty much it for the 4870 X2. I'm sure Nvidia has something in the works to make the 4870 X2 look silly anyway...

no need for hostilities...

GT200 is bigger than two RV770s combined:
http://www.hardforum.com/showthread.php?p=1032744383#post1032744383

Your argument is like saying that Intel needs two dual-core CPUs (an MCM design) to do what AMD did with one native quad core.
 
GT200 is bigger than two RV770s combined:
http://www.hardforum.com/showthread.php?p=1032744383#post1032744383

Your argument is like saying that Intel needs two dual-core CPUs (an MCM design) to do what AMD did with one native quad core.

That's not the point at all. I couldn't give a rat's ass who does better, Nvidia or AMD, but if you understand how two GPUs on a card work, it's nothing like an MCM versus a native quad on a single die. The two GPUs do not share memory (as far as we can tell; otherwise you wouldn't need 2x1GB), let alone a shared cache, and until the R700, CrossFired GPUs didn't even talk to each other! http://www.anandtech.com/video/showdoc.aspx?i=3354

There is still added latency to the drawn picture (the equivalent of input lag in monitor-speak). At least microstuttering is reduced or gone (we'll have to wait for final reviews). AMD is improving the dual-GPU experience, but until there is no added latency and not a single game has any issues with dual GPUs, it is and will be an inferior gaming experience to a single GPU for me, all else being equal. I know other people just want the highest FPS and don't look at anything else, and godspeed to them. I'm looking for a hassle-free, low-lag, smooth gaming experience. It will either be a single "Super" 4870 (when they're out) or a GTX once the price drops are in place.

In fact, MCM is exactly what AMD should be doing, IMHO. It would work exactly like a bigger, faster GPU except cheaper to make: no lag, no driver issues, no game issues. I would jump on that in an instant.
 
That looks like it runs hot! I also had a reality check and realized that none of the games I have make it worth the cost. (They all run fine, but that's just me.) I had to replace my two-year-old ATI 1900 XT ten days after the warranty was up, and this looks like it has almost the same cooling setup, so I have serious doubts about that card holding up for the long run. I just got a $119 8800 GTS (G92). (Got to love the price cuts.) I think I'm just going to skip this year, see what next year brings, and get the next video card at its closeout price. I'm one more guy who won't jump on the hot-new-card bandwagon again. Not many people have 30" monitors to run the new cards on anyway. :rolleyes:
 
Sounds like time to write nVidia Fanboy HQ chapter 2. :D

This made me lol. I'll be slapping myself if there's indeed any kind of microstutter around. Like hand microstutter when playing games with the 4870 X2 due to the feeling of being uber once again.
 
And multicore/multiprocessing is always going to be superior to a fast single processor.

Where did you get this preposterous notion? Even for the most parallelizable of tasks, you can never achieve perfect scaling. You're saying that, given a choice, you'd choose a 3GHz C2Q over a 12GHz C2S? There is not a single application in existence that would run better on the 3GHz quad core vs. the 12GHz single core.

The reason AMD and Intel have begun to migrate to more cores rather than higher clocks is that CPUs have begun to approach the physical limits of silicon transistor switching speed (as limited by size). Yes, there are still improvements to be made in speed, but progress has begun to decelerate.
 
Where did you get this preposterous notion? Even for the most parallelizable of tasks, you can never achieve perfect scaling. You're saying that, given a choice, you'd choose a 3GHz C2Q over a 12GHz C2S? There is not a single application in existence that would run better on the 3GHz quad core vs. the 12GHz single core.

The reason AMD and Intel have begun to migrate to more cores rather than higher clocks is that CPUs have begun to approach the physical limits of silicon transistor switching speed (as limited by size). Yes, there are still improvements to be made in speed, but progress has begun to decelerate.
You have a 12GHz single core? Guess not. Maybe you should check back into reality once in a while, where my comments are validated. And once programming matures toward more multithreaded applications, multiprocessor systems will be faster and even more viable.
 
You guys are confusing multiple cores and multiple dies. Graphics cards have been multi-core for the past decade.
 
All I'm saying is that it took ATI two GPUs to do what Nvidia did with one. I'm no fanboy or anything, just looking at the facts. If Nvidia decides to put out a GTX 280 X2 at the same price point, that's pretty much it for the 4870 X2. I'm sure Nvidia has something in the works to make the 4870 X2 look silly anyway...

You couldn't make a GTX 280 X2; it would be huge and draw huge power, and, well, it's just not possible given the GTX 280's size. You shouldn't look at it as "it took ATI two GPUs to do what Nvidia did with one." You should look at it as "ATI can put two of their GPUs on one board and still have a smaller overall die size than Nvidia, the same-sized board, and only a somewhat higher power requirement, yet get much better yields and performance, all without microstuttering." And what makes you think Nvidia has something in the works to make the X2 'look silly'? GT200b? A die shrink? I bet that will be for profitability's sake, not for performance's sake. Perhaps a small clock bump, but I doubt huge clock increases. It's all wait and see, but for now ATI wins.
 
Which is faster is down to the type of calculations you're doing, the order you're doing them in, and how well you've optimised your code for a multicore processor.

There are still a great many applications and games which use only one core; given the option of a 6GHz single core or a 3GHz dual core, I'd probably go with the single core for that reason. No such thing exists outside of extreme overclocking, but it's not meant to be a real-world example, simply a thought experiment to prove a point.

Multicore is great, but for one power-hungry app it's a nightmare trying to balance that single app across n cores, where n could be any low-value integer.
 
You have a 12GHz single core? Guess not. Maybe you should check back into reality once in a while, where my comments are validated. And once programming matures toward more multithreaded applications, multiprocessor systems will be faster and even more viable.

You said "multicore/multiprocessing is always going to be superior to a fast single processor," implying that a slower multicore processor would be faster than a higher-clocked single-core processor, a statement that is false. Like I said in my previous post, multicore processors are only being made because it's quicker and cheaper to glue a bunch of cores together than it is to keep finding ways to make chips faster, NOT because it's the best way to increase absolute performance. If Intel could have produced a 12GHz single core right now, for the same R&D and production costs as the Core 2 line, I guarantee you they would have.
 
Which is faster is down to the type of calculations you're doing, the order you're doing them in, and how well you've optimised your code for a multicore processor.

There are still a great many applications and games which use only one core; given the option of a 6GHz single core or a 3GHz dual core, I'd probably go with the single core for that reason. No such thing exists outside of extreme overclocking, but it's not meant to be a real-world example, simply a thought experiment to prove a point.

Multicore is great, but for one power-hungry app it's a nightmare trying to balance that single app across n cores, where n could be any low-value integer.
That is exactly my point: it's the programs that need to catch up. Specialization and division of work produce faster and better results. It doesn't even have to be confined to the world of computing; look at Ford's assembly line in the early 1900s. Computing will eventually go the same way. Look at how GPUs are now being used for number crunching because of their architecture.
You said "multicore/multiprocessing is always going to be superior to a fast single processor," implying that a slower multicore processor would be faster than a higher-clocked single-core processor, a statement that is false. Like I said in my previous post, multicore processors are only being made because it's quicker and cheaper to glue a bunch of cores together than it is to keep finding ways to make chips faster, NOT because it's the best way to increase absolute performance. If Intel could have produced a 12GHz single core right now, for the same R&D and production costs as the Core 2 line, I guarantee you they would have.
Only because it would have been cheaper and required less outlay from the company in the long run. Intel is a business, and a business makes money first and foremost. Having multiple processors in a single machine is much more efficient and versatile than having a fast single-core processor; there's no alternative. The only reason a single processor is as powerful in some situations is that programs don't take advantage of the multiple cores. This isn't a question of computing as much as it is of simple logic.
 
Only because it would have been cheaper and required less outlay from the company. Intel is a business, and a business makes money first and foremost.

Uhh, I just made the argument that Intel would have produced these theoretical 12GHz CPUs at the same cost as the Core 2 Quads if they could. How would it be cheaper if it cost the same? o_O

Having multiple processors in a single machine is much more efficient and versatile than having a fast single-core processor; there's no alternative.

The exact opposite is true, actually. A single-core processor can emulate four cores at 1/4 speed, but not vice versa.

The only reason a single processor is as powerful in some situations is that programs don't take advantage of the multiple cores. This isn't a question of computing as much as it is of simple logic.
Simple knowledge of how multithreading works would tell you otherwise. Even if you achieved absolutely perfect scaling, N cores running at 1 GHz would only equal a single core at N GHz. Since all multithreaded applications have single-threaded overhead in which the multithreading is set up, a multicore processor can, at best, *approach* the performance of a faster single core.
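That's essentially Amdahl's law. A tiny sketch, with a made-up 10% serial fraction standing in for that setup overhead (and assuming, generously, that performance scales linearly with clock), shows how far N slow cores fall short of one N-times-faster core:

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Amdahl's law: speedup of N cores over a single one of those cores."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

s = 0.10  # hypothetical serial (single-threaded) fraction of the workload
for n in (2, 4, 8, 16):
    multi  = amdahl_speedup(s, n)  # N cores at 1 GHz vs one 1 GHz core
    single = n                     # one N GHz core vs one 1 GHz core (idealized)
    print(f"{n:2d} x 1 GHz cores: {multi:4.2f}x    1 x {n} GHz core: {single}x")
```

Of course, the smaller the serial fraction, the closer the multicore part gets to the ideal Nx, which is the counterpoint raised a couple of posts down.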
 
You said "multicore/multiprocessing is always going to be superior to a fast single processor," implying that a slower multicore processor would be faster than a higher clocked single core processor, a statement that is false. Like I said in my previous post, multicore processors are only being made because it's quicker and cheaper to glue a bunch of cores together than it is to continue finding ways to make chips faster, NOT because it's the best way to increase absolute performance. If Intel could have produced a 12Ghz single core right now, for the same R&D and production costs as the core 2 line, I guarantee to you that they would have.

It's not an either/or situation. Modern software operates both sequentially and in parallel. Even if you had a 12GHz processor, you'd still benefit from parallelism, even if the scaling isn't perfect. There's always going to be a top end to sequential speed, and parallelism can extend past it.

Plus, some problems are parallel by nature anyway; graphics rendering actually falls in that space. Some games gain a 100% performance boost simply by adding another GPU. Not all, obviously, but most games do see at least some performance increase, and often it is substantial. So no matter how fast one GPU is, two can be a lot faster.
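As a rough illustration of why rendering scales so well: CrossFire and SLI typically use alternate-frame rendering, where each GPU works on its own whole frame, so throughput scales almost linearly with GPU count while the time to finish any one frame stays the same (the latency point raised earlier in the thread). A toy sketch with made-up numbers:

```python
FRAME_TIME_MS = 40.0     # assumed time for one GPU to render a single frame (25 fps)
SYNC_OVERHEAD_MS = 2.0   # assumed per-frame driver/sync cost

def afr_estimate(n_gpus):
    """Toy alternate-frame-rendering model: GPUs render consecutive frames in parallel."""
    per_frame = FRAME_TIME_MS + SYNC_OVERHEAD_MS
    fps = n_gpus * 1000.0 / per_frame   # frames completed per second across all GPUs
    latency_ms = per_frame              # each individual frame still takes a full render
    return fps, latency_ms

for n in (1, 2, 4):
    fps, lat = afr_estimate(n)
    print(f"{n} GPU(s): ~{fps:4.1f} fps, per-frame latency ~{lat:.0f} ms")
```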
 
Since all multithreaded applications have single-threaded overhead in which the multithreading is set up, a multicore processor can, at best, *approach* the performance of a faster single core.

But what about a situation where you have independent threads? Then the N-core solution could be better than the single core.
 