cybereality
[H]F Junkie
- Joined
- Mar 22, 2008
- Messages
- 8,789
Wow! Good show, good show. Looks like ATI has a winner.
Impressive, but why didn't they benchmark World in Conflict? Lame. I'm so sick of seeing Crysis benchmarks; it's a broken game anyway, and IMO a badly coded game has no relevance when it comes to benchmarking performance. Now, if they had gotten an advance demo of Far Cry 2, that would have been something to shake a stick at.
You think Crysis is broken? You should go look at FlightSimX benchmarks at Tom's Hardware. 8800Ultra beats GTX 280 and 4850 beats 4870. Now, that is seriously broken.
You know, the CrossFire 4870 X2 numbers are impressive, but at nearly 700W at peak... Jesus! Better be using one hell of a watercooling solution, or have one hell of a cold room, to keep that thing going without bursting into flame. I think it's time for them to get a shrink to 45nm pronto.
Isn't it a little unfair to compare the 4870X2 CF to a GTX-280 SLI setup? Isn't that comparing 4 GPUs to only 2?
I agree the 4870X2 looks promising and looks like the best price/performance solution, but it seems like the GTX-280 is still the best single gpu out.
Who cares? It's still dual card vs dual card. At similar price points. Nothing else matters.
All I'm saying is that it took ATI 2 GPUs to do what Nvidia did with one. I'm no fanboy or anything, just looking at the facts. If Nvidia decides to put out a GTX-280x2 at the same price point, that's pretty much it for the 4870x2. I'm sure Nvidia has got something in the works to make the 4870x2 look silly anyway...
no need for hostilities...
you can have two 4870X2 for a lot less than a GTX280 -- of course this is an assumption
What? Did you mean 2 X2s for around what 2 280s cost?
All I'm saying is that it took ATI 2 GPU's to do what Nvidia did with one...
You're claiming to be "neutral" yet are sounding like a typical fanboy.
GTX-280 GPU is better and more advanced than a 4870. Right or wrong?
Wrong. The 4870 is more advanced in features, performance/transistor, value, you name it. The 280 is more of the same tech NV has been using for years, just more of it.
ATI is just taking 2 inferior chips and slapping them together, but isn't that also AMD's approach?
They are taking 2 superior chips and teaming them up. It takes a GTX280 in SLI to beat it, but at the cost of major heat and power draw, not to mention cost.
Right now the 4870X2 is proof that ATi's tech path is superior to nVidia's. Starting with a big, hot flagship chip and later shrinking and optimizing it into mid-range and low-end parts doesn't make sense, and ATi realized that. Now they put most of their effort into the mid/high range and then slap two together to get their flagship card. It seems to produce better products and will hopefully be more profitable.
All I'm saying is that it took ATI 2 GPU's to do what Nvidia did with one.
One thing to note is that two RV770 die combined are still smaller than a single GT200 die. Could we have made a giant monolithic die? Absolutely. But that would severely impact the yield in a negative way, and as a result we wouldn't be able to sell the cards for the prices that you see today.
But you ask, if two RV770 die add up to close to the same size as a GT200 die, doesn't that mean there is no difference? Ah, but there is a huge difference. Keep in mind that the yield drops exponentially with die size. There are a number of models used to estimate yield. For example, one of the common ones is the Poisson model:
Y = e^(-AD)
Where Y is the yield, A is the die area, and D is the defect density. Since TSMC's 65nm and 55nm processes should be pretty mature by now, for simplicity's sake we assume that the defect density is the same for both (of course TSMC will never release this kind of info). Thus, you can quickly see how the yield decreases as die area increases.
There are other models as well (e.g. Murphy, Seeds), but die area plays an exponential role in all of them.
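To put rough numbers on the yield argument above, here's a minimal sketch of the Poisson model. The defect density is an assumed figure (TSMC doesn't publish this), and the die areas are rough public estimates, so treat the results as illustrative only:

```python
import math

def poisson_yield(area_mm2, defects_per_mm2):
    """Poisson yield model: Y = e^(-A*D)."""
    return math.exp(-area_mm2 * defects_per_mm2)

# Hypothetical figures for illustration only: the defect density is
# assumed, and the die areas are rough public estimates.
D = 0.002            # assumed defects per mm^2
rv770_area = 256     # ~256 mm^2 per RV770 die
gt200_area = 576     # ~576 mm^2 for the monolithic GT200 die

y_rv770 = poisson_yield(rv770_area, D)   # ~0.60
y_gt200 = poisson_yield(gt200_area, D)   # ~0.32
print(f"RV770: {y_rv770:.1%}, GT200: {y_gt200:.1%}")
print(f"The smaller die yields {y_rv770 / y_gt200:.2f}x better")
```

With these assumed numbers, each small die yields almost twice as well as the big one, even though two of them together have nearly the same total silicon area. That's the exponential effect the post is describing.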
GT200 is bigger than two RV770 combined:
http://www.hardforum.com/showthread.php?p=1032744383#post1032744383
Your argument is like saying that Intel needs two dual core CPUs (MCM design) to do what AMD did with one native quad core.
Who is this "we"? Just curious, do you work for AMD?
With microstuttering gone, now we will have a "micro input lag" problem.
Sounds like time to write nVidia Fanboy HQ chapter 2.
And multicore/multiprocessing is always going to be superior to a fast single processor.
Where did you get this preposterous notion from? Even for the most parallelizable of tasks, you can never achieve perfect scaling. You're saying that, given a choice, you'd choose a 3GHz C2Q over a 12GHz C2S? There is not a single application in existence that would run better on the 3GHz quad core vs the 12GHz single core.
You have a 12GHz single core? Guess not. Maybe you should check back into reality once in a while, where my comments are validated. And once programming matures for more multithreaded applications, multiprocessor systems will be faster and even more viable.
The reason that AMD and Intel have begun to migrate to more cores rather than higher clocks is that CPUs have begun to approach the physical limits of silicon transistor switching speed (as limited by size). Yes, there are still improvements to be made in speed, but progress has begun to decelerate.
You have a 12GHz single core? Guess not. Maybe you should check back into reality once in a while, where my comments are validated. And once programming matures for more multithreaded applications, multiprocessor systems will be faster and even more viable.
Which is faster is down to the type of calculations you're doing, the order you're doing them in, and how well you've optimised your code for a multicore processor.
That is exactly my point: it's the programs that need to catch up. Specialization and division of work produce faster and better results. It doesn't even have to be confined to the world of computing; look at Ford's assembly line in the early 1900s. Computing will eventually go the same way. Look at how GPUs are now being recognized for number crunching because of their architecture.
There's still a great many applications and games which use only one core; given the option of a 3GHz single core or a 6GHz dual core, I'd probably go with the dual core for that reason. No such thing exists outside of extreme overclocking, but it's not meant to be a real-world example, simply a thought experiment to prove a point.
Multicore is great, but for one power-hungry app it's a nightmare trying to balance that one app across n cores, where n could be any low-value integer.
You said "multicore/multiprocessing is always going to be superior to a fast single processor," implying that a slower multicore processor would be faster than a higher-clocked single core processor, a statement that is false. Like I said in my previous post, multicore processors are only being made because it's quicker and cheaper to glue a bunch of cores together than it is to continue finding ways to make chips faster, NOT because it's the best way to increase absolute performance. If Intel could have produced a 12GHz single core right now, for the same R&D and production costs as the Core 2 line, I guarantee you that they would have.
Only because it would have been cheaper and required less outlay from the company in the long run. Intel is a business, and a business makes money first and foremost. Having multiple processors in a single machine is much more efficient and versatile than having a fast single-core processor; there's no alternative. The only reason a single processor is as powerful in some situations is because programs don't take advantage of the multiple cores. This isn't a question of computing as much as it is of simple logic.
Only because it would have been cheaper and required less outlay from the company. Intel is a business, and a business makes money first and foremost.
Having multiple processors in a single machine is much more efficient and versatile than having a fast single-core processor; there's no alternative.
The only reason a single processor is as powerful in some situations is because programs don't take advantage of the multiple cores. This isn't a question of computing as much as it is of simple logic.
Simple knowledge of how multithreading works would tell you otherwise. If you were able to achieve absolutely perfect scaling, N cores working at 1 gigahertz would only equal a single core at N gigahertz. Since all multithreaded applications have a single-threaded overhead in which the multithreading is set up, a multicore processor can, at best, *approach* the performance of a faster single core.
You said "multicore/multiprocessing is always going to be superior to a fast single processor," implying that a slower multicore processor would be faster than a higher-clocked single core processor, a statement that is false. Like I said in my previous post, multicore processors are only being made because it's quicker and cheaper to glue a bunch of cores together than it is to continue finding ways to make chips faster, NOT because it's the best way to increase absolute performance. If Intel could have produced a 12GHz single core right now, for the same R&D and production costs as the Core 2 line, I guarantee you that they would have.
Since all multithreaded applications have a single-threaded overhead in which the multithreading is set up, a multicore processor can, at best, *approach* the performance of a faster single core.
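The single-threaded-overhead argument above is essentially Amdahl's law. Here's a quick sketch; the 95% parallel fraction and the 3GHz/12GHz comparison are just assumed figures to illustrate the point, not measurements:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even a generously parallel workload (95%) on a 3 GHz quad core
# cannot match a hypothetical 12 GHz single core (a flat 4x).
quad_speedup = amdahl_speedup(0.95, 4)    # ~3.48x over one 3 GHz core
print(f"3 GHz quad core:    {quad_speedup:.2f}x")
print(f"12 GHz single core: 4.00x")
```

Only at a 100% parallel fraction (which no real program has) does the quad core ever reach the single core's 4x, which is exactly the "approach, at best" claim.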