Real-World Gaming CPU Comparison with BFGTech 8800 GTX SLI

Quote page one of the article:

The systems are “identical” with our CPUs being isolated to see if one or the other gives a better gaming experience.

We all know this to be untrue. You cannot run two CPUs from different manufacturers on the same chipset; to us hardware enthusiasts that is obvious. You can't call them identical if they aren't. Simple fact.

Then from page two:

For our Intel Core 2 Duo X6800 (2.93 GHz) CPU we are using the EVGA nForce 680i SLI motherboard with the latest P24 BIOS and NVIDIA 9.53 nForce chipset drivers. The BIOS was configured with default CPU frequencies and default memory frequencies at the fastest timings, LinkBoost was also enabled.

For the AMD Athlon 64 FX-62 (2.8 GHz) CPU we are using the ASUS M2N32-SLI Deluxe 590 SLI motherboard. The latest BIOS, 0706, was used as well as the latest nForce chipset drivers 9.35. The BIOS was also configured at default CPU frequencies and memory frequency with the same memory timings.

Not only are they different chipsets, but the motherboards are also from different manufacturers. We all have seen countless reviews where one brand of motherboard was pitted against another brand of motherboard, both with the same chipset, feature set, etc., and one outperformed the other. You can't call that identical in any way. Then, as brought up earlier in this thread, [H] went as far as to enable LinkBoost, which is a feature found on only one of those boards. More apples to oranges.

If performance is achieved with cache, clock speeds, front side bus or anything else, great! Performance is performance regardless of other factors. Cache is not a user device. You cannot interact with it directly from the user level, so its only purpose is to increase the performance of the CPU. That's it.

Now, if the cache afforded you different opportunities, if you could interact with it in a real way, and if it were truly relevant, this would be a different story.

I never suggested that they should manipulate the cache. Neither of these CPUs allows that. I only suggested that they clock the CPUs to the same speed, so that they could eliminate as many variables as possible. If these CPUs allow the multiplier to be changed, it would be simple to run a set of benchmarks at various speeds to see what effect CPU MHz really has on gaming.

This test is like pitting two different dogs against each other in a race and seeing who can run faster, but giving one of the dogs a shot of speed before the race. One went faster because it was given an advantage over the other. There is no control variable. It's all apples to oranges. If you want it to be a fair test, then level the playing field.
 
I see your point, and I may be the only one, but once the "highest playable settings" are established, on either platform, that is what the test should be run at on both platforms... apples to apples all the way through.
While that's certainly a valid opinion, the "highest playable settings" are variable between test configurations, which means settings are changed to hit the ideal conditions for gameplay on all testing configurations. If that means dropping 8X CSAA for 2X CSAA, because the performance penalty is too high for that particular machine, that's the change that will be made. It's a very different way to approach the situation, and it takes some time to get a "feel" for how apples-to-apples is not always perfectly representative of the real world. In some cases, both are desirable, but [H] writers only have so much time. Like anyone else, deadlines are always looming.

enumae said:
...but to test CPU's and put a GPU bottleneck in the equation just doesn't make sense to me.
You look at it as purely a CPU evaluation. A pure CPU evaluation would use synthetic benchmarks, SuperPi times and other nonsense. They may not be nonsense in a synthetic super-world, but they aren't perfectly representative of the real world.

There will always be bottlenecks in any given system. Because each frame is different, and each game is different, and each platform is different, bottlenecks can flip-flop. There is almost always going to be some component limiting some other component, even if it ends up boiling down to the API. This has been the case with the CPU/GPU relationship since the advent of the Voodoo 2 -- the sliding performance scale.

The CPU evaluation is a kind of system evaluation which stresses changes from using different testbeds with different CPUs. The focus is the CPU, but that doesn't mean that other factors are irrelevant. Look at [H] evaluations more broadly than you would some other, more synthetic evaluation.

jon67 said:
Don't forget that the AMD CPU has 940 pins against Intel's 775. In order to achieve apples-to-apples conditions, 165 pins must be disabled/removed from the AMD unit, for a total of 775 contact points per CPU.
Exactly! The fact that [H] did not remove these pins is sickening. I will never come here again, blah, blah, so on and so forth ad nauseam.
 
The sarcasm does not help.

The title of the review/test is Real-World Gaming CPU Comparison with 8800 GTX SLI. If the test is a CPU comparison, why continue to create a bottleneck with the GPUs?

Am I missing something here?

Please explain, I have no intent to be hard headed.

Thanks

Because when is the last time you said, "I want to game at 640x480"? The people around here want the latest and greatest. I really don't care which CPU cranks out more frames at low res, because after I've shelled out 1000 bucks for video cards, an additional 1000 for my top-of-the-line CPU, and sold my other arm for the big LCD, I'm not gaming at low res.

Personally, I would like to see an article using a 3800+ to 4200+ level chip versus, say, an E6300-E6400 with an 8800 GTS at 1280x1024.

But a good article. I think it is good to point out that, in a blind test, there would be all but no difference between AMD and Intel.

It shows that you must choose a CPU based on what you do.

Me, I'm not a hardcore gamer, so I'm fine with my Opty 170. Would a Core 2 be faster? Yes. Do I want the extra speed? Yes. Do I need it? No.

I'll be on this system until quad core becomes affordable. I guess I'm not very [H]ard; I don't feel the need to have the latest and greatest. All I want is a fast machine that fulfills my needs. What this article shows is that Intel is the best, but AMD is still competitive.
 
The sarcasm does not help.

The title of the review/test is Real-World Gaming CPU Comparison with 8800 GTX SLI. If the test is a CPU comparison, why continue to create a bottleneck with the GPUs?

Am I missing something here?

Please explain, I have no intent to be hard headed.

Thanks

Please explain this bottleneck you refer to.


As an aside, imho, the C2D wins out simply from consistency. It delivers more consistent performance in many games. It's quite simple, actually. If you have a single core AMD s939 with 2x512 of RAM or lower, it is better to upgrade to a C2D if you are looking for a new system right now. Anything higher, it is best to stick with AMD and spend the money elsewhere (GPU, etc.). Would a flowchart help?:D
 
I never suggested that they should manipulate the cache. Neither of these CPUs allows that. I only suggested that they clock the CPUs to the same speed, so that they could eliminate as many variables as possible.
If you tamper with the clock speeds, you're nulling the playing field, not leveling it. The idea is to compare two out-of-the-box CPUs. You want to compare the performance of CPU architectures. [H] wants to compare the performance of CPUs.

The architecture of a CPU has no direct benefit to gaming performance. The benefit is indirect; the architecture allows the CPU to perform the tasks it needs to perform. The performance of the entire CPU, including the amount of cache, its clock speeds and other factors is what delivers gaming performance.

If you want to see how efficient a CPU architecture is, there is probably somewhere else you could go to see that.
 
It's quite simple, actually. If you have a single core AMD s939 with 2x512 of RAM or lower, it is better to upgrade to a C2D if you are looking for a new system right now. Anything higher, it is best to stick with AMD and spend the money elsewhere (GPU, etc.). Would a flowchart help?:D

QFT
 
The sarcasm does not help.

The title of the review/test is Real-World Gaming CPU Comparison with 8800 GTX SLI. If the test is a CPU comparison, why continue to create a bottleneck with the GPUs?

Am I missing something here?

Please explain, I have no intent to be hard headed.

Thanks

If you create a test where the limits of all the other hardware in the system are reached, then the single variable you change will give you a result. This is always true. Even if the variable produces no changes, that in and of itself is a result. Such a result would have shown that the CPU was not a limiting factor and that no difference exists between the two CPUs. It would point to another variable being the one that significantly adjusts performance deltas in the tasks used in the tests.

Some games are GPU limited, and some are CPU limited. In most of the games, the CPU changes made little to no difference in gameplay. Though the numbers showed varying differences from one game to the next, a human being likely wouldn't have noticed these differences in 90% of those tests, as the FPS in many of those games was over 50 FPS. In the titles that were primarily CPU-bound, such as Flight Simulator X, the difference between the CPUs was as much as 24%, and because the FPS was so low in that game, the difference a human being could perceive between the two test machines would have been great. While the performance difference between the two CPUs is great in synthetic tests, in real-world applications the difference is less pronounced. There are a few cases where that doesn't hold true.
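To put rough numbers on why the same gap feels different at high and low framerates, here is a back-of-the-envelope sketch; the FPS pairs below are hypothetical, chosen only to echo the ~24% and 50 FPS figures above:

```python
# Back-of-the-envelope: the same percentage gap matters more at low FPS,
# because the per-frame time difference a player actually feels is larger.
def frame_time_ms(fps):
    return 1000.0 / fps

def compare(fps_slow, fps_fast):
    gap_pct = (fps_fast - fps_slow) / fps_slow * 100
    saved_ms = frame_time_ms(fps_slow) - frame_time_ms(fps_fast)
    print(f"{fps_slow:5.1f} -> {fps_fast:5.1f} FPS "
          f"(+{gap_pct:4.1f}%): {saved_ms:4.1f} ms less per frame")

# Hypothetical GPU-limited case: both CPUs already deliver smooth framerates.
compare(55.0, 60.0)
# Hypothetical CPU-limited case (think of a flight sim): a ~24% gap at low FPS.
compare(21.0, 26.0)
```

At 50+ FPS a double-digit percentage gap works out to a millisecond or two per frame; in the low 20s the same percentage is several times that, which is why the flight-sim difference is the one you actually feel.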

I've reworded this post about 4 times trying to find the best way to explain it. This is the best I've come up with before hitting the Submit Reply button. I don't know what else to tell you at this point.

I think that we can all agree that graphics hardware is a more important bottleneck in system performance than CPUs are. Even so, if you have a badass 8800GTX SLI setup, you need the best CPU possible to feed those awesome cards and get your money's worth out of them.

At the time of this writing, the Core 2 Extreme X6800 is that processor. (Or the QX6700 for a little less performance now, and probably a lot more in the future.)

(Or any C2D overclocked to X6800+ speeds will do nicely as well.)
 
I have no problems in those areas. It is simply one of those things that drives me nuts. Because I am bored at work I addressed the issues I had with that particular post.
Ahh you're right. But it's an uphill battle. Eventually more people use the wrong saying, and the wrong saying becomes the accepted phrase.

Like the Brits who call the computer case (with CPU, mobo, etc.) a CPU. Now it seems that CPU is the accepted British term for a computer (the complete system, less the monitor and peripherals). Ask them what a Core 2 Duo is? A processor. Arrrrrrrgghhhhhhhhhhhhh!

How's that for off-topic!
 
Ahh you're right. But it's an uphill battle. Eventually more people use the wrong saying, and the wrong saying becomes the accepted phrase.

Like the Brits who call the computer case (with CPU, mobo, etc.) a CPU. Now it seems that CPU is the accepted British term for a computer (the complete system, less the monitor and peripherals). Ask them what a Core 2 Duo is? A processor. Arrrrrrrgghhhhhhhhhhhhh!

How's that for off-topic!

People calling a whole computer a hard drive or CPU is pretty prevalent down here in Texas at least. It drives me nuts as well.
 
Don't forget that the AMD CPU has 940 pins against Intel's 775. In order to achieve apples-to-apples conditions, 165 pins must be disabled/removed from the AMD unit, for a total of 775 contact points per CPU.
:p

The sarcasm does not help.

The title of the review/test is Real-World Gaming CPU Comparison with 8800 GTX SLI. If the test is a CPU comparison, why continue to create a bottleneck with the GPUs?

Am I missing something here?

Please explain, I have no intent to be hard headed.

Thanks

Take a look at the FSX and M2:TW pages you mentioned earlier; essentially the only difference between the "Intel" and "AMD" tests was the processor, and yet the "Intel" tests managed to achieve higher graphical settings at the same or slightly higher framerates. If the test were GPU-bound, there would have been no difference in either.
 
I just wrote some words in an email to a reader, and after writing them and reading this thread I discovered that they are very relevant to the discussion at hand, seeing as there is a common thread in the posts I am reading. Primarily, it seems our evaluation method is still misunderstood.

I will elaborate, from my email:

We feel that the highest playable gaming evaluation is more important for relating the game experience to our readers than apples-to-apples. Our real-world gameplay evaluations such as this one are geared for the gamer; they let them know exactly what the real benefits to gaming are between CPU/GPU. We evaluate performance and image quality hand-in-hand, neither solely.

When I install a game on my computer and play that game for the first time, the very first thing I do is figure out the best resolution, in-game settings, AA, and AF I can play at with enjoyable performance. We follow the same method in our testing: evaluate the game and figure out what resolution, in-game settings, AA, and AF are playable on a given platform.

Since our comparison is between the fastest AMD and Intel dual-core platforms with the fastest graphics system out there, the comparison is very valid. It lets you know which platform is going to give you the best gaming experience. As we found out, overall the Intel platform provides that, though there are some games where the experience is much closer and others where the gap is wider.

If we compare "apples-to-apples", that isn't going to tell us anything useful; it doesn't represent "real-world" gaming.

If you guys would like to discuss this further I will be glad to answer any questions and go into more detail privately through email.

Let's keep this thread on topic in regard to the evaluation itself now.
 
If you tamper with the clock speeds, you're nulling the playing field, not leveling it. The idea is to compare two out-of-the-box CPUs. You want to compare the performance of CPU architectures. [H] wants to compare the performance of CPUs.

The architecture of a CPU has no direct benefit to gaming performance. The benefit is indirect; the architecture allows the CPU to perform the tasks it needs to perform. The performance of the entire CPU, including the amount of cache, its clock speeds and other factors is what delivers gaming performance.

If you want to see how efficient a CPU architecture is, there is probably somewhere else you could go to see that.

The idea that I want to compare the CPU architecture is completely untrue. As I said previously, if you were trying to see what difference CPU MHz makes in "real world" performance, it would make more sense to take one CPU, set it to different clock speeds, and see the difference in performance. To say one CPU is better than the other when they are not on a level playing field is not a true comparison. I can compare any two CPUs I choose from each camp, and one is obviously going to have advantages over the other.

The point I was trying to make about the different clock speeds is that one side is given several advantages over the other. If [H] is going to give one side the advantage and, lo and behold, it is called the better CPU, that is a biased test. If they wanted to do a fair and balanced test and then see which performs better, that's fine. The Intel CPU was given two advantages: a faster overall MHz rating and LinkBoost. It was [H]'s idea to compare the two CPUs, not mine. All I'm saying is that if they want to do it, they should do it right and make it a fair test.

More to the point, if they want to do a "real world" analysis of CPU MHz on game performance, one CPU clocked at many different speeds would have been a more realistic test. End of story.
 
A more realistic test, at least from an enthusiast point of view, would be to overclock both CPUs as far as possible under identical thermal conditions (same CPU cooler etc) and then compare. If the AMD proc can be oc'ed to the same GHz level as the C2D, fine. If not, the difference will become even clearer.
 
Because when is the last time you said, "I want to game at 640x480"? The people around here want the latest and greatest...

As far as I understand, you don't have to go to 640x480 in order to be CPU limited, especially not with 8800GTX in SLI.
 
I've reworded this post about 4 times trying to find the best way to explain it. This is the best I've come up with before hitting the Submit Reply button. I don't know what else to tell you at this point...

Thank you for taking the time to explain. :)
 
...If the test were GPU-bound, there would have been no difference in either.

Good point, and I understand it, but I still feel that if these tests were run at 1600x1200 or a slightly higher resolution, it would have really let the GPUs flex their muscle with all settings on the highest, and the AMD system would not have been hovering around 24 FPS.

If I have not made my point clear I am sorry, as I am not trying to be difficult.
 
This is a pretty standard complaint. While I don't do video card reviews and can't really address what those guys will or will not do going forward, I can tell you that benchmarking all hardware in the most ideal conditions is probably the best way to go.

The 8800GTX SLI setup needed to be pushed to its limits in order to show the difference a CPU could make with that type of setup. In this context, it was the correct choice, I think. Plus, you test with an X6800 CPU to make sure that you've removed as much of the bottleneck as possible, and that way the only variable changing is your graphics card. (In the case of graphics card reviews.)

At that resolution, you're correct, but it doesn't help anyone who's spending $1,500 or more on a monitor.

In the [H] video card reviews, if you'll notice, many resolutions and monitor configurations are often tested. I would think this would give you the information you'd need most. There should be no doubt that at anything less than 2560x1600, the 8800GTX is STILL going to be the fastest card out today.

Yes, but one doesn't need a review to know that. I'm not saying this article doesn't make a point. I actually like how the tests were done to get playable settings, but it'd be great to see what kind of playable settings you can get at 1600x1200 (or 1050 for WS). It'd be interesting to see a wide variety of video card and CPU combos.

IOW, if I have video card X, what is the max CPU that's worth getting? If I have CPU Y, which card should I be looking at? Showing that with the playable settings, as well as an apples-to-apples comparison where appropriate, would be an incredible article. No doubt it'd be a long hard slog of a test, but it'd be very useful, IMO. And it'd be something that you could update every 6 months (preferably with a page that has charts for each game).

That would give you the ultimate bang for the buck reference.
 
I'd like to see a few things added (or in a future article). First would be results from overclocked systems -- after all, this is [H]ardOCP. I would like to see some info about typical overclock results for AMD vs. Intel processors, and maybe even video cards (though that is usually covered in the video card reviews), and what that means to real-world gaming performance the way you have presented here. I would also like to see a number of different resolutions tested, as well as some non-SLI setups, to see what performance can be had with those.
I think one of the advantages of SLI is that you can buy a system with one video card for a reasonable price that suits your needs and when more demanding games come out another card can be added to bump up the performance without having to get rid of your old card. For bleeding edge performance I think it will always be the case that the newest cards in SLI/Crossfire will give you the most bang, but a lot of people worry about the most bang for their buck.
Previously, I think you've highlighted nicely that there are mainly two kinds of people who like to overclock -- those who want the highest performance possible and those who want the most value from their hardware investment -- and catered to each of these groups in your articles. This article seems to cater to people who buy the most expensive stuff and then don't push it. I see that there aren't really many games out there that can stress the system you've put together, but there aren't many people who can afford the system you've put together either.
Maybe you've done an article recently that highlighted the midrange market, or maybe you're waiting for the lower-clocked C2Ds to be available to put it together, but I always enjoyed the articles that not only emphasized the fastest machines possible, but the articles that showed you what the best value was.
Thanks for the great work, keep it up. (Oh yeah, and the spell-check for the forums doesn't recognize "overclock" or "[H]ardOCP", though I'm sure it's been pointed out before.)
 
The point I was trying to make about the different clock speeds is that one side is given several advantages over the other. If [H] is going to give one side the advantage and, lo and behold, it is called the better CPU, that is a biased test.
This is absolutely wrong. The X6800 is the current top-end dual core Intel processor, the flagship dual core Intel processor. The FX-62 is the current top-end dual core AMD processor, the flagship dual core AMD processor. This is the playing field: the X6800 and the FX-62.

The X6800 has an advantage because, due to many factors, it is faster than AMD's competing processor. It executes the same task faster than AMD's competing processor out of the box. The review represents what happens with a number of games, on different (but comparable) high-end platforms when you install each processor into each respective platform and allow the BIOS to detect the CPU. The platforms between the two test machines naturally vary. That is unavoidable, but crippling one platform to better match the other is not what real users will do when they build their gaming machines, just as crippling one processor is not what real users will do. Portraying the whole story is what [H] is all about.

Now, if AMD had a dual core processor that is globally as fast as Intel's X6800, Brent surely would have used it in the evaluation. Intel's flagship offering is more expensive than AMD's counterpart, but that seems inconsequential from the standpoint of what the evaluation is trying to achieve. The article is not intended to be about which processor is the better value, nor is it intended to outline whether or not other Intel processors are faster or slower than other AMD processors. The article is about the X6800 and the FX-62 and scaling in gaming performance. Brent's conclusion is this:

Brent Justice said:
There is no other way to slice it; our gameplay experiences clearly show the Intel Core 2 Duo X6800 to provide real tangible improvements over the AMD Athlon 64 FX-62 when running GeForce 8800 GTX SLI. The BFGTech GeForce 8800 GTX SLI platform is one seriously fast gaming platform, and the Intel Core 2 Duo X6800 powers it better.

Out of the box, which is the better performing CPU in real-world gaming scenarios? Are we going to scream "bias!" because the X6800 is faster than the FX-62?

The idea of calling bias on [H] because the X6800 is faster than the FX-62 in real-world gaming scenarios on real-world platforms (the point of the article) is extremely odd. As I said, if you want to level the playing field by negating the advantages inherent in Intel's more expensive processor, that's your business, but raising the bias flag on Brent and [H] as a whole because you cannot accept the perfectly outlined fact that the X6800 is a better performing processor in real-world gaming scenarios than the FX-62 (on a comparable platform) is simply unacceptable.

t_ski said:
More to the point, if they want to do a "real world" analysis of CPU MHz on game performance, one CPU clocked at many different speeds would have been a more realistic test.
The article wasn't about clock speed scaling, it was about gaming performance on two out-of-the-box processors. Don't attempt to draw any other conclusions on anything other than what was presented in the article.
 
Great job [H]!

To everyone still wondering about the GPU bottleneck thing:

1. They used the best video cards available.
2. They used the 30" Dell monitor because widescreen gaming is getting more popular and widescreen is where 8800GTX SLI really shines. If you don't have a 30" Dell monitor, you probably don't need 8800GTX SLI anyway.
3. They used 2560x1600 because it is the native resolution of the monitor. LCDs still don't look that good on non-native resolutions.
4. Even with the extremely high settings, most benchmarks still showed Intel's lead, though it wasn't night and day like the 640x480 benchmarks were.
5. In the benchmarks that did seem to be GPU bottlenecked, they provided apples-to-apples benchmarks that clearly showed Intel's lead in most cases.
6. They mentioned at the beginning of the article that AMD was guilty of the same "canned" (I still think that's too strong of a word) benchmarks as Intel.
7. They didn't use low-res benchmarks because there isn't much of a point. In the lower resolutions, framerates become high enough that the difference won't be perceivable in gameplay. The other reason to use low-res benchmarks is to test a processor's raw power, but other types of tests (such as SANDRA, encoding, and Photoshop) are better for that purpose than low-res games.

I think the quality of this article was excellent and much better than the first Core 2 Duo article (which, I have to admit, was biased).
 
The idea that I want to compare the CPU architecture is completely untrue. As I said previously, if you were trying to see what difference CPU MHz makes in "real world" performance, it would make more sense to take one CPU, set it to different clock speeds, and see the difference in performance. To say one CPU is better than the other when they are not on a level playing field is not a true comparison. I can compare any two CPUs I choose from each camp, and one is obviously going to have advantages over the other.

The point I was trying to make about the different clock speeds is that one side is given several advantages over the other. If [H] is going to give one side the advantage and, lo and behold, it is called the better CPU, that is a biased test. If they wanted to do a fair and balanced test and then see which performs better, that's fine. The Intel CPU was given two advantages: a faster overall MHz rating and LinkBoost. It was [H]'s idea to compare the two CPUs, not mine. All I'm saying is that if they want to do it, they should do it right and make it a fair test.

More to the point, if they want to do a "real world" analysis of CPU MHz on game performance, one CPU clocked at many different speeds would have been a more realistic test. End of story.

What exactly are you saying there? Are you saying that [H] should have set the clockspeeds of the processors to be equal to each other so that neither had an "unfair advantage"?

I hope you're not saying that, because it doesn't make any sense at all. That's not equalizing the processors; it's bottlenecking the better processor. What about the CPU architecture? If Intel's is better, doesn't that give it an unfair advantage? What about the L2 cache? If one is bigger, should [H] disable it, since it gives that CPU an unfair advantage?

That's like taking a 7800GT and a 7800GTX and comparing the two, but with only 16 pipes active on the GTX.
 
For fuck's sake, people, the correct expression is "I couldn't care less," not "I could care less." See the difference? The first one means that you couldn't care less because you are at the minimum possible level of caring (which is zero) concerning the subject the comment was made in reference to. That means you don't give a shit. The other statement, "I could care less," means that you do care about the situation the statement was made in reference to. If you could care less, then you are not at the minimum level (of zero), the complete lack of caring. Saying you could care less means that there are lower levels of caring than you currently have in regard to the subject.

Stop it. It's driving me nuts. Thread after thread people keep screwing up that saying.

Oh Shit, that made me laugh. It's almost as annoying as people using the word "noone". Do they mean "no one"? There's no such word as "noone". It's a shame what is going on in our school systems.
 
Oh Shit, that made me laugh. It's almost as annoying as people using the word "noone". Do they mean "no one"? There's no such word as "noone". It's a shame what is going on in our school systems.

Slightly off topic but,

Apparently, people are forgetting how to write now because they spend all their time typing text and instant messaging. I use computers and type more than MOST people and I have certainly not forgotten how to write.

This is something I read about not that long ago. That's SAD. Schools today suck, and parents are breeding idiots.
 
What exactly are you saying there? Are you saying that [H] should have set the clockspeeds of the processors to be equal to each other so that neither had an "unfair advantage"?

I hope you're not saying that, because it doesn't make any sense at all. That's not equalizing the processors; it's bottlenecking the better processor. What about the CPU architecture? If Intel's is better, doesn't that give it an unfair advantage? What about the L2 cache? If one is bigger, should [H] disable it, since it gives that CPU an unfair advantage?

That's like taking a 7800GT and a 7800GTX and comparing the two, but with only 16 pipes active on the GTX.

Actually, that's exactly what I'm saying: both CPUs should be run at the same MHz. Whether that means underclocking the Intel or overclocking the AMD, if there were at least some tests shown with both CPUs run at identical speeds, I think it would be a more realistic comparison of the performance of the CPUs. I do think that each CPU will have an advantage in some respect, but if the testing is to be unbiased then they should have limited as many "unfair advantages" as possible.

If AMD were to come out with a CPU that ran at a substantially higher clock speed than the Intel chip, I think it would have won. Why? Because it would have had an unfair advantage. In all honesty, I don't think there are very many people out there who will go out and buy these two CPUs and not overclock them. Most people buy these CPUs because they are multiplier unlocked. So, in a sense, what [H] is doing here is comparing the two platforms and saying which one they think is better. As stated by Kyle (IIRC) in this thread, he admits many readers just skip to the conclusion page and don't bother to read all the details. That means many of those people see "Intel is better" and stop there. If [H] really wanted to compare the two platforms to each other, then they should have leveled the field.

And in response to your comment about the cache, if you bothered to actually read what I wrote, you obviously would have seen that I said NOT to disable the cache.
 
Actually, that's exactly what I'm saying: both CPUs should be run at the same MHz. Whether that means underclocking the Intel or overclocking the AMD, if there were at least some tests shown with both CPUs run at identical speeds, I think it would be a more realistic comparison of the performance of the CPUs. I do think that each CPU will have an advantage in some respect, but if the testing is to be unbiased then they should have limited as many "unfair advantages" as possible.

You have missed the point. There is an unfair advantage; it's called architecture and manufacturing capability. How is it a realistic test when AMD and Intel don't sell a Core 2 Extreme and an FX series processor at the same clock speed that work on a similar platform?

We all KNOW that the Core 2 Duo and Core 2 Extreme are FASTER than the Athlon X2 and FX series processors per clock cycle. We also all know that they overclock better than the AMD CPUs do. Why would you need a Core 2 Extreme and Athlon FX clocked at the same speed knowing this fact already? If you want fair, take an E6600 and an X2 4600+ and match those. Both operate at 2.4GHz. Still think AMD would win that battle? I didn't think so. At 2.8GHz, guess how the battle is going to go? Logic and simple reasoning should be enough to enable you to draw conclusions given the information presented.
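A toy model makes the per-clock-cycle point concrete. This is a minimal sketch; the IPC figures below are invented purely for illustration and simply stand in for "work done per clock", not measured values:

```python
# Toy model: relative gaming throughput ~ work-per-clock (IPC) * clock speed.
# The IPC numbers are made up for illustration only.
def relative_perf(ipc, clock_ghz):
    return ipc * clock_ghz

ipc_a, clock_a = 1.3, 2.93   # hypothetical higher-IPC chip at its stock clock
ipc_b, clock_b = 1.0, 2.80   # hypothetical lower-IPC chip at its stock clock

# Out of the box, both advantages (IPC and clock) count toward the result.
stock_ratio = relative_perf(ipc_a, clock_a) / relative_perf(ipc_b, clock_b)
print(f"Stock clocks: chip A delivers {stock_ratio:.2f}x chip B")

# Forced to the same clock, the comparison collapses to the IPC ratio alone --
# an architecture comparison, not a comparison of the products people buy.
equal_ratio = relative_perf(ipc_a, clock_b) / relative_perf(ipc_b, clock_b)
print(f"Equal clocks: chip A delivers {equal_ratio:.2f}x chip B (pure IPC ratio)")
```

Either way the higher-IPC part comes out ahead; clocking them identically just throws away the clock-speed component that the buyer actually pays for.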

There are FAR more people that will buy a high end system with the top CPU offered at the time and leave everything stock. I've worked on literally thousands of PCs and I've built plenty of them for customers only to upgrade them or service them for whatever reason later. None of them get overclocked.

I've also talked with several people about overclocking, and even amongst so-called "gamers," most do not overclock. Gamers and overclockers have a loud voice when it comes to hardware, software, and things we like and don't like. People also tend to listen to us because we know more on the subject than they do. Still, you have to realize that we are a minority. There are more people who don't overclock than there are that do. Even though a large portion of mom-and-pop PCs can be overclocked, as can most higher-end custom systems, the bulk of them will never see any significant upgrade and certainly no overclocking during their operational lifetime.
 
Whether that means underclocking the Intel or overclocking the AMD, if there were at least some tests shown with both CPUs run at identical speeds, I think it would be a more realistic comparison of the performance of the CPUs.
No, it would be a more realistic comparison between the performance of the CPUs at clock speeds that nobody would actually use.

When you buy a CPU, you buy the clock speed. The clock speed is the CPU, in a sense. Without it, ops per clock drops to zero, and the processor isn't even good enough for a bookend.

Like I said, you want to compare core architectures. I don't understand why you've denied this. Enthusiasts buy processors -- not core architectures.

Also, the term "bias" in this context indicates deliberate and intentional special treatment for one brand over another. I see nothing that would lead you to believe that this is what Brent has done.
 
When you buy a CPU, you buy the clock speed. The clock speed is the CPU, in a sense.

This is not entirely true. When you buy a CPU, you also need to buy the corresponding chipset that enables the features of that CPU. Many could argue that the chipset is even more important than the CPU itself. Every single part of a system provides important pieces to the whole puzzle. Chipset, CPU, video card, memory, etc., etc. That's why the point was made in the article to have "identical" systems. But they aren't identical, and can never be identical. Why? Because of **gasp** architectures. [H] is the one that brought up the idea of comparing the two; my point was only to level the playing field. When AMD releases a new CPU with a higher core speed than the Intel, one that would beat the X6800, will the same test be performed? No, probably not. Even if it were, I imagine there would be people in here complaining that the test was unfair due to the difference in core speed: an unfair advantage for AMD.

Like I said, you want to compare core architectures. I don't understand why you've denied this. Enthusiasts buy processors -- not core architectures.

Again, the processor is only part of the puzzle. Look at the big picture...

Also, the term "bias" in this context indicates deliberate and intentional special treatment for one brand over another. I see nothing that would lead you to believe that this is what Brent has done.

It is biased when the article comes to a conclusion saying one is the winner, one that has several advantages in its favor, and people read that and make their purchases based on what they're told. They go out and spend their money on what [H] says is good, but what happens when the tables are turned and AMD comes back with a winner? The two companies have started leap-frogging each other just like ATI and nVidia. One day it's X, the next day it's Y. If the two are not compared in the same circumstances, or at least with as few variables as possible, then it is not an even comparison. Yes, I know that people will not buy an X6800 and underclock it - I'm not a fucking idiot. If you compare an Intel system to an AMD system you are comparing more than just clock speeds. You are comparing the CPU, the chipset, memory controllers, cache, instruction sets, and so on. Do you really think that all there is to a CPU is speed? If you do, I'm sorry for you...

If all this is too hard for you to understand, I really don't know how to put it to you any easier.
 
I'm not a fucking idiot.

You're not? You damn sure come across like one. Many people have addressed your points already; if you can't accept that your thinking is highly flawed, then too bad.

Asking for an identically clocked test between two different architectures is STUPID. Were you around asking for the PD 965EE to be underclocked to 2.6GHz when it was being compared to the FX-60? Or did you ask for the FX-60 to be overclocked to 3.73GHz? :rolleyes: ROFLMAO

Are you going to insist that when R600 is released that they clock the GPU/VRAM at the EXACT SAME SPEEDS as an 8800GTX?! :rolleyes:

Give it up. You're embarrassing yourself.

You, sir, are a fucking idiot. Period.
 
:confused:
To: Dan and Brent

Real-World game resolution setting?

Yup, do a poll of what resolutions users here use for their game settings: one for online gaming and the other for playing against the computer. For example, looking at folks' rigs, it looks like 1280x1024 at 85Hz might be more *real world* than 1900 or 2000 by 1400, etc. Even guys I see here with 24", 22" and 20" LCDs still have one video card, and many of those are still X1900/X1950s or 7900/7950s, with a few single 8800s, etc.

Then if a poll shows 1600x1024 or 1600x1200 is the most common setting, use that. I'd be absolutely SHOCKED if even 10% of the posters here can come close to the settings used in what's called a "Real World" review. Just IMHO, the most commonly used settings should be called real world. :)
 
Real-World game resolution setting?

<some good points>
You make a good point that most members here are nowhere near the resolutions used in this article, and I'd also like to see a similar article written for more common resolutions. However, I think by "real-world", the authors meant that they were actually using real hardware on their own desk(s) with real games, rather than relying on performance data from the manufacturers or canned benchmarks.
 
Interesting points; however, how often does one encounter two 8800 GTXs driving a 1280x1024 resolution in actual practice? That's a bit odd, imo. If one were to spend ~$1200 for video cards and then shoehorn them into a 19" display...well, they wouldn't be getting what they paid for out of those video cards. That's for sure.
 
Interesting points; however, how often does one encounter two 8800 GTXs driving a 1280x1024 resolution in actual practice? That's a bit odd, imo. If one were to spend ~$1200 for video cards and then shoehorn them into a 19" display...well, they wouldn't be getting what they paid for out of those video cards. That's for sure.
A similar comparison for 1280x1024 wouldn't necessarily require a pair of 8800GTX's. Remember, the point of the review was to see what impact the CPU makes when the video card is not a bottleneck. So for 1280x1024, all you need is a single video card powerful enough to run games at that resolution at maximum settings, and see what effect, if any, the CPU has.

For example, put a single 8800GTX in an AM2 board, start with an FX-62, and gradually work down through all the X2's, then the Athlon 64's, and then the Semprons, until you see the CPU hampering in-game performance.
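A sketch of the sweep that suggestion implies might look like the following; the FPS numbers and the 5% tolerance are invented for illustration, and the CPU names just echo the tiers mentioned above:

```python
# Fix the video card, step down through CPU tiers, and flag the point where
# the CPU starts holding back in-game performance. All numbers are invented.
measured_fps = [
    ("FX-62",            71.0),
    ("X2 5000+",         70.5),
    ("X2 4200+",         69.0),
    ("Athlon 64 3500+",  58.0),
    ("Sempron 3400+",    41.0),
]

baseline = measured_fps[0][1]             # the fastest CPU sets the reference
for cpu, fps in measured_fps:
    cpu_limited = fps < baseline * 0.95   # more than ~5% below the reference
    status = "CPU-limited" if cpu_limited else "GPU/other-limited"
    print(f"{cpu:17s} {fps:5.1f} FPS -> {status}")
```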
 
:confused:
Yup, do a poll of what resolutions users here use for their game settings: one for online gaming and the other for playing against the computer. For example, looking at folks' rigs, it looks like 1280x1024 at 85Hz might be more *real world* than 1900 or 2000 by 1400, etc. Even guys I see here with 24", 22" and 20" LCDs still have one video card, and many of those are still X1900/X1950s or 7900/7950s, with a few single 8800s, etc.
Or they could just bench at a range of resolutions. That way we can look at the resolution we are most inclined to run at. We can also see at what resolutions bottlenecking occurs and on which component it is occurring (very useful for getting a sense of what would need upgrading next). Generally we would get a more complete picture of the performance characteristics of the hardware.

This is unsurprisingly how most everyone else reviews (oops I mean "evaluates") their hardware, and for good reasons.
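For what it's worth, the bottleneck-spotting logic behind a resolution sweep can be sketched in a few lines; all framerates below are invented, and the 10% cutoff is an arbitrary rule of thumb rather than anyone's published methodology:

```python
# While the CPU (or something else) is the limiter, raising the resolution
# barely moves the framerate; once FPS starts falling sharply with each step
# up, the GPU has become the bottleneck. Numbers are invented for illustration.
runs = [
    ("1280x1024", 88.0),
    ("1600x1200", 86.0),
    ("1920x1200", 70.0),
    ("2560x1600", 44.0),
]

prev_fps = None
for resolution, fps in runs:
    if prev_fps is None:
        print(f"{resolution}: {fps:5.1f} FPS (baseline)")
    else:
        drop = 1 - fps / prev_fps
        verdict = "GPU-bound" if drop > 0.10 else "CPU/other-limited"
        print(f"{resolution}: {fps:5.1f} FPS ({drop:.0%} drop -> {verdict})")
    prev_fps = fps
```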
 
Or they could just bench at a range of resolutions. That way we can look at the resolution we are most inclined to run at. We can also see at what resolutions bottlenecking occurs and on which component it is occurring (very useful for getting a sense of what would need upgrading next). Generally we would get a more complete picture of the performance characteristics of the hardware.

This is unsurprisingly how most everyone else reviews (oops I mean "evaluates") their hardware, and for good reasons.

QFT!
 
If you notice, we don't do what everyone else does. If you want what everyone else does, then read everyone else. We bring something unique to the table, something that is more relevant to real-world gaming.
 
Or they could just bench at a range of resolutions. That way we can look at the resolution we are most inclined to run at. We can also see at what resolutions bottlenecking occurs and on which component it is occurring (very useful for getting a sense of what would need upgrading next). Generally we would get a more complete picture of the performance characteristics of the hardware.

This is unsurprisingly how most everyone else reviews (oops I mean "evaluates") their hardware, and for good reasons.

[H] does that, but in a different way. Their philosophy is to keep the resolution as high as possible (or as close to the LCD's native rez as possible) while tweaking other settings to get good gameplay. They run games at different shadows, HDR, AA, AF, and scenery settings, and try to find the highest settings that make the game "playable" (though different people have different ideas of what playable is - at least they picked a common number) at the resolution that they want. So, just like the other review sites, they benchmark a game many different times at different settings, but instead of the resolution, they tweak the other settings to reach their resolution goal (usually 1600x1200, 1920x1200, or 2560x1600).
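Put another way, the "highest playable settings" approach boils down to a simple search, roughly like this sketch; the quality ladder, the stand-in benchmark numbers, and the 30 FPS floor are all assumptions for illustration, not [H]'s actual thresholds:

```python
# Hold the resolution at the panel's native size and walk quality settings
# down until the measured framerate clears a playability floor.
QUALITY_LADDER = ["8x CSAA / 16x AF", "4x AA / 16x AF", "2x AA / 8x AF", "no AA / 8x AF"]
PLAYABLE_MIN_FPS = 30.0   # assumed floor, purely for illustration

def run_benchmark(resolution, quality):
    # Stand-in for an actual gameplay run-through; returns pretend averages.
    fake_results = {"8x CSAA / 16x AF": 24.0, "4x AA / 16x AF": 29.0,
                    "2x AA / 8x AF": 36.0, "no AA / 8x AF": 45.0}
    return fake_results[quality]

def highest_playable(resolution):
    for quality in QUALITY_LADDER:            # start at the top, step down
        fps = run_benchmark(resolution, quality)
        if fps >= PLAYABLE_MIN_FPS:
            return quality, fps
    return QUALITY_LADDER[-1], fps            # nothing cleared the floor

setting, fps = highest_playable("2560x1600")
print(f"Highest playable at 2560x1600: {setting} ({fps:.0f} FPS average)")
```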

Since they don't have the time to test every single setup, [H] made a few very reasonable and necessary assumptions:

1. Readers would want to see how the CPUs did with the best GPUs available (after everyone's yelling about not using Crossfire last time, I don't think this was too far off).
2. Those buying the best GPUs available would be using them with a widescreen monitor (who buys dual 8800GTXs to run at 1600x1200?)
3. Those using a widescreen monitor would be running at native resolution (who runs an LCD monitor at non-native resolution?)

[H] used the 30" monitor, however, and the 24" 1920x1200 monitor was probably a better, more common choice. This is, IMO, they only thing that could have been better with the review.

Using multiple resolutions would be great, but to do that along with the extensive tweaking of settings that [H] does would be asking too much, IMO.

Actually, that's exactly what I'm saying: both CPUs should be run at the same MHz. Whether that means underclocking the Intel or overclocking the AMD, if there were at least some tests shown with both CPUs run at identical speeds, I think it would be a more realistic comparison of the performance of the CPUs. I do think that each CPU will have an advantage in some respect, but if the testing is to be unbiased then they should have limited as many "unfair advantages" as possible.

The clockspeed is actually part of the processor. It's what makes the processor go faster. Underclocking one of the processors would be disabling part of what makes it go faster than its competitor.

Yes, if AMD released a CPU with a much higher clockspeed than Intel, it would probably win. But AMD hasn't released such a CPU, and Intel therefore has the performance crown. Your statement is akin to saying that, if AMD released a 64-core processor right now based on the current architecture, it would beat Intel. I will respond to that with the following: Duh!

As for my statement about L2 cache, I'm asking you: What is the difference between extra cache and extra clockspeed? Aren't they both things that can give a processor an advantage over its competitor? Aren't they both things that are not part of the CPU architecture itself? Aren't they both things that can be disabled by the user?

I saw an article that said the eight-speed Veyron has a top speed of 250 MPH. That's a lot more than, say, a Corvette. But I think the article was biased, because if you set both cars to gear 1, the difference is much more modest. The Veyron's extra two speeds give it an unfair advantage. :rolleyes:
 
If you notice, we don't do what everyone else does. If you want what everyone else does, then read everyone else. We bring something unique to the table, something that is more relevant to real-world gaming.

Yes, I forgot to mention in my post that one of the good things about [H] is that they give you a different perspective on things than all of the other sites. That's not to say that the other sites' benchmarks are invalid (yes, [H] went too far when they said that a while back). But [H]'s benchmarks give you new information instead of repeating what you already know.

Oh, and BTW guys, apples-to-apples comparisons aren't always the best thing. To see what real apples-to-apples comparisons look like, try reading an issue or two of Consumer Reports. It'll make your head spin. They once said that the Dyson vacuum cleaner is a ripoff because it cleans no better than other cleaners but costs twice as much. They failed to factor the Dyson's main feature - its no-clog filter system - into their tests, however, since the other vacuums didn't have that feature and it therefore wouldn't have been an apples-to-apples test.

(That's not to say that CR is never useful, however)
 
If you look at most of the tests, they are at the same settings -- NFS Carbon, Oblivion, BF 2142, FEAR, WoW. For the two games where we didn't find the same settings playable, we provided AP2AP graphs.

I honestly don't understand the complaints; if you read the evaluation, everything you need is in there.
 
Yes I don't mean to come off rudely. I know you're very much aware of all the points I've raised (among other people), and for whatever reasons, valid or otherwise, you've decided to stick with the "real world" methodology.
 
If you notice, we don't do what everyone else does. If you want what everyone else does, then read everyone else. We bring something unique to the table, something that is more relevant to real-world gaming.

Please, I know the reply wasn't for me. I was just suggesting finding the most common settings and calling them Real World. I agree and don't care what others said. Same goes for my 20" viewable CRT, surely not a common device. Seeing a 30" LCD with two 8800 GTXs, etc., called Real World just doesn't seem right.
 