Intel Core 2 Gaming Performance

metallicafan said:
Maybe the best post out of the 900 or so posts in this thread. The sooner those of you complaining in this thread realize that Conroe is indeed faster in many applications except for our highly graphical, GPU-bound games, the quicker this issue can be put to rest.
I don't see very many people denying that Conroe is faster in many applications except for GPU-bound games. That's not the crux of the debate, as far as I can tell. The real question is why this "real-world" approach was chosen for the Conroe review (which, btw, is more consistent with his other reviews, video cards for instance), while for the AM2 review a couple months ago he chose to scale the rez down to 800x600 to eliminate the GPU as a factor. I'm not gonna jump the gun and say that he's trying to put AMD processors in a better light like some will, but it's not exactly consistent, now is it? I see that he's planning to write an editorial to clear the air, and I hope he addresses this particular question. Why do real-world now and not 2 months ago for the AM2 as well? I am eager to read his explanation, and I hope that it's good.


Oftentimes I see people complain that others put too much stock in synthetic benchmarks, and that 3DMark means nothing about in-game performance. It's more or less the same thing here. Kyle could have done what every other review out there did, confirming what most people already believed to be true, and shown that Conroe is indeed a faster CPU. But he chose to give the truth: at the resolutions and graphical settings most of us play at, there aren't a lot of gaming benefits to buying a $1,000, top-of-the-line Conroe.
The [H] benchmarks certainly offer a different perspective on the performance of various computer parts, and that's a good thing. Taken on its own, it is fine as a review. But compared to the methodology used in the AM2 review a couple months ago, it looks a bit strange. You can't deny that it looks just a little iffy. Once again, I'm not implying any deliberate wrongdoing, but you can see why it raises questions.


Bottom line: Measuring CPU performance on a game at 800x600 resolution is more or less a synthetic benchmark like 3DMark. It sure can't be called a "Real World" benchmark like so many people have asked for.
Eh? Are people calling 800x600 a real-world benchmark? I certainly wouldn't.

All I want to see is consistency. If real-world is the way [H] does things, then do that always. If I want reviews that test parts at identical rendering settings, I can go somewhere else. If I want to see "real-world", I come here. Just don't mix and match; it just makes [H] look bad.
 
JNavy89GT said:
Donnie, again apologies for lumping u in with the Intel !!!!!!s. It's evidently not true, so I retract that statement about u. In retrospect, I appreciate your outlook and commentary.
thx


Don't fall for it man... That is Donnie's "canned" excuse. It is CLEAR he is one of the biggest Intel !!!!!!s here... But whenever someone happens to notice it, he falls back on, "Well I have an AMD64 3500+"....

Every time....
 
HOCP4ME said:
Okay, I see what you mean. But this motherboard test is much different from the Conroe tests. Let me try to explain it:

The reason why Kyle did "real-world" tests on Conroe is because there was so much hype that Conroe would increase your framerates by xx percent and you'd be blown away by the performance. Kyle wanted to disprove this claim, and he did so successfully. He showed that, once you increase the resolution, Conroe isn't going to help you because today's games are GPU-limited. Now, it's true that they wouldn't have been so GPU-limited with SLI or CrossFire. But [H] tried to use SLI, and it didn't work. So they were forced to go ahead with the single video card, and they admitted that SLI might show different results.

Now, we take a look at the motherboard review. This time, it wasn't meant to disprove hype or show "real-world" performance. There was no one saying that upgrading to this board would increase your gaming performance by xx percent. This time, it was just supposed to be an all-around motherboard review, and that meant showing which board was faster without any bottlenecks. What if they had said "our real-world testing shows that you won't notice any difference, so it doesn't matter which board you buy"? That would NOT be very helpful at all. This was supposed to be a review, not an article about whether you will be able to utilize extra power due to bottlenecks. The Conroe article, however, was supposed to be about whether you would really be able to utilize its extra power. Those are two completely different things.

When [H] does SLI testing of Conroe, as they said they would, it will probably be more of a CPU review.

But I do agree that the comment about other sites being liars doesn't need to be there.

The same logic applies to testing motherboards, does it not? Why are we told one motherboard is faster than another with synthetic benchmarks, but they do something completely opposite in the Core 2 Duo article? Why not run the same "real world" (more like fantasy world for most of us) benchmarks for the motherboard articles? It is obvious you will come to the same conclusion: it will not matter what motherboard you choose, since it will also be GPU-limited. Who cares about features, overclocking, stability, or anything else on a board, because these "real world" benchmarks would completely negate the need for buying a $250 Asus board over a $75 ASRock board. If you assume the audience here all plays at your "Real World" benchmark settings, then why not test all hardware with the same benchmarks? I guess it would get very boring, but they could basically change all of their reviews to one-page editorials with the same lead-in sentence - "All other websites are liars and cronies, only [H] is telling you the truth, and as you will see today our real world benchmarks prove all (take your pick) motherboards, power supplies, memory, cooling systems, and cases are limited by the GPU and are therefore equal."

I am arguing about the logic between tests of one critical piece of the system (CPU) and the other (motherboard), when the "real world" tests (GPU-centric) negate any differences between the other components in the system. The wording in the motherboard articles clearly discusses performance between the boards using synthetic benchmarks, and the conclusions are also based upon these tests along with other aspects of the board. That is a conclusion that is wrong based upon the comments used in the Core 2 Duo article to justify using the "real world" benchmarks to prove there is no difference between CPUs. Once again, this can be argued every which way but loose; the real issue here, in my opinion, is Kyle's lack of professionalism in dealing with other websites. Rant over...
;)
 
Dan_D said:
Wow after reading many of these posts I can see that many here are missing the point.

What Kyle has shown us is that while Conroe CPUs may in fact be faster technically, in the real world the difference doesn't mean a thing in gaming performance. The difference across the board in the review wasn't that much; all the CPUs were within a few percent of each other.

So when another website talks about Conroe being the best for games, that is somewhat misleading. If you game at anything beyond 1024x768 and have a high-end video card, it simply won't matter which CPU you are using.

The situation now is different than it was when the A64 made a large improvement in gaming performance over that of a Prescott based P4.


Really? How is that, since in [H]'s benchmark you can clearly see the Pentium D neck and neck with the FX-62? The difference between the A64 and P4 is smaller than between the A64 and Conroe. But double standards make us happy, don't they?

And as other sites showed, if you game with high-end stuff, Conroe makes a difference.
 
savantu said:
Really? How is that, since in [H]'s benchmark you can clearly see the Pentium D neck and neck with the FX-62? The difference between the A64 and P4 is smaller than between the A64 and Conroe. But double standards make us happy, don't they?

And as other sites showed, if you game with high-end stuff, Conroe makes a difference.

Many articles show what [H]ard has shown: in real-world use the difference in games is...well...not that big:

http://www.firingsquad.com/hardware/fear_cpu_performance/page3.asp

http://www.xbitlabs.com/articles/cpu/display/cpu-games2_3.html

Terra - Why people are so "upset" over this I really can't see?
 
Terra said:
Terra - Why people are so "upset" over this I really can't see?

Because they changed their testing methods in this so-called CPU review. Up until now Kyle was perfectly fine using "canned" benchmarks and testing at 640x480.

All of a sudden, he decides to start on a crusade to reveal the truth to the world: that an FX-62 and a P3 1GHz running on a GF 6200 manage the same FPS.... :rolleyes: ...so you're perfectly fine with your P3, no reason to upgrade.

Is it that hard to comprehend?

Having more CPU power is never a bad thing; even if it's unused in games now, it will be used in the future. Games will be more and more CPU-hungry as AI develops, and GPUs will push the bar higher every 6 months.

Kyle at best showed that Conroe has untapped potential, and based on that he advised people to "wait for the AMD price cuts". Neutral journalism at its best... :eek:
 
duby229 said:
Don't fall for it man... That is Donnie's "canned" excuse. It is CLEAR he is one of the biggest Intel !!!!!!s here... But whenever someone happens to notice it, he falls back on, "Well I have an AMD64 3500+"....

Every time....

Did you ever provide any links to back what you're saying?
 
Donnie27 said:
Did you ever provide any links to back what you're saying?

I still remember this thread from him

http://www.hardforum.com/showthread.php?t=1042059

Care to comment, duby229?
This post shows it all:
http://www.hardforum.com/showpost.php?p=1029283731&postcount=75

duby229 said:
Uninformed?

Dude.....

At the very least I'm actively researching today's and tomorrow's architectures. I know more about microarchitectures than at least half the people on this forum.

Using Sharikou as a source is neither informed nor educated...
Cause boy was that troll wrong...

Terra - :rolleyes: :D
 
Not to change the subject here, but I got my Maximum PC yesterday and gave it a thorough combing for articles related to Conroe.

The 2006 Dream Machine sports a C2E X6800, and reading it over and over, it shows a 27% increase in FEAR over their zero-point system with an FX-60 and the same SLI cards. It was 31% faster in Quake 4. Then it said "we spanked the quad-SLI overdrive rig by almost 8 frames per second." WHAT? I dunno if that is the NVIDIA nForce 590 Intel Edition mobo they got before release, but I just do not see how a CPU can allow a gain like that over quad SLI.

The articles posted on this website really make me appreciate different angles of evaluation of new hardware. You guys are so detailed, and I am glad that things are more spelled out, not to mention real-world applicable.

On a side note, kudos to Craig Tate (Tech Daddy) for making Rig of the Month. I followed his work log on it and it's great... it can be found here:
http://www.hardforum.com/showthread.php?t=1012392&highlight=fx-57
 
I just wanted to thank the [H] guys. I think they gave a very fair view of the gaming situation. Conroe does not represent much if any advantage over current AMD chips at the settings most of us want to game at. It might be worth it if you have a 1280x1024 monitor, but otherwise it isn't. I thought it was extremely clear. I think all the bitching about no low-resolution benches is ridiculous!! No one games at low resolutions anymore. It is like having a video card review that never tops 1024x768 with no AA and no AF. In that situation a 7900GTX might spank a Radeon X1900 XTX, but no one games at that resolution with those cards, so what is the point of having that bench? I agree that the [H] crew should get some SLI or Crossfire benches just to see what the extremely rich who want the bleeding edge will get.
 
Willsonman said:
Not to change the subject here, but I got my Maximum PC yesterday and gave it a thorough combing for articles related to Conroe.

The 2006 Dream Machine sports a C2E X6800, and reading it over and over, it shows a 27% increase in FEAR over their zero-point system with an FX-60 and the same SLI cards. It was 31% faster in Quake 4. Then it said "we spanked the quad-SLI overdrive rig by almost 8 frames per second." WHAT? I dunno if that is the NVIDIA nForce 590 Intel Edition mobo they got before release, but I just do not see how a CPU can allow a gain like that over quad SLI.

The articles posted on this website really make me appreciate different angles of evaluation of new hardware. You guys are so detailed, and I am glad that things are more spelled out, not to mention real-world applicable.

On a side note, kudos to Craig Tate (Tech Daddy) for making Rig of the Month. I followed his work log on it and it's great... it can be found here:
http://www.hardforum.com/showthread.php?t=1012392&highlight=fx-57


That's what the [H] isn't telling you... that the GPU is the limiting factor in their CPU review. With Quad-SLI, the CPU becomes what is actually being benchmarked :). Hence, your confusion :(.
 
Terra said:
I still remember this thread from him

Using Sharikou as a source is neither informed nor educated...
Cause boy was that troll wrong...

Terra - :rolleyes: :D

Hehehe!
or

Originally Posted by duby229
I didn't say anything about Intel systems? So how is that a point worth making?

Opinions vary I guess. Showing something, and faking something are two very different things.

AMD shows...
Intel fakes...

Originally Posted by Duvie
I agree...this has been talked about for 4 weeks plus now....Let's all wait for real reviews and not just the morons at XS who stroke their epenises with superpi scores...

Originally Posted by duby229
I'm pretty sure Netburst used a shared cache as well.

This new thing is simply a marketing gimmick. It is just a name for an old hack. Yes that is right hack....

Trust me "hack" is a better word....

Those are a few of the reasons why I JOKINGLY asked if they were the same person.

duby229

On a hand-picked chip that came from a wafer where half didn't work...... That won't be available until at least August, and even then won't be available in volume because the OEMs will be eating them up.

I can't wait until November when it is actually available in retail.
 
GoldenTiger said:
That's what the [H] isn't telling you... that the GPU is the limiting factor in their CPU review. With Quad-SLI, the CPU becomes what is actually being benchmarked :). Hence, your confusion :(.

Yes they did. Kyle said if you had something like CrossFire or SLI you would see more of a difference. I'd bet a poll of what video cards folks use would be in order here. That way we could see a sample of users' cards like Steam did. Then [H] could call whatever the majority of users here voted for "real world". I'd bet there are more folks using X800XLs and 6800s than 7900GTXs and X1900XTs.

Steam and polls at GameSpot, GamePC, etc. have already shown 1024x768 is the most common setting, doesn't matter if folks like that FACT or not. It can be argued that that is more of a REAL WORLD setting than even 1280x1024.
 
This whole Conroe vs. AMD thing reminds me of the comparisons of the older P4C 3.0-3.2 to an AXP 3200+ (K7). You would have thought that, because of the P4's initial memory bandwidth advantage, there would have been a massive switch of gamers to the Intel platform, but it didn't happen in huge numbers once gamers found out that on pure FPS alone (multitasking aside) the AXP was very close in most situations, especially with some overclocking. In this case there looks to be a legitimate/modest gain, but not enough to cause a massive gamer-user switchout - although I'll go Conroe because I will be building a new machine anyway. Might as well go with the fastest at the time.
 
zone_86 said:
This whole Conroe vs. AMD thing reminds me of the comparisons of the older P4C 3.0-3.2 to an AXP 3200+ (K7). You would have thought that, because of the P4's initial memory bandwidth advantage, there would have been a massive switch of gamers to the Intel platform, but it didn't happen in huge numbers once gamers found out that on pure FPS alone (multitasking aside) the AXP was very close in most situations, especially with some overclocking. In this case there looks to be a legitimate/modest gain, but not enough to cause a massive gamer-user switchout - although I'll go Conroe because I will be building a new machine anyway. Might as well go with the fastest at the time.

Yea, but for many games the AXP 3200+ was slower than the older 2800+. Then add to this that the very first P4C 2.4s and 2.6Cs overclocked to 3-3.5GHz consistently, while the AXP 3200+ didn't like any kind of overclocking :) Still sold for almost $500.
 
bingo13 said:
Once again, this can be argued every which way but loose; the real issue here, in my opinion, is Kyle's lack of professionalism in dealing with other websites. Rant over...
;)

I told you I agree with you on that. As of now my ONLY problem with the article is the comment about other sites. You can stop mentioning that now.

The same logic applies to testing motherboards, does it not? Why are we told one motherboard is faster than another with synthetic benchmarks, but they do something completely opposite in the Core 2 Duo article? Why not run the same "real world" (more like fantasy world for most of us) benchmarks for the motherboard articles?

Because there was no one saying that a new mobo is going to improve your real-world gaming.

With motherboards, everyone knows that the differences in gaming aren't going to be noticeable. Therefore the synthetic benchmarks are used as more of a stability test. They wanted to stress the motherboard, not the GPU. So they used the synthetic tests to do just that. If it had run much slower than the others, we would know there is something wrong with that board. High-res tests are also used for this purpose, to stress the PCI-E bus.

With CPUs, everyone should also realize that the differences in gaming aren't going to be noticeable. And Kyle didn't have to run all of those tests to prove that. It seems, however, that many users forgot about the GPU bottleneck in the midst of the hype about Conroe. There were so many posts that went like "OMG Conroe is going to increase my framerates by 20%!!!" when that simply wasn't true. I think Kyle wrote the article to remind everyone that Conroe was under the same GPU limitations as every other processor. I don't think it was meant to say "Conroe is only catching up with AMD". It was more of a "remember guys, processors are GPU-bottlenecked" reminder. And the results would have been the same if they had followed the "most popular" hardware and used a 6800GT at 1024x768.
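To make the GPU-bottleneck point concrete, here's a toy sketch (my own illustration with made-up numbers, not anything from the [H] article): a frame can't finish faster than the slower of the CPU's and GPU's work for it, so frame time is roughly max(cpu_time, gpu_time), and a faster CPU only shows through once the GPU cost drops.

```python
# Toy model of the GPU bottleneck (illustrative only; all numbers made up).
# A frame is done when both the CPU and the GPU finish their share, so the
# frame time is roughly the slower of the two.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Approximate FPS when CPU and GPU work largely in parallel."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpus = [("slower CPU", 8.0), ("faster CPU", 6.0)]  # hypothetical ms/frame

# High res + AA: the GPU needs 25 ms/frame, so both CPUs sit at ~40 FPS.
for name, cpu_ms in cpus:
    print(f"{name} at high res: {fps(cpu_ms, 25.0):.0f} FPS")

# 800x600: the GPU cost collapses to 4 ms/frame and the CPU gap appears.
for name, cpu_ms in cpus:
    print(f"{name} at 800x600:  {fps(cpu_ms, 4.0):.0f} FPS")
```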

I still think that the best solution to this problem would be to do both synthetic benchmarks and real-world tests. Synthetic benchmarks are not "total BS", they just give you a different type of information. There really is nothing wrong with using them both. IMO, they complement, not contradict, each other. Real-world benchmarks tell you what performance gains you can expect to see by upgrading now. Synthetic benchmarks tell you how much power your chip has - power that may be unlocked in the future. Using synthetic benchmarks in addition to real-world takes almost no additional time, and would result in an article that pleases everyone. Don't want to confuse your readers with contradicting results? Then label the tests. Call the real-world tests "today's performance" or "usable power". Call the synthetic tests "future performance" or "potential power". That way, no one is confused, everyone gets what they want, those looking to build a new system get the synthetic tests, and those looking to upgrade get the real-world tests. It would be easy to tell the difference between the two, readers would understand what each is telling them, and [H] would long be hailed as the site where you can "get both sides of it". And if there is a person who doesn't understand both types of tests and only trusts one of them, at least he won't be complaining that you didn't run the types of tests that "actually mean something" to him. You guys may think this is "double standards", but think about DirectX and OpenGL. Would you ever buy a card that only supported one of them, because you wanted a card with a single "standard"? I think not. You would want a card that supports both "standards", because each one has its own advantages. Combine both testing methods, and you get the advantages of each, with the disadvantages of neither. There really is no reason not to do it both ways.
 
HOCP4ME said:
I told you I agree with you on that. As of now my ONLY problem with the article is the comment about other sites. You can stop mentioning that now.



Because there was no one saying that a new mobo is going to improve your real-world gaming.

With motherboards, everyone knows that the differences in gaming aren't going to be noticeable. Therefore the synthetic benchmarks are used as more of a stability test. They wanted to stress the motherboard, not the GPU. So they used the synthetic tests to do just that. If it had run much slower than the others, we would know there is something wrong with that board. High-res tests are also used for this purpose, to stress the PCI-E bus.

With CPUs, everyone should also realize that the differences in gaming aren't going to be noticeable. And Kyle didn't have to run all of those tests to prove that. It seems, however, that many users forgot about the GPU bottleneck in the midst of the hype about Conroe. There were so many posts that went like "OMG Conroe is going to increase my framerates by 20%!!!" when that simply wasn't true. I think Kyle wrote the article to remind everyone that Conroe was under the same GPU limitations as every other processor. I don't think it was meant to say "Conroe is only catching up with AMD". It was more of a "remember guys, processors are GPU-bottlenecked" reminder. And the results would have been the same if they had followed the "most popular" hardware and used a 6800GT at 1024x768.

I still think that the best solution to this problem would be to do both synthetic benchmarks and real-world tests. Synthetic benchmarks are not "total BS", they just give you a different type of information. There really is nothing wrong with using them both. IMO, they complement, not contradict, each other. Real-world benchmarks tell you what performance gains you can expect to see by upgrading now. Synthetic benchmarks tell you how much power your chip has - power that may be unlocked in the future. Using synthetic benchmarks in addition to real-world takes almost no additional time, and would result in an article that pleases everyone. Don't want to confuse your readers with contradicting results? Then label the tests. Call the real-world tests "today's performance" or "usable power". Call the synthetic tests "future performance" or "potential power". That way, no one is confused, everyone gets what they want, those looking to build a new system get the synthetic tests, and those looking to upgrade get the real-world tests. It would be easy to tell the difference between the two, readers would understand what each is telling them, and [H] would long be hailed as the site where you can "get both sides of it". And if there is a person who doesn't understand both types of tests and only trusts one of them, at least he won't be complaining that you didn't run the types of tests that "actually mean something" to him. You guys may think this is "double standards", but think about DirectX and OpenGL. Would you ever buy a card that only supported one of them, because you wanted a card with a single "standard"? I think not. You would want a card that supports both "standards", because each one has its own advantages. Combine both testing methods, and you get the advantages of each, with the disadvantages of neither. There really is no reason not to do it both ways.


This is the logic I am trying to understand. :D

HardOCP uses canned (their words) benchmarks in the motherboard and system reviews. Why not use the "Real World" benchmarks in the motherboard or system reviews? The statements are made that other websites are "lying" to the user by using canned benchmarks, and then [H] posts a motherboard review and a system review using such "real world" benchmarks as WorldBench5, I/O Meter, and Sandra 2007. If I follow the stated logic on "real world" benchmarks, then HardOCP is lying to me in the other reviews. These canned benchmarks are not accurately showing what the system or component, the motherboard in this case, is going to do in a real-world setting.

You stated they run synthetic benchmarks to stress the motherboard and to ensure stability of the platform. Why not use this same thought process to stress the CPU?

Here is a quote from the ECS KA3 MVP article-
"The KA3 MVP shows its dominance immediately, by pulling to the forefront of the AM2 based pack." This analysis is made after running Sandra 2007. Tell me, how is this dominance in a synthetic benchmark going to affect my "real world" performance?

My issue, besides the lies and cronies comments, is the logic of explaining to the readership that only real-world benchmarks tell the truth - unless of course you are reading our other review articles, where synthetic benchmarks tell the truth.

What process do we believe?
:D
 
zone_86 said:
This whole Conroe vs. AMD thing reminds me of the comparisons with the older P4C 3.0-3.2 to an AXP 3200+ (K7). You would have thought that because of the memory bandwidth advantage of the P4 initially that there would have been a massive switch of gamers to intel platform but it didn't happen in huge numbers once gamers found out that just on pure FPS (multitasking aside) alone the AXP was very close in most situations, and especially with some overclocking. In this case there looks to be a legitimate/modest gain but not enough to cause a massive gamer-user switchout - although i'll go conroe because i will be building a new machine anyway. Might as well go with the fastest at the time.
The thing with that is that the Athlon XP 2500+ and later the Mobile Barton were so cheap for their performance, and could get up to 3200+ speeds fairly easily, that they were fine for gaming.

The fact is, for "gaming" it doesn't really matter whose CPU you buy for performance today; it's performance "tomorrow" that should be included in your purchasing decision.

Any modern LGA775 P4/Pentium D is fine for gaming.
 
BigGreenMat said:
...It is like having a video card review and never topping 1024x768 with no AA and no AF. In that situation a 7900GTX might spank a radeon 1900XTX...

That's because at that resolution the CPU becomes the bottleneck, which of course makes it a pointless exercise if you are trying to compare the performance of video cards.

Geeze, what is it that you people don't get......it's not rocket science, for God's sake.
:rolleyes:
 
coldpower27 said:
...
Any modern LGA775 P4/Pentium D is fine for gaming.

I don't think that was the message put forward when A64 beat it by 2-10% in games and lost by a similar amount in multimedia and multitasking.
 
savantu said:
I don't think that was the message put forward when A64 beat it by 2-10% in games and lost by a similar amount in multimedia and multitasking.

Which is exactly what I was trying to convey. I recommend the P4D, and sometimes an X2, for anyone who wishes to do video editing intensively. Both chips are also fine for gaming, but I always recommend the platform that has that slight edge for whatever task you're doing. Back when it was the P4C vs. the AXP, I would recommend the P4 most often, unless you understood overclocking and were willing to get the right parts for AMD (NF7-S or Lan Party Infinity), where you could do far more than take it to 2.2 GHz - more like around 2.7. Hell, I took stock Biostar M7NCDPs to 2.5+ with mobiles and unlocked XP2500s, and with that in mind AMD lost nothing in pure gaming performance to the P4C. Shortly thereafter the A64 made its mark and passed up the P4C quite easily; then Intel's P4D with NetBurst eventually found its way closer to the A64's gaming performance, but not enough for gamers not to still consider the A64 the top gaming platform - especially when the X2 came along, effectively negating any hyperthreading/multitasking advantage the Pentium had.

For AMD it was dead even with multitasking for the most part, but the gains in gaming were substantial. What we have now is a situation where the roles are reversed, but the gains are a bit more pronounced on Intel's side.

Intel has basically AMD'ed AMD here... both with the performance gain and with aggressive pricing, market-wise. Role reversal. Just like the back and forth between ATI and NVIDIA. Inevitable.
 
bingo13 said:
You stated they run synthetic benchmarks to stress the motherboard and to ensure stability of the platform. Why not use this same thought process to stress the CPU? :D

That's right. They do it to test mobo stability. Notice how they praise the mobo even when it isn't at the top. That's because they aren't looking to see which is faster; they're just making sure there are no unnecessary bottlenecks being introduced. Sandra, for example, would reveal a bottleneck in communication between the RAM and CPU.
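For what it's worth, the kind of thing a synthetic memory test measures isn't magic; here's a bare-bones sketch of the same idea (my own toy code, nothing to do with Sandra's internals): time a big block copy and report effective bandwidth. A board with a misconfigured memory bus would show up as a much lower number.

```python
# Bare-bones memory bandwidth check: time a large block copy.
# (Toy code for illustration; real tools like Sandra are far more thorough.)
import time

def copy_bandwidth_mb_s(size_mb: int = 256, runs: int = 5) -> float:
    src = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        dst = bytes(src)  # forces one full read and one full write pass
        best = min(best, time.perf_counter() - start)
        del dst
    return 2 * size_mb / best  # read + write traffic, in MB per second

if __name__ == "__main__":
    print(f"~{copy_bandwidth_mb_s():,.0f} MB/s effective copy bandwidth")
```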

And I agree, stability testing on a CPU is a good idea too. If they did both real-world and synthetic benchmarks, a stability test would just be icing on the cake.

But really, complaining that they used synthetic benchmarks to stress-test a mobo is like complaining if they use Prime95 to stress a CPU because, after all, Prime95 isn't real-world.

I still would like someone to tell me why synthetic + real-world benchmarks is a bad idea.
 
HOCP4ME said:
That's right. They do it to test mobo stability. Notice how they praise the mobo even when it isn't at the top. That's because they aren't looking to see which is faster; they're just making sure there are no unnecessary bottlenecks being introduced. Sandra, for example, would reveal a bottleneck in communication between the RAM and CPU.

And I agree, stability testing on a CPU is a good idea too. If they did both real-world and synthetic benchmarks, a stability test would just be icing on the cake.

But really, complaining that they used synthetic benchmarks to stress-test a mobo is like complaining if they use Prime95 to stress a CPU because, after all, Prime95 isn't real-world.

I still would like someone to tell me why synthetic + real-world benchmarks is a bad idea.

I am not complaining that they used synthetic benchmarks to test the motherboards. My only issue is the entire logic of their statements. In short: Real World benchmarks do not lie, Synthetic/Canned benchmarks do lie, according to what has been stated. They use both, so how do I know which one is the lie? I just cannot believe people completely overlook the logic in their statements. I could care less which one they use; just do not speak out of both sides of your mouth when trying to justify both methods yet at the same time slamming both methods.
Really, what is the difference between a custom timedemo run at 16x12 8xAA/16xAF and a FRAPS recording at the same settings? The fact of the matter is the custom timedemo will have about 0.5% variability and FRAPS will have 3~7% on average. They know this, and this is one of the main reasons why you see different "setups" used in every review: they cannot accurately repeat the same results with their FRAPS captures between GPUs, CPUs, or motherboards.
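That repeatability gap is easy to quantify yourself; here's a quick sketch (sample numbers invented for illustration) of how you'd measure run-to-run spread as a coefficient of variation:

```python
# Run-to-run spread of a benchmark, as stdev relative to the mean.
# (Sample numbers are invented; the point is scripted vs. manual spread.)
from statistics import mean, stdev

def spread_pct(fps_runs):
    """Coefficient of variation: stdev as a percentage of the mean."""
    return 100.0 * stdev(fps_runs) / mean(fps_runs)

timedemo = [61.2, 61.4, 61.1, 61.3, 61.2]  # scripted runs repeat tightly
fraps    = [58.0, 63.5, 60.1, 65.2, 59.4]  # manual gameplay runs wander

print(f"timedemo spread: {spread_pct(timedemo):.2f}%")  # well under 1%
print(f"FRAPS spread:    {spread_pct(fraps):.2f}%")     # several percent
```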
 
bingo13 said:
I am not complaining that they used synthetic benchmarks to test the motherboards. My only issue is the entire logic of their statements. In short: Real World benchmarks do not lie, Synthetic/Canned benchmarks do lie, according to what has been stated. They use both, so how do I know which one is the lie? I just cannot believe people completely overlook the logic in their statements. I could care less which one they use; just do not speak out of both sides of your (Kyle) mouth when trying to justify both methods yet at the same time slamming both methods.
:p

I agree that the statement "synthetic benchmarks lie" is wrong. But saying "I prefer to use real-world tests to measure performance" and then using synthetics for stability testing isn't being inconsistent.
 
Everyone is acting as if your computers will no longer work after the 27th if you don't have a Core 2 Duo.

HardOCP is on the bubble with me because I would like to see the closest thing to "real" world benchmarks. If you are in the top 10% of computer users, these benchmarks are great, but if you are not at the top of the food chain, these are apples to oranges. This goes for the other sites that are benchmarking and forecasting the death of all CPUs except for Conroe.

I would like to see some testing and benchmarks with hardware that does produce bottlenecks. For example:

- Some enthusiasts will be able to afford the chip and motherboard but will have to use their existing vid cards, sound cards, hard disk drives, etc...

- Most OEM PCs are CPU-heavy with crap peripherals, because the general public and some novice gamers are led to believe that if they have the CPU, they must be able to play the games.

- Also, the numbers are not real-world times. Meaning: if it now takes you 1 second to bring up Word, a Conroe with 50% more speed takes that down to about 0.67 seconds, not 0.5, since time scales as 1/speedup (see the quick sketch below). This is just an example.
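A quick sketch of that speedup-versus-time arithmetic (plain math, nothing vendor-specific; the one-second Word launch is just the hypothetical from above):

```python
# "X% more speed" cuts time by a factor of 1/(1 + X/100), not by X%.
def new_time(old_seconds: float, speedup: float) -> float:
    return old_seconds / speedup

print(f"{new_time(1.0, 1.5):.3f} s")  # 0.667 s: a 50% faster CPU
print(f"{new_time(1.0, 2.0):.3f} s")  # 0.500 s: halving takes a 2x speedup
```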

If you are a video editor or server administrator, then Conroe is a goldmine.


I stand corrected: the CPUs & Real-World Gameplay Scaling article does just what I suggested.
 
The Athlon 64 X2 is still the best deal for most people; get a 3800+, it can cream any networst and also cream any low-end Conroe. Once 4x4 comes, Intel will be creamed senseless.
K8L + 4x4 = the fastest computers in the world. Core 2 is not what it is cracked up to be. If AMD had 2MB of cache per core, Conroe would be creamed so badly that it would be a repeat of networst.
 
Righty right. Good call there. Your immense knowledge of technology and mastery of the old Socratic debate have convinced me I made a terrible decision getting a Conroe. Now I feel bad.

I am sure you are stating facts, because I can tell by your sig you have a 4x4 K8L system, a Conroe system, a "netburst" system and a 3800 X2; you are speaking from experience. Me, what the hell would I know?
 
What I was saying is that in the future AMD will come out with 4x4, which gets all the benefits of a 2-way Opteron box at 1/2 the price. I think the best deal is actually the X2 4800+ 939, $290 at Newegg, and it can beat anything at its price.
 
quux said:
The Athlon 64 X2 is still the best deal for most people; get a 3800+, it can cream any networst and also cream any low-end Conroe. Once 4x4 comes, Intel will be creamed senseless.
K8L + 4x4 = the fastest computers in the world. Core 2 is not what it is cracked up to be. If AMD had 2MB of cache per core, Conroe would be creamed so badly that it would be a repeat of networst.
:rolleyes: :rolleyes: :rolleyes: ...that's all this really deserves
 
quux said:
What I was saying is that in the future AMD will come out with 4x4, which gets all the benefits of a 2-way Opteron box at 1/2 the price. I think the best deal is actually the X2 4800+ 939, $290 at Newegg, and it can beat anything at its price.

Huh? The E6400 that's selling for $230-$240 at most places is the better deal. It beats the FX-62 by a fair amount at STOCK speeds, and destroys it when overclocked <hugs my precious E6400 :)>:

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2802&p=4

Read. Learn. Understand.
 
Terra said:
Many articles show what [H]ard has shown: in real-world use the difference in games is...well...not that big:

http://www.firingsquad.com/hardware/fear_cpu_performance/page3.asp

http://www.xbitlabs.com/articles/cpu/display/cpu-games2_3.html

Terra - Why people are so "upset" over this I really can't see?

According to those articles, including the one here at [H], my A64 3800+ is sufficient, and the main thing I need to be concerned with in the here and now is my GPUs. My current board is AGP, so I'd have to grab a new board and memory, but it seems like I can save myself $300 for now and just stay 939.
 
You know what you need better than anyone else. No point in upgrading to AM2, but you should consider a board with PCIe vid slot(s).
 
I decided to leave my game box the way it is:

AMD 4400+ X2
A8N32 SLI
7800GTX SLI
2GB 3500LL PRO Corsair
74GB Raptors in Raid 0
Plextor 716A
ETC..

I snagged a Core 2 Duo 6300 and used that instead of the Core 2 Duo 2.66 in the P5W I was going to use to water cool and overclock, etc.

I ended up clocking the Core 2 Duo to 2.8GHz, and I am now using it as my main computer instead of the dual 3GHz Xeon workstation, which I sold to finish building it.

Core 2 Duo 1.86 @ 2.8GHz
2 GB Ballistix DDR2 667
PC Power and Cooling Deluxe 510 SLI
150GB Raptor/500GB Seagate
Radeon 1900XT 512
Asus P5W DH

Without actually getting a P5N32 SLI and doing benchmarks, I will never really know the exact difference, but with SLI disabled on the AMD machine, I find no difference between the systems as far as gaming goes. This is not apples to apples, oranges to oranges, or apples to oranges, but what I like to call Meat and Potatoes: no FPS counters or benchmarks, just KVMing back and forth and playing the games on both machines. No difference that I can see.

Now, if you want to talk about capturing video directly into Premiere Pro, rendering transitions, and exporting out to DVD with Encore DVD, then I can say that there is a significant difference between these two machines. Tens of minutes difference!!!

Just sharing my experience.

Thanks

All
 
The author of this article states this test is "real world". One question:

What's real-world about using 1600x1200 resolution with 4x AA and 16x AF on a non-SLI system? No casual or pro gamer is going to even think of trying to play at those settings. Even at the end of the article, they show their true colors by saying, and I quote:
"Lastly, I would advise everyone that is thinking of rushing out and purchasing their latest upgrade that we are sure to see HUGE pricing slashes out of AMD before the end of the month."

If that isn't an article that was sponsored by AMD, I don't know what is.


Ignorance is bliss.
 
This is an old thread. These points have been made. I don't think anyone actually went for this jive anyway. I don't recall anybody typing "gee, I was going to buy a C2D, but now that I read that article, coupled with the fact that AMD is having huge price slashes, I'm not buying one now".









#1 retail 6600-3703mhz
http://www.hardforum.com/showthread.php?t=1075792
 
I'd still be using my [email protected] if it supported PCIe and SLI.

It was fast enough for everything I did, and almost any CPU made in the last 3 years is so powerful that the GPU becomes the bottleneck in high-res gaming. Many sites have proven this... Xbit Labs did a review once that showed Celerons = A64s.

I think our minds are so blinded by drive-by 800x600 reviews of processors that we tend to invest too much there and not enough on the video end, where it really does make a difference in gaming.

Well, I don't have infinite money, so for me the monitor comes first, then the video card, and the processor is way down that list, right after mouse selection.
 
This has proved to be a pretty rough thread (I have only read parts of it), and I hope I'm not beating a dead horse or telling anyone something they didn't already know, but I wanted to add my thoughts....

Having scoured the web for Core 2 gaming benchmarks in SLI at high resolutions with AA and AF enabled, I came upon the article that spawned this thread. I, too, was skeptical of the canned benchmarks that I found on many websites touting the Core 2 as the Second Coming. I do have a preference for AMD, but I'm not a "fanboy". I found it frustrating, though, that I could not find SLI or Crossfire gaming benchies vs. AMD. All the data I could find about Core 2's amazing performance gains was with a single, albeit top-of-the-line, video card. Also, the CrossFire and/or SLI Core 2 benchmarks I COULD find (there weren't many) didn't compare the results directly to an AMD counterpart. I decided to see for myself what the hype really was and whether it was warranted.


Having an AMD 4000+ ASUS SLI system with 2 gigs of RAM, OC'd to 2.7 GHz, and two 7900 GTXs, I decided to pull the mobo, RAM and CPU and replace them with an E6400, an ASUS P5N32-SLI SE Deluxe, and two gigs of decent OCZ PC2-6400.

Now, I don't have any ORB results, screenies or any of that stuff. But I can give a pretty fair estimate of my results comparing the two. I guess you can take me at my word - I didn't really care which platform was better; I just wanted to know for myself so I could have the best gaming performance.

Note: I was able to OC the E6400 about 450 MHz, from 2160 to 2611. The increase turned out to be a nice improvement.


Sooooooo....


3DMark06, default settings, no AA or AF-

AMD at 2.7 GHz, 7900 GTX SLI = 5998

E6400 at 2.16 GHz, 7900 GTX SLI = 8411

E6400 at 2611 MHz, 7900 GTX SLI = 9545

Note that my CPU score in 3DMark06 more than doubled to over 2000 going from AMD to Core 2, but I'm sure most of you knew that already.


FEAR built-in benchmark - 1440x900, everything max, soft shadows off, 8xS AA, 16x AF:

AMD at 2.7 GHz, 7900 GTX SLI = 45 FPS avg, 90 FPS high

E6400 at 2.16 GHz, 7900 GTX SLI = 59 FPS avg, 139 FPS high

E6400 at 2611 MHz, 7900 GTX SLI = 64 FPS avg, 147 FPS high
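For anyone who prefers percentages, the gains in those runs work out like this (a quick sketch using only the numbers listed above):

```python
# Percent gains computed from the scores and FPS numbers listed above.
def gain_pct(before: float, after: float) -> float:
    return 100.0 * (after - before) / before

print(f"3DMark06, AMD 2.7GHz -> E6400 2.16GHz: +{gain_pct(5998, 8411):.0f}%")
print(f"3DMark06, E6400 2.16GHz -> 2.61GHz:    +{gain_pct(8411, 9545):.0f}%")
print(f"FEAR avg, AMD 2.7GHz -> E6400 2.16GHz: +{gain_pct(45, 59):.0f}%")
print(f"FEAR avg, E6400 2.16GHz -> 2.61GHz:    +{gain_pct(59, 64):.0f}%")
```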


I know there are plenty more benchies I could do - several other ways I could compare the two. But for my purposes, which is mainly gaming, this evidence is sufficient. There are significant gains moving from AMD to Intel Core 2. And on bar graphs (the common form of displaying comparative benchmarks) these improvements can appear monstrous.....BUT....


The "feeling" I got playing the games I like to play was marginal. It definately "feels" faster playing BF2142 and BF2 but there were still slow downs in FEAR. I was quite disappointed in that. I was hoping the Core 2 would smooth things out a bit at higher resolutions, but it didn't- not to my satisfaction. I still have to play FEAR at 1024 x 786 with both platforms for it to be nice and smooth all the way.

I am undecided as to whether I'll keep the Core 2. I guess when the next generation of video cards comes out, I'll be glad if I did. As for now, I'm not sure it's really worth the extra 300 bucks (the $700 I paid for the Core 2 system, minus the $400 I'll get for the AMD stuff).


If enough of you read this (I'm not sure you will; this thread has been down for a while), I'm sure I'll be held to some sort of scrutiny. Feel free to ask questions or PM me, I'll be happy to tell you what I know. Or you'll be happy to tell ME what I DON'T know, etc. etc. etc.
 