Real-World Gaming CPU Comparison with BFGTech 8800 GTX SLI

I would have liked to have seen this article written when SupCom came out.

One reason Supreme Commander is molesting everyone's PC right now is that everyone went out and bought high-end video cards and moderate CPUs. SupCom comes out, clearly one of the most CPU-demanding games to date, easily bringing the fastest dual cores to their knees, and everyone is left with their jaw on the floor wondering why their $900 in video cards won't run the game above 10 fps.

SupCom has really thrown a wrench into the assumption that most games will be GPU-dependent.
 

RTS games have always been CPU-dependent. The sheer number of units on screen creates that situation, and it's largely why RTS games have such low unit caps for each player. It's also why a game's performance can take a radical dive in large battles or when your unit cap is full. This shouldn't surprise any RTS fan.
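To put some rough numbers on that, here is a toy sketch (not anything from SupCom's actual engine) of why the per-tick simulation cost blows up with unit count: if every unit scans every other unit for a target each tick, the work grows with the square of the unit count, so doubling the units roughly quadruples the CPU load.

    import random
    import time

    class Unit:
        def __init__(self, team):
            self.x = random.uniform(0, 1000)
            self.y = random.uniform(0, 1000)
            self.team = team
            self.target = None

    def sim_tick(units):
        # Naive target acquisition: every unit scans every other unit for the
        # nearest enemy, so the cost of a single tick is O(n^2) in unit count.
        for u in units:
            best_dist = float("inf")
            for v in units:
                if v.team != u.team:
                    d = (u.x - v.x) ** 2 + (u.y - v.y) ** 2
                    if d < best_dist:
                        best_dist, u.target = d, v

    for n in (250, 500, 1000):
        units = [Unit(team=i % 2) for i in range(n)]
        start = time.perf_counter()
        sim_tick(units)
        print(f"{n:5d} units: {time.perf_counter() - start:.4f} s for one tick")

Real engines use smarter spatial queries than this, but the basic shape of the problem is the same, which is why framerates crater in big battles.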
 

I have a 7800GT, a 1920x1080 monitor (Westinghouse), and an older 1600x1200 Dell (2000FP).

I also have dual Opteron 285s and 8GB of RAM. Any idea how this would play in 64-bit XP Pro, or in Linux with Wine? If it really is CPU-dependent, I wonder whether having 4 cores is going to help me. Will it use them?
 

I'm running SupCom on dual 1680x1050 screens, with 4x AA, 16x aniso, and all detail settings maxed out. This is on a quad-core QX6700 with an 8800GTX and 2GB of RAM. In 8-way AI skirmish matches with the unit cap set at 1000 per player, I've seen the framerate drop to single digits, but usually only when zoomed in on a large number of battling units.

I typically see Core 0 running at about 80%, Core 1 between 40% and 60%, and Cores 2 and 3 in the 20%-30% range.

I initially had trouble running games that large (8-way, 81km map, 1000 unit cap) because they would crash after about 20 minutes; then I realized that SupremeCommander.exe would crash whenever it tried to use more than 1.5GB of RAM. So I looked in the GPGnet forums and found a solution that lets you mod the exe so that it can address memory above 2GB. Once I did that, no more crashing, and I've seen SupremeCommander.exe use as much as 2.9GB of RAM, which, since I only have 2GB, means it's paging like a bastard (though I typically only notice it when I zoom out of one location and zoom into another). I figure with another 2GB in my box I should be able to resolve the last of my SupCom performance issues.
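For anyone curious what that kind of exe mod usually involves: the common approach is to set the "large address aware" flag in the executable's PE header (the same thing Microsoft's editbin /LARGEADDRESSAWARE switch does). I don't know the exact patch posted on the forums, but here's a minimal, purely illustrative Python sketch of flipping that flag; the file path is a made-up placeholder, and you'd obviously back up the exe first.

    import struct

    IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020

    def set_large_address_aware(path):
        with open(path, "r+b") as f:
            dos_header = f.read(0x40)
            # Offset 0x3C of the DOS header holds the offset of the PE header.
            pe_offset = struct.unpack_from("<I", dos_header, 0x3C)[0]
            f.seek(pe_offset)
            if f.read(4) != b"PE\x00\x00":
                raise ValueError("not a PE executable")
            # Characteristics is the last 2-byte field of the 20-byte COFF
            # file header, i.e. 18 bytes past the end of the PE signature.
            char_offset = pe_offset + 4 + 18
            f.seek(char_offset)
            flags = struct.unpack("<H", f.read(2))[0]
            f.seek(char_offset)
            f.write(struct.pack("<H", flags | IMAGE_FILE_LARGE_ADDRESS_AWARE))

    # Hypothetical install path -- back up the original before patching.
    set_large_address_aware(r"C:\Games\SupremeCommander\bin\SupremeCommander.exe")

On 64-bit Windows that lets a 32-bit process use up to 4GB of address space instead of 2GB, which matches the ~2.9GB usage described above.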

Note that these memory issues only occur on large maps (40km and 81km) with more than 6 players, which is why most people are playing online on 20km maps and smaller, with 6 or fewer people.
 

That dual Opteron 285 machine will basically perform like a clock-speed-equivalent Athlon X2. The other two cores won't do you any good at this point, and the extra RAM won't do you much good either.

An Athlon X2 at 2.6GHz with 2GB of RAM will perform the same as your machine would with the exact same video card.
 
Maybe I'm completely logical in my thinking or something, but I have been reading HardOCP for a long, long time. I quit reading reviews or review advice when the "subjective" review started. It's not accurate, it doesn't eliminate most variables, and in fact in this case it introduces a GPU bottleneck by cranking up insanely high settings.

Imagine going to a race with two runners; because one runner is stronger, you make him run with 10lb ankle weights, and then you tell them, "you guys keep running and stop when you 'think' you have reached a mile."

This is not meant to flame or add any insult. I appreciate the HardOCP website, and I have spoken out on this before. I hate the testing. Try to do this within any scientific realm and you'd be laughed out the door. You test the systems with as close to a common configuration as possible, at the same resolutions across the board, and whoever comes out on top is the fastest. It's not rocket science.
 

In the scientific realm, the athletic realm, or any other environment, the validity of a test depends on what you are trying to find out. In racing, you typically (but not always) want to equalize as many variables as possible, as in your example. But consider a different event at the same hypothetical "olympics"--weight-lifting. In that event you keep adding more weight until one competitor can lift it and the other can't. You don't care who can most quickly lift 50, 100, and 200 pounds, you care who can lift the most pounds whatever the final number is.

That is the revolution of [H]'s testing method. They had a paradigm-shift moment where they came to realize that a pure "footrace" between cards did not provide the needed information. Once you eliminate all variables, you eliminate the reason people want good video cards. The highest frame rate at an arbitrary "standard" setting is not the goal. The best settings at a workable framerate are the goal. The definition of a workable framerate is unavoidably subjective, something complicated by the fact that the ideal framerate can vary from one game to another. But that's a degree of grayness we have to accept, because what we want is the best possible visual quality and the most advanced effects, not the highest framerate. Traditional benchmarking is easier to standardize and easier to grasp at a glance, but it doesn't tell us what we need to know. It has its priorities backwards.

I used to prefer the "old" way myself. Now when I read benchmarks at other sites, I get frustrated as I keep asking myself, "but what settings did they use? Did they turn on maximum grass distance? Did they enable supersampling? How did each card react when they went up a notch on shadow quality? This card seems to handle 4XAA easily--could they have used 8X?"

Standardized settings can't answer the question "how good can it get?" because they would have to create a test run for every setting--no site has the time for that, and no reader would want to wade through the results. [H]'s method takes you straight to the bottom line--how good can this card make your games look and play? Nothing else should matter. Again, it doesn't matter how "scientific" or "objective" a test is if it doesn't tell you what you need to know.
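If it helps, here's a toy sketch of the difference in what the two approaches actually measure. The run_benchmark function, the settings ladder, and the 30 fps floor are made-up placeholders, not [H]'s actual procedure; the point is just that a fixed-settings test asks "how fast at setting X?", while the highest-playable approach asks "what's the best setting that still clears a playability floor?"

    # Toy illustration of the two approaches. run_benchmark(level) is a
    # hypothetical stand-in for playing through a test scenario and
    # recording the average framerate at that quality level.
    SETTINGS_LADDER = ["medium", "high", "very high + 2xAA", "max + 4xAA/16xAF"]
    PLAYABLE_FPS = 30.0  # arbitrary floor; the "right" value varies by game

    def fixed_settings_score(run_benchmark, level="high"):
        # Traditional method: one standardized level, report the framerate.
        return run_benchmark(level)

    def highest_playable(run_benchmark):
        # [H]-style question: what is the best level that stays playable?
        best = None
        for level in SETTINGS_LADDER:
            if run_benchmark(level) >= PLAYABLE_FPS:
                best = level
        return best

    # Example with a fake benchmark that just looks numbers up in a table:
    fake_results = {"medium": 95, "high": 70, "very high + 2xAA": 44, "max + 4xAA/16xAF": 24}
    print(highest_playable(fake_results.get))  # -> "very high + 2xAA"

Same underlying runs, two different summaries; the second one is the number that actually tells you how to set up the game.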

If the above still doesn't make sense to you, then please just take a vow not to bother commenting on the issue any more. You're wasting your time and ours, because we understand what you're saying--you just don't understand what we (the [H] and the vast majority of us who agree with their testing methods) are saying.
 


Your post makes perfect sense, and I agree to an extent. However, this was a CPU evaluation, not a video card evaluation: the point was to see which CPU could push a given video card further. But there were no comparisons at different resolutions. Just saying the highest playable frame rate was XX isn't enough of a comparison. In my opinion, both CPUs should have been tested starting at, say, 1280x1024 with max AA/AF, and then tested across the board to see which could push the highest frame rates at each given setting, as in the sketch below.

That method would have fit your weight-lifting example perfectly, been more valuable, and offered readers a chance to compare their own computer's performance against the numbers in the review. That information could then be used to say: OK, I get this much performance in, say, Oblivion on XX processor and XX video card, and the performance numbers in this test give me an idea of how much performance I could expect, assuming I have either the GPU or the CPU used in the review.
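To make the suggestion concrete, here is a rough sketch of the kind of results grid being asked for; the bench() function and the CPU/resolution lists are placeholders taken loosely from this thread, not anything from the article.

    # Fixed-settings grid: run each CPU at each resolution with identical
    # (maxed) quality settings and report every cell, instead of a single
    # "highest playable" summary. bench(cpu, res) stands in for a real run.
    RESOLUTIONS = ["1280x1024", "1600x1200", "1920x1200", "2560x1600"]
    CPUS = ["Athlon X2 @ 2.6GHz", "Core 2 Quad QX6700"]

    def comparison_grid(bench):
        results = {}
        for cpu in CPUS:
            for res in RESOLUTIONS:
                results[(cpu, res)] = bench(cpu, res)  # average fps, max AA/AF
        return results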
 
Of course I want a fast CPU, but to be honest, for gaming, what the fuck does it matter what CPU you have? If the settings you play at no longer depend on the CPU, and instead are limited by GPU performance, then we need to know that. Imagine how the guy who was running an AMD X2 at 2600MHz with a GTS felt when he upgraded to a new C2D at 3200MHz with shiny new RAM and hard drives, kept the same GTS, and found his game speeds were exactly the same. :p

The end result is what matters. Yes, my C2D kicks some ass, but I knew going into this upgrade that my game speeds were going to be basically the same. The C2D does things other than game, and THAT is why I upgraded.

The [H] method of video card testing totally kicks ass...period, finito, ad infinitum! If you don't play games, then read reviews somewhere else.
 

I gotta give you one there--this thread has run for so long that I forgot it was more about the CPU than the GPU, and my comments were more GPU-focused. :)

But like Rapture says, taking you to the bottom line rather than showing every step in between has its advantages. You also have to remember that the reviewers actually do all those incremental steps in order to discover the bottom line--they just don't give all the detail because it would make the articles very unwieldy to write and very tedious to read.
 