Real-World Gaming CPU Comparison with BFGTech 8800 GTX SLI

This is not entirely true. When you buy a CPU, you also need to buy the corresponding chipset that enables the features of that CPU. Many could argue that the chipset is even more important than the CPU itself. Every single part of a system contributes a piece to the whole puzzle: chipset, CPU, video card, memory, and so on. That's why the point was made in the article to have "identical" systems. But they aren't identical, and can never be identical. Why? Because of **gasp** architectures. [H] is the one that brought up the idea of comparing the two; my point was only to level the playing field. When AMD releases a new CPU with a higher core speed than the Intel, one that would beat the X6800, will the same test be performed? No, probably not. Even if it were, I imagine there would be people in here complaining that the test was unfair due to the difference in core speed: an unfair advantage for AMD.



Again, the processor is only part of the puzzle. Look at the big picture...



It is biased when the article comes to a conclusion declaring one the winner, one that has several advantages in its favor, and people read that and make their purchases based on what they're told. They go out and spend their money on what [H] says is good, but what happens when the tables are turned and AMD comes back with a winner? The two companies have started leap-frogging each other just like ATI and nVidia. One day it's X, the next day it's Y. If the two are not compared in the same circumstances, or at least with as few variables as possible, then it is not an even comparison. Yes, I know that people will not buy an X6800 and underclock it - I'm not a fucking idiot. If you compare an Intel system to an AMD system you are comparing more than just clock speeds. You are comparing the CPU, the chipset, memory controllers, cache, instruction sets, and so on. Do you really think that all there is to a CPU is speed? If you do, I'm sorry for you...

If all this is too hard for you to understand, I really don't know how to put it to you any easier.

The essential problem with your thought process in all of these posts is the phrase "unfair advantage." An advantage is neither fair nor unfair when comparing the best available; it is simply an advantage. If there were better components available for building the AMD system and [H] failed to use them, that would be a legitimate concern about the validity of the test. If so, please point them out. If you put forward your best and it isn't good enough, you lose. That is not unfair. Unfortunately, your belief that it is unfair, presumably because you don't like the result, is the entire basis of your complaints.

If one team in the Super Bowl has a better quarterback, should they be forced to use their backup QB in order to "level the playing field"? There is no logic to your objections. They are the equivalent of throwing a tantrum because your little league team lost a game. The marketplace does not care about fairness in the sense you are trying to define it. The better product wins. Too bad it wasn't on your favorite "team."
 
1. Measuring CPUs from different manufacturers at the same clock speed would uncover certain architectural advantages, like how many (or how wide) SSE instructions each can execute per clock cycle. The Core 2, if I recall correctly, is outstanding at SSE2, executing a full 128-bit operation in one go (see the sketch after this list).

But if you really want to be realistic, the metric should be performance per dollar, or, if you are a rich bastard, performance per average overclock on the highest-end CPU offering from each manufacturer.

As some have pointed out, the Core 2 Duo overclocks a lot further on average.


2. Running more or less synthetic benchmarks (like the Intel/3DMark multithreaded Ice Storm Fighters demo) can hint at how the CPU may perform in future games that are multithreaded or use the CPU differently, with more AI and less object work (DX10 promises to offload the CPU even more).

3. This brings me to the point that the Core 2 Duo will most likely matter more in the future, as games will only get more complex and more CPU-sensitive. So a year from now you might actually see a 40% performance difference in the same setup, once you are forced to lower the resolution on the crazy 8800 GTX SLI to hit 60, 85, 100, or whatever FPS you like, and the CPU becomes the bottleneck.
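As a rough illustration of point 1, here is a minimal sketch (my own assumed example in C++ with SSE2 intrinsics, not anything from the article) of the kind of 128-bit work being discussed: a single `_mm_add_epi32` processes four 32-bit integers per instruction, and a core that retires a full 128-bit SSE2 operation per cycle gets through such code roughly twice as fast as one that internally splits it into two 64-bit halves.

```cpp
// Minimal sketch (assumed example, not from the article): one 128-bit SSE2 add.
// A CPU that executes a full 128-bit SSE2 op per cycle (like Core 2) handles
// each _mm_add_epi32 in one internal operation; older cores split it in two.
#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdio>

int main() {
    alignas(16) int a[4] = {1, 2, 3, 4};
    alignas(16) int b[4] = {10, 20, 30, 40};
    alignas(16) int c[4];

    __m128i va = _mm_load_si128(reinterpret_cast<const __m128i*>(a));
    __m128i vb = _mm_load_si128(reinterpret_cast<const __m128i*>(b));
    __m128i vc = _mm_add_epi32(va, vb);  // four 32-bit adds in one instruction
    _mm_store_si128(reinterpret_cast<__m128i*>(c), vc);

    std::printf("%d %d %d %d\n", c[0], c[1], c[2], c[3]);  // prints: 11 22 33 44
    return 0;
}
```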


Just trying to find some sense in this article. Capping the results at 60 FPS and seeing how much the CPU dips due to cache misses and memory latency is excellent; credit to [H]ardOCP for that.
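For what that capped-at-60 methodology means in practice, here is a hedged sketch (my own illustration, not [H]'s actual harness, and the frame times are invented): cap the frame rate at 60 FPS and count how many frames still blow past the 16.7 ms budget, which is where CPU-side and memory-side stalls show up.

```cpp
// Rough sketch of the "cap at 60 FPS and watch for dips" idea (illustrative only).
// Frames that exceed the 16.67 ms budget are CPU/memory-side dips, since the cap
// keeps the GPU from being the limiting factor.
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical per-frame times in milliseconds captured during a capped run.
    std::vector<double> frame_ms = {16.7, 16.7, 16.7, 24.3, 16.7, 31.0, 16.7};
    const double budget_ms = 1000.0 / 60.0;  // 60 FPS target

    int dips = 0;
    double worst = 0.0;
    for (double t : frame_ms) {
        if (t > budget_ms + 0.1) {  // small tolerance for timer jitter
            ++dips;
            if (t > worst) worst = t;
        }
    }
    if (dips > 0)
        std::printf("%d dips below 60 FPS, worst frame %.1f ms (%.1f FPS)\n",
                    dips, worst, 1000.0 / worst);
    else
        std::printf("no dips below 60 FPS\n");
    return 0;
}
```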
 
If one team in the Super Bowl has a better quarterback, should they be forced to use their backup QB in order to "level the playing field"? There is no logic to your objections. They are the equivalent of throwing a tantrum because your little league team lost a game. The marketplace does not care about fairness in the sense you are trying to define it. The better product wins. Too bad it wasn't on your favorite "team."

Is the score becoming lopsided? I prefer a close game ;)
 
If all this is too hard for you to understand, I really don't know how to put it to you any easier.
I'd imagine I'm having a difficult time grasping what you're saying because I choose not to skew reality. Now, you're certainly free to live in whatever world you've concocted inside your mind, but to assume that this askew fantasy world is the very same that the rest of us call reality is somewhat strange.

You have some valid points, but I think your global perspective is irrelevant and nonsensical.

Donnie27 said:
Then if a poll shows 1600 x 1024 or 1200 is the most common setting, use that. I'd be absolutely SHOCKED if more than 10% of the posters here could come close to the settings used in what's called a "Real World" review.
What percentage of 8800 GTX SLi owners with $800+ processors aren't running displays that are natively 1920x1200 or larger? I think the monitor choice is in line considering the test systems. As the test systems change, though, so should the monitor choice.
 
What percentage of 8800 GTX SLi owners with $800+ processors aren't running displays that are natively 1920x1200 or larger? I think the monitor choice is in line considering the test systems. As the test systems change, though, so should the monitor choice.

All tests should be built around some kind of Hi/Mid/Lo system. I'm just saying there's no way in hell that SLI'd 8800 GTXs and a 30" LCD are common, and that setup has almost NOTHING to do with the real world. Intel overclockers can reach and pass the stock performance of the X6800, so that part becomes moot, but you can't cheat your way around not having SLI'd 8800 GTXs and a sweet-assed 30-incher. Maybe call it Xtreme or whatever. To run real world tests you first need a real world rig. I say that with all due respect to [H].
 
I'd imagine I'm having a difficult time grasping what you're saying because I choose not to skew reality. Now, you're certainly free to live in whatever world you've concocted inside your mind, but to assume that this askew fantasy world is the very same that the rest of us call reality is somewhat strange.

You have some valid points, but I think your global perspective is irrelevant and nonsensical.


What percentage of 8800 GTX SLi owners with $800+ processors aren't running displays that are natively 1920x1200 or larger? I think the monitor choice is in line considering the test systems. As the test systems change, though, so should the monitor choice.

You make a good point. If you can afford a system like the one in the review, then you probably have at least a 24" LCD. I've also read numerous posts on these forums from users with 30" monitors. So while they are somewhat of a minority, saying that no one owns these things is incorrect. It is in fact a real world test.
 
All tests should be built around some kind of Hi/Mid/Lo system. I'm just saying there's no way in hell that SLI'd 8800 GTXs and a 30" LCD are common, and that setup has almost NOTHING to do with the real world. Intel overclockers can reach and pass the stock performance of the X6800, so that part becomes moot, but you can't cheat your way around not having SLI'd 8800 GTXs and a sweet-assed 30-incher. Maybe call it Xtreme or whatever. To run real world tests you first need a real world rig. I say that with all due respect to [H].

The point of the article was to show the difference between the AMD and Intel CPUs in a high end system. To do that, you really need the CPU to become the only variable. Pushing to the extreme one way or the other is the best way to do that. Would you call the same setup running at 640x480 real world? I doubt you would. In that regard this is a far more meaningful test than it would have been if run at 640x480.

I think that if this test had been conducted at lower resolutions with different video cards, the numbers wouldn't have made as much sense. The GPUs probably would have been the bottleneck at those lower resolutions and you wouldn't have seen a difference in the CPUs at all. Most people don't seem to be grasping that.

If we ran an X2 3800+ and a Core 2 Duo E6600 (which are the same clock speed) would you have called it a fair test? That's what it sounds like as I read these posts. No one should call that fair. We already KNOW that the Intel CPU would have raped the X2 3800+ at stock speeds. Anyone who doesn't get that obviously isn't going to understand this article at all.
 
The point of the article was to show the difference between the AMD and Intel CPUs in a high end system. To do that, you really need the CPU to become the only variable. Pushing to the extreme one way or the other is the best way to do that. Would you call the same setup running at 640x480 real world? I doubt you would. In that regard this is a far more meaningful test than it would have been if run at 640x480.

640 x 480 isn't real world, in the same way that 2xxx x 1xxx and a 30" monitor aren't real world. That's the whole point. Add 800 x 600 to that as well. Somewhere between 1680 x 1050 and 1280 x 1024 sits mainstream and Real World.

I think that if this test had been conducted at lower resolutions with different video cards, the numbers wouldn't have made as much sense. The GPUs probably would have been the bottleneck at those lower resolutions and you wouldn't have seen a difference in the CPUs at all. Most people don't seem to be grasping that.

I beg to differ; even my POS X1800XT runs a mean game of BF2 at either 1280 x 1024 or 1024 x 768 online. All I'm saying is:

A. Do a poll to find the most common systems.
B. Most common settings used.

Those were some killer tests on a killer kick-ass rig, but if these were cars it would be like talking about real world 0-200 MPH in one of these. :) Not just the car, but since when is 0 to 200 MPH real world?

If we ran an X2 3800+ and a Core 2 Duo E6600 (which are the same clock speed) would you have called it a fair test? That's what it sounds like as I read these posts. No one should call that fair. We already KNOW that the Intel CPU would have raped the X2 3800+ at stock speeds. Anyone who doesn't get that obviously isn't going to understand this article at all.

IMHO, the processors picked were 100% fair. I have no issue with that. I just meant that whichever processor is used is really moot in a way. For most of us here, a stock X6800 is kind of mid-range, since many Intel folks overclock slower processors until they run faster than a stock X6800. Look at Kyle's own results. For me, 2.4GHz stock to 3GHz is a very slight overclock. We know how well Conroe scales. :)

I'd do a review with a 19" or 20" viewable LCD and one 8800 GTX, at the likely native resolutions of 1280 x 1024 or 1600 x 1200 respectively, at whatever the monitor's refresh rate is. But even that is the reason I still hold on to my old 20" viewable CRT!
 
What I read from all this is basically a request for a "Joe Sixpack" round-up style review.

Line up a couple of midrange C2Ds and A64s, the most popular/common motherboards, the most popular/common sound solution(s), GPUs, HDDs, etc., and then test them all at the most commonly used resolutions. Include some mild overclocking based on the most repeatable and common results; remember, it's Joe Sixpack we're talking about here. See what falls out based on a number of criteria.

That having been said, the review this thread is based upon does its intended job, and that was to compare one of the best consumer setups a person could ask for with another of the same ilk.
 
640 x 480 isn't real world, in the same way that 2xxx x 1xxx and a 30" monitor aren't real world. That's the whole point. Add 800 x 600 to that as well. Somewhere between 1680 x 1050 and 1280 x 1024 sits mainstream and Real World.
That's more of the Majority World than any so-called "Real World". Now, with some test rigs, using a 30" LCD would be very unusual. In this case, however, I think it's very fitting. I think the 3007 is the perfect monitor choice for an FX-62/8800 GTX SLi rig and a perfect monitor choice for an X6800/8800 GTX SLi rig. I personally would use a somewhat smaller monitor with such machines, but that's mainly a personal preference.

When the test system changes, you have to re-evaluate the monitor choice if you're doing an evaluation like this one where the CPU is the variable reference point and not resolution, GPU muscle or other factors. The next article Brent does may use the 3007, or he may use some other panel if that is what makes sense for the intention of the article.
 
That's more of the Majority World than any so-called "Real World". Now, with some test rigs, using a 30" LCD would be very unusual. In this case, however, I think it's very fitting. I think the 3007 is the perfect monitor choice for an FX-62/8800 GTX SLi rig and a perfect monitor choice for an X6800/8800 GTX SLi rig. I personally would use a somewhat smaller monitor with such machines, but that's mainly a personal preference.

When the test system changes, you have to re-evaluate the monitor choice if you're doing an evaluation like this one where the CPU is the variable reference point and not resolution, GPU muscle or other factors. The next article Brent does may use the 3007, or he may use some other panel if that is what makes sense for the intention of the article.

Hell, I'd love to have that rig they used! I'd love it if that was a COMMON rig and I had one, but clearly that's not the case. Now please understand what I'm saying takes NOTHING away from their review. I thought it was great.

"Real-World Gaming CPU Comparison with BFGTech 8800 GTX SLI", hell, I could have subbed my E6600 in there without overclocking and given the FX-62 a run for the money LOL! I'd have named it [H]ardStyle Gaming CPU Comparison with BFGTech 8800 GTX SLI . Or exchange Real World for High End.
 
I guess my initial expectations for the article were a bit off. My impression was that the point of the article was to remove the GPU bottleneck as much as possible, and then see the difference in performance between the FX-62 and the X6800. To me, that would mean getting some high-end card (a single 8800GTX would be plenty), running the game at some resolution the card can handle easily (1280x1024, for example), and then changing the CPU.

Aaaaanyways, out of curiosity, I started a poll about people's resolutions while gaming, and over 60% of the votes are for 1280xsomething and 1680x1050. Only 7 voters out of over 230 went for resolutions over 1920x1200.
 
I guess my initial expectations for the article were a bit off. My impression was that the point of the article was to remove the GPU bottleneck as much as possible, and then see the difference in performance between the FX-62 and the X6800. To me, that would mean getting some high-end card (a single 8800GTX would be plenty), running the game at some resolution the card can handle easily (1280x1024, for example), and then changing the CPU.

Aaaaanyways, out of curiosity, I started a poll about people's resolutions while gaming, and over 60% of the votes are for 1280xsomething and 1680x1050. Only 7 voters out of over 230 went for resolutions over 1920x1200.

8 now.
 
Quite simply, at anything less than 2560x1600, 8800 GTX SLI is spinning its wheels. We did not want to be bottlenecked on any front; we wanted the fastest system platforms and graphics platform to find out how both compare running at the highest resolution. It is a truly high-end gaming experience, and the comparison tells you a lot: it tells you that for gaming overall you'd rather have a Core 2 Duo right now. Things can change in the future, but for right now, it is what it is. As a gamer I know what I want in my system.
 
Wouldn't it be more useful to run the tests with resolutions and specs that some of us might actually be able to use? I doubt very many people here have a 3007FPW and two 8800GTXs. I think one 8800GTX and a range of resolutions like 1280x1024, 1600x1200 and then 2560x1600 would have been a lot more useful. This setup is so much more than any of us will ever have that I'm not sure whether the result will necessarily scale down to us.

2nd'ed.

And since I'm sort of in a minority (resolution-wise), at least until 1080p becomes more prevalent for TVs (it's already a standard), I might as well put in a request. I'd think that 1920x1080 should also be included if [H] is gonna go "sky's the limit"...at least it'd be more likely to be used by the masses - TVs with this spec will be abundant long before such specialty PC monitors come down in price.

<strictly opinion>
I think a 37" 1080p monitor trumps your higher res. 30".
Why?
Native 16:9 HD tv/dvd, not to mention gaming goodness! :p
...much less limiting overall. PC and TV use in a single appliance. Which is the way it's all headed anyways...but that's another argument altogether LOL
 
I think objectors need to understand two things:

First, this was not a traditional broad evaluation of either the GPU or the CPU. Both had already been done. This was a narrowly focused specialty review.

Second, removing bottlenecks and stressing systems to probe their strengths/weaknesses are opposing goals that have to be balanced somehow. This wasn't simply a CPU speed test. Again, that had already been done. It was more like a CPU torture test. Put the fastest available graphics system in the machine, then use that power to put the screws to the CPUs and find out which one cries uncle, because pushing the graphical settings puts more stress on the CPU too. This wasn't about generic benchmarking--it was about pushing the best systems available to their limits in every area, relative to gaming.

And let's face it--the test was partly designed in response to the complaints about the "real world" Core Duo vs. A64 article. Back then y'all complained that the graphical settings weren't extreme enough to allow the Duo to show its real superiority. Now you're complaining that this test is TOO graphically intensive and asking for a test that would basically be the same as the earlier one. Go to Home Depot, buy a ladder, and get over it. While there, go to the lightbulb aisle and buy a clue.:eek:
 
Quite simply, at anything less than 2560x1600, 8800 GTX SLI is spinning its wheels. We did not want to be bottlenecked on any front; we wanted the fastest system platforms and graphics platform to find out how both compare running at the highest resolution. It is a truly high-end gaming experience, and the comparison tells you a lot: it tells you that for gaming overall you'd rather have a Core 2 Duo right now. Things can change in the future, but for right now, it is what it is. As a gamer I know what I want in my system.

Agreed. This data is valid regardless of the test configuration. You would have seen the exact same results at 640x480/800x600 with a different video card, and I'd imagine you would have seen identical results at other resolutions with this hardware.

Even though the test configuration is super high end, it should tell everyone what they want to know. That is right now if you are building a gaming machine today, the Core 2 Duo is the best choice for that machine.
 
I'd think that 1920x1080 should also be included if [H] is gonna go "sky's the limit"...at least it'd be more likely to be used by the masses
So far...people have requested that [H] test at 640x480, 800x600, 1024x768, 1280x1024, 1600x1200, 1680x1050 and 1920x1080.

Get to work, Brent :)
 
So far...people have requested that [H] test at 640x480, 800x600, 1024x768, 1280x1024, 1600x1200, 1680x1050 and 1920x1080.

Get to work, Brent :)

Lol, good point. Guys, [H] only has the resources to test one category at a time. This time, they went for high-end, and you say they need to do real-world instead.

The ironic thing is, last time they did a C2D review, they went for real-world (a single video card, and 1600x1200 is pretty common), and you complained even more that they needed to do high-end ("They didn't use Crossfire or SLI!!!"). :p
 
So far...people have requested that [H] test at 640x480, 800x600, 1024x768, 1280x1024, 1600x1200, 1680x1050 and 1920x1080.

Get to work, Brent :)
Well, that would only take 6 months to test and write up. :eek:

I'd argue that anything below 1024x768 should be tossed (and I think even 1024x768 is optional). I can't think of an LCD that has a native resolution below 1280x1024, but I assume some older ones are probably 1024x768. And frankly, anyone who wants to play at 800x600 probably doesn't need a new CPU or a new GPU (I haven't played at that res in at least 5 or 6 years).

The rest of the tests would, IMO, be very useful. What's more, it'd be something you could build on every year. I know THG isn't popular on this site, but their GPU and CPU charts are pretty useful, IMO (though I'd like to see them keep all the older tests on the charts for at least 3 years, which they don't seem to do).

The ability for people who don't upgrade every 6 months to see how their card performs compared to some newer card can be very informative, and it's normally not something you see once a card is more than 12-18 months old.

Having that type of data combined with "real world" gaming tests would be very useful. Yes, it'd be a ton of work, but imagine the hits you'd get as people came to [H]ardOCP's definitive guide/charts.

You wouldn't have to do all combinations either. Once you got a combo that was GPU bound, there'd be no reason to go to higher resolutions with that card. If it was CPU bound, there'd be no reason to test faster cards on that combo.
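To illustrate that pruning idea, here is a small sketch (entirely my own, with made-up card names, resolutions, and results) of how a CPU-comparison test matrix could be trimmed along the two lines just described.

```cpp
// Sketch of pruning a CPU-comparison test matrix (hypothetical names and results).
// Two rules from the post above: (1) once a card is GPU-bound at some resolution,
// higher resolutions with that card tell us nothing new about the CPU; (2) once a
// resolution is CPU-bound on a given card, faster cards at that resolution repeat
// the same numbers, so skip them.
#include <cstdio>
#include <string>
#include <vector>

enum class Bound { CPU, GPU };

// Placeholder: a real harness would run the game benchmark and classify the result.
// This stand-in behaviour is assumed purely for illustration.
Bound run_benchmark(const std::string& card, const std::string& res) {
    if (card == "8800GTS" && res != "1280x1024") return Bound::GPU;
    return Bound::CPU;
}

int main() {
    // Cards ordered slowest -> fastest, resolutions lowest -> highest.
    std::vector<std::string> cards = {"8800GTS", "8800GTX", "8800GTX SLI"};
    std::vector<std::string> resolutions = {"1280x1024", "1600x1200", "2560x1600"};
    std::vector<bool> card_gpu_bound(cards.size(), false);

    for (const auto& res : resolutions) {
        for (size_t c = 0; c < cards.size(); ++c) {
            if (card_gpu_bound[c]) continue;  // rule 1: already GPU-bound at a lower res
            Bound b = run_benchmark(cards[c], res);
            std::printf("tested %-12s @ %-9s -> %s-bound\n", cards[c].c_str(),
                        res.c_str(), b == Bound::GPU ? "GPU" : "CPU");
            if (b == Bound::GPU) {
                card_gpu_bound[c] = true;  // rule 1: drop this card at higher resolutions
            } else {
                break;                     // rule 2: faster cards would repeat this result
            }
        }
    }
    return 0;
}
```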
 
Did you have AMD's dual core optimizer installed for the tests? I used to have that problem before I installed it. The other thing is that I've found the performance of NVIDIA cards doesn't really show until two or three driver updates have been released after the card's launch, or at least that's what I found with my 7950 GX2. I get smooth motion and decent FPS with the new drivers, but the previous drivers gave poor FPS and jerky movement. Please let me know via email, [email protected].
 
So [H] knocks others for unreliable reviews yet in their own when comparing two cpus, they do not even use the same in-game settings....

I really don't trust [H] reviews. It's not done correctly. It should be the same settings between all comparisons for a unified base.
 
I really don't understand [H] reviews.

Fixed it for ya...

Their entire reasoning for their review philosophy has been explained very well and very often. All you're really saying is that they do it differently. Yes, we know, and they have explained why. If you still don't get it, here's the Reader's Digest version: they vary settings as performance allows in order to give you a tangible, visible demonstration of the performance difference. Anyone can (and many sites do) run a benchmark and throw together a chart of frame rates, but the chart doesn't tell you whether the differences matter. Differences in game settings do matter, so they give you a hands-on reason why one product is better than the other. If you don't get this, you aren't paying attention.

Or is your attitude being dictated by the brand of CPU in your sig? Of course, the irony of that would be that if they had done the testing the way you suggest, the AMD CPU would have looked even worse.
 
So [H] knocks others for unreliable reviews yet in their own when comparing two cpus, they do not even use the same in-game settings....

I really don't trust [H] reviews. It's not done correctly. It should be the same settings between all comparisons for a unified base.

Then you missed the entire point of the review. It wasn't a review to let you know which CPU would give you the bigger e-wang.

If CPU A runs with higher settings, it's the better CPU.

Seems pretty damn straightforward. I wouldn't want all reviews to be like that, but I think the information is still useful.
 
So [H] knocks others for unreliable reviews yet in their own when comparing two cpus, they do not even use the same in-game settings....

Exactly. I don't care if a Core 2 gets 150 FPS when an FX-62 only gets 120 FPS, especially at 1024x768. I do care that the Core 2 can run at high AA levels.;)
 
Exactly. I don't care if a Core 2 gets 150 FPS when an FX-62 only gets 120 FPS, especially at 1024x768. I do care that the Core 2 can run at high AA levels.;)

There was a long-running debate about the Core 2 Duo being better than the AMD X2 and FX processors in games, and whether it was necessary to keep from bottlenecking the GeForce 8800 GTX GPUs, especially in an SLI configuration. Some people said the performance of the Core 2 Duo would make a big difference and others said that the difference was only something you'd see in benchmark testing. The article in question answers those questions and lets you know what the truth is. The answer is that the Core 2 Duo can provide a 19% performance increase in some instances with G80 SLI setups. Thus, the question and debate are now over. (Or should be.)

I think the article addressed a question people seemed concerned about and answered it nicely.
 
There was a long-running debate about the Core 2 Duo being better than the AMD X2 and FX processors in games, and whether it was necessary to keep from bottlenecking the GeForce 8800 GTX GPUs, especially in an SLI configuration. Some people said the performance of the Core 2 Duo would make a big difference and others said that the difference was only something you'd see in benchmark testing. The article in question answers those questions and lets you know what the truth is. The answer is that the Core 2 Duo can provide a 19% performance increase in some instances with G80 SLI setups. Thus, the question and debate are now over. (Or should be.)

I think the article addressed a question people seemed concerned about and answered it nicely.

QFT and well said!

That said, I suggested a follow-up, not changing the original. IMHO, a poll will show many more users running 1280 x 1024 (vs. the computer) or 1024 x 768 (online); next would be something like 1600 x 1200. That's why I'd suggested a poll: simply ask, "What screen resolution do you use the most?" I use 1280 x 1024 @ 85Hz with V-Sync enabled. Ask for the most common size CRT or LCD while you're at it.

I just turned 50 on Feb 1st and I'm losing my eyesight slowly but surely. I can't stand 1600 x 1200 on my 20" viewable CRT any longer.

On a side note and OT, I got my butt kicked in BF2 by some 61-year-old grandma LOL!
 
This article is solid, but like many others plaguing the web lately, it uses the term "bottlenecking" in a way I strongly object to. It's being badly misused in the system-design and programming sense, sorry guys. And the result is that every devoted follower of [H], Tomz, etc. is now running around insisting that anything less than a Core 2 "bottlenecks" 8800 GTX SLI.

I'm not going to get into a long debate/description here, but if a given CPU sees increasing gains going from an X1900 XTX to an 8800 GTX to 8800 GTX SLI, it simply cannot be said that the CPU is "bottlenecking" the GPU. You guys know full well how the 3D pipeline works. The GPU has work to do that it begins after the CPU finishes its work. The complete process from CPU to GPU determines the time it takes to render a frame. Obviously, total FPS cannot exceed the CPU's ability to do its work, but that does not mean that GPU horsepower driving the GPU's part of the processing cycle down is "wasted".

A bottleneck would be if the extra power of the GPU were going completely unused on a given CPU due to the CPU's inability to process the code quickly enough to hand off its part of the pipeline. The way 3D rendering works, this is actually *impossible*. The CPU does work, the GPU does work. A faster CPU does its work faster than a slower CPU; same for the GPU.

Of course the Core 2 is a faster CPU than the FX, so it will complete its work faster, leading to higher framerates, but talking in terms of "bottlenecks" in this context is just technically inaccurate. You can say it's nitpicking, but technical discussions, if nowhere else, are an area where "mere semantics" have real meaning.

Edit: To put it another way, the definition of "bottleneck" is when, within a given system, a specific component is preventing system performance from continuing to scale. In other words, a given component has introduced a performance ceiling. Anything can be a bottleneck. In the case of every game you showed, 8800GTX SLI brought new levels of performance to systems running AMD FX-62. That means that no component (not the CPU, not the RAM, not the I/O buses, not the game engine) is introducing a bottleneck. The fact that higher levels of performance could be attained with higher powered parts (or more efficient engines - or a more efficient bus) is irrelevant. Each system, from a system design standpoint, is evaluated only against itself if you are performance tuning and searching for a bottleneck.
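As a toy model of the serial CPU-then-GPU picture described above (my own simplification with invented numbers, not [H]'s data), treat frame time as CPU work plus GPU work per frame. Even the slower CPU keeps gaining FPS as GPU time shrinks, which is why scaling from one GTX to SLI on an FX-62 rules out a hard bottleneck; it only becomes one in the limiting case where GPU time approaches zero and the CPU term alone sets the ceiling.

```cpp
// Toy model of the poster's serial pipeline: frame_time = cpu_ms + gpu_ms.
// All numbers are invented for illustration; they are not measurements.
#include <cstdio>

double fps(double cpu_ms, double gpu_ms) { return 1000.0 / (cpu_ms + gpu_ms); }

int main() {
    const double slow_cpu_ms = 8.0;  // hypothetical FX-62-class CPU work per frame
    const double fast_cpu_ms = 6.0;  // hypothetical X6800-class CPU work per frame

    // Hypothetical GPU work per frame for one GTX vs. SLI at a high resolution.
    const double gtx_ms = 14.0, sli_ms = 8.0;

    std::printf("slow CPU: %5.1f FPS (GTX) -> %5.1f FPS (SLI)\n",
                fps(slow_cpu_ms, gtx_ms), fps(slow_cpu_ms, sli_ms));
    std::printf("fast CPU: %5.1f FPS (GTX) -> %5.1f FPS (SLI)\n",
                fps(fast_cpu_ms, gtx_ms), fps(fast_cpu_ms, sli_ms));

    // A true bottleneck in the poster's sense is the limiting case gpu_ms -> 0,
    // where FPS is capped purely by the CPU term.
    std::printf("CPU-only ceilings: %5.1f vs %5.1f FPS\n",
                fps(slow_cpu_ms, 0.0), fps(fast_cpu_ms, 0.0));
    return 0;
}
```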
 
If the article had been done in the form of a more traditional benchmarking review with a range of resolutions and graphical settings, I believe that you would have seen bottlenecking taking place--literally, framerate pegged at a maximum figure across a range of settings because the GPU wasn't being pushed hard enough and the CPU couldn't perform its work any faster. I have seen this result in benchmarking graphs on several occasions in recent years, and that matches the definition of bottlenecking you are referring to, so I don't think the term is as misused as you believe it to be.

While it is sometimes used to refer to a component holding back another component without completely preventing measurable gains, I don't think that this small amount of looseness with the term means that we don't understand the more technical, precise definition.
 
I didn't mean to imply that you guys don't know the specific definition, more that I'm noticing an increasing trend of misinformation getting spouted by people who don't have enough knowledge to really understand what they're reading.

Not that this is the fault of journalists (people should really build a background of knowledge before diving into technical topics), but this one is admittedly a pet peeve of mine.

Not a big deal - it was still a great article with some great data :cool:
 
I didn't mean to imply that you guys don't know the specific definition, more that I'm noticing an increasing trend of misinformation getting spouted by people who don't have enough knowledge to really understand what they're reading.

Not that this is the fault of journalists (people should really build a background of knowledge before diving into technical topics), but this one is admittedly a pet peeve of mine.

Not a big deal - it was still a great article with some great data :cool:

Cool. One place where I do see it misused frequently is in forum posts, especially those about upgrading AGP-based systems. The first line of attack for AGP-dissers is that there are no good, new AGP cards. Then, when (miracle of miracles) a good AGP card is released, they say it isn't worth the investment because "your outdated CPU will bottleneck it." They imply that the CPU is so feeble that the new GPU will produce no performance gains at all, and as you point out, that is rarely true, especially if one adjusts settings to play to the strengths of the GPU and get the most out of the investment. So if your main concern is misunderstanding by readers/end-users/forum posters, I think you're right on the money. I took your post to mean that the problem was misuse within the article content itself.
 
I didn't mean to imply that you guys don't know the specific definition, more that I'm noticing an increasing trend of misinformation getting spouted by people who don't have enough knowledge to really understand what they're reading.

Not that this is the fault of journalists (people should really build a background of knowledge before diving into technical topics), but this one is admittedly a pet peeve of mine.

Not a big deal - it was still a great article with some great data :cool:

Maybe there can be an 'absolute' bottleneck (no performance scaling) and a 'relative' bottleneck (reduced performance scaling)? Think about the actual word 'bottleneck' - the neck of a bottle. Liquids can still flow through the bottleneck, just at a lower rate.

In essence, isn't this what an FX-62 is compared to an X6800? It still generates frames, just at a lower rate than the X6800. I would consider it a relative bottleneck, because it's certainly limiting the framerate in certain games.
 
That's actually the point of my post. This would be a misinterpretation of the term. In the case of the FX-62, it would most definitely not be a bottleneck. The X6800 can achieve higher framerates than the FX-62 can when paired with an 8800 GTX, but then again, going from an 8800 GTS to an 8800 GTX to 8800 GTX SLI, I saw scaling on an FX-62. I'm sure the same would be true on an R600 (assuming it has more rendering power than the G80).

Therefore, in no sense of the specific definition of that term as it applies in computer science is any part here a "bottleneck". One system's ability to achieve higher performance due to higher-powered components does not imply a bottleneck in a lower-powered system. It's not good to redefine terms that have specific meanings; any technical discipline is pretty exacting when it comes to semantics. That's the nature of my pet peeve with this one.

Now if, at a given resolution/detail level/API revision/engine complexity/etc, the FX-62 reached the point where ANY GPU could easily match what it could prepare for handoff, THEN the FX-62 would be a true bottleneck.

Oddly enough, Tom's Hardware, in direct contrast to the amazingly irresponsible "you must buy Core 2" paid Intel ad they called "8800GTX NEEDS the highest power CPU!!!", now has a quiet little article on how older systems run with newer cards.

You'd be shocked how much benefit even an AMD64 3200+ still extracts from an X1950 Pro. Much of the 3D pipeline has shifted to the GPU. Most modern, high-powered CPUs (think AMD64, the later P4s, and of course Core 2) have more than enough horsepower to do their part of the 3D pipeline MUCH faster than the GPU can, unless you run at very low resolution.
 
http://www.guru3d.com/article/Videocards/416/20/

Correct, the majority of gamers out there are still on an LCD screen at 1280x1024. Fact is, though, that roughly 15% of resolutions are already at or above 1600x1200, and that number is growing rapidly. Go-go Guru3D enthusiast gamers! :)

Do the poll, get the most common setting(s), and then simply run that setting or those settings as real world.

It's like talking about "real world driving at 150 MPH." How many of us own cars that can go that fast, or have a place to drive that fast without going to jail? I love to game, but I use my computer for so much more.

Hell, I agree with Tom's Hardware; it would have been irresponsible to tell folks to keep on buying second best. It doesn't matter who's paying for the ads on that site or even on this site. You don't have to be paid by Intel to know or say that the X6800 kicks the crap out of an FX-62 that performs closer to a stock E6600, sheesh!
 
I still think you're "gaming" (benching) at too high a resolution/settings. If your minimum hits 0, 2, 18, 20, or 22 FPS (some samples from the review), or even dips under 30 FPS, you need to back down the resolution/game settings, IMHO.

I'd rather play at 1600x1200 with high AA/AF that's buttery smooth than at a higher resolution that gets choppy sometimes. Nothing spoils your fun more than a choppy section of the game causing you to die. Most games still look incredible even at 1280x1024 with max settings and high levels of AA/AF.

Personally I prefer to keep my minimum FPS around 60+! But I'm probably at the other end of the extreme.
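That "back down the settings until the minimum holds" rule is easy to express mechanically. Here is a hedged sketch (my own, with invented settings names and minimum-FPS numbers) of picking the highest settings level whose minimum frame rate stays above a personal floor.

```cpp
// Sketch of the "back down the settings until minimum FPS is acceptable" rule
// (hypothetical settings names and minimum-FPS samples, for illustration only).
#include <cstdio>
#include <string>
#include <vector>

struct Result { std::string settings; double min_fps; };

int main() {
    // Ordered from most to least demanding; minimums are invented sample data.
    std::vector<Result> results = {
        {"2560x1600 4xAA/16xAF", 22.0},
        {"1920x1200 4xAA/16xAF", 41.0},
        {"1600x1200 4xAA/16xAF", 63.0},
        {"1280x1024 4xAA/16xAF", 88.0},
    };
    const double floor_fps = 60.0;  // personal minimum, per the post above

    for (const Result& r : results) {
        if (r.min_fps >= floor_fps) {
            std::printf("play at %s (min %.0f FPS)\n", r.settings.c_str(), r.min_fps);
            return 0;
        }
        std::printf("skip %s (min %.0f FPS dips below %.0f)\n",
                    r.settings.c_str(), r.min_fps, floor_fps);
    }
    std::printf("no tested settings hold %.0f FPS; lower detail further\n", floor_fps);
    return 0;
}
```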


I've said this all along... it almost seems like [H] is trying to get people to buy 30" monitors! :)
 
I've said this all along... it almost seems like [H] is trying to get people to buy 30" monitors! :)

There is nothing wrong with 30" monitors. You should try one.









You know you want to. :D
 