Why is clock-for-clock a valid CPU comparison?

sizzlemeister

Limp Gawd
Joined
Jul 27, 2007
Messages
165
Someone needs to help me understand the current state of the "enthusiast" community.

And, just a preamble here: I'm not trying to cause an argument at all; rather I'm sincerely curious about the matter I'm about to bring up.

Why is it "enthusiasts" are content to compare products like video cards as a whole, regardless of the architecture, but then insist on comparing products like CPUs on a "clock-for-clock" basis?

The example, as borne out in these very forums, is that even the [H] editors will say something to the effect that "the architecture of the video card is not as important as how it performs". Which I completely agree with - the product, as a whole, at whatever speeds its processor(s) and memory run, should be measured against competing products as it comes from the factory - and, of course, by how well it performs when pushed beyond the factory spec.

Yet, a lot of CPU comparisons are done on a clock-for-clock basis. Which is not to suggest that [H] or any other site, or any of its forum members, STRICTLY relies on this comparison. However, the comparison is invariably included in reviews and is discussed and discussed and discussed.

Aren't the CPU architectures from AMD and Intel sufficiently different that a clock-for-clock comparison is fundamentally baseless? The factory spec on, say, an i7 920 includes the 2.66GHz operating frequency - and the spec on an X4 955 includes the 3.2GHz operating frequency. Why is a clock-for-clock comparison between these two valid?

I realize that you can overclock both CPUs, and I think a valid comparison in this context would be a comparison of performance after both CPUs' maximum reasonable overclocks are achieved - which is done, too, BTW, here and elsewhere - I'm not saying this isn't done.

All I am trying to get at here is I wonder why "enthusiasts" still use clock-for-clock to compare CPUs, when it seems most realize this is not really a sound comparison - and I point to video cards as a way to validate the observation that "enthusiasts" realize there is a point where specs/clocks comparisons do not make sense.
 
Video cards have completely different architectures so their clocks aren't really comparable at all.

CPUs, while different, can still be semi-compared based on their clocks. The "max reasonable OC" is tough to determine, so it can vary widely. In the case of the X4 955 and the i7, both can hit the 3.6-4GHz range most of the time. To me, clock-for-clock is a decent comparison because if you know the i7 is (not the real number) 20% faster clock-for-clock, then you can approximate how well the processors will do when you do your own OCing. So 3.6GHz on one CPU would be equal to 4.32GHz on the other CPU, for example.

There are plenty of sites on the web that pull the benchmarks at stock, so if the clock-for-clock isn't interesting, there's no shortage of places to look. I have seen OC benchmarks on other sites, but they're few and far between and often not very comprehensive at all.
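The extrapolation described above can be written out in a couple of lines. A minimal sketch - the 20% figure is the post's own hypothetical, not a measured number:

```python
# If chip A does 20% more work per clock, chip B needs a ~20% higher
# clock to match it. The 20% advantage is hypothetical, as in the post.
def equivalent_clock(base_clock_ghz, clock_for_clock_advantage):
    """Clock the slower-per-clock chip needs to match the faster one."""
    return base_clock_ghz * (1 + clock_for_clock_advantage)

print(equivalent_clock(3.6, 0.20))  # ~4.32: 3.6GHz on A matches ~4.32GHz on B
```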
 
I'm not interested in reviews, or that context, so much because as I noted (and you did as well) nearly all competent reviews do stock, OC'd, and clock-for-clock comparisons.

I'm interested in the "enthusiasts" and why it's still acceptable to compare CPUs clock for clock. I don't see how performance relates clock for clock from one manufacturer's product to another manufacturer's product.

Like, the bottom line on cars is how fast they do 0-60 mph. Does it matter that one uses a turbocharged four-cylinder and the other just a six-cylinder? Who measures performance in cars by the revolutions of its camshaft?
 
It's because the clock of a CPU can be varied widely, sometimes by 50% or more quite reasonably, whereas with video cards a reasonable overclock would be closer to 5%, with about 10-15% being the absolute max. Since the video card clock can't change so much, it makes sense to compare cards 'as is', especially with such huge differences in clocks between different cards (e.g. 750MHz core for the 4870 vs 576MHz core for the GTX 260). However, since CPUs can be overclocked so effectively, and because the maximum reasonable overclock for most CPUs is roughly similar, it makes sense - especially for those aiming for that 'maximum reasonable overclock' - to see a clock-for-clock comparison between chips to see which would give the best performance if they were all at that speed.
 
Clock-for-clock is a term used to compare how much work is done in a single clock cycle, not a comparison of speeds, so your analogy with the camshaft isn't really valid. We're not comparing the 4-cylinder hitting higher RPMs to match the 6-cylinder at lower RPMs; clock-for-clock is how much is being done per revolution.

So... if you have a 2.8GHz Core i7 and a 2.8GHz Phenom II, which one is faster clock-for-clock? We know the answer is the Core i7, because it does more work per clock cycle. With the cars, the 6-cylinder is the answer because it does more work per revolution.

Make sense?

So this is a valid thing to compare in the sense that if you've got two chips at the same clock speed, there isn't much questioning which one is going to do the 1/4 mile faster.
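What this boils down to is the standard identity: performance is work per clock (IPC) times clock rate. A minimal sketch - the IPC values below are invented purely to show the relationship, not real figures for either chip:

```python
# performance = IPC x clock. IPC values here are made up for
# illustration; they are not measured figures for any real CPU.
def throughput_gips(ipc, clock_ghz):
    """Billions of instructions retired per second."""
    return ipc * clock_ghz

chip_a = throughput_gips(ipc=1.5, clock_ghz=2.8)  # more work per cycle
chip_b = throughput_gips(ipc=1.2, clock_ghz=2.8)  # same clock, lower IPC
# At the same clock, the higher-IPC chip wins the "1/4 mile" every time.
```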
 
I realize that you can overclock both CPUs, and I think a valid comparison in this context would be a comparison of performance after both CPUs' maximum reasonable overclocks are achieved - which is done, too, BTW, here and elsewhere - I'm not saying this isn't done.
The fact is that current-gen architectures from both AMD and Intel run at similar speeds, and especially when overclocked, since they tend to top out at around the same frequencies (generally ~3.8GHz for Phenom II CPUs and ~4GHz for i7 CPUs). Because the maximum speeds are so similar, a clock for clock performance analysis will tell overclockers which CPU will be the best when pushed to its limit.

Of course, there are many reasons why stock-clocked comparisons are perfectly valid as well. Both approaches have their advantages and I personally think that they are both necessary. However, they cater to different crowds, and as far as overclocking goes, clock for clock comparisons are more useful.
 
When you have video cards from two different manufacturers, you can't compare clock-for-clock. Each manufacturer has its own instruction set, features, and capabilities. For example, the AA algorithms on an ATI card can yield very different results than the algorithms on the Nvidia card. The Nvidia card can have special optimizations that the ATI card doesn't (or vice versa). There are just way too many variables to be able to compare them clock for clock. Not to mention that we can actually SEE the differences between the cards.


With x86-class CPUs, they all have to do the same tasks. They have to execute the same instructions. And for the most part the feature set is the same (SSE, MMX, etc). So clock-for-clock is a valid way to see how efficiently each gets things done. The only measurement you get with a CPU is how fast it gets things done.


(NOTE: internally the CPUs may be very different, but that's shielded from us.)
 
Clock for clock is something that is easy to understand.
But it is much more complicated than that.

If you have a superfast core that can do a lot but has slow lines to memory, that CPU needs to wait on memory most of the time.
Compare a RISC vs. a CISC CPU: the RISC CPU will execute more instructions per clock than the CISC CPU, but it might need more instructions to accomplish the same task.
CPUs have stronger and weaker areas. It is possible to create software that fits one CPU better than another. If someone says this CPU is this fast at that clock, he/she should also point out some more facts in order to make sense of it.

CPUs today are mostly waiting for data/code from memory. The core is so fast that if it didn't need to wait, it would be a huge improvement in speed. Only a very small area of the chip actually does the work; most of the chip is used to feed the core with code and data.
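The memory-wait point above is often captured with the textbook "effective CPI" approximation. A hedged sketch - the base CPI, miss rate, and miss penalty below are all assumed numbers for illustration, not measurements of any real chip:

```python
# Textbook effective-CPI approximation: average cycles per instruction
# once cache-miss stalls are added in. All numbers here are invented.
def effective_cpi(base_cpi, miss_rate, miss_penalty_cycles):
    """base_cpi is the CPI with a perfect cache; miss stalls add on top."""
    return base_cpi + miss_rate * miss_penalty_cycles

# A "superfast" core (0.5 CPI) with a 2% miss rate and a 200-cycle
# trip to memory spends most of its cycles waiting:
stalled = effective_cpi(base_cpi=0.5, miss_rate=0.02, miss_penalty_cycles=200)
# 0.5 compute cycles + 4.0 stall cycles: the core is mostly idle,
# exactly as the post describes.
```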
 
Clock-for-clock is a term used to compare how much work is done in a single clock cycle, not a comparison of speeds, so your analogy with the camshaft isn't really valid. We're not comparing the 4-cylinder hitting higher RPMs to match the 6-cylinder at lower RPMs; clock-for-clock is how much is being done per revolution.

So... if you have a 2.8GHz Core i7 and a 2.8GHz Phenom II, which one is faster clock-for-clock? We know the answer is the Core i7, because it does more work per clock cycle. With the cars, the 6-cylinder is the answer because it does more work per revolution.

Make sense?

So this is a valid thing to compare in the sense that if you've got two chips at the same clock speed, there isn't much questioning which one is going to do the 1/4 mile faster.


I know what clock-for-clock measures, thanks. The car analogy ends at the engine technology.

It DOES NOT make sense that you're trying to force some sort of equivalency between an i7 860 and a PII X4 920. I think AMD would insist that the 920 isn't meant to go up against the 860. To me, it's way too simplistic to say "gee, one company's 2.8GHz should compete with another company's 2.8GHz" with no regard for differences in technology and architecture.

Hence the car analogy - what matters is the performance, regardless of what powers the car. You could be comparing a Hemi Six Pack to a turbocharged small-displacement 12-banger - what matters is the 0-60, not what technology each uses or how fast the parts inside the engine spin and/or pump.
 
Clock for clock is something that is easy to understand.
But it is much more complicated than that.

If you have a superfast core that can do a lot but has slow lines to memory, that CPU needs to wait on memory most of the time.
Compare a RISC vs. a CISC CPU: the RISC CPU will execute more instructions per clock than the CISC CPU, but it might need more instructions to accomplish the same task.
CPUs have stronger and weaker areas. It is possible to create software that fits one CPU better than another. If someone says this CPU is this fast at that clock, he/she should also point out some more facts in order to make sense of it.

CPUs today are mostly waiting for data/code from memory. The core is so fast that if it didn't need to wait, it would be a huge improvement in speed. Only a very small area of the chip actually does the work; most of the chip is used to feed the core with code and data.

RISC vs. CISC is an old and outdated argument that really no longer applies to today's CPUs.

And having to wait on data/code from memory also varies wildly depending on the instructions being executed. The L1 and L2 caches, prefetch, and branch prediction units pretty much keep the core fed without having to wait on memory... depending on the actual code being executed. Sometimes a stall in the core from badly optimized code can cause more delays than waiting on memory.
 
When you have video cards from two different manufacturers, you can't compare clock-for-clock. Each manufacturer has its own instruction set, features, and capabilities. For example, the AA algorithms on an ATI card can yield very different results than the algorithms on the Nvidia card. The Nvidia card can have special optimizations that the ATI card doesn't (or vice versa). There are just way too many variables to be able to compare them clock for clock. Not to mention that we can actually SEE the differences between the cards.


With x86-class CPUs, they all have to do the same tasks. They have to execute the same instructions. And for the most part the feature set is the same (SSE, MMX, etc). So clock-for-clock is a valid way to see how efficiently each gets things done. The only measurement you get with a CPU is how fast it gets things done.


(NOTE: internally the CPUs may be very different, but that's shielded from us.)

Are you saying video cards from ATI and Nvidia handle Direct X and Open GL uniquely, but CPUs from Intel and AMD handle x86 the same?

That doesn't make sense.
 
I know what clock-for-clock measures, thanks. The car analogy ends at the engine technology.

It DOES NOT make sense that you're trying to force some sort of equivalency between an i7 860 and a PII X4 920. I think AMD would insist that the 920 isn't meant to go up against the 860. To me, it's way too simplistic to say "gee, one company's 2.8GHz should compete with another company's 2.8GHz" with no regard for differences in technology and architecture.

Hence the car analogy - what matters is the performance, regardless of what powers the car. You could be comparing a Hemi Six Pack to a turbocharged small-displacement 12-banger - what matters is the 0-60, not what technology each uses or how fast the parts inside the engine spin and/or pump.

It's about making all things equal. You can't change how many cores there are, or the way the manufacturer implemented the architecture. So you take the one thing you can change and set it equal between the two. That way you can see that at 2.8GHz, X CPU is getting something done 25% faster than the Y CPU. Then you can take that result further when it's noted that X CPU is 60% cheaper than the Y CPU.
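To put rough numbers on the normalization described here - all scores, clocks, and prices below are placeholders, not benchmark results:

```python
# Normalize benchmark scores to a common clock, then compare. Every
# number here is a placeholder, not a real benchmark result.
def per_clock_score(score, clock_ghz):
    return score / clock_ghz

x = per_clock_score(125.0, 2.8)  # CPU X's score, both chips at 2.8GHz
y = per_clock_score(100.0, 2.8)  # CPU Y's score at the same clock
advantage = x / y - 1.0          # fractional clock-for-clock advantage
print(f"X is {advantage:.0%} faster clock-for-clock")
# From here you can fold in price: a 25% performance edge may or may
# not justify a large price premium.
```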
 
It's about making all things equal. You can't change how many cores there are, or the way the manufacturer implemented the architecture. So you take the one thing you can change and set it equal between the two. That way you can see that at 2.8GHz, X CPU is getting something done 25% faster than the Y CPU. Then you can take that result further when it's noted that X CPU is 60% cheaper than the Y CPU.

why so cryptic we all know that X cpu is from AMD and Y cpu is from intel.... :D
 
why so cryptic we all know that X cpu is from AMD and Y cpu is from intel.... :D
If X was AMD then it wouldn't be getting things done 25% faster than Intel. ryan_975 was just using arbitrary numbers to illustrate his point.
 
It's about making all things equal. You can't change how many cores there are, or the way the manufacturer implemented the architecture. So you take the one thing you can change and set it equal between the two. That way you can see that at 2.8GHz, X CPU is getting something done 25% faster than the Y CPU. Then you can take that result further when it's noted that X CPU is 60% cheaper than the Y CPU.

But that's arbitrary and meaningless. You are therefore determining for AMD how fast their CPU should work to process data, when in truth the stock speed is the speed AMD says a particular CPU should be running at to process whatever amount of data.

That's like trying to force a one inch thick slab of plastic into being as strong as a one inch thick slab of tempered steel - when the manufacturer of the plastic slab is trying to tell you it takes a three inch plastic slab to equal the one inch steel slab.

You guys are doing nothing for me here but to convince me further that clock for clock is apples and oranges.
 
yea op has a point. i think the argument was more valid back in the A64 days, when a 2.2GHz AMD was competing with 3.2+ GHz Intel. now the GHz race is gone and the speeds are more similar.
 
AMD and Intel CPU architectures aren't so different that they can't be compared on the basis of clock speed. Modern video cards, on the other hand, are designed around unified shader architectures: computing units are no longer bound specifically to pixel, vertex, or geometry processing, and Nvidia and AMD leverage their technologies differently to complete these calculations. Nvidia's GT200 architecture relies on fewer stream processors than AMD's R700, but the 800 stream processors of that architecture are bound clock-for-clock with the GPU.
 
There was this old example I found several years ago when HyperThreading came out and it was before dual core processors were released. It explained, in simple terms, what HyperThreading is for the Pentium 4 processor.

I'll try to replicate the example as best as possible, but you should get a general idea of this "clock-for-clock" for CPUs after I write it all down.

Let's say you have a program in which you want to add and subtract two numbers (e.g., 8 and 5).

AMD Processor (single core processor):
First Cycle - First thread
Start - Add 8 and 5 together
End - Returns 13.

Second Cycle - Second thread
Start - Subtract 5 from 8
End - Returns 3.
Intel Processor (single core processor with HyperThreading):
First Cycle - Instructions are split into two separate "threads"
Start -
- Thread one - Add 8 and 5 together
- Thread two - Subtract 5 from 8
End - Returns 13 and 3.
So, from the example above, in one clock cycle the AMD processor starts adding 8 and 5 together. By the time that first clock cycle ends, the Intel processor has already finished adding AND subtracting, while the AMD processor has only finished its first instruction thread. By the second clock cycle, the AMD processor is starting the second instruction thread to subtract 5 from 8, whereas the Intel processor is already waiting for the next set of instructions to process. In other words, you could probably say that the Intel processor with HyperThreading is two steps ahead of AMD.

Now, imagine this with current quad core processors and let's say the ideal program uses all 4 cores simultaneously with tasks split evenly to all four cores.

Let's say the program has 4 tasks it wants the processor to do with 2 instructions in each task:
First task - Add 4 and 5; Subtract 3 from 8
Second task - Add 2 and 1; Divide 10 by 2
Third task - Multiply 4 and 2; Divide 6 by 3
Fourth task - Subtract 4 from 10; Divide 8 by 2
And, let's say in a perfect environment, the program splits each task to each of the four cores evenly.

AMD Phenom II X4 Processor:
Cycle 1 starts:
First task, first instruction gets sent to Core 1:
-> 4 + 5
Second task, first instruction gets sent to Core 2:
-> 2 + 1
Third task, first instruction to Core 3:
-> 4 x 2
Fourth task, first instruction to Core 4:
-> 10 - 4
Cycle 1 ends:
Core 1 returns 9
Core 2 returns 3
Core 3 returns 8
Core 4 returns 6
Cycle 2 starts:
First task, second instruction gets sent to Core 1:
-> 8 - 3
Second task, second instruction gets sent to Core 2:
-> 10 / 2
Third task, second instruction to Core 3:
-> 6 / 3
Fourth task, second instruction to Core 4:
-> 8 / 2
Cycle 2 ends:
Core 1 returns 5
Core 2 returns 5
Core 3 returns 2
Core 4 returns 4
Now, let's do that for an Intel Core i7 Quad Core CPU with HyperThreading:
Cycle 1 starts:
First task, both instructions get sent to Core 1:
-> 4 + 5
-> 8 - 3
Second task, both instructions get sent to Core 2:
-> 2 + 1
-> 10 / 2
Third task, both instructions to Core 3:
-> 4 x 2
-> 6 / 3
Fourth task, both instructions to Core 4:
-> 10 - 4
-> 8 / 2
Cycle 1 ends:
Core 1 returns 9 and 5
Core 2 returns 3 and 5
Core 3 returns 8 and 2
Core 4 returns 6 and 4
As you can see above, in one CLOCK CYCLE, the Core i7 is doing 8 instructions simultaneously. The Phenom II X4 is still doing the first 4 instructions in the first clock cycle. It won't complete all tasks until the second clock cycle. By the time the second cycle begins, the Core i7 quad core is already waiting for the next tasks/next instructions to process.

Therefore, the Core i7 is about four steps ahead (if I counted correctly).

So, in a clock-for-clock comparison, the Core i7 is doing more than the Phenom II X4.
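The counting in the example can be sketched as code. This only mirrors the post's deliberately simplified model - real HyperThreading shares one core's resources between two threads and does NOT simply double per-cycle throughput:

```python
import math

# Sketch of the idealized counting used in the example above. Real
# SMT does not literally retire a whole task per cycle; this just
# reproduces the simplified model, not actual CPU behavior.
def cycles_needed(instructions, per_core_per_cycle, cores):
    """Cycles to retire all instructions at a fixed per-core rate."""
    return math.ceil(instructions / (per_core_per_cycle * cores))

phenom_cycles = cycles_needed(8, per_core_per_cycle=1, cores=4)  # 2 cycles
i7_cycles = cycles_needed(8, per_core_per_cycle=2, cores=4)      # 1 cycle
```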

However, this is all dependent on the application. In certain comparisons and tests, programs that are only single-threaded favored the AMD processor over the Intel processor. Those are the programs that haven't taken advantage of multiple cores/threads. Whereas with programs and applications that are multi-core/multi-thread aware and are programmed for it, the Intel processor outpaced the AMD processor.

You might ask why an AMD processor would be favored by a program that's only single-threaded, and my best guess is this:
The program is a single thread and thus goes to one core on the AMD processor and isn't split up between the four cores. Thus, the other three cores go unused.

The same program on an Intel CPU would be sent to one core, but the program, being only single-threaded, gets divided into two. Now, mind you, this is a rough guess. So you have one program that's not multi-threaded getting divided into two, and that's where it gets slightly (not significantly) slowed down.
This is different with the Core i7/i5 Socket 1156 quad core CPUs, as they have the ability to boost the speed of one core and lower the power/speed of the other three cores to process a single thread. So, if you look at reviews, the Core i7/i5 S1156 will in fact outpace the AMD CPU in those situations.

All-in-all, when you use clock-for-clock, the Intel CPU is doing more per clock cycle than the AMD CPU. If you go back to the Core 2 Quads and Core 2 Duos, and see certain models outpace and outperform the Phenom I X4 and older Athlon quad cores, that all lies with the CPU architecture and how they are built and designed.

Look at the schematics of both the Core 2 Quad/Duo and compare them to the Phenom I/9xxx series of AMD CPUs.

The Intel CPUs employ the following in each core:
- Larger L2 cache amounts
- More registers
- More ALUs
- More components to process a single instruction

Comparing this to the AMD CPU, I saw that each core had the following:
- Smaller L2 cache amounts
- Fewer registers - anywhere from 2/3 to 3/4 the amount Intel has
- Fewer ALUs
- Fewer components to process a single instruction

This could all be a result of the amount of R&D funding both companies have and what they can pay for to get a given component into their CPUs. If AMD had the resources and the money, they could hire the needed number of engineers and scientists to implement their own version of multi-threading in each core, like IBM has done with their Power6/Power7 CPUs.

I just think AMD doesn't have the money for that. And, if they did, the processors would probably cost a lot more, and AMD wouldn't be what they are now-- offering lower cost CPUs with performance equivalent to OR close to that of an Intel processor (sans higher-end i7 CPUs).

For the price you're paying for an i5 (socket 1156) or i7 (socket 1156 or 1366) CPU, you're paying for the researchers, engineers, and scientists that went into designing each of Intel's CPUs. More resources and more money mean more stuff they can come up with in their CPUs. It'd explain why they're more expensive, but within reason. If AMD didn't exist, Intel would probably have priced their CPUs much higher to get a greater return from each CPU and take advantage of consumers that way.

Therefore, you can think of it this way:
- Pay less, go AMD with good to very good performance
- Pay more, go Intel with greater performance

It's all about people's preferences and individual choices. In the end, if you go AMD, the processor will still do less per clock cycle than certain Intel CPUs.

(I'm sure someone will give a much better explanation than I have. Sorry for the length.)
 
I like to see clock-for-clock comparisons, because I use them to help choose which processor class to buy. Say I've determined I'm going to buy a Wolfdale-based chip. One can reasonably expect to clock them to 3.5-4.0GHz without substantial effort most of the time. The remaining variable is how much L2 cache you want on the chip. Since it is known the execution cores are the same, a clock-for-clock comparison shows you what you get out of each spending tier.
 
I just find it interesting to know. It doesn't necessarily influence my buying decision. Same reason I read about the specifics of each new graphics card architecture, even though at the end of the day it doesn't have a direct effect on me.
 
Some of you seem to be confusing the issue. I'm not asking what a clock-for-clock comparison is; I don't need it explained to me although I do appreciate your effort and participation in the thread.

It doesn't necessarily matter how efficient an Intel processor is versus an AMD (insert your favorite CPU vendors and models if you like - it doesn't change the issue at all), either, and let me propose a few semi-realistic hypotheticals:

What if AMD, with the resources, knowledge, and skill they have, decide that until they can figure out how to match and surpass Intel's per-clock throughput, they'll instead figure out a way to make their processors run between 4GHz and 5GHz with a reasonable TDP (say, 125 watts and under)? What good would a clock-for-clock comparison be then, when the difference between the Core iX and the PII families is not per-clock instructions but raw performance? Intel would have efficiency, but AMD would have brute power (or, if you like, speed).

Or, what if AMD gives us the ability to Crossfire CPUs just like we can with video cards now. With, say 95% scaling. Okay, so with two PII X4 955s that's 6.24 GHz on one mobo. What good would a clock-for-clock comparison do here?

Okay, coming back to Earth now, the fundamental principle is still the same: Intel has efficiency, and AMD is pushing speed - why would you bother considering a clock-for-clock comparison other than to say "why, yes, if I go with AMD, I'm selecting a processor spec'd to accomplish its goals using speed, and if I go with Intel, I have selected a processor which would accomplish its goals using efficiency"?

It's novel, but means nothing beyond that.

It's like [H] electing to go with the "overall experience" for video card reviews. Why? Because frame rates are not all that matter anymore. The cards are so fast, and the architectures so different, that frame rates do not tell the complete story - the video card experience is too sophisticated to be explained away with a simple "X gets 100 FPS in Cool Game, while Y gets 75 FPS in Cool Game".

I think we may be at that point now: the CPU experience is beyond clock-for-clock comparisons. But, honestly, I'm much less geeky than a lot of you, so perhaps I'm looking at this from too much of a consumer view, rather than as an "enthusiast".
 
Clock for clock isn't a good catch-all comparison, but it's good as part of one. When you are on a budget and an overclocker, looking at clock-for-clock helps because it can tell you where it might be best to spend your money to get the most bang for your buck. If two CPUs hit roughly the same OC and are within roughly the same price range after getting the other required components, it all comes down to how well each will perform at that OC. It's not really any more complicated than that. To the average consumer, clock-for-clock is essentially useless, as they're not likely to be looking at the OCing performance of these chips, but [H]'s reviews aren't just for the general consumer; they're for every member of the site, which is very much an enthusiast community.

PS: You can use more than one CPU in a system as long as you buy a board with multiple sockets for it. No "Crossfire" required.
 
Doing a clock-for-clock comparison is like comparing a small block Chevy and a small block Ford. Both engines can make an easy 400hp. But put these engines in a car, and one may be noticeably faster than the other, because engine power is not the only deciding factor in speed. The weight of the car, the type of transmission, and what gearing it has are all factors in how fast the car can be.

The same principle applies when comparing CPUs. An i7 at 3.2GHz benches noticeably faster than an AM3 chip at the same speed. But was the test as equal as possible? Was the same RAM, running the same speed and timings, used? Were the same graphics cards with identical settings used? All of these things will affect how the CPU performs.

A trick that we car guys use when looking at the power levels an engine makes is to pay attention to the area under the curve. That means disregarding the peak levels at high rpm and looking at what kind of power it makes between 2500-5000 rpm, not at 6500 rpm. This is where the engine spends most of its time while running, so this is what really counts.

In an indirect way, the same thing applies to CPUs. Peak frame rate levels are pointless once you pass 60 frames per second. If both CPUs can sustain a respectable, constant frame rate, who cares what the peak number is? And in regard to encoding, if the Intel does it in 60 seconds and the AMD does it in 90, is it a big deal? As long as both are still fast enough, the difference is inconsequential.
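The "area under the curve" idea maps naturally onto frame-rate data: summarize a run by its average and worst case rather than its peak. A minimal sketch - the FPS samples are invented for illustration:

```python
# Summarize a benchmark run by peak, average, and minimum frame rate.
# The sample numbers below are invented, not real benchmark data.
def summarize(fps_samples):
    """Return (peak, average, minimum) frame rates for a run."""
    return max(fps_samples), sum(fps_samples) / len(fps_samples), min(fps_samples)

peak, avg, low = summarize([45, 60, 62, 58, 90, 55, 61, 59])
# peak=90, avg=61.25, low=45: the 90 FPS peak says little when the
# lows dip to 45, which is what you'd actually feel in-game.
```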
 
PS: You can use more than one CPU in a system as long as you buy a board with multiple sockets for it. No "Crossfire" required.

If you could direct me to where I could find a dual AM3 mobo that's cost-effective and not spec'd solely to be a server I would REALLY appreciate it.

I would love to have dual PII 955s that can access more than 8GB DDR3 1600 ram reliably.

Like I say, I'm less geeky than a lot of you, but I hadn't seen this capability yet in the products at Newegg or ZZF.
 
i don't think clock for clock comparisons are out of bounds for this generation of both companies' cpus, because their base clocks are so similar, and their overclocks are fairly similar as well. in 2004 an amd cpu clocked at 2.4 ghz performed just as well as a 3.8 ghz intel cpu. that's an enormous difference in clock speed; roughly a 37% deficit for the same amount of work. between an i5 750 and a phenom ii 955 the clock speed delta is only 17%.
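A quick check of those clock-speed deltas, using the stock clocks of the era (A64 at 2.4GHz vs P4 at 3.8GHz; i5 750 at 2.66GHz vs Phenom II 955 at 3.2GHz):

```python
# Fractional clock-speed gap between the slower- and faster-clocked chip.
def clock_delta(slower_ghz, faster_ghz):
    return 1 - slower_ghz / faster_ghz

old_gap = clock_delta(2.4, 3.8)   # ~0.37 in the Athlon 64 / P4 era
new_gap = clock_delta(2.66, 3.2)  # ~0.17 for i5 750 vs Phenom II 955
```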
 
A trick that we car guys use when looking at the power levels an engine makes is to pay attention to the area under the curve. That means disregarding the peak levels at high rpm and looking at what kind of power it makes between 2500-5000 rpm, not at 6500 rpm. This is where the engine spends most of its time while running, so this is what really counts.

YES!

The overall experience counts for more than the marginal rates at the top and bottom.

Although I would have to argue your engine analogy a little - I don't believe we're measuring engines with the same type of output. But I fully appreciate the addition of the outside sources - is it deniable that, in the case of Intel versus everyone else, Intel has the advantage because software is optimized for their architecture(s)? I would add "a lot more" but come to think of it I have never heard of software being optimized for AMD CPUs.
 
If you could direct me to where I could find a dual AM3 mobo that's cost-effective and not spec'd solely to be a server I would REALLY appreciate it.

I would love to have dual PII 955s that can access more than 8GB DDR3 1600 ram reliably.

Like I say, I'm less geeky than a lot of you, but I hadn't seen this capability yet in the products at Newegg or ZZF.

Dual AM3 boards will come in time, but they'll be server boards. Still, if you're looking at a multi-GPU set-up you are generally in the realm of the enthusiast so looking at a multiple CPU system is in that range as well.
 
Dual AM3 boards will come in time, but they'll be server boards. Still, if you're looking at a multi-GPU set-up you are generally in the realm of the enthusiast so looking at a multiple CPU system is in that range as well.

Negative. Just because I need processor horsepower doesn't mean I spend my time thinking about where to put the Xpressar Micro-Refrigeration unit. I would much rather have multiple CPUs than GPUs, though.

Anyway, when the consumer dual AM3 CPU boards that work seamlessly with Win 7 Ultimate come out, please shoot me a PM. Remember, though, they have to be priced at the consumer, and not business, level. Thanks! :)
 
Are you saying video cards from ATI and Nvidia handle Direct X and Open GL uniquely, but CPUs from Intel and AMD handle x86 the same?

That doesn't make sense.

Actually, I'm pretty sure this happens. Different cards will render an image differently - just think about how, back in the day, ATI cards had better image quality. I think even now cards will render games differently. Two cards with different frame rates are rendering vastly different things.

CPUs will give the same output regardless of the maker. 2+2 on an x86 is always 4.
 
YES!

The overall experience counts for more than the marginal rates at the top and bottom.

Although I would have to argue your engine analogy a little - I don't believe we're measuring engines with the same type of output. But I fully appreciate the addition of the outside sources - is it deniable that, in the case of Intel versus everyone else, Intel has the advantage because software is optimized for their architecture(s)? I would add "a lot more" but come to think of it I have never heard of software being optimized for AMD CPUs.

The thing is, CPUs do their work at full speed. There is no power band, no area underneath the curve, no "Well, the AMD CPU is putting out most of its power between 1.8 and 2.0GHz, but the Intel doesn't really start shining until 2.8-3.0GHz." With engines you reach a point of diminishing returns; not with CPUs. If you can get it to clock faster, it WILL do more work. RPM cannot in any way relate to GHz.

In the days of the P4/PD, Intel had brute force. They were reaching 3.8GHz and would have gone to 5GHz had they been able to conquer their power problems. But the Athlon 64 was spanking its ass all over the place at much lower speeds, in some cases far below the P4/PD's clocks.

So for today's chips, clock-for-clock is where it's at. When the time comes that there's a serious imbalance between technologies (say VIA pops up out of nowhere and releases a technology 15% less efficient than Intel's Core but able to reach 7GHz while maintaining a 150W TDP), do you really think we'll still be sitting here saying "Gee, durr... Intel's tech is still more efficient... let's do a clock-for-clock comparison" when we know Intel can't touch the VIA clock speeds? However, if VIA's 7GHz chip is $1500 and Intel's 4.5GHz chip is only $500, where do you go? Do you need brute power, or do you need cool and efficient?

Right now, the chips are so similar in performance, and both techs can reach similar clocks, so why NOT compare them clock for clock? But you don't leave it at that; you then consider initial price and total cost of operation.

Again, video cards are a completely different ballgame. They have no standards to adhere to when making their hardware, except making sure their video card driver can compile D3D or OGL instructions into its own native instructions. We can literally see the difference between feature sets and implementations. We want to know how good that game looks while still being playable. You get a slow CPU, your games slow down, but, for the most part, it doesn't affect what you see on the screen (not talking about slide shows). You might take three days to transcode your library of movies, but you won't see that in the finished product. F@H results will come in at a much slower rate, but Stanford won't see different results from it.
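To put some rough numbers on that VIA-vs-Intel hypothetical above (all figures are made up, straight from the example: 15% less efficient per clock, 7GHz at $1500 vs 4.5GHz at $500):

```python
# Sketch of the hypothetical trade-off from the post above.
# All numbers are illustrative assumptions, not real benchmarks.
def effective_perf(clock_ghz, per_clock_factor):
    """Crude throughput estimate: clock speed x work done per clock."""
    return clock_ghz * per_clock_factor

via   = effective_perf(7.0, 0.85)  # 15% less efficient per clock
intel = effective_perf(4.5, 1.00)  # baseline efficiency

print(f"VIA effective:   {via:.2f}")            # 5.95 - wins on raw power
print(f"Intel effective: {intel:.2f}")          # 4.50
print(f"VIA per dollar:   {via / 1500:.5f}")    # ~0.00397
print(f"Intel per dollar: {intel / 500:.5f}")   # 0.00900 - wins per dollar
```

So even in the made-up scenario, which chip "wins" depends entirely on whether you're buying raw power or performance per dollar.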
 
Let's say I'm on a tight budget. I want to spend no more than $150 on a CPU, and want the highest performance possible.

CPU X can overclock to 3.6GHz, CPU Y can overclock to 3.2 GHz, they both cost $150, and the cost and effect on performance of all other components in the PC are equal. Now, if CPU X only provides 80% of the performance that CPU Y does per clock cycle, how do I tell which one is the better buy?

See what I'm saying?
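The arithmetic behind that example is simple enough to sketch out (the clocks, prices, and the 80% per-clock figure are the hypotheticals from the post, not real chips):

```python
# Budget comparison from the post above: two $150 CPUs, all other
# components equal. Numbers are hypothetical.
def effective_perf(clock_ghz, per_clock_factor):
    """Crude throughput estimate: clock speed x work done per clock."""
    return clock_ghz * per_clock_factor

cpu_x = effective_perf(3.6, 0.8)  # X clocks higher, does 80% per cycle
cpu_y = effective_perf(3.2, 1.0)  # Y clocks lower, is the baseline

print(f"CPU X: {cpu_x:.2f}")  # 2.88
print(f"CPU Y: {cpu_y:.2f}")  # 3.20
```

At equal price, CPU Y comes out ahead despite the lower clock, which is exactly why the per-clock number matters when overclocking headroom differs.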
 
Negative. Just because I need processor horsepower, doesn't mean I spend my time thinking about where to put the Xpressar Mico-Refrigeration unit. I would much rather have multiple CPUs than GPUs, though.

Anyway, when the consumer dual AM3 CPU boards that work seamlessly with Win 7 Ultimate come out, please shoot me a PM. Remember, though, they have to be priced at the consumer, and not business, level. Thanks! :)

I love my dual GPU set-up, but I agree I'd rather have a dual CPU system over dealing with the hangups of SLI or Crossfire, assuming the performance boost was worth it. I'll keep an eye out for consumer price level dual boards, but it probably won't be for a while.
 
Clock for clock is a SIMPLE and EASY way to compare how efficient different CPU architectures are. We all know that an i7 is faster than a PhII, but what about an A64 vs a C2D? What clock would the A64 need to be at to match the performance of a C2D? That's what clock for clock is good at determining.

(I know the comparison is bad, but that's besides the point)
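That "what clock would it need to be" question is a one-line calculation once you have a clock-for-clock result. A quick sketch, using a made-up 80% relative per-clock figure (not a real A64/C2D benchmark):

```python
# Hypothetical: architecture B does 80% as much work per clock as
# architecture A. What clock does B need to match A?
def matching_clock(target_clock_ghz, relative_per_clock):
    """Clock the less-efficient chip needs to match the target chip."""
    return target_clock_ghz / relative_per_clock

# To match a 2.4 GHz chip, an 80%-per-clock design needs:
print(f"{matching_clock(2.4, 0.8):.1f} GHz")  # 3.0 GHz
```

That's the practical payoff of a clock-for-clock test: it converts an architecture comparison into a clock-speed target you can actually shop for.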
 
Not to mention AMD CPUs can have 45W dual and (soon) quad cores!

Intel's 65W is a rough estimate. It's designed to go a little over that before SpeedStep brings it down (regardless of settings), and the fan goes to cool it.

But Intel is quite a bit faster than the equivalent AMD, so for enthusiasts it may be alright.
 
Not to mention AMD CPUs can have 45W dual and (soon) quad cores!

Intel's 65W is a rough estimate. It's designed to go a little over that before SpeedStep brings it down (regardless of settings), and the fan goes to cool it.

But Intel is quite a bit faster than the equivalent AMD, so for enthusiasts it may be alright.


You might want to look a bit more into AMD's ratings. http://www.formortals.com/the-facts-about-amds-acp-power-rating/


Also, SpeedStep has nothing to do with max wattage (temperature seems to be what you're thinking of). Its purpose is to reduce the CPU's power usage when it is NOT being pushed.
 
This is why I put "enthusiasts" within quotation marks. Not all "enthusiasts" are interested in a system that has overclocking potential, which renders that particular justification irrelevant.
 
This is why I put "enthusiasts" within quotation marks. Not all "enthusiasts" are interested in a system that has overclocking potential, which renders that particular justification irrelevant.

Well, heck, if you aren't going to bother overclocking, then no, clock-for-clock comparisons don't mean anything for you, just like they don't mean anything for the average Joe buying an off-the-shelf PC at Walmart. Why do you need someone to explain this to you? I certainly wouldn't call someone who doesn't overclock and doesn't do clock-for-clock comparisons between processors an enthusiast.
 