Deneb 2.2 GHz --> 4.0 GHz

Yet again the argument that the human eye can't see much over 30fps... well, I for one can tell the difference between 60 and 85fps, or 120 if a CRT monitor is blessed with that high a refresh rate.
You can't if frames are evenly displayed.
If each frame is shown for exactly 1/30 of a second, then 30 FPS will look very smooth. The problem is that in practice frames aren't displayed evenly: depending on how the game is developed, caches, delays etc., frames aren't produced as evenly as you might think. More frames could in fact feel slower if the CPU works erratically. It is the CPU that decides the smoothness, because it decides the elapsed time between frames and how much movement should be applied to the different parts of the scene.

If the game runs at 300 FPS but calculates movement badly, that game will not feel smooth.
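
As a minimal sketch of what that means in code (all numbers made up): games typically scale movement by the measured time since the last frame, so jittery frame times produce jittery motion even when the average FPS is high.

```python
# Minimal sketch (hypothetical numbers): games usually advance movement
# by SPEED * dt, where dt is the measured time since the last frame.
# Jittery frame times therefore mean jittery motion, regardless of the
# average FPS.

SPEED = 100.0  # units the object should move per second

def step_sizes(frame_times_ms):
    """Movement applied on each frame, given measured frame times."""
    return [SPEED * (dt / 1000.0) for dt in frame_times_ms]

even = [10, 10, 10, 10, 10, 10]   # evenly paced: 10 ms/frame = 100 FPS
uneven = [2, 18, 2, 18, 2, 18]    # same average FPS, badly paced

print(step_sizes(even))    # [1.0, 1.0, ...] -> equal steps, smooth
print(step_sizes(uneven))  # [0.2, 1.8, ...] -> alternating jumps, stutter
```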
 
4GHz sounds like an awesome improvement over my x2-5200+ ! Sign me up for one of those bad boys. :D
 
You can't if frames are evenly displayed.

No... Optical ganglia can operate in the hundreds of hertz... You might not notice a change in the rate of information between 30fps and 60fps, but you will (I promise) feel it. Consider how fast we blink when something comes at us... which occurs in about 18ms... You will react to the stimulus before you can rationalize its nature and level of threat... Considering that this complex reaction involves generating movement in response to a stimulus, the reaction portion probably resides around 11ms...

Humans can *feel* differences in fps, even if the game isn't hitching or displaying frames unevenly... While you're absolutely right that a game displaying frames at exactly 33.3ms each (an evenly spaced 1/30 frame) will feel much smoother than frames displayed at an AVERAGE of 33.3ms, the game will still feel smoother, or be less fatiguing, if the frames are displayed at exactly 16.6ms (an evenly spaced 1/60 frame).
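
Putting the numbers from these posts side by side (just arithmetic on the figures already mentioned here):

```python
# Frame period at common refresh rates, next to the ~11-18 ms reflex
# window mentioned above. Pure arithmetic on the numbers in this thread.
for hz in (30, 60, 85, 120):
    print(f"{hz:>3} Hz -> {1000 / hz:.1f} ms per frame")
# 30 Hz -> 33.3 ms, 60 Hz -> 16.7 ms, 85 Hz -> 11.8 ms, 120 Hz -> 8.3 ms
# Only above ~60 Hz does the frame period drop into that reflex window.
```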

The real thing that made me laugh in this thread is the assertion that Deneb and the i7 core will be ANY DIFFERENT in any real-world gaming scenario. In absolute synthetics, Deneb does appear to win out over the i7, but only in a situation that nobody on the freaking planet will utilize. Kinda like Koolance's radiator testing where 100°C water was pumped through a radiator with around 300cfm of air moving through it. :rolleyes:
 
On the last page of the Coolaler forum post they have some results against the regular Phenom; it looks like the scaling issue has been fixed.
 
You might not notice a change in the rate of information between 30fps and 60fps, but you will (I promise) feel it.

You should read about how games are made, then you will understand. It isn't like a movie (do you know the fps of movies? ;) ).
Just look at the movements of humans in games; you see immediately that they are computer animated. Why? Because the calculation of how the human moves doesn't match the real world.

One good thing about Nehalem and games is that it has removed the FSB bottleneck; it is built to have fewer bottlenecks, and that goes for AMD too. You can't see this by checking FPS values, but the time spans between frames will be more even.
 
No, Flippin's point is very valid. For years now the Phenom has been subjected to very specific tests demonstrating how inferior it is, when in fact that's mostly irrelevant. This whole "shift in thinking" for the i7 should've happened years ago when AMD started it.

Agreed. It does seem like there's been the whole "fit the benchmark" thing going on for quite a while. Maybe it's just the bandwagon mentality figuring into the interpretations of the reviews. The "buzz" is that one processor is faster in games, so people look at results that reinforce that idea. Pretty soon, it's heretical to even mention something different. Now as to who's getting paid what and by whom to create that buzz (I'm sure both sides do it)... you'll never hear about that.

Of course I bought into it and got a QX so... :confused:

Edit: If this AMD 45nm turns out to be any good I could go back to selling AMD machines. Especially if the price is right.
 
Why are you intel fanboys so obsessed with how well Nehalem transcodes video, when I can transcode an hour and a half DIVX to h.264 with ATI XCODE in about 10-20 minutes?

CUDA and XCODE will leave NVIDIA and ATI at a position where Intel will *NEVER* be able to *touch* them for parallel tasks. Look at Photoshop CS4. It uses DX9 and OpenGL, not the CPU, and we're seeing HUGE performance gains. Gains that Nehalem could NEVER even fathom.

So, your GPU is sitting idle when you're in photoshop, and you're going to depend on a processor for it?

Nehalem is a dead end, NVIDIA and ATI will destroy nehalem for parallel tasks, which is exactly what nehalem is meant to do. It's going to be a failure if CUDA takes off.

So you fanboys need to stop bragging about how it's going to be awesome in multi-threaded tasks. The x86 CPU is by far not the most efficient architecture for such things.
 
BUT, the CPU is not that important for games; when developers create a game they can control the workload on the processor. If the developer designs the game to run on an X2 5000+, that processor will always be enough. You will not get better gameplay using a faster processor.
That's the whole fucking point. The CPU does not matter since games are limited by the GPU. Therefore, how could Deneb possibly be better for gaming than Nehalem? The answer is simply that it isn't. Therefore, the argument that Deneb is a better gaming CPU is a total crock.
Why are you intel fanboys so obsessed with how well Nehalem transcodes video, when I can transcode an hour and a half DIVX to h.264 with ATI XCODE in about 10-20 minutes?

CUDA and XCODE will leave NVIDIA and ATI at a position where Intel will *NEVER* be able to *touch* them for parallel tasks. Look at Photoshop CS4. It uses DX9 and OpenGL, not the CPU, and we're seeing HUGE performance gains. Gains that Nehalem could NEVER even fathom.

So, your GPU is sitting idle when you're in photoshop, and you're going to depend on a processor for it?

Nehalem is a dead end, NVIDIA and ATI will destroy nehalem for parallel tasks, which is exactly what nehalem is meant to do. It's going to be a failure if CUDA takes off.

So you fanboys need to stop bragging about how it's going to be awesome in multi-threaded tasks. The x86 CPU is by far not the most efficient architecture for such things.

GPUs are not nearly as general-purpose as CPUs are. By your logic, we could just replace our CPUs with GPUs and use those instead, and they would be faster. The fact is, we still need CPUs, and for such tasks as they are used for, Nehalem is faster, plain and simple.
 
That's the whole fucking point. The CPU does not matter since games are limited by the GPU. Therefore, how could Deneb possibly be better for gaming than Nehalem? The answer is simply that it isn't. Therefore, the argument that Deneb is a better gaming CPU is a total crock.


GPUs are not nearly as general-purpose as CPUs are. By your logic, we could just replace our CPUs with GPUs and use those instead, and they would be faster. The fact is, we still need CPUs, and for such tasks as they are used for, Nehalem is faster, plain and simple.

You heard it here first folks, nehalem pre-won.
 
You heard it here first folks, nehalem pre-won.

Based on the benchmarks that are floating around the net, yes, Nehalem "pre-won". When Deneb can't even reach Yorkfield levels of performance clock-for-clock, I don't know what else one would expect.
 
That's the whole fucking point. The CPU does not matter since games are limited by the GPU. Therefore, how could Deneb possibly be better for gaming than Nehalem? The answer is simply that it isn't. Therefore, the argument that Deneb is a better gaming CPU is a total crock.

AMD is much cheaper, and the differences in performance aren't noticed by the user. Why pay more than you need to?
 
AMD is much cheaper, and the differences in performance aren't noticed by the user. Why pay more than you need to?

Because computers do other things besides gaming too, and Intel CPUs are faster at those.
 
Because computers do other things besides gaming too, and Intel CPUs are faster at those.
And maybe Nehalem will be faster running VMware Workstation too. I have tried three different Intel computers running VMware and it is very slow. Much faster on AMD; even an X2 4400+ is faster. Phenom = no competition
 
I have tried three different Intel computers running VMware and it is very slow. Much faster on AMD; even an X2 4400+ is faster. Phenom = no competition
It depends on the Intel CPU. Intel disables VT on many models, not limited to just low-end CPUs. AMD leaves it enabled across the board, which is good. My BE-2400 server runs great loaded up with memory and a couple of VMs.

I run VMware all the time at work on my dual-core Xeon (Woodcrest) workstation. Performance is as good as on any other CPU I have run it on, including X2 models (my Phenom is a B2 and unstable with those loads unpatched, and way too slow patched). AMD does have a slight advantage in 4S/16P VMware performance (~10%), but the difference is pretty insignificant in 1S (2P/4P) systems. http://www.vmware.com/products/vmmark/results.html Nehalem VMware performance hasn't been released yet, but it wouldn't be surprising for either Nehalem or Deneb to show further improvements. IOW, don't celebrate yet.
 
Oww, this thread hurts my head.

I say we wait for i7 and Deneb to come out.

Will one be faster than the other? Of course, it's new tech etc etc blah blah. Let's just wait and see. I mean, FFS, Deneb is being reviewed soon, and so is the i7.

All we know is, both procs seem to be good, and it should make for an interesting 4th quarter :).

New TECHNOLOGY ftw
 
Good for it. I'll buy it whether it's 10%, 20%, or 30% behind. So to be clear, and to cast no doubts as to where I stand, I have a hard enough time bringing myself to type the word Intel, let alone use anything made by them. Like I've said in the past, I'll find a new hobby before I spend a cent on anything made by Intel. So in fact I won't be disappointed if "Deneb barely catches up to Kentsfield".

I appreciate your honesty and dedication.
 
EVERY bit of good news related to AMD has to turn into an Intel vs AMD thread.

Honestly, Deneb is looking better than expected, and even the worst lot of the Intel fanboys have to admit that. Also, it has been noted over and over and over again that in gaming, above 3GHz (EVERY recent Phenom 9950 overclocks beyond this speed), be it a Phenom or a QX, at higher resolutions it never mattered, but the suckass reviews all over the net still had to bench at 640x480 and 800x600 to show how much better Yorkfields and Kentsfields were in gaming compared to Agena. Absolute bullshit. Now I'm sure more and more reviews will pop up comparing the processors at higher resolutions and concluding that Nehalem wins by 1fps, so it doesn't matter. Utter rubbish; this should have been done ages back. This is the point flippin_waffles was trying to make, I guess.

Can we please stick to the topic? Deneb is looking good, and there are a few of us AMD guys with SB750 boards hoping that Deneb will not disappoint. And if it does well, it's all good; even for the Intel guys it's good, since it drives their prices down.
 
Either way, since Deneb is only at Yorkfield performance levels, I wonder why that is. Architecture differences? Or because Deneb uses 45nm SOI while Intel is using 45nm HK/MG?

45nm and HK/MG have zero to do with IPC. It's completely architecture dependent.
 
All the Intel zealots should be cheering for this processor and hoping it gives the latest Intel chips a run for their money. It will drive down prices across the board and encourage more innovation in the market.
 
Honestly, Deneb is looking better than expected, and even the worst lot of the Intel fanboys have to admit that. Also, it has been noted over and over and over again that in gaming, above 3GHz (EVERY recent Phenom 9950 overclocks beyond this speed), be it a Phenom or a QX, at higher resolutions it never mattered, but the suckass reviews all over the net still had to bench at 640x480 and 800x600 to show how much better Yorkfields and Kentsfields were in gaming compared to Agena. Absolute bullshit. Now I'm sure more and more reviews will pop up comparing the processors at higher resolutions and concluding that Nehalem wins by 1fps, so it doesn't matter. Utter rubbish; this should have been done ages back. This is the point flippin_waffles was trying to make, I guess.

Thankfully, not all reviews test gaming performance at low res: http://www.legionhardware.com/document.php?id=770

CPU speed still matters, at least with a high end GPU setup. So in such cases, it is quite relevant, and if Deneb can hit 4GHz consistently without excessive overvolting then all the better.
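
To illustrate why the test resolution matters so much, here's a toy model (all timings invented): each frame takes roughly max(CPU time, GPU time), and the GPU's share grows with pixel count.

```python
# Toy bottleneck model with invented timings: frame time is roughly
# max(CPU time, GPU time), and the GPU's share grows with resolution.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms_at = {"640x480": 2.0, "1600x1200": 12.5, "2560x1600": 25.0}

for res, gpu_ms in gpu_ms_at.items():
    fast = fps(cpu_ms=5.0, gpu_ms=gpu_ms)  # hypothetical faster CPU
    slow = fps(cpu_ms=8.0, gpu_ms=gpu_ms)  # hypothetical slower CPU
    print(f"{res:>9}: fast CPU {fast:.0f} fps, slow CPU {slow:.0f} fps")

# 640x480   -> 200 vs 125 fps: CPU-bound, the gap shows.
# 1600x1200 ->  80 vs  80 fps: GPU-bound, the gap vanishes.
```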
 
Thankfully, not all reviews test gaming performance at low res: http://www.legionhardware.com/document.php?id=770

CPU speed still matters, at least with a high end GPU setup. So in such cases, it is quite relevant, and if Deneb can hit 4GHz consistently without excessive overvolting then all the better.
Here is another from that site
http://www.legionhardware.com/document.php?id=775&p=5

There are some problems with that review: the tests run for 10 minutes and show an average. Most time in games isn't spent on action parts; you may be sneaking up on enemies, etc. In those parts the processor doesn't need to work hard and the video card can produce loads of frames, and those frames pull the average fps up. The parts of the game where there is action, and where you actually need performance, won't show up in a 10-minute average.
Another interesting thing about that first review is that there is hardly any difference in gameplay experience between buying an expensive C2Q and a cheap X2.

When games start to raise the minimum CPU requirement to quads, then you may need to think a bit about what CPU to buy if you want to play the most demanding games.

Another review: http://www.guru3d.com/article/amd-phenom-x4-9950-be-processor-tested/10

There they have used a slow video card, which cuts off the high fps values, so the low fps values get more influence on the total fps score. Low fps is what counts.
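
A quick sketch of that averaging problem (the frame times below are invented): long easy stretches inflate the average, while the slow action frames barely move it.

```python
# Invented frame-time trace: a long quiet stretch plus a short firefight.
quiet = [5.0] * 1000    # easy scenes: 5 ms/frame (200 fps)
action = [40.0] * 50    # action: 40 ms/frame (25 fps)
frames = quiet + action

avg_fps = 1000.0 / (sum(frames) / len(frames))
slowest = sorted(frames)[-(len(frames) // 100):]   # worst 1% of frames
low_fps = 1000.0 / (sum(slowest) / len(slowest))

print(f"average fps: {avg_fps:.0f}")   # ~150 fps -- looks great
print(f"1% low fps:  {low_fps:.0f}")   # 25 fps -- what you actually feel
```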
 
Another review: http://www.guru3d.com/article/amd-phenom-x4-9950-be-processor-tested/10

There they have used a slow video card, which cuts off the high fps values, so the low fps values get more influence on the total fps score. Low fps is what counts.

Notice how the scores are almost identical at 1600x1200? Yeah. Considering a 9800XT (that's the ATI card, AKA R360 - not nVidia's G92) can run CoD4 smoothly at 1024x768, resolutions at that level are hardly a good metric of performance.

Not only that, but they're comparing an old dual-core X6800 against AMD's top-of-the-line quads. Plus, in the other two games they tested, the scores were identical. So that link doesn't actually prove anything, aside from the fact that games are GPU-limited at any reasonably high resolution - which is what I've been saying all along.
 
I hope this isn't true, because we need Intel to keep making good CPUs, not AMD's 140W+ Phenoms.
 
It's using 1.72v though, so I wouldn't really take that into consideration as it's hardly a permanently feasible overclock (although the Deneb is running at 1.6v, so that isn't exactly 24/7-sustainable either).

Personally I find all this overclocking talk about unreleased CPUs nonsense... Engineering samples are rarely a good indication of the overclocking potential of production CPUs.
But if you're going to talk about an overclocked Deneb and compare to Nehalem, then the least you can do is take an overclocked Nehalem into account.

I'd rather wait and see the final products before I draw any conclusions on which is the better overclocker, or which is the better gaming CPU etc.
 
I haven't even read this thread, but I feel very confident in saying... Kassler, you're wrong.
 
No, it is a good time since they are using 1.72V to get there :D

1.72V to get a 3.2GHz CPU to 4.2GHz compared to 1.6V to get a 2.2GHz CPU to 3.975GHz. :p

Considering we know nothing about any other factors, I can't really read any significance into the voltage.
I mean, if you were to put 1.72v on the Deneb, would it get to 4.2 GHz as well?
Or if you downclocked the Nehalem to 3.975 GHz, would you be able to run it at 1.6v?
And what is the significance of the voltage anyway, if you don't know how much power it draws? It's entirely possible for a CPU to use less power and run cooler despite a slightly higher voltage.
And then we haven't even looked at performance yet. Even if the Nehalem would draw more power, it could still have better performance-per-watt, if it had enough performance to go along with it. In fact, it could even be possible that the Nehalem at the stock speed of 3.2 GHz (and stock voltage) already delivers the same performance as the Deneb at 4 GHz.

But you seem to have made your mind up already, good for you.
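
To put the voltage point in perspective: to first order, dynamic power scales as P ≈ C·V²·f, and the effective switched capacitance C differs per design. Here's that arithmetic with completely made-up C values:

```python
# First-order dynamic power: P ~ C * V^2 * f. The capacitance values
# below are pure assumptions, just to show voltage alone proves nothing.

def rel_power(c, volts, ghz):
    return c * volts**2 * ghz

nehalem = rel_power(c=0.9, volts=1.72, ghz=4.2)    # assumed C = 0.9
deneb = rel_power(c=1.2, volts=1.60, ghz=3.975)    # assumed C = 1.2

print(f"Nehalem (relative): {nehalem:.1f}")  # ~11.2
print(f"Deneb   (relative): {deneb:.1f}")    # ~12.2
# With these made-up capacitances, the chip at the higher voltage
# draws *less* power -- you can't conclude anything from volts alone.
```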
 
Notice how the scores are almost identical at 1600x1200? Yeah. Considering a 9800XT (that's the ATI card, AKA R360 - not nVidia's G92) can run CoD4 smoothly at 1024x768, resolutions at that level are hardly a good metric of performance.
Testing the lowest FPS you will get very similar results; if that type of result differed a lot, it would mean one of the games could be unplayable. Some frames in a game may be delayed: it could be that data isn't found in the cache, or that something else is going on that stalls the processor for one single frame.
I did some more explaining here (I have been reading up on game programming):
http://www.hardforum.com/showpost.php?p=1033152673&postcount=38
 
Testing the lowest FPS you will get very similar results; if that type of result differed a lot, it would mean one of the games could be unplayable. Some frames in a game may be delayed: it could be that data isn't found in the cache, or that something else is going on that stalls the processor for one single frame.
I did some more explaining here (I have been reading up on game programming):
http://www.hardforum.com/showpost.php?p=1033152673&postcount=38

I think you are confusing the order of magnitude of some things here.
Cache misses occur at the lowest level of code: instruction level.
In order to render a complete frame, you have millions of instructions (and possibly millions of cache accesses).
Basically you will never have 'some' frames delayed because of cache issues.
After the initial frame is rendered, the cache is pretty much 'settled' for all similar frames. So on average you'll get about the same amount of cache hits and misses on all frames, and you won't see any significant 'delay' from one frame to the next.

You need something a lot bigger than just a cache hit or miss to actually delay a frame. Like for example a texture that needs to be uploaded to video memory, or some shaders that need to be compiled/initialized for the first time.
But cache? Nah, it doesn't affect performance at a level as high as a single frame.
That's like saying "The fuel/air mixture in an engine is not entirely constant, so some combustions will be more powerful than others, therefore your car will sometimes slow down".
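
For a sense of scale (latency and miss-count figures are rough, typical-for-the-era assumptions): even an unusually bad burst of cache misses is small next to one frame.

```python
# Back-of-envelope: even an unusually bad burst of cache misses is tiny
# compared to one frame. Latency and miss counts are rough assumptions.

frame_budget_ms = 1000.0 / 60     # 16.7 ms per frame at 60 fps
miss_penalty_ns = 100.0           # ~100 ns per main-memory access
extra_misses = 10_000             # an exceptionally unlucky frame

extra_ms = extra_misses * miss_penalty_ns / 1e6
print(f"extra stall: {extra_ms:.1f} ms of a {frame_budget_ms:.1f} ms frame")
# -> 1.0 ms out of 16.7 ms: nowhere near a visible hitch. Frame-level
# stalls need something much bigger (texture uploads, shader compiles,
# disk I/O), as argued above.
```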
 