Core 2 Duo Benchmarked *real world*

The testing methodology for their gaming benchmark makes little sense as they put more stress on the video cards rather than the CPU by maxing out all those settings.
 
"I originally set out with this processor to see if Intel was indeed attempting to pull one over on us with their "benchmark sessions" they have been holding across the country, but instead I found that Intel has been underselling their own product if anything. "

Gasp!
 
Oh man, I love this guy.....

"better (higher) minimum frame rates definitely give the user a better gaming experience. "

Simply ingenious :D Tell me more, oh master of the obvious. I dunno, to me it sounds like this guy couldn't write a paragraph based on the average or max framerate, so he decided he would state something about the minimum.

Anywho, the only thing I found really spectacular in that article was in fact the idea that the Core 2 Duo E6700 would be $550. I was expecting AMD to drop prices --which they will-- but I didn't think it would have to be that monstrous a drop. K8L, AMD needs you and they need you now.
 
Well, I for one am glad he pointed that out. Nothing worse than cruising along fine and then suddenly having a huge framerate dip. Minimum FPS is important, and should probably be stressed more than it is.
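The min-vs-average distinction is easy to see from raw frametime data. Here's a quick sketch of how a FRAPS-style frametime log translates into the two figures (the numbers are invented for illustration, not taken from the review):

```python
# Hypothetical frametime log in milliseconds per frame (the kind of data
# FRAPS can dump); the values here are made up for illustration.
frametimes_ms = [16.7, 16.9, 17.1, 16.8, 45.0, 17.0, 16.6]

# Average FPS: total frames divided by total seconds elapsed.
avg_fps = 1000 * len(frametimes_ms) / sum(frametimes_ms)

# Minimum FPS: the single worst (longest) frame.
min_fps = 1000 / max(frametimes_ms)

print(f"avg: {avg_fps:.1f} fps, min: {min_fps:.1f} fps")
```

The average (~48 fps) looks perfectly smooth, but the one 45 ms frame drags the minimum down to ~22 fps, and that one frame is the dip you actually feel.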
 
deanx0r said:
The testing methodology for their gaming benchmark makes little sense as they put more stress on the video cards rather than the CPU by maxing out all those settings.

When he is getting a 12% delta on those high settings in F.E.A.R. at 1600x1200 with 4xAA/8xAF, it shows you just how effective a CPU the Conroe is. Scaling is all nice and good, but the fact that there is that much headroom on a 7800 SLI rig is amazing to say the least. Never mind the processor price difference.
 
deanx0r said:
The testing methodology for their gaming benchmark makes little sense as they put more stress on the video cards rather than the CPU by maxing out all those settings.


I really don't understand how you can say that. I mean, do you game at low res with nothing enabled?
I for one can hardly play at 1280x1024, it's too low. I do all my games at either 1600x1200 or 1920x1200 with everything maxed. So to me it's like, who cares that a CPU can do 3 million fps at 800x600 with all the eye candy off but can only do 30fps at the higher resolutions with eye candy on?

And like InorganicMatter said, min framerate is very important, imo more so than the max. I think we all want a great looking game that runs smooth, and if the frame rate drops below 30 it is rather noticeable and annoying.
 
deanx0r said:
The testing methodology for their gaming benchmark makes little sense as they put more stress on the video cards rather than the CPU by maxing out all those settings.

Not flawed at all, actually. Those are real world tests to see how the normal high-end user would be affected. It shows that even under high load situations where the GPU is normally the bottleneck (look at the 4800 vs. old Intel, almost the same fps), the Conroe gives a nice boost.
 
deanx0r said:
The testing methodology for their gaming benchmark makes little sense as they put more stress on the video cards rather than the CPU by maxing out all those settings.
They maxed out settings that stressed the CPU, not the GPU; they turned down most of the eye candy for the graphics. In the real world, when people use a 7600GT or above with eye candy on, there will probably be no advantage to the Conroe. Well, maybe an SLI 7900GTX or CrossFire X1900XT setup will be a little CPU limited.
 
serbiaNem said:
http://www.pcper.com/article.php?aid=265

This is a non-Intel setup and shows the $530 (est.) Core 2 Duo beating the FX-62 easily. I for one welcome the competition, and may be getting my first Intel rig.

I posted that link on another thread, I didn't know there wasn't a thread about it already.

Many folks just don't get that these are the old AMDMB.com guys. Ryan has never been an Intel fan. Doctor Tom himself called this guy an AMD fan. Kyle and the gang have fond memories of that event! :) Of late, they've done pretty good reviews IMHO.

The sad part missed in all of this: count how many times the $319 MSRP E6600 was faster than both the FX-60 and FX-62. Even when it wasn't as fast, it was right on their heels.

Come on, folks who love AMD, just admit it! Yonah stuck its foot in the door, Conroe kicked the Sum-Bee-yoch' in! Surely AMD will have an answer. Please boycott Conroe, that way the prices stay low and I can afford more than one. ;)
 
deanx0r said:
The testing methodology for their gaming benchmark makes little sense as they put more stress on the video cards rather than the CPU by maxing out all those settings.

Read the thread title. They tested real world performance. Benching Quake 3 at super low resolutions is not something you would do in the real world.
 
So do you HAVE to get "C3" or "C4" memory to get these kinds of OCs?

What about just Corsair "TWIN2X2048-6400"?
 
Does Core 2 Duo have Hyperthreading? Doesn't it help to have Hyperthreading (even with dual core processors) under extreme multitasking, like when encoding + burning a DVD + gaming + recording the game via FRAPS?
 
uniwarp said:
Does Core 2 Duo have Hyperthreading? Doesn't it help to have Hyperthreading (even with dual core processors) under extreme multitasking, like when encoding + burning a DVD + gaming + recording the game via FRAPS?
Nah, not really necessary at this point. Intel has Kentsfield, a dual-die quad core which can run 4 threads natively without Hyperthreading, coming down the pipe in Q1 2007 of course.
 
While I anxiously await my E6600, I do have to point out one thing:

I'm not certain about CoD2, but when F.E.A.R. first began to be used as a benchmark, it was *very* well known that it was GPU-limited, as it brought even many SLI and Crossfire systems to their knees. Thus, I'm not quite sure just how "CPU dependent" it is. I'm sure it'd make a difference, but I doubt that much. You'll see a much greater difference once DX10 cards are out and running it.
 
ToastMaster said:
While I anxiously await my E6600, I do have to point out one thing:

I'm not certain about CoD2, but when F.E.A.R. first began to be used as a benchmark, it was *very* well known that it was GPU-limited, as it brought even many SLI and Crossfire systems to their knees. Thus, I'm not quite sure just how "CPU dependent" it is. I'm sure it'd make a difference, but I doubt that much. You'll see a much greater difference once DX10 cards are out and running it.

That's why some folks used FRAPS and tested in-game performance. Timedemos can't account for AI, I/O (keyboard and mouse) input, etc. The more advanced the game, the more A.I. will change the outcome compared to a timedemo.
 
I for one welcome the competition, and may be getting my first Intel rig.
What mobo are you looking at? I want a Conroe in my next rig as well, but I am still deciding on the mobo.
 
I was expecting a bit more of a difference between the 2MB and 4MB L2 Conroes. A few benches really showed the large cache helping out, but for the most part the Conroes scaled pretty linearly.
I'm a lot less hesitant about skimping and buying the E6300 than I was before reading this. Now I just need a good, reasonably priced motherboard that'll hit a ~400MHz FSB...
 
FreiDOg said:
I was expecting a bit more of a difference between the 2MB and 4MB L2 Conroes. A few benches really showed the large cache helping out, but for the most part the Conroes scaled pretty linearly.
I'm a lot less hesitant about skimping and buying the E6300 than I was before reading this. Now I just need a good, reasonably priced motherboard that'll hit a ~400MHz FSB...

I think for games at least the extra 2MB L2 cache will make a pretty decent difference. CPU intensive games like Oblivion should really benefit.

PC Per said:
Our gaming tests set out to find games that were heavily CPU dependent in order to test for platform differences.

So they went about doing this by testing FEAR, COD 2 and 3DMark06, all of which are primarily GPU bound? lol
 
FreiDOg said:
I was expecting a bit more of a difference between the 2MB and 4MB L2 Conroes. A few benches really showed the large cache helping out, but for the most part the Conroes scaled pretty linearly.
I'm a lot less hesitant about skimping and buying the E6300 than I was before reading this. Now I just need a good, reasonably priced motherboard that'll hit a ~400MHz FSB...

OK, I just read the review and noticed something very important that should be mentioned lol.

PC Per said:
As I already mentioned, while I have the E6700 CPU in my hands that will be the top of the line Core 2 Duo processor released next month, the Core 2 Extreme Edition X6800 will run one multiplier faster at 2.93 GHz compared to the 2.67 GHz of the E6700. While this ES processor would not allow me to move the multipliers UP to test at X6800 speeds, I could move the multipliers DOWN in order to test E6600, E6400 and E6300 CPU speeds set respectively at 2.40 GHz, 2.13 GHz and 1.86 GHz. So while the CPU speeds are emulated, they should be relatively close to real world results we'll see when the CPUs launch. (Note: currently, the information I have seen says the E6400 and E6300 will only have 2MB of L2 cache, not the 4MB that the E6600 and E6700 have; this may affect performance results on those actual processors when they launch).

The reason why you don't see much difference between the 2MB and 4MB L2 Conroes in this review is that they all have 4MB of L2!!
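For what it's worth, the "emulation" in the quoted paragraph is just FSB x multiplier arithmetic. A quick sketch, assuming Conroe's stock bus clock (1066 MT/s quad-pumped, i.e. ~266.67 MHz):

```python
# Conroe's stock bus clock: 1066 MT/s quad-pumped = ~266.67 MHz.
FSB_MHZ = 1066.67 / 4

# The article's down-clocking scheme: one ES chip, multipliers dialed down.
multipliers = {"E6300": 7, "E6400": 8, "E6600": 9, "E6700": 10, "X6800": 11}

for sku, mult in multipliers.items():
    print(f"{sku}: {FSB_MHZ * mult:.0f} MHz")
```

Which lands on the familiar 1.86/2.13/2.40/2.67/2.93 GHz ladder, and shows why the ES chip's locked upper multiplier kept him from testing at X6800 speed directly.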
 
Stop posting to this review. It is the sloppiest review I have seen from a "reputable" site in a while.

Here are my discoveries:

First I noticed a discrepancy between the settings used in F.E.A.R. This screenshot shows 1280 while the graphs say 1600.

[screenshot]

The bar graph does not correlate to this graph below at all...

[screenshot]

Here as well...

[screenshot]

Aren't you supposed to optimize for SLI when you are testing SLI?

[screenshot]

It looks like they just sort of grabbed a bunch of random screenshots and random data and threw it up on their site for more hits.

Here is PC Perspective's laptop self-destructing due to careless user input:
 
savantu said:
Write them an email.

I sent them one yesterday actually and got no reply...

I find it interesting that Conroe's hype has blinded people's ability to make rational judgements.
 
burningrave101 said:
Ok i just read the review and noticed something very important that should be mentioned lol.



The reason why you dont see much difference between the 2MB and 4MB L2 Conroes in this review is because they all have 4MB L2!!

haha. Nice catch, thanks.
I guess that's one way to make your Conroe pre-review a lot cheaper.
 
FreiDOg said:
haha. Nice catch, thanks.
I guess that's one way to make your Conroe pre-review a lot cheaper.
We will have to wait until the final benchmarks to see, but you obviously do want the E6600 for its full-fledged Conroe status.
 
rusek said:
They obviously threw out the outliers in the data,

How can you even defend this bullshit? If they threw out outliers, then they should be thrown out of both graphs, for both Intel and AMD! That is something they clearly didn't do.

rusek said:
and I'm sure there is a reason for not optimizing for SLI.

Religious are you? Blind faith you have.
 
burningrave101 said:
Ok i just read the review and noticed something very important that should be mentioned lol.

The reason why you dont see much difference between the 2MB and 4MB L2 Conroes in this review is because they all have 4MB L2!!

LOL! It's NOT "they" but "it". He used one processor at different speeds to simulate the others, and even says so. Funny thing is, the E6400 will have the same amount of cache as the Opterons and FXs, and twice as much as half of the other X2/AM2s :rolleyes:

That sweet assed E6600 will have 4MB.

And unless you're multitasking, something they didn't do much of, it is NOT going to make that much of a difference.

He also took a screenshot of Sandra before he refreshed to acquire the correct reading for the Conroe. Does that mean he didn't test it?

The game settings have NO relation to LAME, DivX or ANY of the other tests they ran. WOW! How lame is that?

burningrave101 said:
So they went about doing this by testing FEAR, COD 2 and 3DMark06, all of which are primarily GPU bound? lol

I don't know about 3DMark, but the SLI and SMP patches move some of the load from one processor to two and un-bind the single GPU for SLI. Look at the readme files of those patches. Some even add a Hyperthreading update.

With "WME9 and some Mozilla" the Conroe should have been blown out of the water. These apps have very little Intel optimization. Should folks jump on them for this? No one uses WME 9 when WME 10 features the latest SIMD instructions and is multi-threaded. Likewise, users who install Mozilla almost never use WMP of any kind, and surely not the older version 9 when 10 is much faster.

Last but not least.

J-Mag said:
How can you even defend this bullshit? If they threw out outliers, then they should be thrown out of both graphs and from both Intel and AMD! This is something they clearly didn't do.

Religious are you? Blind faith you have.

Then your blind faith in the green arrow guys makes you no better, right?

I believe Fugger, Coolaler, Freecableguy, Victor Wang, HiCookie and all the rest who have tested this thing already and are gushing over it. Many of these same guys set all kinds of records with Athlons of all kinds.

Ryan, an AMD fan since his early teens, might have slipped on a couple of tests, but to try and smack him down and invalidate the whole review is bogus! Many of the tests/apps are older, which keeps AMD from being blown away even more. Yes, many folks do use Premiere 7 and higher, which is actually optimized for Intel. Add Elements, which [H] uses; think they're Intel fans? Movie Maker 2 and the Roxio, Creative and Nero software are also updated for dual core or newer SIMDs or both. Yonah should have been a warning.
 
Donnie27 said:
Then your blind faith in the Green Arrow guys make you no better right?

I am just pointing out the facts here. I am not a brand person. I had a 2.4C before my current Opteron. I will switch to AM2 or Conroe depending on which platform has better multi-card graphics performance.

Donnie27 said:
I believe Fugger, Coolater, Freecableguy, Victor Wang, HiCookie and all the rest who tested this thing already and are gushing over it. Many of these same guys set all kinds of records with Athlons of all kind.

Yes, I agree, Conroe is whooping ass in various types of applications: encoding, Super Pi, compressing, encrypting, etc. I have seen the threads and I am not denying Conroe's performance. Again, I am mainly concerned about SLI (and X-fire, but let's save that for another thread) performance.

Anyway, these guys are mainly running 975X boards... Conroe performance depends upon the chip and the memory controller, so it is apples to oranges here. Well, maybe more like plums to prunes...

Donnie27 said:
Ryan, an AMD Fan since his early teens, might have slipped on a couple of tests, but to try and smack him down and invalidate the whole review is Bogus!

I didn't invalidate the whole review; you are inferring incorrectly. I was pointing out that the SLI section of the review is bogus. Which it is. I have a feeling that the Nvidia 590 chipset for Intel is either performing poorly or is unstable, mainly because we have not seen any other reviewers with Intel and SLI, but also because of the history of Nvidia's Intel SLI chipsets. I mean, Nvidia has to design the memory controller, which it doesn't have to with AMD chips...

Donnie27 said:
Many of the tests/apps are older and keeps AMD from being blown away even more. Yes, many folks do use Premier 7 and higher that's actually optimized for Intel. Add Elements that [H] uses, think they're Intel Fans? Movie Maker 2, the ROXIO, Creative and Nero software are also updated for Dual Core or newer SIMDs or both. Yonah should have been a warning.

Uhh where at all was I talking about any of these results? Again, I have no problem accepting conroe's efficiency on many applications.
 
J-Mag said:
I am just pointing out the facts here. I am not a brand person. i had a 2.4c before my current Opteron. I will switch to AM2 or Conroe depending on which platform has better Multi Card Graphic performance.

Me too, and I can't make up my mind. ATI-600 has the lead right now.

J-Mag said:
Yes I agree, Conroe is whooping ass in various types of applications. Encoding, super Pi, compressing, encrypting etc... I have seen the threads and I am not denying Conroes performance. Again, I am mainly concerned about SLI (and X-fire, but let's save that for another thread) performance.

I didn't mean to sound like a jerk or anything. I was just saying, if the game benchmarks were not acceptable, OK, but don't dis the whole review. I'm not sure if he took the screenshot before the settings were changed; NOT an excuse, and a lame mistake either way.

J-Mag said:
Anyway These guys are also running 975x boards mainly... Conroe performance depends upon the chip and the memory controller. So it is apples to oranges here, well maybe more like plums to prunes...

HKEPC tested i975 vs. i965 and showed very little difference between the two, about 0% to 1%, and more often than not it fell within the margin of error. The guys at Xtreme Systems say ATI's boards are faster than Intel's and nVidia's for both Intel and AMD processors.

FACT: Conroe was built with smart cache and smart memory access to lessen the importance of the memory controller, and of going to RAM when it has to.

J-Mag said:
I didn't invalidate the whole review. You are inferring incorrectly. I was pointing out the SLI section of the review is Bogus. Which it is. I have a feeling that either the Intel 590 chipset is performing poorly or unstable. Mainly because we have not seen any other reviewers with Intel and SLI, but also because of the History of Nvidia's Intel SLI chipset. I mean Nvidia has to design the memory controller, which it doesn't have to with AMD chips...

If I didn't understand that you only meant those games, or as it relates to SLI, I'll say a BIG MY BAD! The biggest problem I see from nVidia is their not wanting to enable SLI on Intel chipsets because they know Intel's are better. I have NO idea why they flop with their Intel Edition boards. Hell, Dell likes them.

J-Mag said:
Uhh where at all was I talking about any of these results? Again, I have no problem accepting conroe's efficiency on many applications.

Sure, we'll have to get our own hands on them. I wish like hell Kyle, Chris or one of the [H] guys could get their hands on one, or tell us about the one they might have already LOL!

Again, most of those guys at Xtreme Systems were very pro-AMD until the Pentium M showed up, then Dothan, then Yonah. When I first visited that site it was like AMD = 125 viewing, Intel = 20 viewing. Now? Hell, go there and see for yourself:

http://www.xtremesystems.org/forums/forumdisplay.php?f=61
 
Donnie27 said:
HEKPC tested i975 vs i965 and showed very little difference between the two, about 1% to 0% and more time than not fell with in the margin of Error. The guys at Xtreme system say ATI's boards are faster than Intel and nVidia's for Intel or AMD processors.

When I said 975X boards I should have said Intel boards. It seems as if Intel just produces better chipsets for their procs than Nvidia does. I admit I haven't looked into ATI's boards too heavily, because I have two 7900GTXs right now, so switching over would require me to purchase new GPUs... although when the new GPU refresh hits I will definitely consider that option.

Anyway, they say ATI's boards are faster for overclocking, right? I am sure they are on a pretty even keel when compared at stock.


Donnie27 said:
Again, most of those guys at Xtreme Systems were very pro-AMD until the Pentium M showed up, then Dothan, then Yonah. When I first visited that site it was like AMD = 125 viewing, Intel = 20 viewing. Now? Hell, go there and see for yourself:

http://www.xtremesystems.org/forums/forumdisplay.php?f=61

Yeah, I know, I have been over there for a while myself. Most net users are bandwagon people anyway.

There will be no definitive answer until retail 590 Intel Edition boards start showing up, unless you don't care about SLI, obviously.
 
Sweet GOD.
Even if, when it launches, it's only as powerful as the FX-62 (I'm speaking hypothetically here), the Core 2 Duo will still most likely be half the price. Judging by the benches he ran, I am really looking forward to the Core 2 lineup.

But here's a question: when is Intel going to integrate a memory controller like that of the K8 line?
 
Majin said:
But here's a question: when is Intel going to integrate a memory controller like that of the K8 line?

When they feel the need to, not a second before.
So far even Kentsfield runs impressively (from what I have read)... without an onboard memory controller.
So far they have no need to do so.

Terra - Intel's chipsets are VERY efficient!
 
Donnie27 said:
LOL! It's NOT "they" but "it". He used one processor at different speeds to simulate the others, and even says so. Funny thing is, the E6400 will have the same amount of cache as the Opterons and FXs, and twice as much as half of the other X2/AM2s :rolleyes:

That sweet assed E6600 will have 4MB.

And unless you're multitasking, something they didn't do much of, it is NOT going to make that much of a difference.

You can't compare the amount of cache used on an X2/Opteron to the amount used on Conroe and say that because there is little difference in performance between 512KB and 1MB on the A64s, there will be little difference on Conroe or any other processor. It doesn't work that way. Different CPU architectures benefit very differently from cache size. The X2/Opterons don't benefit too awfully much from the extra cache; Intel processors, however, do in all the results I've seen to this point. The major key to the Pentium M's performance was 2MB of fast L2 cache. If it had only had 512KB or 1MB it wouldn't have performed as well as it did. Intel has also stuck large amounts of L2 cache on a lot of the P4 processors to limit the performance hit of the long pipelines used in Prescott and Presler.

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2762&p=6

In this article you can see that even on AMD processors the added L2 cache does make a slight difference in specific areas, especially games.

Donnie27 said:
I don't know about 3DMarks but the SLI and SMP patches remove some of the load from One Processor to two and UN-bounds Single GPU for SLI. Look at the read me files of those patches? Some even add a Hyperthreading update.

Yes, but games like F.E.A.R. and CoD2 are not extremely CPU intensive whether you are running SLI or not. You can see a much bigger difference in gaming performance if you look at games like Oblivion, which is VERY CPU intensive.

Majin said:
Sweet GOD.
Even if when it launches it's only as powerful as the FX-62 (I'm speaking hypothetically here) The Core 2 Duo will still most likely be half the price. Judging by the benches he ran I am really looking forward to the Core 2 line up.

But heres a question: When is Intel going to integrate a memory controler like that of the K8 line?

They don't really need to at this point. Intel makes the best chipsets on the market, and the Northbridge memory controller has low enough latency that it's not a huge difference in performance. The biggest benefit of the memory controller on the A64 isn't so much the lower latency as the fact that it runs at the same speed as the CPU, so performance improves when you increase the clock speed. It works much the same way with the FSB and the MC in the Northbridge for Intel, except the speeds just aren't quite as high.

But when you start putting the MC on-die you start running into more problems. I don't know if you've noticed or not, but a lot of the time the MC can be weak on different steppings and models of A64s and can prevent you from overclocking well, especially with the memory. You run into more compatibility issues with RAM too, and a lot of the Athlon 64/X2 processors don't like to run 1T at high memory speeds; you still can't run 4 DIMMs at 1T as far as I know.
 
burningrave101 said:
You can't compare the amount of cache used on an X2/Opteron to the amount used on Conroe and say that because there is little difference in performance between 512KB and 1MB on the A64s, there will be little difference on Conroe or any other processor. It doesn't work that way. Different CPU architectures benefit very differently from cache size. The X2/Opterons don't benefit too awfully much from the extra cache; Intel processors, however, do in all the results I've seen to this point. The major key to the Pentium M's performance was 2MB of fast L2 cache. If it had only had 512KB or 1MB it wouldn't have performed as well as it did. Intel has also stuck large amounts of L2 cache on a lot of the P4 processors to limit the performance hit of the long pipelines used in Prescott and Presler.
No, not necessarily accurate. The difference between Banias with 1MB of cache and Dothan with 2MB of cache was pretty much negligible, maybe 2-5% better performance due to the added cache.

Cache on a high-IPC architecture typically doesn't provide a tremendous performance benefit.

The major key to the Pentium M's performance advantage was the fact that its 2MB cache had a 10-cycle latency, not that it was in fact 2MB. 1MB versions at equivalent clock speed performed very close to it.

While I agree that using another architecture to predict gains made due to cache is flawed, I would surmise from the data we have seen now that on high-IPC architectures the difference between large amounts of cache and somewhat smaller amounts is not significant.

The current performance data we have seen isn't conclusive that the bulk of Core architecture performance comes from its large 4MB of L2 cache.
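The size-versus-latency tradeoff described above is the classic average memory access time calculation. A toy sketch with invented miss rates (none of these numbers are measured from real chips):

```python
def amat(hit_cycles, miss_rate, miss_penalty_cycles):
    # Average memory access time = hit latency + miss rate * miss penalty.
    return hit_cycles + miss_rate * miss_penalty_cycles

# Invented numbers: doubling the cache trims the miss rate a little,
# but a slower cache taxes every single access.
small_fast = amat(hit_cycles=10, miss_rate=0.05, miss_penalty_cycles=200)
big_slow = amat(hit_cycles=14, miss_rate=0.04, miss_penalty_cycles=200)

print(small_fast, big_slow)
```

With these made-up figures the 10-cycle cache averages 20 cycles per access and the 14-cycle cache 22, which is exactly the point: extra capacity only pays off if the miss-rate reduction outweighs the added hit latency.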
 
burningrave101 said:
You can't compare the amount of cache used on an X2/Opteron to the amount used on Conroe and say that because there is little difference in performance between 512KB and 1MB on the A64s, there will be little difference on Conroe or any other processor. It doesn't work that way. Different CPU architectures benefit very differently from cache size. The X2/Opterons don't benefit too awfully much from the extra cache; Intel processors, however, do in all the results I've seen to this point. The major key to the Pentium M's performance was 2MB of fast L2 cache. If it had only had 512KB or 1MB it wouldn't have performed as well as it did. Intel has also stuck large amounts of L2 cache on a lot of the P4 processors to limit the performance hit of the long pipelines used in Prescott and Presler.

Sigh* I can compare them just as I compare their costs =P I don't give a flip about theoretical or whatever; it's the end result that matters. When I mentioned cache, it was in the scheme of total transistors, how they perform and what they cost. It's not just the architecture that determines that.

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2762&p=6

In this article you can see that even on AMD processors the added L2 cache does make a slight difference in specific areas, especially games.

I said Conroe, and that link has NOTHING to do with Conroe. Plus, not every app is cache dependent, and many times multitasking will show a difference.


Yes, but games like F.E.A.R. and CoD2 are not extremely CPU intensive whether you are running SLI or not. You can see a much bigger difference in gaming performance if you look at games like Oblivion, which is VERY CPU intensive.

Sorry, not true at all. CoD2 has an SMP/Hyperthreading patch that works real well, sorry! Same for Quake and the others I linked to a while back.

They don't really need to at this point. Intel makes the best chipsets on the market, and the Northbridge memory controller has low enough latency that it's not a huge difference in performance. The biggest benefit of the memory controller on the A64 isn't so much the lower latency as the fact that it runs at the same speed as the CPU, so performance improves when you increase the clock speed. It works much the same way with the FSB and the MC in the Northbridge for Intel, except the speeds just aren't quite as high.

But when you start putting the MC on-die you start running into more problems. I don't know if you've noticed or not, but a lot of the time the MC can be weak on different steppings and models of A64s and can prevent you from overclocking well, especially with the memory. You run into more compatibility issues with RAM too, and a lot of the Athlon 64/X2 processors don't like to run 1T at high memory speeds; you still can't run 4 DIMMs at 1T as far as I know.

I never said Intel needed to; I said they did some very nice things with L1, L2, access to L2 and main memory, etc. It seems to be paying off. Please don't waste your time trying to explain the A64 to me. I pointed out problems with my 3500+ just like the ones you are trying to explain to me.

The others here can tell you about me complaining about the 4-slot issue, the 1T problems, that it doesn't run as cool as many say, and that it will cry like a little girl when it comes to overclocking.

One guy even said, "If you're going to get a $319 E6600, why didn't you get a $375 (time of purchase) Opty-165?" I'm getting a $319 (or $343 if I pre-order) because it's faster than a $1000+ FX-62 before it's overclocked LOL!
 
coldpower27 said:
No, not necessarily accurate. The difference between Banias with 1MB of cache and Dothan with 2MB of cache was pretty much negligible, maybe 2-5% better performance due to the added cache.

Cache on a high-IPC architecture typically doesn't provide a tremendous performance benefit.

The major key to the Pentium M's performance advantage was the fact that its 2MB cache had a 10-cycle latency, not that it was in fact 2MB. 1MB versions at equivalent clock speed performed very close to it.

While I agree that using another architecture to predict gains made due to cache is flawed, I would surmise from the data we have seen now that on high-IPC architectures the difference between large amounts of cache and somewhat smaller amounts is not significant.

The current performance data we have seen isn't conclusive that the bulk of Core architecture performance comes from its large 4MB of L2 cache.

Yup, I remember screwing that up as well. My thinking was Yonah = 14ns latency, but 4MB of L2 was like the older models' 10ns = 2MB, so it would then be like 7ns = 2MB x 2. Not so bad, but wrong. At least I didn't dig a deep hole, and I thanked that guy for teaching me something!

Conroe is not only back to being fast, but much, much smarter.
 
Damn, how many "real world" benchmarks will it take to convince the green arrow boys that these are NOT rigged?!
 