Upgrade from Phenom II X6 1045 to ?

Just my 2 cents...........8370E for $119 @ MicroCenter.........:D

Damn, have they dropped that much? Yeah, can't beat that at all if you've got a Microcenter near you, which most people don't.
 
Just my 2 cents...........8370E for $119 @ MicroCenter.........:D

Honestly, at that price I'd buy a Haswell Core i3 instead.

It would perform better in most tasks, run quieter, and use less power.

edit:

Oh. OP already has a 990FX board. In that case an FX chip makes more sense.
 
Zarathustra[H];1041513249 said:
Honestly, at that price I'd buy a Haswell Core i3 instead.

It would perform better in most tasks, run quieter, and use less power.

edit:

Oh. OP already has a 990FX board. In that case an FX chip makes more sense.

An i3 will never be able to keep up with an 8-core AMD, EVER. It makes no sense at all to ever recommend an i3 for any desktop. I never recommend less than 4 cores for Intel or 6 for AMD.
 
Yeah, I'd rather have an 8300 overclocked than an i3 as well. Now, if you're talking an i5, then yeah, I'd rather have one of those, but the 8 cores of an 8300, especially when overclocked, are going to give you a lot more horsepower and performance than a pair of i3 cores, Hyper-Threading or not, unless you're doing nothing but single-core applications all day and never multitasking.
 
Well, I bit the bullet and grabbed an FX-8350 off eBay for $135. Seemed reasonable enough. Probably not the best deal in the world, but it'll do for a couple more years.
Just got my 2nd 7950 OC card installed in CrossFire, and with some mild overclocking (CPU from 2.7 GHz to 3.1 GHz and the GPU from 950 MHz to 1050 MHz) I'm scoring right around 9400 on my Fire Strike benchmarks. Hopefully another GHz of CPU and I'll be good to go for gaming and graphics for a while.

Should I keep my air cooler setup for now, or go to the H100i? I'd want to reuse a water setup in the next upgrade as well.
 
I say go for the water cooler and keep all of the mounting hardware; that way you can use it on your next build. Well, unless Intel changes mounting dimensions again ;) ... AMD has been pretty steady in the past.

Guys, also on that Microcenter deal you get $40 off a mobo, and you can switch the discount between sockets, so you might be able to eBay that other board for a slightly cheaper CPU.
 
Well, I bit the bullet and grabbed an FX-8350 off eBay for $135. Seemed reasonable enough. Probably not the best deal in the world, but it'll do for a couple more years.
Just got my 2nd 7950 OC card installed in CrossFire, and with some mild overclocking (CPU from 2.7 GHz to 3.1 GHz and the GPU from 950 MHz to 1050 MHz) I'm scoring right around 9400 on my Fire Strike benchmarks. Hopefully another GHz of CPU and I'll be good to go for gaming and graphics for a while.

Should I keep my air cooler setup for now, or go to the H100i? I'd want to reuse a water setup in the next upgrade as well.

I vote H100. I've got one on mine and it does a great job. At a screaming 4.8 GHz I'm able to keep it on the lowest fan speed setting, and it keeps the chip in the low 40s while gaming, with virtually no noise.
 
Sounds like a plan. I guess the latest variant is the H105 with the round block on it. ~$100 on Amazon.
 
An i3 will never be able to keep up with an 8-core AMD, EVER. It makes no sense at all to ever recommend an i3 for any desktop. I never recommend less than 4 cores for Intel or 6 for AMD.

You've got your head in the sand.

Even today in 2015, unless you are rendering or encoding, 8 cores are pretty much useless on a desktop.

For just about everything else, fewer, stronger cores will perform better.

I hate buying Intel, because I consider them a terrible company after all their shady and illegal business practices, but the truth is, for most users an i3 will perform better than any CPU in AMD's lineup, regardless of how many cores it has.

Each core of an i3-4360T is about 70% to 85% faster than each core of an FX-8370E.

You can shrink this gap by overclocking the AMD chip (all i3s are multiplier-locked), but you won't come close. That FX would have to hit between 7.3 and 8.0 GHz in order to keep up core for core, and that ain't happening.
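
For the curious, here is where those clock figures land. This is just a rough sketch; the ~4.3 GHz FX overclock used as the baseline is an assumption on my part, while the 70-85% per-core advantage range comes from the claim above.

Code:
# Rough sanity check of the "7.3 to 8.0 GHz" claim above.
# Assumption: the baseline is a typical FX-8370E overclock of ~4.3 GHz;
# the 70-85% per-core advantage range comes from the post itself.
fx_baseline_ghz = 4.3

for advantage in (0.70, 0.85):
    required = fx_baseline_ghz * (1 + advantage)
    print(f"i3 core {advantage:.0%} faster -> FX needs ~{required:.1f} GHz")

# Output:
# i3 core 70% faster -> FX needs ~7.3 GHz
# i3 core 85% faster -> FX needs ~8.0 GHz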
 
You can also say that about Intel's i5 and i7 CPUs... "for most users." I would never trade my CPU for an i3, lol. You have to be fucking kidding me, and newer games are at least using 4+ cores nowadays.
 
You can also say that about Intel's i5 and i7 CPUs... "for most users." I would never trade my CPU for an i3, lol. You have to be fucking kidding me, and newer games are at least using 4+ cores nowadays.

And they're only going to use more going forward...


DirectX 12 seems to peter out at 6, though...
 
Zarathustra[H];1041514211 said:
You've got your head in the sand.

Even today in 2015, unless you are rendering or encoding, 8 cores are pretty much useless on a desktop.

For just about everything else, fewer, stronger cores will perform better.

I hate buying Intel, because I consider them a terrible company after all their shady and illegal business practices, but the truth is, for most users an i3 will perform better than any CPU in AMD's lineup, regardless of how many cores it has.

Each core of an i3-4360T is about 70% to 85% faster than each core of an FX-8370E.

You can shrink this gap by overclocking the AMD chip (all i3s are multiplier-locked), but you won't come close. That FX would have to hit between 7.3 and 8.0 GHz in order to keep up core for core, and that ain't happening.

Wrong. The IPC figure most people cite is, unfortunately, either ignorant or purposely misleading: it comes from Cinebench 11.5, which is biased against AMD through its compiler.

Let me add that I have managed to play with the best i3s, i5s and the A10-7850K, and none made me regret my 8350; none came close to its everyday performance. Best I could tell, in some cases where the IPC was definitely in the other's favor, core count is what made the difference. My favorite part is that every CPU benchmark only ever benches one thing at a time, never multitasking ability. My guess is this is where I am seeing the difference, and it is real world.
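
That multitasking claim is something anyone can test at home rather than argue about. A minimal sketch, using only the Python standard library: run the same CPU-bound task under 1, 2, 4 and 8 parallel workers and watch how the total time grows. A chip with more real cores should hold its time flat for longer before it starts stretching.

Code:
# Crude multitasking scaling test: run N copies of the same CPU-bound
# task in parallel and time the whole batch. Standard library only.
import time
from multiprocessing import Pool

def burn(n):
    # Pure-CPU busywork: no I/O, no memory pressure.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    WORK = 5_000_000
    for workers in (1, 2, 4, 8):
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(burn, [WORK] * workers)
        elapsed = time.perf_counter() - start
        # Perfect scaling keeps elapsed time flat as worker count grows.
        print(f"{workers} workers: {elapsed:.2f}s")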
 
Wrong. The IPC figure most people cite is, unfortunately, either ignorant or purposely misleading: it comes from Cinebench 11.5, which is biased against AMD through its compiler.

Let me add that I have managed to play with the best i3s, i5s and the A10-7850K, and none made me regret my 8350; none came close to its everyday performance. Best I could tell, in some cases where the IPC was definitely in the other's favor, core count is what made the difference. My favorite part is that every CPU benchmark only ever benches one thing at a time, never multitasking ability. My guess is this is where I am seeing the difference, and it is real world.

I would agree with you insofar as going from 2 to 4 cores will benefit.

Adding more cores after 4 has severely diminishing returns for what the majority of people do, even us here on [H]. After 4 cores you really need to be doing rendering or encoding to see any real benefit.

Also, you are misinformed regarding the compiler bias. The whole Intel-compiler-hurting-AMD-CPUs thing is a thing of the past. Intel settled with AMD in court over this, what, almost 10 years ago now?

The reason Cinebench is often used to determine single-core performance is because it is one of the few benchmark tools that has a single-core mode. Its results ARE very accurate and very fair.

While I would prefer to have at least 4 cores, I would still take 2 Haswell cores over anything in AMD's lineup.
 
Zarathustra[H];1041514973 said:
I would agree with you insofar as going from 2 to 4 cores will benefit.

Adding more cores after 4 has severely diminishing returns for what the majority of people do, even us here on [H]. After 4 cores you really need to be doing rendering or encoding to see any real benefit.

Also, you are misinformed regarding the compiler bias. The whole Intel-compiler-hurting-AMD-CPUs thing is a thing of the past. Intel settled with AMD in court over this, what, almost 10 years ago now?

The reason Cinebench is often used to determine single-core performance is because it is one of the few benchmark tools that has a single-core mode. Its results ARE very accurate and very fair.

While I would prefer to have at least 4 cores, I would still take 2 Haswell cores over anything in AMD's lineup.

Boy, you need to read up. Cinebench 11.5 does use ICC from Intel, which does bias against AMD. It is stated in Cinebench's info, which is required by law because of the lawsuit. Therefore, IPC measured through Cinebench 11.5 is void of relevance for direct comparison.
 
Boy, you need to read up. Cinebench 11.5 does use ICC from Intel, which does bias against AMD. It is stated in Cinebench's info, which is required by law because of the lawsuit. Therefore, IPC measured through Cinebench 11.5 is void of relevance for direct comparison.

Interesting. I remembered the compiler lawsuit being settled longer ago, but it was actually 2010.

OK, so Cinebench 11.5 was launched in 2010, so it makes sense that it would have been compiled using the pre-settlement Intel compiler, as would pretty much all performance-dependent software of that time, because even with the intentional crippling of AMD features in that compiler, it was (and still is) the compiler producing the highest-performing binaries for both AMD and Intel.

As such, for a 2010 benchmark it would still have been relevant, as it would have been reflective of real-world results: in the real world, the software you ran would have been compiled with that same compiler. On Windows, just about everything that isn't open source uses ICC for this reason. Even with the crippling of the time, it was the best compiler for AMD hardware.

Now, fast forward to Cinebench R15. This would have been compiled using a post-settlement ICC.

It shows an even larger difference between the i3-4360 core and the FX-8370E core than the Cinebench 11.5 results do.

Or are you alleging that Cinebench R15 still uses the pre-2010 settlement ICC compiler?


The truth is this: the biggest performance difference to come out of Intel's "crippling" was that the compiler disabled SSE2 on AMD hardware. However, evidence suggests that Cinebench R11.5, while using the affected compiler, actually used compiler options forcing SSE2 on all hardware, which means the benchmark is not subject to this problem.

If we read this we find:

Just a few remarks regarding CineBench, as some people seem to draw their conclusions from conspiracy theories instead of facts:

1. CineBench 11.5 is based on CINEMA 4D (a professional 3D software package, consisting of several million lines of code). There is no special code tuned for benchmark purposes (or for a specific vendor), and there are no special benchmark libraries; to be more precise: except for the ability to save scene data, this is the production code.

2. CineBench 11.5 requires SSE2-compatible CPUs. There is no differentiation between Intel or AMD CPUs (the compilers are set to create SSE2 code without creating jump code for different CPUs or CPU vendors).

3. The CineBench 11.5 Windows version uses ICC (the OS X version GCC 4.2), as these were the compilers creating the fastest code at that time (end of 2009) for these platforms, independent of the CPU vendor. To be more specific: with the (SSE2) compiler setting used in CINEMA 4D and CineBench 11.5, the speed advantage of ICC over MSVC (roughly 15-20%) has been slightly bigger on AMD CPUs than it was on Intel CPUs.

4. While OpenMP is used on Windows in some parts of the app, there is no use of it in the rendering (which is what's benchmarked and what people refer to when comparing results).

5. People who still think that CineBench prefers a specific vendor might check what happened when the first Bulldozer CineBench results leaked. Despite what some fanboys said before the launch (of this CPU), the CineBench results have been pretty accurate in picturing the strengths and weaknesses of this CPU.

Best regards,

Wilfried

It actually suggests that Cinebench 11.5 favors AMD, not Intel.

I think your fanboyism for AMD is making you buy into conspiracy theories that bear no truth in reality.

Yes, there was a time when the Intel compiler sabotaged AMD (and Cyrix) CPUs by omitting certain CPU extensions. Intel is an evil and criminal corporation for doing this (and I hate giving them my money because of it). This, however, is a historical issue, not a current one. With their settlement with the FTC and AMD, new versions of the compiler no longer have this defect.

The truth is that ever since Bulldozer launched in 2011, we have known that it took an extreme hit in IPC compared to the Phenom IIs that came before it, let alone compared to Intel's offerings at the time. Piledriver has improved this a little, but not enough, and at the same time Intel has improved IPC as fast or faster.

It should not be a surprise that the i3 performs better per core.

I was prepared to buy a Bulldozer CPU for my desktop in 2011. I had already bought a 990FX motherboard and was running it with a 1090T in anticipation. When the launch benchmarks came out, I was disillusioned, said "fuck it all" and hated spending money on an Intel system, but I did. The 990FX did server duty instead, at first with the 1090T, and later with an FX-8120 and finally an FX-8350 before being retired; its large core count performed admirably as I was running a virtualized environment.

Long story short. I was disillusioned, but moved on, rather than take a detour to crazy conspiracy theory land.

Summary:
There was a compiler problem. It is gone now. Benchmarks are not biased today (and possibly never were, due to the forcing of SSE2 support), and even if they were, it would have been a fair representation of how the CPU would perform in the wild, as EVERYONE except the open source community used that same compiler, as they should have, because even with the crippling the ICC compiler performed better on AMD hardware than the alternatives.
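
To make the dispatcher complaint concrete: the accusation was that ICC's runtime picked code paths based on the CPUID vendor string rather than on the actual feature flags. A toy illustration of the two strategies follows; this is obviously not ICC's real code, just the shape of the argument.

Code:
# Toy contrast of the two dispatch strategies at the heart of the
# compiler dispute. Illustrative only -- the real dispatcher is native code.
def biased_dispatch(vendor, has_sse2):
    # Fast path only for "GenuineIntel", regardless of actual features.
    if vendor == "GenuineIntel" and has_sse2:
        return "sse2_path"
    return "baseline_x87_path"

def fair_dispatch(vendor, has_sse2):
    # Fast path chosen on feature flags alone; vendor string ignored.
    return "sse2_path" if has_sse2 else "baseline_x87_path"

# An SSE2-capable AMD chip reports vendor "AuthenticAMD":
print(biased_dispatch("AuthenticAMD", True))  # baseline_x87_path
print(fair_dispatch("AuthenticAMD", True))    # sse2_path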
 
Real-world usage and times for several different systems.
Here are some times running the BOINC PrimeGrid Sierpinski/Riesel Base 5 Problem (LLR).
Each core has its own work unit to do: a quad CPU does 4, a hex does 6, and so on.

HT is off on the Intel systems.

Times are hours:minutes.
10:15 on my [email protected], SSD, W7 Pro, mem 1450 7-7-7-21, x2 (time is roughly the same on both systems)
6:45 on Intel Xeon X5660 @ 4 GHz, no HT, SSD, Win 8.1 Pro, mem 1400 7-7-7-21, Asus Rampage III
7:38 on Intel Xeon [email protected], no HT, SSD, Win 7 Pro, mem 1400 7-7-7-21, Asus P6T V2 Deluxe
6:52 on Intel Xeon X5660 @ 4 GHz, no HT, SSD, Win 7 Pro, mem 1400 7-8-7-21, EVGA FTW3
16:00 roughly, 4P 24-core AMD Opteron 8425 @ 2.1 GHz, 80 GB VelociRaptor HDD, Linux, 800 MHz ECC mem
20:00 to 22:00+, 4P 48-core Opteron 6166 HE @ 1.7 GHz, PC3-1333 ECC mem, stock speed, no overclock
9:30 on dual X5650 @ 2.9 GHz, PC3-1333 ECC, 80 GB VelociRaptor, Win 7 Pro
13 hours roughly, Q6600 @ 3.0 GHz, DDR2-800, SSD, Linux

New systems with AVX:
4:35, 2600K @ 3.7 GHz, HT off
Under 2 hours at 4400 MHz on the i7-5960X
3 hours even, i7-4820K @ 4.5 GHz, HT off
i7-4960X: 3h 16min, SSD
i7-5960X: 3h 0min, SSD
i7-3970X: 6h 2min, HDD
i7-4960X: 4h 3min, HDD

5-7 hours, AMD FX-8320 @ 4.4 GHz, only 4 cores running 4 work units
12-15 hours, FX-8350 @ 4.0 GHz, all 8 cores running 8 work units.
10h 09m, [email protected], 6 cores, 6 WUs.
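
If you want to compare those runtimes on a more even footing, convert each to hours and scale by clock speed, since every core runs its own work unit. A quick sketch with a few rows transcribed from the list above; the 13:30 for the FX-8350 is just the midpoint of its 12-15 hour range.

Code:
# Normalize a few of the LLR runtimes above: hours per work unit, plus
# a clock-scaled figure so the per-GHz gap is visible (lower is better).
def to_hours(hmm):
    h, m = hmm.split(":")
    return int(h) + int(m) / 60

# (system, time h:mm, clock in GHz) -- transcribed from the list above
systems = [
    ("Xeon X5660, no HT", "6:45", 4.0),
    ("Q6600",             "13:00", 3.0),
    ("i7-4820K, no HT",   "3:00", 4.5),
    ("FX-8350, 8 WUs",    "13:30", 4.0),  # midpoint of the 12-15h range
]

for name, t, ghz in systems:
    hours = to_hours(t)
    print(f"{name:18s} {hours:5.2f} h/WU, {hours * ghz:6.2f} GHz-hours/WU")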
 
Zarathustra[H];1041515225 said:
Interesting. I remembered the compiler lawsuit being settled longer ago, but it was actually 2010.

OK, so Cinebench 11.5 was launched in 2010, so it makes sense that it would have been compiled using the pre-settlement Intel compiler, as would pretty much all performance-dependent software of that time, because even with the intentional crippling of AMD features in that compiler, it was (and still is) the compiler producing the highest-performing binaries for both AMD and Intel.

As such, for a 2010 benchmark it would still have been relevant, as it would have been reflective of real-world results: in the real world, the software you ran would have been compiled with that same compiler. On Windows, just about everything that isn't open source uses ICC for this reason. Even with the crippling of the time, it was the best compiler for AMD hardware.

Now, fast forward to Cinebench R15. This would have been compiled using a post-settlement ICC.

It shows an even larger difference between the i3-4360 core and the FX-8370E core than the Cinebench 11.5 results do.

Or are you alleging that Cinebench R15 still uses the pre-2010 settlement ICC compiler?


The truth is this: the biggest performance difference to come out of Intel's "crippling" was that the compiler disabled SSE2 on AMD hardware. However, evidence suggests that Cinebench R11.5, while using the affected compiler, actually used compiler options forcing SSE2 on all hardware, which means the benchmark is not subject to this problem.

If we read this we find:



It actually suggests that Cinebench 11.5 favors AMD, not Intel.

I think your fanboyism for AMD is making you buy into conspiracy theories that bear no truth in reality.

Yes, there was a time when the Intel compiler sabotaged AMD (and Cyrix) CPUs by omitting certain CPU extensions. Intel is an evil and criminal corporation for doing this (and I hate giving them my money because of it). This, however, is a historical issue, not a current one. With their settlement with the FTC and AMD, new versions of the compiler no longer have this defect.

The truth is that ever since Bulldozer launched in 2011, we have known that it took an extreme hit in IPC compared to the Phenom IIs that came before it, let alone compared to Intel's offerings at the time. Piledriver has improved this a little, but not enough, and at the same time Intel has improved IPC as fast or faster.

It should not be a surprise that the i3 performs better per core.

I was prepared to buy a Bulldozer CPU for my desktop in 2011. I had already bought a 990FX motherboard and was running it with a 1090T in anticipation. When the launch benchmarks came out, I was disillusioned, said "fuck it all" and hated spending money on an Intel system, but I did. The 990FX did server duty instead, at first with the 1090T, and later with an FX-8120 and finally an FX-8350 before being retired; its large core count performed admirably as I was running a virtualized environment.

Long story short. I was disillusioned, but moved on, rather than take a detour to crazy conspiracy theory land.

Summary:
There was a compiler problem. It is gone now. Benchmarks are not biased today (and possibly never were, due to the forcing of SSE2 support), and even if they were, it would have been a fair representation of how the CPU would perform in the wild, as EVERYONE except the open source community used that same compiler, as they should have, because even with the crippling the ICC compiler performed better on AMD hardware than the alternatives.

OK, read your insert again. It doesn't say ICC runs better on AMD, just that the difference from MSVC was greater on AMD. Nor does any part prove that there is no hardware check, as that sits in the ICC compiler from Intel and is not directed by Cinema 4D. It is this slight slant of words that I watch out for, and apparently you missed it completely.

ALWAYS LOOK FOR WHAT THEY DON'T SAY, more than for what they did.

Actually, I know it was mentioned in the fine print of Cinebench 11.5 that the ICC used did in fact run better on genuine Intel hardware (obviously), and as required by law it had to indicate that non-Intel hardware could not be guaranteed the same, say, refined treatment.

I am not debating Intel's position in performance, but rather that it isn't nearly as huge as too many posters would love to portray. And just for fun: benchmarks are in fact poor reflections of real-world use. Simply put, Cinebench only goes so far as to prove how Cinema 4D will perform on particular hardware; it cannot speak to how another program will run. Or how about RAM benching: it shows great differences between 1600 and 2400 speeds, yet the real world shows negligible gains, if any.

I have seen the real performance enough to see that all that benchmark crap, outside of games to a varying degree, tells us little to nothing about everyday use.

Why, when benching games on particular CPUs, do they never run more than just the game? Lots of users record their game, run communication software (e.g. Vent or Mumble), surf the web and listen to music, all while gaming. Any one of these added to a gaming benchmark for CPUs would quickly diminish the list of capable CPUs.

Why do so many in these forums take benchmarks done by reviewers as the final word? Have any of you ever gotten the exact same result? Look at any two sites and even they differ to wild degrees in their findings.

Any person in any computer enthusiast forum who recommends an i3 for a desktop user, other than for grandma's email and Facebook, should be banned. OK, maybe not that extreme, but really!

I have used and worked on many different CPU configs and can say there is something to having more cores in the Windows environment. An i3 is sluggish, especially from wake and when running multiple programs or even tabs in IE, Chrome or Firefox. In laptops maybe the difference isn't so bad, but in a desktop it is utterly pointless. I expected my brother's 7850K to be faster than my 8350. My 8350 @ 4.6 should be slower in single thread than his 7850K @ 4.4. Add to that, his RAM is 2133 to my 1600, both 16 GB. Yet my PC was still just a bit faster and smoother; not a lot, but noticeable.

Fact is, the 7850K has improved single-thread performance over my 8350. Benchmarks can easily prove this, yet in the real world it does not show. I attribute this largely to core count and the module design, which on my chip can populate one core per module up to four times before doubling up on a module, whereas the 7850K can only do that twice before populating a second core. Even Intel's HT doesn't work as well as AMD's modules for the extra threads, being software threads versus AMD's hardware threads; that, and HT being roughly a 30% boost versus AMD's 60-70% beyond the first 4 cores.

Anyway, read more carefully, and always look for what they are not saying, especially in the fine print.
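
If you want to test that module-scheduling idea from above yourself on Linux, you can pin workers either one-per-module or packed into shared modules and compare. A sketch, assuming the usual FX layout where logical CPUs (0,1), (2,3), (4,5), (6,7) share a module; verify the pairing with lstopo or /proc/cpuinfo on your own box first.

Code:
# Compare one-worker-per-module against packed module pairs on an FX chip.
# Linux-only (os.sched_setaffinity). Module pairing below is assumed.
import os
import time
from multiprocessing import Process

def burn(cpu, n=30_000_000):
    os.sched_setaffinity(0, {cpu})   # lock this worker to one logical CPU
    sum(i * i for i in range(n))     # pure-CPU busywork

def run_on(cpus):
    start = time.perf_counter()
    procs = [Process(target=burn, args=(c,)) for c in cpus]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    spread = run_on([0, 2, 4, 6])   # one core from each module
    packed = run_on([0, 1, 2, 3])   # two modules, both cores loaded
    print(f"one per module: {spread:.2f}s, packed pairs: {packed:.2f}s")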
 
I am not impressed with the new AMD FX83xx chips.
As you can see, my Q6600 Core 2 Quad is doing work units just as fast, core for core, as the FX83xx with all 8 cores working, despite the FX83xx being clocked 1000 MHz faster.
My AMD Phenom II 1045T, clocked 600 MHz slower, is faster too, core for core, with all 8 FX cores working.

Then when you run a work unit on every core, the FX83xx really takes a hit.
 
Here is the i3-4360 vs. the FX-8350.

It's obvious which chip is faster outside of games. Kinda sad I had to even link those.

It's one thing to harp on the per-core IPC advantage of Intel vs. AMD (it has obvious merit), but a whole other level to say that an i3 is a better chip in your typical workload than a full-on FX83xx. Hell, you can't even overclock the i3. You can overclock the hell out of an FX chip.

If you were talking about some HT-less i5, I'd give it to you even then. But an i3? Really?
 
I had the 9370 on my Sabertooth R2.0 Gen 3 mobo and I was using AMD OverDrive, and I had nothing but problems, but I've heard OverDrive is kinda tricky.

I returned it and got a 6350 instead. I thought about the 8370/8370E, but the 6350 does a decent enough job.
 
Here is the i3-4360 vs. the FX-8350.

It's obvious which chip is faster outside of games. Kinda sad I had to even link those.

It's one thing to harp on the per-core IPC advantage of Intel vs. AMD (it has obvious merit), but a whole other level to say that an i3 is a better chip in your typical workload than a full-on FX83xx. Hell, you can't even overclock the i3. You can overclock the hell out of an FX chip.

If you were talking about some HT-less i5, I'd give it to you even then. But an i3? Really?

Outside of games, it seems to be a mixed bag to me. Intel wins some, AMD wins others. That i3 is providing those results with less clock speed, half the cores, half the threads, half the cache, and less than half the power draw. I'd still choose the AMD between those particular two, though...if I was somehow limited to them in choice.
 
I am not impressed with the new AMD FX83xx chips.
As you can see, my Q6600 Core 2 Quad is doing work units just as fast, core for core, as the FX83xx with all 8 cores working, despite the FX83xx being clocked 1000 MHz faster.
My AMD Phenom II 1045T, clocked 600 MHz slower, is faster too, core for core, with all 8 FX cores working.

Then when you run a work unit on every core, the FX83xx really takes a hit.

That says more about the program than the CPU. Look at 7-Zip or Handbrake. One instance does not correlate to all programs, nor does it really say much, especially in your case, about the whole of users.
 
I upgraded from a 1045T overclocked to 3.2 to an FX-8320E and overclocked it to 4.2. I've definitely seen improvement. I do a lot of encoding, so I use programs that will take advantage of all 8 cores. Also, it is super smooth when running multiple things at a time. No complaints!
 

What exactly is your system, to have been running it for 10+ years? If I go back ten years, I remember just upgrading my AMD Athlon 1700+ to an AMD Athlon 3200+.

Could I still be using that old single-core 3200+ today? Yeah... but I couldn't play ~85% of the games I currently do. Yes, I'm a gamer, but just from a raw performance standpoint, in simply opening Microsoft Word or a web browser, or simply starting the machine up... I can't imagine it anymore. I remember upgrading my mother from an Athlon 2600+ to an AMD Athlon II X2 240 Regor dual-core at 2.8 GHz, and quite frankly I was blown away at how much faster and smoother it was. Night and day difference.
 
Here is the i3-4360 vs. the FX-8350.

It's obvious which chip is faster outside of games. Kinda sad I had to even link those.

It's one thing to harp on the per-core IPC advantage of Intel vs. AMD (it has obvious merit), but a whole other level to say that an i3 is a better chip in your typical workload than a full-on FX83xx. Hell, you can't even overclock the i3. You can overclock the hell out of an FX chip.

If you were talking about some HT-less i5, I'd give it to you even then. But an i3? Really?

Wow, I am actually very surprised at the performance of the i3 comparatively. I knew AMD had some issues as of late, but man. Not a troll comment; that is my honest opinion.
 
Wow, I am actually very surprised at the performance of the i3 comparatively. I knew AMD had some issues as of late, but man. Not a troll comment; that is my honest opinion.

Totally valid observation. It really is a shame that a single Haswell core has very nearly twice the IPC throughput of a "single" Bulldozer core in most standard x86 code.

But... it is an i3 versus an "8-core" Bulldozer. C'mon! ;):p:D
 
Here is the i3-4360 vs. the FX-8350.

It's obvious which chip is faster outside of games. Kinda sad I had to even link those.

It's one thing to harp on the per-core IPC advantage of Intel vs. AMD (it has obvious merit), but a whole other level to say that an i3 is a better chip in your typical workload than a full-on FX83xx. Hell, you can't even overclock the i3. You can overclock the hell out of an FX chip.

If you were talking about some HT-less i5, I'd give it to you even then. But an i3? Really?

Well, when you just look at a few benchmarks, sure... but when it comes to the total experience, from a fast chipset to an IPC-heavy CPU, you can't beat Intel. I've gamed on an i3 that felt much faster than an FX-8120 with a heavy OC (the i3 was my HTPC and the FX a client build), especially in Blizzard games but also in many other titles.
 
Well, when you just look at a few benchmarks, sure... but when it comes to the total experience, from a fast chipset to an IPC-heavy CPU, you can't beat Intel. I've gamed on an i3 that felt much faster than an FX-8120 with a heavy OC (the i3 was my HTPC and the FX a client build), especially in Blizzard games but also in many other titles.

FX chips are alright if you overclock them. Of course, if you're talking stock, then it will be a lot slower than an overclocked i3. An FX-8120 still plays everything out there pretty well. With DirectX 12 coming to the front now, the 8-core will be more future-proof than the i3. Performance in all the benchmarks will change; the 8-core might even take the lead. We don't even know yet. FX is ahead of its time; reality has to catch up with it. That's DX12.
 
Yeah, I doubt DX12 will save Bulldozer...

It doesn't really need saving, and DX12 already shows great improvements on any CPU with more than 4 cores. Bulldozer's issue upon release was no optimization; it was a brand-new architecture. After drivers and said optimizations it is a decent CPU, granted not the best.
 
It doesn't really need saving, and DX12 already shows great improvements on any CPU with more than 4 cores. Bulldozer's issue upon release was no optimization; it was a brand-new architecture. After drivers and said optimizations it is a decent CPU, granted not the best.

Except that, all the while AMD banked on a new architecture, Intel has been dominating. And while that architecture may be looking better now, Intel has had how many years to improve on their tech in the meantime?

I never understood AMD banking on new tech being better in the very long run and letting Intel get so far ahead. Everyone can complain about Intel prices and say AMD is the better budget build, and while that's true for many... this is an enthusiast forum. We should know full well why Intel CPUs cost more.

I too want to see AMD release an architecture that runs neck and neck with Intel's highest offering. I'd love to go back to AMD... but I don't want to do it on three-year-old tech. And like someone else said, I'm not so sure we'll see it either.
 
For those that are interested in the differences with my setup (dual Sapphire 7950s @ 950 MHz):

1045T OC'd to 3.1 GHz, air cooled:
In-game temp: 55C
3DMark Fire Strike bench: 9050

FX-8350 at 4.1 GHz, water cooled:
In-game temp: 51C
3DMark Fire Strike bench: 9135

Soooo... yeah, not a whole lot of difference for another GHz of CPU power.
 
For those that are interested in the differences with my setup (dual Sapphire 7950s @ 950 MHz):

1045T OC'd to 3.1 GHz, air cooled:
In-game temp: 55C
3DMark Fire Strike bench: 9050

FX-8350 at 4.1 GHz, water cooled:
In-game temp: 51C
3DMark Fire Strike bench: 9135

Soooo... yeah, not a whole lot of difference for another GHz of CPU power.

Seeing as your GPUs are the same, I wouldn't expect that big of a difference, as Fire Strike is primarily a GPU benchmark.
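
Rough illustration of why: 3DMark composes the overall number from the sub-scores as a weighted harmonic mean, with the graphics tests weighted heaviest. The exact weights below are assumed for illustration, not Futuremark's published figures, but the shape of the result holds either way: a big Physics (CPU) gain barely moves the overall score.

Code:
# Why a CPU-only upgrade barely moves a Fire Strike overall score:
# the overall result is a weighted harmonic mean dominated by graphics.
# Weights here are assumed for illustration (graphics-heavy by design).
def overall(graphics, physics, combined, wg=0.75, wp=0.15, wc=0.10):
    return (wg + wp + wc) / (wg / graphics + wp / physics + wc / combined)

# Same GPUs, so hold graphics/combined fixed and bump only physics ~30%:
before = overall(graphics=10000, physics=7000, combined=4500)
after  = overall(graphics=10000, physics=9000, combined=4500)
print(f"before: {before:.0f}, after: {after:.0f}, "
      f"gain: {100 * (after / before - 1):.1f}%")
# Roughly a 4% overall gain from a 30% physics (CPU) improvement.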
 
For those that are interested in the differences with my setup (dual Sapphire 7950s @ 950 MHz):

1045T OC'd to 3.1 GHz, air cooled:
In-game temp: 55C
3DMark Fire Strike bench: 9050

FX-8350 at 4.1 GHz, water cooled:
In-game temp: 51C
3DMark Fire Strike bench: 9135

Soooo... yeah, not a whole lot of difference for another GHz of CPU power.

That has to be the worst water cooling ever. At 4.6 GHz with a 120mm rad I only get to 49C with an R9 290. At 4.2 GHz I can't even break 40C on IBT.
 
That has to be the worst water cooling ever. At 4.6 GHz with a 120mm rad I only get to 49C with an R9 290. At 4.2 GHz I can't even break 40C on IBT.

Different WC setups are going to perform very differently, especially depending on the brand/model of chassis, the case fans and fan speeds, and whatever else is in there, like add-in cards, drives, etc. 51C max in-game temps is pretty damn low for that kind of CPU, IMO.
 
My 8320 @ 4.5 runs 45-50C after a couple hours of gaming on my Kraken X61. When encoding I actually have to turn the fans up to full, and I'll run 55C all night. Granted, I have the rest of the case fans running slow enough to barely hear them, so that probably doesn't help.

I can say this thing heats up a room pretty well >.>
 