Upgrade to FX 4100?

rage4order

Was doing some browsing on the 'Egg and saw the (new to me) FX series CPUs. Now I'm getting the itch to upgrade my processor. My current rig is below, and I'd like to know if the FX-4100 would be a drop-in replacement on my current mobo. I've seen that some people have to update their BIOS for it to be compatible. Will I need to update mine? Also, is this a good mobo for that CPU, or should I also look at getting a new one? I'd prefer to keep my cost as low as possible, and if it's a drop-in replacement I can keep all my other current hardware.
EDIT: Would it be just as good an upgrade if I just looked at a Phenom quad core, if I really wanna keep the cost down?
 
Do not get an FX-4100; it's probably the worst CPU AMD has produced this year. Even the older Phenom II quad cores outperform it. Get a nice 965 and call it a day.
 
Go with what xbanzai suggested; moving to the 4100 is just a sideways upgrade. Performance-wise, the 550 and the 4100 should be very similar. You should be able to find a used 955 or 1055T/1090T (if the motherboard supports them) for a pretty decent price.
 
For gaming, grab a 965 or 955. If you are into overclocking, seriously consider the 955: I recently purchased one and am currently running it at 4.1 GHz. You can't argue with that value. I'm curious whether the 'higher bin' Denebs are easier to OC, as the fastest AMD quad is the PII 980 at 3.8 GHz, and 4.1 isn't really a jump from there. Something to consider. AMD's Denebs are some of the best-value CPUs for gaming. Seriously, avoid the FX CPUs like the plague.
 
Well, it looks like my decision now is between the 955 and the 965. Thanks for all the input everyone!
 
Correct. Motherboard support aside, OP, you'd pretty much be upgrading to a dual-core chip with Hyper-Threading.
Please stop spreading this FUD. We all know FX is a piece of shit, but just because the cores have shared resources doesn't mean there are half as many of them. The FX-4100 is a quad-core chip, period. A shitty quad core IMHO, but a quad core nonetheless.
 
And FTR, the 4100 will be faster than your current chip in just about everything except single-threaded tasks. But a bigger upgrade would be a 1055T or 1090T.
 
Please stop spreading this FUD. We all know FX is a piece of shit, but just because the cores have shared resources doesn't mean there are half as many of them. The FX-4100 is a quad-core chip, period. A shitty quad core IMHO, but a quad core nonetheless.

That is why I said "pretty much". Just trying to put the performance in perspective for the OP.
 
Funny, I've got pretty much the same CPU/MB as you do. Honestly, I don't think you're going to see much benefit from a CPU upgrade right now; if I were you, I'd focus on your GPU instead. I'm running Skyrim right now at perfectly fine frame rates, so I think even with newer games these processors still have a good kick.
 
Please stop spreading this FUD. We all know FX is a piece of shit, but just because the cores have shared resources doesn't mean there are half as many of them. The FX-4100 is a quad-core chip, period. A shitty quad core IMHO, but a quad core nonetheless.

The modules share resources, so each individual core cannot operate if 'sliced in half'; thus they cannot be considered true cores. The FX-4100 is not a true quad core. It is a dual-module part, which is apparent in its performance. In fact, if you look at the FX series parts as having half their advertised core count (counting modules, not 'cores'), they are actually impressive. If we saw the FX-81xx as a quad core, it's almost as fast as a 2500K in some tests, and pretty close in games, which makes the architecture a step forward. As soon as we look at them by their advertised core counts, however, they are rubbish in a hat, with less than half the per-core IPC of Intel. I think AMD fans owe it to themselves to see "modules" as cores; otherwise you are just going to be disappointed.
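
If you want to see where the OS itself draws the line, here's a rough Python sketch. Illustration only: it assumes Linux with the standard sysfs topology files, and a kernel new enough to model each FX module as a group of sibling CPUs; the numbering will vary from box to box.

Code:
# Group logical CPUs by the sibling sets the Linux kernel exposes in sysfs.
# On an FX chip whose kernel models CMT, each module shows up as one sibling
# group holding its two "cores"; on a Phenom II, every group has one entry.
import glob

groups = set()
for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"):
    with open(path) as f:
        groups.add(f.read().strip())  # e.g. "0-1" for the two cores of module 0

print(len(groups), "sibling groups (modules/physical cores):")
for g in sorted(groups):
    print("  siblings:", g)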
 
Save some money, get a 955/965 quad, and be done with it. If your board will take an X6, do it; much better than Faildozer.
 
The modules share resources, so each individual core cannot operate if 'sliced in half'; thus they cannot be considered true cores.

Still true cores, regardless of how crappily they perform. They just share more parts than traditional chips do. On some motherboards you can actually disable one of the cores in each module, so in reality you could run an 8150 as an actual single core, or as you would say, a "½ core". Sandy Bridge shares an L3 cache between the cores, so really it's a single core then.
 
Please stop spreading this FUD. We all know FX is a piece of shit, but just because the cores have shared resources doesn't mean there are half as many of them. The FX-4100 is a quad-core chip, period. A shitty quad core IMHO, but a quad core nonetheless.

It's actually a decent upgrade if you overclock it. You just need a good cooling solution.
 
The modules share resources, so each individual core cannot operate if 'sliced in half'; thus they cannot be considered true cores. The FX-4100 is not a true quad core. It is a dual-module part, which is apparent in its performance. In fact, if you look at the FX series parts as having half their advertised core count (counting modules, not 'cores'), they are actually impressive. If we saw the FX-81xx as a quad core, it's almost as fast as a 2500K in some tests, and pretty close in games, which makes the architecture a step forward. As soon as we look at them by their advertised core counts, however, they are rubbish in a hat, with less than half the per-core IPC of Intel. I think AMD fans owe it to themselves to see "modules" as cores; otherwise you are just going to be disappointed.



Totally agree; I used to write optimized assembly language on AMD chips years ago, around the time of the Athlon XP/Athlon 64. The architecture was great: three instructions per clock on the front end, six execution units on the back end (including three symmetrical integer/ALU units).

It looks like AMD wanted Hyper-Threading but the patent lawyers said no. So we get two wimpy integer units that don't share (such a shame) and one half-FPU per core (a total disaster).

Doesn't matter if they are true cores or not: AMD should have called the modules "cores" and the cores "threads".


Bulldozer will sink the company. RIP.
 
Still true cores, regardless of how crappily they perform. They just share more parts than traditional chips do. On some motherboards you can actually disable one of the cores in each module, so in reality you could run an 8150 as an actual single core, or as you would say, a "½ core". Sandy Bridge shares an L3 cache between the cores, so really it's a single core then.

False. L3 cache is characterized by its shared nature. If the L3 cache were split up and 'partitioned' out to individual cores, it would just be really slow L2.

I'm not saying FX 'cores' are not true cores because they operate slowly when the other core in the module is disabled (the opposite is true, actually). I'm saying that if you were to remove/disable all of their shared components, they would not function at all, whereas if (and when) you disable the L3 on a Sandy Bridge CPU, it still functions.

Kind of like bacteria: look at one under a microscope and notice that it looks a hell of a lot like a human blood cell. To the naked eye, a colony of bacteria looks like a single object, and a human being is a single object. If you were to isolate a single bacterial cell from the colony, it would still have a chance to survive and even start a new colony. If you were to isolate a single human blood cell, it would not be able to function, and it would die quite quickly.

Now this is a stretch, but I hope you can understand where I'm coming from.
 
Funny, I've got pretty much the same CPU/MB as you do. Honestly, I don't think you're going to see much benefit from a CPU upgrade right now; if I were you, I'd focus on your GPU instead. I'm running Skyrim right now at perfectly fine frame rates, so I think even with newer games these processors still have a good kick.
Thanks for that little bit of advice. The thing is, I WAS originally looking at a new GPU, but I guess I got a little excited when I saw AMD came out with a new processor. I guess I kinda jumped the gun, but now that I see they're not really worth it, I might revert to plan A and get a new GPU! I would really like at least a quad core, though, so if I can scrape up the $$ I may just do a new GPU AND a 965. Black Friday IS right around the corner! :D
 
I'm not saying FX 'cores' are not true cores because they operate slowly when the other core in the module is disabled (the opposite is true, actually). I'm saying that if you were to remove/disable all of their shared components, they would not function at all, whereas if (and when) you disable the L3 on a Sandy Bridge CPU, it still functions.

OK, but you can take the 4100 and disable three of its cores so it's a single-core CPU. So to you it's actually a half-core processor running?
 
Ahh, that's what I was afraid of.
I know this may be a dumb question, but would it make better sense to go with a quad-core Black Edition Phenom as opposed to the 1055T? Reason I ask is the X4 965 is $20 cheaper with a faster clock speed.

If you are not overclocking, yes, go for the 965.
 
OK, but you can take the 4100 and disable three of its cores so it's a single-core CPU. So to you it's actually a half-core processor running?

More like it's a single core running on only 60% of its silicon. Like I stated (in the actual phrase you quoted me on), the BD modules require the shared components in order to function. Disabling a "core" only disables 40% of the module.
 
I've been playing around with my FX-4100 (got it at a very, very sweet deal for $79), and can push it to 4.4 GHz using stock voltage and a Cooler Master heatsink / 92 mm fan combo. I suspect that I can OC it even higher if I want to up the voltage a bit.

It's actually not a bad performer at all at 4.4 GHz. Admittedly, not as nice as it SHOULD have been, but if you can find one at a bargain (there are a good number of disappointed folks dumping theirs) and have a decent motherboard that can OC it, you'll get some nice OCs.

I'm actually pretty happy with what I got. It will certainly hold me over until Piledriver gets here.
 
More like it's a single core running on only 60% of its silicon. Like I stated (in the actual phrase you quoted me on), the BD modules require the shared components in order to function. Disabling a "core" only disables 40% of the module.

How is that any different from a Core 2 Duo? Same deal: there are shared components there too. And guess what? If you disable one core, that's less than 50% of the silicon, and the CPU gets better single-threaded performance. I guess Core 2s weren't really dual cores...
...or your argument is pointless and we can all move on from this "not real cores" BS.
 
How is that any different from a Core 2 Duo? Same deal: there are shared components there too. And guess what? If you disable one core, that's less than 50% of the silicon, and the CPU gets better single-threaded performance. I guess Core 2s weren't really dual cores...
...or your argument is pointless and we can all move on from this "not real cores" BS.

If you were to disable a single core on a C2D, you would be left with one core operating with free rein over the L3, and the load on the master instruction pipeline cut in half. That is true of just about any multi-core processor. However, traditional multi-core processors have a single scheduler per core; a single fetch, decode, and L2 cache per core; and, in more recent CPUs, dedicated 128- and 256-bit floating-point units per core (some use two 128-bit FPUs in tandem in lieu of a single 256-bit FPU). Bulldozer 'cores' share a whole lot more than simply L2 cache: two cores share a single fetch, decode, and floating-point scheduler, a single 128-bit FP unit per 'core', and the ability to perform only one 256-bit FP calculation per clock, per MODULE.

I'll make it easy for you: if you were to isolate a single "core" from a Core 2 Duo, Phenom II, or any real multi-core CPU, you would get an independent miniature CPU, with little extra architecture needed to make it work, so to speak. If you were to isolate the 'core' part of a Bulldozer module, you would be stuck with something that simply cannot act on its own without a major architectural overhaul, as some of the major pieces of a functioning CPU are completely missing: they are shared within BD's 'modules'.

Get it? If you don't, and just want to argue, I can't help you.

Anywho, Unabomber: good job with the purchase, and even better with the OC. The FX-4100 is a step up from most dual cores, though I wouldn't pay the retail price for it. Tell us how it goes!
 
If you were to disable a single core on a C2D, you would be left with one core operating with free rein over the L3, and the load on the master instruction pipeline cut in half. That is true of just about any multi-core processor. However, traditional multi-core processors have a single scheduler per core; a single fetch, decode, and L2 cache per core; and, in more recent CPUs, dedicated 128- and 256-bit floating-point units per core (some use two 128-bit FPUs in tandem in lieu of a single 256-bit FPU). Bulldozer 'cores' share a whole lot more than simply L2 cache: two cores share a single fetch, decode, and floating-point scheduler, a single 128-bit FP unit per 'core', and the ability to perform only one 256-bit FP calculation per clock, per MODULE.

I'll make it easy for you: if you were to isolate a single "core" from a Core 2 Duo, Phenom II, or any real multi-core CPU, you would get an independent miniature CPU, with little extra architecture needed to make it work, so to speak. If you were to isolate the 'core' part of a Bulldozer module, you would be stuck with something that simply cannot act on its own without a major architectural overhaul, as some of the major pieces of a functioning CPU are completely missing: they are shared within BD's 'modules'.

Get it? If you don't, and just want to argue, I can't help you.

Anywho, Unabomber: good job with the purchase, and even better with the OC. The FX-4100 is a step up from most dual cores, though I wouldn't pay the retail price for it. Tell us how it goes!

What is it that defines a core to you? L2 cache? L1 cache? A scheduler? An FPU?

BTW, Core 2s had no L3 cache; the L2 cache was shared.

So what is shared? Fetch + decode hardware? Each core has its own L1 cache and a shared L2.

How about ARM processors, are they REAL cores? If you take a Tegra and slice it in half, will it work?

You can call it whatever you want, a half core, a half module; it doesn't change what it is, or how it performs. In each "module" there are two execution units, which is what we have come to call "cores" today, just like Nvidia calls its SPs "CUDA cores".

You can ring your bell till you're blue in the face; you're arguing pretty much the same useless crap AMD did back when they said the Q6600 was not a real quad core because it was two dual cores slapped together. It's pointless and a waste of bits on the interwebs.
 
What is it that defines a core to you? L2 cache? L1 cache? A scheduler? An FPU?

BTW, Core 2s had no L3 cache; the L2 cache was shared.

So what is shared? Fetch + decode hardware? Each core has its own L1 cache and a shared L2.

How about ARM processors, are they REAL cores? If you take a Tegra and slice it in half, will it work?

You can call it whatever you want, a half core, a half module; it doesn't change what it is, or how it performs. In each "module" there are two execution units, which is what we have come to call "cores" today, just like Nvidia calls its SPs "CUDA cores".

You can ring your bell till you're blue in the face; you're arguing pretty much the same useless crap AMD did back when they said the Q6600 was not a real quad core because it was two dual cores slapped together. It's pointless and a waste of bits on the interwebs.

Wikipedia said:
A multi-core processor is a single computing component with two or more independent actual processors (called "cores")

I think you may need to draw the line as to what you believe a 'core' is. You can pick apart a CPU until you are left with a single transistor, and by your logic that means the FX-8150 is actually a two-billion-core part. If you are counting execution units and the total number of concurrent threads per cycle, then an i7-2600 is actually an 8-core part, as Hyper-Threading is not solely software: it does manifest itself in hardware.

And changing the perceived number of cores does change the performance, at least relative to your point of reference. The FX-4100's performance is quite acceptable for a dual-core CPU. Once you think of it as a quad, it's slower than molasses.
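
For a concrete feel for how fuzzy the counting already is, here's a quick Python sketch. Illustration only: it assumes you have the third-party psutil package installed (pip install psutil), and what the FX reports depends on how your kernel models the modules.

Code:
# Compare the OS's idea of "logical" vs. "physical" CPUs.
import psutil

print("logical CPUs: ", psutil.cpu_count(logical=True))   # hardware threads
print("physical CPUs:", psutil.cpu_count(logical=False))  # distinct cores

# An i7-2600 with Hyper-Threading reports 8 logical / 4 physical. What an
# FX-8150 reports depends on whether the kernel treats each module as one
# physical core with two siblings, or as two full cores.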

I won't continue trying to enlighten you. You are childish, and unwilling to think outside your set paradigm. That, and we are off topic for this thread.
 
I think you may need to draw the line as to what you believe a 'core' is. You can pick apart a CPU until you are left with a single transistor, and by your logic that means the FX-8150 is actually a two-billion-core part. If you are counting execution units and the total number of concurrent threads per cycle, then an i7-2600 is actually an 8-core part, as Hyper-Threading is not solely software: it does manifest itself in hardware.

And changing the perceived number of cores does change the performance, at least relative to your point of reference. The FX-4100's performance is quite acceptable for a dual-core CPU. Once you think of it as a quad, it's slower than molasses.

I won't continue trying to enlighten you. You are childish, and unwilling to think outside your set paradigm. That, and we are off topic for this thread.

And how about the rest of that quote? Also, we all know wiki is just the best place to look.

which are the units that read and execute program instructions.[1]
And further down we have...

an octa-core processor contains eight cores (e.g. AMD FX-8150)
So what part of the processor that is missing makes it not a processor? How about ARM chips?

I think well outside the box if there is a need for it; here there isn't. A 2600K has four cores; it uses some clever scheduling to feed a single core two threads at a time. A very different approach from adding two "weaker" cores.
 
I've been playing around with my FX-4100 (got it at a very, very sweet deal for $79), and can push it to 4.4 GHz using stock voltage and a Cooler Master heatsink / 92 mm fan combo. I suspect that I can OC it even higher if I want to up the voltage a bit.

It's actually not a bad performer at all at 4.4 GHz. Admittedly, not as nice as it SHOULD have been, but if you can find one at a bargain (there are a good number of disappointed folks dumping theirs) and have a decent motherboard that can OC it, you'll get some nice OCs.

I'm actually pretty happy with what I got. It will certainly hold me over until Piledriver gets here.

What sort of temps are you getting, and do you have the ability to measure power draw? If so, is it as bad as everyone is making it out to be? I'm hearing an almost binary love-or-hate reaction to the FX processors; I'd like to get a more educated view on them.
 
What sort of temps are you getting, and do you have the ability to measure power draw? If so, is it as bad as everyone is making it out to be? I'm hearing an almost binary love-or-hate reaction to the FX processors; I'd like to get a more educated view on them.


Unfortunately, I don't have the means to measure power draw.

However, I can tell you that with the above 92 mm fan/heatsink combination, along with Arctic Ceramique 2, I'm getting about 28 C at idle, and after an extended session of CPU crunching the temperature never gets above 52 C. I suspect that the stock heatsink and fan would be similar, maybe a few degrees higher.


Of course I'm disappointed at how Bulldozer underperformed at its release. However, that doesn't make it a horrible chip. It smoothly runs all of my games at good detail levels (GeForce GTX 460 1 GB video card), and with the 800 MHz overclock tacked on, I'm quite pleased with its stability.

Paying $109 for it? I wouldn't recommend that, since the Black Edition Phenom II X4s can be found for ten bucks more and give better performance. Nobody can dispute this.

However, with a lot of people selling theirs off after the initial wave of disappointment, I consider it an excellent buy at $79.00. It's an excellent grunt-level chip.

Again, it's simply a chip to hold me over until Piledriver proves itself one way or the other. Since I'm not one who maxes out every last detail setting on the latest games, it works just fine for me.

The way I see it, it's the same thing I did when I was given a Pentium 4 system at work after the initial wave of disappointment with those. It still wasn't a *terrible* performer, especially since the ATi Radeon 9500 Pro it was paired with gave me decent performance in 2D and 3D apps. That system did quite fine for many years, until it was replaced with a Conroe-based system.
 
You'd think with a new CPU folks would do a small bit of research and look at how good they actually are.

Fine, I can see the bargain-buy aspect here... but I still think AMD is going to have to do some serious revisions on the next stepping to get things on track.
 
Quick question for you guys: I was really looking at the 955, but Newegg has the 960T for $109 right now, which seems like a good price. Plus, my Google coupon would knock it down to $89 OOP. Would that be a pretty good tradeoff? I should have gotten the 955 when it was $105 but took too long to decide; that 960T looks pretty tempting too. Plus, it's only 95 W.
 
The 960T has a chance of unlocking to all six cores, essentially making it a 3 GHz unlocked six-core Thuban. Even if it doesn't unlock, the chip itself overclocks very well. If I weren't typing this on a quad-core Phenom II at 3.9 GHz, I'd jump on that deal myself. It'll definitely be an upgrade for you, especially considering the price tag.
 
A module is two cores; some folks have to get over it. By manually scheduling processes/threads, I was able to get 90% of the performance in Cinebench using both cores in one module compared to using two modules with one core each. They are not half cores. Now, if you throw in other tasks and constantly branch other threads (schedule them) to different cores so the caches keep getting rewritten over and over from slow memory, what does one expect?

The BD FPUs run at faster clocks, right? The main issue is cache thrashing from poor thread scheduling, IIRC.

Yes, I believe that to be so, and it can be rather detrimental for BD with how Windows throws threads around at the moment.
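
For anyone who wants to try that pinning experiment themselves, here's a rough Python sketch. Illustration only: it assumes Linux (for os.sched_setaffinity), an FX chip where the kernel numbers module siblings adjacently (CPUs 0/1 = module 0, CPUs 2/3 = module 1, and so on), and a dumb FP loop standing in for Cinebench.

Code:
# Run two FP-heavy workers, first on both cores of one module, then on one
# core in each of two modules, and compare wall-clock times.
import os
import time
from multiprocessing import Process

def fp_work(cpu):
    os.sched_setaffinity(0, {cpu})  # pin this worker to one logical CPU
    x = 1.0
    for _ in range(20_000_000):     # simple FP-bound loop
        x = x * 1.0000001 + 0.0000001

def run_pair(cpus):
    procs = [Process(target=fp_work, args=(c,)) for c in cpus]
    start = time.perf_counter()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    print("same module (CPUs 0,1): %.2f s" % run_pair([0, 1]))
    print("two modules (CPUs 0,2): %.2f s" % run_pair([0, 2]))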
 
A module is two cores; some folks have to get over it. By manually scheduling processes/threads, I was able to get 90% of the performance in Cinebench using both cores in one module compared to using two modules with one core each. They are not half cores. Now, if you throw in other tasks and constantly branch other threads (schedule them) to different cores so the caches keep getting rewritten over and over from slow memory, what does one expect?

Until you hit 256-bit FP, and then you've got a 4-core, 4-thread chip. I'm not disagreeing with you, though. These are about as "core" as you can get, but the term itself is defined by the task the CPU is presented with, not the label on the front of the box.
 
You don't run into much 256-bit FP, though... I mean, unless you're doing crypto or something, which BD actually performs quite well on, you're just going to be using 32-64 bit FP...

And like I mentioned above, I believe the FPU runs faster so it doesn't bottleneck the two cores as much, since a module can only run one 256-bit FP calculation at a time, unless it does two 128-bit ops at a time, one from each core.
 