AMD FX-9590 AM3+ Processor

HardOCP News

Benchmark Reviews had a lot of good things to say about the AMD FX-9590 AM3+ CPU they just reviewed.

The newest installment of AMD FX CPUs is finally upon us. September 2014 marked the release of a few new FX CPUs, including the FX-8370, the FX-8370E, and the FX-8320E. One of the side-effects of AMD’s release is a price drop in their existing CPUs, including the flagship FX-9590 CPU. In this article, Benchmark Reviews takes a look at the CPU that sits atop the FX line, the AMD FX-9590 AM3+ Processor.
 
There's nothing wrong with the chipset, other than it isn't PCIe 3.0 (it's still 2.0). You have to go to an AMD APU to get PCIe 3.0 these days. 3.0 makes a difference for an IGP like the ones in those APUs, of course... but it hardly makes a difference at all with a discrete GPU installed.

Problem is... although I'm an AMD fan from way back (always support the underdog), I'd have to get a new motherboard and a new PSU to support the 9xxx FX CPUs. Neither of the ones I presently have will support a 220W CPU--although I do have an AM3+ motherboard.

I can't understand why AMD is promoting these things... the FX 8xxx series is a far better buy, and I find the 95W 8-core to be an exceptional buy--one my current motherboard & PSU could support (I'm currently running a 95W FX-6300, 6-core). Really, AMD desperately needs to launch FX CPUs with Steamroller cores, and they need a newer chipset for the FX series--one that supports PCIe 3.0. Performance-wise, as I mentioned, with a discrete GPU (what you have to use with an FX CPU), there's functionally no difference between 2.x and 3.x. It's just a marketing thing, though... people like the higher numbers even if they don't mean much of anything.

Kind of like nVidia's "new" take on super-sampling FSAA... called what_I_forget... then AMD copies with "VSR" for a few of its GPUs--super-sampling dusted off and re-marketed... ;) (nVidia's always been proficient at dusting off old stuff and selling it again as something "new.") The only difference I can see is that with these "down-sampling" resolutions the end user gets to pick them, as opposed to picking the 2x-8x FSAA levels in driver-based super-sampling controls. Anyway, marketing seems to be important to some people.
 
I can't take AM3 seriously when there's SandyBridge, IvyBridge and Haswell.

I'd take an used 2500k anyday of the week over a FX9590.
 
I can't take AM3 seriously when there's SandyBridge, IvyBridge and Haswell.

I'd take an used 2500k anyday of the week over a FX9590.
"an used" indeed...
http://www.cpubenchmark.net/compare.php?cmp[]=1780&cmp[]=804

My FX-8350 has been a beast for years. It compares appropriately.

However, what I dislike is the AMD marketing vs. Intel's.
The "8 core" talk. Both the FX-8350 and the Haswell Core i7s run two threads on each of four cores.
So why does AMD market their CPU as 8-core, while Intel markets theirs as 4-core? They both truly only have four cores, split up into 8 logical cores.

Why don't CPU reviews always explain that, instead of only sometimes?
 
However, what I dislike is the AMD marketing vs. Intel's.
The "8 core" talk. Both the FX-8350 and the Haswell Core i7s run two threads on each of four cores.
So why does AMD market their CPU as 8-core, while Intel markets theirs as 4-core? They both truly only have four cores, split up into 8 logical cores.

Why don't CPU reviews always explain that, instead of only sometimes?

The reason is because the cores are laid-out differently.

Intel puts lots of ALU/FPU dispatch ability inside each core, allowing for a very high peak single-threaded throughput. When a single thread cannot occupy all the execution resources, another thread can attempt to make use of the free units (hyperthreading). Hyperthreading generally yields a 15-25% performance increase.

In each AMD "core," the number of ALUs is lower (2 versus 3 on Intel/Phenom II) to reduce core size, and the FPU resources (same as you have in one Intel core) are shared between the two cores in the combined module. The 4 decode units are also shared between the cores, so you have a "partial" SMT setup here.

When you have a multi-threaded load, the module can make use of the second core, and will see up to 80% scaling (typically limited by decode bandwidth dropping to 2-per-core) so long as you are not FPU-limited. If you are only using a single thread in the module, all four decoders (with some scheduling limitations, as well as the total TDP budget for the module) are dedicated to your activity, increasing performance by 5-10%.

This is why Windows treats Intel's threads and AMD's modules the same: it is advantageous for the OS to schedule a single thread to each Intel core before it uses the logical threads, just as it is advantageous to schedule a single thread to each AMD module before it uses the second core.

When you can get an 80% speedup under most loads, it's not out-of-character for AMD to call each module "dual-core." Intel sees a whole lot less speedup, so they make the distinction of "threads versus cores."
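To put rough numbers on that scheduling logic, here's a toy model. The ~80% module-scaling and ~20% hyperthreading figures are just the ballpark numbers from this post, not measurements, and the model ignores FPU and decode limits entirely:

```python
# Toy model: relative throughput when N software threads are spread across
# "units" (AMD modules or Intel physical cores). The OS schedules one thread
# per unit first; a unit running a second thread only adds a fraction of a
# core's worth of throughput. Gain figures are the rough ones quoted above.

def estimated_throughput(threads, units, second_thread_gain):
    """Relative throughput in 'single-core units' for a given thread count."""
    first = min(threads, units)                   # one thread per unit first
    second = max(0, min(threads - units, units))  # units running a 2nd thread
    return first + second * second_thread_gain

# FX-8350-style chip: 4 modules, second core in a module adds ~80%
print(estimated_throughput(8, 4, 0.80))  # ~7.2 "cores" worth of throughput

# Haswell i7-style chip: 4 cores, a hyperthread adds ~20%
print(estimated_throughput(8, 4, 0.20))  # ~4.8 "cores" worth of throughput
```

Which is exactly why the OS fills one thread per module (or per physical core) before it starts doubling up.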
 
I can't take AM3 seriously when there's SandyBridge, IvyBridge and Haswell.

I'd take an used 2500k anyday of the week over a FX9590.

Problem with that is the only Intels worth a damn right now are i5s and i7s. Right now the i3s and G3258s are worthless. The reason is that nearly every damn game now requires a quad core. Even though AMD's APU quad cores are lacking in performance, the FX line of CPUs is pretty decent.

So as a gamer you have two sets of choices: the i5/i7s, or the FX 6300+ CPUs. The sweet spot of competition is between the i5s and the 8350, which is about $180. The i5s generally win 90% of benchmarks, but not by a lot.

So why do these dinosaurs still roam the earth? 'Cause for that price and performance they're not a bad deal. The 8320s were going for $120, but their prices are fluctuating a lot right now, probably due to demand. If you can only afford a G3258 from Intel, you would still be better off getting an AMD 760K. Intel may be faster single-core, but in that price range they only offer dual cores. And unless you hope the community continues to provide hacks for some games, it's probably going to be quad core or nothing.

It's sad that Intel is still trying to pass dual-core CPUs off as cutting edge. It's their fault half the Steam users are still on dual cores.
 
I just can't take AM3+ seriously when the chipset is from the dark ages.

Kinda how I feel about CPUs in general. It was fun 10 years ago, but they really aren't as relevant as they used to be. If we're not talking about GPUs, I simply can't take the discussion seriously.
 
I think that AMD has some pretty respectable performance in Linux though, and that if you're a developer, you could definitely take a look at this CPU. I know I would like to eventually get an AMD FX chip to play around with on a dev box. :)

EDIT: Never mind--I guess the 8370 really is the CPU from AMD now.
 
However, what I dislike is the AMD marketing vs. Intel's.
The "8 core" talk. Both the FX-8350 and the Haswell Core i7s run two threads on each of four cores.
So why does AMD market their CPU as 8-core, while Intel markets theirs as 4-core? They both truly only have four cores, split up into 8 logical cores.

Why don't CPU reviews always explain that, instead of only sometimes?

Because that's not really correct. AMD's modules are not like Intel's hyperthreading. An 8-core AMD chip has 8 integer cores. Each 'module' (2 cores) shares certain components (like the FPU), but they aren't faking the number of physical integer cores.

AMD chips perform slower because their individual cores are significantly slower than Intel's current architecture, though they perform well in tasks that can take advantage of the extra cores.
 
Because that's not really correct. AMD's modules are not like Intel's hyperthreading. An 8-core AMD chip has 8 integer cores. Each 'module' (2 cores) shares certain components (like the FPU), but they aren't faking the number of physical integer cores.

AMD chips perform slower because their individual cores are significantly slower than Intel's current architecture, though they perform well in tasks that can take advantage of the extra cores.

This is correct.
Here is a more detailed breakdown of an AMD "8-core" CPU:

4 modules, each with 2 CPU cores, 1 shared FPU, 1 shared write combining cache, 1 shared L2, 1 shared decoder, 1 shared dispatcher.
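That layout, sketched as a data structure (counts taken straight from the list above; just an illustration, not anything official):

```python
# Resource counts per Bulldozer/Piledriver module, per the breakdown above.
FX_MODULE = {
    "integer_cores": 2,        # two independent integer cores
    "shared_fpu": 1,           # one FPU shared by both cores
    "shared_write_combining_cache": 1,
    "shared_l2": 1,
    "shared_decoder": 1,
    "shared_dispatcher": 1,
}
MODULES = 4  # an FX-8350-class "8-core" chip

total_integer_cores = MODULES * FX_MODULE["integer_cores"]
total_fpus = MODULES * FX_MODULE["shared_fpu"]
print(f"{total_integer_cores} integer cores, {total_fpus} FPUs")
# -> 8 integer cores, 4 FPUs
```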
 
I don't understand why these use 95 more watts than the next model down. What is drawing so much power?
 
I don't understand why these use 95 more watts than the next model down. What is drawing so much power?

The extra speed. It's just an overclocked FX-8350, that's all... that's the extra power. An FX-8350 at similar voltage and speed will use more or less the same power.
 
I don't understand why these use 95 more watts than the next model down. What is drawing so much power?

All processors, regardless of manufacturer, pull a helluva lot more power when overclocked. The FX-9590 is a factory overclock by AMD strictly for enthusiasts. If you overclock an Intel 6-core to just 4.4GHz, for example, it pulls almost 100 watts more. Of course the Intel is a sexy beast, so it's fine that it wants to drink so much power when overclocked. :)

[image: power consumption chart from the Core i7-5960X overclock test]


Source: http://www.guru3d.com/articles_pages/core_i7_5960x_5930k_and_5820k_processor_review,19.html
 
All processors, regardless of manufacturer, pull a helluva lot more power when overclocked. The FX-9590 is a factory overclock by AMD strictly for enthusiasts. If you overclock an Intel 6-core to just 4.4GHz, for example, it pulls almost 100 watts more. Of course the Intel is a sexy beast, so it's fine that it wants to drink so much power when overclocked. :)

[image: power consumption chart from the Core i7-5960X overclock test]


Source: http://www.guru3d.com/articles_pages/core_i7_5960x_5930k_and_5820k_processor_review,19.html

Hey, just a reminder: that pic is an 8-core/16-thread chip under Prime95, which isn't even recommended to run on Haswell or Haswell-E... but yes, 6- and 8-core Intel chips also tend to pull a big amount of power once overclocked.
 
you guys are making me want to run my cpu at stock looking at those numbers :eek:
 
Hey, just a reminder: that pic is an 8-core/16-thread chip under Prime95, which isn't even recommended to run on Haswell or Haswell-E... but yes, 6- and 8-core Intel chips also tend to pull a big amount of power once overclocked.

Well yeah, the voltage they required to hit 4.45 GHz was 1.385v, which is much higher than the stock 1.050v at 3 GHz. Dynamic power for the processor scales as:

Circuit Load * Frequency * Voltage^2

So for this overclock, the dynamic power increase (Circuit Load is constant) is roughly:

(4.45 GHz / 3.0 GHz) * (1.385v / 1.050v)^2 = a 2.58x increase in dynamic power. The majority of that increase is from the massive voltage required to make it stable.

And this matches pretty well with the measured power increase. If you take the 189w load power, subtract 60w for motherboard+ram+disk drives+gpu idle = 129w estimate for processor power. Take that number and multiply by 2.58, then add back the platform power, you get 392w. Not exactly the same, but pretty close.

So yeah, there's no mystery behind the massive power increase for running 8 cores at 4.5 GHz; increasing frequency by X can mean several times X more power consumption. When overclocking, make your top priority squeezing out as much frequency as you can for as small a voltage bump as possible, and you'll only see a near-linear power increase (X watts for X frequency increase). Remember that the next time you go chasing that "last 10%" performance improvement... you could end up boosting load power by 30% or more for a tiny performance gain.
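The arithmetic above wraps into a few lines of Python. This is a sketch of the proportionality only: leakage power is ignored (as noted later in the thread), and the 60W platform figure is the poster's estimate, not a measurement:

```python
# Dynamic CPU power scales as: Circuit Load * Frequency * Voltage^2.
# When comparing two operating points, the circuit-load term cancels out.

def dynamic_power_scale(f_old_ghz, v_old, f_new_ghz, v_new):
    """Factor by which dynamic power grows between two freq/voltage points."""
    return (f_new_ghz / f_old_ghz) * (v_new / v_old) ** 2

# The 5960X overclock above: 3.0 GHz @ 1.050 V -> 4.45 GHz @ 1.385 V
scale = dynamic_power_scale(3.0, 1.050, 4.45, 1.385)
print(round(scale, 2))  # -> 2.58

# Back-of-envelope wall-power estimate using the figures from the post:
platform_w = 60                   # board + RAM + disks + idle GPU (estimate)
stock_cpu_w = 189 - platform_w    # stock load power minus platform share
print(round(stock_cpu_w * scale + platform_w))  # -> 393 (post measured ~392)
```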
 
Yes, AMD FX makes a little more heat than Intel, but in winter that's a good thing and in summer not so much, so overall it's a break-even.
 
Yes, AMD FX makes a little more heat than Intel, but in winter that's a good thing and in summer not so much, so overall it's a break-even.

In SMP perhaps, but definitely not in single-threaded programs, sadly.
 
I have enjoyed mine; got it when it released. Another good point: I have yet to see one die from use (water damage aside). I've seen many a mobo die, many in flames, but not a single 8320/8350. Impressive when you consider the overclocks many run and the amount of power and heat they can make.
 
Hey guys, I don't mean to go off topic, but I really wish the FX line of processors were refined with the next-generation architecture from AMD, like Excavator or Carrizo. It looks like 2015 will just be APUs--no new enthusiast-class processors like FX. :(
 
So I had to google that :)

My brand new FX-8350 was $120 on Amazon; it beats my i5-3570K, especially if you run VMs.

Maybe if you are running 8+ VMs with no FPU action going on. Even then, there is no way an FX-8350 would outperform an i5-3570K quad-core @ 3.4GHz with 4 or fewer VMs.
Sorry, but even a lowly i7-3540M @ 3GHz can transcode audio files (single-threaded) nearly twice as fast as my FX-6300.

Also, you do realize that your "8-core" CPU is more like a slightly integer-enhanced quad-core, right?
Intel CPUs' FPUs dominate AMD's by quite a large margin, and your processor only has four FPUs to begin with.

As much as I like AMD, their time in the high-performance x86 arena is over.
 
Maybe if you are running 8+ VMs with no FPU action going on. Even then, there is no way an FX-8350 would outperform an i5-3570K quad-core @ 3.4GHz with 4 or fewer VMs.
Sorry, but even a lowly i7-3540M @ 3GHz can transcode audio files (single-threaded) nearly twice as fast as my FX-6300.

Also, you do realize that your "8-core" CPU is more like a slightly integer-enhanced quad-core, right?
Intel CPUs' FPUs dominate AMD's by quite a large margin, and your processor only has four FPUs to begin with.

As much as I like AMD, their time in the high-performance x86 arena is over.

We will see in 2016. Until then, the 8350 will have no issues matching or beating the i5-3570K in his usage scenario. The 8-core is a real 8-core, but unfortunately they did not implement the CMT architecture well enough to make it perform better.

Oh well, CMT is at the end of the road next year.
 
Depends on your definition. By the definition of a CPU core, it is 8 cores. The problem is with all that is shared. The best way to think of it: all programs will run 8 threads/cores on AMD, but not on a 4-core HT Intel. Simpler still: AMD chose a hardware HT and Intel a software HT. (Yes, it is a crude dumbing-down, so don't get too hung up on the loose terminology.)
 
Maybe if you are running 8+ VMs with no FPU action going on. Even then, there is no way an FX-8350 would outperform an i5-3570K quad-core @ 3.4GHz with 4 or fewer VMs.
Sorry, but even a lowly i7-3540M @ 3GHz can transcode audio files (single-threaded) nearly twice as fast as my FX-6300.

Also, you do realize that your "8-core" CPU is more like a slightly integer-enhanced quad-core, right?
Intel CPUs' FPUs dominate AMD's by quite a large margin, and your processor only has four FPUs to begin with.

As much as I like AMD, their time in the high-performance x86 arena is over.

I see this posted over and over by Intel guys. It is simply not true. The 8350 crushes the i5-3570K in VMs. Given enough RAM and disk I/O to feed the VMs, the 8350 will run more VMs and, because of that, have greater overall performance in most workloads.

Nope, this isn't true. They work much better than Intel hyperthreading at multitasking, but they aren't true, complete 8 cores.

The proper way of describing it is by thread count. The 8-core FX chips support 8 threads. The FX chips share resources within a module: each module has an FPU and 2 integer cores that can process 2 threads. Intel with Hyperthreading, on the other hand, has 1 FPU and 1 integer core per physical core, but creates a virtual thread so it can still process 2 threads.

AMD has better integer performance given the same number of threads, but worse FPU performance (it basically has 8 integer cores versus the Intel CPU's 4, for example).

Intel has much better FPU performance and overall IPC, but since a hyperthread is a virtual thread, it can contend for resources already being used by the first thread, resulting in minimal performance gain from hyperthreading even though it's processing 2 threads. There are also cases where hyperthreading can give you a huge boost, as if it were another logical core.

It really boils down to work load to see the pros and cons of each.
 
Depends on your definition. By the definition of a CPU core, it is 8 cores. The problem is with all that is shared. The best way to think of it: all programs will run 8 threads/cores on AMD, but not on a 4-core HT Intel. Simpler still: AMD chose a hardware HT and Intel a software HT. (Yes, it is a crude dumbing-down, so don't get too hung up on the loose terminology.)

The basic CPU core is a decoder, a register file, an ALU, and some I/O to talk to the rest of the world. Since about the early 1990s, this has also included an FPU, but that hardly implies that a core without a complete FPU is somehow not a core.

A Bulldozer module has a 4-wide decoder front-end that services one process per clock. That means the two cores are unaware of each other's existence, and the front-end is just a simple shared unit with fine-grained multithreading. There is no ability for multiple processes to schedule instructions on either core in the module, so the decoders and ALUs are not SMT. The FPU is the only part of the module that is SMT, as the FPU scheduler queue can accept requests from multiple processes. The Steamroller architecture update gets rid of this complexity and just has 4 decoders per core, meaning only the FPU is shared.

If you run integer-heavy workloads on a Bulldozer module, it will get surprisingly close to a 2x speedup on well-threaded code, which means it looks and acts like a regular dual-core processor. And in this day and age, AMD's bet was surprisingly on-target: the vast majority of the old FPU load in games came from things like software lighting (which the GPU handles now). So today's game loads are mostly AI and game engine, a mixture of FPU and ALU work.

The reason the module architecture sucks is that they cut the decode/integer resources by a third in a mad dash to make more, simpler cores that they could clock faster. They also didn't consider that each core in a design adds interconnect complexity and extra cache: no matter how simple your core is, it must be fed, so the "simple" core approach actually backfired on them (giving them the largest AND slowest-per-core CPU around).

Then they hit the reasonable-power wall, so the insane clock speeds they had been planning were shelved... until they realized they had nowhere else to grow performance, and released the 220W behemoths anyway :(

I can only hope that AMD has learned how to make Stars better, much like Intel learned how to make P6 better. Otherwise, it's going to be dark days ahead for AMD fans.
 
^ Very nice write up.
Anyone who skipped it, I suggest you read it.
 
Just for fun, I booted up Windows 2000, which doesn't understand multi-threading, and it sees 8 cores.

Any dual core is more than sufficient for personal computing; even the Qualcomm Snapdragon in my Nexus 7 is plenty given a lightweight OS--one that doesn't have to load a monstrous registry and doesn't grow by 1 GB with every update.

And I agree with the poster above that the speed race ended a long time ago. AMD won in principle, Intel prevailed through sheer size and skulduggery. The race now is for power efficiency, so we'll see.
 
Never could quite get all the negativity about the recent FX series. Comparing the Stars cores to the Piledriver CPUs, Piledriver was much better in the real world. Only in a clock-for-clock, single-core matchup does it look somewhat dismal; in reality, once you consider overclocking potential, the progression becomes apparent.

At any rate, the 8350/8320 or 9590/9370 are great CPUs for users who like to OC and get that full-machine OC. But be aware that all that OCing fun has a price: it requires much beefier cooling and a higher-phase board, both of which add cost. On the plus side, as I stated earlier, they are pretty much bullet-proof; your power supply, motherboard, and other components have a much greater chance of failure. All in all, the FX chips are great, fun, and work far better than most would gather from many benchmarks.
 
Well yeah, the voltage they required to hit 4.45 GHz was 1.385v, which is much higher than the stock 1.050v at 3 GHz. Dynamic power for the processor scales as:

Circuit Load * Frequency * Voltage^2

So for this overclock, the dynamic power increase (Circuit Load is constant) is roughly:

(4.45 GHz / 3.0 GHz) * (1.385v / 1.050v)^2 = a 2.58x increase in dynamic power. The majority of that increase is from the massive voltage required to make it stable.

Thank you for this formula, man! I've always wondered how much power my X5670 @ 4.2GHz actually draws at full load. If I plug my numbers into the formula, I get (4.2 GHz / 2.93 GHz) * (1.28v / 1.06v)^2 = a 2.09x increase over the reported 95W TDP, which is approximately ~199 watts. I don't know the exact real-life power consumption at stock, so I just use the TDP.
 
Well, the 8350 can throw off some serious heat with that high overclock/high multiplier... power consumption is just shy of an OCed Bloomfield. I just ordered an 8320 and an ASRock 970 Performance/Fatal1ty board... please tell me I will get a good chip, haha. Some of the 8320s I know of are really good chips and can undervolt/OC waaay up there... That board looks bad-ass, man... like better than the Gigabyte UD3...
 
Well, the 8350 can throw off some serious heat with that high overclock/high multiplier... power consumption is just shy of an OCed Bloomfield. I just ordered an 8320 and an ASRock 970 Performance/Fatal1ty board... please tell me I will get a good chip, haha. Some of the 8320s I know of are really good chips and can undervolt/OC waaay up there...


My 8350 runs very cool with a Prolimatech Megahalems Rev. B. At idle it's barely above ambient (29-30C), with the CPU fan spinning at 250 RPM, on a Sabertooth R2.0, using 62 watts at the wall.
 
Thank you for this formula, man! I've always wondered how much power my X5670 @ 4.2GHz actually draws at full load. If I plug my numbers into the formula, I get (4.2 GHz / 2.93 GHz) * (1.28v / 1.06v)^2 = a 2.09x increase over the reported 95W TDP, which is approximately ~199 watts. I don't know the exact real-life power consumption at stock, so I just use the TDP.

The formula is not a perfect representation of total power because it ignores leakage power (which is a helluva lot harder to model). It still gets you in the neighborhood, because in a fully loaded system, dynamic power is the majority of the power consumed.

It's very useful for estimating how much PSU you will need with a heavily-overclocked processor :D
 