AMD Bulldozer / FX-8150 Desktop Performance Review @ [H]ardOCP

Is this real life? AMD's last gen smacking around AMD's own brand-new architecture?!

Sorry, but 2600K or 2700K FTW for me.
 
The AMD FX 8150 is sold out everywhere. I guess there are a lot of people in denial out there.
 
The AMD FX 8150 is sold out everywhere. I guess there are a lot of people in denial out there.


My local Micro Center has them in stock.

To me the big letdown is the power consumption. Performance-wise it looks promising if AMD can improve the manufacturing process.

Price-wise it looks like a good value. I'm looking forward to the FX-8100 reviews.
 
The AMD FX 8150 is sold out everywhere. I guess there are a lot of people in denial out there.

Or the yields are REALLY bad on the 8150s and they didn't have very many of them. Being sold out is bad; they should probably sell as many as they can now for the original price.
 
Hardware Heaven was a bit kind in their review, the first (kinda) positive review of BD. It doesn't look that bad there...
 
OK, what is going on here? Why is the FX-8150 faster than the i7 2600K in this Handbrake 0.9.5 benchmark:

http://www.legitreviews.com/article/1741/9/

and yet losing to the i7 2600K in the HardOCP Handbrake 0.9.5 benchmark?
I really want to understand this situation. After reading about a dozen reviews of Bulldozer, I'm becoming increasingly convinced that something very strange is going on. Either the BIOS revisions have some serious tuning issues, or new Windows drivers need to be released for the CPU and graphics cards to work properly with Bulldozer.

Either way, I'm reserving judgement on the viability of Bulldozer until I've seen a couple of BIOS updates/patches for Windows applications. If it sucks, that's fine, but I sincerely believe there are some serious additional performance gains to be had with firmware/driver/patch updates. Bulldozer is such a completely different architecture that it's inevitable Windows isn't feeding it properly. I realize everybody is completely past the point of patience with this CPU, but every new piece of silicon requires some driver tuning/firmware tweaks in order to show its true performance potential. I suspect the AMD Zambezi chips will require this even more than we're used to. Only a couple of BIOS/driver/patch updates will give us the complete picture here.

But that's not how you release a product. The optimization and software support should have been thought of beforehand, not after the fact. Your claim amounts to taking the blame off AMD and putting it back on AMD, just in a different place. I'm not going to buy a processor AND $150 of software (Windows 8, in this case) in HOPES that it performs 10% better. That's ridiculous.

It's quite clear with the way computing works today (apps, games, general usage) that they approached the architecture and design poorly and made a product that not only doesn't fit today's needs (low power consumption, general power/performance efficiency, price) but at times falls flat on its face when racing its predecessor.

Was it too much to ask to even *match* the IPC of the Phenom IIs? I mean, they couldn't even get that right, never mind the power consumption figures and how it would work with Windows. It's literally fail after fail for these guys, and I'm being harsh because I'm an AMD fan. I'd love to buy a Bulldozer. But I'm not buying a doodoodozer.

http://www.xbitlabs.com/news/cpu/di...x_AMD_Engineer_Explains_Bulldozer_Fiasco.html
 
If they could've done it, they would have; that's just common sense. The fact that they didn't means they can't right now, but they need to make some money, so they had to release it, I'm guessing. It's not dead in the water; it's just not right for everyone and not up to everyone's expectations. Welcome to the Internet, where we all expect more than we get regularly. :p I mean, everyone seems to think AMD did this on purpose just to piss us off. They didn't make it this time, so they'll sell what they can and try again.
 
This mess is everyone's fault, lol. When you think of the 8150 as a 4-core/4-module part it looks much better. It should be treated as a 4-core, 8-thread design like the 2600K. The FX has parts that can technically make each half of a module its own core, and people took it as a core, but the two halves share resources with each other, whereas Hyper-Threading doesn't share that way; it spreads the workload over the cores rather than splitting resources.

This made AMD think more is better, so they named it an 8-core to outdo their competitors. Originally AMD was going to call it a 4-module part, but that confused customers into asking, "What's the difference between a core and a module?" Although a module can give up to 80% extra performance, it also means the cores in a module each run at about 80% performance, so efficiency has dropped. Hence it acts more like a 6-core CPU, up from a quad core but still below a real 8-core, which is what it does in benchmarks.

It's Microsoft's fault too. After all these years with the likes of Hyper-Threading around, applications typically can't tell the difference between a physical core and a logical unit. This is why a 2600K under a very heavy single-threaded load can lose performance: the scheduler may assign two threads to one core while the others sit idle. With the high per-core performance it won't actually saturate the CPU in games, which are mostly GPU limited. Technically a module-aware CPU scheduler should have been out years ago, back in Windows Vista.
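Out of curiosity, here's a minimal sketch (Windows, plain C) of how a program can ask the OS which logical processors it thinks share a core, using the documented GetLogicalProcessorInformation API. The comment about what an FX-8150 would report under Windows 7 is my assumption based on the reviews, not a tested result:

```c
/* Minimal sketch: list the "cores" Windows reports and which logical
 * processors share them. Under Windows 7, Bulldozer's two integer cores
 * per module were reportedly presented as fully independent cores, so
 * the scheduler had no idea two of them shared a front end and FPU. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    DWORD len = 0;
    GetLogicalProcessorInformation(NULL, &len);      /* query buffer size */
    SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
    if (!info || !GetLogicalProcessorInformation(info, &len))
        return 1;

    for (DWORD i = 0; i < len / sizeof(*info); i++) {
        if (info[i].Relationship == RelationProcessorCore) {
            /* One bit per logical CPU belonging to this core: a 2600K
             * core shows two bits (HT siblings); an FX-8150 "core" on
             * Windows 7 shows one, as if modules didn't exist. */
            printf("core: mask=0x%04llx SMT=%u\n",
                   (unsigned long long)info[i].ProcessorMask,
                   (unsigned)info[i].ProcessorCore.Flags);
        }
    }
    free(info);
    return 0;
}
```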

Sadly it's the AMD way, is it not? They forward-think too much: they bring the hardware out, then have to wait for software developers to catch up. They did this with their graphics cards on quite a few occasions, and this is another case, but for the CPU.

When you think about it, it is a leap forward and gives software developers the opportunity to work with it, but by the time it becomes more streamlined their competitor will jump on the bandwagon without having to worry about these problems.

As it is, everyone thinks it's an 8-core, so their expectations are off. If you consider it a quad core with 8 threads, does it look so bad?
 
If you consider it a quad core with 8 threads, does it look so bad?
Architecture discussions and theoretical capabilities are all interesting, but outside of the academic arena, the important practical features are performance, purchase cost, and operating cost (power consumption). This consumer says, "Bah, humbug!" Bulldozer is not worth the price. It's Microsoft's fault? It's a BIOS fault? It's the fault of game coders? It's GlobalFoundries' fault? It's a CPU that at this time does not perform very well compared to its main competitor's top models, or even compared to its own manufacturer's previous generation. For me to seriously consider a new CPU, it must have x performance under y conditions, consume no more than q kWh of power, and cost no more than b dollars. Cores/threads/manufacturing process: none of that matters when I compare the contents of my wallet to what I need to accomplish with a hardware component. It's performance, price, and power. Oh, and someone else DOES make the CPU that meets factors x, y, q, and b.

If AMD can't make a design that accommodates current software, current manufacturing processes available to them, and current competition, that's their fault.
 
Wasn't the Pentium 4 a failure? It produced high thermals and power consumption, and its performance per clock was awful, yet it brought something that no other single-core processor could: multitasking, the ability to run extra threads to make things smoother. Isn't that the Hyper-Threading still used today? Eventually, shrinking die sizes will get harder and harder, and the number of cores you can stick on a CPU will be limited. If you could save 70% of the die space yet keep 80% of the performance, isn't that a better trade-off?

If AMD can't make a design that accommodates current software, current manufacturing processes available to them, and current competition, that's their fault.

You say that now, but if it's not there, we will be stuck and progress will have halted; then why develop at all? You can only push things so far until you can't push them anymore. Only now that quads are mainstream are games taking advantage of them with 4 threads. If your method had merit, we might never have bothered with dual or quad cores and would be stuck with single-core processors and the limit of that technology.
 
As it is, everyone thinks it's an 8-core, so their expectations are off. If you consider it a quad core with 8 threads, does it look so bad?

Yes, it looks even worse, because now you have a quad core that is still slow in single- and lightly-threaded tasks.
 
You say that now, but if it's not there, we will be stuck and progress will have halted; then why develop at all? You can only push things so far until you can't push them anymore. Only now that quads are mainstream are games taking advantage of them with 4 threads. If your method had merit, we might never have bothered with dual or quad cores and would be stuck with single-core processors and the limit of that technology.
Let me elaborate. I do not disparage AMD's forward thinking and their attempt to pack more into the Bulldozer design than what we can utilize today. Great! But if a new item, whether an automobile, a kitchen appliance, a computer component, or what have you, has performance problems for the time and place at which it is released, then it misses the mark. So, kudos for boldness and forward thinking; disappointment for failure to meet the demands of the present market.

Research, development, and risk are part of any successful tech company's endeavors, but releasing a product to buyers before the results of the R&D are properly applied just begs for disaster. AMD can think ahead and be as bold as they wish, but they can't expect the paying public to be the last phase of their R&D. That's fine for beta software or CPU engineering samples, but not mass sales.
 
Let me elaborate. I do not disparage AMD's forward thinking and their attempt to pack more into the Bulldozer design than what we can utilize today. Great! But if a new item, whether an automobile, a kitchen appliance, a computer component, or what have you, has performance problems for the time and place at which it is released, then it misses the mark. So, kudos for boldness and forward thinking; disappointment for failure to meet the demands of the present market.

Research, development, and risk are part of any successful tech company's endeavors, but releasing a product to buyers before the results of the R&D are properly applied just begs for disaster. AMD can think ahead and be as bold as they wish, but they can't expect the paying public to be the last phase of their R&D. That's fine for beta software or CPU engineering samples, but not mass sales.

What you just said can be summed up in two words: AMD SUCKS

However, I completely agree with you. AMD does indeed suck :D
 
Sometimes paying more for less is the best choice for inner peace, perfect Zen, and happiness. Just ask anyone who has bought an Apple Macintosh; they will tell you. Same thing with these new AMD FX chips. I mean, they come in nice metallic boxes and all that, so why not? :D

Dude, that is hilarious, lol.

But to answer those who ask why people are buying BD: they might already have the motherboards. Sure, I would probably not go that route if building a new system (and I am an AMD fan), but if you already have a BD-compatible mobo you don't really have a choice. A 2500K will not fit in an AM3 board no matter how hard you squeeze it...

No, BD did not dethrone anything, but I wouldn't say it's a slouch either.
 
AMD for video, Intel for processing.

This is an embarrassment for AMD's long-hyped new chip.
One meme to sum it up: Son, I am disappoint.

I'll be keeping my 2600k for a while.
 
http://www.xtremesystems.org/forums...ew-(4)-!exclusive!-Excuse-for-1-Threaded-Perf

DGLee is onto something. Asus has even given him a new BIOS to try out. Disabling one core in each of the four modules, in some cases, gives +20% performance.

He also thinks it would be faster if Windows enumerated the cores in even/odd order (cores 0, 2, 4, 6, then 1, 3, 5, 7), but in the current form the cores are enumerated sequentially.
Not ideal, but you can get an extra 20% in single-threaded applications by disabling alternate cores, so the Windows 8 performance improvement does seem to be in line with what was said. However, Windows 8 does it differently: it loads up the first cores/modules as much as possible, which in turn allows Bulldozer (and Sandy Bridge) to turbo more often and for far longer. From the benches, in Windows 7 the CPU never really uses turbo, as it assigns worker threads to every core and turbo only kicks in once CPUs are parked or turned off. You can disable one half of each Bulldozer module and get 20% extra single-threaded performance, but then multitasking suffers somewhat since the other cores are no longer working. Still, this method has merit: by disabling half of them you reduce power consumption dramatically and allow even higher overclocks while keeping the 20% single-threaded gain.
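For anyone who wants to try the one-core-per-module idea without a BIOS that supports disabling cores, here's a minimal sketch (Windows, plain C). It assumes the OS numbers the FX-8150's cores 0-7 with pairs 0/1, 2/3, 4/5, 6/7 sharing a module; verify that on your own board before trusting the mask:

```c
/* Minimal sketch: pin the current process to one core per module
 * (binary 01010101 = 0x55, i.e. cores 0, 2, 4, 6). Each thread then
 * gets a module's shared front end and FPU to itself, approximating
 * the "disable alternate cores" trick without a reboot. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR mask = 0x55;   /* cores 0, 2, 4, 6 (assumed numbering) */
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                GetLastError());
        return 1;
    }
    printf("Pinned to one core per module.\n");
    /* ... launch or run the single-threaded workload from here ... */
    return 0;
}
```

The same mask can also be applied to an already-running program from Task Manager's "Set affinity" dialog, which is an easier way to reproduce DGLee's numbers.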
 
I agree BD is a letdown, but I do have an issue with the review.

The tests were run at DDR3-1333 / DDR3-1600. BD is designed for DDR3-1866.

If you are going to compare the chips by multiplier overclocking, then the RAM should be at the system's designed speed.
 
All the reviews are using 1333/1600MHz as that is comparable to the Core i5/i7 with 1600MHz. Personally, I don't think the extra bandwidth is going to make much of a difference, maybe 2~5%.
 
All the reviews are using 1333/1600MHz as that is comparable to the Core i5/i7 with 1600MHz. Personally, I don't think the extra bandwidth is going to make much of a difference, maybe 2~5%.

That would be like running an i7 920 with the RAM at 800MHz when comparing it to a C2Q chip on a DDR2 motherboard "so the RAM is at the same speed."

The i7 isn't designed for 800MHz speeds. BD isn't designed for DDR3-1333 speeds.

Yeah, it may only be 2%-5% of performance, but it's still not running the chip as designed.
 
What's with the
"DO NOT BUY RAGE!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
On the website? Is it that bad?
 
Pros: Went to Guru3D's hardware reviews, processors section and got a very thorough comparison of all the BD processors against the Intel 980X, the 2600K, and the 2500K. The 8120 beat the i5 2500K in almost all the benchmarks, for an equivalent price. Considering the features and all, I immediately ordered an FX 8120 from the Egg, as this processor seems to give the most bang for the buck. Right now it is giving superior performance, and as multithreaded apps become more popular, this processor will come into its own.
Cons: None yet
Other Thoughts: Bought a Gigabyte 970 motherboard with this processor.

Normally NewEgg doesn't allow product comparisons in user reviews, but hey, they have to move product.
 
That would be like running an i7 920 with the RAM at 800MHz when comparing it to a C2Q chip on a DDR2 motherboard "so the RAM is at the same speed."

The i7 isn't designed for 800MHz speeds. BD isn't designed for DDR3-1333 speeds.

Yeah, it may only be 2%-5% of performance, but it's still not running the chip as designed.

That's actually not an accurate comparison, because the Core 2 Duo uses DDR2 and the Core i7 (Nehalem) uses DDR3 RAM.
But that's beside the point. Memory speed would only matter if the Bulldozer CPUs were bandwidth starved, and a CPU is only bandwidth starved when it has a high IPC (instructions per clock), which clearly isn't the case for Bulldozer. In fact, Bulldozer would be fine even with DDR3-1066.

For example, the old Core 2 Duo and later the Core 2 Quad CPUs were bandwidth starved, but Intel couldn't do anything about it because the memory controller was embedded in the north bridge, so it was slow. Even faster DDR2 like 1066MHz didn't help, because the memory controller couldn't take advantage of it. Then chipsets came out that could use DDR3, like the Intel X48 or nForce 790i, but they couldn't effectively use anything higher than DDR3-1333, and as such the bandwidth didn't change much. Also, in most cases lower latency is more important than higher clock speeds.

The bottom line is that the reviews that used DDR3-1600 or DDR3-1333 didn't do it wrong.
Look here:
http://vr-zone.com/articles/amd-fx-...vestigation--feeding-the-bulldozer/13704.html

They have tried up to DDR3-2133MHz. The fact that the memory controller in Bulldozer can work with such fast memory doesn't mean that the CPU can take advantage of it.
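If anyone wants to test the bandwidth-starvation claim directly instead of arguing from IPC, a STREAM-style triad is the usual tool. Here's a minimal single-threaded sketch in C; the array size and repetition count are my arbitrary choices, and a serious test would use the real STREAM benchmark with all cores loaded:

```c
/* Minimal sketch of a STREAM-style triad: if the reported GB/s stops
 * scaling when you fit faster DIMMs, the cores, not the memory, are
 * the bottleneck. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)   /* 16M doubles = 128MB per array, far beyond cache */
#define REPS 10

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) return 1;
    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    clock_t t0 = clock();
    for (int r = 0; r < REPS; r++)
        for (size_t i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];     /* 2 reads + 1 write per element */
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    double gbytes = (double)REPS * N * 3 * sizeof(double) / 1e9;
    printf("triad: %.2f GB/s\n", gbytes / secs);
    return (int)a[N - 1];   /* keep the compiler from deleting the loop */
}
```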
 
Bulldozer is bandwidth starved according to Civilization 5.

Assuming the test was repeatable, the min FPS jumped from 20 to 40 using 2000MHz memory and proper overclocking, not just multiplier overclocking.

That's a pretty big jump, especially for min FPS.

Bulldozer isn't starved in every application, but that's a good example of what happens when a CPU is starved.
 
But that's beside the point. Memory speed would only matter if the Bulldozer CPUs were bandwidth starved, and a CPU is only bandwidth starved when it has a high IPC (instructions per clock), which clearly isn't the case for Bulldozer.

Neither was it the case for the Pentium 4, and yet it was one of the most bandwidth-hungry processors to date, with its high clock speeds, long pipelines, and lower IPC, so I fail to see what high IPC has to do with it.
 
Bulldozer is bandwidth starved according to Civilization 5.

Assuming the test was repeatable, the min FPS jumped from 20 to 40 using 2000MHz memory and proper overclocking, not just multiplier overclocking.

That's a pretty big jump, especially for min FPS.

Bulldozer isn't starved in every application, but that's a good example of what happens when a CPU is starved.

Neither was it the case for the Pentium 4, and yet it was one of the most bandwidth-hungry processors to date, with its high clock speeds, long pipelines, and lower IPC, so I fail to see what high IPC has to do with it.

Actually, the problem isn't memory bandwidth but cache latency. Both the L2 and L3 caches have horrific latencies. Add to that the fact that because Bulldozer is built on a modular architecture, the L3 cache is modular too: unlike Sandy Bridge, where the L3 cache is a single circuit shared by all cores, the L3 in Bulldozer is fragmented into interconnected 2MB slices. Why does this matter? Simply because it adds to the latency.
Conclusion: the better benchmarks come at higher clocks because that compensates for the high latency of the caches. The same thing was going on in the Pentium 4 as well, even though that was a single-core architecture.
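Cache latency, unlike bandwidth, is easy to see from user space with a pointer chase, because every load depends on the previous one and the prefetchers can't follow a random permutation. Here's a minimal sketch in C; the 8MB working set (sized to spill into L3) and the step count are my own arbitrary choices:

```c
/* Minimal sketch: average load latency over a random pointer chain.
 * Grow SIZE past each cache level and the ns/step climbs from
 * L2 latency to L3 latency to DRAM latency. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SIZE ((size_t)8 * 1024 * 1024 / sizeof(void *))  /* 8MB chain */
#define STEPS 100000000UL

int main(void)
{
    void **ring   = malloc(SIZE * sizeof(void *));
    size_t *order = malloc(SIZE * sizeof(size_t));
    if (!ring || !order) return 1;

    /* Fisher-Yates shuffle; two rand() calls are combined because
     * RAND_MAX can be as small as 32767. */
    for (size_t i = 0; i < SIZE; i++) order[i] = i;
    srand(1);
    for (size_t i = SIZE - 1; i > 0; i--) {
        size_t j = (((size_t)rand() << 15) | (size_t)rand()) % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < SIZE; i++)
        ring[order[i]] = &ring[order[(i + 1) % SIZE]];

    void **p = &ring[order[0]];
    clock_t t0 = clock();
    for (unsigned long s = 0; s < STEPS; s++)
        p = (void **)*p;                  /* serially dependent loads */
    double ns = (double)(clock() - t0) / CLOCKS_PER_SEC * 1e9 / STEPS;
    printf("avg load latency at 8MB: %.1f ns (%p)\n", ns, (void *)p);
    free(order); free(ring);
    return 0;
}
```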
 
What's with the
"DO NOT BUY RAGE!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
On the website? Is it that bad?


Noticed that too. Though after reading and watching several reviews Rage appears to be "just" horribly mediocre, so what gives?
 
Actually, the problem isn't memory bandwidth but cache latency. Both the L2 and L3 caches have horrific latencies. Add to that the fact that because Bulldozer is built on a modular architecture, the L3 cache is modular too: unlike Sandy Bridge, where the L3 cache is a single circuit shared by all cores, the L3 in Bulldozer is fragmented into interconnected 2MB slices. Why does this matter? Simply because it adds to the latency.
Conclusion: the better benchmarks come at higher clocks because that compensates for the high latency of the caches. The same thing was going on in the Pentium 4 as well, even though that was a single-core architecture.

I disagree; from all the reviews I would say it's bandwidth starved, not at the memory but at the L2 cache. Although Intel has better latency, it also has about 6x more bandwidth on the L2 cache. Bulldozer's L2 bandwidth seems to vary from review to review; I believe it might be tied to the physical memory, as people using faster memory get better L2 cache performance.

With PC3-12800 RAM:

Bulldozer:
L1 cache: 150GB/sec
L2 cache: 15GB/sec with 1600MHz memory, 20GB/sec with 1866MHz memory
L3 cache: 25GB/sec

Compare this with Intel's Sandy Bridge:
L1 cache: 153GB/sec
L2 cache: 80GB/sec
L3 cache: 43GB/sec
 
Noticed that too. Though after reading and watching several reviews Rage appears to be "just" horribly mediocre, so what gives?

I know that this is off topic, and I have no interest in Rage whatsoever. I just noticed it on the hardocp.com website and it seemed kind of odd to me, and maybe a bit unprofessional. It doesn't bother me or anything though, so I don't really care.
 
Noticed that too. Though after reading and watching several reviews Rage appears to be "just" horribly mediocre, so what gives?

I think it has to do with the fact that Rage streams an "adaptive" level of detail to ensure a constant 60 FPS no matter the hardware. If you have shitty PC hardware you'll still get 60 FPS, as if you had an OMGWTFBBQQUADROUPLE!!SLI [H] rig.

It kind of goes against everything this site stands for. Not to mention, it was primarily developed for consoles.

This has been a year of disappointments: Duke Nukem Forever, Rage, the iPhone 4S, and the biggest of them all, AMD's Bulldozer.
 
I can see why they call it an 8-core CPU, but really it's just a 4-core CPU with internal HT (except not quite).

The downside is that unless all 4 modules are heavily used, i.e. oversubscribed so that 8 threads are all wanting CPU time, Bulldozer will suck. It only seems useful when a server is overloaded, something you would normally not want to happen (it might be good for VM boxes, but an i7 does a better job of it).

Whereas Intel's HT on an i7 or i3 comes into its own once more than 50% load is going on, so HT can help keep the system running smoothly even when more CPU is needed. (The only game where I have seen HT work well is BF: Bad Company 2; with HT off, map loading took way longer, if you call 5-7 seconds longer than 1-2 seconds on a map load.)
 
It looks like I will be ripping the new 990FX motherboard out of my case and going to Intel (who I hate). In the future, I will not be buying without reading benchmarks.

AMD garbage ejected yesterday. Sick of waiting, so I did not hold out for Sandy Bridge Extreme or even the i7 2700K.

Found an interesting article about the inside crap going on at AMD (now confirmed by the result):

http://www.brightsideofnews.com/news/2011/6/24/amd-insiders-speak-out-bapco-exit-is-an-excuse-for-poor-bulldozer-performance.aspx

Basically, AMD marketing and management screwing AMD customers through deceit, and AMD engineers not being given the resources to make good high-performance CPUs. Not loyalty-friendly actions.
 
AMD garbage ejected yesterday. Sick of waiting, so I did not hold out for Sandy Bridge Extreme or even the i7 2700K.

Found an interesting article about the inside crap going on at AMD (now confirmed by the result):

http://www.brightsideofnews.com/news/2011/6/24/amd-insiders-speak-out-bapco-exit-is-an-excuse-for-poor-bulldozer-performance.aspx

Basically, AMD marketing and management screwing AMD customers through deceit, and AMD engineers not being given the resources to make good high-performance CPUs. Not loyalty-friendly actions.

Yeah, I remember reading that a few months back when it came out and thought it was all FUD, but after Bulldozer, and after reading other stories from AMD employees, it's clear management really screwed up.

They should have let nVidia buy AMD. Intel would probably have bought ATI, and you would have much better "APUs" today and healthy competition.

AMD and ATI engineers would be much happier, and so would we, the consumers.

AMD and nVidia would not be fabless now either (GlobalFoundries wouldn't have needed to be spun off), and with their combined resources they would probably be much closer to actually competing with Intel. Both nVidia and AMD received settlements from Intel; they would also have been in a much better position to bargain for more. And Intel would be weaker after purchasing ATI.

In the end everyone is a loser (including us, the consumers) except for Intel.
 
If they could've done it, they would have; that's just common sense. The fact that they didn't means they can't right now, but they need to make some money, so they had to release it, I'm guessing. It's not dead in the water; it's just not right for everyone and not up to everyone's expectations. Welcome to the Internet, where we all expect more than we get regularly. :p I mean, everyone seems to think AMD did this on purpose just to piss us off. They didn't make it this time, so they'll sell what they can and try again.

Best post I've read yet; don't knock AMD just because they messed up. Back in the days of the X2, the Athlon 64, and the single-core FX chips, I could have said Intel was overpriced crap compared to AMD.
 