So what went wrong with BD?

I think more cache would help.

I think faster cache would help. They already have loads of cache, but it's slow. Maybe get rid of the exclusive cache (if that is the reason for the high latency).
 
While some will consider Bulldozer a failure, here are some of the reasons it is not.

1. Bulldozer is not a traditional 8-core CPU. While it is marketed as an 8-core CPU, it is more like a 4-core CPU with hardware hyper-threading.

2. Bulldozer is part of the Scorpius platform, which brings a bunch of new features to the mainstream market. With proper marketing it will sell well in consumer stores.

3. Bulldozer can clock high enough above the previous generation to offer performance improvements.

4. New instruction sets give developers new ways to write software that can take advantage of them. The purpose of these instruction sets is to vastly improve performance on standard algorithmic problems. New instructions can let programs spread work that used to be single-threaded across multiple cores, or greatly expand the programmer's ability to be efficient (a small sketch follows this list).

5. With a small price adjustment Bulldozer compares very favorably against competitor products, given the cost of the platform. Remember, it's not just about the cost of the CPU, but also the cost of the motherboard, RAM, and supporting components. Currently Intel motherboards are more expensive than AMD motherboards. So while the Sandy Bridge 2500K might be a bit cheaper than the FX-8150, if you look at the combined cost of the motherboard and CPU they are comparable as far as price vs. performance.
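
To give a rough idea of what point 4 is talking about: Bulldozer adds AVX (plus XOP and FMA4), and AVX lets a single instruction operate on eight floats at once. The snippet below is only an illustration of that idea, not anything AMD ships; compile it with -mavx on a compiler that supports the intrinsics.

#include <immintrin.h>
#include <stdio.h>

/* Illustration only: an AVX add handles eight floats per instruction
 * instead of one per loop iteration. */
int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float out[8];

    __m256 va = _mm256_loadu_ps(a);      /* load 8 floats */
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_add_ps(va, vb);   /* 8 additions in one instruction */
    _mm256_storeu_ps(out, vc);

    for (int i = 0; i < 8; i++)
        printf("%.0f ", out[i]);         /* prints 9 eight times */
    printf("\n");
    return 0;
}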

I have used pretty much every combination of CPU and motherboard as of late: 990X, 2600K, 2500K, Phenom II X6, and now Bulldozer. I see nothing wrong with its performance. Granted, my current work computer for CAD work, SolidWorks and such still houses the 990X. My gaming computer listed in my sig is based on the Bulldozer build, if that tells you anything.
 
5. With a small price adjustment Bulldozer compares very favorably against competitor products, given the cost of the platform. Remember, it's not just about the cost of the CPU, but also the cost of the motherboard, RAM, and supporting components. Currently Intel motherboards are more expensive than AMD motherboards. So while the Sandy Bridge 2500K might be a bit cheaper than the FX-8150, if you look at the combined cost of the motherboard and CPU they are comparable as far as price vs. performance.

I do not believe this is the case any more. In the past, yes, LGA1366 boards were way more expensive than AM3 boards; however, Intel has considerably dropped the prices of their chipsets at LGA1155. There is not a large difference in price between motherboards. Sure, you can find more expensive Intel boards, but you can find the same with AMD boards.
 
I do not believe this is the case any more. In the past, yes, LGA1366 boards were way more expensive than AM3 boards; however, Intel has considerably dropped the prices of their chipsets at LGA1155. There is not a large difference in price between motherboards. Sure, you can find more expensive Intel boards, but you can find the same with AMD boards.

Furthermore, I'd actually be so bold as to say you need a higher-end board to overclock an FX-8150 to a moderate (4.5 GHz) or higher level. Reason: power consumption.

Lower-end boards in AMD's lineup had lots of issues with cheap 4-phase power not being able to handle the draw of Phenom II chips and burning up. See: http://www.overclock.net/amd-cpus/943109-about-vrms-mosfets-motherboard-safety-125w.html

Considering this, how do you think lower-end 4-phase 9xx boards will handle BD, which draws a lot more power than the Phenom IIs they were burning up on? For BD I'd consider 8+2 (or dual 4+1) phase power mandatory, and you don't really find that on cheap boards.

On the other hand, Intel's Sandy Bridge doesn't have anywhere near the same power requirements when overclocked. You can get away with a lower-end board and OC Sandy Bridge to a moderate level.

Advantage: Intel.
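
For a ballpark of why the VRM load climbs so fast, dynamic CPU power scales roughly with frequency times voltage squared. The stock and overclock figures below are illustrative assumptions, not measured FX-8150 numbers, but they show the shape of the problem:

#include <stdio.h>

/* Rough sketch: P ~ C * V^2 * f, so a modest voltage bump plus a clock
 * bump multiplies the power the board's CPU phases must deliver.
 * Voltages, clocks, and the 125 W baseline are assumptions for
 * illustration only. */
int main(void) {
    double stock_v = 1.30, stock_f = 3.6;   /* assumed stock volts / GHz     */
    double oc_v    = 1.45, oc_f    = 4.5;   /* assumed overclock volts / GHz */
    double scale = (oc_v / stock_v) * (oc_v / stock_v) * (oc_f / stock_f);
    printf("Power scale factor: %.2fx (~%.0f W from a 125 W baseline)\n",
           scale, 125.0 * scale);            /* ~1.56x, roughly 195 W */
    return 0;
}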
 
You don't need 8-phase power to overclock.

A 4+1 split-plane is good enough. Sure, it will hold you back from the most extreme overclocks, but it will be OK.

If you compare motherboards with the SAME feature set, AMD vs. Intel, the AMD boards are cheaper. Now, if you want to trade away features to get a cheaper board, then yes, maybe Intel would have an advantage in still being able to overclock. However, those cheaper boards are lacking the features you could have with an AM3+ board.

I think one of the biggest advantages the AM3+ boards have is that they support two x16 PCI Express slots, vs. Intel, which will run x8/x8. Now, you may argue that this makes little if any performance difference. HardOCP even did an article about it: http://www.hardocp.com/article/2010/08/23/gtx_480_sli_pcie_bandwidth_perf_x16x16_vs_x8x8/2

That was with older cards (the 480s); the 6970s and 580s would show a larger difference.

So I guess one could debate it either way.
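
For what the x16/x16 vs. x8/x8 argument amounts to in raw numbers (PCIe 2.0 moves roughly 500 MB/s per lane per direction), here is a quick back-of-the-envelope sketch:

#include <stdio.h>

/* PCIe 2.0: ~500 MB/s per lane per direction, so halving the lanes
 * halves the slot bandwidth per card. Whether a given GPU ever fills
 * even the x8 figure is the part people debate. */
int main(void) {
    double per_lane_gb = 0.5;                                    /* GB/s */
    printf("x16 slot: ~%.0f GB/s each way\n", 16 * per_lane_gb); /* ~8 */
    printf("x8  slot: ~%.0f GB/s each way\n",  8 * per_lane_gb); /* ~4 */
    return 0;
}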
 
I think faster cache would help. They already have loads of cache, but it's slow. Maybe get rid of the exclusive cache (if that is the reason for the high latency).

All the caches in BD are slower than in K10.5, but at least for the L3 cache that is forgivable, as it's one third larger and now has a whopping 64-way associativity, greatly magnifying its effective size.

...that won't do anything for desktop applications, though.
 
NSide -

I'm a software developer for a leading manufacturer of Computer-Aided Engineering software. Think Finite Element Analysis, Computational Fluid Dynamics, Thermodynamics, Aerospace, etc.

Not every problem can be written to run on multiple threads. It's not like we can just go in, flip a few compiler switches, and, "Blamo!", our code is multi-threaded and scales to N cores. Some problems scale, some do not. Some problems are limited by memory bandwidth or disk I/O more than by the ability of the CPU to crunch the numbers. Some are GPU-limited.

I suspect the same applies to the games you want to play and expect to scale perfectly on your BD, i7-980, or whatever.
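
Amdahl's law is the usual way to put a number on that: if only part of the work parallelizes, the serial remainder caps your speedup no matter how many cores you throw at it. A minimal sketch, with made-up parallel fractions purely for illustration:

#include <stdio.h>

/* Amdahl's law: with a parallel fraction p of the work and n cores,
 * the best-case speedup is 1 / ((1 - p) + p / n). */
static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    printf("80%% parallel, 4 cores: %.2fx\n", amdahl(0.80, 4)); /* ~2.5x */
    printf("80%% parallel, 8 cores: %.2fx\n", amdahl(0.80, 8)); /* ~3.3x */
    printf("95%% parallel, 8 cores: %.2fx\n", amdahl(0.95, 8)); /* ~5.9x */
    return 0;
}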

JImcwill

I respect your position and profession. I am admittedly a hobbyist in comparison.

That said, I would think the software you make is a little more specialized than what I was thinking about. Although I don't fully understand your limitations, I am pretty sure that with time and perseverance these things can be worked out. I'm sure some developers don't see a need to do it (they say just increase the IPC and clock speed and it will run faster), but I would hope others are looking for a way to overcome the issue. That's really what I meant.

It took MS 2+ generations before they were finally able to release a decently stable (and usable) 64-bit OS after multi-core processors were in mainstream machines, so I don't think these things happen with the flick of a switch. I remember reading about developers complaining about the Xbox 360 and its triple-core processor, and how difficult it would be to utilize that CPU. It seems many have figured it out now, and I bet the GPU and RAM are now the bottleneck(s)...

Perhaps I am wrong (again, I really don't know squat about the limitations), and if that's the case, I will be looking for a really fast single- or dual-core processor for my next personal build. My OS, my antivirus, and a few video-editing programs don't really justify owning 4 to 8 cores if it is impossible (as you said in so many words) to use the other cores for anything else. Oh well...

Thanks for the reply.
 
You don't need 8-phase power to overclock.

A 4+1 split-plane is good enough. Sure, it will hold you back from the most extreme overclocks, but it will be OK.

If you compare motherboards with the SAME feature set, AMD vs. Intel, the AMD boards are cheaper. Now, if you want to trade away features to get a cheaper board, then yes, maybe Intel would have an advantage in still being able to overclock. However, those cheaper boards are lacking the features you could have with an AM3+ board.

I think one of the biggest advantages the AM3+ boards have is that they support two x16 PCI Express slots, vs. Intel, which will run x8/x8. Now, you may argue that this makes little if any performance difference. HardOCP even did an article about it: http://www.hardocp.com/article/2010/08/23/gtx_480_sli_pcie_bandwidth_perf_x16x16_vs_x8x8/2

That was with older cards (the 480s); the 6970s and 580s would show a larger difference.

So I guess one could debate it either way.

I think you can get an MSI P67-G43 for about $100 that has SATA III, CrossFire, USB 3.0, overclocking ability, and a few other things. That is perfect for overclocking a 2500K to at least 4.5 GHz. That puts the price of an Intel CPU + mobo pretty close to AMD's, if not lower.
 
You don't need 8-phase power to overclock.

A 4+1 split-plane is good enough. Sure, it will hold you back from the most extreme overclocks, but it will be OK.

No, a 4+1 won't be OK for taking BD to 4.5 GHz+ with any kind of real reliability / motherboard longevity. I linked a very detailed thread that explained why the VRMs are important, especially when a CPU is drawing an asston of power, and you completely ignored it; good job.

Here's a good example of a killer 4+1 phase board that got fried by BD's embarrassingly high power draw.

[Image: fried MSI 970A-G45]


See: http://www.overclock.net/amd-cpus/1133267-tried-oc-pc-died-2.html


Oh, and that UD5 AMD board you keep referring to has major VDroop issues when overclocking.
 
As much power as BD draws, why in the hell would anyone cheap out on power circuitry when overclocking it?
 
Look at my sig.

I have plenty of experience with Bulldozer. Do you? No.

Don't debate with me when you have NO FIRST-HAND KNOWLEDGE.

You don't need more than a 4+1 split-plane power phase; the important parts are that you have quality VRMs, that they are kept cool, and that they have a proper amperage rating. My UD3 has an 8+2 power phase, but that's a moot point.

My FX hit 5 GHz, and it goes higher as well.

I've tested it on several boards now; most have been able to hit 5 GHz, with some hitting 4.8 - yes, on the dreaded 4+1 split-plane power phase boards.

I've tested the ASUS Sabertooth, UD5, UD7, GA-970A-D3, and the Crosshair V. Currently being reviewed is the UD3.
 
I'm not debating anything. I was simply asking why anyone would cheap out on power circuitry when overclocking. Not every individual CPU, even of the same type, overclocks the same; better power circuitry simply helps eliminate problems with something that's not guaranteed and varies from CPU to CPU.
 
2. Bulldozer is part of the Scorpius platform, which brings a bunch of new features to the mainstream market. With proper marketing it will sell well in consumer stores.

Such as? I am now using a 1090T on an 890 chipset with a Radeon 5770.

(I only do very light gaming in older games, so I didn't need a high-spec card.)
Let's analyze this move to the Scorpius platform:
1090T vs. FX-8150 - $250+ for a negligible upgrade in actual performance
990FX motherboard - $200+ for nil upgrade in actual performance
Radeon 6770 - $150 for a negligible upgrade

I honestly do not need one single percent more GPU performance, but I would love the hell out of much better CPU speed. And the 990FX is just an 890FX with a few more bells and whistles, which I don't use.

I suspect that for a large number of people the situation is similar. Maybe not on this forum, but I know a lot of people who build a PC with the best CPU possible and do their gaming on a console.
The Scorpius platform, like the WR overclock and the "FX Unlocked" shit they have plastered all over any official images of anything loosely related to the processor, is just pure marketing hype.
 
Such as? I am now using a 1090T on an 890 chipset with a Radeon 5770.

(I only do very light gaming in older games, so I didn't need a high-spec card.)
Let's analyze this move to the Scorpius platform:
1090T vs. FX-8150 - $250+ for a negligible upgrade in actual performance
990FX motherboard - $200+ for nil upgrade in actual performance
Radeon 6770 - $150 for a negligible upgrade

I honestly do not need one single percent more GPU performance, but I would love the hell out of much better CPU speed. And the 990FX is just an 890FX with a few more bells and whistles, which I don't use.

I suspect that for a large number of people the situation is similar. Maybe not on this forum, but I know a lot of people who build a PC with the best CPU possible and do their gaming on a console.
The Scorpius platform, like the WR overclock and the "FX Unlocked" shit they have plastered all over any official images of anything loosely related to the processor, is just pure marketing hype.


While I see your point, and it is correct, I'd like to point out that $150 to upgrade to the same card would be a bad plan no matter how much money one has.

Otherwise, you're dead on. It's just not worth it to upgrade from a Thuban to Bulldozer, in my opinion. I thought about pulling the two Thubans I have in my secondary rigs around the house but decided it wasn't worth the money. I am running 990FX and 990X chipsets in my rigs due to SLI/CrossFire support and its convenience.
 
I think the problem is a bottleneck caused by the arrangement of the chip and the quality of the materials used.
 
Wasn't Bulldozer originally set for a 2009 or 2010 release? It might have been impressive two years ago.
 
Wasn't Bulldozer originally set for a 2009 or 2010 release? It might have been impressive two years ago.

The 2009 version on 45 nm was scrapped, according to AMD. This BD is a different version of that. Does anyone really know what AMD even changed between the two designs?
 
They changed the wrong stuff!
I regret buying my AM3+ mobo; if I had known the CPU would be that much of a turkey, I'd have got another cheap ASRock like the one that runs great on my other PC build.

Power consumption is higher on FX than on Phenom II, and overclocking FX is a huge power draw. The performance does not scale well even when running FX at a high overclock.
 
No, a 4+1 won't be OK for taking BD to 4.5 GHz+ with any kind of real reliability / motherboard longevity. I linked a very detailed thread that explained why the VRMs are important, especially when a CPU is drawing an asston of power, and you completely ignored it; good job.

Here's a good example of a killer 4+1 phase board that got fried by BD's embarrassingly high power draw.

[Image: fried MSI 970A-G45]

See: http://www.overclock.net/amd-cpus/1133267-tried-oc-pc-died-2.html

Oh, and that UD5 AMD board you keep referring to has major VDroop issues when overclocking.
Crap, this is the board that I just bought.....:-/

Ah well, hopefully the 95 W 8-core comes out around the beginning of the year.
 
That one sounds much more competitive with the 2500K.
I'm actually using it for the multi-threaded awesomeness... lol. I'm using this as my ESXi server, and I need the lower TDP because I want to keep it on 24/7. 8 cores + 95 W TDP = happy Joe.
 
It seems to me the whole Bulldozer design is fighting within itself for shared resources, leading to slow performance. Anyone else notice this?

If you take an FX-4100 and OC it to 5 GHz, it will score around 3.35 in the CB 11.5 multi-threading test. My X3, unlocked to an X4 and running @ 3.875 GHz, scores 4.07. And I paid $84.99 for it last year!
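
Dividing those quoted scores by clock speed makes the per-clock gap obvious (this is just the numbers above, nothing measured separately):

#include <stdio.h>

/* Back-of-the-envelope per-clock comparison using only the scores
 * quoted above. */
int main(void) {
    double fx4100 = 3.35 / 5.000;    /* FX-4100 @ 5.0 GHz       */
    double athlon = 4.07 / 3.875;    /* unlocked X4 @ 3.875 GHz */
    printf("FX-4100:     %.2f CB points per GHz\n", fx4100);   /* ~0.67 */
    printf("Unlocked X4: %.2f CB points per GHz\n", athlon);   /* ~1.05 */
    return 0;
}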

There has to be some horrible resource sharing in each module to turn out numbers like that. Hopefully there is a software/BIOS fix coming SOON! Because AMD really needs to get their engineers on this stuff AS SOON AS POSSIBLE! This is crazy!

Video Link of 4.9 GHz FX-4100 CB 11.5 test score.
 
It seems to me the whole Bulldozer design is fighting within itself for shared resources, leading to slow performance. Anyone else notice this?

If you take an FX-4100 and OC it to 5 GHz, it will score around 3.35 in the CB 11.5 multi-threading test. My X3, unlocked to an X4 and running @ 3.875 GHz, scores 4.07. And I paid $84.99 for it last year!

There has to be some horrible resource sharing in each module to turn out numbers like that. Hopefully there is a software/BIOS fix coming SOON! Because AMD really needs to get their engineers on this stuff AS SOON AS POSSIBLE! This is crazy!

Video Link of 4.9 GHz FX-4100 CB 11.5 test score.

Exactly!
Who would have thought FX would be this poor?
You are right: the FX-4100 simply doesn't have the performance to even match a quad-core Athlon II, even overclocking the crap out of it. Not a hope.

This processor actually costs more than an Athlon II, and I paid a stupidly low price for my 840, which again beats up the FX 4-core easily. If folks had known FX would be this bad performance-wise, most probably wouldn't have bothered buying a swanky AM3+ board.

AMD needs to grasp this simple fact: NEW = IMPROVED/BETTER,
not NEW = WORSE.

No wonder folks are angry; FX just doesn't have the legs, even clocking it at much higher frequencies.
 
I will say that most benchmarks with Bulldozer are going to look weak.

There are a couple of things going on.

1. The CPU has clock rate problems, i.e., under full load it will drop to a lower speed - a throttling issue. The fix is to open up AMD OverDrive, turn Turbo Core on and then back off, and that stops it. Perhaps that is only on my UD3, but Kyle even pointed it out in his review.

2. Sharing resources is a bad idea in itself, but there are also problems with cache thrashing, because the scheduler doesn't know which cores are part of which modules and so on (see the affinity sketch at the end of this post).

3. Load-line calibration seems to have some issues, as quite a few users are having vcore drop issues under full load. While this wouldn't really affect performance per se, it's a problem.

I expect the FX series will be a lot stronger if they fix the throttling issue and the cache thrashing.
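
On point 2, one blunt workaround while schedulers catch up is pinning threads yourself so lightly threaded work lands on one core per module instead of two threads sharing a module's front end and L2. This is only a sketch, and it assumes the common numbering where cores (0,1), (2,3), (4,5), (6,7) are the module pairs; compile with -pthread on Linux:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin a thread to a single core so it doesn't share a module with
 * another busy thread. */
static void pin_to_core(pthread_t t, int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(t, sizeof(set), &set);
}

static void *worker(void *arg) {
    (void)arg;
    /* ... the thread's actual work would go here ... */
    return NULL;
}

int main(void) {
    pthread_t threads[4];
    for (int i = 0; i < 4; i++) {
        pthread_create(&threads[i], NULL, worker, NULL);
        pin_to_core(threads[i], i * 2);   /* cores 0, 2, 4, 6: one per module */
    }
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}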
 
The management at AMD discovered the joys of freebasing crack cocaine, at least that's my best theory on the situation.
 
Short Answer: AMD fucked up

Long answer:

1: They tried to put a new CPU on a new process; they should have released a 32nm Thuban to perfect the process, then moved to BD.
2: They got rid of all the good engineers and let an automated design system design most of the CPU.
3: They designed it too big.
4: Crappy cache size, latency, thrashing - it goes on.
5: It sucks way too much power, especially when you overclock it.
6: Its pipeline is too long.
7: It's overpriced.
8: Its modules have too many shared resources.
9: It's primarily designed as a server processor that is being put into desktops too.

I think that covers a portion of its faults.
 
Short Answer: AMD fucked up

Long answer:

1: They tried to put a new CPU on a new process; they should have released a 32nm Thuban to perfect the process, then moved to BD.

Great point. They should have learned this from Intel by now.
 
Short Answer: AMD fucked up

Long answer:

1: They tried to put a new CPU on a new process; they should have released a 32nm Thuban to perfect the process, then moved to BD.
2: They got rid of all the good engineers and let an automated design system design most of the CPU.
3: They designed it too big.
4: Crappy cache size, latency, thrashing - it goes on.
5: It sucks way too much power, especially when you overclock it.
6: Its pipeline is too long.
7: It's overpriced.
8: Its modules have too many shared resources.
9: It's primarily designed as a server processor that is being put into desktops too.

I think that covers a portion of its faults.

LOL ^ Yup that sums it up! ;)
 
Llano has a slightly higher IPC than Thuban in most workloads (short of some server workloads, basically - the primary benefit of the L3 cache).

Of course, Llano doesn't have an external HT link to its NB - it goes straight into PCIe (its southbridge is connected by a special PCIe link - more PCIe than A-Link was, apparently).
 
Does anyone else think memory bandwidth is hampering performance? Would quad channel memory help?
 
Memory bandwidth is not the problem. The design is the problem: it's a long pipeline, and it's sharing resources in each module, so you do not get genuine 8/6/4-core performance.
Yes, it can work well in highly threaded applications, but in some cases it can be a bit pants as well (Cinebench scores are very much worse than expected).


You can see how badly the design is flawed: even when overclocking these processors heavily, they don't scale that well at all. So forget about ramping clock speeds to sort out the issues. AMD is going to have to go back and address the design problems; my advice would be to ignore these FX processors and see what happens down the road (i.e. next year some time).

Read this; it gives a fairly good idea of what the problem is design-wise:
http://arstechnica.com/gadgets/news...sappointing-debut.ars?comments=1#comments-bar
 