How will Bulldozer be better than the i7?

I don't think it has anything to do with what they allow. At least not with AMD processor-based chipsets. They do not have the control over board partners that Intel does. If EVGA wants to make an overclockable Opteron board, all they need to do is design it and manufacture it. (Or contract the manufacturing out as they do now.)

Yeah, I agree. The reason I asked the question, however, was because of this quote (in this thread).

Sorry guys, we will not support overclocking on Opteron. It is targeted at the server market. The 890-class chipsets and Phenom processors will allow overclocking, Opteron and 5600 series chipsets will not.
 
As much as AMD and Intel would probably like to eliminate overclocking, the fact is that it isn't up to them. The motherboards largely determine just how overclockable a CPU is. At least to some degree. Intel is rumored to at least be attempting to change that with Sandy Bridge but we'll see if that holds true or not in time.
 
As much as AMD and Intel would probably like to eliminate overclocking, the fact is that it isn't up to them. The motherboards largely determine just how overclockable a CPU is. At least to some degree. Intel is rumored to at least be attempting to change that with Sandy Bridge but we'll see if that holds true or not in time.

Hmmm... true. I wonder if this means the AMD server chipsets have a burned-in list (or a BIOS list?) of "max" allowed speeds.... Nah, that's just dirty speculation!

Honestly, if an overclockable dual/quad G34 board came out, I would immediately jump from my quad SocketF system to it!
 
Well, the chipsets have a reference clock but the motherboard can override that. Most server boards simply don't provide BIOS options for overclocking. However, their chipsets are fully capable of doing so. Will they overclock as well as their desktop counterparts? Unlikely, but it is still possible. It can and has been done in the past. The ASUS L1N64-SLI WS motherboard for the AMD 4x4 platform is a good example of this. ASUS has had other overclockable server boards for some time. MSI has as well, though no models stand out in my mind as they were generally junk.
 
An AMD 8x8 platform with dual BD CPUs would be an ideal choice for me.

I'm going to wait and see how Bulldozer actually performs before thinking anything about it besides "interesting." I'm certainly not going to plan a build around one. Even if Bulldozer actually competes with Gulftown, it remains to be seen how it will fare against Sandy Bridge.
 
I'm going to wait and see how Bulldozer actually performs before thinking anything about it besides "interesting." I'm certainly not going to plan a build around one. Even if Bulldozer actually competes with Gulftown, it remains to be seen how it will fare against Sandy Bridge.

That's why I am holding off upgrading my computer systems. I'd like to first see how a quad-core 2.5GHz BD CPU compares to the fastest Thuban CPU.:confused:

The best idea might be to use a socket similar to the G34s and use a 4+4 to 8+8 BD core CPU on that socket.;)
 
I'm not remotely interested in how it competes with Thuban. If Bulldozer doesn't compete with Sandy Bridge I will view the chips as largely irrelevant for anything but modest budget builds.
 
What AMD should do is use a 1206-pin socket and align pin 1 on the AM3 CPU with pin 1 on the BD socket; that way you have an AM3-compatible socket with 265 spare pins for BD.:cool:
Also they could look at calling the socket something 34;)
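For what it's worth, the pin arithmetic there checks out. A trivial sanity check (941 is the AM3 pin count the post implies via 1206 - 265, not something it states outright):

```python
# Sanity check of the proposed socket's spare-pin count.
am3_pins = 941          # AM3 socket pin count implied by the post
bd_socket_pins = 1206   # proposed AM3-compatible Bulldozer socket
spare_pins = bd_socket_pins - am3_pins  # pins left over for BD signals
print(spare_pins)       # 265, matching the post
```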
 
I'm not remotely interested in how it competes with Thuban. If Bulldozer doesn't compete with Sandy Bridge I will view the chips as largely irrelevant for anything but modest budget builds.

But then how do you define "compete"? As fast or faster per clock/core? Maybe 10% or so slower per clock/core? How does an on-die IGP factor into that? etc.

Personally, for me it's all about bang for the buck. Current PhIIs are around 20% slower per clock than current i5/i7s, but often sell for less and can be unlocked for more cores. If I was gonna buy right now I'd have a hard time choosing between a PhII X4 and an i5 655K.

Right now I'm holding off with my "old n' slow" ~3GHz quad Core 2 as I don't think it'd be worth it for me to upgrade yet. Gonna see how BD does vs. SB first. Hopefully they get close on IPC and the module approach lets them improve multitasking performance while keeping the power/die size down.
 
Well, if AMD does not enter the performance PC market then I will have no choice but to look at Intel's offering rather than AMD's budget/midrange offering.:mad: High-end users are after triple-channel RAM, and when you use 12 to 16 GB of RAM you can disable the swap file in Windows.:cool:

I'm after octa-core for my file server and quad- to octa-core for my other computers.:cool:
 
The motherboards largely determine just how overclockable a CPU is. At least to some degree. Intel is rumored to at least be attempting to change that with Sandy Bridge but we'll see if that holds true or not in time.

I read the rumours saying that only the unlocked multiplier Sandy Bridge chips can be safely overclocked, and if that's true, I'm going with Bulldozer for my next build even if it underperforms vs. Sandy Bridge.
 
To run triple-channel RAM, AMD will need to add 120 pins to the AM3 socket, and that would = a 1061-pin socket; a PCIe controller on the CPU would add another 94 pins = 1155 pins (like Intel), so if AMD went to a 1205-pin socket they would end up with 50 spare pins for future use.;)

I don't care if the socket is a PGA or LGA either.
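The pin budget in that post can be tallied explicitly. A quick sketch; all the per-feature pin counts are the post's own estimates, not official figures:

```python
# Pin budget for a hypothetical AM3-derived socket with triple-channel
# RAM and an on-die PCIe controller, using the post's estimates.
am3_pins = 941           # AM3 socket pin count
third_ram_channel = 120  # extra pins for a third DDR3 channel (post's estimate)
pcie_controller = 94     # extra pins for an on-CPU PCIe controller (post's estimate)

required = am3_pins + third_ram_channel + pcie_controller
spare = 1205 - required  # headroom on the proposed 1205-pin socket

print(required, spare)   # 1155 (same count as Intel's LGA1155) and 50 spare
```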
 
What AMD should do is use a 1206-pin socket and align pin 1 on the AM3 CPU with pin 1 on the BD socket; that way you have an AM3-compatible socket with 265 spare pins for BD.:cool:
Also they could look at calling the socket something 34;)

Actually, that will not work in reality because of 2 big reasons:

1. That would push up the costs for the client infrastructure and 99% of the customers are not going to want to pay more for it (remember that 80-90% of client systems are sold through retail by major OEMs.)

2. That would force tradeoffs in the product. Server, which drives higher margins, would probably win out, and wins on server could potentially limit client performance or features.

You don't want the 2 together, you want them apart, you will probably get better desktop features and pricing that way.
 
Actually, that will not work in reality because of 2 big reasons:

1. That would push up the costs for the client infrastructure and 99% of the customers are not going to want to pay more for it (remember that 80-90% of client systems are sold through retail by major OEMs.)

2. That would force tradeoffs in the product. Server, which drives higher margins, would probably win out, and wins on server could potentially limit client performance or features.

You don't want the 2 together, you want them apart, you will probably get better desktop features and pricing that way.

Well, for the high-end desktop market you will still need an 1155+ pin socket that supports up to 16 cores and more than 16 GB of RAM.:cool:

Motherboard manufacturers are already making XL-ATX motherboards and they fit in all of my full-tower cases.:D
 
Actually, that will not work in reality because of 2 big reasons:

1. That would push up the costs for the client infrastructure and 99% of the customers are not going to want to pay more for it (remember that 80-90% of client systems are sold through retail by major OEMs.)

2. That would force tradeoffs in the product. Server, which drives higher margins, would probably win out, and wins on server could potentially limit client performance or features.

You don't want the 2 together, you want them apart, you will probably get better desktop features and pricing that way.

I can't say I disagree with you.

Really though, come off it. Is this thing going to be faster than the current-gen Intel or not? That's all we really care about.

So far, it looks like that answer is no, unless you write specific software to take advantage of its features.
 
But then how do you define "compete"? As fast or faster per clock/core? Maybe 10% or so slower per clock/core? How does an on-die IGP factor into that? etc.

Personally, for me it's all about bang for the buck. Current PhIIs are around 20% slower per clock than current i5/i7s, but often sell for less and can be unlocked for more cores. If I was gonna buy right now I'd have a hard time choosing between a PhII X4 and an i5 655K.

Right now I'm holding off with my "old n' slow" ~3GHz quad Core 2 as I don't think it'd be worth it for me to upgrade yet. Gonna see how BD does vs. SB first. Hopefully they get close on IPC and the module approach lets them improve multitasking performance while keeping the power/die size down.

"Compete" for me would be clock for clock. They need to be pretty close, maybe trade blows in many benchmarks.

I think it's really sad that the PhIIs, clock for clock, are about the same as Core 2 Quads that came out how much sooner? That said, AMD does offer great bang for the buck and I use them a ton in customers' computers.

But for my personal rig, bang for the buck would have to be amazing for me to not go with the faster choice. Say SB's K series are the only ones that can OC, and for Socket 2011 Intel starts the Ks at $800 (that would suck, lol). If AMD wants to give me a 10% slower CPU for $200 that OCs, then great. But if Intel is at $350 or so and AMD's at $200, then I'll just pay the extra $$ for the better CPU.
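That trade-off is easy to put in numbers. A quick perf-per-dollar sketch using the post's hypothetical figures (a 10% slower chip at $200 vs. the faster one at $350):

```python
# Bang-for-the-buck comparison using the post's hypothetical scenario.
def perf_per_dollar(relative_perf, price_usd):
    """Performance (normalized to the fastest chip = 1.0) per dollar."""
    return relative_perf / price_usd

amd_value = perf_per_dollar(0.90, 200)    # 10% slower, $200
intel_value = perf_per_dollar(1.00, 350)  # baseline, $350

# At this gap the slower chip wins on value, which is the poster's point;
# if the price difference were smaller, the absolute-performance pick wins.
print(amd_value > intel_value)  # True
```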
 
I read the rumours saying that only the unlocked multiplier Sandy Bridge chips can be safely overclocked, and if that's true, I'm going with Bulldozer for my next build even if it underperforms vs. Sandy Bridge.

Don't believe everything you read. That's pure speculation. Doom and gloom with regard to overclocking each of Intel's next-generation processors is par for the course. It hasn't happened yet. Hell, they tried to put an end to overclocking the Pentium 133MHz CPUs back in the day with their SY039-stepping chips. BTW, that's the only chip you couldn't actually overclock. Everything else rumored to not overclock has generally overclocked really well: the Core 2, the Core i7, the Core i5s. We hear this crap all the time.

As for the next generation of CPUs, I will purchase whichever is absolutely the faster of the two. If that's AMD then great. If not, I'm going Intel again. However I will also factor in video card technologies. If AMD has the faster CPU, but I can't run NVIDIA cards then I may end up with Intel anyway. It all depends.
 
Personally I think AMD is capturing a different segment of the market. While trying to compete with Intel, they offer a lower-priced alternative for those who don't feel like paying a premium for extra performance which they don't need...
 
Personally I think AMD is capturing a different segment of the market. While trying to compete with Intel, they offer a lower-priced alternative for those who don't feel like paying a premium for extra performance which they don't need...

That's what they do because they have to. Not because they want to. Ideally they'd control the market segments top to bottom as they did in the Athlon 64 days. (Well from a price/performance ratio, not actual sales numbers which are another subject.) Trust me AMD doesn't want to be the budget processor company. The Athlon FX-51, 53, 55, and 57 and their associated prices are indicative of that. Plus the Athlon 64/X2 lines pretty much cost more than their Intel counterparts most of the time but they did offer better performance almost everywhere. They had cheaper parts than Intel did and more expensive/equally expensive parts. If they can go back to that, they will.

Bulldozer may or may not put them back on top. All I know is that Intel underestimated AMD and took a certain approach with the Pentium 4 which didn't prove to be the best move they ever made. I don't think Intel will be caught off guard like that so easily again. So this time AMD really has their work cut out for them. They will seriously need to ramp up IPC performance before their chips will compete with Intel on anything but the budget segments, which I maintain Intel only allows AMD to rule because crushing AMD outright would get them in all sorts of trouble with the government in the form of anti-trust lawsuits.
 
All I know is that Intel underestimated AMD and took a certain approach with the Pentium 4 which didn't prove to be the best move they ever made.

I was under the impression (and read a few places) that Intel didn't "underestimate" AMD during the Pentium 4 era per se.... they just made the wrong gamble on the P4.... and didn't foresee the problems with ramped-up clock speeds (the voltage leaks, heat), and assumed that process advancements would solve the problems if they did crop up...

I remember reading articles that Intel expected to be up to 10GHz or something crazy like that by this point in time (or even earlier)..... and who knows? The P4 architecture might have been insanely fast had it been able to really scale up in speed as much as Intel had hoped....
 
Well, that particular period of time and what Intel did is a matter of interpretation. Yeah, I'd certainly say that they made the wrong gamble on the Pentium 4 being able to scale the way they thought it would. Hell, the whole semiconductor industry probably figured we'd be well past 5GHz by now. However, I think Intel did underestimate AMD at some point. They could probably have done more during the Pentium III days to design a worthy successor. In a way they did: they had the Pentium M, but kept it for the mobile market. I think they could have ramped up a desktop version earlier than they did.

Again Intel's gamble wasn't entirely wrong. They had the sales numbers even during that time. They just got trashed in the hardware review circles.
 
I remember reading articles that Intel expected to be up to 10GHz or something crazy like that by this point in time (or even earlier)..... and who knows? The P4 architecture might have been insanely fast had it been able to really scale up in speed as much as Intel had hoped....

The only way I can see them getting to 10+ GHz is through the bus.;)
 
Again Intel's gamble wasn't entirely wrong. They had the sales numbers even during that time. They just got trashed in the hardware review circles.

But Intel was guaranteed the sales by default, as AMD didn't have the fab capacity to meet the entire market's needs. IIRC AMD could have provided around 30% of the needed volume, and that was the best case. Also, I don't think AMD could outsource much of their capacity at the time either, per their deal with Intel.

AMD sold very well during that time period; if they had had the capacity I bet you would've seen Intel lose lots more market share. Intel would've taken it back again with C2D of course later on, but AMD would've been doing better financially. They may even have been able to avoid spinning off their fabs into GF.

P4 turned out to be a very mediocre chip from Intel, not bad but not good either. You could argue that it was process/heat limited, but then the chip designers are supposed to take that stuff into account; it's certainly expected of them these days. Given the massive advantage they have in process technology and cash for development, I don't think they deserve any sort of slack when being criticized for their work on the P4.

Well, that particular period of time and what Intel did is a matter of interpretation.
Coulda sworn there was a presentation from one of the lead engineers on the project where he pretty much flat-out admitted that the P4 was marketing driven. He sounded very frustrated and angry, and gave the impression there was lots of infighting on the project during initial planning. Was about an hour long, almost 3 years old now IIRC, so I can't find the stream anymore...
 
But Intel was guaranteed the sales by default, as AMD didn't have the fab capacity to meet the entire market's needs. IIRC AMD could have provided around 30% of the needed volume, and that was the best case. Also, I don't think AMD could outsource much of their capacity at the time either, per their deal with Intel.

AMD sold very well during that time period; if they had had the capacity I bet you would've seen Intel lose lots more market share. Intel would've taken it back again with C2D of course later on, but AMD would've been doing better financially. They may even have been able to avoid spinning off their fabs into GF.

P4 turned out to be a very mediocre chip from Intel, not bad but not good either. You could argue that it was process/heat limited, but then the chip designers are supposed to take that stuff into account; it's certainly expected of them these days. Given the massive advantage they have in process technology and cash for development, I don't think they deserve any sort of slack when being criticized for their work on the P4.


Coulda sworn there was a presentation from one of the lead engineers on the project where he pretty much flat-out admitted that the P4 was marketing driven. He sounded very frustrated and angry, and gave the impression there was lots of infighting on the project during initial planning. Was about an hour long, almost 3 years old now IIRC, so I can't find the stream anymore...

No, Intel managed to get decent sales because of this:

http://www.msnbc.msn.com/id/33882559


Well, the behavior behind that lawsuit was the cause, not the lawsuit itself :p
 
Again Intel's gamble wasn't entirely wrong. They had the sales numbers even during that time. They just got trashed in the hardware review circles.
That's kinda like saying one football team's strategy wasn't entirely wrong because they managed to give the other football team food poisoning before the game.

What sucks for AMD is that they should be suing Dell and the other manufacturers who made deals with the devil in the early 2000s, but they can't because they still have to do business with them.
 
Intel's shenanigans certainly helped hurt AMD, but AFAIK AMD sold everything they could produce even in those times.
 
I was under the impression (and read a few places) that Intel didn't "underestimate" AMD during the Pentium 4 era per se.... they just made the wrong gamble on the P4.... and didn't foresee the problems with ramped-up clock speeds (the voltage leaks, heat), and assumed that process advancements would solve the problems if they did crop up...

I remember reading articles that Intel expected to be up to 10GHz or something crazy like that by this point in time (or even earlier)..... and who knows? The P4 architecture might have been insanely fast had it been able to really scale up in speed as much as Intel had hoped....


Considering my CULV laptop at 1.7GHz outperforms a 3.2GHz Pentium 4 by a not-so-small margin, a 10GHz P4 would probably be around the performance of a 3GHz Nehalem at most.

But still, I'd love to see what kind of clock speed a P4 done at 32nm would do, when the 65nm ones still hold all the records for highest-frequency OC ever (7+GHz on LN2).
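The back-of-envelope scaling behind that estimate, using only the clock speeds mentioned in the post:

```python
# Per-clock ratio implied by the post: a 1.7GHz CULV chip matches (or
# beats) a 3.2GHz Pentium 4, so the newer core does at least 3.2/1.7x
# the work per clock.
per_clock_ratio = 3.2 / 1.7              # ~1.88x, a lower bound per the post
culv_equiv_ghz = 10.0 / per_clock_ratio  # what a 10GHz P4 would roughly equal

# ~5.3 "CULV GHz"; the post then discounts further for Nehalem's higher
# IPC to land at its ~3GHz Nehalem figure.
print(round(culv_equiv_ghz, 1))  # 5.3
```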
 
Gigabyte's GA-890FXA-UD7 XL-ATX motherboard would make an ideal budget server/high-end desktop board, provided it came out with a G34 socket (rather than AM3 or AM3+).;)

I'd go with 6 DDR3 RAM slots, as with the XL-ATX format the size of the G34 socket would mean 1 PCIe slot would have to go if 8 RAM slots were used.:)
Also AMD will have to increase the RAM limit from 16 to 64 GB as 8 GB modules will be hitting the market soon.;)
One G34 socket will be able to support up to 32 cores on that socket.:D

Now if AMD merged the NB990 with SB950 and called it CB995 it would make even more room available on the board.:cool:
 
That's a pretty gross oversimplification.

True, but changes to the execution engine itself (e.g. it's still a 3-issue design, just like the original Athlon) were relatively tame compared to the addition of x86-64, HyperTransport replacing the EV7 FSB, and an integrated memory controller.

Please correct me if I'm wrong, it's been a long time since I dove into the architectural details of (Sledge)Hammer.
 
ScorpiDragon said:
I'd go with 6 DDR3 RAM slots as with the XL-ATX format and the size of the G34 socket would mean 1 PCIe slot would have to go if 8 slots are used.:) Also AMD will have to increase the RAM limit from 16 to 64 GB as 8 GB modules will be hitting the market soon.;) One G34 socket will be able to support up to 32 cores on that socket.:D Now if AMD merged the NB990 with SB950 and called it CB995 it would make even more room available on the board.:cool:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131643
I'm looking at this but want to see some bench tests before I buy one.
 