The AMD Reality Check Challenge

The Intel box, I believe, was the new 6-core CPU, which would have been 12 threads. I believe Bulldozer's 8 cores are more similar to an Intel quad with HT than to 8 truly separate cores. But I don't follow AMD that closely, so I don't know for sure.

The AMD Reality Check Results:

System C (Intel Core i7-2700K): 40 Votes
System D (AMD FX-8150): 73 Votes
No Difference: 28 Votes

The 2700K is 4 cores / 8 threads:
http://ark.intel.com/products/61275/Intel-Core-i7-2700K-Processor-(8M-Cache-3_5-GHz)

BD is 8-core; maybe as games use more threads, the better shot AMD has?
8 slow cores vs. 4 fast cores split in half
Who knows
 
lol at the dude's posture in the pic. Seat is too high and/or the table is too low. Egads, you would think they would have decent ergonomics.
 
The better threaded a game is, the better those AMD chips will perform; there is no doubt. They've designed their chips with that in mind: IPC dropped by 10% from the Thubans, but with more integer cores a heavily threaded application still benefits despite that 10% dip.

The issue they have to address is their halfway offload of floating point onto the GPU. We're obviously still not there yet, but AMD has decided to step back on FPU hardware to save die space and transistors. They've got only one 256-bit FPU per module, so effectively only 4 FP cores. Games tend to be quite FP heavy, so Bulldozer is at a natural disadvantage here, and that's why it performs the way it does :/ If it's got a lot of integer calcs, BD will do quite fine; if you tax all 8 integer cores it easily climbs over that -10% speed bump. Things obviously go south as you decrease the thread count, especially for integer work, so the architecture doesn't do as well as the previous-gen Thubans or even Denebs. At 6 threads you'd probably see the Thubans surpass BD (avoiding new instruction sets that the Thuban lacks, because then it's not fair =P). With 4 threads the SBs will do the best, etc. etc.

All of this is a long way of saying that if AMD modified the architecture a bit and addressed the places where it now falls short, you'd get a processor that performs very well at 8 threads (because then you could actually call it an 8-core and not be lying to yourself) and trumps the 8-thread 2600K, and easily the 2500K, in multi-threaded benchmarks (8 threads and more) despite trailing SB by up to 50% in IPC in some tasks and its predecessor by 10%. As a general rule of thumb, BD will do better with games and applications that are more heavily threaded, and the more threads the better -- assuming it's not floating point. Unfortunately that decrease in IPC hurts it everywhere, so its victories tend to be by slim margins, and where the workload isn't heavily threaded it tends to lose by quite a bit.
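
To make that rule of thumb concrete, here's a toy model of the reasoning above in Python; the 10% per-thread penalty and the "only 4 FPUs" assumption are just this post's round numbers, not measurements.

```python
# Toy scaling model of the post's reasoning: BD threads run ~10% slower than
# Thuban threads, integer work can spread over all 8 cores, but FP-heavy work
# effectively leans on only the 4 shared FPUs. Numbers are illustrative only.
BD_PER_THREAD = 0.90   # assumed ~10% IPC dip vs. a Thuban core (= 1.0)

def bd_relative_throughput(threads: int, fp_heavy: bool) -> float:
    usable = min(threads, 4 if fp_heavy else 8)
    return usable * BD_PER_THREAD

print(bd_relative_throughput(8, fp_heavy=False))  # 7.2 -> beats 6 Thuban cores (6.0)
print(bd_relative_throughput(8, fp_heavy=True))   # 3.6 -> why FP-heavy games hurt
print(bd_relative_throughput(4, fp_heavy=False))  # 3.6 -> trails 4 Thuban cores (4.0)
```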
 
Looks like a 2600K, or maybe wait and get an X79 with the Intel Retail Edge summer deal.
Wish I could buy AMD; I loved my 2500+ Barton and 3000+ Venice A64 :(
 
I'm not for AMD or Intel but god damn there is a whole helluva lot of AMD hate in this thread.

It's not hate, it's cynicism. AMD holds an event to determine which system is better. One of those systems just happens to be an AMD system. AMD set up the event, chose the systems and configurations, ran the event, and then produced the results all themselves. The results ended up coming in favor of... shocker... AMD.

It isn't whether or not we like AMD; it's just that in every other statistic, benchmark, and review that is purportedly unbiased, AMD loses. But they hold their own event, with the conditions stated above, and all of a sudden the heavens open up, birds sing, and beautiful pixies come out to sprinkle their magical dust over the systems and participants, showing what a great and fabulous product it is, superior in every way to its competition.

/gag/

Get real, nothing to see here, move on.
 
The better threaded a game is, the better those AMD chips will perform; there is no doubt. They've designed their chips with that in mind: IPC dropped by 10% from the Thubans, but with more integer cores a heavily threaded application still benefits despite that 10% dip.

The issue they have to address is their halfway offload of floating point onto the GPU. We're obviously still not there yet, but AMD has decided to step back on FPU hardware to save die space and transistors. They've got only one 256-bit FPU per module, so effectively only 4 FP cores. Games tend to be quite FP heavy, so Bulldozer is at a natural disadvantage here, and that's why it performs the way it does :/ If it's got a lot of integer calcs, BD will do quite fine; if you tax all 8 integer cores it easily climbs over that -10% speed bump. Things obviously go south as you decrease the thread count, especially for integer work, so the architecture doesn't do as well as the previous-gen Thubans or even Denebs. At 6 threads you'd probably see the Thubans surpass BD (avoiding new instruction sets that the Thuban lacks, because then it's not fair =P). With 4 threads the SBs will do the best, etc. etc.

All of this is a long way of saying that if AMD modified the architecture a bit and addressed the places where it now falls short, you'd get a processor that performs very well at 8 threads (because then you could actually call it an 8-core and not be lying to yourself) and trumps the 8-thread 2600K, and easily the 2500K, in multi-threaded benchmarks (8 threads and more) despite trailing SB by up to 50% in IPC in some tasks and its predecessor by 10%. As a general rule of thumb, BD will do better with games and applications that are more heavily threaded, and the more threads the better -- assuming it's not floating point. Unfortunately that decrease in IPC hurts it everywhere, so its victories tend to be by slim margins, and where the workload isn't heavily threaded it tends to lose by quite a bit.

You pretty much summed up perfectly why BD is at a disadvantage; something I figured out myself a long while back. The only way to fix it is for AMD to increase the number of FPUs and integer cores to bring its performance and IPC on par with, or better than, Thuban.

The question, though: will that make the processor more expensive, larger, and hotter?

The next issue AMD needs to address is that the processor runs hotter than it should, and the pipeline is too long for single-threaded work. Let's not forget that many tests show the cache running slower than the processor. In SB, the cache runs at the same speed as the processor itself.

The last thing is the scheduler. Judging by that most recent Windows patch, AMD needs to work with OS developers and its own engineering team to get threads scheduled more efficiently and effectively on Bulldozer-based processors. Intel had a great idea in assigning port numbers to each instruction thread and scheduling them based on what the instruction thread contains.
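
For what it's worth, here's a minimal sketch of the idea behind that scheduler hotfix, written for a Linux box and assuming logical CPUs 0-7 pair up into modules as (0,1), (2,3), (4,5), (6,7) the way an FX-8150 typically enumerates; it just pins one worker per module so the first four threads never share a module's front end, L2, or FPU.

```python
# A minimal, illustrative sketch (Linux-only) of what the Windows hotfix is
# after: keep the first few busy workers on separate Bulldozer modules.
# Assumes logical CPUs 0-7 map to modules as (0,1), (2,3), (4,5), (6,7);
# check /proc/cpuinfo on a real box before trusting that.
import os
from multiprocessing import Process

MODULE_PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]  # assumed core-to-module map

def worker(module_index: int) -> None:
    first_core = MODULE_PAIRS[module_index][0]
    os.sched_setaffinity(0, {first_core})         # pin this process to one core per module
    total = sum(i * i for i in range(5_000_000))  # stand-in integer workload
    print(f"worker {module_index} ran on CPU {first_core}, result {total}")

if __name__ == "__main__":
    procs = [Process(target=worker, args=(i,)) for i in range(len(MODULE_PAIRS))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```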

And, lastly, as for the AMD Reality Check Challenge, I believe the whole point of the i3 vs. Llano and SB vs. Bulldozer tests is to gather purely subjective observations.

Put it this way: the hardware was nearly identical except for the motherboards and processors. And, for the i3 and Llano test, the on-die integrated GPUs were different.

(Obviously Llano's on-die GPU would be better than the HD 3000 in the i3.)

The point, as I see it, is that those who took this so-called challenge were judging subjectively, in their own opinion, which system played the game more smoothly.

Humans can't count framerates natively. :eek:

If the BD system was at 70 FPS and the SB system was at 90 FPS, can anyone really tell the 20 FPS difference between the two?

The GPUs in both the BD and SB systems were also the same Radeon 7970. There could be a myriad of other factors, such as drivers and board design, that would make the BD system appear smoother, but in reality I'm thinking it's the human eye playing tricks on the person. Let's not forget that at Eyefinity resolutions it's all GPU at that point; the CPU rides shotgun (not backseat) to the GPU.

The only way to prove one is smoother than the other is to take framerate measurements over specific intervals, like in many benchmarks and reviews.
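
Something like the quick sketch below is all it takes, assuming a hypothetical log with one frame time in milliseconds per line (a FRAPS frametimes export can be massaged into that shape); the file names are made up.

```python
# A sketch of "framerate counts over specific intervals": average FPS plus a
# 1%-low figure from a frame-time log. Assumes a hypothetical text file with
# one frame duration in milliseconds per line.
def summarize(path: str) -> None:
    with open(path) as f:
        frame_ms = [float(line) for line in f if line.strip()]
    avg_fps = 1000.0 * len(frame_ms) / sum(frame_ms)
    slowest = sorted(frame_ms, reverse=True)[: max(1, len(frame_ms) // 100)]
    one_pct_low = 1000.0 * len(slowest) / sum(slowest)   # avg FPS over the worst 1% of frames
    print(f"{path}: avg {avg_fps:.1f} FPS, 1% low {one_pct_low:.1f} FPS")

summarize("bd_run_frametimes.txt")   # hypothetical capture from the FX-8150 box
summarize("sb_run_frametimes.txt")   # hypothetical capture from the 2700K box
```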

And, we come to Guru3D's CPU scaling test with the Radeon 7970: http://www.guru3d.com/article/radeon-hd-7970-cpu-scaling-performance-review/9

As the resolution went up, the framerates were either the same or very similar, such as 33 FPS at 2560x1600.

If that is true, then at Eyefinity resolutions the framerates should be very similar, within a few frames per second. Unless there is a person who can definitively tell the difference between 75 and 80 FPS, or a minute difference of 1 to 5 FPS between different systems, I think this whole AMD Challenge proved a single point: as long as the game is running smoothly -- steady framerates above 30 or 60 FPS -- does it matter what CPU you have running inside your system?

Can you even tell the difference?

I can easily tell the difference between a game running at 15 FPS and 30 FPS. However, as the framerate goes higher, I can tell the difference -- just barely -- between 30 FPS and 40 FPS. When the game is running above 40 FPS, I cannot really see much of a difference with my own eyes (I wear glasses since I'm nearsighted) between 50 FPS and 60 FPS, and that's using the highest possible settings. For example, going from a Radeon 3870 to a Radeon 5770 and then a Radeon 6950 using similar settings in Anno 1404 and Company of Heroes, I could tell the difference between the game running sub-30 FPS (per FRAPS) on the 3870 and above 40 FPS on the 6950. It got smoother as I got a more powerful video card.

Since I don't run more than one video card in any system I've owned, Nvidia or AMD, I have yet to see what a game looks like running above 60 FPS at Eyefinity/Nvidia Surround resolutions. But I bet it wouldn't look any different to the human eye than a game running at 50 or 60 FPS.

In FPU-loaded tasks, the Intel system will win thanks to its more efficient design. In non-FPU-loaded tasks, or heavily multi-threaded tasks that take advantage of the 4M/8T design, the BD system might edge a bit higher or match Intel (going by the various benchmarks online).

Regardless, I think the whole point of the Challenge is to prove to people that on subjective grounds alone there isn't much of a difference in what you see and what you play unless you take a good hard look at the numbers. And that's when you'll see the AMD system is actually slower than the Intel system, regardless of how smooth it may look to you. Until the human eye can effectively discern a difference of 1 to 5 FPS (or even 5 to 10 FPS) between two systems when a game is already running above 40 or even 60 FPS, should it matter what CPU you have?

If it seems jerky -- like one CPU is at 15 FPS and the other at 25 FPS -- then yeah, get the faster CPU. By the time you get to resolutions above 1920x1080, you'll need a better, faster video card and a fast CPU to feed data to the video card(s). That's how I've always looked at it. At some point, if it looks smooth, then it should be fine, right?
 
The cache doesn't run at the same speed. The L1/L2/L3 are all slower than the Thuban's, and the L2 is nearly twice as slow as on the SB chips. They can't raise the clock speed of the compute cores and have the cache match that speed, because with their design it's technically impossible; the BD's frequencies would have to be around 2.5 GHz for that, which would cut its performance by 30-40%. They planned for a longer pipeline (25% longer) and wanted to increase clock speed by 30% so overall per-core performance would match the Thuban even if the cache was slower. The clock speeds never made it to 4.5 GHz, software still doesn't take advantage of GPU acceleration outside of maybe 1-2 programs, the IPC took a dip, and they hedged their bets on multi-threaded software to make up that gap. None of these things panned out the way they thought they would 4-5 years ago. If I were to say AMD failed in a single way, it would be that they failed to see how hardware and software would look in 5 years. But that's an expected risk when you take large leaps rather than small steps like Intel's tick-tock -- unlike Intel, you run the risk of making a big jump in the wrong direction rather than a small misstep. The good news is they weren't completely off and it's still salvageable, but it'll take a lot of work.
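
To put rough numbers on that plan (using this post's ballpark figures and the well-known stock clocks, purely as an illustration): per-thread performance scales roughly with IPC times clock, so the plan versus reality looks something like this.

```python
# Rough arithmetic behind the plan above; numbers are illustrative, not measured.
thuban_ipc, thuban_clock = 1.00, 3.3      # baseline, e.g. a 3.3 GHz Thuban
bd_ipc = 0.90                             # ~10% IPC dip vs. Thuban
planned_clock = thuban_clock * 1.30       # the ~30% clock bump AMD wanted (~4.3 GHz)
shipped_clock = 3.6                       # roughly where the FX-8150 actually landed

baseline = thuban_ipc * thuban_clock
print(f"planned: {bd_ipc * planned_clock / baseline:.2f}x Thuban")  # ~1.17x
print(f"shipped: {bd_ipc * shipped_clock / baseline:.2f}x Thuban")  # ~0.98x
```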

The incredible thing is that the Llano has super fast cache -- faster than SB, even -- which shows it is possible to manage that at 32nm with 4 full cores. Lower clock speeds don't necessarily mean poor performance so long as the architecture plans for that. They need to quit chasing clock speeds, beef up those FPUs, and maybe take a few notes from that Llano design.
 

In the end though, with a random sample, if humans really couldn't tell the difference between 70 and 90 FPS etc., you would have expected the results to land much closer to 33% / 33% / 33% across the three options.

Even if you factor in that most people probably won't choose "no difference" (everyone always wants to pick a winner rather than say they both look the same), you would have expected the two systems to be much closer.


But instead you have a 28% / 52% / 20% distribution, which to me does have some statistical significance, even after you account for all the variables.
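
If you want to put a number on that intuition, a quick chi-square goodness-of-fit test on the raw votes (40 / 73 / 28 against an even three-way split) does the trick. This is only a sanity check on the counts; it says nothing about the seating, monitors, self-selection, or any of the other confounds mentioned above.

```python
# Chi-square goodness-of-fit of the observed vote counts against an even
# one-third split. Pure arithmetic, no libraries; it only says the split is
# very unlikely to be chance, not why the AMD box got picked.
observed = [40, 73, 28]                      # Intel 2700K, AMD FX-8150, "no difference"
expected = sum(observed) / len(observed)     # 141 votes / 3 = 47 per option
chi2 = sum((o - expected) ** 2 / expected for o in observed)
print(f"chi-square = {chi2:.1f}")            # ~23.1 with 2 degrees of freedom
# The 0.1% critical value for 2 degrees of freedom is about 13.8, so a split
# this lopsided is very unlikely to be random noise from 141 voters.
```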


In the end, it doesn't really matter though..

Microsoft's Vista had the same mountain to climb: there was so much FUD about it that even though people loved Vista in the blind test, it still couldn't shake the bad publicity.


In the end I think what AMD probably proved is that the GPU is more important than the CPU; Bulldozer or Sandy Bridge, it doesn't matter -- just get a 7970 :)
 
Look at all the nerds that think gaming is only fun if you're tracking your FPS in a spreadsheet. I left that shit long ago and just focused on the price I was paying to have fun playing games. Hey I have a crazy theory, maybe people actually preferred playing on the AMD system because it felt better? That is far more likely than a company pulling a stunt where they'd be easily caught. AMD can't best Intel right now in many benchmarks, but it can still provide an awesome gaming experience. Learn how to not think in binary and grow up.

 
I don't think it was rigged. :) I would like to see if Kyle played on them and which one he picked :)
 
Battlefield 3 is a very popular game that supports Eyefinity, so it's natural that they'd pick something with wide exposure that supports the platform. If it happens to perform better on an AMD processor than on an Intel, then so what? This particular test by AMD proves one thing: more people found BF3 enjoyable on the AMD system than on the Intel system. THAT'S ALL. Nothing more should be read into it than that. That's what the gamer is going to care about. They're not going to care that the i7 has higher synthetic benchmarks; they're going to care that the AMD chip runs the game they want to play better and costs less doing so.

I would hope that every person on this message board would have understood this by now, but it's clear that some people here just troll the boards and never read any of the performance tests. This is exactly the methodology that Kyle uses in testing "real world" game performance. The [H] stopped using benchmark software precisely because benchmarks are useless except to show how well something performs in a benchmark, and because hardware manufacturers were caught trying to pad their benchmark scores in 3DMark. I was overjoyed when Kyle went to the new testing methodology because it changed from "this graphics card or CPU is faster/better" to "this is the experience you can expect from this piece of hardware and how it stacks up against similar hardware in these games". Seriously, am I the only one on this message board who actually reads the reviews on this site and visits the site primarily for that reason?


It's hard to take the "AMD runs BF3 better" comment seriously when the game runs almost exactly the same on a dual-core i3 or a quad i7. It's a completely GPU-bound game.
 
Look at all the nerds that think gaming is only fun if you're tracking your FPS in a spreadsheet. I left that shit long ago and just focused on the price I was paying to have fun playing games. Hey I have a crazy theory, maybe people actually preferred playing on the AMD system because it felt better? That is far more likely than a company pulling a stunt where they'd be easily caught. AMD can't best Intel right now in many benchmarks, but it can still provide an awesome gaming experience. Learn how to not think in binary and grow up.

Simple fact is, I don't think in binary; I think in terms of what gives me the most for my money. And no matter how many times I have tried AMD CPUs, the simple fact is that, with the way I build my systems, they cannot compete. Many people can definitely feel the performance difference between systems, especially when you start OCing them and doing a variety of tasks.
 
Simple fact is, I don't think in binary; I think in terms of what gives me the most for my money. And no matter how many times I have tried AMD CPUs, the simple fact is that, with the way I build my systems, they cannot compete. Many people can definitely feel the performance difference between systems, especially when you start OCing them and doing a variety of tasks.

Many people, except those people in that blind test, I guess. I'm pretty sure nobody is saying AMD is better now. What they are saying, though, is that it's "fine" and not as bad as the fanboys are saying. Second place is not 80th place.
 
Would you both be saying this if Intel had done this and the Intel machine came out ahead, or would you say Intel was obviously faster because AMD sucks? Assumption of bias and proof of bias are very different things. After all, I have no way to prove you are both Intel fanboys, but I could assume it based upon your postings, the same way you assume AMD rigged the test without having any direct evidence. Can you even admit the possibility that the demonstration was not deliberately rigged in AMD's favor, or are you so blinded by fanaticism towards Intel that you are incapable of seeing things from any other point of view?

I'm sorry that empirical testing and logical inference are so difficult.
 
It's hard to take the "AMD runs BF3 better" comment seriously when the game runs almost exactly the same on a dual-core i3 or a quad i7. It's a completely GPU-bound game.

Anybody who thinks that BF3 is entirely GPU-dependent is one of the following:

A.) A fucking moron who thinks they know what they're talking about
B.) Someone who has never played the multiplayer and so can't give an accurate opinion
C.) Someone who has only looked at single-player benchmarks
D.) Someone playing BF3 maxed out on a shit GPU, yielding garbage frame-rates anyway
 
It's hard to take the "AMD runs BF3 better" comment seriously when the game runs almost exactly the same on a dual-core i3 or a quad i7. It's a completely GPU-bound game.

And yet the blind results say otherwise.
 
Anybody who thinks that BF3 is entirely GPU-dependent is one of the following:

A.) A fucking moron who thinks they know what they're talking about
B.) Someone who has never played the multiplayer and so can't give an accurate opinion
C.) Someone who has only looked at single-player benchmarks
D.) Someone playing BF3 maxed out on a shit GPU, yielding garbage frame-rates anyway


Thank you! On a 64-man map I see 100% load across all 8 threads at stock speeds. I need to OC to 3.8 with a 4870; with a 560 Ti, you'd need a 2600K @ 4.0 or better.
 
This test wasn't about objective frames per second, where we all know Sandy is superior, but about subjective experience. Nobody knew the SB rig was getting better frames; they just liked the way the Bulldozer rig "felt". It was designed to be subjective, and Bulldozer won. Who knew...
 
This is the equivalent of asking which car is better: the one with the faster top speed or the one that gives a better ride. Which you prefer is up to you.
 
People want AMD to suck so bad it's funny.

It really is. I'd say 90% of the haters have forgotten what "real world performance" means. Getting that extra 5 FPS isn't going to matter in the long run, and it certainly isn't going to matter when you need it most.

Intel clearly makes a faster processor when it comes to CPU-intensive tasks, but at the end of the day most games are GPU-intensive, and that is unlikely to ever change. If one of my family members wanted me to build them a mid-range gaming rig, I'd be slapping a Bulldozer in it all day long.
 
This test wasn't about objective frames per second, where we all know Sandy is superior, but about subjective experience. Nobody knew the SB rig was getting better frames; they just liked the way the Bulldozer rig "felt". It was designed to be subjective, and Bulldozer won. Who knew...

This, and the sample size is so low that it doesn't really give any information at all.
 
I'm sorry man, but you really don't know what you're talking about when it comes to this topic. Having high-end CFX/SLI on a single screen causes CPU limitations in almost ALL cases. I game with a 120Hz monitor, so I strive to keep at least a 120 fps average. Having a better CPU certainly helps keep the GPU usage up, especially in CPU-intense multiplayer, thus producing a higher, more stable frame-rate. In most cases when it comes to CPUs you give a good argument, but I'm afraid you really don't know what you're talking about when it comes to multi-GPU configurations, much less Eyefinity. I have had an Eyefinity setup before, as well as several multi-GPU machines, along with many hours tweaking settings and finding out what works. When it comes to this, I know what the fuck I'm talking about. The only reason you would need a more powerful CPU in Eyefinity is the increased FOV, which stresses the CPU because there is more action on the screen to process. Most of the power has to come from the added GPUs because of the sheer increase in pixels. At 1920x1080, 6970s are usually overkill and can easily cause a CPU bottleneck. For someone like me who likes a 120+ fps average, having a better CPU can certainly help keep the two GPUs at a high usage rate, keeping the frame-rate as high as my 6970s can possibly produce.

P.S
I got my 8150 for $240 and dropped it into my 890FXA-GD70, replacing my Phenom II 970, which yielded very nice fps increases across the board, especially in BF3. In fact, it DOUBLED my GPU usage (from 40-50% to 90-99%) in 64-player servers, DOUBLING my frame-rate. Oh wait, ON A SINGLE SCREEN @ 1920x1080. So you must be one of those people who doesn't know what the fuck he's talking about.
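
That "doubled GPU usage" claim is easier to see with a crude bottleneck model: the delivered frame rate is capped by whichever of the CPU or GPU finishes its part of the frame last, and GPU usage roughly tracks how busy the GPU is at that delivered rate. The numbers below are invented purely to illustrate the shape of it.

```python
# Crude CPU/GPU bottleneck model with invented numbers, for illustration only.
def delivered(cpu_fps_limit: float, gpu_fps_limit: float) -> tuple[float, float]:
    fps = min(cpu_fps_limit, gpu_fps_limit)      # whichever side is slower sets the pace
    gpu_usage = 100.0 * fps / gpu_fps_limit      # rough GPU utilization at that pace
    return fps, gpu_usage

print(delivered(cpu_fps_limit=45, gpu_fps_limit=100))  # (45, 45.0)  CPU-bound, GPU half idle
print(delivered(cpu_fps_limit=95, gpu_fps_limit=100))  # (95, 95.0)  faster CPU lifts the cap
```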

Actually, I also want to know who buys a 6970 Xfire setup to play at 1920x1080. It's like the guy I saw on another forum who had quadfire 6990s and plays on a single 1920x1080 screen...
 
Again, I don't think this was meant to be a scientific study with the results to be published in a scientific journal or anything. It was simply an attempt to show people that Bulldozer isn't a flaming bag of crap like it's been made out to be, and to give them something to help rejuvenate Bulldozer's image a little. I think it was a good idea.

Actually, I also want to know who buys a 6970 Xfire setup to play at 1920x1080. It's like the guy I saw on another forum who had quadfire 6990s and plays on a single 1920x1080 screen...

They're out there, dude. There was a guy on OCN the other day asking if an 1100 would bottleneck his THREE (!) 7970s for his single 27", 1920x1080 monitor! :eek:
 
Again, I don't think this was meant to be a scientific study with the results to be published in a scientific journal or anything. It was simply an attempt to show people that Bulldozer isn't a flaming bag of crap like it's been made out to be, and to give them something to help rejuvenate Bulldozer's image a little. I think it was a good idea.

Hypothetically speaking, if they'd had a Trinity APU sample there that you could play with, one that showed a good performance increase from the BD cores, would showing that have been better than something like this? Or even the same 17W Trinity APU they had at CES? Gaming at 720p or 1080p on a 17W APU with high settings is something Intel is still years away from being able to do, and AMD seems to have already done it. Those are very exciting products, and I think a lot of us are eagerly awaiting results from the revision rather than a staged stunt that proves absolutely nothing other than "don't buy either of these chips for a single GPU at Eyefinity resolutions when you'd get more from a second GPU."

Instead of conducting an ill-conceived test they should have shown what's in the works, and they do have some very interesting chips/GPUs coming out soon. Nothing short of releasing Trinity/Vishera will redeem Bulldozer, and certainly not shows like this. I'm glad they're catering to the enthusiast, but after admitting they released a desktop turd that was meant for the server space, maybe they should rethink their approach. On the one hand you've got AMD admitting the chip wasn't designed with the enthusiast in mind, and on the other the marketing and PR guys are saying it's a gamer's chip. I can tell you with absolute certainty that one of those groups is most definitely wrong.

I don't hate AMD, but I do hate Bulldozer. It just seems like they completely skipped over us when they designed the thing, and it could have been much, much better. Now they're trying to redeem it as their 45nm production is completely phased out. Bleh... At least give us a reason to be hopeful. The sooner they realize BD sucks, the sooner they'll realize what Piledriver should look like.
 
On the one hand you've got AMD admitting the chip wasn't designed with the enthusiast in mind, and on the other the marketing and PR guys are saying it's a gamer's chip. I can tell you with absolute certainty that one of those groups is most definitely wrong.

I don't hate AMD, but I hate Bulldozer. It just seems like they completely skipped over us when they designed the thing, and it could have been much, much better.

On one hand it multitasks great, as a "server chip" would; on the other, you can overclock it and tweak it really well, as "gamers" would. So it's not like either group is far off base.
 
Except that their engineer admitted it was made with the server market in mind, with the desktop segment as an afterthought.

It is also important to note that the "Bulldozer" architecture is configured and optimized for server throughput. The two integer execution cores present in Bulldozer are designed to deliver area- and power-efficient multi-threaded throughput.

http://www.hardocp.com/article/2011/11/29/hardocp_readers_ask_amd_bulldozer_questions/1

At least he's honest there. When I first read that I was hoping they'd at least come out and say it, and though you had to sift through the marketing jargon, it was confirmed. I understand they want to make money off these chips, but pulling stunts like that to sell them to gamers is about as truthful as Intel was with the VLC video of their laptop "playing a DX11 game."
 
The term "gaming" can be vague is my point. The way I game most of the time is I run multiple 3D clients simultaneously after overclocking my chip as high as it can go. Has anyone benchmarked bulldozer that way? Who's to say it isn't actually faster for the way I might use it? Who makes the rules for what a "gaming" chip is? Bulldozer runs all games quite well, therefore I'd say it's a "gaming" chip. I get what you're trying to say, but you can't honestly say it's NOT a gaming chip. I mean who owns that definition anyway?
 
Well, if you're abiding by that vague a definition of what a "gaming chip" really is, then you might as well throw every chip in there. By that logic the term "gaming chip" isn't a valid way to describe a CPU, and in reality I think we'd both agree it isn't. A good CPU is a fast CPU: one that handles a lot of work quickly, and more quickly than its competition. Which chip is the better "gaming chip" CAN be determined, and trusty sites like [H]ardOCP do it incredibly well and the way it should be done. That's why we're here :)

http://www.hardocp.com/article/2011/11/03/amd_fx8150_multigpu_gameplay_performance_review/

Now ask yourself again which chip is the better "gaming chip" and you'll very quickly come to the same answer: the cheaper 2500K, by a country mile. I sure as hell trust these guys a lot more than AMD's marketing team. So it's not about whether the 2700K and 8150 are equal in gaming performance (they're not), but why AMD chose to do something like this at all when they've got far more exciting things in the pipeline. I mean, anyone who came here to [H] to comment on this thread and say "See! They're equal!" shouldn't be here at all. I think I can speak for all of us when I say I'd have loved to see some Trinity/GPU/Piledriver info or a presentation instead.
 
I don't hate AMD, but I do hate Bulldozer. It just seems like they completely skipped over us when they designed the thing, and it could have been much, much better. Now they're trying to redeem it as their 45nm production is completely phased out. Bleh... At least give us a reason to be hopeful. The sooner they realize BD sucks, the sooner they'll realize what Piledriver should look like.

It sure seems like you do, dude. You're in every Bulldozer thread tearing it apart. Now it's cool and all 'cause you seem like you know what you're talking about, and you're tearing it apart with facts and figures and not fanboy temper tantrums, so I'm not calling you guilty of nerdrage or nothing, but it's obvious it's under your skin. ;)

I just don't think Bulldozer sucks. I admit it's not as good as Sandy Bridge, and it's not a very good successor to the Phenom line since it barely outperforms it in anything that isn't heavily threaded, but that doesn't mean it sucks. It was a huge disappointment, definitely, because we waited so long for it and had such high expectations, but it's still a very capable processor.

I'm sure AMD realizes Bulldozer's shortcomings all too well, and I doubt they'll repeat those mistakes with Piledriver. They did a great job rebounding from Phenom I, so I'm confident they can do the same with BD, 'cause BD is nowhere near as bad as Phenom I was.
 
Haha, thanks for the compliments ;) I try to keep everyone informed and hope I've done that with the little I know. I'm somewhat of an AMD fanboy, so that's why you see me tearing Bulldozer apart while claiming Thubans are the greatest thing since sliced bread. Can't tell you how many Barton systems I've built. I still have my AMD64 3000+ system, though it's sitting in a closet beneath piles of boxes now. By far my favorite build.

It doesn't suck, but it's not an improvement; it's a side-grade. It's not an Intel NetBurst-style fail where the new chips are worse than the old ones, but AMD did stall. They can't afford to maintain as many different architectures targeted at different market segments as Intel can, and that's one of the reasons BD underperforms. When they made a server-targeted chip they had to sacrifice IPC and single-threaded gains on the desktop in favor of adding more integer cores and more cache (not that IPC doesn't matter in server space, but more cache helps more there). They've actually got a lower market share in the server segment than ARM does, with Intel holding a 90%+ lion's share, so they attacked the one place that needed the most work. In that sense they did the right thing, but as a desktop gamer that doesn't mean I can't go kicking and screaming about it =P

I think they will rebound, probably putting it within spitting distance of the 2600K in multi-threaded applications with somewhere around a 10-15% IPC increase over the Thubans, but this time around they don't have a die shrink to rely on like they did going from Phenom I to Phenom II. They've got a lot more work to do now than they did then if they want to catch up. #1 should be reducing the prices on Bulldozers... it sucks recommending Intels all the time.
 
True, and Phenom I actually had a literal defect that they could hunt down and fix. Bulldozer isn't defective, it just isn't fast enough.

Is there nothing they can do to tweak this architecture, or are they going to have to go with a totally new architecture to get performance up again?
 
I think so, and they'd probably start by addressing the L1 cache size, which it seems they already have, doubling it in fact. The pipeline isn't atrociously long, but if they're able to decrease the pipeline length and the reliance on clock speed (i.e., address IPC), they'll also be able to decrease the cache latency. AMD processors' cache speed is independent of the CPU clock speed, so the cache doesn't run at the same frequency as the rest of the CPU like it does on SB.

(chart: SiSoft Sandra cache/memory latency comparison)


Too much cache, and it's too slow. You really don't need 8MB of L2, and you definitely don't need that much if you're already packing 8MB of L3.

They should also streamline just how much sharing happens within a module so there's less of a tax when two threads sit on the same module. I'm not sure what's causing it, but it's pretty significant and means that even with 8 integer cores the BDs perform a lot like 6 cores rather than 8 (which is why the Thubans are so close in multi-threaded apps where BD should shine). My guess is that the cache sharing is a detriment, or that the fetch/decode hardware shared between the two cores in a module needs more optimization. -20% is too much, and it shows whenever a module gets two threads. If they can slim that down further, they would reach 2600K levels of multi-threaded performance even without addressing IPC or clock speeds.

Our results suggest that Bulldozer takes a 15-20 percent hit compared to a standard multi-core configuration. That’s actually pretty decent trade-off, particularly considering that this is the first Bulldozer-style CPU AMD has built. Unfortunately, the performance hit is significant enough to undermine AMD’s strategy of outflanking Intel by offering more CPU cores. Eight Bulldozer cores end up looking a lot like six Thuban cores, which is part of why AMD’s new chip struggles to pull away from its older cousin.

http://www.extremetech.com/computing/100583-analyzing-bulldozers-scaling-single-thread-performance
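
A back-of-the-envelope version of that ExtremeTech point, treating their 15-20% figure as an assumed penalty on the second thread in each module (a crude model, not a simulation):

```python
# Crude model: the first thread on a module runs at full speed, the second
# at (1 - shared_penalty) of full speed. The 20% figure is the article's.
def effective_cores(threads: int, modules: int = 4, shared_penalty: float = 0.20) -> float:
    first = min(threads, modules)                        # one thread per module first
    second = max(0, min(threads - modules, modules))     # threads that must share a module
    return first + second * (1.0 - shared_penalty)

print(effective_cores(8))  # 6.4 -> "eight Bulldozer cores end up looking a lot like six Thuban cores"
print(effective_cores(4))  # 4.0 -> no sharing penalty until you go past four threads
```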

It performs well in SSE4/AVX, but it should perform better. It also looks like anything below SSE4 performs worse than on Thuban/SB. AMD can't expect the entire software world to recompile their code for newer architectures/instruction sets, so they need a fix there. Then they also need to address the FPU within each module so that you get more than 4 FPU cores. I doubt they'll touch the FPU much, or drastically enough to see significant performance gains there.

I think what they'll probably do is increase clock speed through process maturity, keep the chips at 125W, decrease the L2 cache size and increase the L1, decrease latency, and introduce new instruction sets. It'll likely net a 10-20% increase in performance but won't catch Ivy, especially when overclocking is considered. Basically, single-threaded performance will increase slightly, but multi-threaded performance should increase significantly. Nothing short of a complete overhaul will close the single-threaded gap with SB/IB, so they'll try to improve where they designed it to excel in the first place, and that's multi-threaded apps.
 
I'm sorry that empirical testing and logical inference are so difficult.

Resorting to personal attacks and insults is a predictable reaction in a debate when one is unwilling to answer directly. Thus it is logical to infer that you are an Intel brand loyalist, and are incapable of setting aside, or unwilling to set aside, your own personal bias against AMD and admit that the test could in fact have been legitimately conducted. While it is very probable the test was biased, if it were not and the results are accurate, that would contradict your worldview. As a brand loyalist you have an emotional investment in perceiving Intel's product as superior to AMD's under all circumstances; thus, to avoid this negative impact on your perceptions, you must dismiss the results of the test without any direct empirical evidence to support your assumption. In short, you don't want someone to have a better experience on an AMD system, therefore it must have been impossible, or else some other explanation must exist, in your mind, to invalidate the results so that you can maintain the status quo.

While this may safeguard your emotional investment, it is irrational from a scientific standpoint, in which data that contradicts a hypothesis must be accepted over the hypothesis, not the other way around. Brand-loyalty choices among consumers are often defended in this manner for this reason. Even when faced with evidence of a superior product, a large percentage of consumers remain with the brand they've been using because of an emotional attachment to it. It is the same mechanism at work when individuals continue to support a public figure such as a politician, religious leader, or sports idol after being presented with unacceptable behavior by said figure. The individual is unwilling to take a loss on the loyalty investment, and so chooses instead to ignore the evidence. This mechanism is well known and understood by marketing strategists, politicians, and focus groups.

If you prefer Intel over AMD, that is your choice and I have no quarrel with that. You may, however, wish to refrain from laying insults at the feet of those who have a very firm grasp of the scientific method and how to apply it to less than ideal testing conditions.
 
I think so, and they'd probably start by addressing the L1 cache size, which it seems they already have, doubling it in fact. The pipeline isn't atrociously long, but if they're able to decrease the pipeline length and the reliance on clock speed (i.e., address IPC), they'll also be able to decrease the cache latency. AMD processors' cache speed is independent of the CPU clock speed, so the cache doesn't run at the same frequency as the rest of the CPU like it does on SB.

(chart: SiSoft Sandra cache/memory latency comparison)


Too much cache, and it's too slow. You really don't need 8MB of L2, and you definitely don't need that much if you're already packing 8MB of L3.

They should also streamline just how much sharing happens within a module so there's less of a tax when two threads sit on the same module. I'm not sure what's causing it, but it's pretty significant and means that even with 8 integer cores the BDs perform a lot like 6 cores rather than 8 (which is why the Thubans are so close in multi-threaded apps where BD should shine). My guess is that the cache sharing is a detriment, or that the fetch/decode hardware shared between the two cores in a module needs more optimization. -20% is too much, and it shows whenever a module gets two threads. If they can slim that down further, they would reach 2600K levels of multi-threaded performance even without addressing IPC or clock speeds.



http://www.extremetech.com/computing/100583-analyzing-bulldozers-scaling-single-thread-performance

It performs well in SSE4/AVX, but it should perform better. It also looks like anything below SSE4 performs worse than on Thuban/SB. AMD can't expect the entire software world to recompile their code for newer architectures/instruction sets, so they need a fix there. Then they also need to address the FPU within each module so that you get more than 4 FPU cores. I doubt they'll touch the FPU much, or drastically enough to see significant performance gains there.

I think what they'll probably do is increase clock speed through process maturity, keep the chips at 125W, decrease the L2 cache size and increase the L1, decrease latency, and introduce new instruction sets. It'll likely net a 10-20% increase in performance but won't catch Ivy, especially when overclocking is considered. Basically, single-threaded performance will increase slightly, but multi-threaded performance should increase significantly. Nothing short of a complete overhaul will close the single-threaded gap with SB/IB, so they'll try to improve where they designed it to excel in the first place, and that's multi-threaded apps.

Good read. Is Piledriver going to be a new architecture, or is it just Bulldozer with some tweaking here and there?
 
Resorting to personal attacks and insults is a predictable reaction in a debate when one is unwilling to answer directly. Thus it is logical to infer that you are an Intel brand loyalist, and are incapable of setting aside, or unwilling to set aside, your own personal bias against AMD and admit that the test could in fact have been legitimately conducted. While it is very probable the test was biased, if it were not and the results are accurate, that would contradict your worldview. As a brand loyalist you have an emotional investment in perceiving Intel's product as superior to AMD's under all circumstances; thus, to avoid this negative impact on your perceptions, you must dismiss the results of the test without any direct empirical evidence to support your assumption. In short, you don't want someone to have a better experience on an AMD system, therefore it must have been impossible, or else some other explanation must exist, in your mind, to invalidate the results so that you can maintain the status quo.

While this may safeguard your emotional investment, it is irrational from a scientific standpoint, in which data that contradicts a hypothesis must be accepted over the hypothesis, not the other way around. Brand-loyalty choices among consumers are often defended in this manner for this reason. Even when faced with evidence of a superior product, a large percentage of consumers remain with the brand they've been using because of an emotional attachment to it. It is the same mechanism at work when individuals continue to support a public figure such as a politician, religious leader, or sports idol after being presented with unacceptable behavior by said figure. The individual is unwilling to take a loss on the loyalty investment, and so chooses instead to ignore the evidence. This mechanism is well known and understood by marketing strategists, politicians, and focus groups.

If you prefer Intel over AMD, that is your choice and I have no quarrel with that. You may, however, wish to refrain from laying insults at the feet of those who have a very firm grasp of the scientific method and how to apply it to less than ideal testing conditions.

tl dr buddy

The fact is there is simply no way that an AMD processor could be BETTER than an Intel processor for playing a game. That's the empirical test. It doesn't make more colors on the screen or massage your prostate while you play; the only thing either of them does is affect frame rate. It could be equivalent if the CPU is not the limiting factor, but never better.

The inference comes from a test result that favors AMD heavily. Obviously they rigged the test. It could be as simple as having different chairs, or one monitor sitting under unfavorable ambient lighting.
 
Well, it's Bulldozer-based in the sense that it will use the same resource-sharing/module approach. It's not an entirely new architecture, but rather an improvement on the previous one.

It's Bulldozer with some tweaking, and if they tweak it correctly they could potentially surpass IB in multi-threaded apps/benchmarks. The good news is that Intel is (I think) only doing a die shrink and nothing significant, so IB won't blow anyone away at stock speeds, though at 22nm with tri-gate it should overclock better than SB. Look for Piledriver to be stock-clocked at 4 GHz (without turbo), cheaper than BD to produce, and maybe even cheaper at MSRP.
 
tl dr buddy

The fact is there is simply no way that an AMD processor could be BETTER than an Intel processor for playing a game.

Actually, it's entirely possible in certain circumstances. The framerate can skyrocket in some places and utterly plummet in others, creating jerky performance overall, yet the average FPS at the end can still look awesome. That doesn't mean it was an enjoyable gaming experience, though. Why do you think HardOCP tests the way they do?

What would have been nice to see is testimonials from the testers about why they chose a certain system.
 
Well, it's Bulldozer-based in the sense that it will use the same resource-sharing/module approach. It's not an entirely new architecture, but rather an improvement on the previous one.

It's Bulldozer with some tweaking, and if they tweak it correctly they could potentially surpass IB in multi-threaded apps/benchmarks. The good news is that Intel is (I think) only doing a die shrink and nothing significant, so IB won't blow anyone away at stock speeds, though at 22nm with tri-gate it should overclock better than SB. Look for Piledriver to be stock-clocked at 4 GHz (without turbo), cheaper than BD to produce, and maybe even cheaper at MSRP.

That would be sweet. If Ivy Bridge isn't that big a jump from Sandy Bridge, AMD can catch up. Piledriver doesn't have to be faster than Intel; it just has to be good enough to offer a viable alternative. That's what Phenom II did. It wasn't better, but it was pretty close and a good value. That'll keep Intel honest and keep fanboys like me happy.
 