Cat (Bulldozer) out of the bag! Review here!

Those fabricators @ DH are raking in the ad dollars from all the people clicking those fake charts. This is what, the 4th time they've done this in a month?

Suckers.
 

Like I said earlier: "Real consumer experience is where it's at, IMO."

Reality beats possibly inflated or biased BS pushed just to start flame wars and/or juice sales.

Many people are happy with their Phenom II X4s and X6s. I'm currently using a Q6600 SLACR and it's enough, though if a faster processor comes along cheap enough (not the prices a BD/i5 2500K/i7 2600K is going for), I'll upgrade my AM2 5000+ system. :D
 
Uhh... can we please wait for [Hard]|OCP, AnandTech, Tech Report, ... bit-tech, TechPowerUp, hell, even Silent PC Review to do a ... review?
 
The graphs are pretty messed up. But with an average OC of 5 GHz, this should crush my 1090T @ 3.9 GHz. It'd slot nicely between the 2500K and 2600K. Even if these numbers are true, I'll gladly buy one tomorrow night. I hope it gets better with drivers; funny, I don't remember needing drivers for a CPU, but times have changed, I guess. That said, I'm glad I went with the 2600K when it dropped, because I'd be pissed if I'd held off for this.

The 8150 should make a nice home in my #2 machine, though.
 
I hope it gets better with drivers; funny, I don't remember needing drivers for a CPU, but times have changed, I guess.

If you ever ran XP on an Athlon X2 system, you needed AMD's Dual-Core Optimizer and the K8 processor driver (mainly for Cool'n'Quiet to work).

More than likely, a processor driver will be needed until Windows 8 drops.
 

^^^ Yes, and even chipset drivers were important, but they were never make or break. They didn't magically give you a 20% performance boost; they just put performance where it should've been in the first place. ;)
 
Those graphs are really bad.

You think, whoaaa, that's a big drop, then you look and it's actually ±2 fps.

If BD comes in £100 cheaper than a 2600K, then it will do fine. But I'll wait for the real deal, thanks.
 
^^^ Yes, and even chipset drivers were important, but they were never make or break. They didn't magically give you a 20% performance boost; they just put performance where it should've been in the first place. ;)

I dunno, try running an XP system without the Dual-Core Optimizer. There were times when multi-threaded programs could cause a BSOD.

I've actually experienced said BSODs.
 

I dunno, it's not bad. It tackles two burning questions about Bulldozer:

1. How is the single-threaded performance?

Answer: Phenom II or slower.

2. How is the multi-threaded performance in 3D rendering and video conversion?

Answer: slower than the 2600K.

Never mind the fact that the games were old. Those games are NOT optimized for multi-core rendering, and that IS representative of over half the games released today.

And the 3D rendering + video conversion benchmarks are very representative, and probably the only chance for Bulldozer to shine. And it still doesn't manage to: Handbrake hands Bulldozer a tiny defeat, and the defeat at Cinebench is much more sizable. I do believe that it's not much faster than the 1100T.

And that's REALLY sad, because AMD could have made a better-performing chip for a lot less money if they had just shrunk Thuban's die to 32nm and raised the clocks by 400-500 MHz. The die would be smaller AND performance would be better.
 
If you ever ran XP on an Athlon X2 system, you needed AMD's Dual-Core Optimizer and the K8 processor driver (mainly for Cool'n'Quiet to work).

More than likely, a processor driver will be needed until Windows 8 drops.

You don't know what you're talking about.

It was needed on XP because XP wasn't built in the era of multi-core processors and blindly assumed every execution thread was a separate physical processor. That is why AMD and Intel had to release drivers: so XP knew the best way to split threads across the processor's execution resources.

Windows 7 is perfectly aware of EIGHT-SOCKET, TEN-CORE PROCESSORS. If you don't believe me, look at how happily Server 2008 R2 runs on Westmere-EX.
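(For the curious: the identification itself is just CPUID. A minimal C sketch of the old leaf-1 check, using MSVC's __cpuid intrinsic; note this flag only says the package can expose multiple logical processors, it doesn't map out which ones share a core.)

[code]
#include <stdio.h>
#include <intrin.h> /* MSVC's __cpuid intrinsic */

int main(void)
{
    int regs[4]; /* EAX, EBX, ECX, EDX */

    __cpuid(regs, 1); /* CPUID leaf 1: feature flags */

    /* EDX bit 28 (HTT): package can expose multiple logical processors. */
    int htt = (regs[3] >> 28) & 1;

    /* EBX bits 23:16: max addressable logical processors per package. */
    int logical = (regs[1] >> 16) & 0xFF;

    printf("HTT flag: %d, logical CPUs per package (hint): %d\n",
           htt, logical);
    return 0;
}
[/code]

An OS that ignores this (early XP) simply treats every logical processor as a physical one.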
 
Never mind the fact that the games were old. Those games are NOT optimized for multi-core rendering, and that IS representative of over half the games released today.

True. TechSpot's BF3 benchmarks show the X4 980 and 1100T besting the 2600K. BF3 seems to be one of the best games at utilizing multiple processors. I'd be surprised if the 8150 didn't win the day in this game. It's also the game the AMD-sponsored overclockers played on BD during the live event, and AMD has released (embargoed) benchmarks focusing on BF3.

TL;DR BD4BF3FTW
 

Agreed. I wonder if, out of the box, it knows to send threads to unloaded modules first, so you don't end up with two cores on the same module (sharing an FPU) fully loaded while the rest of the CPU sits idle.

I wonder how Intel handles this. Is the OS aware of which two threads are on the same core, or does it blindly send threads to the least-loaded logical CPU it sees at the time it needs something processed?
 

Every modern OS knows the difference between a logical and a hardware thread. Even XP does, although its scheduler is crappy and prone to errors (as many have discovered over the years).

As a result of the HT processor identification support, the following HT-aware features are included in Windows XP and the Windows Server 2003 family.
• HT-aware thread scheduling
• Aggressive HALT of processors in the idle loop
• Using the YIELD instructions to avoid spinlock contention

To take advantage of this performance opportunity, the scheduler in the Windows Server 2003 family and Windows XP has been modified to identify HT processors and to favor dispatching threads onto inactive physical processors wherever possible.

Windows Vista / 7 improve on this immensely, but the basic features have been around for a decade.
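For anyone curious what that identification looks like from user mode, here's a minimal C sketch against the Win32 API. GetLogicalProcessorInformation (available since XP SP3) reports which logical CPUs share a physical core:

[code]
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    DWORD len = 0;

    /* First call fails with ERROR_INSUFFICIENT_BUFFER and sets the size. */
    GetLogicalProcessorInformation(NULL, &len);

    SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
    if (!info || !GetLogicalProcessorInformation(info, &len))
        return 1;

    for (DWORD i = 0; i < len / sizeof(*info); i++) {
        if (info[i].Relationship == RelationProcessorCore) {
            /* One bit per logical CPU on this physical core; more than
               one bit set means SMT (Hyper-Threading) siblings. */
            printf("core mask: 0x%llx%s\n",
                   (unsigned long long)info[i].ProcessorMask,
                   info[i].ProcessorCore.Flags ? " (SMT)" : "");
        }
    }
    free(info);
    return 0;
}
[/code]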
 
Zarathustra[H] said:
Agreed. I wonder if it - out of the box - knows to first send threads to modules that are unloaded, so you don't have two cores on the same module sharing a FPU fully loaded, and the rest of the CPU idle.

According to AMD, the FPU can either be a shared 256-bit unit, which processes AVX, or can split into two separate 128-bit SSE units. That's perfectly fine; Intel has dedicated 256-bit FPUs, but it will take a long while until AVX matters to everyday apps. Thus, effectively, every module has full integer and floating-point units.

Zarathustra[H] said:
I wonder how Intel does this? Is the OS aware which two threads are on the same core, or does it blindly send threads to the lowest loaded logical CPU it sees at the time it needs something processed?

The OS knows how many cores the processor has and that each can process two threads. Thus it spreads the intensive threads evenly over the cores.
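If you want to see (or second-guess) that placement yourself, affinity masks are the manual override. A hedged little sketch, assuming logical CPUs 0 and 1 are SMT siblings on the same core (the topology query above can confirm that):

[code]
#include <windows.h>

int main(void)
{
    /* Pin the current thread to logical CPU 2, i.e. off the (assumed)
       CPU 0/1 sibling pair. This mimics by hand what the scheduler does
       on its own: keep busy threads on separate physical cores while
       idle cores remain. */
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(),
                                               (DWORD_PTR)1 << 2);
    return previous == 0; /* SetThreadAffinityMask returns 0 on failure */
}
[/code]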
 
Uhh... can we please wait for [Hard]|OCP, AnandTech, Tech Report, ... bit-tech, TechPowerUp, hell, even Silent PC Review to do a ... review?

We're all waiting for that.... But here is a "but": if all these leaked benches were a total bunch of crap (i.e. fake, favoring Intel too much, etc.), I'd expect at least Kyle to come out and say: "Hey, I'm under NDA, but everything you're seeing to date is a steaming pile of BS, that's all I'm gonna say".... But that hasn't happened....:rolleyes: So maybe, just maybe, we're seeing pretty much everything there is to see with BD, and we're just waiting for confirmation, and probably a much better, in-depth review/explanation of what's really going on with the new arch...... My $.02.
 
Everybody respectable has been good about not saying anything one way or the other about Bulldozer in order to honor the NDA, apart from the occasional "if it's not final silicon, don't trust it" bits; as much as I'd love someone to break it, even in a seemingly harmless way such as that, I can certainly understand why they wouldn't. And if...if...we're within 48 hours of the end of that, there's no point in doing it now.
 
We're all waiting for that.... But here is a "but": if all these leaked benches were a total bunch of crap (i.e. fake, favoring Intel too much, etc.), I'd expect at least Kyle to come out and say: "Hey, I'm under NDA, but everything you're seeing to date is a steaming pile of BS, that's all I'm gonna say".... But that hasn't happened....:rolleyes:
He probably just received the review sample a couple of weeks ago and has been doing benchmarks and typing up the review during that time.
 
According to AMD, the FPU can either be a shared 256-bit unit, which processes AVX, or can split into two separate 128-bit SSE units. That's perfectly fine; Intel has dedicated 256-bit FPUs, but it will take a long while until AVX matters to everyday apps. Thus, effectively, every module has full integer and floating-point units.

What types of software currently utilize AVX instructions?
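Not a lot yet, as far as I know; mostly math libraries and some media encoders. For a concrete picture of what "processing AVX" means: AVX widens SIMD floating-point work to 256-bit registers, eight floats per instruction versus SSE's four. A minimal C sketch with compiler intrinsics (the function and array names are made up; build with -mavx or /arch:AVX):

[code]
#include <immintrin.h> /* AVX intrinsics */

/* Adds two float arrays eight elements at a time using 256-bit AVX
   registers; n is assumed to be a multiple of 8 for brevity. */
void add_avx(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
}
[/code]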
 
Those graphs are awful. The scale is all fucked up. A good chunk of those are neck and neck, but the scale shows one leading way over the other.

AMD was light-years behind. Kentsfield was still beating anything they had out when Intel released Bloomfield. It was almost a bad joke.

My point in all this is that (assuming those graphs are semi-accurate) AMD is showing positive momentum in closing the performance gap between their hardware and Intel's.

I haven't owned an AMD chip since my Athlon XP 1800+. Since then I jumped from a P4 2.4C to a Core 2 E6300 to a Core 2 E8500 to a Core i7 920. I woulda got an X2, but they were out of my price range. However, I juggle AMD stock like a madman and make a fuck-ton of money, so from a stockholder's perspective, I am very pleased.
 
Somehow I doubt those benches.

Also, there have been rumours/news of a new CPU driver coming out with Bulldozer to increase efficiency with the new design (so basically 5-10% extra performance).

Though I guess all will be revealed on launch day...
 
From the bowels of the OCN forums... Not sure which thread, and not sure of the system setup/differences from previous benches, or whether this one is any fakier than the others: pic

[image: fxvsphiivsi5.png]
 
Uhh... can we please wait for [Hard]|OCP, AnandTech, Tech Report, ... bit-tech, TechPowerUp, hell, even Silent PC Review to do a ... review?

I love the fact that Tom's was left off the list when asking for a good review.
 
Zarathustra[H] said:
I haven't trusted Tom's since the late '90s.
Same. Their tests just don't make much sense to me now: really minimal, and they don't explore many setups or use cases.
 
Those graphs are awful. The scale is all fucked up. A good chunk of those are neck and neck, but the scale shows one leading way over the other.

Glad I'm not the only one who thought this. Not that I believe the graphs anyway, but those are seriously screwed-up scales. The numbers are within 5%, but the scale shows half or a third of the performance on some of those.
 
Let's hope Kyle actually reviews the FX. He might not if it sucks and doesn't change anything for gamers, which was his reasoning for not reviewing the X6 or 980X.

FYI, I went back and found my post about websites that did review the X6 when it came out in late April 2010 (not a comprehensive list, just what I found while looking around; I list 19 here but eventually found 23 websites that reviewed the X6 on its release day). I would assume these same websites will review the FX:

PCPerspective.com
Hardwaresecrets.com
Tomshardware.com
Anandtech.com
Maximumpc.com
Hothardware.com
Bit-Tech.net
Overclockersclub.com
Legitreviews.com
Hardwarecanucks.com
Guru3d.com
Benchmarkreviews.com
Lostcircuits.com
Pureoverclock.com
Techspot.com
Tweaktown.com
Bjorn3d.com
Hitechlegion.com
Ocaholic.ch
 
I agree that the bar scaling on the graph is super misleading, but that doesn't change the unimpressive results.
 

They're disappointing (though not terribly unexpected) in the synthetic benchmarks.

The gaming tests I thought were fairly favorable to Bulldozer. At worst it was splitting the gap between the X6 and the 2600K, and that's a meaningful result if it's priced against the 2500K.

It's not shaping up to be as good an all-around CPU as the i5 or i7, but there are some good signs of life in those results.
 
Yeah, I'm feeling pretty good about the gaming results. BD is right where you'd expect. I'm wondering why the people who have the chip in hand aren't benchmarking relevant games that people care about, like the BF3 beta, SC2, Metro 2033, etc. Instead they're running Fritz Chess and Batman...
 
Let's hope Kyle actually reviews the FX. He might not if it sucks and doesn't change anything for gamers, which was his reasoning for not reviewing the X6 or 980X.

The reason for that is that no games they test use more than 4 cores; most only use 2. The 980X uses the exact same architecture as the 975EE, so performance in those games would be exactly the same at the same clock speeds. Same with Thuban and Deneb: exact same architecture, same platform, identical results. There was no point in wasting their time and money testing them when they knew how they would perform. The only conceivable testing they might have done is overclocking, comparing how it overclocks against older-gen stuff.

They will test Bulldozer because it's a brand-new architecture. Similarly, if AMD decides to release a 5-module or even 6-module Bulldozer down the road, they probably won't bother reviewing that.
 
The reason for that is that no games they test use more than 4 cores; most only use 2. The 980X uses the exact same architecture as the 975EE.

The 975EE is Nehalem; the 980X is Gulftown. Gulftown, aside from shrinking to 32nm and giving you two more cores, is slightly faster per clock, though hardly anything to praise it for.
 

Which is my point. They're essentially the same from a gamer's point of view.
 
Somehow I doubt those benches.

Also, there have been rumours/news of a new CPU driver coming out with Bulldozer to increase efficiency with the new design (so basically 5-10% extra performance).

Though I guess all will be revealed on launch day...
That would almost put it as fast as an X6 1100T... Also, drivers will not fix the performance; only a new BIOS will, if there is a bug.
 
LOL at the stupid graphs. What is this? A marketing campaign? Jesus.....

Review sites that post data like that should get their credibility stripped.
 