FX 4100/6100/8100 series clock for clock benches

Michaelius

Supreme [H]ardness
Joined
Sep 8, 2003
Messages
4,684
In case anyone is interested, one of our Polish portals benchmarked the FX chips at the same clock speed to compare IPC:

http://www.purepc.pl/procesory/test_amd_fx8150_bulldozer_kontra_intel_sandy_bridge?page=0,11

[Charts wykres_22.png through wykres_26.png: clock-for-clock benchmark results from the linked review]
 
Interesting. Any idea why the FX-4100 is so anemic? Should be binned 8xxx/6xxx, no?
 
2 active modules only.

If you look at POV-Ray, which is strictly single-threaded, it has the same performance as all the other FXes.
 
2 active modules, or 4 modules each with one core disabled?

Judging from the way AnandTech tells it, it sounds like 2 modules, with the others disabled.
 
2 modules, AFAIK. The 6100 and 4100 in this test are simulated by turning off 1 or 2 modules on the 8150.
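For anyone wanting to reproduce that setup on Linux, here's a minimal sketch. It assumes Bulldozer's usual topology (module m hosts logical cores 2m and 2m+1); `cpus_to_offline` is my own hypothetical helper, not anything from the review:

```python
# Sketch: which logical CPUs to take offline to turn an FX-8150
# (4 modules, 8 cores) into a simulated 6100 (3 modules) or
# 4100 (2 modules). Assumes module m = cores 2m and 2m+1.

def cpus_to_offline(total_modules=4, target_modules=2):
    """Return the CPU ids whose whole module should go offline."""
    offline = []
    for m in range(target_modules, total_modules):
        offline.extend([2 * m, 2 * m + 1])
    return offline

# On Linux (as root), each returned id n would then get:
#   echo 0 > /sys/devices/system/cpu/cpu{n}/online
print(cpus_to_offline(target_modules=2))  # -> [4, 5, 6, 7]  (simulated FX-4100)
print(cpus_to_offline(target_modules=3))  # -> [6, 7]        (simulated FX-6100)
```

Offlining both siblings of a module (rather than one core from each of four modules) matters, since the two layouts behave differently, as discussed below.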
 
Thanks..

omg, I was gonna go the 6100 route but even that doesn't really make much sense... :(
 
Cooled down now.
I'd wait for Anand's follow-up review and, most importantly, the analysis; hopefully they'll be testing the 6100 and 4100 too.
 
The 4100 (3.6 GHz) is about as fast as a Phenom 9950 @ 2.8 GHz.
That's really sad.
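A back-of-the-envelope check of what that comparison implies for per-clock throughput, assuming the two chips really do score about the same:

```python
# If a 3.6 GHz FX-4100 only ties a 2.8 GHz Phenom 9950, then
# per-clock throughput (IPC) scales inversely with the clocks.
fx4100_clock = 3.6   # GHz
phenom_clock = 2.8   # GHz

relative_ipc = phenom_clock / fx4100_clock
print(f"BD IPC is roughly {relative_ipc:.0%} of K10's")
```

In other words, roughly a 20%+ per-clock regression versus the old Stars/K10 core in that workload.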

Lol, I still can't believe how bad Bulldozer is. It was literally a complete waste of R&D money.

It is so bad that they should abandon it for Stars and try to improve that architecture, I am afraid BD is hopeless.
 
I think it's a little too early to tell whether or not it's a waste of R&D money, because we really don't know what improvements AMD has planned for the architecture over the next couple of years, or what enhancements to expect from process improvements. I remember reading some newsgroup postings a while back claiming AMD had moved to teams of more junior engineers and more automated design tools, rather than hand-tweaked modules; I'm guessing the horrible power efficiency of BD is mostly down to that.

This release has certainly been underwhelming, and AMD didn't help matters by being so tight-lipped about the benchmarks. They should have just come out and said it wouldn't be competitive and tried to work with the reality of the situation rather than setting everyone up for a huge disappointment - this is more of a PR nightmare than a total product failure.

I only hope that the underwhelming performance doesn't sink them before they get a chance to improve on it.
 
@wEvil, thanks for the reply, very insightful. I hope this doesn't sink them either, but I can't help but feel a bit betrayed. Every time I see that CPU, the FX jumps out at me.
 
I wonder if they can pull off something like they did in the GPU space:
2900 XT - underperforming space heater (BD is here now)
3870 - power consumption in check
4870 - not top, but good performance
 
Thank you very much for the alert. I love when sites do things like this.

It's worth noting that BD is intended to run with DDR3-1866 RAM and up, and with the NB clocked around 3,000 MHz.
 
After seeing all the benchmarks, I'm starting to think: Could the shared resources and split nature of each Bulldozer module be at fault here?

From how I understand it, each module has its own L2 cache but all modules share a larger L3 cache. In each module, workload is sent through one thread or split into two threads depending on load. Now, the benchmarks show that Bulldozer is near a 2500K or around a 1090T in multithreaded applications but performs miserably in single threaded applications.

The way I see it is this (purely theoretical; hopefully someone can provide a better explanation):
When a single module receives a single-threaded workload, only HALF the resources of that module are in use. The core clock probably increases automatically for single-threaded workloads, but still only half the module's resources are working: half the integer units, half the FP resources, etc. The other modules are probably disabled automatically, since it's only a single thread.

I'm thinking a smarter approach to thread scheduling would be to automatically give a single thread the entire module. The processor would then utilize the full complement of a module's resources, with the module as a whole working on one thread - twice the resources compared to half a module.

At the very least, a single module wouldn't be wasting resources on a single-threaded application. So I'm thinking AMD's Bulldozer isn't utilizing a full module per thread. The majority of the applications consumers use day-to-day are single- or dual-threaded; many games are multi-threaded, or starting to be.

A multi-threaded application of, say, two threads should use two modules instead of a single one. That way each thread gets the full resources of a module.

Four threads, four modules. Eight threads, eight modules, though that would make it look like a sixteen-core processor. I think using half a module's resources per thread is more of a detriment than a benefit for Bulldozer, especially since each "core" of a module is literally half the module, with fewer execution and computational units to throw at a single thread.
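The policy described above (fill one thread per module before doubling up inside a module) can be sketched as a simple thread-to-core mapping. This is my own illustration of the idea, not AMD's or any OS's actual scheduler, and it again assumes the cores-2m/2m+1-per-module layout:

```python
def assign_threads_to_modules(n_threads, n_modules=4):
    """Map thread t -> logical core, spreading across modules first.
    Threads 0..n_modules-1 each get a whole module to themselves;
    only after that do threads land on a module's second core.
    (Illustrative only; valid for n_threads <= 2 * n_modules.)"""
    mapping = {}
    for t in range(n_threads):
        module = t % n_modules     # spread across modules first
        sibling = t // n_modules   # 0 = first core in module, 1 = second
        mapping[t] = 2 * module + sibling
    return mapping

print(assign_threads_to_modules(2))  # two threads on two separate modules
print(assign_threads_to_modules(8))  # all eight cores occupied
```

On Linux, a mapping like this could be applied per thread with os.sched_setaffinity; the point is simply that two threads on two modules see full per-module resources, while two threads packed into one module each see half.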

Comparing a Nehalem core to a Bulldozer "core," it really does seem like only half a module's resources get used, hence the poor single-threaded performance.

Bulldozer module: http://www.bit-tech.net/hardware/cpus/2011/10/12/amd-fx-8150-review/2
Nehalem core: http://www.anandtech.com/show/2594/3
Sandy Bridge core: http://www.tomshardware.com/reviews/sandy-bridge-core-i7-2600k-core-i5-2500k,2833-2.html

Now, I'm guessing AMD figured out somewhere during the design stage that Bulldozer would be a power hog, and thought the way to save power was to disable modules that weren't in use, or reduce their power when underutilized. I think that backfired, because the processor still uses quite a bit of power.

And would an improved Thuban architecture, in a more traditional sense, have been the better option? Something with higher per-thread/per-core performance and increased IPC, versus cranking up the clock on a single thread while half the module sits underutilized.

I keep thinking AMD should have stuck with the "if it ain't broke, why fix it?" mantra. I get the feeling AMD wanted to try something new and implement something quasi-Hyper-Threading (without possibly getting sued by Intel for a one-to-one copy of it).

But is doing something new and different the best idea? Then again, look at the core architecture from Nehalem to Sandy Bridge (and SB-E) and on to Ivy Bridge: Intel tries something new yet is more successful at implementing changes to its core architecture.

Is this because AMD hired the wrong engineers?
Is this the result of AMD having 8 times less operating income than Intel ($6.4 billion versus $800 million)?
Is this because Intel has hindered AMD's place in the market by making backdoor deals with computer manufacturers that reduced their revenue and income?

What could be the source of the issue(s) here for AMD?
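As a quick sanity check, the "8 times" figure quoted above is consistent with the dollar amounts given (taking both numbers as stated, without verifying them):

```python
intel_operating_income = 6.4e9   # $6.4 billion, as quoted above
amd_operating_income = 800e6     # $800 million, as quoted above

print(intel_operating_income / amd_operating_income)  # -> 8.0
```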

Seeing all of this, if one is to stick with AMD, you may as well go with the 1090T or 1100T. If that isn't enough, go with a 2500K or 2600K. (I'm not quite sure about Sandy Bridge-E yet, since the preview benchmarks show it on par with or near a 2600K - probably only worth it if you want to move to the Socket 2011 platform, I guess.)
 