Ask AMD about Bulldozer

Status
Not open for further replies.
How many transistors are in BD (broken down into logic, L3, system, and I/O)? These details are usually published in ISSCC or Hot Chips papers, so they shouldn't be classified.
 
I did some testing with an 8120 and was very surprised by the results:

Keeping a program's threads pinned to selected modules, I saw each module running both of its cores at about 90% of the performance achieved when each thread was given its own core on a separate module. Very good results on this.

The other test was multitasking with multi-threaded programs, again configuring CPU affinity so that similar threads (from the same program) shared a module, with CPU usage around 90%. The results were outstanding.

So far, from my experience and tests, Bulldozer performs well if programs (threads) are smartly configured.

So:

  • Can AMD help users by giving some guidelines on how to maximize the effectiveness of the CPU design until the software catches up?
  • Will AMD supply software for better manual configuration of programs and threads (maybe an update to one of AMD's software packages)? A program to create per-application profiles we can easily configure?

Here is a link that explains which tests I ran and how, if you want to check; no need to duplicate it here.
http://www.rage3d.com/board/showpost.php?p=1336730819&postcount=57
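For anyone wanting to reproduce this kind of pinning, here is a rough sketch of how the affinity masks could be computed, assuming the usual FX-8120 layout of four modules with integer cores 2m and 2m+1 sharing module m. The constants and function names are mine for illustration, not anything from AMD:

```python
# Affinity-mask sketch for a 4-module, 8-core Bulldozer layout.
# Convention: bit i set in the mask means core i is allowed
# (matches Windows SetProcessAffinityMask and Linux taskset).

CORES_PER_MODULE = 2
NUM_MODULES = 4

def pack_mask(num_threads):
    """Group threads onto as few modules as possible: threads that
    share data can benefit from the shared L2 within a module."""
    mask = 0
    for c in range(min(num_threads, NUM_MODULES * CORES_PER_MODULE)):
        mask |= 1 << c  # fill cores (and thus modules) in order
    return mask

def spread_mask(num_threads):
    """Give each thread its own module (first core of each module),
    so threads avoid sharing a front end and FPU."""
    mask = 0
    for t in range(min(num_threads, NUM_MODULES)):
        mask |= 1 << (t * CORES_PER_MODULE)
    return mask

# Four threads packed onto two modules use cores 0-3 (mask 0b1111);
# four threads spread across modules use cores 0, 2, 4, 6 (mask 0b01010101).
```

The resulting mask can then be applied with whatever tool you prefer, e.g. Task Manager's affinity dialog or `start /affinity` on Windows.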
 
Is AMD working with Microsoft to patch the Windows 7 scheduler?
 
Seeing preliminary Windows 8 performance results with its improved scheduler, does Microsoft have any plans to bring that improved scheduler "fix" back to Windows 7?
 
I apologize for the length of my previous question, so I will rephrase it:

Do you as a company, after seeing all the reviews in which the FX (Bulldozer) processor performs poorly against the previous-generation 6-core Thuban CPU and Intel's Sandy Bridge CPUs, see any faults in the architecture and design of the Zambezi processor? Please give particular weight to single-threaded performance against those processors when answering.

If yes, will there be improvements to the design that increase per-core IPC, improve power consumption, and increase both single- and multi-threaded performance over the previous-generation Thuban and Deneb AMD processors?

If no, why do you not see any faults in the design and architecture of the CPU, especially after looking at the reviews of the new processor?
 
Why have you been so coy about the number of stages in Bulldozer's pipeline?
 
Given that the vast majority of current games/programs do not scale well beyond 4 threads, is there anything that can be done with the BD platform to increase its performance with current software, or is the current performance about as good as it is going to get with legacy programs?

This is sort of related to the IPC/speed questions, but if there is anything that can be done to increase its performance in CURRENT applications, that would be a ray of hope. None of us are using future programs, so from an enthusiast standpoint there doesn't seem to be a lot going for BD right now.
 
I read a post purportedly from an ex-AMD engineer stating that the reason Bulldozer was underwhelming was that the design was not hand-crafted; he said machine-designed layouts always ended up 20% slower and 20% larger. Will AMD revisit problem areas of the chip and hand-design them? Will machine design always be used in the future?
 
Your question was answered in the OP. - Kyle
 
Can we expect any new revisions of AM3 Phenom II X4 and X6 CPUs while Bulldozer undergoes its design optimization?
 
Seeing as how Bulldozer's L3 cache is slow to access and it is also the first layer for sharing data between modules, wouldn't a fast shared L2 be more relevant?
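To make the trade-off behind this question concrete, here is an illustrative average-memory-access-time (AMAT) calculation. Every latency and hit rate below is an assumption chosen for the sake of example, not an AMD-published figure:

```python
def amat(l1_lat, l1_hit, l2_lat, l2_hit, l3_lat):
    """AMAT = L1 latency + L1 miss rate * (L2 latency + L2 miss rate * L3 latency)."""
    return l1_lat + (1 - l1_hit) * (l2_lat + (1 - l2_hit) * l3_lat)

# Hypothetical numbers: 4-cycle L1 at 95% hit, L2 at 90% hit, 65-cycle L3.
slow_l3 = amat(4, 0.95, 20, 0.90, 65)  # 20-cycle private L2, slow L3 behind it
fast_l2 = amat(4, 0.95, 12, 0.90, 65)  # hypothetically faster shared L2
```

Even with only 5% of accesses missing L1, a faster shared middle level shows up directly in average latency, which is the intuition behind the question.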
 
Is it possible to design a scheduler inside the chip itself so that it can manage the threads on its own? For example, would it be possible to shut down one integer core per module when four or fewer threads are being executed, and then enable them as needed when the thread count increases?
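As a thought experiment, the policy being proposed can be written down in a few lines of software. The core numbering (cores 2m and 2m+1 sharing module m) and the function itself are hypothetical, not anything AMD has described:

```python
# Sketch of the proposed policy: with four modules of two integer cores
# each, keep one core per module powered, and wake the second core of a
# module only once the runnable thread count exceeds the module count.

NUM_MODULES = 4

def active_cores(runnable_threads):
    """Return the set of core indices to keep powered."""
    cores = {2 * m for m in range(NUM_MODULES)}  # first core of each module
    # Number of second cores to wake beyond one-per-module:
    extra = max(0, min(runnable_threads, 2 * NUM_MODULES) - NUM_MODULES)
    for m in range(extra):
        cores.add(2 * m + 1)  # wake module m's second core as needed
    return cores
```

With four or fewer threads this parks every second core, exactly as the question suggests; a fifth thread wakes one of them, and so on up to all eight.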
 
Did AMD consciously choose to release Bulldozer in its present form with the expectation that it would help encourage software writers to optimize their programs for multi-core performance? To "seed" the market, so to speak, even though the first generation has single-thread performance issues?
 
I recently built my 3rd AMD-based machine in anticipation of Bulldozer. It has a Phenom II X3 720BE as a "filler" CPU. (Hey, it's a good filler, clocked at 3.4 GHz ;) .) The computer I'm typing on has a 1090T running at 3.8 GHz. Great CPU... I haven't even started pushing it.

Why would I buy a $275 Bulldozer CPU when the $170 1090T seems to equal its performance, or actually do better, in every benchmark and game we've seen?
 
It seems Bulldozer was meant to be more of a server chip. Any plans to maybe streamline this architecture for the desktop?
 
The overall consensus of the majority of Bulldozer reviews has been dissatisfaction and disappointment in comparison to the competition. Assuming AMD benchmarked the chip against the competition's products, why did AMD decide to release the product in its current state? Was it necessary to recoup research and development costs at any cost? Is the company in a safe enough position, with alternative sources of revenue (GPUs), to weather the storm of a possibly poor-selling product?

Thank you for the opportunity to pose my question.
 
How does the Opteron 3200 series differ from the FX series, and what markets is each intended to address?
 
Would we see large increases in performance from particular programs (e.g., Handbrake) if they were specifically coded for the BD architecture versus what is being done currently?
 
Can you explain who makes up the design team for AMD now? (It seems this has greatly changed from the amazing Athlon days)
 
What specific or general computing roles do you see BD excelling in? For instance virtualization, Windows 8, solitaire, etc?
 
Did Bulldozer meet internal AMD expectations for single-threaded computational horsepower, and if so, why was the decision made to push for multi-threaded processing on a consumer product whose market has little use for more than 2 or 3 cores?
 
Having used AMD products since the '286 days, I've become accustomed to the ebb and flow of AMD/Intel performance, but where AMD has always excelled was in the price/performance (and occasionally pure performance) department.

Does AMD consider Bulldozer's price/performance competitive, and if so, in what markets?

To me, the current Bulldozer release looks to be an upgrade part for the HPC/Cray world, with the desktop as more of a "maybe" market.
 
Is there a list of games that aren't compatible with your processor? Tired of finding out the hard way.
 
Why did you make design decisions such as increasing the length of the pipeline in order to achieve higher clocks, as opposed to going for efficiency? Did the architectural choices that would have improved IPC conflict with the gains from sharing parts within the CPU?

It seems the idea of modules and cores sharing parts is brilliant, but increasing frequency while lowering IPC seems like a step backwards. Why was this decided on?
 
My question is this: why did a majority of review sites do most of their testing at 1333 or 1066 RAM speed settings instead of the 1866 speed the CPU is designed to be used with? I also wonder how much this skews their review benchmarks, and how do you feel about this as a company?
 
Will AMD allow early adopters an option to upgrade or trade in their current Bulldozer for a more power-efficient stepping, when/if one is released, with minimal reinvestment on their part, since current models are so power hungry?
 
Are you still planning on using Bulldozer cores for your upcoming Trinity chips? It seems the FX-4100 would be slower, and I was under the impression Trinity would see a 50% or so performance increase.
 
How come you guys did not embrace asynchronous clocking of the various execution units inside the processor design? For example, have the floating-point units run at 10 GHz while other parts of the processor run at the core clock. A buffer between the decoders and the execution units could hide the fact that they run at a higher frequency than the decoders feeding them, so the mismatch does not become a bottleneck.
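A toy simulation of the decoupling buffer described above (with all rates invented for illustration) hints at one catch: once the execution units are faster than the decoders, sustained throughput is capped by the decoders, so the buffer mostly sits empty and the higher clock goes unused:

```python
# Toy two-clock-domain model: a decoder (producer) and execution units
# (consumer) joined by a bounded buffer. Simulated in the producer's
# clock domain; all rates are hypothetical.

def simulate(cycles, decode_per_cycle, execute_per_cycle, buffer_cap):
    """Return total ops executed over the given number of decoder cycles."""
    buffered, executed = 0, 0
    for _ in range(cycles):
        buffered = min(buffered + decode_per_cycle, buffer_cap)  # decoder fills buffer
        done = min(buffered, execute_per_cycle)                  # execution units drain it
        buffered -= done
        executed += done
    return executed

# With decode = 4 ops/cycle and execute = 10 ops/cycle, the model
# sustains only 4 ops/cycle: the slower producer sets the throughput.
```

The buffer helps absorb bursts, but it cannot raise steady-state throughput above the slower side of the interface, which may be part of why the trade-off is unattractive in practice.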
 
Why did the marketing department choose to put the FX moniker on a product that is mid-range in performance?
 
Will it be possible to try unlocking additional modules/cores in the 4- and 6-core versions of the FX?
 
I know that the comments made by John Fruehe on this board do not represent the official opinions of AMD, but is the enthusiast sector really as insignificant as he has said, which might explain the subpar performance of FX?
 
I see that you have recently hired a new Chief Technology Officer, does this possibly signal a new direction for AMD in terms of how processors are engineered?
 
Why did AMD choose the Asus Crosshair V Formula for the reviewers' kit? People on the XtremeSystems forums reported significantly lower power consumption with a different mainboard. Was this not tested before distribution?
 