Future APUs and the end of FX

Makes sense for them to bundle BF4 with Kaveri; that will just improve sales even more (however broken BF4 itself actually is, lol). If true, they're doing this because: 1) it's the first game to have Mantle, so they can easily demonstrate the power of Kaveri through it, and 2) it entices people on the fence with yet another game-bundle promotion.

I think they should perhaps add Kaveri to the Never Settle promotion, at least at the Bronze tier.
 
>AMD ARM Opterons can deliver the same Apache score as Xeons or x86 Opterons; they support 10Gbps Ethernet and can fit 8 servers in a 1U rack.

There are plenty of Geode/Atom processors for web farms.
What I'm really waiting for from Opterons is not these Java/Apache scores but an actual increase in double-precision FLOPS.
All my software is already optimized as far as it will go, and I'm disappointed with AMD's performance even with deeply assembler-optimized code, because Intel has its 512-bit-wide FPU, which is _awesome_: 512 bits = 8 DP operations per tick.
Professionals use processors only for floating-point operations. Even many integer operations could be Fourier-transformed into floating point with _greater_ efficiency.
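
To make the "512 bits = 8 doubles per register" arithmetic concrete, here's a minimal C sketch using AVX-512F intrinsics. It's purely illustrative: it assumes a CPU and compiler with AVX-512F support, and the array contents and suggested file name are made up.

[CODE]
/* Minimal sketch: one 512-bit register holds 512 / 64 = 8 doubles, so a single
 * fused multiply-add touches 8 DP lanes per instruction.
 * Build with, e.g.: gcc -O2 -mavx512f fma512.c   (hypothetical file name) */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    double x[8], y[8];
    for (int i = 0; i < 8; i++) { x[i] = (double)i; y[i] = 1.0; }

    __m512d vx = _mm512_loadu_pd(x);           /* load 8 doubles at once      */
    __m512d vy = _mm512_loadu_pd(y);
    __m512d va = _mm512_set1_pd(2.0);          /* broadcast the scalar a=2.0  */

    __m512d vr = _mm512_fmadd_pd(va, vx, vy);  /* r = a*x + y across 8 lanes  */
    _mm512_storeu_pd(y, vr);

    for (int i = 0; i < 8; i++)
        printf("%g ", y[i]);                   /* prints: 1 3 5 7 9 11 13 15  */
    printf("\n");
    return 0;
}
[/CODE]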

PS: Everything I know about ARM and Atom is _disgusting_.
For comparison: an Intel Atom uses about 10 times less power but delivers about 100 times less throughput on Prime95's Fourier transforms than a traditional 100 W x86 processor.
So the power-consumption gains don't buy back the loss in productivity. I doubt ARM processors have any advantage over Atoms; essentially they are the same.
 
This is misleading. The AM3+ line will not stop; if you read the full article carefully you would know this instead of jumping the gun. It will be at least late 2015 before they even consider changing platforms.
 
>AMD ARM Opterons can deliver the same Apache score as Xeons or x86 Opterons; they support 10Gbps Ethernet and can fit 8 servers in a 1U rack.

There are plenty of Geode/Atom processors for web farms.
What I'm really waiting for from Opterons is not these Java/Apache scores but an actual increase in double-precision FLOPS.
All my software is already optimized as far as it will go, and I'm disappointed with AMD's performance even with deeply assembler-optimized code, because Intel has its 512-bit-wide FPU, which is _awesome_: 512 bits = 8 DP operations per tick.
Professionals use processors only for floating-point operations. Even many integer operations could be Fourier-transformed into floating point with _greater_ efficiency.

PS: Everything I know about ARM and Atom is _disgusting_.
For comparison: an Intel Atom uses about 10 times less power but delivers about 100 times less throughput on Prime95's Fourier transforms than a traditional 100 W x86 processor.
So the power-consumption gains don't buy back the loss in productivity. I doubt ARM processors have any advantage over Atoms; essentially they are the same.

Then you should be more interested in AMD's HSA-enabled offerings, as the iGPU tends to be much more efficient at floating-point operations than the CPU.
 
Professionals use processors only for floating-point operations.
Mmm...no? As near as I'm aware, there's no single integer instruction (add/subtract/multiply/modulo) that takes more clock cycles on modern CPUs than the equivalent floating-point instruction. The benefit of floats is in the vector instructions, but it's not as though there isn't any support for vector integer instructions in modern processors.

I mean, you aren't doing loop counting with doubles, are you?
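
For what it's worth, here's a minimal C sketch of what vector integer support looks like with AVX2 intrinsics. It's purely illustrative and assumes an AVX2-capable CPU and compiler; the values and suggested file name are made up.

[CODE]
/* Minimal sketch: adding 8 x 32-bit integers per instruction with AVX2.
 * Build with, e.g.: gcc -O2 -mavx2 intadd.c   (hypothetical file name) */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int c[8];

    __m256i va = _mm256_loadu_si256((const __m256i *)a);
    __m256i vb = _mm256_loadu_si256((const __m256i *)b);
    __m256i vc = _mm256_add_epi32(va, vb);     /* 8 integer adds in one instruction */
    _mm256_storeu_si256((__m256i *)c, vc);

    for (int i = 0; i < 8; i++)
        printf("%d ", c[i]);                   /* prints: 11 22 33 44 55 66 77 88 */
    printf("\n");
    return 0;
}
[/CODE]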
 
Then you should be more interested in AMD's HSA-enabled offerings, as the iGPU tends to be much more efficient at floating-point operations than the CPU.

It would be nice if AMD could throw a bone to the enthusiasts to chew on, though. Something like "we're preparing a new stepping." Otherwise, saying "we'll keep it till 2015" doesn't mean much; it's more like a zombie situation... The bad thing is that releasing new Visheras with different clocks wouldn't mean much either. One can already buy the ones out there and overclock them, and for the daring there are the 9xxx series with the 220 W TDP. So unless they stir the pot a bit, keeping Vishera till 2015 is more or less "dead man walking" as far as progress or upgradability is concerned.

And the bad thing in all this is that when a company has something noteworthy cooking, it usually goes out and boasts about it to make potential customers salivate and wait instead of joining the dark side of Intel, just like AMD keeps talking about its APU plans all the time. The problem with AM3+ is that they don't say anything to make anyone salivate.
 
I really wish they were working on something new. At this point in the game, as much as I want to suggest an 8350, its performance on fewer than 4 threads doesn't seem to be a good use of one's finances, to the point of not being worth it for many "intermediate" desktop tasks like gaming that may only use a couple of cores. On top of this, the AM3+ chipset for the 8350 is still the 990FX, which is getting long in the tooth: no PCI-E 3.0, for instance (except, I think, a single Asus Sabertooth version that has it, the 2.0 GEN3 or something like that).

Telling enthusiasts to simply sit on it until 2015 just doesn't seem reasonable. It would be one thing if the 8350 were like $99 and Sabertooth/Crosshair-level boards were $75, but they're not. I really like AMD and want to support them for their GPUs, openness, and Linux support (also: Mantle), but I can't see how they can say with a straight face, "Buy this CPU/mobo combo for enthusiasts who game and have other intermediate/advanced performance requirements; it will do well enough and be a great value," with anything they have at current.

Either they need to drop the AM3+ stuff to Sempron pricing levels, or come up with some APUs that have decent single-threaded performance and 8+ cores and switch over to just running on APUs. Right now, though, their APUs top out at 4 cores, and maybe the next gen will be a little better, but they're not going to be winning any performance awards. I'm not sure they can stack up to the 8350 in single-threaded or multithreaded performance.

It's sad to see Intel get SO much further ahead, especially as Intel's dickish behavior has been well documented ever since the Sandy Bridge days: charging more for everything, rolling things out later and later, etc. They badly need a threat.
 
It's impossible to make a good APU:
there's a limited transistor budget on the chip in any given lithographic process
there's a limited power budget that can be used for such a chip

So a CPU that dedicates the whole die area to the CPU part will always be faster at CPU tasks, and GPUs drawing 250 W alone will always outperform APUs.
 
^ It's impossible to make a good APU using current production materials and manufacturing processes.

I agree to an extent, but the GPU power in current APUs is impressive, considering the limitations.

It's only going to get better with each generation... and I'm sure we're going to see some seriously kick-ass IGP power when next-gen materials and manufacturing start emerging.
 
It basically depends on where gaming is going: stuck in the past with single-threaded engines and DirectX, the normal CPU + discrete GPU combo would win.
With Mantle it is actually possible that an APU+GPU combo can perform better on graphics engines. We still have to see where Kaveri goes; it may take a year, maybe two, for software developers to see the light.
 
If developers properly use HSA to capitalize on the combined serial + parallel power of an APU, they could easily outpace today's enthusiast-level chips.
 
It basically depends on where gaming is going: stuck in the past with single-threaded engines and DirectX, the normal CPU + discrete GPU combo would win.
With Mantle it is actually possible that an APU+GPU combo can perform better on graphics engines. We still have to see where Kaveri goes; it may take a year, maybe two, for software developers to see the light.

If developers properly use HSA to capitalize on the combined serial + parallel power of an APU, they could easily outpace today's enthusiast-level chips.

For me, if HSA is truly to take off and succeed, software not only has to take advantage of its benefits, but I'd also like to see native support from operating systems like Windows and Linux-based distros.

Properly sharing memory between the graphics card and the processor in the same address space, better compute for functions that would have required a co-processor in the older days of computing, and better multi-threading performance would all be very beneficial.
 
This is misleading. The AM3+ line will not stop; if you read the full article carefully you would know this instead of jumping the gun. It will be at least late 2015 before they even consider changing platforms.

There's nothing misleading about it.

There won't be a successor to the AM3+ platform. AM4 and the 1090FX chipset were canned a long time ago. It's just APUs from here on out. "FX" is nothing more than a brand name, and there won't be any new FX processors past Vishera, at least not in 2014 or 2015 according to the roadmaps.
 
There's nothing misleading about it.

There won't be a successor to the AM3+ platform. AM4 and the 1090FX chipset were canned a long time ago. It's just APUs from here on out. "FX" is nothing more than a brand name, and there won't be any new FX processors past Vishera, at least not in 2014 or 2015 according to the roadmaps.

There are GPUless FM2 and FM2+ processors. They carry the Athlon branding.

No reason AMD can't have an FX chip in an FM socket.
 
No reason AMD can't have an FX chip in an FM socket.

And no reason why they can't make an FX Frisbee either :). Maybe an FX coaster or FX toaster.
The list goes on, but what does it represent? A piece of marketing that didn't have the success they wanted.

By now the market has changed, and all people know the FX name from is the CPU overclocking event, not that it was at one time the fastest x86 CPU.

AMD has no clue how to market certain things (on the CPU side); it's usually all Hail Mary based.

The years when the desktop PC ruled are coming to an end. The reality is that things change, and people change with them; the need for the fastest CPU is no longer a big issue, and you can get away with something slower and still do everything you want.

This is reflected in the way AMD chooses to handle the AM3+ platform.
 
^What he said.

And don't count on FX on FM2+ since there's nothing pointing to that happening.
 
It's impossible to make a good APU. So a CPU that dedicates the whole die area to the CPU part will always be faster at CPU tasks, and GPUs drawing 250 W alone will always outperform APUs.
Yes, but that doesn't mean iGPUs are the wrong choice. There's a vaguely-defined single-threaded performance wall that current CPUs are approaching: widening the pipeline, increasing cache sizes, increasing the number of registers and adding more branch prediction logic seldom pay meaningful dividends. The wall exists primarily because memory latency has not decreased nearly at the same rate that computational throughput has increased (we're talking multiple orders of magnitude difference here, so it's not a small thing). It also exists because frequency increases have effectively flatlined, which means clock cycles aren't getting any shorter. And that plays back into the memory latency equation.
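
To put rough numbers on that latency-versus-throughput gap, here's a back-of-envelope C sketch; every figure in it is an assumed, round illustrative value, not a measurement of any particular chip.

[CODE]
/* Back-of-envelope sketch: how much double-precision work a core could have
 * done while stalled on a single main-memory miss. All numbers are assumptions. */
#include <stdio.h>

int main(void) {
    const double dram_latency_ns  = 100.0;  /* assumed DRAM access latency       */
    const double core_clock_ghz   = 4.0;    /* assumed core frequency            */
    const double dp_ops_per_cycle = 8.0;    /* e.g. one 512-bit-wide DP unit     */

    double stall_cycles = dram_latency_ns * core_clock_ghz;  /* ns * GHz = cycles */
    double forgone_ops  = stall_cycles * dp_ops_per_cycle;

    printf("~%.0f cycles stalled, ~%.0f DP ops forgone per miss\n",
           stall_cycles, forgone_ops);      /* ~400 cycles, ~3200 ops            */
    return 0;
}
[/CODE]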

You can certainly blow your entire transistor budget on more cores, which is essentially the only sensible way to scale up CPU performance at this point, but in how many applications does that pay real dividends? Few. In those applications, your computational tasks are probably quite fine-grained, and you're probably better off doing GPU compute. Hence iGPUs. Hence GPU compute. Hence HSA.
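
As a rough illustration of what handing fine-grained, data-parallel work to an accelerator can look like at the source level, here's a small C sketch using OpenMP target offload directives. This is just one possible programming model (OpenCL or the HSA runtime would be others), and it assumes a toolchain with offload support; otherwise the loop simply runs on the host CPU.

[CODE]
/* Sketch: a fine-grained, data-parallel loop offloaded to an accelerator via
 * OpenMP target directives. Without an offload-capable toolchain the pragma is
 * ignored and the loop just runs on the host CPU. */
#include <stdio.h>

#define N 1024

int main(void) {
    static float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Map the inputs to the device, run the loop there, map the result back. */
    #pragma omp target teams distribute parallel for \
            map(to: a[0:N], b[0:N]) map(from: c[0:N])
    for (int i = 0; i < N; i++)
        c[i] = a[i] * b[i];

    printf("c[10] = %g\n", c[10]);   /* 10 * 20 = 200 */
    return 0;
}
[/CODE]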

When you look at how Intel and AMD are spending their transistor budgets, they are absolutely making the right decision with iGPUs. Whether these are good decisions or not isn't even really debatable. True heterogeneous architectures just make too much sense.
 
I'd love to see something like a hybrid arch.

1 SR module with 2 cores, 1 Jaguar module with 4 more cores, and 8-10 CUs?

Best of all worlds.

(Though I'd pay good money just for a PC version of the PS4 chip...)
 
You can certainly blow your entire transistor budget on more cores, which is essentially the only sensible way to scale up CPU performance at this point, but in how many applications does that pay real dividends? Few. In those applications, your computational tasks are probably quite fine-grained, and you're probably better off doing GPU compute. Hence iGPUs. Hence GPU compute. Hence HSA.

When you look at how Intel and AMD are spending their transistor budgets, they are absolutely making the right decision with iGPUs. Whether these are good decisions or not isn't even really debatable. True heterogeneous architectures just make too much sense.

I'd like to think of it as: What's old is new again.

Remember when co-processors used to be paired with regular processors in the old days? Those days came to an end when Intel integrated the FPU onto the chip itself back in the 486DX days.

It's probably a lot like what's going on with integrated on-die GPUs and APUs again: using the GPU as a high-performance FPU for things that a normal FPU cannot do alone.

And no reason why they can't make an FX Frisbee either :). Maybe an FX coaster or FX toaster.
The list goes on, but what does it represent? A piece of marketing that didn't have the success they wanted.

By now the market has changed, and all people know the FX name from is the CPU overclocking event, not that it was at one time the fastest x86 CPU.

AMD has no clue how to market certain things (on the CPU side); it's usually all Hail Mary based.

The years when the desktop PC ruled are coming to an end. The reality is that things change, and people change with them; the need for the fastest CPU is no longer a big issue, and you can get away with something slower and still do everything you want.

This is reflected in the way AMD chooses to handle the AM3+ platform.

You make a very good point.

A lot of our daily computing tasks have gone mobile, and sooner or later we're going to be doing most of our work on tablets, since most of that work isn't computationally intensive: a tablet or smartphone that docks and connects to a monitor and keyboard at home or at the office.

But we're still going to need a fast processor for intensive applications that tablets and smartphones currently can't handle. An APU would make a nice low-cost workstation processor, especially if it has a FirePro-equivalent GPU on-die.

A lot of the shift in the processor market is toward making processors not just fast but also efficient and low-powered (low TDP and the like), so they can be used in smaller computers, tablets, and smartphones.

(Though I'd pay good money just for a PC version of the PS4 chip...)

The PS4 APU would make a damn nice high-end, thin-and-light ultrabook or notebook.

Imagine: a high-end, thin-and-light portable with 8 cores, a Radeon 7850-equivalent GPU, and a 1080p or 1440p display.

It probably won't knock the socks off an ULP (ultra-low-power) Intel Core i5-equivalent ultrabook, but it'll probably be so much cheaper. :D
 
Does the "end of fx" really concern anyone? They are "ok" processors at best. If future APU's can match or better the performance of FX while at the same time have a usable IGP, I don't see an issue with that.
 
Back in mid-2012 AMD publicly stated they would support socket AM3+ into 2015. Therefore, the current line-up of FX CPUs will be manufactured until sometime in 2015, before they decide to retire AM3+. I highly doubt there will be a socket AM4, because FX CPUs only serve the desktop space. Given the negative growth in the desktop market segment for the past 6 quarters, there is no real incentive for AMD to manufacture desktop-only processors.

PC Market Records Sixth Straight Quarter of Negative Growth

They will simply focus on the socket FM2+ APUs, since those are sold in both the desktop and laptop space. This frees up some cash that can be used to pay down debt (I think they have about $3 billion of senior notes due in 2017) and/or to fund additional R&D. More importantly, they need to improve upon the Temash and Kabini APUs, since those are aimed at netbooks and tablets.

I can't remember exactly which APU was meant for the tablet market, but I think it was Kabini. Unfortunately, AMD did not get any tablet design wins for Kabini because of its lackluster performance and lack of a competitive edge over ARM processors or Intel's Bay Trail Atom CPUs. Hopefully for AMD, the Beema-generation APU can address Kabini's issues so that they can crack into the tablet market.

The tablet market is important for AMD (and Intel) because that is where future growth will come from. According to the article I linked, the decline in desktops (and laptops) seems to be stabilizing, but that is driven by US sales. Desktop/laptop sales will continue to decline in regions like Africa and the Middle East as tablet sales increase.
 
Does the "end of fx" really concern anyone? They are "ok" processors at best. If future APU's can match or better the performance of FX while at the same time have a usable IGP, I don't see an issue with that.

This is pretty much in the realm of my thinking.

I speculate that if we can get an APU with the CPU portion equivalent to an FX-8350 (or higher) and the IGP portion equivalent to a 7950 (or higher), whilst drawing a hell of a lot less power and generating a hell of a lot less heat, AMD would garner a MUCH larger share in the laptop, desktop, entry-level workstation, and server segments. Lower-spec'd units would then be viable for tablets.

I fathom they'll eventually get the overall APU performance there (and hopefully in the current or next-gen APU socket), but hopefully it'll be sooner rather than later, because the brands that compete directly with AMD aren't exactly resting on their laurels and are pushing ahead with their own solutions for each market segment as well.

It's no secret that the FX line was (and to an extent still is) a rather large let-down to a majority of the global market, which is why I feel AMD has placed so much focus on developing and generationally refining their APU offerings...that's where a large portion of revenue is being generated from.
 
^An APU with those specs would have an insanely huge die size, which would probably have a good bit of yield issues and be a monster to try to cool properly. No one needs that sort of power in a mobile device, so those chips would be exclusive to the desktop where they would once again be a niche product just like FX was.
 
^An APU with those specs would have an insanely huge die size, which would probably have a good bit of yield issues and be a monster to try to cool properly. No one needs that sort of power in a mobile device, so those chips would be exclusive to the desktop where they would once again be a niche product just like FX was.

Yes, it would be huge... with current lithography process sizes. I was speaking more toward future die shrinks: < 15 nm.

And the mobile market would boom, since a majority of microprocessor sales revenue comes from the mobile segment thanks to enterprise customers, and enterprise customers are buying mobile offerings like crazy.
 
The current and future APUs they have planned are more than enough for average consumers, who are responsible for most of their processor-related revenue.
 
An APU with 7950 performance is beyond wishful thinking, IMO, at least for the foreseeable future.
 
The fastest APU we will see in the next few years will probably offer low-end i5 CPU performance with a Radeon 7790-class onboard GPU.

It will be nearly half a decade before we see 7970-level IGPs, and by that time the 290X will look as pathetic as the 4870 does now.
 