AMD..."Oh crap", Intel

Um... tablets and phones, brah. They are everywhere. Servers and PC desktops are not. Even servers are moving to ARM.
 
I don't care as much about top or bottom as long as the performance is good enough ;)

Of course, the best thing that could happen is that AMD becomes performance-competitive again across a wider range of products and forces Intel to re-align their pricing structure to something a tad more affordable...

The problem is that it is very rare that the top does not also win at the bottom in the processor space.

You make a good, efficient, fast chip, and it typically runs better at all levels and price points. That is why the swings are so wild in the competition between Intel and AMD. When you are the guy with the lower-performance product, most of the time you have no advantage; you simply have to lower your price and suck it up. Intel could have bankrupted AMD several years ago. All they had to do was pull down the prices at all the product price points and AMD would have had no chance. They didn't, in sort of the same way MS kept Apple on life support. They thought they needed a competitor. The irony of it all is that in both cases they ignored the real problem: in Intel's case ARM, in MS's case Google.

AMD had one major advantage, ATI, but in the meantime Intel has managed to really increase GPU performance, to the point where it is senseless to buy an aftermarket GPU unless you are over a certain performance level, usually around an X5X-level card.

The point is AMD needs a good win to, at the very least, give them a strong run for a couple of years that can get their finances in better shape.
 
Even servers are moving to ARM.
Naturally. In the server market, it's often the case that threads-per-dollar is the most important cost metric. Many server architectures are still based around one-thread-per-user models, so getting as many hardware threads per dollar as possible is important.

Does that mean that ARM is the future? Not necessarily, no. Not all server applications are defined by the notion that threads-per-dollar is the most critical metric. Oftentimes, it's still about low numbers of very high-performance hardware threads, which is the area in which x86 still dominates.
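The threads-per-dollar cost metric described above is simple enough to sketch. The chip specs and prices below are invented numbers purely for illustration, not real product data.

```python
# Illustration of the threads-per-dollar cost metric from the discussion
# above. All specs and prices here are made-up example values.
def threads_per_dollar(hw_threads: int, price_usd: float) -> float:
    """Cost metric for one-thread-per-user server workloads:
    hardware threads obtained per dollar spent on the chip."""
    return hw_threads / price_usd

# Hypothetical dense ARM-style part: many modest cores at a low price.
arm_style = threads_per_dollar(hw_threads=48, price_usd=800)
# Hypothetical x86-style part: fewer, faster threads at a higher price.
x86_style = threads_per_dollar(hw_threads=16, price_usd=1200)

print(f"ARM-style part: {arm_style:.4f} threads/$")
print(f"x86-style part: {x86_style:.4f} threads/$")
```

With these (invented) numbers the dense part wins on the metric, which is the whole appeal for thread-count-bound server workloads; it says nothing about per-thread performance, which is the other side of the argument.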
 
Never fear, contrarian wonderfield is here.

Intel is dribbling out "innovation" at a glacial pace when it comes to high-end desktop performance. You know this. We know this. You don't have to go through yet another pedantic exercise in mental masturbation to demonstrate how you know so much better than everybody else.

I, for one, would be delighted with strong competition from AMD. Intel has been muddling along since Nehalem. That's not a coincidence. Nehalem, the architecture following Core, was already far along in development when Core hit. When it became clear that AMD wouldn't be able to answer Core, Intel throttled back.

Nowadays they're rapidly re-prioritizing towards mobility, power efficiency, etc. Great things! The only reason Intel is bothering is because of ARM.

QFT. Reinvigorated competition between intel and AMD would be a good thing for everybody.
 
QFT. Reinvigorated competition between intel and AMD would be a good thing for everybody.

I want to see the 3-way war between ARM, AMD, and Intel continue.

We'd end up with super-high performance at super-low power usage.

Intel will have a solid performance boost once they die-shrink and throw more transistors at their cores.
 
In the long run, ARM will overtake x86 in just about every segment except super-dense high-performance server clusters and the like. x86 will be mostly irrelevant on the desktop and totally irrelevant in mobile in the coming years. APUs will have graphics that make mid-range and eventually even some high-end dGPUs totally pointless. The only people who will have dGPUs at that point are "enthusiasts" who game at high resolutions, ~1440p and beyond, who will be even more niche than they already are, or professionals who need the immense compute power of the big workstation cards. It won't happen overnight, but it'll certainly be the landscape we're in, in due time. "Big" CPU cores, at least, will become niche as the smaller ones get even better in both performance and energy efficiency. We're already seeing a move away from big CPU cores in desktops, OEM boxes, and the like. The only people who will be using those products later on are DIY guys.
 
ARM for AMD is only to make up for lost time where they were just not able to compete on a level playing field.

And the only reason that ARM is relevant is because Intel is totally focused on the wrong things. They keep looking towards x86, whereas ARM does not care about market domination; they are in the business of being in the business (user oriented).

It is funny, Naroongtx, that you mention moving away from big cores being somewhat popular, but you are forgetting that the mobile market is using hybrids these days, with big cores plus several small ones.

But the future aside, all of these things only have merit as long as they have a purpose. Look at what AMD is doing: moving away from their CPU-only products on the desktop and focusing on APUs (even in the server space). This is how AMD predicted it back in 2010 (or 2011) already.

In the end the AMD APU will win on performance, as it already does in OpenCL applications. This is not down to clever marketing; it is working for them because when you need compute power, the GPU is the only real option. In the end, all the normal (old) benchmarks are futile, since they do not use the fastest method available. More and more software will have no incentive to be stuck in the past, running inefficiently.

This is something that AMD showed several years ago. This solution would be the only true solution for processing power. These days it just takes smarter software developers to take notice and use it. Even when Bulldozer was released, people laughed at it and proclaimed "weak cores," while in reality, today these weak cores suddenly perform really well when using something like Mantle in video games.

I'm willing to "gamble" that x86 is here to stay, even though the numbers versus ARM are not that great. AMD is still making/designing x86, and we are also still waiting for some more information regarding their new design that is due in 2016.
 
Yeah, things like HSA really are the future; it's not just more marketing BS, it's the truth. The software engineers are the ones who will have to get off their asses and actually code the apps the right way, however. Granted, not every app out there would benefit from OpenCL extensions, since some workloads are inherently serial-bound in nature, but it's definitely not something they should ignore in general.
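The serial-bound caveat is essentially Amdahl's law: offloading helps only the fraction of a workload that can actually run in parallel. A minimal sketch, where the workload fractions and unit counts are arbitrary illustration values:

```python
def amdahl_speedup(parallel_fraction: float, n_units: int) -> float:
    """Amdahl's law: overall speedup when parallel_fraction of the work
    can be spread across n_units compute units (e.g. GPU lanes), while
    the remaining serial part runs at its original speed."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

# A 95%-parallel workload gains a lot from offload to many GPU cores...
print(amdahl_speedup(0.95, 256))  # ~18.6x
# ...but a serial-bound workload (only 50% parallel) barely moves,
# no matter how many cores you throw at it.
print(amdahl_speedup(0.50, 256))  # ~2.0x
```

That is the whole argument in one formula: OpenCL/HSA pays off in proportion to how parallel the workload really is, which is why not every app benefits.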
 
I hope things really take off. Besides budget machines, I have basically not touched AMD in years now.

But from where I am standing...budget machines are what 99% of users want.

I sell dozens of dual-core 4GB base-unit boxes and they work great for the customers. I get asked maybe once a year to build a gaming rig.

I'm now not even bothering to buy new a lot of the time. I'm buying ex-corporate machines with, on average, 2.8GHz C2D chips, 4GB of RAM, 160GB HDDs, DisplayPort, eSATA, and Windows 7 Pro for just $100 a box.

Customers love them.
 
I'm not sure why anyone would want to use an AMD CPU in a budget machine that isn't used for gaming.

For stuff like that, an Intel Celeron or Pentium has enough power, and H81 mobos are dirt cheap while offering great power efficiency.
 
Power efficiency means less and less on the desktop as you move lower down the bracket. Those CPUs are barely sipping power most of the time, remaining in their lower P-states for the vast majority of the time and only spiking up the clocks when someone loads a new webpage or something, which lasts a couple of seconds at best. The difference between running an AMD chip and the Intel equivalent in that low-price bracket would be maybe a couple of cents on your power bill at most.
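A back-of-the-envelope way to sanity-check this kind of claim. The idle-power delta and electricity price below are assumptions; the real difference depends on the actual chips, duty cycle, and local rates.

```python
# Rough estimate of what an idle-power difference costs per year.
# The 2 W delta and $0.12/kWh rate are assumed example values.
def annual_cost_usd(avg_watts: float, price_per_kwh: float = 0.12) -> float:
    """Yearly electricity cost for a device drawing avg_watts on average,
    running 24/7."""
    kwh_per_year = avg_watts * 24 * 365 / 1000
    return kwh_per_year * price_per_kwh

# Suppose the AMD budget chip averages ~2 W more than the Intel one
# across a mostly-idle duty cycle.
delta = annual_cost_usd(2.0)
print(f"Extra cost per year: ${delta:.2f}")
```

Whether that works out to cents or a dollar or two per year depends entirely on the real average-power delta, but the point stands: at budget-desktop duty cycles, the number is small either way.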
 
Even for simple stuff, AMD APUs still run circles around Intel. Kabini/Temash and Beema/Mullins are worthwhile.
 
In what respects do APUs "run circles around" Intel?

WE GOT PLANTS VS ZOMBIES /FLEX

*sigh*.

The only people who willingly buy AMD APUs are the ones who want the lowest-cost piece of crap for their kid, or who hate their lives.
 
Every aspect of performance with OpenCL. Mantle for gaming is something which will become interesting if the setup of the hardware is right.

Mantle isn't magic that increases GPU performance; it's there to free up the CPU bottleneck. That's assuming the CPU on the APU is the major bottleneck and not the memory bandwidth for the GPU. At least we are getting performance increases just from software advances.

OpenCL performance is good, as long as the program can take advantage of the structure, which isn't a whole lot of programs. As long as the majority of programs target generic cores (easier to develop for), the OpenCL advantage will stay niche on APUs.

GPGPU functionality is great, but it did not pick up like everyone thought it would. If that is a priority for you, then the king will still be Nvidia, or, for OpenCL-specific tasks, FireGL video cards.
 
Mantle isn't magic that increases GPU performance; it's there to free up the CPU bottleneck. That's assuming the CPU on the APU is the major bottleneck and not the memory bandwidth for the GPU. At least we are getting performance increases just from software advances.

OpenCL performance is good, as long as the program can take advantage of the structure, which isn't a whole lot of programs. As long as the majority of programs target generic cores (easier to develop for), the OpenCL advantage will stay niche on APUs.

It is rather simple: if a program wants compute performance, then OpenCL is the only thing that is going to deliver it in a non-platform-specific solution. If you don't want performance, then don't compare it to something inferior.

This is about AMD, not Nvidia; Nvidia does not offer the same on the x86 platform, and they never will.

Well, to be honest, even if Mantle does not currently boost GPU performance, it can work that way once programmers know they can send more data to the GPU.

HSA is another smart solution, which pays off on AMD hardware.
 
LibreOffice is one of them?

Everyone who wants to support HSA in their application can do so; there are no hidden costs, and it is also supported on different platforms.

https://www.youtube.com/watch?v=k9M2n7q-bOM&list=UUHQDjDDW8w2RieO-IuqYlyg
https://www.youtube.com/watch?v=TN9dJ7sewFI&list=UUHQDjDDW8w2RieO-IuqYlyg
https://www.youtube.com/watch?v=ui--Mo6_bBo&list=UUHQDjDDW8w2RieO-IuqYlyg
https://www.youtube.com/watch?v=FK6ctilE7hY&list=UUHQDjDDW8w2RieO-IuqYlyg

These are all from the AMD YouTube channel ;)
You do know that I find it amusing that you've started posting benchmarks, right?

have fun :)
 
What other methods of evaluating performance exist? How are you qualifying that "APUs run circles around Intel" "with respect to performance" without actual evaluation of performance? Is it based on a mere hunch, a misguided notion of AMD corporate superiority or on a misguided notion of Intel inferiority?
 
Well, combining the articles, I see where 512-bit instructions would basically let AMD run programs off the CPU, which, if the branch logic is better than the P4's, would cut energy, time, and heat, because it would cut down on time waiting for the next instruction, compared to a 32-bit instruction which has to go to memory and sometimes even the hard drive for the next instruction. That means every four 32-bit instructions would run in roughly the same space of time as one 512-bit instruction, assuming it gets implemented correctly. It could be as low as three 32-bit instructions, depending on the efficiency of the bus. Of course, the reverse is also true: on a missed branch it will take four 32-bit instructions to replace it and two to five to get the new info.

I'd love to see where this goes, because while MMX was a joke at the time, it led to instruction sets that are useful, and 512-bit instructions are going to be very useful if they don't screw up the overhead or require a new OS to handle the calls. Hopefully they can just patch Windows and Linux and not require a 512-bit Windows 9. Of course, that would mean Windows-on-Windows would have to do 512-to-64, and 32-bit apps would have to be virtualized the same way 16-bit apps on 64-bit Windows have to be.

The only saving grace is that AMD is the one that designed the hybrid mode for x86-64, so they may find a way to get 32, 64, and 512 to play nice.
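For context, the "512-bit" in these instruction sets (AVX-512 style) refers to register width rather than instruction encoding: one instruction operates on a 512-bit register holding sixteen packed 32-bit values at once, so no new OS-level pointer width is involved. A toy sketch emulating that lane-wise idea in plain Python:

```python
# Emulation of the SIMD idea behind 512-bit vector instructions:
# one "instruction" works on 512 bits = 16 lanes of 32-bit values,
# collapsing 16 scalar adds into a single packed add.
LANE_COUNT = 512 // 32  # 16 lanes per 512-bit register

def vector_add(a, b):
    """Emulate one 512-bit packed add: 16 element-wise adds at once."""
    assert len(a) == len(b) == LANE_COUNT
    return [x + y for x, y in zip(a, b)]

def add_arrays(a, b):
    """Add two equal-length arrays one 'register' (16-wide chunk) at a
    time; length is assumed to be a multiple of LANE_COUNT."""
    out = []
    for i in range(0, len(a), LANE_COUNT):
        out.extend(vector_add(a[i:i + LANE_COUNT], b[i:i + LANE_COUNT]))
    return out

data = list(range(32))
# 32 elements take just two emulated "packed add" instructions.
print(add_arrays(data, data))
```

This is only the data-parallel width story; it says nothing about the branch-prediction and memory-latency points raised above, which are separate issues.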
 
What other methods of evaluating performance exist? How are you qualifying that "APUs run circles around Intel" "with respect to performance" without actual evaluation of performance? Is it based on a mere hunch, a misguided notion of AMD corporate superiority or on a misguided notion of Intel inferiority?

It is really easy: benchmarks in general have little to no value when you cannot see how they were compiled. Some benchmarks use legacy x87, which is downright stupid; it is supported, but it holds no value because other instructions are more efficient.

Then you have to weed through all this crap; as a user, you need a PhD to really understand what is "tested" and what is not.

As you saw from the AMD videos and your own linked benchmarks, both have scores which would validate each side of the coin.

Now how do you want to rate this? Is one or the other lying? You know that both are true.....
 
It is really easy: benchmarks in general have little to no value when you cannot see how they were compiled. Some benchmarks use legacy x87, which is downright stupid; it is supported, but it holds no value because other instructions are more efficient.

Then you have to weed through all this crap; as a user, you need a PhD to really understand what is "tested" and what is not.

I agree with this; however, without measuring, there is no such thing as "running circles around." I would like to see real-world app testing more than synthetic benchmarks.

As you saw from the AMD videos and your own linked benchmarks, both have scores which would validate each side of the coin.

Now how do you want to rate this? Is one or the other lying? You know that both are true.....

I say that both are lying, and that is their job. Well, at least for AMD. I mean, isn't the primary job of a marketing team to show your product in the best possible light and the competitor's in the worst possible light at the same time?
 
I agree with this; however, without measuring, there is no such thing as "running circles around." I would like to see real-world app testing more than synthetic benchmarks.

And that can also be as subjective as you want it to be; people who do ray tracing differ from people who do video editing, and so on and so forth.

Yet there are plenty of websites which just bombard you with "benchmarks."

About running circles: yes, we should ask AMD to make a benchmark for that one so the claim holds validity ;)
 
It is really easy: benchmarks in general have little to no value when you cannot see how they were compiled. Some benchmarks use legacy x87, which is downright stupid; it is supported, but it holds no value because other instructions are more efficient.
This is not only a non sequitur, it's also nonsensical. All code invariably ends up as machine instructions, whether pre-compiled, jitted or interpreted. Considering we're talking about OpenCL performance, though, it is an astonishingly and brain-meltingly irrelevant comment, because OpenCL isn't an instruction set.

As you saw from the AMD videos and your own linked benchmarks, both have scores which would validate each side of the coin.
To what "coin" are you referring? In no OpenCL benchmark I linked was AMD competitive.
 
This is not only a non sequitur, it's also nonsensical. All code invariably ends up as machine instructions, whether pre-compiled, jitted or interpreted. Considering we're talking about OpenCL performance, though, it is an astonishingly and brain-meltingly irrelevant comment, because OpenCL isn't an instruction set.


To what "coin" are you referring? In no OpenCL benchmark I linked was AMD competitive.

Why do you quote a post and say it's about OpenCL when the post being quoted doesn't mention OpenCL?

OpenCL is not an instruction set, you're right; it's an add-on programming standard which can be used from C, C++, Java, etc. It's made to allow software to compute on other devices, such as the GPU.

To me, benchmarks are only useful to those who actually use the program the benchmark came from. How it's compiled and such doesn't really matter, because those who use that program will purchase the equipment to run it. We compare CPUs based on programs we commonly use that are industry standards. It doesn't, however, give us any information on how fast the CPU can process that task. When the software was written, it was coded to run a certain way. There is no telling whether that software is keying into the strengths of that CPU. An example of this would be a single-threaded program running on a multi-core CPU.

Ideally, both AMD and Intel would have their own software that shows their products in the best light. Sadly, that is not the case.
 
I don't think there is any doubt that Intel has much more $$ and resources to put into R&D, and they have come up with a lot of innovations.

Just the same, there is also no doubt that AMD, with their meager budget, takes bigger risks, makes their R&D dollars go further, and dares to challenge the status quo. They took on two of the biggest behemoths, Intel and Nvidia, beat them both handily in innovation and performance per dollar, didn't purposely engineer products to be incompatible and thus obsolete in a few years, and didn't lie in benchmarks.

It's pretty hard not to root for AMD. They have given so much, and the computing landscape would be very, very different, with prices at least 2-3x what they are, had AMD not been there to compete. But for a few bets that didn't work out and ad dollars they didn't have, AMD would still be very strong today.
 
Why do you quote a post and say it's about OpenCL when the post being quoted doesn't mention OpenCL?
The entire discussion stemmed from his claim that AMD runs circles around Intel in OpenCL performance. The links he's referring to, which I posted, were to OpenCL benchmark results. So that's the context for those posts between him and me.

To me, benchmarks are only useful to those who actually use the program the benchmark came from. We compare CPUs based on programs we commonly use that are industry standards. It doesn't, however, give us any information on how fast the CPU can process that task.
That depends upon the unit of measure in which the benchmark results are expressed. When they're expressed in terms of time, it's almost universally the case that the benchmark result is simply the amount of time taken to complete a given task. The idea being that you run the task on a given piece of hardware and do the same on another, comparing the amount of time each took.

If your evaluation method isn't based in some way on the time it takes to execute something, I'm going to suggest that either you're using it as an analog to some existing time-based measurement (like, say, watt-hours consumed or IC temperature reached) or it's designed specifically not to be useful.
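The time-based comparison described above can be sketched as a tiny harness: run the identical task on each machine and compare the reported durations directly. The workload and repeat count here are placeholders.

```python
import time

def benchmark(task, *args, repeats=5):
    """Time-based benchmark as described above: run the same task
    several times and report the best wall-clock duration, which
    filters out one-off scheduling noise."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        task(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Example workload: sum a million integers. Running this unchanged on
# two different machines gives two directly comparable numbers.
duration = benchmark(sum, range(1_000_000))
print(f"best of 5 runs: {duration:.4f} s")
```

Taking the best-of-N run rather than the mean is one common convention; the design choice matters less than keeping the task, inputs, and build identical across the hardware being compared.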
 
I don't think there is any doubt that Intel has much more $$ and resources to put into R&D, and they have come up with a lot of innovations.

Just the same, there is also no doubt that AMD, with their meager budget, takes bigger risks, makes their R&D dollars go further, and dares to challenge the status quo. They took on two of the biggest behemoths, Intel and Nvidia, beat them both handily in innovation and performance per dollar, didn't purposely engineer products to be incompatible and thus obsolete in a few years, and didn't lie in benchmarks.

It's pretty hard not to root for AMD. They have given so much, and the computing landscape would be very, very different, with prices at least 2-3x what they are, had AMD not been there to compete. But for a few bets that didn't work out and ad dollars they didn't have, AMD would still be very strong today.

Well, here is the problem: check how much faster the CPU got over the last 5 years. Now check how much faster the GPU got over the last 5 years.

Now, if R&D money were the key to the success of the CPU, then it would have scaled better; we would have seen 32 or maybe 64 cores. The reason we are not seeing this is that the CPU landscape tends to be slow-moving (how little code is written for multi-core, even to date, is in most cases a crying shame). When we are talking GPU, addressing extra processing power does not pose the same problems.

And this is why software will focus more on using compute than staying stuck on the CPU.
 
This is not only a non sequitur, it's also nonsensical. All code invariably ends up as machine instructions, whether pre-compiled, jitted or interpreted. Considering we're talking about OpenCL performance, though, it is an astonishingly and brain-meltingly irrelevant comment, because OpenCL isn't an instruction set.


To what "coin" are you referring? In no OpenCL benchmark I linked was AMD competitive.

I was not referring just to OpenCL but to stuff in general; I used x87 as a reference for something old and obsolete.
Yeah, somehow when you get your hands on GCC and an Intel compiler, you notice the difference. Maybe some day you can :).

I'm talking in general and not about OpenCL, but in the benchmarks I linked, AMD was faster.

Code in general can easily be written so that it is optimized for cache sizes (L1, L2, L3), with specific instructions used that (these days) benefit certain CPU architectures better than others.

This is nothing new; this happened back in the day of the 386.
 