From ATI to AMD back to ATI? A Journey in Futility @ [H]

The fact that they made a big deal about QPI direct memory access and then sat on it is likely part of the issue. The interconnect between a GPU and its memory is a lot faster than Intel's path from memory to anything past the level 1 cache. Big companies do this all the time: they act like they're at the top of the world and no one can design a better widget, and then when someone does, they all scramble to replace their widget without any lead time. Intel processors are much faster at branching logic but run into issues when the data isn't there.

Having an on-board integrated GPU that can draw what's on the screen means they stay relevant at the tasks they were most commonly used for. Right now most of the rendering of scene files gets kicked over to a GPU with a bunch of logic processors that do one task really fast and don't have to wait in line for the branching logic. There's only so fast you can push electrons through gates before they end up as waste heat, but if you can figure out how to build a better mousetrap at solving problems...
 
Exactly. GDDR5X instantly destroyed HBM1 and partly HBM2. GDDR6 looks to be the final nail for HBM2 outside the top bins. Then we can wait and see for 2020 or so with HBM3 and low-cost, lower-speed HBM2 variants. But again, why put it in an IGP?

It's clear gamers move up in graphics SKUs, not down. A faster IGP has no value as such, and nobody is willing to pay extra for it. You'd think people would have learned that over the last 5-6 years. And if the value were so great, we would see something like the eDRAM solution everywhere.

The interposer itself has to go; it's a fixed, static cost. Something like Intel's EMIB can save on the cost there. Then there are the manufacturing and TSV issues. Not only does it add cost, but any failure means a total loss: GPU, HBM, and interposer out the window. Nothing to salvage.

And it keeps coming back to the biggest issue with HBM: cost structure. It's just never in HBM's favour.
^^ This ^^
 
I read this on Forbes earlier.... I should have gotten Kyle's autograph at Tiger Direct back when the GTX 780s released. He's famously CONTROVERSIAL.
 
I remember hearing about this last year in response to the nVidia patent licensing deal expiring.

When Intel signed the deal with nVidia, they didn't use nVidia tech; they just had an agreement to cover technologies that Intel didn't have patents for in their own iGPU.

Example: Out in March of this year: https://www.extremetech.com/computing/224964-report-claims-intel-amd-discussing-gpu-patent-licensing

What I'm curious about, though, is how far this deal with AMD really goes.

This guy makes an interesting point:


So is Intel still going to be Intel CPUs with shit iGPUs, or are we actually going to see AMD tech in there?
 
Déjà vu for me; wasn't something similar announced last year, that ATI would go off on its own again...
 
I for one hope AMD used this leverage to renegotiate the x86 license such that it doesn't go *poof* in case they run out of money and are forced to sell themselves.

Sure, we'll license you our Radeon IP, but in exchange we want - among other things - to convert the x86 license to a permanent, irrevocable one.
 
Nice to know Intel finally manned up with an IGP that isn't completely worthless. ;)

My thinking is it's primarily for 4K display support. It's not going to be competing with AMD's graphics cards; it's more for that NUC/IGP market where it makes sense to license it out.
 
Nice to know Intel finally manned up with an IGP that isn't completely worthless. ;)

My thinking is it's primarily for 4K display support. It's not going to be competing with AMD's graphics cards; it's more for that NUC/IGP market where it makes sense to license it out.

I don't know, the Intel Iris Pro stuff wasn't bad for an IGP. It wasn't in a whole lot of parts, but that's beside the point.

Honestly though, I don't see why people keep calling AMD's APUs such a big deal.

Sure, they may have faster GPUs than Intel's typical IGPs, but still nothing I'd play games on. An Intel IGP and an APU do everything else pretty much equivalently, so I don't really see the APU as a huge benefit.

I had a Kaveri 7850K in an HTPC when they first came out. I still didn't think it was anything to write home about in the graphics department.

Faster than an Intel IGP but still not fast enough for gaming means they may be faster, but they still don't really add any more value.
 
Isn't this just the same as the nVidia licensing deal? Some Intel engineer once said that the licensing is mostly for patent issues, so they don't step on other patents illegally while developing their own. It's not like they want to put the other company's tech directly into their GPU.
 
APUs do sell well for things like the Surface Pro, MacBook Pro, and other tablet-type devices. So watching 4K movies, games (at lower resolution), etc. can drive some sales. Kaby Lake should do well with newer versions of the Surface Pro, and hopefully even better with AMD tech once that rolls around. Now I wonder if Intel is interested in HBM stuff ;), I think so. . .
 
The iGPU has gotten progressively better. What will Intel really gain from this, Kyle? I know you see it from the business perspective. How does this really help Intel? They are already dominant on the business side.
 
HBM costs will drop, but there is other tech that provides more bandwidth than what is used now, at lower cost than HBM. If the need is there, HBM will be adopted; otherwise the other tech is good enough in the short term. And 1 to 3 years is just one generation of CPUs/iGPUs/APUs; do you think one generation is enough?

Yet AMD (Raja) stated a quarter ago that they still haven't recovered the cost of creating the manufacturing pipeline for HBM?

GDDR5X's first announcement for mass production was ahead of schedule by a quarter, which happened in the first quarter of this year. Tech like RAM takes about a year to produce, so they started around the beginning of 2015; HBM was in the works from 2010.

And you don't think AMD thinks about the bottom line too?
Think about this - a Titan X at 4K with Watch Dogs 2 comes to a crawl when you really start pushing the finer graphical options, and boy do they look good! How much bandwidth will it take to get that performance from 22 fps up to 60 fps? Looks like a factor of 3 here. GDDR6 will not get you there easily.

When VR hits the second stage, or the next generation with even higher resolution, how much bandwidth and what kind of processing power will you need? Especially if you really start pushing beyond the current graphics level? Looks like a factor greater than 3. GDDR6? Nope.
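To put a rough number on that "factor of 3" - just a back-of-the-envelope sketch, assuming frame rate scales roughly linearly with memory bandwidth (which it never does exactly) and assuming the Titan X's roughly 480 GB/s:

```c
#include <stdio.h>

int main(void) {
    /* Figures from the example above: Titan X at 4K in Watch Dogs 2 */
    double current_fps = 22.0;
    double target_fps  = 60.0;
    double current_bw  = 480.0;   /* GB/s, roughly what a Titan X (Pascal) has */

    /* Naive assumption: needed bandwidth scales linearly with frame rate */
    double factor = target_fps / current_fps;
    printf("scaling factor: %.1fx\n", factor);                            /* ~2.7x      */
    printf("naive bandwidth target: ~%.0f GB/s\n", current_bw * factor);  /* ~1300 GB/s */
    return 0;
}
```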

Process tech is slowing to a snail's pace, so when 10nm, and more laughably 7nm, tech will actually exist is really anyone's guess. Intel maybe has a good shot, but GloFo, Samsung, or TSMC? Now if AMD can hitch a ride with Intel on their process tech, that would be rather cool.

So what other options are there besides multiple smaller GPUs connected together on a very fast bus - an INTERPOSER - as in Navi? Looks like AMD is going in the right direction. GDDR6 looks dead to me unless progress comes to a snail's pace, Intel style, as with CPUs currently.

HBM is a good solution to allow continued rapid advancement - the more it is used, the lower the cost will become overall. HBM2 supports many different configurations for bandwidth and memory capacity. How low in the price range it can go and still be profitable is unknown. I expect the AMD Vega generation to be faster than the current Pascal generation.
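To illustrate that configuration flexibility, here's a rough sketch using the commonly quoted HBM2 figures (a 1024-bit interface per stack at up to ~2 Gbps per pin - approximate numbers from memory, not official spec sheets):

```c
#include <stdio.h>

int main(void) {
    /* Commonly quoted HBM2 figures (approximate): 1024-bit bus per stack,
       up to ~2.0 Gbps per pin. Aggregate bandwidth scales with stack count. */
    double bus_bits  = 1024.0;
    double pin_gbps  = 2.0;
    double per_stack = bus_bits * pin_gbps / 8.0;   /* ~256 GB/s per stack */

    for (int stacks = 1; stacks <= 4; ++stacks)
        printf("%d stack(s): ~%.0f GB/s\n", stacks, stacks * per_stack);
    return 0;
}
```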
 
This has more to do with the GCN architecture and the APIs it directly influenced: Mantle, Metal, Vulkan, DX12.
Intel played quite a good game with their expensive Iris GFX, and an adequate one with their standard iGP, but they were in bad standing with anything DX11 and forward.
I would guess they were pinched off and had to purchase some IP to move forward. They chose to pay who they needed to pay. Just business.
If GCN weren't powering the two largest consoles in the world, things might be different.
But they aren't.
 
The elephant in the room!

While with time HBM may enter cards like the 102 and 104 series, maybe the 106 and Polaris 10 type series if you're really pushing it, it's still a long way to IGPs and lower-end dGPUs.

Congratulations, you completely missed everything despite how hard it could be :)
Yeah, that is what she said . . . oops (y)
 
Isn't this just the same as the nVidia licensing deal? Some Intel engineer once said that the licensing is mostly for patent issues, so they don't step on other patents illegally while developing their own. It's not like they want to put the other company's tech directly into their GPU.

Exactly.
 
Has traffic to the website and/or forum noticeably increased as a result of that Forbes article?

It won't just be the Forbes article driving traffic; there are at least 10 articles reporting on the deal, and probably more reporting on the stock activity as a result.
 
AMD has mastered cheap, decent integrated graphics. Intel has, apparently, not really found a way to make this a reality on their end without taking up a ton of silicon real estate and dramatically increasing costs. For MOST of us on here this is irrelevant; we buy discrete cards and don't really care. But...take my wife for example....I bought her a cheap AMD APU laptop and it'll play Dragon Age: Inquisition, Sims 4, etc. at playable framerates, and I paid a little over $300 for it. That's pretty incredible in my mind. My laptop with an i7-6700HQ, which can't do ANY form of gaming on its own (even though the CPU itself costs as much as I paid for my wife's entire laptop), cost $1500 with a decent dedicated graphics card.

So make no mistake....AMD has something to offer Intel. If they could combine intellectual property and come out with a middle-of-the-pack Kaby Lake with a Bristol Ridge-level iGPU, you'd see a revolution in the $500 laptop market. It would practically negate the low-end dedicated GPU market in laptops, decrease manufacturing costs (the extra copper and cooling for adding a discrete GPU is NOT cheap), and really bridge the gap between consumer gamers and enthusiast products.
 
Think about this - a Titan X at 4K with Watch Dogs 2 comes to a crawl when you really start pushing the finer graphical options, and boy do they look good! How much bandwidth will it take to get that performance from 22 fps up to 60 fps? Looks like a factor of 3 here. GDDR6 will not get you there easily.

When VR hits the second stage, or the next generation with even higher resolution, how much bandwidth and what kind of processing power will you need? Especially if you really start pushing beyond the current graphics level? Looks like a factor greater than 3. GDDR6? Nope.

Process tech is slowing to a snail's pace, so when 10nm, and more laughably 7nm, tech will actually exist is really anyone's guess. Intel maybe has a good shot, but GloFo, Samsung, or TSMC? Now if AMD can hitch a ride with Intel on their process tech, that would be rather cool.

So what other options are there besides multiple smaller GPUs connected together on a very fast bus - an INTERPOSER - as in Navi? Looks like AMD is going in the right direction. GDDR6 looks dead to me unless progress comes to a snail's pace, Intel style, as with CPUs currently.

HBM is a good solution to allow continued rapid advancement - the more it is used, the lower the cost will become overall. HBM2 supports many different configurations for bandwidth and memory capacity. How low in the price range it can go and still be profitable is unknown. I expect the AMD Vega generation to be faster than the current Pascal generation.


Multiple small dies on the same process aren't going to help either lol, because cost increases: now you need a very expensive interconnect, on top of the interposer, on top of extra memory for the two or more GPUs to communicate.

It all comes down to when those other components are cheap enough to manufacture to sustain current bracket prices.

The only thing it helps with is yields versus bigger chips... and right now that is why nV makes the performance chips first before they go to the enthusiast chips, or does the professional cards first, since volume is lower there and the higher prices of those cards can cover the increased cost of lower yields.
 
Everyone talks about Intel vs. AMD in the CPU sector, but has anyone wondered about Intel vs. Nvidia? Nvidia's stronghold is in GPUs, but they are expanding into self-driving cars (same as Intel), HPC, data centers, IoT, etc. Yeah, AMD is fighting for that CPU money, but at the end of the day, Nvidia's the rising star. Perhaps if Intel truly thinks Nvidia is a rival, they would be willing to go to AMD for GPU tech (beyond the patent deals). On that note, how does this deal factor into Kyle's article on RTG joining Intel? More possible or less likely now?
 
They will not use AMD GPU core IP, nor will AMD give Intel GPU core IP, because that is their advantage over Intel.

nV can't compete with Intel on CPUs; outside of their CPUs serving as an appendage so their GPUs can do their thing, that is all nV's CPUs are good for, for the time being. Intel has Phi to take on nV in DL- and HPC-specific areas.

The RTG splitting off was a threat to get AMD's asses moving on the importance of their graphics division. They can't sell off RTG, because that would cause headaches for their APU, CPU, and semi-custom divisions.
 
Everyone talks about Intel vs. AMD in the CPU sector, but has anyone wondered about Intel vs. Nvidia? Nvidia's stronghold is in GPUs, but they are expanding into self-driving cars (same as Intel), HPC, data centers, IoT, etc. Yeah, AMD is fighting for that CPU money, but at the end of the day, Nvidia's the rising star. Perhaps if Intel truly thinks Nvidia is a rival, they would be willing to go to AMD for GPU tech (beyond the patent deals). On that note, how does this deal factor into Kyle's article on RTG joining Intel? More possible or less likely now?

Completely valid. Intel has dominated for almost 40 years, since the microprocessor CPU became dominant....but much like IBM before them, they're a big, bloated, slow-to-move monster selling what's arguably not the tech of the future. GPU and ARM tech seem to be the new path, and both NVIDIA and AMD are players there already. NVIDIA already has a jump start in deep learning and such. It's probably getting near time for AMD and Intel to stop looking at each other as competitors and start looking at each other as allies in the battle to keep x86 relevant and transition it into future tech.
 
AMD has mastered cheap, decent integrated graphics. Intel has, apparently, not really found a way to make this a reality on their end without taking up a ton of silicon real estate and dramatically increasing costs. For MOST of us on here this is irrelevant; we buy discrete cards and don't really care. But...take my wife for example....I bought her a cheap AMD APU laptop and it'll play Dragon Age: Inquisition, Sims 4, etc. at playable framerates, and I paid a little over $300 for it. That's pretty incredible in my mind. My laptop with an i7-6700HQ, which can't do ANY form of gaming on its own (even though the CPU itself costs as much as I paid for my wife's entire laptop), cost $1500 with a decent dedicated graphics card.

So make no mistake....AMD has something to offer Intel. If they could combine intellectual property and come out with a middle-of-the-pack Kaby Lake with a Bristol Ridge-level iGPU, you'd see a revolution in the $500 laptop market. It would practically negate the low-end dedicated GPU market in laptops, decrease manufacturing costs (the extra copper and cooling for adding a discrete GPU is NOT cheap), and really bridge the gap between consumer gamers and enthusiast products.

I think you got it all turned around. ;)

Kaveri is 2.4 billion transistors; Carrizo is 3.1 billion. Now go see what a dual-core Skylake with an IGP is. Not to mention which is much more energy efficient.
 
Completely valid. Intel has dominated for almost 40 years, since the microprocessor CPU became dominant....but much like IBM before them, they're a big, bloated, slow-to-move monster selling what's arguably not the tech of the future. GPU and ARM tech seem to be the new path, and both NVIDIA and AMD are players there already. NVIDIA already has a jump start in deep learning and such. It's probably getting near time for AMD and Intel to stop looking at each other as competitors and start looking at each other as allies in the battle to keep x86 relevant and transition it into future tech.

GPU and ARM aren't the new path. And Intel isn't a dinosaur. It seems the savior for the "displeased" is always ARM when it can't be AMD.
 
Well, that is like putting the final nail in AMD's coffin. The GPU in AMD's APUs was the only thing that made AMD's processors better than Intel's.

Not necessarily. If AMD can secure licensing agreements to provide cash flow, they can become like ARM and never make a product, just license the technologies they develop. It would be a sad day for us to see AMD no longer producing products, but for the company, it could be a positive evolution.
 
Completely valid. Intel has dominated for almost 40 years, since the microprocessor CPU became dominant....but much like IBM before them, they're a big, bloated, slow-to-move monster selling what's arguably not the tech of the future. GPU and ARM tech seem to be the new path, and both NVIDIA and AMD are players there already. NVIDIA already has a jump start in deep learning and such. It's probably getting near time for AMD and Intel to stop looking at each other as competitors and start looking at each other as allies in the battle to keep x86 relevant and transition it into future tech.


x86 has no competition from ARM, at least not yet.

Main points:

ARM doesn't have the software stack that x86 does.

ARM performance CPUs aren't competitive yet.

The latter might happen soon (so far they haven't lived up to what people have promised), but the former will not happen for a long time.

This is why companies making high-performance ARM CPUs are focused on servers first, because the software stack for servers is better than for standalone PCs, but only for specific types of servers.

On the Intel side of things, Intel is not like IBM; they are pretty nimble for such a large company. Just as an example, it took Intel 5 years to go from the P4 to Nehalem; how long did it take AMD to go from Phenom to Zen? 10 years. And in all this time Intel has diversified into all aspects of platform delivery. So no, they are not sitting still or trying to do what IBM did.

Lack of pushing performance due to no competition is not the same thing as lack of innovation.
 
I think you got it all turned around. ;)

Kaveri is 2.4 billion transistors; Carrizo is 3.1 billion. Now go see what a dual-core Skylake with an IGP is. Not to mention which is much more energy efficient.

Have you seen an Iris Pro chip? I'm talking die size, actual SIZE, i.e. real estate.

GPU and ARM aren't the new path. And Intel isn't a dinosaur. It seems the savior for the "displeased" is always ARM when it can't be AMD.

And...I didn't say dinosaur, I said bloated monster. Which they are. They're VERY good at doing one thing, maybe two if you count SSDs, but they haven't been terribly innovative outside of that arena in a long time. Without AMD driving them to adopt x64 and consumer-level multi-core, we'd probably be getting our first Core 2 Duos right about now....

Companies like Intel need to acquire to innovate; it's not a negative, it's just the reality. And, in this case, there is a company that does a BETTER job of making economical integrated graphics. If you can get at that intellectual property in a way that's beneficial to both parties, why the heck not?
 
Your post is filled with red flags.

If HBM/HMC is so blindingly good, why aren't server CPUs fitted with it?

CPUs don't care that much about latency. That's what caches are for.

So now you want both HBM and DRAM? The costs keep going up and the benefits keep going down.

Heard about LPDDR4?

That depends on the workload. If you can keep the working data out of system RAM and work only from CPU cache, the speed difference is quite amazing. And yes, I have experimented with this and have a working program that lets me adjust the size of the data set to whatever I want.

Even when factoring in the extra cycles to set up a much bigger number of data sets, keeping the datasets small enough to stay in the CPU cache makes a world of difference.

HBM/HMC would be a pretty ideal setup for something like that.
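A minimal sketch of that kind of experiment (my own rough version, not the actual program mentioned above): sum the same buffer over and over so the total work stays constant while the working-set size changes; once the buffer no longer fits in cache, effective throughput falls off sharply.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Four accumulators break the FP-add dependency chain so the loop is
   limited by memory, not by add latency. Compile with e.g. gcc -O2. */
static double sum_buffer(const double *buf, size_t n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += buf[i]; s1 += buf[i + 1]; s2 += buf[i + 2]; s3 += buf[i + 3];
    }
    for (; i < n; ++i) s0 += buf[i];
    return s0 + s1 + s2 + s3;
}

int main(void) {
    const size_t total_elems = (size_t)1 << 27;           /* ~1 GB of traffic per test */
    const size_t sizes_kb[]  = { 16, 256, 4096, 262144 }; /* L1-ish, L2-ish, L3-ish, RAM */

    for (size_t t = 0; t < sizeof sizes_kb / sizeof sizes_kb[0]; ++t) {
        size_t n = sizes_kb[t] * 1024 / sizeof(double);
        double *buf = malloc(n * sizeof *buf);
        if (!buf) return 1;
        for (size_t i = 0; i < n; ++i) buf[i] = (double)i;

        size_t reps = total_elems / n;
        volatile double sink = 0.0;
        clock_t t0 = clock();
        for (size_t r = 0; r < reps; ++r)
            sink += sum_buffer(buf, n);          /* re-reads the same working set each pass */
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        double gbytes = (double)reps * n * sizeof(double) / 1e9;
        printf("%8zu KB working set: %6.1f GB/s effective\n", sizes_kb[t], gbytes / secs);
        free(buf);
    }
    return 0;
}
```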
 
Have you seen an Iris Pro chip? I'm talking die size, actual SIZE, i.e. real estate.

Yes, and still, as a quad-core with 8MB of L3, it's much smaller at around 180mm2.

Kaveri is 246mm2 without any L3, and let's call it what it is: a dual-core.
Carrizo is 245mm2 and much denser than Kaveri.

Both throttle like mad.
 
Yes, and still, as a quad-core with 8MB of L3, it's much smaller at around 180mm2.

Kaveri is 246mm2 without any L3, and let's call it what it is: a dual-core.
Carrizo is 245mm2 and much denser than Kaveri.

Both throttle like mad.

That's NOT including the eDRAM it needs to function, and at 4x the cost.

And AMD has an OBVIOUSLY inferior CPU design on Kaveri and Carrizo compared to the Intel products, so yeah, they need to throw a lot more crap into it on the CPU side of things to make it comparable....but once again, that's not what was being discussed.
 
That's NOT including the eDRAM it needs to function, and at 4x the cost.

And AMD has an OBVIOUSLY inferior CPU design on Kaveri and Carrizo compared to the Intel products, so yeah, they need to throw a lot more crap into it on the CPU side of things to make it comparable....but once again, that's not what was being discussed.

The eDRAM isn't needed for it to function.

And most of the die size in Kaveri and Carrizo is the IGP.

You simply got it wrong without checking the facts first.
 
ARM works off about 3,000 milliamps, or 3 amps, for the whole device. My phone has a smaller battery than the fastest ones, with only 3.85 volts and a total of 10 amps for the whole device. That's roughly 3 amps for the device, versus Intel's high end using 150 watts at 12 volts, or 12.5 amps, for the processor alone.
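Just to spell out the arithmetic (watts = volts x amps, using the figures above; the phone numbers are for the whole device, the 150 W figure for the CPU alone):

```c
#include <stdio.h>

int main(void) {
    /* Figures from the post: a phone (whole device) vs. a high-end desktop CPU */
    double phone_volts = 3.85, phone_amps = 3.0;   /* whole device                 */
    double cpu_watts   = 150.0, cpu_volts = 12.0;  /* CPU alone, off the 12 V rail */

    printf("phone device: ~%.1f W\n", phone_volts * phone_amps);       /* ~11.6 W */
    printf("desktop CPU : ~%.1f A at 12 V\n", cpu_watts / cpu_volts);  /* 12.5 A  */
    return 0;
}
```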

So the ARM tech has less current and the current flows slower.

Processors do work per cycle... ARM is designed to compete in the small form factor with as little power as possible, while a workstation or server CPU is more concerned with doing as much work as possible. Think of amps as how much water there is and volts as how fast it is moving: when the water goes through the various mousetraps it needs enough water to spin the wheels [amps], and how much water is between the sluice gates per cycle is [volts]... so, different purposes... AMD focused on using the energy more efficiently and made some major strides in the Thunderbird days; it's just that since the Core Duos they really have not made a CPU comparable to Intel's.
 
This guy makes an interesting point:


So is Intel still going to be Intel CPUs with shit iGPUs, or are we actually going to see AMD tech in there?

That saved me a bunch of time typing :p
I'm willing to bet my bottom dollar that this is all that is going on. I see no logic in there being anything more to it. I suspect that they've gotten everything they needed from Imagination Technologies in order to make their iGPU as capable as they needed it, but will now simply require the IP to legally continue using the graphics cores. It won't matter if it's from AMD or nVidia; they're just looking for that piece of paper which says "We, AMD, are A-Ok-peachy-keeno with Intel utilizing pat.# <long strings of numbers> for usage in their overly-expensive processors, in exchange for <large figure> of money that we desperately need."

However, if at any time Intel wants to further upgrade their chips, they'll PROBABLY be told "Do it your damn self, you're a big boy!", or at the very least AMD will hold their hands out while rubbing their fingers together, to hint that it'll cost Intel if they want AMD (well, RTG) to do the work. Still, I don't see that happening as, IMO, AMD would be shooting themselves in the foot. It's the sort of thing you'd do if the tech would be used in only a limited number of products, thus having little impact on your bottom line. Think of how multiple car companies use Mercedes-Benz's high-grade AMG engines in their high-performance vehicles, but those aren't vehicles churned out in very high volume.

That's just my 2 cents, though. Godspeed to AMD in any case!

EDIT: Relevant quoted post accidentally buhleeted, unrelated to initial removed portion.
 
I was not aware that Kyle was a "controversial character", much less that he had access to inside information from AMD after the Nano review. :cool:

This is probably bad news for us consumers on the CPU side of things and good news on the GPU side: less competition for Intel, more competition for NVIDIA. AMD survives a few more quarters, giving up on the chance to build a good CPU for the chance to build a good GPU. The perfect storm would be an AMD GPU built on Intel's manufacturing process.
 