Statement from AMD - Raja Koduri Leaves

GCN and CUDA aren't in the same category. The AMD counter to CUDA is OpenCL.

You are trying to mix physical and non-physical parts and claim they're the same. Hardware vs. software.

That was really useful. I'm learning things.

I am taking a look at the microarchitectures right now, and what hasn't helped is how they name things.

For instance, you are right about CUDA. However, Nvidia advertises "CUDA cores" as a marketing term, and AMD has something similar called stream processors, which are essentially the same kind of unit. CUDA cores aren't anything so special, other than being cores that the CUDA software can interact with.
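
To make the hardware/software split concrete, here's a minimal sketch of the OpenCL side, assuming the pyopencl package and any OpenCL-capable device (kernel and variable names are mine): the same kind of data-parallel kernel runs on AMD's stream processors or, via CUDA, on Nvidia's "CUDA cores", because both names just describe the shader ALUs that execute it.

```python
# Minimal vector add via OpenCL (pyopencl). The equivalent CUDA kernel would be:
#   __global__ void add(const float *a, const float *b, float *out)
#   { int i = blockIdx.x * blockDim.x + threadIdx.x; out[i] = a[i] + b[i]; }
import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()      # picks any available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prog.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```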

It's taking a little bit to wade through the marketing lingo and get to the heart of it, but I am getting there.

I may end up coming around to your way of thinking by the end of it.

My main concern is that whatever AMD has going forward presses its advantages in FreeSync, async compute, and the other edges it has simply by virtue of working with Khronos and Microsoft so that developers target its unique hardware.

At the same time, when I was comparing the Maxwell and Pascal architectures to AMD's latest iteration, what jumped out at me was that Nvidia has found a way to pack in far more CUDA-enabled cores while at the same time improving their efficiency, hence the better power efficiency.

Is that about right from your perspective?

From a design standpoint, then, it would mean AMD is actually doing more with less, but is also running into limitations that prevent it from scaling its architecture, hence the larger power requirements.

Does all this sound about right?
 
AMD is doing less with more, because the hardware isn't capable of meeting today's needs. Hence the issue of needing ~75% more transistors, ~90% more power, and ~50% more memory bandwidth to get somewhere near the competition.
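
For a rough sanity check of those percentages, here's the arithmetic, assuming the comparison is an air-cooled Vega 64 against a GTX 1080 (spec-sheet figures; real-world draw varies by SKU and workload):

```python
# Spec-sheet comparison: Radeon RX Vega 64 vs. GeForce GTX 1080.
vega64  = {"transistors (bn)": 12.5, "board power (W)": 295, "mem BW (GB/s)": 484}
gtx1080 = {"transistors (bn)": 7.2,  "board power (W)": 180, "mem BW (GB/s)": 320}

for key in vega64:
    extra = (vega64[key] / gtx1080[key] - 1) * 100
    print(f"{key}: Vega 64 needs {extra:.0f}% more")

# transistors (bn): Vega 64 needs 74% more
# board power (W):  Vega 64 needs 64% more  (~92% vs. the 345 W liquid SKU)
# mem BW (GB/s):    Vega 64 needs 51% more
```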

And the issue is GCN, because it's not matching today's needs. It was great in 2012 and 2013; after that it peaked and went downhill. GCN got long in the tooth. AMD needs a "Maxwell" and a "Volta" for post-GCN.
 
AMD is doing less with more, because the hardware isn't capable of meeting today's needs. Hence the issue of needing ~75% more transistors, ~90% more power, and ~50% more memory bandwidth to get somewhere near the competition.

And the issue is GCN, because it's not matching today's needs. It was great in 2012 and 2013; after that it peaked and went downhill. GCN got long in the tooth. AMD needs a "Maxwell" and a "Volta" for post-GCN.

For once Shintai, I completely agree with just about everything you have been posting in this thread.

AMD are in the same place with their GPUs as they were with their CPUs 10 years ago. AMD cannot afford to keep putting band-aids on the infirm GCN architecture. They need a new architecture no later than next year, and not just yet another respin of GCN. If they just respin it again and don't overtake nVidia's current top-end card for less money, then AMD graphics will fail, and AMD do not have the money to claw their way back out of it. Bad management from AMD, as usual. They are the masters of having it all, then pissing on it.
 
AMD is doing less with more, because the hardware isn't capable of meeting today's needs. Hence the issue of needing ~75% more transistors, ~90% more power, and ~50% more memory bandwidth to get somewhere near the competition.

And the issue is GCN, because it's not matching today's needs. It was great in 2012 and 2013; after that it peaked and went downhill. GCN got long in the tooth. AMD needs a "Maxwell" and a "Volta" for post-GCN.

I did some digging into what Navi is supposed to be. Obviously there isn't much out there.

What I did find was interesting but really in the end tells me nothing.

Ideally Navi will be 7 nm. Given the size of their GPUs in the past, I am wondering how they will do this if it is GCN.

What they are planning to make use of is the Infinity Fabric from Ryzen processors in order to stack multiple GCN units, the way Threadripper stacks cores.

However, do not quote me on it being multiple GCN units; that part is my assumption. Their main GPU is what they are shrinking and stacking.

It doesn't exactly tell me, though, that they're getting to lower heat output or lower power draw. What little I read made it seem like those AMD trademarks will still be there.

I could see this going one of two ways with no middle ground.

On the one hand, stacking units (for the sake of argument, let's say they are GCN units) could allow AMD to cheat in a sense, and because of the inroads already made with Vulkan and DX12, we could suddenly see AMD just destroying Nvidia in gaming performance. We would have a very weird dichotomy, I think: in DirectX 11 games, Nvidia would still be king of the hill by a large margin, but in DirectX 12 and Vulkan games, AMD would absolutely dominate.

That's the very rosy side of things.

Here's the other side of the coin, and in my opinion the more likely one.

Infinity Fabric is not some miracle catch-all. However minimal, there is a little bit of latency. And as you said, GCN seems to have run its course and is plagued by scaling and heat issues.
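
To put that latency concern in rough numbers, here's a toy Amdahl-style model (purely illustrative; the cross-die fraction is a made-up parameter, not anything disclosed about Navi or Infinity Fabric):

```python
# Toy model: if some fraction of the work is serialized by inter-die
# traffic, N dies give much less than N-times throughput.
def effective_speedup(n_dies: int, cross_die: float) -> float:
    return 1.0 / (cross_die + (1.0 - cross_die) / n_dies)

for frac in (0.00, 0.05, 0.15):
    print(f"cross-die fraction {frac:.0%}: 4 dies -> "
          f"{effective_speedup(4, frac):.2f}x")

# cross-die fraction 0%:  4 dies -> 4.00x
# cross-die fraction 5%:  4 dies -> 3.48x
# cross-die fraction 15%: 4 dies -> 2.76x
```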

So Navi could very easily be the graphics division's Bulldozer.

It's something that sounds really neat in theory, but when put into practice you get a lot of bad with little upside.

Depending on what they do, it's entirely possible that stacking GCN units will not make a damn difference, because of game coding or poor design or something else.

I just see a reimagining of all of Bulldozer's flaws, only at the GPU level.

I don't think there is a middle ground for this.

Ultimately, what this leads me to think is that those who are predicting that AMD will go back to just processors may be right. Perhaps they will be out of the video card market by 2019 and will sell off the graphics card division.

AMD has a hell of a hit on its hands with Ryzen, and if it can scale it will give Intel a very hard time.

As for the RTG, people paying attention know very well that they are treading water to recoup research and development spending. They aren't heavily marketing their graphics cards and have no problem with miners buying up their cards, because at the end of the day a dollar is a dollar.

The one point where I will disagree with you is that if AMD can recover the research and development costs that went into the Arctic Islands architecture, they may be able to finance development of a new architecture. Those R&D dollars have to go somewhere, and if Lisa Su still carries an interest in graphics cards, then with the main R&D already done for Ryzen, they can afford to divert R&D dollars into a new graphics architecture.
 
For once Shintai, I completely agree with just about everything you have been posting in this thread.

AMD are in the same place with their GPUs as they were with their CPUs 10 years ago. AMD cannot afford to keep putting band-aids on the infirm GCN architecture. They need a new architecture no later than next year, and not just yet another respin of GCN. If they just respin it again and don't overtake nVidia's current top-end card for less money, then AMD graphics will fail, and AMD do not have the money to claw their way back out of it. Bad management from AMD, as usual. They are the masters of having it all, then pissing on it.

I agree with the assessment with the exception of not having the money to claw out of it.

People said the same thing about Bulldozer, and eventually they were able to claw out of it by hiring Jim Keller, who designed their next chip. They did this when they were in a far worse financial spot than they are now.

The Ryzen/Threadripper research and development money is already sunk into those designs, which are paying off remarkably. Therefore some of those future R&D dollars can be diverted to the RTG on a short-term basis.

Indeed, what we may be seeing with Lisa taking over temporarily is her wanting to put someone in place at RTG whom she can trust with that research money. They may then go out and hire their own Jim Keller on the graphics side. That was what Raja was supposed to be, but it turned out he only wanted to spin the RTG off so that it could be acquired by Intel.

They did do some remarkable things under Koduri, I will say. Drivers are more consistent and reliable, the partnerships they made with Microsoft and the Khronos Group have been instrumental in AMD remaining competitive with Nvidia in Vulkan and DirectX 12 games, and they have made some very smart strategic alliances.

But I think if Lisa is going to divert some of the money they have coming in from the success that Ryzen is, she needs to be able to trust who she's giving it to.

There is a window here for RTG to get back on its feet. It will not happen quickly. They need to hire a replacement for Raja, and that replacement needs to hire a chief engineer like a Jim Keller, even if it is on a temporary basis.

This is just me, but I would try to hire one of the people responsible for the design of Maxwell, if not Pascal, away from Nvidia. Regardless, they need a genius engineer who can take those research dollars and turn them into the same gold Jim Keller did with Ryzen.

It can be done, and they are in a better situation than they were with Bulldozer to do it. But even if they succeed, we likely will not see the results for another 3 to 5 years. That is just how long these things take.
 
Word is that Raja went to Intel. I suppose it makes logical sense, given that Intel and AMD are now joined at the GPU.

 
Now that Raja will be at Intel, what can we expect?

Is there a place for a third contender on the 3D graphics card front?
 
Now that Raja will be at Intel, what can we expect?

Is there a place for a third contender on the 3D graphics card front?

I don't think there's room for a third, given that there's no way to make a GPU without infringing on IP from either Nvidia or AMD. That would make strange bedfellows, though I suppose it's now down to AMD, without knowing what IP license arrangements they've inked with Intel thus far.
 
Now that Raja will be at Intel, what can we expect?

Is there a place for a third contender on the 3D graphics card front?

There always has been.

Just no one with the stones or capital to make it happen.
 
I don't think there's room for a third, given that there's no way to make a GPU without infringing on IP from either Nvidia or AMD. That would make strange bedfellows, though I suppose it's now down to AMD, without knowing what IP license arrangements they've inked with Intel thus far.

They have Intel's budget and connections, and AMD open-sources a LOT.

I think Raja and Intel can find a way to be a third player.

Also wouldn't be surprised if there ends up being a mega deal down the line where AMD sells the RTG to Intel...
 
They have Intel's budget and connections, and AMD open-sources a LOT.

I think Raja and Intel can find a way to be a third player.

Also wouldn't be surprised if there ends up being a mega deal down the line where AMD sells the RTG to Intel...

Yea, there's some serious "Wonder Twin powers, activate!" thing going on between Blue and Red!
 
They have Intel's budget and connections, and AMD open-sources a LOT.

I think Raja and Intel can find a way to be a third player.

Also wouldn't be surprised if there ends up being a mega deal down the line where AMD sells the RTG to Intel...

I have a feeling that is the eventual outcome, probably to the tune of AMD's long-term debt and some pocket change.
 
Yea, there's some serious "Wonder Twin powers, activate!" thing going on between Blue and Red!

I actually see this playing out in a few different ways.

One could be that Scott Herkelman uses his Nvidia connections and hires some former Nvidia engineers to help develop the next architecture for AMD and the RTG.

Another is that Raja develops a real Nvidia competitor using his knowledge and engineering, as well as leveraging every open-source AMD initiative available, thus creating the new architecture Shintai mentioned, but with all the fringe benefits that RTG enjoys now in DirectX 12 and Vulkan titles. This in essence would be a spiritual successor.

In this scenario, you end up five years down the line with a three-way graphics card battle, each going their own unique way, but in some ways AMD and Intel would both benefit from AMD's open-source structure, while Nvidia would be stuck with its in-house software and development.

Another way it could play out is that AMD fails with the RTG and sells it for straight-up cash to Intel, which then has all the intellectual property it needs and the finances to create a competitor, and you have a two-card race. AMD ends up financially benefiting and getting a much-needed cash infusion, which could go right towards research and development.

The great irony would be that if this did happen, then in focusing on Nvidia, Intel would end up enabling AMD to possibly overtake them in processors.

The third way I see it playing out is everybody fails. AMD has egg on its face, Intel has egg on its face, and Nvidia laughs its ass off, until they realize they have a monopoly and suddenly are getting hammered left and right, until they eventually break apart and cease being a single entity because of all the antitrust and anti-monopoly laws across the world.

When you factor in all the different scenarios, the only one that has potential benefits for all three is a three-card race. Any other outcome has the potential to royally fuck over one of the three companies.

Although when you consider it, the one in the best position, or at least the one who stands to lose the least, is AMD.
 
For once Shintai, I completely agree with just about everything you have been posting in this thread.

AMD are in the same place with their GPUs as they were with their CPUs 10 years ago. AMD cannot afford to keep putting band-aids on the infirm GCN architecture. They need a new architecture no later than next year, and not just yet another respin of GCN. If they just respin it again and don't overtake nVidia's current top-end card for less money, then AMD graphics will fail, and AMD do not have the money to claw their way back out of it. Bad management from AMD, as usual. They are the masters of having it all, then pissing on it.

Well, it is clear that you and Shintai have little to no knowledge of what GCN actually is. Don't you find it weird that every card is GCN? Have you ever stopped to think about how it is GCN? The only thing you infer is that the performance is crap, so it must be GCN.

https://www.amd.com/Documents/GCN_Architecture_whitepaper.pdf
 
I wonder why Raja didn't have a non-compete clause in his contract. Big fail for AMD.
 
 
Raja with Intel's budget will be interesting...

Just think of the keynotes he can give with all that budget!

But I like to think that Raja has had his move to Intel planned for a very long time. I saw it coming long before his "holiday" was announced.

Anandtech has it confirmed now... Untrustworthy man leaves AMD for Intel in a planned move that shocked no one, and will help Intel build a new GPU after failing to make AMD spin off Radeon GFX so that Intel could buy it. His next move will be to suck all the good talent out of AMD and into his new team at Intel.

AMD is really in the shit now.
 
I guess it's pretty obvious why Raja left AMD: RTG didn't have the budget to make Raja shine. It's going to be interesting to see what he can do at Intel... Intel has the money, and they have a superior process compared to GF.
 
I did some digging into what Navi is supposed to be. Obviously there isn't much out there.

What I did find was interesting but really in the end tells me nothing.

Ideally Navi will be 7 nm. Given the size of their GPUs in the past, I am wondering how they will do this if it is GCN.

What they are planning to make use of is the Infinity Fabric from Ryzen processors in order to stack multiple GCN units, the way Threadripper stacks cores.

However, do not quote me on it being multiple GCN units; that part is my assumption. Their main GPU is what they are shrinking and stacking.

Navi is in reality just a glorified Vega with a node shrink.

Infinity Fabric will be used on Vega 20, not Navi 10, and it will serve the exact same purpose as NVLink: a replacement for PCIe in HPC. The consumer versions won't have it at all.

Vega 20 is just Vega 10 with FP64 and GMI links and a node shrink.

4096 SPs for all chips.
 
Intel already got full access to NVidia's patents. All they need.
 
I'm not really up on the GPU side of things, so somebody help me out here. But given the resources at Intel, if they push hard, I suspect their dedicated GPU side could come up to speed pretty quickly, right? Nvidia has probably only been pushing just enough to stay above AMD, but they could fairly easily push quite a bit harder?

So in 3-5 years we'll have Intel vs Nvidia, and AMD left in the dust?
 
I'm not really up on the GPU side of things, so somebody help me out here. But given the resources at Intel, if they push hard, I suspect their dedicated GPU side could come up to speed pretty quickly, right? Nvidia has probably only been pushing just enough to stay above AMD, but they could fairly easily push quite a bit harder?

So in 3-5 years we'll have Intel vs Nvidia, and AMD left in the dust?

It is possible, but their strategy may not be that. It's already quite easy to scale Intel's current GPU upwards; they just never had intentions for it.

I think we'll get a mix of something like the Apple product they make, with entire SKU lines using decoupled IGPs. So the IGP section on the CPU die itself is gone, and EMIB is used for the interconnect.

Don't expect Titan killers, not even in 5 years. Think something along the lines of very efficient ~GTX 1060 mobile GPU solutions on leading-edge nodes with full game-ready driver support.
 
It is possible, but their strategy may not be that. It's already quite easy to scale Intel's current GPU upwards; they just never had intentions for it.

I think we'll get a mix of something like the Apple product they make, with entire SKU lines using decoupled IGPs. So the IGP section on the CPU die itself is gone, and EMIB is used for the interconnect.

Don't expect Titan killers, not even in 5 years. Think something along the lines of very efficient ~GTX 1060 mobile GPU solutions on leading-edge nodes with full game-ready driver support.

I'm thinking more "Intel smells the profit margins in AI and wants more of the pie." Even if they make dedicated discrete chips on cards, this will be Intel's real bread and butter.

The really big question here is: how much of a license did RTG sign with Intel? Do they have patent-license rights to heterogeneous computing and memory space? HBM? DP/2SP compute engine designs?

With NVIDIA's patents and RTG's patents, Intel has a hell of a portfolio to work from. And if Raja did give Intel such a sweetheart deal, he royally bent AMD over. With true 10nm coming online (and possibly 7nm or less by first release), Intel could give both companies a HUGE headache.

On the con side: Intel does have a history of investing/acquiring, then letting things die because they cost too much. The laundry list of Intel's failed ventures is quite long.
 
There are no patents shared between Intel and RTG. Intel got all the patents it needs from Nvidia... forever. RTG has nothing to offer.

And HBM isn't even something AMD designed; it's owned by Hynix, if anyone. And the rest is just fancy words for things everyone else already has or can do without patents.

AMD let RTG die. They went 100% CPU and dropped everything for GPUs back in 2012/2013. Expect Raja to pick off everyone worth their salt left at RTG and move them to Intel, if Apple and others haven't picked enough already.
 
There are no patents shared between Intel and RTG. Intel got all the patents it needs from Nvidia... forever.

And HBM isn't even something AMD designed; it's owned by Hynix, if anyone. And the rest is just fancy words for things everyone else already has or can do without patents.

And you know this how? It boils down to licensing agreements, which are usually internal. News of the deal just leaked Tuesday.

Here's the way I'm looking at it: while you may not like HBM and call it inferior, it's great for large linear accesses and MIMD ops, and that's very useful for AI training datasets. That is why NVIDIA uses it on their AI platforms. Obviously Intel has to have some type of agreement in place to receive HBM chip packages, or this whole deal from Tuesday is moot. Now, what are the terms of that deal?
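
As a back-of-the-envelope illustration of the bandwidth argument (the 4 GB batch size is a made-up example, and these are spec-sheet peaks, not measured throughput):

```python
# Lower bound on the time a memory-bound kernel needs to stream a batch:
# it can never beat batch_size / peak_bandwidth.
batch_gb = 4.0  # hypothetical working set
for name, bw_gbps in (("HBM2, Vega 64 (484 GB/s)", 484),
                      ("GDDR5X, GTX 1080 (320 GB/s)", 320)):
    print(f"{name}: {batch_gb / bw_gbps * 1000:.1f} ms per pass, minimum")

# HBM2, Vega 64 (484 GB/s): 8.3 ms per pass, minimum
# GDDR5X, GTX 1080 (320 GB/s): 12.5 ms per pass, minimum
```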

NVIDIA's patent license only extends up to this point; any new technologies are not covered by the agreement. And NVIDIA does not have a GOOD heterogeneous memory solution like RTG does.

3D gaming cards actually have thin profit margins, and Intel has never been about thin profit margins. That is why they dropped so many startup ventures; they milk every penny they can. The real money is in mining-specific and AI cards.

I'm looking at Tuesday's deal and thinking that the chip packaging probably adds about $100-$150 to the cost of the standalone processor (depending on the HBM speed and Vega chip specifics).

I think it's time I readjust my portfolio.
 
Think something along the lines of very efficient ~GTX 1060 mobile GPU solutions on leading-edge nodes with full game-ready driver support.

I could see a decent market for power-efficient OpenCL compute cards/modules.
 
And you know this how? It boils down to licensing agreements, which are usually internal. News of the deal just leaked Tuesday.

Here's the way I'm looking at it: while you may not like HBM and call it inferior, it's great for large linear accesses and MIMD ops, and that's very useful for AI training datasets. That is why NVIDIA uses it on their AI platforms. Obviously Intel has to have some type of agreement in place to receive HBM chip packages, or this whole deal from Tuesday is moot. Now, what are the terms of that deal?

NVIDIA's patent license only extends up to this point; any new technologies are not covered by the agreement. And NVIDIA does not have a GOOD heterogeneous memory solution like RTG does.

3D gaming cards actually have thin profit margins, and Intel has never been about thin profit margins. That is why they dropped so many startup ventures; they milk every penny they can. The real money is in mining-specific and AI cards.

I'm looking at Tuesday's deal and thinking that the chip packaging probably adds about $100-$150 to the cost of the standalone processor (depending on the HBM speed and Vega chip specifics).

I think it's time I readjust my portfolio.

You seem to forget all the know-how Intel got. And maybe you missed the last few years: gaming is high-margin now. What do you think Nvidia shovels money into? Not to mention the HPC cards Nvidia makes with GDDR. The fastest AI card for Pascal didn't use HBM. HBM adds nothing but ECC and size; everything else is just PR crap that only resides inside PowerPoint slides.

Intel already gets HBM chips, if you didn't know; they're used on some of its FPGAs. HBM is an industry standard, and AMD was last to the table with HBM2, over a year late compared to everyone else. All this nonsense gets you nowhere.
 
Intel already got full access to NVidia's patents. All they need.

No, they do not. You obviously haven't read what the lawyers drew up in the lawsuit settlement: they can't use anything from Nvidia's patents that has anything to do with AI processing. Second of all, they are not allowed to modify the designs without the licensing agreement being renewed. They can't use Nvidia tech the way they would need to in order to make gaming cards, or hell, even AI cards. They will have to come up with a new way to do things or get a licensing deal done with someone.
 
Damn, I remember thinking how crazy that sounded back then...but props and respect to you, not perfect but scarily accurate! You were right, I was wrong. I'm impressed you saw this coming that long ago. :)

The last few days have caught me off guard, as I haven't been as good at keeping up with the news, so thanks for the thread and all the useful info. Probably the best info/analysis thread on it I've found on the web so far; thank you.
All good. Some things that I wrote about at the time, 1.5 years ago, changed a bit over time as well.
 
Any thoughts on how badly you feel AMD wanted to keep Koduri? I ask because of the head-butting backdrop between him and Dr. Su that you've alluded to in the past. I also agree with your opinion that he's not been an optimal front man for AMD graphics. AMD graphics have turned into the sore spot of the company. So Raja's leave of absence, then departure to Intel, plays out more like AMD being OK with him leaving?
I think it was agreed upon by both parties. That is just my opinion. I do not have sources on that.

And how do you see AMD graphics playing out from here?
Nope, no thoughts on that at this moment.
 
Ok, my craziest theory at the moment is that Intel didn't plan this until Raja was shafted by AMD. They'd been looking for a graphics guy for a while, but there's not been a whole lot of really good qualified people available. The only reason I'm thinking it had to be a recent decision is that if Intel knew they were going in heavy on GPUs, they'd have scooped up all the ImgTech engineers when Apple screwed ImgTech, but they didn't. When they sidelined Raja I know he wasn't happy about it; I knew there were tensions 'tween him and Lisa Su, as you could just see it in their body language and clashing personalities. (NOTE: Having had more than a few conversations and drinks with Raja over the years has given me the impression that he is EXACTLY what you'd think of him as, a very soft-spoken and passionate man whose depth of knowledge is scary as hell, but he can simplify it down and loves explaining technology to people. He's just a big sweetheart, honest. It's all I've ever known him as. Met Lisa Su for a few hours once, and she was showing me exactly the way she wanted me to see her. With Raja you feel like you're just hanging out; with Lisa Su you sort of feel like you should be forcing a bit of an enthusiastic smile and nodding at what she says. TL/DR: Lisa is a suit, Raja is a t-shirt and jeans person.)

My question is: did AMD know what was going on, or did Intel pull a fast one under Lisa's nose since she was so eager to dominate Raja and RTG? In my theory they didn't, and are now in shock and lockdown, and I haven't been able to get any comments out of anyone besides that one official announcement... which is telling as hell to me.

Does this make any sense? Sorry if it doesn't; like I say, I'm rusty, but interested as hell in this one since I've been following/involved with it since about SIGGRAPH.

You can take the geek out of the community, but you just can't take the community out of the geek....MAN it feels great to cruise the boards again trying to figure out what is what! :D
 
Ok, my craziest theory at the moment is that Intel didn't plan this until Raja was shafted by AMD.
I think the same, and that leads me to ask a perhaps less popular question about his move (only for understanding's sake; I've nothing personal against Raja): what convinced Intel that Raja's management and product execution at AMD qualified him for an upgrade to their much larger budget and market opportunities? I mean, we all know his budget was limited at AMD, which may have constrained his execution, but:
  1. AMD's changing that with increased R&D spending, and
  2. bad products and execution can be born from large budgets as well
On the flip side, I've always felt a significant contributor to AMD's lackluster graphics has been Global Foundries' low-power-plus process (I hope someone corrects me if they feel I'm inaccurate here), and that may have been out of Raja's control. It's not meant to be pushed to high clock speeds, which is in part why we see high temps and power with AMD's graphics cards and CPUs when clocks are pushed. Had he been able to build GPUs on a more clock-friendly process, I think we'd be having a different conversation about AMD's GPUs, and that would in turn change my perception of Raja's execution at AMD.
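
For what it's worth, the rough physics behind that intuition: dynamic CMOS power scales with C·V²·f, and pushing clocks past a process's sweet spot usually means raising voltage too, so power grows much faster than frequency. A quick sketch (the 10% voltage bump is an assumption for illustration, not a GloFo figure):

```python
# Dynamic power ~ C * V^2 * f (normalized, so C drops out).
def relative_power(freq_scale: float, volt_scale: float) -> float:
    return volt_scale ** 2 * freq_scale

print(relative_power(1.2, 1.0))  # 1.20x power: +20% clock at the same voltage
print(relative_power(1.2, 1.1))  # ~1.45x power: +20% clock needing +10% voltage
```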
 