Discussion in 'AMD Flavor' started by Kyle_Bennett, May 27, 2016.
I wonder if this is an in for Intel to get their APU into the console market??
Unlikely. More to remove design restrictions on the IGP developments.
Intel already licenses from Imagination Technologies and NVidia. And the first one isn't in such good shape.
You can bet all you like but that isn't good evidence.
Right now? No one knows publicly. It's cheaper and more practical than any other option though. That much is clear.
Sure it does. It's faster and has lower latency than off-package system RAM, so it can be used as a bulk cache when the iGPU isn't busy with it.
8GB isn't much of a limit, especially for an iGPU. 2GB will probably be fine. No one is expecting to get good performance at anything other than 1080p out of an iGPU for a very, very long time. And what it offers can't be practically matched by anything else out there.
Right now, but the future won't always be like the past. A fast, decent-sized memory solution would solve that issue quite well. Not enough to wipe out the high-end GPU market, but definitely enough to wipe out what's left of the low end and take a giant chunk away from the mid range.
Guesswork and "what if"... so tiresome...look at reality, not PR FUD...*sigh*
Reality called and is annoyed at your post.
Your posts are absent of any form of evidence... nice own goal.
Uh, it doesn't sell like hotcakes. No one outside the HPC market buys them; the volume isn't high at all. Not like GPUs, where millions are sold each year. They're also nearly exclusively used in supercomputers and the like. It's a totally different market.
It's an HPC card, so...
Quite ironic that AMD has now split the graphics and CPU R&D teams that used to be more integrated (up to and including Zen).
HBM2 is already supposed to be cheaper than HBM1 just because of the increased memory density, BTW.
Your post is filled with red flags.
If HBM/HMC is so blindingly good, why aren't server CPUs fitted with it?
CPUs don't care that much about latency. That's what caches are for.
So now you want both HBM and DRAM? The costs keep going up and the benefits keep going down.
Heard about LPDDR4?
Now please, give me a citation on your cost figures.
HPC is a niche but high dollar value market. The volume isn't high.
Even more ironic that the graphics division runs on fumes. See the SEC filings about R&D being moved from graphics to CPU: -$30M graphics, +$40M CPU, for example.
Nothing there says the costs of HBM aren't going down at all.
And Hynix said last year that HBM2 would be cheaper than HBM because of the density. Even your own article mentions more efforts by Samsung to get the cost of HBM down, BTW.
You'd have to ask Intel; I can't speak for them.
WHAT?! CPUs care massively about latency. Caches aren't a cure-all for that issue. It's why they keep getting bigger and more complex with almost every new CPU arch.
DRAM for system RAM and HBM for an on-package cache is the only thing that makes sense if you want to talk about server CPUs. If you're trying to go back to talking about GPUs only, then yes, they'd only need HBM.
Will it provide TB/s of bandwidth? Doesn't look like it. It might be a nice replacement for GDDR5X, which is great for the mid range or maybe an affordable high-ish end GPU, but you wouldn't want to put it on die or on package with an iGPU.
You forgot the citation on cost.
However, Raja already gave his view on cost and why it wasn't used for Polaris, for example, and why it isn't coming to the mainstream anytime soon. And GP102 didn't get it either, for the same obvious reason.
And by the looks of it, I don't think you thought it through. Why do you think a 75W RX 460 got only 80GB/s, and that it's enough? Or why 320GB/s is enough for my 1080 at 180W? The P100 is 250W without NVLink and 720GB/s, mainly for compute. See the issue? You think thorium reactors for the home will be ready in time for your IGP dreams?
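For what it's worth, the bandwidth-per-watt spread across those three cards is easy to eyeball from the numbers quoted above. Caveat: these are total board power figures, not memory-subsystem power, so this is only a rough proxy:

```python
# Numbers as quoted in the post above: (peak bandwidth GB/s, board power W).
# Board power includes the GPU die, so this is a rough proxy, not a memory
# power measurement.
cards = {
    "RX 460":   (80, 75),
    "GTX 1080": (320, 180),
    "P100":     (720, 250),
}

for name, (bw, watts) in cards.items():
    print(f"{name}: {bw / watts:.2f} GB/s per board watt")
# RX 460: 1.07, GTX 1080: 1.78, P100: 2.88
```

The point being argued: the bigger the compute part, the more bandwidth it needs per watt it burns, which is where HBM's efficiency is supposed to matter.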
On a related note: I hope you chaps can trade pre-market, I just loaded up on AMD stock. Don't pump and dump me Kyle!
Well, if AMD gets the same figure NVidia got 5 years ago, for the same sort of deal... $1.5 billion... then AMD is essentially debt free.
Nvidia got something around $60M a quarter over that span. However, you have to see the deal two ways; it's not a freebie for Nvidia or AMD, though it is easy money. There are obligations in the deal, as well as the competitive issue that costs both Nvidia and AMD.
Sounds like a sweet deal.
Don't have to. Hynix mentioned multiple times in their demos that it's cheaper per GB than HBM1, since you can do the same with fewer stacks.
I gave a 1-3 year timeline for a reason.
Most of the power used in all of those cards goes to the GPU die, not the bus or memory. HBM is already known to be fairly power efficient for the bandwidth it provides, and it was a power savings for Fury/Fury X vs the GDDR5 versions. That will continue to be true in the future with HBM2 and HBM3.
So the answer is no, you have no idea about the real cost. And then there are the interposer and TSVs on top.
Now here's a clue about the timeline. It's called GDDR6 in 2018, something Samsung, for example, is jumping in on. Ask yourself: if cheap and plentiful HBM is around the corner, so cheap that IGPs will get it as well, why don't the companies' product stacks show this? And why do they struggle to make a future lower-cost version too?
HBM did essentially nothing for Fiji besides being a PR gimmick. But it did kill the cost structure completely.
WRONG. you can stop now.
Cheaper than HBM1, but not cheaper than GDDR-type memories. You have to look at a cost of $120 and more when you factor in the interposer, labor, and verification of the process for a full stack of HBM. So how much cheaper is HBM2? It could be one dollar less, or ten dollars less, but don't expect it to be 50% less...
The bus for GDDR is part of the die and is clocked at the memory speed, and the bus can take up to 15% of the die space. So yeah, the memory alone isn't the main contributor to the power consumption, but the bus for GDDR is.
GDDR5X/GDDR6 is almost on par in mW/Gbps/pin with HBM2, while GDDR5 was 22% and 43% worse than HBM1 and HBM2 respectively.
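To make the mW/Gbps/pin argument concrete, here is the shape of the math: total interface power scales with per-pin efficiency times per-pin data rate times pin count. The efficiency and rate figures below are made-up placeholders for illustration, not vendor data:

```python
def interface_power_w(mw_per_gbps_pin, gbps_per_pin, pins):
    """Rough memory-interface power in watts:
    efficiency (mW/Gbps/pin) * per-pin rate (Gbps) * pin count / 1000."""
    return mw_per_gbps_pin * gbps_per_pin * pins / 1000.0

# Hypothetical numbers: a narrow, fast GDDR-style bus vs a wide, slow
# HBM-style bus delivering broadly similar total bandwidth.
gddr = interface_power_w(mw_per_gbps_pin=7.0, gbps_per_pin=10.0, pins=256)
hbm = interface_power_w(mw_per_gbps_pin=6.0, gbps_per_pin=2.0, pins=1024)
print(f"GDDR-style: {gddr:.1f} W, HBM-style: {hbm:.1f} W")
```

Even with similar per-pin efficiency, the wide-and-slow bus can come out ahead on total watts because per-pin power tends to rise with signaling speed, which is the point both sides here are circling around.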
If you mean exact die and production costs, then no, I don't, but that isn't public info as far as I know. Also, the interposer/testing/etc. costs are all an 'of course' with HBM that go without saying.
14Gbps in 2018 is ho-hum. Affordable vs HBM, sure. But ho-hum. Samsung is also jumping in on HBM. Your own previously linked AnandTech article noted this and was quite clear they were going to be doing cheap(er) HBM.
That is nonsense and you know it. The problem with Fiji is that it's a GCN-based product released on a process it was barely practical for, since 20nm fell through.
Absolutely. But I never said it was. Then again, GDDR of any sort also won't be suitable as an on-package cache or frame buffer for an iGPU, so I'm not sure why you'd bring it up at all. As a memory solution for a mid-range/high-ish range dGPU, sure, it'd make more sense than HBM will, but that is a whole other ball of wax.
That isn't in dispute but that also doesn't disagree with what I said either.
But HBM2 has lots more pins. You probably won't ever see a 512-bit bus GDDR5X/GDDR6 product, much less one with a 1024-bit bus. Realistically, since GDDR5X and GDDR6 are both value-focused memory products, you probably won't see a video card with more than a 384-bit bus using them.
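The bus-width point is just arithmetic: peak bandwidth is bus width times per-pin data rate divided by 8. Using the ballpark per-pin rates being thrown around in this thread (10 Gbps GDDR5X, 14 Gbps GDDR6, ~2 Gbps HBM2 on a 1024-bit-per-stack bus):

```python
def peak_gbs(bus_bits, gbps_per_pin):
    """Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gbps) / 8."""
    return bus_bits * gbps_per_pin / 8

print(peak_gbs(384, 10))   # 384-bit GDDR5X @ 10 Gbps -> 480.0 GB/s
print(peak_gbs(384, 14))   # 384-bit GDDR6 @ 14 Gbps  -> 672.0 GB/s
print(peak_gbs(4096, 2))   # 4 HBM2 stacks (4096-bit) @ 2 Gbps -> 1024.0 GB/s
```

Which is why HBM holds the top end on raw bandwidth despite much slower pins: it simply has far more of them.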
Low-cost HBM still won't cover the warranty, manufacturing, and verification costs of implementation. So it will be more expensive than GDDR even at that point. And you just agreed to that in your line above, so why keep going at it?
HBM did nothing for last gen; GCN couldn't utilize all that extra bandwidth.
Irrelevant, since it's accounted for. Hence mW/Gbps/pin, something Hynix stopped talking about after GDDR5X was launched, for a good reason. Low-cost HBM will also reduce pin count and reduce speed.
Yes, for high end products. But certainly not for IGPs or other mainstream parts.
$9.14 a share and climbing!
Hehe, I wish I could ethically buy and sell tech stocks, but that would not be a good idea given the input and exposure we have to the tech market.
Well, if you get the information in a roundabout way instead of directly from one of the companies, hehe.
Those will get cheaper in time too as manufacturing methods mature. For thread context: we were talking primarily about using some sort of cache or buffer for an iGPU. Occasionally Shintai would randomly bring up other stuff that had nothing to do with that, but it was always the part about using HBM for an iGPU that I was trying to address. For some reason he thinks the future will be essentially like the past, and that since HBM hasn't yet scaled down in cost enough to be viable in a mid or low-end product, it never will. Which is ridiculous, but there you go.
It didn't have to. It was overkill, but that doesn't mean it got no benefit at all, especially at higher resolutions.
Higher-resolution performance had nothing to do with HBM; it was the increased shader count. Fiji was so bottlenecked in the front end that the increased shaders and bandwidth did it no good. As the resolution went up, the bottleneck shifted to the shaders, and that is what we saw. On top of that, AA and AF performance hurt Fiji more than Maxwell, which suggests the ROPs and TMUs were being bogged down.
Nope. Per pin is per pin. It's not total bus bandwidth, which is what you're trying to mix it up with, and what I was hinting at by bringing up that HBM will inherently always be much wider.
Vs HBM2, yes. Vs GDDR5X or GDDR6, no. Even your own chart shows low-cost HBM with a 512-bit bus. You'll likely never see that for GDDR5X or GDDR6.
Your slide says nothing about them not doing iGPUs.
You do know that GDDR has a low pin count and a high GB/s, right? That's why mW/Gbps/pin is the important factor. HBM isn't GDDR in a wide format.
It's not me with the IGP+HBM dreams.
Higher resolution by default means you have to move lots more data around. Yes, Fiji had plenty of issues, but more bandwidth is always better.
Look, it's simple: HBM, even low-cost HBM, will be more expensive than GDDR5X or GDDR6 for the same amount of bandwidth. Now, on an iGPU or low-end GPU, what will the percentage differential be? It will be higher, since those are lower-cost GPUs, and of course on an iGPU the cost-benefit ratio would be minimal. That is the problem.
As tech changes and the needs change, the price will drop because of further manufacturing efficiencies, but when will those happen? It's going to take a while to see those trickle down, and in the meantime you will have other memory technologies come into play too, just like we saw GDDR5X come out after HBM1 and be more cost effective.
Also, HBM/HBM2 are NEW and use an older node for the interposer, so they will be slightly more expensive at the moment than GDDR5/5X. The main reason is that GDDR5 is OLD: they produce so much of it that they've had time to fine-tune yields etc., resulting in lower pricing. HBM/HBM2 are far, far better in performance per watt, as has been stated many times over. GDDR5X is better than 5 at giving more raw speed at pretty much the same power use.
The simple truth is this: we are only seeing HBM/HBM2 on the high/highest-end stuff, i.e. the expensive parts. When producing a low-cost part, they would not hamper it with expensive memory such as HBM/HBM2/GDDR5X. Why? Because they need to make money and keep power in check; by saddling a low-budget product with pricey memory, they hurt the bottom line in terms of what they can make from it and sell it at.
You cannot directly compare how many things are sold with HBM/HBM2 vs GDDR5/5X and say "the HBM/HBM2 numbers are nothing but a PR gimmick". They have not been out nearly as long, so they will only account for a small, small part of a company's balance sheet until (and if) more and more of them are used in place of "standard" GDDR5/5X/6 or whatever. That time is not here yet. And if they were truly a "dead end", AMD/SK Hynix/Samsung/Micron/JEDEC would not be continuing to pursue optimized speeds/specs for HBM2-3 and beyond; they would have dropped it for something else. Pretty sure no high-tech company likes to chase multi-million-dollar "pranks".
HBM allowed Fiji to keep power and performance closer to the "profile" they intended for it, which it very much did in nearly every regard. The only downfall was the 4GB limitation; "low-cost HBM", HBM2, and HBM3 all jump ahead of this by being faster, offering more per stack, using less power, and so forth.
Many things can be said about AMD, but one thing that cannot is that they are not innovators: GDDR3/4/5 and HBM, among many other things, were all designed by, or had as a major contributor, ATi/AMD.
Anyway, HBM/HBM2 saves power by sitting on the interposer more or less directly "on the die"; the closer to the die, the better, so it wins over GDDR in this regard by many magnitudes. You need fewer chips for the required amount of GB/bandwidth, so again it wins there, and it wins in performance per watt and raw speed per pin as well. The only thing HBM/HBM2 currently "loses" on is cost, which can be justified depending on the selling point of the product. After all, we either buy them at the price they are, or we don't ^.^
GDDR3/4/5 are ample for the "budget" to mainstream performance level, as they are dirt cheap, can be plenty fast, and, if not run at high clocks, do not use a boatload of power.
GDDR5X uses more power and is considerably pricier, but faster (supposedly), so it relegates itself to "flagship"-level products where the cost and performance needs can be justified.
HBM/HBM2/HMC etc.: same thing, pricey but with a stupid amount of bandwidth, so really only usable in the highest-end stuff at the moment.
Because the cost becomes prohibitive (not as many are made/sold, nor need to be at this point), they do not take up a large % of a company's "invoice" data.
Exactly. GDDR5X instantly destroyed HBM1 and partly HBM2. GDDR6 looks to be the final nail for HBM2 outside the top bins. Then we can wait and see around 2020 with HBM3 and low-cost, lower-speed HBM2 variants. But again, why put it in an IGP?
It's clear gamers move up in graphics SKUs, not down. A faster IGP has no value as such, and nobody is willing to pay extra for it. You'd think people would have learned that over the last 5-6 years. And if the value were so great, we would see something like the eDRAM solution everywhere.
The interposer itself has to go; it's a fixed static cost. Something like Intel's EMIB can save on the cost there. Then there are the manufacturing and TSV issues. Not only do they add cost, but any failure = total loss: GPU, HBM, and interposer all out the window, nothing to save.
And it keeps coming back to the biggest issue with HBM: the cost structure. It's just never in favour of HBM.
Low pin count compared to HBM, yes. High GB/s compared to HBM, no. HBM isn't GDDR in a wide format, but it is an inherently wide bus. So much so that doing a per-pin bandwidth comparison can be more than a bit misleading.
Didn't say it was; you're the one who dreams the future is a place where nothing improves over time. I don't really know why you would believe that, but apparently that is the case.