From ATI to AMD back to ATI? A Journey in Futility @ [H]

Too niche for AMD.

Probably; such an APU only makes sense for boutique and mITX builds. Still, they could create an entirely new niche with overpowered APUs, with HBM2 freeing them from system RAM bandwidth limitations.

Actually, now that I think about it, it makes sense: notebooks (ultrabooks especially) and NUCs. Even Intel throws its Crystal Well APU into these higher-margin markets.
 
That would be nice, if Intel had eDRAM on their regular APUs. But Intel figured such a design was too expensive to keep doing. Their eDRAM-powered Crystal Well APU was ridiculous: an entire second die devoted to eDRAM.

2 GB of HBM2 should serve any APU well with Vega's HBCC. It keeps costs low, while its effectiveness as a cache keeps performance high despite low system RAM bandwidth.


Hmm, eDRAM is in some of the U-model Kaby Lake IGPs (mobile), but the desktop IGPs are directly connected to the L3 cache, so Intel still gets more benefit than AMD does in its current architecture. Vega should catch up to that with the ROPs getting direct access to L2.

It's not where the ROPs connect that's important; what matters is that they have direct access to the cache. Right now, the way AMD's uarchs are set up, there can be a lot of cache thrashing if the code isn't written carefully. This is why you can't estimate the bandwidth savings for such a change: it's highly application- and programmer-specific.
 

Yes, but not even Intel is making a special Kaby Lake with a bigger IGP for that; they are just adding some eDRAM on a separate die in an MCM package.

If AMD can pull that off for a performance boost sure.

I am just saying they won't tape out a separate Raven Ridge with a big GPU section. AMD is all about cost containment.
 
I believe that eventually you'll buy an AMD SoC that is literally all on one die, even the DRAM. In that context it's no wonder they started cultivating HBM. Sure, it's not ready for this now, but it will be.
 
It'll never be as good as a CPU + GPU combo. What's the point?

An integrated GPU + HBM will probably not be much cheaper than an MXM GPU + GDDR. The CPU is a wash.

You get more flexibility and existing economies of scale with MXM designs.

Also, with external graphics becoming more affordable, why would you even want an APU?
 
A good, strong integrated graphics will always beat a half-assed MXM design. Truth is, these so-called hybrid builds are full of compatibility issues and failures to wake from sleep, and they deliver very poor battery life. Intel has gone the extra mile delivering very good iGPU upgrades over the last few generations. It is par for the course that AMD's APUs bring even better graphics performance per watt to the mobile market.

External GPUs are another beast altogether: they are amazing, but they require the user to stay plugged in, because gaming on battery is not realistic with those.
 
Shouldn't you have had "enough solid information to defend your position" when you originally stated your position or am I missing something obvious?
And I did when I made the original statement. I have not been keeping up with all the changes going on, but there has been a lot to cover.
 
I read that article when you originally posted it, though it didn't prompt me to sign up for the forum. Because I was local to ATI, in a high-end hardware design position, and vocal in that community, I have a soft spot for what was ATI. They were pretty good about wining and dining to flog product, btw.

AMD isn't spinning off the GPU division right now. Even with a new, decent core, the APU market (and that includes custom designs) is all that AMD really has going for it in terms of product differentiation. Sure, they'll score a few wins with Ryzen/Threadripper/Epyc, but I don't think that's ever where the daily bread of their sales is going to be; Intel has that market, pure and simple. It's mobile-ish stuff, with a SoC that includes a GPU, flogged to consumers on $500-600-or-less products where they make themselves the obvious better choice (if they can).

I'm surprised they are making a dash at compute with Vega; the fact that some of them might game OK... we'll see.

If they fail with Zen, if they can't make any inroads in servers, then sure, call it a day, slice up what's left, and potentially spin off the GPU division just to extract shareholder value from the rotting corpse. While I don't doubt what you hear is real, I really don't believe RTG can force the issue until Zen-based stuff flops, flop meaning widening losses over time. That isn't the vibe at the moment, but that can change in a heartbeat.

If RTG had a shot at a split, it was prior to Zen; now we're in the middle of product rollouts based on that design, and there's way too much happy-happy-joy-joy about it, even though I find expectations somewhat unrealistic. I mean, there's definitely potential, but how often in the past has AMD marketing translated a great design into sales?
Intel has the big $; AMD does not. For survival in the long term, AMD may need to be very inventive to stay around.

AMD does not have to sell outright; Intel and AMD can form a joint venture in which Intel's $ and AMD's intellectual assets are combined. I am not sure it would be gaming graphics that Intel would be too worried about (chump change at best), but more so HPC, particularly deep learning with fast FP16/FP8 calculations; Intel is weak there. That is the new frontier that is growing exponentially. AMD would have to have something that can address that area, and they do. While Nvidia is scrambling to keep their edge in this, AMD can be a very upsetting force in all of it.

Whatever deal AMD strikes up can leverage the potential server-making deal in China, or head it off if Intel has a good enough proposition.
 
I find it hard to believe AMD would give its latest and greatest architecture to Intel; Intel is already ahead of them in DL and HPC with Phi, too. I can see AMD giving away last-gen (GCN 1.3/1.4) to Intel, or for a very specific product.
 
Well, that's probably why it all fell through with Ryzen out.
 

You know for a fact the licensing deal is a no go?

I see any potential licensing deal as a natural progression, given that they already cross-license each other's x86/64 CPU IP. A strong secondary supplier of x64/GPU is rather important for both Nvidia and Intel; otherwise they are operating monopolies, which makes doing business a little more difficult than it is today.
 
I know for a fact it was not done at the time of denials.

Yeah, that statement has no wiggle room. You are confirming 100% that there is no signed licensing deal, but not that they are (for example, EXAMPLE) in the middle of pushing it through legal.
 
Hey Kyle, any updates or status? If not, an ETA on when you might have something more perhaps?
Thanks in advance..
Cheers
 
Just outed yourself. How are those shares going bro?
Waiting for AMD to drop a little after Q2 report before going all-in.
My statement has all the wiggle room it needs. Both of them, in fact. It seems too obvious to me that if Ryzen was a flop, it would have been done a quarter ago.
 
I think you are vindicated Kyle.

The engineering sample of this monstrosity has arrived on Sisoft and GFXbench. It's on the main page of videocardz.

Intel CPU + Radeon gfx9 (Vega architecture?)... some weird CU and 1720 SP counts (I might not be reading it right, or it's some rather strange configuration).

I don't believe it's on the same die, more like Intel's MCM approach.

Definitely for Apple, IMO. Only Apple would want a premium iGPU of such high performance.

RTG could enable this without conflicting with AMD's business. Basically, RTG sells Intel GPU chips, and Intel packages them next to its CPU dies as a custom SoC. There would be zero need for licensing at all; Intel becomes a customer of RTG chips.
 
Awww boy.. 694C:C0

intel CPU - AMD GPU.jpg
 

That doesn't make sense. That would be cramming a Polaris 10 into a CPU package with no memory bandwidth.

Apple has a much better choice: doing what they do today, putting AMD's GPUs on the motherboard along with GDDR memory to actually feed them.

Also, the rest of the info is nonsensical. It says it is a 1-core Intel CPU.
 
Who knows, maybe it is a cut-down Vega chip with HBM2 on the same package as the CPU. Wouldn't that be interesting?
 

Be more interesting if AMD did it.

I've been thinking about this and I'm calling bullshit. It doesn't make economic sense even for Apple.

If I'm wrong I'll own up to that later.

The only possibility that comes to mind is that this is an Intel design incorporating AMD IP, and that it has its own dedicated memory in some form, even if that's two channels of a quad-channel controller reserved for the GPU. Something like that, because feeding that many SPs requires more than what's in an Intel CPU at the moment.
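A back-of-envelope check of the "feeding that many SPs" point, using published peak-bandwidth figures: dual-channel DDR4-2400 (a typical Intel desktop CPU's memory subsystem), the RX 480's 256-bit GDDR5, and one HBM2 stack at the 800 MHz the leak mentions. The helper function is just illustrative arithmetic.

```python
# Back-of-envelope peak-bandwidth comparison using published figures.
# Peak GB/s = (bus width in bytes) * (transfers per second).

def bandwidth_gbs(bus_bits, transfers_per_sec):
    return bus_bits / 8 * transfers_per_sec / 1e9

ddr4  = bandwidth_gbs(2 * 64, 2400e6)  # dual-channel DDR4-2400
gddr5 = bandwidth_gbs(256, 8e9)        # RX 480: 256-bit GDDR5 @ 8 GT/s
hbm2  = bandwidth_gbs(1024, 1.6e9)     # one HBM2 stack @ 800 MHz DDR

print(f"DDR4 dual-channel: {ddr4:.1f} GB/s")   # 38.4 GB/s
print(f"GDDR5 256-bit:     {gddr5:.1f} GB/s")  # 256.0 GB/s
print(f"HBM2 single stack: {hbm2:.1f} GB/s")   # 204.8 GB/s
```

A CPU's dual-channel DDR4 is roughly an order of magnitude short of what a Polaris-class GPU is built to consume, and the CPU needs a share of it too; hence the guess that any such part would need its own dedicated memory.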
 

Sure, with a 1-core Intel CPU, 528 KB of cache, and 10.4 GB of RAM. ;)

Everything in the post is absurd, but hey, it says Intel and AMD in the same sentence, so: Unicorns are Real!!!
 

It's weird as hell, but we have no idea what design or drivers were used.
It could be artificially restricted, like pre-release GPU drivers.

Some of that RAM could be set aside as slower VRAM once the onboard memory is used up? Who knows... We need more details, but it's a weird oddity to pop up amid all the talk going on about this.
 
A TV or some sort of console might make sense with a shared memory pool. The current push with Metal 2 was GPU-driven graphics, so a fast CPU might not be required. An Intel CPU because of Apple.
 

It doesn't seem to be an iGPU on the CPU die.

More like an MCM. The GPU seems to be 1536 SP w/ 4 GB (HBM @ 800 MHz) plus Intel's HD 630, so the SP count is read as 1720.

For clarification, the L2 on Intel's HD 630 is 512 KB shared, while the GCN cache being reported is 16 KB; together that adds up to 528 KB.

A 1536 SP GCN part would be 24 CUs. The HD 630 reads as 23 CUs. Added up, it's 47 CUs.

Sisoft apparently adds up iGPU + dGPU if you run the GPU benchmark on a dual-GPU system.
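If Sisoft really is just summing the two devices' reported stats, the odd numbers fall out of simple arithmetic. (The 8-ALUs-per-Gen9-EU conversion is my assumption, and note that GT2 parts are usually listed with 24 EUs, not 23.)

```python
# If the benchmark simply sums the two devices' reported stats, the odd
# readings fall out of simple arithmetic. The 8-ALUs-per-Gen9-EU conversion
# is an assumption; GT2 parts are usually listed with 24 EUs, not 23.

GCN_SP_PER_CU = 64

vega_cus = 24
vega_sps = vega_cus * GCN_SP_PER_CU   # 1536 shaders on the Radeon side

hd630_eus  = 23                       # EU count as read from the leak
hd630_alus = hd630_eus * 8            # 184 "SP equivalents"

total_sps = vega_sps + hd630_alus     # 1536 + 184 = 1720, the reported SP count
total_cus = vega_cus + hd630_eus      # 24 + 23 = 47, the reported CU count

# Cache: HD 630's 512 KB shared L2 plus one 16 KB GCN cache block.
total_cache_kb = 512 + 16             # 528 KB, the reported cache size

print(total_sps, total_cus, total_cache_kb)  # 1720 47 528
```

So every "absurd" number in the listing is consistent with naive summing of two very different GPUs' stats.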
 
I have been reading HardOCP for almost 20 years and had, and still have, full confidence in Kyle's statement.

EDIT: This is sad :(
ba294 (n00bie, joined May 2, 2009, 1 message): "For some reason, couldn't find my old account here :)"
 
What about some type of car entertainment/navigation/control system?

Tesla and Ford both have their headquarters in Palo Alto (though of course it could just be some engineer's home city) and might want to up their game regarding the usability and features of their cars. Apple is known for creating intuitive GUI solutions and has quite a nice software ecosystem for embedded plugins (i.e., iCloud and AirPlay).
 
Ford uses Microsoft, Ford Sync.
 
So, would it make sense for Tesla to team up with Apple to improve their infotainment system? I don't own a Tesla, but as far as I have seen, their GUI was not as cutting-edge as their driving technology, so possibly they've got some improvements coming for the new Model Y?
 
My 2016 has it; could be a carryover.

I probably should have said they started transitioning to QNX in 2015. My 2015 Ford has the MS system but my 2016 F150 has the QNX one. Also remember that model years are not the same as calendar years :)
 