Monstrous 500W Intel Xe MCM Flagship GPU Leaked In Internal Documents – 4 Xe Tiles Stacked Using Foveros 3D Packaging

erek

500 Watts? You've got to be kidding me, that's crazy.

"Then we have SDV or Software Development Vehicles which are also prototypes, but I would say much closer to the final mainstream product. These appear to use standard PCIe although it is not clear how Intel plans to manage that considering they are still drawing 500W from the wall. Maybe Intel plans to augment its GPUs in a separate enclosure or through some auxiliary PSU. The fact that there is a PCIe expansion box present in the table and the SDV all list "new design" seems to lend credence to this theory. The 500W variant also pulls 48V in total. This is absolutely insane as far as DC goes. To put this into context, most computer Uninterruptible Power Supply (UPS) units usually use a 48V input (and can output over 2000W of power).

If Intel succeeds in this absolutely crazy plan, things are going to get very very exciting for gamers in 2021. MCM is the future of computing as AMD has already proved and looks like Intel might actually beat them to the punch."


https://wccftech.com/monstrous-500w...-xe-tiles-stacked-using-foveros-3d-packaging/
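
For context on the 48V figure quoted above, here's a quick back-of-the-envelope check (my own numbers: the 500W comes from the leak, while the 0.01 ohm path resistance is just a placeholder, not a measured value). Current scales as I = P / V and resistive loss as I²R, so a 48V feed cuts the amperage by 4x and the wiring/connector losses by 16x compared to a 12V rail.

/* Rough sanity check on the 48V claim: current draw and resistive loss
 * for a hypothetical 500W board fed at 12V vs 48V.
 * Illustrative only -- the 0.01 ohm path resistance is a placeholder. */
#include <stdio.h>

int main(void) {
    const double power_w = 500.0;                /* leaked figure */
    const double volts[] = { 12.0, 48.0 };
    const double path_resistance_ohm = 0.01;     /* assumed cable/connector resistance */

    for (int i = 0; i < 2; i++) {
        double amps   = power_w / volts[i];                  /* I = P / V   */
        double loss_w = amps * amps * path_resistance_ohm;   /* P = I^2 * R */
        printf("%4.0f V: %5.1f A, ~%4.1f W lost in a %.2f ohm path\n",
               volts[i], amps, loss_w, path_resistance_ohm);
    }
    return 0;
}

At 48V the card only needs about 10A for 500W, versus roughly 42A at 12V, which is why 48V distribution shows up in data center and telecom gear (and in the UPS example the article mentions).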
 
If this is real, I'll give it about 90% certainty that it is NOT a gaming part, but is rather some sort of enterprise compute/machine learning part priced way outside where gaming would make sense, and probably without gaming optimized drivers.
 
If this is real, I'll give it about 90% certainty that it is NOT a gaming part, but is rather some sort of enterprise compute/machine learning part priced way outside where gaming would make sense, and probably without gaming optimized drivers.

Awww, I was hoping for an 800-1000W overclocked part.
 
Wasn't it already confirmed that Intel won't have a gaming dGPU for quite a while more since the first gen is scrapped?
 
A 500 watt GPU means that my 1000 watt PSU might actually see some use, given the death of SLI/crossfire.

That said, it's certainly an enterprise part especially with the 48 volt requirement. I would love to see some 500 watt GPUs trickle down to the enthusiast gaming space though.
 
Wasn't it already confirmed that Intel won't have a gaming dGPU for quite a while more since the first gen is scrapped?


First gen seems more of a disaster than Larrabee ever was!



Tom Forsyth said:
Why didn't Larrabee fail?
TomForsyth, 16 August 2016 (created 15 August 2016)
Every month or so, someone will ask me what happened to Larrabee and why it failed so badly. And I then try to explain to them that not only didn't it fail, it was a pretty huge success. And they are understandably very puzzled by this, because in the public consciousness Larrabee was like the Itanic and the SPU rolled into one, wasn't it? Well, not quite. So rather than explain it in person a whole bunch more times, I thought I should write it down.

This is not a history, and I'm going to skip a TON of details for brevity. One day I'll write the whole story down, because it's a pretty decent escapade with lots of fun characters. But not today. Today you just get the very start and the very end.

When I say "Larrabee" I mean all of Knights, all of MIC, all of Xeon Phi, all of the "Isle" cards - they're all exactly the same chip and the same people and the same software effort. Marketing seemed to dream up a new codeword every week, but there was only ever three chips:
  • Knights Ferry / Aubrey Isle / LRB1 - mostly a prototype, had some performance gotchas, but did work, and shipped to partners.
  • Knights Corner / Xeon Phi / LRB2 - the thing we actually shipped in bulk.
  • Knights Landing - the new version that is shipping any day now (mid 2016).
That's it. There were some other codenames I've forgotten over the years, but they're all of one of the above chips. Behind all the marketing smoke and mirrors there were only three chips ever made (so far), and only four planned in total (we had a thing called LRB3 planned between KNC and KNL for a while). All of them are "Larrabee", whether they do graphics or not.

When Larrabee was originally conceived back in about 2005, it was called "SMAC", and its original goals were, from most to least important:

1. Make the most powerful flops-per-watt machine for real-world workloads using a huge array of simple cores, on systems and boards that could be built into bazillo-core supercomputers.

2. Make it from x86 cores. That means memory coherency, store ordering, memory protection, real OSes, no ugly scratchpads, it runs legacy code, and so on. No funky DSPs or windowed register files or wacky programming models allowed. Do not build another Itanium or SPU!

3. Make it soon. That means keeping it simple.

4. Support the emerging GPGPU market with that same chip. Intel were absolutely not going to build a 150W PCIe card version of their embedded graphics chip (known as "Gen"), so we had to cover those programming models. As a bonus, run normal graphics well.

5. Add as little graphics-specific hardware as you can get away with.

That ordering is important - in terms of engineering and focus, Larrabee was never primarily a graphics card. If Intel had wanted a kick-ass graphics card, they already had a very good graphics team begging to be allowed to build a nice big fat hot discrete GPU - and the Gen architecture is such that they'd build a great one, too. But Intel management didn't want one, and still doesn't. But if we were going to build Larrabee anyway, they wanted us to cover that market as well.

Now remember this was around 2005 - just as GPGPU was starting to show that it could get interesting wins on real HPC workloads. But it was before the plethora of "kernel" style coding languages had spread to "normal" languages like C and FORTRAN (yes, big computing people still use FORTRAN, get over it). Intel was worried its existing Xeon CPU line just weren't cutting it against the emerging threat from things like GPGPU, the Sony CELL/SPU, Sun's Niagara, and other radical massively-parallel architectures. They needed something that was CPU-like to program, but GPU-like in number crunching power.

Over the years the external message got polluted and confused - sometimes intentionally - and I admit I played my own part in that. An awful lot of "highly speculative marketing projections" (i.e. bullshit) was issued as firm proclamations, and people talked about things that were 20 years off as if they were already in the chip/SW. It didn't help that marketing wanted to keep Larrabee (the graphics side) and MIC/Knights/Xeon Phi (the HPC side) separate in the public consciousness, even though they were the exact same chip on very nearly the exact same board. As I recall the only physical difference was that one of them didn't have a DVI connector soldered onto it!

Behind all that marketing, the design of Larrabee was of a CPU with a very wide SIMD unit, designed above all to be a real grown-up CPU - coherent caches, well-ordered memory rules, good memory protection, true multitasking, real threads, runs Linux/FreeBSD, etc. Larrabee, in the form of KNC, went on to become the fastest supercomputer in the world for a couple of years, and it's still making a ton of money for Intel in the HPC market that it was designed for, fighting very nicely against the GPUs and other custom architectures. Its successor, KNL, is just being released right now (mid 2016) and should do very nicely in that space too. Remember - KNC is literally the same chip as LRB2. It has texture samplers and a video out port sitting on the die. They don't test them or turn them on or expose them to software, but they're still there - it's still a graphics-capable part.

So how did we do on those original goals? Did we fail? Let's review:

  • 1. Make the most powerful flops-per-watt machine.
SUCCESS! Fastest supercomputer in the world, and powers a whole bunch of the others in the top 10. Big win, covered a very vulnerable market for Intel, made a lot of money and good press.

  • 2. Make it from x86 cores.
SUCCESS! The only compromise to full x86 compatibility was KNF/KNC didn't have SSE because it would have been too much work to build & validate those units (remember we started with the original Pentium chip). KNL adds SSE legacy instructions back in, and so really truly is a proper x86 core. In fact it's so x86 that x86 grew to include it - the new AVX512 instruction set is Larrabee's instruction set with some encoding changes and a few tweaks.

  • 3. Make it soon. That means keeping it simple.
NOT BAD. It's not quite as simple as slapping a bunch of Pentiums on a chip of course, but it wasn't infeasibly complex either. We did slip a year for various reasons, and then KNF's performance was badly hurt by a bunch of bugs, but these things happen. KNC was almost on time and turned out great.

  • 4. Support the emerging GPGPU market. As a bonus, run normal graphics well.
PRIMARY GOAL: VIRTUAL SUCCESS! It would have been a real success if it had ever shipped. Larrabee ran Compute Shaders and OpenCL very well - in many cases better (in flops/watt) than rival GPUs, and because it ran the other graphical bits of DirectX and OpenGL pretty well, if you were using graphics APIs mainly for compute, it was a compelling package. Unfortunately when the "we don't do graphics anymore" orders came down from on high, all that got thrown in the trash with the rest. It did also kickstart the development of a host of GPGPU-like programming models such as ISPC and CILK Plus, and those survive and are doing well.

BONUS GOAL: close, but no cigar. I'll talk about this more below.

  • 5. Add as little graphics-specific hardware as you can get away with.
MODERATE SUCCESS: The only dedicated graphics hardware was the texture units (and we took them almost totally from Gen, so we didn't have to reinvent that wheel). Eyeballing die photos, they took up about 10% of the area, which certainly isn't small, but isn't crazy-huge either. When they're not being used they power down, so they're not a power drain unless you're using them, in which case of course they're massively better than doing texture sampling with a core. They were such a small part that nobody bothered to make a new version of KNC without them - they still sit there today, visible on the die photograph as 8 things that look like cores but are slightly thinner. Truth be told, if KNC had been a properly graphics-focused part we would have had 16 of the things, but it wasn't so we had to make do with 8.


In total I make that three wins, one acceptable, and a loss. By that score Larrabee hardly "failed". We made Intel a bunch of money, we're very much one of the top dogs in the HPC and supercomputer market, and the "big cores" have adopted our instruction set.


So let's talk about the elephant in the room - graphics. Yes, at that we did fail. And we failed mainly for reasons of time and politics. And even then we didn't fail by nearly as much as people think. Because we were never allowed to ship it, people just saw a giant crater, but in fact Larrabee did run graphics, and it ran it surprisingly well. Larrabee emulated a fully DirectX11 and OpenGL4.x compliant graphics card - by which I mean it was a PCIe card, you plugged it into your machine, you plugged the monitor into the back, you installed the standard Windows driver, and... it was a graphics card. There was no other graphics cards in the system. It had the full DX11 feature set, and there were over 300 titles running perfectly - you download the game from Steam and they Just Work - they totally think it's a graphics card! But it's still actually running FreeBSD on that card, and under FreeBSD it's just running an x86 program called DirectXGfx (248 threads of it). And it shares a file system with the host and you can telnet into it and give it other work to do and steal cores from your own graphics system - it was mind-bending! And because it was software, it could evolve - Larrabee was the first fully DirectX11-compatible card Intel had, because unlike Gen we didn't have to make a new chip when Microsoft released a new spec. It was also the fastest graphics card Intel had - possibly still is. Of course that's a totally unfair comparison because Gen (the integrated Intel gfx processor) has far less power and area budget. But that should still tell you that Larrabee ran graphics at perfectly respectable speeds. I got very good at ~Dirt3 on Larrabee.

Of course, this was just the very first properly working chip (KNF had all sorts of problems, so KNC was the first shippable one) and the software was very young. No, it wasn't competitive with the fastest GPUs on the market at the time, unless you chose the workload very carefully (it was excellent at running Compute Shaders). If we'd had more time to tune the software, it would have got a lot closer. And the next rev of the chip would have closed the gap further. It would have been a very strong chip in the high-end visualization world, where tiny triangles, super-short lines and massive data sets are the main workloads - all things Larrabee was great at. But we never got the time or the political will to get there, and so the graphics side was very publicly cancelled. And then a year later it was very publicly cancelled again because they hadn't actually done it the first time - people kept working on it. And then a third time in secret a bit later, because they didn't want anybody to know people were still working on graphics on LRB.

So - am I sad Larrabee as a graphics device failed? Well, yes and no.

I have always been a graphics programmer. The whole "CPU architect" thing was a total accident along the way. As such, I am sad that we didn't get to show what KNC could do as a really amazing emulation of a GPU. But I am still hugely proud of what the team managed to do with KNC given the brief of first and foremost being a real highly-coherent legacy-compatible CPU running a grown-up OS.

Do I think we could have done more graphics with KNC? In terms of hardware, honestly not really. There's nothing I sit here and think "if only we'd done X, it all would have been saved". KNC was a hell of a chip - still is - I don't see how we could have squeezed much more out of the time and effort budgets we had - we already had some features that were kicked in the arse until they squeezed in under the wire (thanks George!). Having 16 texture samplers instead of just 8 would have helped, but probably not enough to prevent us getting cancelled. We could certainly have done with another year of tuning the software. We could have also done with a single longer-term software effort, rather than 4-5 different short-term ones that the random whims of upper management forced upon us (that's a whole long story all by itself). I could also wish for a graphics-capable version of KNL (i.e. add the texture units back in), but honestly just imagine what people could have done in the five years since KNC shipped if they had allowed normal people to access the existing texture units, and write their own graphics pipelines - it would have been amazing! So my biggest regret is that we were forbidden from exposing the goodness that already existed - that's the thing that really annoys me about the whole process.

Even then, it's hard for me to be too morose when so many of my concepts, instructions and architecture features are now part of the broader x86 ecosystem in the form of AVX512. I'm not sure any other single person has added so many instructions to the most popular instruction set in the world. It's a perfectly adequate legacy for an old poly-pusher! And now I'm doing that VR thing instead. Maybe it all worked out for the best.
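
To make the AVX512 point concrete (this is my own minimal example, not anything from Forsyth's post): the mask-register style of per-lane predication below is, as I understand it, one of the Larrabee-era features that carried over into mainstream x86 as AVX-512F. Any recent gcc/clang builds it with -mavx512f.

/* Minimal AVX-512F sketch: add a constant to every element of x that is
 * greater than a threshold, using a mask register for per-lane predication.
 * Build with e.g. gcc -O2 -mavx512f avx512_demo.c (filename is just illustrative). */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float x[16];
    for (int i = 0; i < 16; i++) x[i] = (float)i;      /* 0..15 */

    __m512 v      = _mm512_loadu_ps(x);
    __m512 thresh = _mm512_set1_ps(7.0f);
    __m512 bump   = _mm512_set1_ps(100.0f);

    /* k gets one bit per lane, set where x[i] > 7.0 */
    __mmask16 k = _mm512_cmp_ps_mask(v, thresh, _CMP_GT_OQ);

    /* add 100 only in the lanes selected by k; the other lanes pass through */
    v = _mm512_mask_add_ps(v, k, v, bump);
    _mm512_storeu_ps(x, v);

    for (int i = 0; i < 16; i++) printf("%.0f ", x[i]);
    printf("\n");   /* expected: 0..7 then 108..115 */
    return 0;
}

Nothing fancy, but it shows the "wide SIMD with per-lane masks" flavor the post describes, now living in ordinary Xeons and desktop parts.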
 
Given where Intel has been throwing a lot of its new product launches, that would probably be a monster of a DL/AI card. Makes me want one just to play with it.
The monster in me wants to create a massive AI machine and teach it to play EvE just to see what kind of thing it becomes.
 
First gen seems more of a disaster than Larrabee ever was!
If Larrabee was as good as the article says, why haven't they built upon it for the current graphics efforts?
 
If Larrabee was as good as the article says, why haven't they built upon it for the current graphics efforts?
They determined they didn't have a market, and the manufacturing process for the chips is different enough that their existing facilities are not up to the task. Long story short, the startup costs would exceed any profits they could hope to gain from it while they struggled to claw market share from nVidia and AMD.
 
If Larrabee was as good as the article says, why haven't they built upon it for the current graphics efforts?

Because if you actually read it, it was a success in that it met the design goals of a highly parallel x86 compute unit. Today's workstation and gaming GPUs don't need that x86 bloat, with gaming having never needed it. Larrabee was x86 first with graphics rendering almost as an afterthought, and not suitable to be built upon as a rendering first design.
 
They determined they didn't have a market, and the manufacturing process for the chips is different enough that their existing facilities are not up to the task. Long story short, the startup costs would exceed any profits they could hope to gain from it while they struggled to claw market share from nVidia and AMD.
Because if you actually read it, it was a success in that it met the design goals of a highly parallel x86 compute unit. Today's workstation and gaming GPUs don't need that x86 bloat, with gaming having never needed it. Larrabee was x86 first with graphics rendering almost as an afterthought, and not suitable to be built upon as a rendering first design.
Which would make it a failure by all practical and relevant metrics, and not some misunderstood and mistreated wonder child.
 
Which would make it a failure by all practical and relevant metrics, and not some misunderstood and mistreated wonder child.

As an AMD/nVidia gaming and workstation GPU competitor? Yes, it would, but it never really competed in that market, despite what Intel's marketing would have you believe. The real market it competed in was in supercomputers and HPC, and it did just fine there.
 
Which would make it a failure by all practical and relevant metrics, and not some misunderstood and mistreated wonder child.
Commercially, on its own it was a failure, but much of it got rolled into Xeon Phi and their integrated GPUs. The R&D was recouped, so while the individual product was a failure, the tech it developed was a success, just not where they had planned.
 
Still think it's funny so many thought Intel was going to release a gaming card anyone would actually want. They are at least 4 years out on having anything close to a gaming card that someone would want. Even then I expect their focus to remain on the server front and not gaming.
 
Still think it's funny so many thought Intel was going to release a gaming card anyone would actually want. They are at least 4 years out on having anything close to a gaming card that someone would want. Even then I expect their focus to remain on the server front and not gaming.
I think they will have a viable consumer product out by fall 2021, but they are going to hit the server market hard; nVidia probably won't like that.
 
I think they will have a viable consumer product out by fall 2021, but they are going to hit the server market hard; nVidia probably won't like that.
I would be shocked if they could come out with something to compete with Nvidia on any front in that timeframe.
 
I think Intel have an uphill battle on the server and HPC front given NVIDIA are deeply embedded in those industries at this point. Seems like Xeon is still the CPU of choice, though AMD is creeping up on them on that front. Does Intel have the talent or resources at this point in time to try and assert dominance in these fields? Their products all seem way too power hungry when efficiency is the name of the game now that the climate whackos are coming after HPC.
 
I think Intel have an uphill battle on the server and HPC front given NVIDIA are deeply embedded in those industries at this point. Seems like Xeon is still the CPU of choice, though AMD is creeping up on them on that front. Does Intel have the talent or resources at this point in time to try and assert dominance in these fields? Their products all seem way too power hungry when efficiency is the name of the game now that the climate whackos are coming after HPC.
Intel has already signed agreements with TSMC for their GPUs, so power efficiency shouldn't be an issue there, and over the last 2 years they have poached a lot of talent from nVidia and AMD alike. They also have a very large patent portfolio on the GPU side; technically Intel is the world's largest graphics provider, so portfolio-wise they should have something there. Overall they have all the components they need; the question is whether they can actually put something together. I have no doubt that they could provide a solid server/data center lineup, as the requirements are very strict and the support scope very defined. The consumer side, though, will be a different story, and I expect them to deliver there. The GPU market has exploded over the last few years, and by not having a solid product they are just leaving a lot of cash on the table, which their investors/management can't be too pleased with.
 
Wasn't it already confirmed that Intel won't have a gaming dGPU for quite a while more since the first gen is scrapped?
When did we learn that the first gen GPU was scrapped? I did a Google search just now and saw nothing about the GPU project being scrapped at all.
 
I think most of the focus is on improving integrated graphics. I doubt we are going to see a gaming card coming out of Intel any time soon. People are getting their hopes up on something that will take years before it bears any fruit.
 
Ok. I mean I have heard that rumor too and frankly it seems weird for them to use TSMC 7nm -- especially if it was originally supposed to be on Intel 7nm -- as moving to TSMC 7nm would essentially be moving to a much larger node. That rumor would have more teeth IMHO if it said moving to TSMC 5nm. I guess we will see...
 
Seems rumors are being crossed up. I thought Intel was working with Samsung fabs to produce 7nm chips...
 
Still think it's funny so many thought Intel was going to release a gaming card anyone would actually want. They are at least 4 years out on having anything close to a gaming card that someone would want.

Who thought?

What's a 'gaming card'?

Where is 'what's wanted' defined?

Even then I expect their focus to remain on the server front and not gaming.

Like AMD and to a significant extent, Nvidia?

I think Intel have an uphill battle on the server and HPC front given NVIDIA are deeply embedded in those industries at this point. Seems like Xeon is still the CPU of choice, though AMD is creeping up on them on that front. Does Intel have the talent or resources at this point in time to try and assert dominance in these fields?

Where AMD GPUs have mostly fallen short has been on the software side; AMD, as we all know, typically offers higher-performing compute products through much of their lineup versus Nvidia. While I'd hesitate to harp on the absolute quality of Intel's graphics drivers relative to AMD and Nvidia, despite how far Intel has come, I will point out that software and drivers have traditionally been a strong point for Intel, and further, Intel remains one of the largest contributors to the Linux kernel.

At least with respect to HPC, I'd bet that Intel will have the software support from kernel drivers up to major applications all sorted before they ship. I won't predict them ousting Nvidia here, but they do seem to have what it takes to give them a run for their money.

Their products all seem way too power hungry when efficiency is the name of the game now that the climate whackos are coming after HPC.

I'll throw this under 'who knows'. Intel and Nvidia have led in efficiency for much of the last decade, so we can assume that they have the know-how. We just don't know how well it's been applied here, let alone what application targets they are optimizing for.

I think most of the focus is on improving integrated graphics. I doubt we are going to see a gaming card coming out of Intel any time soon.

I don't disagree, and I think it's going to be a bit of a 'reverse' effect, similar to the benefits AMD gets with their APUs. For AMD, by having higher performance discrete parts with the same architecture, they give developers a target worth optimizing for. Intel, thus far, has really only been able to court support from developers whose products would run well on their more limited hardware.

Not only is Intel quite interested in growing their iGPU performance envelope, but by pushing this technology out onto discrete products, they'll be able to win crucial attention from developers.
People are getting their hopes up on something that will take years before it bears any fruit.

Honestly, I don't see those 'people' here. Despite Intel's technological prowess, they're still in the game to turn a profit, and even a decent mid-range GPU seems like a poor place to start, let alone aiming for the top, without tossing money out the door with no return on the investment.

Newsmongers will say otherwise for clicks, but I've yet to see anyone on this site believably claim that Intel is going to turn around and knock Nvidia off their pedestal on their first try.
 
I am most excited about this being an MCM package for a GPU. Supposedly the new nVidia Tesla GPUs will be MCM based as well, though that was unconfirmed last I checked, so this is great. Hopefully Intel will start moving that way for its CPUs in the near future too.
 
When did we learn that first gen GPU was scrapped? I did a Google search when I posted just now and see nothing about the GPU project being scrapped at all.
DG1 supposedly won't be released as a consumer product:

This makes it believable:
 
If this is real, I'll give it about 90% certainty that it is NOT a gaming part, but is rather some sort of enterprise compute/machine learning part priced way outside where gaming would make sense, and probably without gaming optimized drivers.

I would say 100% since the details say 400W+ parts only run on 48V, making them data center only parts.
 