Intel's supposedly RTX 3070-level GPU, the Xe DG2, is running in its labs right now

Admittedly, I haven't been following Intel's discrete graphics very closely, but I thought the parent designs were heavy-hitting number crunchers/on-the-fly video encoders for companies like Amazon and Netflix; there's got to be some potential to squeeze gaming performance out of that kind of framework.
If you follow my link, that's their Iris Xe Max DG1: a discrete GPU for laptops (available to OEMs only, since they're laptop GPUs), not for number-crunching data centers. It barely outpaces an MX330, which is slower than the MX350 and the mobile RX 550. Claiming they'll go from that performance level up to a 3070... that's just not a typical jump. Obviously it's possible, since the 3070 exists and AMD managed better-than-3070 performance, but it's still a big leap. The DG1 is nearly as slow as an iGPU, almost a waste of a power budget to even install in a laptop and call discrete. You have to start somewhere, but claiming the next iteration will push 3070 performance seems lofty.

It reminds me of Raja and company trying to hype up AMD GPUs, only for the actual performance to be a complete letdown. I was happy to hear he was leaving AMD, and I've been wondering how long it would take before we started hearing outlandish claims at Intel. Considering it won't be out until around refresh time for Nvidia, I don't see these taking much market share from Nvidia or AMD in the enthusiast space. What I do see is Intel forcing... errr, recommending/discounting system integrators and OEMs into using its discrete GPUs in pre-builts to gain market share, which they'll need if they want any developers on board with optimizing for their cards.
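For some rough context on how big that leap would be, here's a back-of-the-envelope FP32 comparison. The 512 EU count and the DG2 clock are rumor/guesswork, the per-EU throughput assumes DG2 EUs look like Xe-LP EUs, and paper FLOPS don't compare cleanly across architectures, so treat this as a sketch only:

```python
# Back-of-the-envelope FP32 throughput, just to size up the rumored jump.
# All DG2 figures are rumors/assumptions, not confirmed specs.

def xe_tflops(eus, ghz, flops_per_eu_per_clock=16):
    """Xe-LP style EU: 8-wide FP32 SIMD * 2 ops (FMA) = 16 FLOPs/clock/EU (assumed)."""
    return eus * flops_per_eu_per_clock * ghz / 1000

dg1 = xe_tflops(96, 1.65)    # shipping Iris Xe Max DG1 (96 EUs, ~1.65 GHz)
dg2 = xe_tflops(512, 1.8)    # rumored top DG2 config; the clock is a guess

# RTX 3070 for reference: 5888 FP32 lanes * 2 ops * ~1.73 GHz boost.
# Ampere doubles FP32 per SM, so its paper FLOPS overstate gaming performance a bit.
rtx3070 = 5888 * 2 * 1.73 / 1000

print(f"DG1  ~{dg1:.1f} TFLOPS")
print(f"DG2  ~{dg2:.1f} TFLOPS (if the 512 EU rumor is right)")
print(f"3070 ~{rtx3070:.1f} TFLOPS (paper number, not apples to apples)")
```

Even granting the 512 EU rumor, that's roughly a 6x jump on paper and still short of the 3070's raw number, so the "3070-level" claim leans heavily on higher clocks and architectural gains.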
 
Admittedly, I haven't been following Intel's discrete graphics very closely, but I thought the parent designs were heavy-hitting number crunchers/on-the-fly video encoders for companies like Amazon and Netflix; there's got to be some potential to squeeze gaming performance out of that kind of framework.

I would disagree; they may have made an ASIC optimized specifically for those tasks because they saw a hole in the market they could fill. A competitive GPU takes much more complex silicon and a lot of software work. Intel has the budget and apparently the fabs to make it happen, but we'll have to see how they actually execute to know for sure. These rumors seem lofty, and the power targets don't make sense.
 
Maybe if you buy a 6800/6900 and underclock/undervolt it, you can get there :).

I would be interested to see how all the cards from this power-hungry generation do with a bit of undervolting. It should bring them closer to the peak of their efficiency curve.
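To illustrate the efficiency-curve point with a toy model (every number below is made up): dynamic power scales roughly with frequency times voltage squared, while game performance scales sub-linearly with clock, so backing off the top of the V/F curve tends to buy a lot of perf/W.

```python
# Toy model of why undervolting helps perf/W. All operating points are hypothetical.

base_clock, base_volt, base_power = 2500, 1.15, 300.0  # assumed stock point, ~300 W board power

points = [  # (core clock MHz, core voltage V)
    (2500, 1.15),
    (2300, 1.00),
    (2100, 0.90),
    (1900, 0.85),
]

for clock, volt in points:
    power = base_power * (clock / base_clock) * (volt / base_volt) ** 2  # P ~ f * V^2
    perf = (clock / base_clock) ** 0.8          # assume perf scales ~f^0.8 (sub-linear)
    eff = (perf / power) / (1.0 / base_power)   # perf/W relative to stock
    print(f"{clock} MHz @ {volt:.2f} V: ~{power:.0f} W, perf {perf:.2f}x, perf/W {eff:.2f}x")
```

Real cards obviously don't follow a clean formula, but the general shape matches why modest undervolts tend to give outsized efficiency gains on this generation.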
 
That's true. If I were them, though, and the performance is indeed what they say it is, pricing it $100-150 under AMD and Nvidia would make it sell like hotcakes. Could you imagine greater-than-2080 Ti performance (based on the 3070 rumors) for $350?! :eek:
I can imagine it. Problem is, it won't happen for another 5 to 7 years.
 
So they go from decent super-low-end performance to 3070 level in under a year? Buuuuuuuuuuuuuuuuuuuuuuuuuuuuullshiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiit.
 
I don't see it... The DG1 can barely outperform an MX330, and the DG2 is supposed to hit 3070 speeds? I just don't see that jump happening. You don't normally go from one of the slowest discrete laptop GPUs available to outperforming desktop GPUs in a single cycle.

https://www.tomshardware.com/amp/news/intel-iris-xe-max-dg1-gpu-performance-benchmark
Yeah, I read that and was like WTF, but after digging it seems the DG1 can be paired with anything from DDR4 to GDDR6 and run at a range of voltages and frequencies. The leaked system Tom's Hardware reported on appears to be running LPDDR4 in an Asus flipbook, so I'd guess it's configured for the lowest possible power draw to extend battery life. If anything, after looking at the specs and that unit's battery life, I'm actually impressed by how it turned out.
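Some rough math on why the memory pairing matters so much. The 128-bit bus and LPDDR4X-4266 match the shipping Iris Xe Max spec as I understand it; the GDDR6 config is hypothetical, just going off the claim above that the chip supports it.

```python
# Peak memory bandwidth comparison for the two DG1 memory pairings discussed above.

def bandwidth_gbs(bus_bits, data_rate_mtps):
    """Peak bandwidth in GB/s: (bus width in bits / 8) * transfer rate in GT/s."""
    return bus_bits / 8 * data_rate_mtps / 1000

lpddr4x = bandwidth_gbs(128, 4266)   # LPDDR4X-4266, the low-power laptop config
gddr6   = bandwidth_gbs(128, 14000)  # 14 Gbps GDDR6, hypothetical pairing

print(f"LPDDR4X: ~{lpddr4x:.0f} GB/s")
print(f"GDDR6:   ~{gddr6:.0f} GB/s ({gddr6 / lpddr4x:.1f}x)")
```

A roughly 3x bandwidth gap on its own goes a long way toward explaining why a battery-optimized LPDDR4 configuration isn't the best showcase for the silicon.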
 
I would disagree; they may have made an ASIC optimized specifically for those tasks because they saw a hole in the market they could fill. A competitive GPU takes much more complex silicon and a lot of software work.
IIRC the reason it evaporated was that it was pitched as a high-end encoder, a high-end machine-learning card, a high-end pixel-pusher, an all-around magic box that could do everything for everyone, but it didn't do anything particularly well except, I think, on-the-fly video (I'm really scratching my head on this and don't want to do any research). It wasn't going into production, and so (again, super fuzzy here) I think Amazon dropped their agreement and moved on.

Being that it's Intel, they have enough coin on hand to rush production like it was a game of Civ. It wouldn't surprise me if they owned some very good, very scalable graphics IP. But also being that it's Intel, they have enough cash on hand to keep paying themselves while doing a song and dance that over-promises and under-delivers for years to come.

And if the furniture at headquarters starts to get shaggy, they can always sell off another chunk of the company like they did with storage. I wonder what they'll hand off next. I hope it's the networking division; those guys are great, and they shouldn't have to bother with the rest of Intel.
 
Not what I am hearing...
So no Xe discrete card without buying an OEM system, and poor performance on top of that?
If anyone here had inside info, it would be you.
Was hoping to see at least a few retail Blue cards available with midrange performance to sweeten the price pool.
 