Intel introduces new high-performance graphics brand: Arc

If [H]ardOCP.com comes back with only one review, this would be the card review we would all read.
Good, honest, real-world reviews are why I started coming here.
I think Kyle had inside access at Intel for a while, working with them specifically in this department.
Would be great if he had the first review.
 
Intel’s Arc is going to hurt their competitors’ ability to produce GPUs. Great, so how does this benefit gamers, exactly?

https://www.tomshardware.com/news/r...-intels-outsourcing-gpu-manufacturing-to-tsmc

“Of course, opting for TSMC also means there's less capacity for its rivals to produce GPUs and CPUs on the best manufacturing nodes available, as well. So moving GPU production to TSMC means Intel will be able to produce more devices in a silicon-hungry market while simultaneously assuring its competition can't make as many chips as they could sell. That's one way to beat the competition.”
 
Intel’s Arc is going to hurt their competitors’ ability to produce GPUs. Great, so how does this benefit gamers, exactly?

https://www.tomshardware.com/news/r...-intels-outsourcing-gpu-manufacturing-to-tsmc

“Of course, opting for TSMC also means there's less capacity for its rivals to produce GPUs and CPUs on the best manufacturing nodes available, as well. So moving GPU production to TSMC means Intel will be able to produce more devices in a silicon-hungry market while simultaneously assuring its competition can't make as many chips as they could sell. That's one way to beat the competition.”
No one is entitled to any wafers. You get what you book and pay for. Intel is competing for wafers just like everyone else. I doubt this hurts AMD or NV at all. Just more reeeeeeee if you ask me. YMMV.
 
No one is entitled to any wafers. You get what you book and pay for. Intel is competing for wafers just like everyone else. I doubt this hurts AMD or NV at all. Just more reeeeeeee if you ask me. YMMV.

What I mean is, it would be great if Intel could use their existing fabs to make GPUs, since there's no shortage of Intel CPUs. But using TSMC's 6nm isn't going to help at all with chip shortages.
 
Well, the cards must be in production by now, so I assume Intel has had TSMC making wafers for them for quite a while.
 
What I mean is, it would be great if Intel could use their existing fabs to make GPUs, since there's no shortage of Intel CPUs. But using TSMC's 6nm isn't going to help at all with chip shortages.
Intel is a few more years out from being able to manufacture competitive GPUs on a competitive process.
 
Intel’s Arc is going to hurt their competitors’ ability to produce GPUs. Great, so how does this benefit gamers, exactly?

https://www.tomshardware.com/news/r...-intels-outsourcing-gpu-manufacturing-to-tsmc

“Of course, opting for TSMC also means there's less capacity for its rivals to produce GPUs and CPUs on the best manufacturing nodes available, as well. So moving GPU production to TSMC means Intel will be able to produce more devices in a silicon-hungry market while simultaneously assuring its competition can't make as many chips as they could sell. That's one way to beat the competition.”
Not really. Intel purchases almost as many wafers from TSMC as AMD does, and has for a long time, so it doesn't really change any supply constraints. Intel 7 isn't really optimized for GPUs; Intel 5 is supposedly going to be good for them, same with Intel 20A, scheduled for 2024.
 
They were first to tape out on 6nm, and they’ve put DG1 into production, even as a known non-product. I think people are underestimating how much of a lead Intel’s got on this.
Intel has a problem with capacity on leading edge nodes. Didn't buy enough EUV equipment for later on.
 
They were first to tape out on 6nm, and they’ve put DG1 into production, even as a known non-product. I think people are underestimating how much of a lead Intel’s got on this.
DG1 was a solid proof of concept: it showed that their designs work, but their process does not. It was too late to make the required tooling changes for 7nm, but they could still integrate those into 5 onwards instead.

The DG1 worked fine as an OEM-only launch for budget builds that need something beefier than an iGPU for office productivity but don't need a Quadro.

It let them get some real practice and helped them fix issues that were holding up Ponte Vecchio on both the hardware and software side of things. Ponte Vecchio is supposedly killing it, by the way; it was edging out Nvidia's best based on the engineering-sample data leaks, and I can only imagine it's gotten better since then. But that does mean that Intel's best is at least a gen behind in terms of raw performance.
 
Intel has a problem with capacity on leading edge nodes. Didn't buy enough EUV equipment for later on.
Not sure how much of that is by accident or by design. The Intel 10 and Intel 7 nodes are going to be very short-lived, as both are being replaced by Intel 4 in late 2023, though that was supposed to happen in Q3 2022 when they first planned things out.

The huge delay on 10nm has compressed all their node releases, and nodes are being phased out seemingly as fast as they are brought in.
 
Intel has a problem with capacity on leading edge nodes. Didn't buy enough EUV equipment for later on.

Yes, they're technically in HVM at 7nm, but with poor yield, while TSMC is two gens ahead at 3nm HVM. Their copy-exact philosophy just doesn't work in today's brutal semiconductor manufacturing world: the R&D fab sends out a process, and the HVM fabs have to follow it and can't make their own improvements the way TSMC's fabs do. It's one of many corporate-culture reasons Intel struggles to keep up with TSMC.

Not sure how much of that is by accident or by design. The Intel 10 and Intel 7 nodes are going to be very short-lived, as both are being replaced by Intel 4 in late 2023, though that was supposed to happen in Q3 2022 when they first planned things out.

The huge delay on 10nm has compressed all their node releases, and nodes are being phased out seemingly as fast as they are brought in.

It doesn’t work that way. It’s like saying you struggled with Algebra so you’re going to skip it and go to Calculus. Global Foundries tried this and it was a failure.
 
It doesn’t work that way. It’s like saying you struggled with Algebra so you’re going to skip it and go to Calculus. Global Foundries tried this and it was a failure.
Which is why Intel paid IBM to take the test for them: IBM designed the Intel 4 and 20A nodes.
This is also why IBM had to sue Global Foundries a while back to formally terminate their agreements, so they could officially partner with Intel on their process nodes.

As for TSMC 3, it's looking like it will be delayed, which puts Intel's 20A process out ahead of it, and 20A has been showing better density and power than TSMC 3. That's why both Intel and TSMC are confident in Intel's ability to take the “lead” in 2025, and both have indicated as much to their respective investors.
 
Which is why Intel paid IBM to take the test for them: IBM designed the Intel 4 and 20A nodes.
This is also why IBM had to sue Global Foundries a while back to formally terminate their agreements, so they could officially partner with Intel on their process nodes.

As for TSMC 3, it's looking like it will be delayed, which puts Intel's 20A process out ahead of it, and 20A has been showing better density and power than TSMC 3. That's why both Intel and TSMC are confident in Intel's ability to take the “lead” in 2025, and both have indicated as much to their respective investors.

I hope it's true, because we need Intel to be competitive, and having US-based chipmaking would be a great thing.

I'm not sure teaming with IBM is a big help. Lots of smart people there, but they're not experts at actually doing HVM at 3 or 6nm. A lot of the process knowledge is in the equipment makers like ASML, TEL, AMAT, etc., and IBM is not as important as they used to be since they don't buy anything; TSMC gets the most attention now.
 
Intel’s Arc is going to hurt their competitors’ ability to produce GPUs. Great, so how does this benefit gamers, exactly?

https://www.tomshardware.com/news/r...-intels-outsourcing-gpu-manufacturing-to-tsmc

“Of course, opting for TSMC also means there's less capacity for its rivals to produce GPUs and CPUs on the best manufacturing nodes available, as well. So moving GPU production to TSMC means Intel will be able to produce more devices in a silicon-hungry market while simultaneously assuring its competition can't make as many chips as they could sell. That's one way to beat the competition.”
Nobody else is making products on N6, as far as I'm aware. 6nm wasn't even on TSMC's roadmap until a relatively short time ago. Everybody is probably competing for 3nm now, as 5nm must already be all booked up at this point in the game.
 
I hope it's true, because we need Intel to be competitive, and having US-based chipmaking would be a great thing.

I'm not sure teaming with IBM is a big help. Lots of smart people there, but they're not experts at actually doing HVM at 3 or 6nm. A lot of the process knowledge is in the equipment makers like ASML, TEL, AMAT, etc., and IBM is not as important as they used to be since they don't buy anything; TSMC gets the most attention now.
Yeah so far it’s looking good.
https://www.forbes.com/sites/linley...-could-revive-intel-fab-tech/?sh=580cd4b84426

https://www.hardwaretimes.com/intel...pyc-sales-3nm-2nm-on-track-for-2023-2024/amp/

“there are no delays affecting Intel’s upcoming 4nm, 3nm, 2nm, or 1.8nm nodes. The 4nm node which will power Meteor Lake (and Arrow Lake?) has already been taped out and should enter mass production later this year. The 3nm node will enter mass production in the second half of 2023, followed by the 20A/2nm node in 2024.”
 
Nobody else is making products on N6, as far as I'm aware. 6nm wasn't even on TSMC's roadmap until a relatively short time ago. Everybody is probably competing for 3nm now, as 5nm must already be all booked up at this point in the game.
Intel and Apple basically booked all of the 5nm fab time between them.
6nm is what TSMC calls its 7++ process, but they have only finished the upgrades at half of their 7nm fabs.

TSMC has already announced they've had to delay their 3nm process, which has forced Apple to make some changes to the next iPhone, as it will now ship on 4nm (5++) instead. 3nm isn't expected until mid-to-late 2023.
 
Yeah so far it’s looking good.
https://www.forbes.com/sites/linley...-could-revive-intel-fab-tech/?sh=580cd4b84426

https://www.hardwaretimes.com/intel...pyc-sales-3nm-2nm-on-track-for-2023-2024/amp/

“there are no delays affecting Intel’s upcoming 4nm, 3nm, 2nm, or 1.8nm nodes. The 4nm node which will power Meteor Lake (and Arrow Lake?) has already been taped out and should enter mass production later this year. The 3nm node will enter mass production in the second half of 2023, followed by the 20A/2nm node in 2024.”
Meteor Lake is going to be on Intel 4, which is Intel's 7nm process. Intel 3 is the optimization step of Intel 4, meaning it is effectively "7nm+." The Hardware Times article is mistaking Intel's node names for the process "size."

The Forbes article also falsely claims Intel is "a generation behind AMD," when that isn't true anymore. This is why Intel came up with these names in the first place: Intel 4 means the transistor density of their 7nm is comparable to the competition's 4nm, according to Intel. There's no reason to believe that isn't true, looking at TSMC's transistor densities compared to Intel's up to this point.
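
For a rough feel of the density argument, here's a minimal sketch comparing commonly cited peak-density figures. The numbers are approximate, library-dependent estimates (treat them as assumptions rather than spec-sheet facts), and shipping products land well below these theoretical peaks; the point is only that the marketing names stopped tracking the actual densities.

```python
# Approximate peak transistor densities (high-density logic cells, MTr/mm^2).
# Figures are commonly cited estimates, not official cross-vendor specs;
# real product densities are considerably lower than these peaks.
peak_density = {
    "Intel 7 (10nm-class)": 100.8,  # Intel's published 10nm figure
    "TSMC N7":               91.2,  # widely cited estimate
    "TSMC N5":              171.3,  # widely cited estimate
}

base = peak_density["TSMC N7"]
for node, mtr in peak_density.items():
    print(f"{node:22s} {mtr:6.1f} MTr/mm^2  ({mtr / base:.2f}x TSMC N7)")
```

Despite the "10nm" label, Intel 7's peak density sits slightly above TSMC's "7nm," which is exactly why Intel realigned its node names.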
 
Meteor Lake is going to be on Intel 4, which is Intel's 7nm process. Intel 3 is the optimization step of Intel 4, meaning it is effectively "7nm+." The Hardware Times article is mistaking Intel's node names for the process "size."

The Forbes article also falsely claims Intel is "a generation behind AMD," when that isn't true anymore. This is why Intel came up with these names in the first place: Intel 4 means the transistor density of their 7nm is comparable to the competition's 4nm, according to Intel. There's no reason to believe that isn't true, looking at TSMC's transistor densities compared to Intel's up to this point.
And??? TSMC's 7nm isn't 7nm, their 6 isn't 6, and their 5 isn't 5. Nobody's node names have actually matched the feature sizes since 28nm, and even then it was debatable.
 
And??? TSMC's 7nm isn't 7nm, their 6 isn't 6, and their 5 isn't 5. Nobody's node names have actually matched the feature sizes since 28nm, and even then it was debatable.
Notice I specifically pointed out transistor density, which is all that matters. I also put "size" in quotes.
 
Notice I specifically pointed out transistor density, which is all that matters. I also put "size" in quotes.
I interpreted the quotation marks differently; thanks for the clarification.

But they are sort of behind: Intel has those nodes in the works, and they are working, but they aren't producing anything publicly, while TSMC has things of that size or similar in production. If anything, the fact that they are now only technically a year behind is somewhat remarkable, given how much their 10nm struggles set them back. But TSMC announcing delays, and Samsung's relative silence, make me think they are all pretty much caught up to one another.
 
Hopefully Intel prices their top of the line in the $699 range again, FORCING AMD and NV to come back out of their drug-induced pricing orbit.
 
Hopefully Intel prices their top of the line in the $699 range again, FORCING AMD and NV to come back out of their drug-induced pricing orbit.
The majority of the pricing bloom we are in is caused by a combination of tariffs, shipping, and the AIBs. AMD and Nvidia have only adjusted their pricing to match the increases from TSMC and Samsung, so their margins are relatively unchanged (AMD has increased theirs marginally, but only to parity with Nvidia). AIBs have increased their margins a good 15% on top of this, shipping rates have more than doubled due to lots of different factors, and then you have the US 25% tariff on top of all that. So AMD and Nvidia are responsible for maybe 3-5% of the price increase of GPUs at market, but the other factors have added almost 50% above MSRP before you even get to the retailers adding their margin, which is generally another 30%. So boom, here you sit with an 80% markup over MSRP just to get something on a store shelf.

If anything, Intel will be able to use the fact that their manufacturing facilities exist outside of China to avoid the 25% tariffs, which will help greatly. This is why I think they are going to get the bulk of these to OEMs first: doing so lets them greatly undercut AMD and Nvidia on pricing to those parties, and the retail parts will just be what they can manage with their first-party board manufacturing. Otherwise, they would have to enter an agreement with an AIB or two, at which point they are stuck assembling the cards in China again and eating that 25%. Dodging that will likely be key to their pricing this time around, as it gives them a large amount of competitive flexibility that the others just can't currently get.
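
To make that chain of markups concrete, here's a minimal back-of-the-envelope sketch that stacks the stages described above. The percentages are the ones given in the post where stated; the 12% shipping share and the choice to compound the stages multiplicatively (rather than just adding them) are illustrative assumptions, which is why it lands somewhat above the rough 80% figure.

```python
# Back-of-the-envelope markup stack for a GPU's street price, using the
# percentages mentioned above. The 12% shipping share is a placeholder
# (the post only says shipping "more than doubled"), and compounding the
# stages multiplicatively is an assumption, not how the post adds them up.
msrp = 1.00  # normalized MSRP

stages = {
    "AMD/Nvidia wafer-cost pass-through": 0.04,  # "maybe 3-5%"
    "AIB margin increase":                0.15,  # "a good 15% on top"
    "shipping increase":                  0.12,  # placeholder assumption
    "US import tariff":                   0.25,  # the 25% tariff
    "retailer margin":                    0.30,  # "generally another 30%"
}

price = msrp
for stage, pct in stages.items():
    price *= 1 + pct
    print(f"{stage:36s} +{pct:4.0%} -> {price:.2f}x MSRP")

print(f"Total markup over MSRP: {price - 1:.0%}")
```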
 
Even ignoring drivers, I assumed this would be delayed. Aren't these supposed to be made by TSMC? I'm not seeing anything GPU/CPU-related arriving on time for the next year.
 
Since Raja Koduri is working with Intel, does Arc have AMD's RDNA architecture as a base?

No, but it is chiplet-based. All three have chiplets either in production or planned (Nvidia won't move to chiplets until Hopper). They have their own version of DLSS and will be using FSR, so even if their best at launch is mid-tier, image quality should be good.

In the previous video, if the rumor's true, they're getting better performance through Vulkan than through other APIs, so they know they have room for improvement in their drivers.
 
It's also been added to the Adobe software suite. This is kinda dope; it could give Intel some oomph in the prosumer space.
It's also fully open source, and a handful of people are working on getting it support for AMD CPUs.
I hope XeSS will also be open source and available for use on other GPUs as well.

The reveal video is probably for entry-level cards only.
 
I hope XeSS will also be open source and available for use on other GPUs as well.

The reveal video is probably for entry-level cards only.

It isn't open source.
https://www.intel.com/content/www/u...al-technology/arc-discrete-graphics/xess.html

 