i9-13900K benchmark leak?

Like when DDR4 was released with a base spec of 2133. DDR5 won't be a thing until we see 8000+ with better CAS latency. The base DDR5 spec is 4800 with a CAS of 40. That is poop.
I do kind of think CAS latency is going to shit, permanently, unless you are willing to pay ungodly sums of money to get something in the 30s. Even then the performance gains will be negligible. The focus will be on frequency gains; it seemingly already has been like that on DDR4 for a while now. I imagine DDR6 will have a CAS latency of 80 or something... lol
 
I noticed the higher speed DDR5 sticks actually have lower timings. Normally they go up with speed.
 
I think it's a symptom of the quality of memory required to sustain those frequencies. We will probably hit a point where everyone and their cousin is manufacturing DDR5, and then the latencies will be all over the place for a while until the process matures. I think we saw stuff like this with high-end DDR4 from Samsung and Micron (with all the cheapo brands offering similar frequencies at higher latencies).
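For a rough feel of how the frequency and CAS numbers trade off, first-word latency in nanoseconds is just CL divided by the memory clock, i.e. CL × 2000 / data rate. A minimal sketch using the kits mentioned in this thread (the numbers are illustrative, not from any leak):

```cpp
// First-word latency: CL cycles at the memory clock, which is half the data rate.
// latency_ns = CL / (data_rate_MTs / 2) * 1000 = CL * 2000 / data_rate_MTs
#include <cstdio>

static double first_word_latency_ns(double cl, double data_rate_mts) {
    return cl * 2000.0 / data_rate_mts;
}

int main() {
    std::printf("DDR4-3200 CL14: %5.2f ns\n", first_word_latency_ns(14, 3200)); // ~8.75 ns
    std::printf("DDR5-4800 CL40: %5.2f ns\n", first_word_latency_ns(40, 4800)); // ~16.67 ns
    std::printf("DDR5-6400 CL32: %5.2f ns\n", first_word_latency_ns(32, 6400)); // ~10.00 ns
    return 0;
}
```

So base-spec DDR5-4800 CL40 is roughly twice the absolute latency of good DDR4, while a DDR5-6400 CL32 kit gets back within striking distance of it and brings far more bandwidth, which is why the faster kits with lower timings are the ones that look interesting.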
 
Honestly, I have never been a fan of the NV/Intel graphics switching. It has caused all manner of issues and plenty of hardware acceleration headaches. It would be nice if you could pick the video card that would be primary for everything or just disable the iGPU entirely, but... That would be asking for too much.

Now, I did have an AMD laptop back in the day that paired a discrete GPU with a decent iGPU that would CrossFire. The laptop itself was sort of anemic, but the graphics capability was exceptional if the title supported CrossFire or you could force the 3D acceleration to use it (there was a way to designate the exe / launch file to default to the CrossFire profile).

But since multiple GPU acceleration has seemingly disappeared... I would prefer one damn GPU that worked well instead of switching between multiple ones.

Question: isn't DX12 supposed to support leveraging multiple GPUs for extra acceleration? I was under the impression the cards didn't even have to be the same, and I seem to recall an Ashes of the Singularity benchmark some years ago showing that it was possible... I guess no one really supports that part of DX12, and that is part of the problem.
You can disable the Intel GPU and go exclusively Nvidia. It destroys battery life, but you can do it; it is in the Windows settings.
DX12 does allow for leveraging multiple GPUs for acceleration, but the fun thing about low-level coding is that you have to do all that work manually. It is not a small job; memory management alone would require a team of engineers to build and maintain that one aspect, let alone the rest of the internal logistics. There aren't enough people willing to drop $4K+ on GPUs to justify that sort of investment on the part of the developers, so they don't.

And honestly, it still doesn't work. Resource sharing is a major PITA. Direct access for the GPU to go straight to storage and RAM helps in theory, but if it is going to be possible on consumer hardware, then dual 16-lane PCIe 5.0 slots are going to be the minimum needed to pull it off. NVLink on the enterprise side manages it very well by using 32 NVLink lanes operating at a little over 6 GB/s each, half for data in and half for data out. The only company really doing consumer-level multi-GPU is Apple, and they pulled it off by designing an interposer capable of moving 2.5 TB/s of data.

So despite DX12 having full support for multi-GPU configurations, at the resolutions users would want it for there is just too much data to move, and current consumer designs just can't facilitate it. There aren't enough PCIe lanes on the boards or on the consumer CPUs to make it work in any usable manner.
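To give a feel for where that manual work starts, here is a minimal sketch of the DX12 explicit multi-adapter entry point (my own illustration, assuming the standard Windows SDK headers, not anything from the leak): the application enumerates every adapter and creates a separate device per GPU, and from that point on every cross-adapter copy, fence, and heap is the developer's problem.

```cpp
// Enumerate all hardware adapters and create one D3D12 device per GPU -- the
// starting point for explicit multi-adapter. Everything after this (cross-
// adapter heaps, fences, copy scheduling) has to be managed by hand.
#include <dxgi1_6.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip WARP

        ComPtr<ID3D12Device> dev;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_12_0,
                                        IID_PPV_ARGS(&dev)))) {
            std::wprintf(L"GPU %u: %s\n", i, desc.Description);
            devices.push_back(dev);
        }
    }
    std::wprintf(L"%zu D3D12-capable adapters found\n", devices.size());
    return 0;
}
```

The enumeration is the easy part; the post above is about everything that comes after it.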
 
I appreciate the breakdown.
 
SOOOOOO what do we know so far?

1) Will be slightly faster than Ryzen 7000 in gaming
2) Will pull close at the high end in multitasking, will likely lead in the midrange
3) Will use slightly more power gaming than 7000, will melt your home's wiring in multicore
4) Last generation on the socket, so dead end platform
5) Will lose out to the 7800x3D in most games

What else?

I think the big question is how this will end up looking for AMD and how Intel 13th gen will really compare. Will it be only slightly better in gaming in certain benchmarks, or will it be significantly so? For multitasking/content creation, even if Zen4 remains king on the biggest parallelizable tasks, will Intel 13th do well in the rest? What will be the downsides of the Intel 13th platform, from power draw/cooling, the dead-end socket, and weirdness with P-core/E-core issues, to platform price (i.e. even if Intel continues with 12-series pricing, the platform will be more expensive than Zen4, etc.)? The frustrating truth is that companies like Nvidia and Intel seem to be able to use their money to throw the PR apparatus around so that, unless they are basically being beaten to a pulp by competitors, they can position themselves as the natural choice, the best, etc. I am really glad to see Zen4 and the platform improvements, from PCIe 5.0 and DDR5 to things like USB4 40 Gbps and other enhancements, and even the performance across the board is a nice uplift. But if it will be very easily countered by the arrival of Intel 13th, or can be perceived that way, what will they do? If it turns out that Raptor Lake is only 5% better in a handful of games or other very situational stuff, trading blows with or losing out to Zen4 in others, then it's likely lots of people will continue to buy Zen4, especially if it is priced lower than Intel or has other benefits like socket lifespan. However, if that isn't the case and Intel 13th seems to be a step up almost across the board, then people are going to buy it instead, outside of some edge cases (i.e. extreme multithreaded work, AVX-512 users, those who buy based on socket lifespan and the hope of dropping in a new CPU, etc.).

Frankly, if the signs and leaks have been pointing to 13th gen being able to strike back at Zen4, or at least being portrayable that way by Intel's well-honed advertising/PR wing, I can't see why AMD didn't... plan better? The big question at the end of this will be the Zen4 3D chips, which hopefully will be across the line, allowing for things like a 7950X3D, and won't be capped on clock speed or carry other restrictions like the 5800X3D. If Zen4 is depending on 3D V-Cache to "save" them or "take back" the crown vs Intel 13th, then they cannot wait until Q1 2023 to do so! By that time, Intel would have been able to spend this quarter and the holiday season waving its dick about having the best CPU/mobo platform for gamers and the like, building a major installed base and kneecapping AMD's ability to capitalize on their brand new socket and significant gains/uplift. The thing I HOPE AMD had the foresight to do, knowing all this could be possible, is to bring the 3D chips to market as soon as possible, preferably right after Intel 13th launches and is in stores. If that is their plan AND the enhanced V-Cache turns out to offer an overall benefit to gaming and other tasks sufficient to counter Intel 13th where it is strongest, then that would be smart. However, this means an October / November launch of the Zen4 3D chips, and I'm concerned we won't see that, either because they won't or perhaps because they can't bring them to market that quickly. Still, if they are forced to wait until Q1 they may pick up some later builders, but Intel would be able to advertise that it "won" this generation overall with all its 2022 sales, and then, right around the time Zen4 3D would finally come out to turn the tables a bit, Intel will be sure to point CES and all the Q1 electronics shows at the upcoming 2023 tech on the horizon!

So if it turns out that Raptor Lake is a serious competitor for Zen4, most of the hope seems to rest on AMD bringing the Zen4 3D versions out as soon as possible to counter, and it's way up in the air whether they can or will do that; something quite frustrating for those of us who would otherwise want to support AMD, which often seems to make more customer-friendly, open-tech decisions than its major competitors across CPU platforms and GPUs alike.
 
I suspect that 13th Gen Intel will be beastly. They are doubling their L2 caches and adding another 4 MB to L3, I believe. If AMD's 7000 series is just now trading blows with 12th Gen parts, they're gonna be in serious trouble with the 13th Gen parts. If that wasn't enough, they allegedly have clock speeds of up to 5.8 GHz on a single core, and their efficiency is back-ish. No small feat on the same node.

I think AMD was guessing they had a clear winner on their hands and didn't give Intel enough credit. It seems to be a common enough issue with AMD these days: undervaluing the competition. That, and the fact that they never release their best stuff first; they always kind of drag older tech over from generation to generation (look at how long Vega lasted in their APU lineup, well after they had RDNA 1 & 2).

The X3D parts probably won't be released this year if Intel doesn't really punch back hard enough with 13th gen; AMD would hold off to maximize profits. If Intel is beating them up, the X3D parts will probably make it out at the end of the year. Maybe. They're having issues with stacking the cache on this generation, I recall seeing something about that, so even if AMD wanted to come out swinging I don't think the tech was ready.
 
TSMC can't physically do the 3D-stacked chips on their 5nm process yet. The BESI 8800 Ultra is not as stable or accurate as is needed at that size, and they are physically failing to line up the two chips more than 60% of the time; TSMC and BESI are working hard on a solution, but they don't have one yet. TSMC has supposedly developed a new bonding agent for the stacked chips with better thermal properties than the one used for the 5000 series. The copper-based bonding agent used in the 5800X3D has too high a resistance, so when more than a few volts are pumped through it, it begins to physically melt; this is why AMD had to lock that chip down hard. The new bonding agent has a lower resistance, though still higher than they'd like, and they can't push its melting point much higher than it currently is, because then applying it would damage the chip in the process.
 
Yeah, I think that was a smart move that didn't simply invalidate the shitload of RAM that I and many others have stockpiled over the years. The actual performance shift between DDR4 and 5 is like 3%...

AMD might have saved themselves a couple of pennies on their memory controller, and likely a lot more with how the Fabric and such work on their chips... However, this just comes across as lazy, because when you're running 4 sticks of RAM in AMD systems it dials the DDR5 down to 3600 MHz. The Fabric is supposed to be independent of the other system buses this time, too. I might have jumped on early adoption for team Red if they had supported DDR4, because I have a ton of it lying around...

Totally agree with your point.
Once you get to DDR5-5600, the benefits can be non-trivial in CPU-limited situations. Hardware Unboxed has been using DDR5-6400 for Intel 12th gen and it's pretty significant in CPU-limited games:




Here they use a quad core. Over a 12-game average, even DDR5-4800 averages out to roughly the same performance as any DDR4 kit. DDR5-6400 averaged out to about 10% faster, with some specific games showing 15%+.



I'm not sure where you are getting this narrative (that AMD is holding the X3D parts back to maximize profits) from.

It seemed pretty clear to me that AMD compared to 12th gen during their presentations because they know they have some really nice gains over it, in some situations. And that's all there seems to be to that. I don't think they have undervalued their competition at all.
IMO, if AMD could do Zen4 V-Cache right now, they would. There is no "maximize profits" benefit to waiting. You will pay a lot more for V-Cache, and the competition has a good product out soon.

If 13th gen kicks butt in gaming, people could go with that, since Zen4 V-Cache isn't available.
 
Looking at the material, it looks like a 12-watt increase in boost TDP. I wonder how many watts it will actually pull under an all-core load.
 

There is always going to be some give and take in an arms race. Intel got seriously delayed because of their problems with 10nm, which set them back generations, and AMD used that time to catch up and even get ahead. TSMC is now struggling with some of their new processes and technologies, similar to the stumbling blocks Intel had to work out with their 10nm process, which is delaying some of AMD's planned technologies and will let Intel catch up in the areas where it fell behind, possibly retaking the lead in some places. It's how a technological arms race is supposed to go, and it flopping back and forth is good for everybody, because if Intel or AMD simply charged ahead unobstructed and left their competitors in the dust, it wouldn't be good for anybody.
 
Exactly. See Ryzen 5000 pricing and the product stack for a clear example of this. Maybe an awesome showing from Raptor Lake will drive down 5800X3D pricing some more; I wouldn't mind that at all :D All these top-end CPUs are overkill for their matched GPU tier. The differences are in multithreaded stuff and power usage. For most of us, they're basically interchangeable (with Intel just needing a 240V outlet).
 
Sadly, if anything I expect what I am seeing to do the opposite and drive the cost of the 5800X3D up. If it turns out it still competes well against the new generation, that gives existing AM4 users much more incentive to just grab one and hold off on a new system; the demand spike would send pricing back up, not further down.
 
With all the new attention the 79XX reviews are bringing to the 5800X3D, since it's so strong in benchmarks, I see the $399 price staying put for quite a while. The $409 13700K is going to edge out the 5800X3D for gaming, but for anyone with an existing AM4 investment who primarily games, the latter is a no-brainer drop-in replacement.
 

If they had offered both DDR4 and DDR5 as options, they would have gained a wider range of adopters for the new platform. DDR5 can provide benefits; however, Hardware Unboxed was using CL14 DDR4-3200. CAS latency can help, but in the applications they were testing the frequency likely made up a lot of the difference in performance, and you can easily narrow that gap with faster DDR4 RAM. I don't really care for everything Hardware Unboxed does in their testing.
 
Interesting vid on Thread Director 2. Example scenario is starting a render and seeing all P-cores+E-cores loaded to 100%; then a game is loaded and the render job is shifted to the E-cores while the game is given the P cores. This requires latest Windows 11 for its updated scheduler. Intel noted Alder Lake will receive this feature as well, in a coming microcode update.

FWIW this type of core allocation was technically already possible by tinkering with thread affinity manually, but then doing it manually is also a huge PITA.
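For anyone curious what the manual version looks like, here is a rough Win32 sketch (my own illustration, not from the video): it pins an already-running background process, say a render job, to the E-cores so a game keeps the P-cores. The 0xFFFF0000 mask assumes a 13900K-style topology in which logical processors 0-15 are the hyperthreaded P-cores and 16-31 are the E-cores; the real layout should be checked with GetLogicalProcessorInformationEx (or just Task Manager) before trusting it.

```cpp
// pin_to_ecores.cpp -- confine a background process to the assumed E-core block
// so a foreground game keeps the P-cores. Core numbering is assumed, not queried.
#include <windows.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    if (argc < 2) { std::printf("usage: pin_to_ecores <pid>\n"); return 1; }
    DWORD pid = std::strtoul(argv[1], nullptr, 10);

    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                              FALSE, pid);
    if (!proc) { std::printf("OpenProcess failed: %lu\n", GetLastError()); return 1; }

    // Bits 16..31 set -> logical processors 16-31 (the assumed E-core block).
    const DWORD_PTR ecoreMask = 0xFFFF0000u;
    if (SetProcessAffinityMask(proc, ecoreMask))
        std::printf("PID %lu confined to logical processors 16-31\n", pid);
    else
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());

    CloseHandle(proc);
    return 0;
}
```

Thread Director plus the updated Windows 11 scheduler is essentially doing that dance automatically and re-evaluating it whenever the workload mix changes, which is the whole point of the demo.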

 

Does ryzen have something similar? Oh, I forgot, ryzen is all P core :D
 
If you look at their 12100 video for memory testing: single-rank DDR4-4000 CL19 was barely better than single- and dual-rank 3200 CL14, in maybe 4 games. And if you look at the 12-game average, both single- and dual-rank DDR4-3200 CL14 beat DDR4-4000 CL19.
 

That was actually a dope demo.
 

One wonders what the director has been doing this whole time, because I thought one of the most basic features was to put the current focus/full-screen app on the P-cores and background apps on the E-cores, unless the app has a special flag to stay on P-cores or E-cores, respectively.
 
You joke about Ryzen being all P-cores, but AMD does not multitask that well to that degree when all cores are in use. To date, AMD's memory and task scheduling has left a lot to be desired, and I am hoping the new IO die on the 7000 series improves it, but I am not super hopeful on that front.
 
Thread Director initially made the decision based on workload size and system stress; I don't think it took active focus into consideration. So it would put things like your email and Excel windows onto the E-cores and put Blender on the P-cores, but if you were also running Unreal as another task, the two would try to share the P-cores while your web browser and other stuff took up what was left of the E-cores.
 
I'll be curious how Intel handles it when they start gluing CPUs together. The CCX-to-CCX hit is real for sure.
 
The Sapphire Rapids leaks to date have been promising. Intel has a lot of experience with multi-chip designs in the form of their multi-socket systems, and I expect much of that can be scaled down and remain relevant. Intel does use an MCM package for their Ponte Vecchio GPU, and from what I have read it performs very well at its intended tasks. The hardest parts of an MCM design are the interconnect and the scheduling; Intel seems to have a firm grasp on both, so I am not worried about it being bad. How it stacks up against TSMC and AMD's approach will be interesting, though, as it's essentially going to be Intel's 1st-generation packaging vs TSMC's 4th.
[Attached image: Ponte Vecchio package shot]


For those curious about the Ponte Vecchio chip, this is what Intel is playing with in the picture above.

Raja teased that there are 7 advanced technologies at play here, and by our calculation, these would be:
  • Intel 7nm
  • TSMC 7nm
  • Foveros 3D Packaging
  • EMIB
  • 10nm Enhanced Super Fin
  • Rambo Cache
  • HBM2
Following is how Intel gets to 47 tiles on the Ponte Vecchio chip:
  • 16 Xe HPC (internal/external)
  • 8 Rambo (internal)
  • 2 Xe Base (internal)
  • 11 EMIB (internal)
  • 2 Xe Link (external)
  • 8 HBM (external)
So Intel is playing with some insane levels of MCM here.
 
I see what you are attempting to prove with that 12100 memory-scaling video. It's highly game specific, and the DDR4-4000 kit matches or edges out the DDR5-4800 in many of the benchmarks that outstrip the DDR4-3200 CL14 memory. The 3200 CL14 CAN beat the DDR5-4800 and the DDR4-4000, but not in every scenario. Clearly, some games are more sensitive to CAS latency and others are more interested in frequency. In many of the benchmarks the DDR4-4000 outstrips the DDR5-4800 or matches it exactly, which would effectively prove my point. The 3200 CL14 stuff can beat them both sometimes (in a good number of titles).

Back to your point, all the memory kits were barely better than one another: a couple FPS difference, no matter which way you spin it. Over half the benchmarks had the DDR4-4000 match the 4800. Actually, the 4000 RAM outperformed the 3200 in four games and matched it in two. I did see the 3200 sticks pull ahead by double digits in at least one benchmark.

Thank you for making me watch that video again. It just underscores that you don't need DDR5 for anything, and nothing short of a 6000 MHz kit at double the cost of DDR4 will cut it. I'm still curious how DDR4 3600-4000 stacks up against the 6000 kit. I answered this below... lol

I did a little more digging and found that the only instances where DDR5-6000 seems to outshine DDR4 of either the 3200 CL14 or 4000 MHz variety are in benchmarks that are already pulling over 200 FPS. In a couple of titles it gets double-digit gains, like 10-20 FPS; the rest are within a couple FPS. Anything less than DDR5-6000 fares worse.
 
DDR5 has some incredibly cool features, the sub-channel splitting, on-die ECC, and so on; it is a very big shift in how consumer memory works. But until OSes and game engines are redesigned in a way that takes advantage of those features, the latency will hurt it for a number of use cases, and it will have to rely on pure speed to catch back up. Some years from now, when DDR5 is the majority and the next generation of OSes and game engines is out using those feature sets, DDR5 will be the clear winner. Until then you take what you can get, but if you can spring for DDR5 I expect it will serve you better in the long run.
 
It's like a rinse-and-repeat scenario every new generation, until things get normalized and widely accepted. Then the new tech really starts pulling ahead.
 
DDR5 became clearly better than DDR4 quite fast, too (versus how long it took DDR4 to become clearly better than DDR3, and so on), recent history speaking?

Or maybe it feels that way because time goes faster as you age, and mainstream CPUs with DDR5 already being close to a year old is a lot.
 
Some of it is DDR5, some of it is the OSes doing better at memory management, drivers, and so on. DDR5 is the future, it is better, and mass adoption will make it cheaper. I am just hoping that having more than 2 memory channels stops being a bad thing on the consumer side; I would love at least 4 channels of DDR5 operating at full speed on consumer platforms. I doubt I'll see it this generation, but hopefully next.
 

This guy's take on HUB's pro-DDR5 video shows the problems with certain testing conditions, namely Intel's gear 1 vs gear 2 ratio of memory controller speed to memory speed (in gear 1 the controller runs 1:1 with the memory clock, while in gear 2 it runs at half the memory clock, which adds latency). DDR5 looks good compared to hobbled DDR4-4000, especially if the CPU can't run 4000 MHz RAM at gear 1.

 

That's an excellent and in-depth breakdown. I knew something was wrong with HUB's benchmarks, but I just couldn't put my finger on it.

Thank you!
 

Unless I'm missing something, this is very bad behavior. Why would a task release all the P-cores just because it was minimized? That's just a waste of resources.
 
It's not the act of minimizing it that freed up the P-cores; it was the launching of another high-resource application, ensuring performance for the active task. It's actually pretty useful for a work environment or office setting. The application load they displayed in the demo was a little over the top, but it showed some good points. I mean, very few people are going to be doing large jobs in Blender while simultaneously building scenes in UE5 and gaming. But the fact that the game was running smoothly while it was still doing those other tasks is kind of a big deal; my Threadrippers, Xeons, Epycs, Ryzens, and other Intel CPUs couldn't pull that off.
 
I get that, but that's not what they showed. The P-cores weren't relegated to a foreground task; they just went dead before another application was launched.
 
But the fact that the game was running smoothly while it was still doing those other tasks is kind of a big deal; my Threadrippers, Xeons, Epycs, Ryzens, and other Intel CPUs couldn't pull that off.
Not even with task priority?
 
I'm fairly certain you had to set this manually before, and it was a huge PITA. The scheduler in the OS is handling everything seamlessly, without user input. If I understood the video correctly, the E-cores completely offloaded the tasks from the P-cores without a hit to performance in any way. I could be wrong.
 
Prior to this P/E-core nonsense for PC CPUs, yes, this was entirely possible just by virtue of OS load balancing. On Linux on my 3975WX I can compile something like LLVM with 48 threads ("vcpu"), have other users logged into the machine doing smaller compiles, and still play a game in the foreground, all concurrently. If I look at something like htop I see most cores pegged to 100% by their respective processes. This doesn't require assigning affinity manually or anything more complicated than that.

Some users here like to spew technobabble but don't actually understand how any of this works to a depth beyond "marketing slides".
 
I can't argue with the ability to multitask having been there before. The behavior of the Windows 11 scheduler is more advanced than prior iterations of Windows. I'm not personally a big fan of P/E-cores, but it sure looks pretty cool under Windows 11. I suspect something similar happens with AMD processors in Windows 11 when you have a shitload of cores; IIRC in the past all the cores would just carry a part of the load. What's happening here seems specific to the Intel architecture.
 