i9-13900K benchmark leak?

...stuff...

Making those commands mandatory as part of the PCIe 5 specification goes a very long way to speeding adoption and making things better for everyone.
This same issue, but for GPU-to-RAM communication, is why AMD Smart Access and Nvidia GPUDirect were such big deals when they launched.
That's incorrect. These technologies remove the need for a bounce-buffer in system RAM for CPU -> GPU or GPU <-> GPU or GPU <-> Accelerator accesses. Also SmartAccess is primarily (only?) for gaming (AMD's compute equivalent is "large BAR" on systems using their ROCM stack) while GPUDirect is (or was?) primarily for compute.
 
That's incorrect. These technologies remove the need for a bounce-buffer in system RAM for CPU -> GPU or GPU <-> GPU or GPU <-> Accelerator accesses. Also SmartAccess is primarily (only?) for gaming (AMD's compute equivalent is "large BAR" on systems using their ROCM stack) while GPUDirect is (or was?) primarily for compute.
https://blocksandfiles.com/2020/07/23/nvidia-gpudirect-storage-software/

GDS enables DMA (direct memory access) between GPU memory and NVMe storage drives. The drives may be direct-attached or external and accessed by NVMe-over-Fabrics. With this architecture, the host server CPU and DRAM are no longer involved, and the IO path between storage and the GPU is shorter and faster.

[Diagram: Blocks & Files GPUDirect diagram. Red arrows show the normal data flow; green arrows show the shorter, more direct GPUDirect Storage data path.]
GDS extends the Linux virtual file system to accomplish this – according to Nvidia, Linux cannot currently enable DMA into GPU memory.

The GDS control path uses the file system on the CPU, but the data path no longer needs the host CPU and memory.

AMD Smart Access does the same thing. AMD advertises it mostly for gaming because their datacenter presence is so small in comparison, but the same technology, still called Smart Access, is used for their Instinct lineup.
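
For anyone curious what that shorter data path looks like in practice, below is a minimal sketch of a GPUDirect Storage read through NVIDIA's cuFile API; the file path, transfer size, and the stripped-down error handling are illustrative only. The point is that the read is DMA'd from the NVMe device straight into GPU memory, with no bounce buffer in system RAM:

Code:
/* Minimal GPUDirect Storage read sketch (illustrative only; error handling omitted).
 * Build against the CUDA toolkit and link with -lcufile -lcudart. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <cuda_runtime.h>
#include <cufile.h>

int main(void)
{
    const size_t size = 64 << 20;            /* 64 MiB, arbitrary for the example */

    cuFileDriverOpen();                      /* bring up the GDS driver */

    /* O_DIRECT file descriptor wrapped in a cuFile handle (path is hypothetical) */
    int fd = open("/mnt/nvme/dataset.bin", O_RDONLY | O_DIRECT);
    CUfileDescr_t descr;
    memset(&descr, 0, sizeof(descr));
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);

    /* Destination buffer lives in GPU memory; no staging buffer in host DRAM */
    void *devPtr = NULL;
    cudaMalloc(&devPtr, size);
    cuFileBufRegister(devPtr, size, 0);

    /* DMA from the NVMe drive directly into GPU memory */
    cuFileRead(fh, devPtr, size, /*file_offset=*/0, /*devPtr_offset=*/0);

    cuFileBufDeregister(devPtr);
    cuFileHandleDeregister(fh);
    cudaFree(devPtr);
    cuFileDriverClose();
    return 0;
}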
 
I am not sure how relevant it would be for something like gaming, considering games tend to use a small number of large asset files (especially with DirectStorage); it seems more of interest for database, AI training, and server work.

Removing decompression from the CPU could be big, as could removing the work of servicing the occasional file request from the drive...
 
You joke, but AMD does not multi-task that well to that degree when all cores are in use. To date, AMD's memory and task scheduling has left a lot to be desired, and I am hoping the new IO die on the 7000 series improves it, but I am not super hopeful on that front.

This is the kind of thing that bothers me with both Intel and perhaps AMD. How much of this comes down to software that is platform/OS dependent, and how much to hardware/microcode? The idea that you need to be running Windows 11 with its latest update to get the above Intel Thread Scheduler 2 behavior on Alder Lake and Raptor Lake, for instance, is concerning, and likewise for any issues with AMD hardware. How does Linux fare by comparison? I'm guessing either it is doing tremendously better because the support landed in the kernel faster (something Intel and AMD likely want merged ASAP for server use), or it's going to take much longer and we'll have to contend with the crumbs if a theoretical software component is focused on Windows.
 
Latency has taken a hit in every generation from SDRAM to DDR. RAMBUS was brutal on the latency front, IIRC. It's always been a thing as RAM has advanced architecturally. I remember SDRAM being incredibly low latency.
While it didn't have the data transfer rates, I do miss the days of DDR1 with 2-2-2-5.
 
With people being forced to adopt DDR5 moving forward (for Zen4 and 14th Gen at least), it'll bring prices down. How much and how quickly are unknowns, but at least there's that.

DDR5 on Newegg doesn't look that expensive. 2x 16GB Expo 6000 for $249. Pretty cheap if you ask me. Especially under Brandonflation, that was like $150 two years ago.
 
Man oh man, blast from the past. Better than the 2nd game for sure.
FYI - SOTS still runs under Windows 11. Installed fine, runs great. The only issue I have run into is with Alt+Tabbing out of the game, which worked flawlessly before: now I get graphical issues and the screen is totally skewed when I go back in. Worth noting. It must have something to do with the compatibility layer built into Windows.
 
While it didn't have the data transfer rates, I do miss the days of DDR1 with 2-2-2-5.
If I am not mistaken, DDR-400 CL2 has a 10 ns latency, like modern DDR4-3600 CL18 or DDR5-6400 CL32.

For top-end kits, even though we are many generations on, latency has not moved that much, if at all, since DDR started, and DDR5 latency just one year in already looks much better than where DDR3 or DDR4 were at the same point.

I could be all wrong, or looking at just the one latency measure that did not move much and not the other ones.

According to this:
https://en.wikipedia.org/wiki/CAS_latency
PC100 SDRAM was 20 ns, 50 ns and 90 ns for the first, fourth and eighth word.

DDR5 latency for a 6400 CL36 kit is down to 11.25, 11.72 and 12.34 ns. By the end of 2023 it would not surprise me if it beat the best DDR4-4800, which reached 7.92, 8.54 and 9.38 ns and was already much lower than anything from the SDRAM, DDR or DDR2 days.
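
For anyone who wants to check those figures, they all fall out of the same arithmetic: the first word arrives after CL memory-clock cycles, and each following word one transfer period later. A quick sketch of that math (the PC100 row assumes CL2, which is what reproduces the 20/50/90 ns figures above):

Code:
#include <stdio.h>

/* Latency of the Nth word of a burst, in ns:
 *   first word  = CL cycles / memory clock
 *   later words = first word + (N - 1) transfer periods
 * transfers_per_clock is 1 for SDR (PC100) and 2 for DDR/DDR4/DDR5. */
static double nth_word_ns(double clock_mhz, double cl, int transfers_per_clock, int n)
{
    double first_ns = cl * 1000.0 / clock_mhz;
    double transfer_ns = 1000.0 / (clock_mhz * transfers_per_clock);
    return first_ns + (n - 1) * transfer_ns;
}

int main(void)
{
    /* Kits quoted above; memory clock = data rate / 2 for the DDR generations */
    printf("PC100 SDRAM CL2 : %5.2f %5.2f %5.2f ns (1st/4th/8th word)\n",
           nth_word_ns(100, 2, 1, 1), nth_word_ns(100, 2, 1, 4), nth_word_ns(100, 2, 1, 8));
    printf("DDR-400    CL2  : %5.2f ns first word\n", nth_word_ns(200, 2, 2, 1));
    printf("DDR4-3600  CL18 : %5.2f ns first word\n", nth_word_ns(1800, 18, 2, 1));
    printf("DDR5-6400  CL32 : %5.2f ns first word\n", nth_word_ns(3200, 32, 2, 1));
    printf("DDR5-6400  CL36 : %5.2f %5.2f %5.2f ns (1st/4th/8th word)\n",
           nth_word_ns(3200, 36, 2, 1), nth_word_ns(3200, 36, 2, 4), nth_word_ns(3200, 36, 2, 8));
    return 0;
}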
 
Sorry this post isn't related to the technical specifics of this CPU, just commentary on how far we've come. We're like, almost ten generations past the 5820k, and the 5820k in my desktop PC is still trucking along despite having been overclocked to its limit. I have a 9900k in my HTPC. The 5820k and 3090, powered by a 650W PSU, perform relatively similarly to my 9900K + 3090 Ti build at 4K.

I guess all I'm saying is that it doesn't seem like you have to upgrade your processor for gaming anymore. Or maybe I lucked out, because when I bought the 5820k was about when they started to better optimize everything for multithreading; it took years before the potential of that CPU was even tapped.

I'm just curious if the shape of the market has changed or is going to change due to the lack of upgrade pressure for any PC component specifically in regards to gaming. SSDs are cheap, RAM requirements have stayed at 16GB forever, and nothing is really seeking to utilize that GPU or CPU power outside of specific workstations.

I see people complain that their FPS is "only 80" and isn't maxing out their 85Hz monitor. The perspective on PC hardware seems to be changing quickly.
 
I'm just curious if the shape of the market has changed or is going to change
Historically they tend to find ways, though it can take significant time (think very fast SSDs versus regular SSDs). For the CPU upgrade, for example:
https://www.techspot.com/article/2520-spiderman-cpu-benchmark/

At 4K, very high quality, with ray tracing on:
- 12900K with DDR5-6400: 96 fps average, 76 fps 1% low
- 9900K with DDR4: 78 fps average, 56 fps 1% low
- 2700X with DDR4: 57 fps average, 40 fps 1% low


I suspect one will be able to more than double their framerate on an RTX 4090 going from a nice Ryzen 3600 to a 13900K or a 7900X with a nice DDR5 kit, when playing at 4K with RT on, or at least come close to it. Without RT, a 13900K with 6400 MT/s RAM almost doubles a Ryzen 2600X in the 1% lows.

It is still limited, but it is starting to show IMO, even on older titles:
https://www.hardwaretimes.com/amd-r...rown-with-potent-ray-tracing-cpu-performance/

Shadow of the Tomb Raider 1% lows go from 73.5 fps to 117.3 fps going from a Ryzen 5900X to a 7900X, and Crysis goes from 60 to 84; going from a 3600X to a 7900X with a 4090 would probably show a big gap.

Ghostwire: Tokyo even shows a big jump from 102.1 to 126.2 fps average, and Hitman 3 shows 85.1 to 115.3.

And that is going from what was the fastest gaming CPU not so long ago, and not with the most powerful card available. On many 2022 titles I suspect the RDNA3 and Lovelace cards will show a massive CPU jump between an 8700K or 9900K and a 5800X3D, 13900K or 7900X, even in scenarios that were under 110 fps average with a 3090 Ti.

I am not sure if you have tried to run some of those Nanite/Lumen fully ray-traced demos, but there is no limit on how hard to run they can make things, or on how much power the "better" visuals can use.
 
Great post. A lot of people severely underestimate the impact of the CPU even at 4K.
 
Personally, I see these as a pretty legit alternative to AMD's offerings, especially as someone who is mostly playing games and using the 2D half of Adobe Creative Suite. I'm pretty unlikely to buy another new CPU in less than 2 years, so I don't really care about platform longevity. I'm going to watch Microcenter's mobo/CPU bundle deals once PCIe 5 drives and newer PSUs start hitting shelves. Whoever has the better bundle deal is likely to get my $.

It looked like the 12700k still held a slight edge in 1080p gaming vs the 7900.

Though I'm sure the X3D version will turn that around real quick.
 