I just built a system on a Gigabyte X570 Aorus Ultra motherboard with 2x Samsung 980 Pro NVMe SSDs (i.e. everything is PCIe 4.0) and am getting just under 7,000 MB/s read, 5,000 MB/s write on both.
Question: has anyone ever tested the real-world performance of a CPU-connected NVMe drive vs. a chipset-connected one? The 2nd and 3rd M.2 slots on this board hang off the chipset, which is also PCIe 4.0, so sequential throughput comes out identical in raw benchmarks.
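For anyone who wants to compare the two slots themselves, here's a rough random-read latency micro-benchmark sketch using only the Python standard library. The file path and sizes are placeholders, not anything specific to this board; and note that without O_DIRECT the OS page cache will absorb repeat reads, so on real hardware you'd want a test file much larger than RAM (or flushed caches) for honest numbers.

```python
# Rough 4K random-read latency micro-benchmark (stdlib only).
# Caveat: reads here may be served from the OS page cache; for real
# device latency, use a file much larger than RAM on the drive under test.
import os
import random
import statistics
import tempfile
import time

BLOCK = 4096   # 4 KiB: the typical latency-sensitive I/O size
READS = 1000   # number of timed random reads

# Self-contained demo: build a small scratch file. Replace this with a
# path to a large file on the M.2 slot you want to measure.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * 256))
    path = f.name

fd = os.open(path, os.O_RDONLY)
size = os.fstat(fd).st_size
latencies = []
for _ in range(READS):
    # Pick a block-aligned offset and time a single 4 KiB read.
    off = random.randrange(0, size - BLOCK) // BLOCK * BLOCK
    t0 = time.perf_counter_ns()
    os.pread(fd, BLOCK, off)
    latencies.append(time.perf_counter_ns() - t0)
os.close(fd)
os.unlink(path)

print(f"median 4K read latency: {statistics.median(latencies) / 1000:.1f} us")
```

Run it once against a file in the CPU-attached slot and once against the chipset slot; the delta between medians is the extra hop through the chipset. On Linux, `fio` with `--rw=randread --bs=4k --iodepth=1 --direct=1` would give cleaner numbers since it bypasses the page cache.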
But I'm curious about latency, because I want to use PrimoCache with an NVMe drive as an L2 cache in front of my 36TB RAID6 array of spinning drives.
I'm considering swapping my boot drive (OS & apps only, currently in M.2 slot 1) to the second M.2 slot, and putting the Cache drive in the first slot connected directly to the CPU to reduce latency.
Main uses for the system are video and photo editing. So I'm not launching lots of apps; mostly I'm scrubbing back and forth through footage and pics (hence the focus on latency over throughput).
Thoughts pro/con/WTH are you doing?