NVMe on Threadripper slower than old Intel system

So I came from a 5960X to a 1950X Threadripper.

I have a Samsung 960 Pro. On my old Intel system I can easily transfer (my test files, 40 GB) at 2.0 GB/s.

Copying the same files on my new Threadripper install, I'm at 1.46 GB/s.

I have the same Samsung NVMe driver installed.

Am I missing something, or is AMD/Threadripper NVMe performance just slower?

I have an ASUS Zenith Extreme motherboard with the newest BIOS installed from the ASUS website.

Here is my CrystalDiskMark score.

Is it possible it's because I was running a 125 MHz bus speed versus 100 on the Intel system?
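For what it's worth, a quick timing check outside of Explorer's copy dialog can help rule out Windows-side copy overhead. This is a rough Python sketch (the file size and chunk size here are arbitrary placeholders, and the OS page cache will inflate numbers for anything much smaller than the real 40 GB test set):

```python
import os
import time
import tempfile

def measure_read_throughput(path: str, chunk_mib: int = 8) -> float:
    """Sequentially read `path` in chunks and return throughput in GB/s."""
    chunk = chunk_mib * 1024 * 1024
    total = 0
    start = time.perf_counter()
    # buffering=0 avoids Python's own buffering layer on the raw read
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e9

# Throwaway 64 MiB test file; use the real multi-GB files for a
# meaningful NVMe figure, since small reads may be served from cache.
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(os.urandom(64 * 1024 * 1024))
    test_path = tf.name

print(f"{measure_read_throughput(test_path):.2f} GB/s")
os.remove(test_path)
```

Running the same script on both systems against the same files would at least show whether the gap is in the drive path or in the file-copy machinery.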
 

Attachment: Capture.PNG (CrystalDiskMark screenshot, 36.4 KB)
The chipset on the motherboard might be the limitation, even if the AMD board runs the NVMe drive at PCIe 3.0 x4. Something about how the CPU interfaces with the add-in devices and how much total bandwidth is available for use.

Or it could be a limitation purely of the CPU. Faster RAM speeds and an overclock on the CPU might help, as they have for other Ryzen chips.
 
I'd agree with Arestavo's RAM comment. Ryzen relies heavily on RAM speed, whereas Intel's chips do not as much.

Make sure you have the latest chipset drivers as well. The Intel had more PCIe lanes, but I assume you aren't using SLI/CrossFire on your new AMD board.
 
Did you install Windows fresh on the new AMD system or just swap/clone the drive?
 
The 5960X has 40 PCIe lanes; the Threadripper has 64, so I don't see that being the issue.

I have the newest AMD chipset drivers.

This is a fresh install of Win 10 Pro.

It's the same 2400 MHz Corsair RAM that was in the Intel system. Is it worth upgrading it for Threadripper?
 

Again, PCIe lanes (3.0 x4) might have nothing to do with it; it might be a limitation in the total bandwidth available to all devices. If the chipset only has 2 GB/s available for all PCIe and other devices through the interconnect, then it doesn't matter: 2 GB/s (just an example) is all you will ever get on devices that have to utilize it.
 
Wouldn't SSD benchmarks reflect that?
 
Maybe not if you are copying from one drive to another that both use the same interconnect; if that's the case, it could be halved. If it really is a limitation like that, and not a driver or Windows issue futzing things up.
 
Isn't the RAM speed tied to the bus? Your RAM might be the answer there. Loan/borrow some 3200 B-die.
 
I'm debating which kit to get right now. I have 64 GB of Corsair 2400.

Reckon it's better to be safe than sorry: beg/borrow/steal a 3000-3200 B-die kit, then see what it does, if it's that critical to have full speed.
https://hothardware.com/reviews/amd-ryzen-threadripper-processor-review?page=3
The block diagram there looks like the NVMe SSD comes direct from the CPU, and Infinity Fabric is tied to RAM speed, so I'd presume NVMe performance is also.

Eight drives in RAID on higher-speed RAM pulled 28 GB/s... 3.5 GB/s per drive. Enough said.
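Quick back-of-the-envelope check on those review numbers (the 28 GB/s aggregate figure is taken from the linked article, not measured here):

```python
# Per-drive throughput implied by the review's 8-drive RAID result
aggregate_gbs = 28.0  # GB/s total, figure from the hothardware review
drives = 8
per_drive = aggregate_gbs / drives
print(per_drive)  # → 3.5, i.e. 3.5 GB/s per drive
```

3.5 GB/s per drive is essentially the ceiling of a PCIe 3.0 x4 NVMe drive, which is why faster RAM looked relevant in that test.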
 
I'm confused: I set the memory access mode from Distributed to Local and gained back 250 MB/s of the 500 MB/s that was missing.
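For reference, the deficit arithmetic, assuming the 2.0 GB/s Intel baseline and the 1.46 GB/s Threadripper figure from earlier in the thread (all values in MB/s):

```python
# Rough accounting of the NVMe throughput gap between the two systems
intel_baseline = 2000   # MB/s, 5960X system
tr_distributed = 1460   # MB/s, Threadripper in Distributed mode
recovered = 250         # MB/s gained after switching to Local mode

deficit = intel_baseline - tr_distributed  # total shortfall
remaining = deficit - recovered            # still unaccounted for
print(deficit, remaining)  # → 540 290
```

So switching the NUMA memory access mode to Local recovered roughly half the shortfall, with about 290 MB/s still unexplained.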
 