School me on drives please

TeleFragger
Ok, I know what's going on in your head...


Yeah, this post wanders all over the place. Hope you can follow!!! Hah, sorry, my ADHD brain is on fire!!!


Oh crap, another n00b... oh great... you're partially right. But...

I have been doing computers long enough to tell you that my favorite OS was... DR DOS 7.0...
I used an o-scope and schematics to repair IBM PS/2 and ValuePoint systems...
fixed token ring cards and memory expansion cards.
I even learned early - yeah, for the people who don't know what pre-internet was...
I learned on the 8088 and 8086 which crystal drove the CPU clock and was overclocking even then... take the 4MHz crystal out and put an 8MHz in...
Hell, my favorite CPU was the Cyrix 6x86!!! LOL...

I was a benchmark whore... loved me some CheckIt... ok, found this while typing this... had to look... LOL
https://winworldpc.com/product/checkit-pro/pro-1x

I was so fluent in hardware but lost sight of it when it started getting too hard to keep up, and that was around the Q6600 era...
just so many CPU types, CPU sockets, memory speeds... just too much for me to keep up with...


I've been doing my main job as an Altiris Administrator since 2006 for a few companies now, and yeah, I'm out of the loop on what the cool jobs are today. I am seeing... virtualization, web services, storage, and the like...
I do my part to keep up with my old, dwindling hardware, but effectively I am running...

Win10, Server 2016
ESXi - full lab running 2k16, AD, DNS, DHCP, etc., with Altiris and all of its solutions...
now venturing into FreeNAS

My latest posting craze has been 10gb on the cheap... I'm $55 in on old Mellanox CX4-style ConnectX-1 cards that I have working on Win10, ESXi, and FreeNAS 11 (put a ticket in, as they're not working on 12).


So with all that out of the way... let's get to the learning I'm after!!!

My current job, they use what comes in the Lenovo systems... and those are just cruddy WD Blue drives. It is what it is... but we deal with scientists and masses of data. I told my coworkers that we need to look at SSDs and was told no, too expensive. So I took it upon myself to use the Anvil storage benchmark tool and started benchmarking ALL of our types of HDs: normal spinning SATA drives, then our laptops with the cruddy Intel 180GB SSDs that were dying so often, and onto other things. We started getting a few SSDs in for special requests and I started benchmarking those too.
With that, we now put SSDs in everything, and now even the new Lenovos are getting NVMe drives!!! WOOOT!!!

So once again I'm starting back up on benchmarking drives... BUT benchmarks are just that... not real world to me.

With my 10gb network I have learned a lot, and EniGmA1987 was there for a read and to push me forward:
how to use RAM drives to test your true network speed, etc...
now I'm there... all tested and starting...
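(For anyone wanting to replicate the RAM-drive part of that testing, this is roughly how I spin one up with ImDisk from an elevated PowerShell prompt. Just a sketch: the 12GB size and the R: drive letter are only examples, so check the ImDisk docs for your version.)

# attach a 12GB RAM disk as R: and quick-format it NTFS
imdisk -a -s 12G -m R: -p "/fs:ntfs /q /y"
# detach it again once testing is done
imdisk -d -m R: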

I'm not getting the numbers to make 10gb worth it to me... unless I'm missing the point and it isn't about file copies but more about, say, running multiple VMs over the 10gb link from an iSCSI FreeNAS datastore to the ESXi host????


So what is needed to really get storage speed?
I took my Plex machine and did tons of various copies: to RAM disks locally, and from RAM disk across 10gb to various SSDs, and got freakishly wild, low numbers... nothing high... and my server runs Server 2016 with the Essentials role, StableBit DrivePool, and SSD Optimizer for cache (just installed that)...

During my 10gb file copies,
tried from Plex through the 10gb NIC to remote drives and got...
120GB Samsung SSD - 30MB/s transfer rate
500GB Crucial SSD - 200MB/s transfer rate

but on my ramdisk-to-ramdisk copies I get 700MB/s+.
How can I achieve closer to those numbers without spending a fortune?
I have been at wits' end trying various things and need to be schooled, as I am just missing something...


examples...


ramdisk to ramdisk via 1gb network

[screenshot attached]



ramdisk to ramdisk via 10gb network

[screenshot attached]



CrystalDiskMark against just that RAM drive

[screenshot attached]





So I've got better numbers but just don't understand how to get them on file copies...


I do photo and video editing and it takes a while to copy files around... thus why this would be awesome-sauce!!!

anyway I got company so gotta leave it at that.... hopefully someone can help unclutter my mind...
 
So to start with some tests, download and install the trial version of something like PrimoCache onto the server you are trying to send files to. Set up a cache task for your drives on the server and make it entirely a write cache. Use a chunk of RAM large enough that the files you transfer are able to fit into the cache space. Be sure to set defer-write on, so that it writes to the cache and then writes to the drives a short time later as it is able to.

Now, on the computer you are transferring files from, keep a RAM disk there for now. Put a test file in the RAM disk and send it to the server with the cache space. See if you max out your 10gb connection (should be around 1.2GB/s?). Post back here what speed you got.

Next, remove the RAM disk on your PC and transfer the same file from your normal drive to the server. Post back what speed you get. This will be capped at whatever read speed your drive is capable of. If it's a mechanical drive, that could be anything from 50MB/s to 300MB/s. If it's a SATA SSD, you could get anywhere from 100MB/s to 500MB/s. If it's a PCIe NVMe SSD, it may still saturate your 10gb connection.



Once you do those tests and post back results, remove the cache from the server and do both tests again. Then post back your results without the cache.
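If you don't want to rely on eyeballing the Explorer copy dialog, here is a rough way to time a copy from PowerShell and work the speed out yourself (just a sketch; the paths below are placeholders for your RAM-disk file and the server share):

# time the copy and print an approximate transfer rate
$file = "R:\testfile.bin"                  # test file sitting on the RAM disk
$dest = "\\server\share\testfile.bin"      # share on the server with the cache
$t = Measure-Command { Copy-Item $file $dest }
"{0:N0} MB/s" -f ((Get-Item $file).Length / 1MB / $t.TotalSeconds)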
 
So to start with some tests, download and install the trial version of something like PrimoCache onto the server you are trying to send files to. Set up a cache task for your drives on the server and make it entirely a write cache. Use a chunk of RAM large enough that the files you transfer are able to fit into the cache space. Be sure to set defer-write on, so that it writes to the cache and then writes to the drives a short time later as it is able to.

Now, on the computer you are transferring files from, keep a RAM disk there for now. Put a test file in the RAM disk and send it to the server with the cache space. See if you max out your 10gb connection (should be around 1.2GB/s?). Post back here what speed you got.

so that is what I did here...

ImDisk RAM drive from PC to PC and got 680MB/s+, nowhere near 1.2GB/s... but when I go to copy actual files, that is where I'm not even seeing those numbers.

ramdisk to ramdisk via 10gb network

[screenshot attached]
 
Are you copying files from ramdisk to ramdisk, or just to/from a regular drive on one or both computers?
And are the files large or small?
 
During my 10gb file copies,
tried from Plex through the 10gb NIC to remote drives and got...
120GB Samsung SSD - 30MB/s transfer rate
500GB Crucial SSD - 200MB/s transfer rate

but on my ramdisk-to-ramdisk copies I get 700MB/s+.
How can I achieve closer to those numbers without spending a fortune?
I have been at wits' end trying various things and need to be schooled, as I am just missing something...

It sounds like you already have your answer if ramdisk to ramdisk is fine. Your network is good; your drives are too slow or connected to SATA interfaces that are too slow. Upgrade to faster SSDs if the systems support them and/or implement some sort of performance-oriented RAID.
 
I think there may also be some translation confusion. Storage read/write speeds are in bytes/second. Network speeds are measured in bits/second.

So when you see a storage speed of 685 MB/s, that is roughly the equivalent of 5.5 Gbps. You are never going to get an on-par disk-to-disk-level transfer over the network without massive expense.
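If it helps, the conversion is easy to sanity-check right from a PowerShell prompt (decimal units, ignoring protocol overhead):

# storage MB/s to network Gbit/s: multiply by 8
685 * 8 / 1000      # ~5.5 Gbit/s
# a 10Gbit/s network ceiling back in MB/s: divide by 8
10000 / 8           # 1250 MB/s, i.e. roughly 1.2GB/s best case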
 
Are you copying files from ramdisk to ramdisk, or just to/from a regular drive on one or both computers?
And are the files large or small?

All of the above... remember that 10gb thread? It had benchmarks on this. The weird thing was I used about a 2GB file. I found out when copying a 7 or 9GB file (forget offhand) that it gets to 1.6GB left and slams into a wall...

So I'm trying to understand what I will need to do for drives.

My plans are a FreeNAS server. I have about 12x 500GB SSDs, 6x 2TB WD Black, and 6x 1TB WD Black drives. I get it... WD Red or IronWolf drives are preferred. I'm told WD Black are a bad choice, but I have them on hand and can use them now.
From googling, so many say don't use Black drives for a NAS, but then I read people saying they are just fine... they will use a bit more electricity at 7200RPM vs 5400RPM and may run a bit hotter...


So I know:
WD Blue is slower than WD Black
WD Black is slower than a SATA SSD
a SATA SSD is slower than NVMe

However, I'm trying to figure out a cost-effective way to run a NAS, get my 10gb speeds, and have a happy network life, hah...


I run an ESXi 6.7 server with local datastores.
I run a Windows Server 2016 box with the Essentials role, StableBit DrivePool, and now SSD Optimizer.
Plex server with the actual movies on 2k16.
Gaming rig that I edit photos and videos on.


So I'm thinking a FreeNAS box, lots of drives, an SSD pool, a WD Black pool, and go from there????



What I have done in my testing was ramdisk to ramdisk, and correct... no issues...
Ramdisk to disk and I see the issues... SO that is why I started this thread: trying to understand drives for when you go to do 10gb... now, is it because I chose a huge file vs. many little files? I don't know.


I need to figure out my overall game plan, start there, then start benchmarking.

My ESXi lab is just that... a lab that I tinker in, so local storage works, but it's got 6x 500GB SSDs and I don't need all of that, as I have 6 individual datastores and no redundancy (don't need it... my choice... it's a lab),
so I could pull them, etc...
 
Once you go to a normal disk then yeah, it won't be anywhere close to 10gb speeds. The only way to get above a couple hundred MB/s with large file transfers is to use a solid state drive, preferably an NVMe. If I have some time this weekend I'll try to post my benchmarks transferring from my PC (uses an NVMe) to my Plex server, which uses an NVMe as well as RAM for read and write cache. That's really the only way you will be able to use the bandwidth: make a cache on the server to write to and read from. Though reads will only be faster after a file has been read the first time and gets cached.
 
Yeah... well my gaming rig has NVMe drives in the Dell quad NVMe card needing that hard word to remember... bifurcation or whatever... LOL...

then my server with SSD Optimizer as the SSD cache to the pool saw this hit... I'm really busy the next few days, but I too will try to get this narrowed down and benchmarked again...
 
Use iperf3 (there is a Windows version) to test network throughput and rule out a network issue.
In iperf I can see 9.5-9.6 Gbit/s, while in a file copy you most likely will not see such numbers; you may want to monitor CPU usage.
Then it's worth noting (as I noted in the other topic) that you may try something called unbuffered copy to avoid the Windows cache, which is what causes the side effect of this "wall" TeleFragger talks about.
Some programs have an option to use unbuffered I/O, others are smart enough to use it when you copy big files. TeraCopy seems to use unbuffered, but I don't know if it uses a file-size threshold.
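If you want to try both of those quickly, something like this should do it (a rough sketch: the IP is whatever your server's 10gb address is, the paths are placeholders, and robocopy's /J unbuffered-I/O switch needs a reasonably recent Windows build):

# start the iperf3 server on the receiving box
.\iperf3.exe -s
# then run the client from the sending box for the default 10 seconds
.\iperf3.exe -c 10.60.1.1
# and try an unbuffered copy of one big file with robocopy
robocopy C:\BigFiles \\server\share huge-video.mkv /J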
 
OK, so this is a bit all over; I'm going to try to sum up and simplify...

1. Your concern with the network: going to say right now that your network looks absolutely fine.

2. Hitting a wall partway through a file copy is probably the cache filling up and then dropping to a slower transfer (especially for RAM disk to HDD). Dedicated storage systems have their own cache that writes one half to disk while filling the other half to keep things moving; Windows, not so much.

3. Your NAS plan, what is your priority?
3.1 I'm going to guess it's speed and redundancy? So RAID 6 with SSDs.
3.2 Speed and reliability? 7200 RPM drives in RAID 6
3.3 Storage size and cost? 5400 RPM drives in RAID 6

If it's for your work, you could get 10K RPM drives for something between 3.1 and 3.2.


Finally, you're not going to get constant 10gb network speeds; you will get that until the cache fills up, and if you're building this yourself and not using, say, a NetApp or EMC system (as examples), then your cache will fill up quickly and transfers will drop to the raw speed of the drives in the RAID config.


Ok, so I'm not sure if this is a NAS for personal use or for your work; if it's for work I would recommend a SAN instead (based on your criteria of large scientific data sets).

If it's for home use, sure thing, get a NAS, and any drives will work. The only recommendation I would have is to keep them the same make/model and try to get drives from different batches to reduce the likelihood of them all failing at the same time.


Also, if you hadn't guessed, use RAID 6 (two-drive redundancy with speed).


EDIT:

One last thing: with SSDs there is a cutoff point where speed drops dramatically depending on how full the drive is. Just something to keep in mind.
 
How much RAM do you have in the server you transfer stuff to? Enough to use 12GB of it for cache like you have shown in the RAM disk shots? Really the only way you are going to get the speed to saturate the 10gb link is with some sort of fast cache, and hopefully enough of it that you never fill it. Here are some pictures of my setup:




First, a benchmark of my PC (not the server) drive with caching turned off (970 Evo nvme):

[screenshot attached]




Now this one is also my PC, but with my caching software enabled (ram cache over the nvme):

[screenshot attached]




Ok, so now that we know my own PC is good to go on file transfers, time to look at the server. This is a test of the server drive over my network (I have a LAGed 40gb fiber network so these numbers will be higher than yours):

[screenshot attached]




So we see file transfers should be right around 1GB/s. So I do an actual transfer with a file and get this:

[screenshot attached]


That 961MB/s is somewhat close to 1GB/s. And by the end of the transfer it had climbed to 991MB/s. It is just hard to take a screenshot because the file is 3GB and it finishes so fast I can't really get a good screencap of the transfer. Anyway, pretty close to the network throughput.




But then I was thinking that I'd had this higher before. I realized I had reinstalled the OS on the server a month or two ago and never re-tuned the network. So I went ahead and tuned the driver settings and re-ran the tests.
This is the exact same drive on the server, with cache enabled, with the test done over my 80gb network (dual 40gb port LAG set to use both at the same time):

[screenshot attached]


There we go! much better.



Now doing an actual file transfer over the network with the same 3GB sized file:

[screenshot attached]





And last, a screenshot of the drive performance when I go and turn caching off:

[screenshot attached]


Eww.....

So as you can see, without cache everything is stupid slow. With cache, everything is like running a local NVMe drive in speed, but over the network. Which isn't a coincidence, because I am using a 970 EVO 1TB plus 16GB of DDR-3600 RAM as the cache on the server. 9.5 times out of 10 the files I transfer fit within the RAM cache space, but even when I transfer 80GB files I never see the speed drop. This is because the 1TB NVMe has an SLC cache a few dozen GB in size that runs at 3.5GB/s, which is faster than my transfers go anyway. So I never really get any slowdown unless I am doing a mega-sized transfer of a few hundred GB all at once, which is only once or twice a year.
 
Not that I want to question your results, but... copying a 3GB file over ~3GB/s of bandwidth takes about 1 second, which probably wouldn't even trigger the Windows copy progress dialog (it has around a 1-second threshold on copy/move time, and only beyond that will it pop up the progress dialog).
How did you manage to take that screenshot anyway?
 
So first things first... I understand that I have OLD 10gb NICs and they are working, albeit maybe not as well as a newer 10gb card, but I do get 750-ish MB/s...







OK, so this is a bit all over; I'm going to try to sum up and simplify...

3. Your NAS plan, what is your priority?
3.1 I'm going to guess it's speed and redundancy? So RAID 6 with SSDs.
3.2 Speed and reliability? 7200 RPM drives in RAID 6
3.3 Storage size and cost? 5400 RPM drives in RAID 6


3 - NAS isn't a priority but more of a learning-lab thing... ESXi, and I want to have an iSCSI datastore and run my VMs from it off of FreeNAS
3.1 speed
3.2 speed
3.3 cost


NAS isn't my priority... speed never was either... my end goal is to be able to just copy files back and forth from my gaming rig to my server, which is Win10 to Server 2016. My cards won't install firmware 2.9.1200 or up as it doesn't exist, and I can't figure out how to make my own firmware so they could support RDMA or whatever it was (haven't had coffee yet)...

For the NAS... I just need to set up FreeNAS, give file copies a whirl, and see if that issue goes away.

I'm told a NAS doesn't need much of a CPU, not much RAM, etc., but then when I read the FreeNAS hardware recommendations...
they recommend a Xeon (to support ECC RAM)
they recommend ECC - it's practically blasphemy if you don't use it

All of them are using WD Red or IronWolf drives spinning at 5400RPM, and some are using 40gb networks... well, that is what got me. I can't get 10gb file copies without hitting cache limits, so I wasn't sure why they aren't.



For my personal files... family pics, family movies, data (downloaded bills, etc.) and the sort...
I run Server 2016 with the Essentials role (love it)
StableBit DrivePool - love it too for MANY reasons...
and now StableBit SSD Optimizer for cache to the pool...

If it's for your work, you could get 10K RPM drives for something between 3.1 and 3.2.

Home... work won't pay for anything, so I scarf up what we're discarding to my room and swap in better drives/hardware all the time...


Also, if you hadn't guessed, use RAID 6 (two-drive redundancy with speed).

Don't care about redundancy, as I never keep stuff on it that I'm afraid to lose.


EDIT:

One last thing: with SSDs there is a cutoff point where speed drops dramatically depending on how full the drive is. Just something to keep in mind.


so a 9gb file copied to a 120gb or 500gb ssd... is where I'm seeing this...

now I am on windows, so I get that.. and that is what I'm trying to figure out.



Not that I want to question your results, but... copying a 3GB file over ~3GB/s of bandwidth takes about 1 second, which probably wouldn't even trigger the Windows copy progress dialog (it has around a 1-second threshold on copy/move time, and only beyond that will it pop up the progress dialog).
How did you manage to take that screenshot anyway?

this is what got me... on my tests... was too busy last night to do anything...

but when EniGmA1987 helped me on my 10gb network thread, I was copying 4GB files...

it wasn't until I decided to try a 9GB file (think it was 9... coulda been 7, but pretty sure 9)
that I noticed this issue...
 
How much RAM do you have in the server you transfer stuff to? Enough to use 12GB of it for cache like you have shown in the RAM disk shots? Really the only way you are going to get the speed to saturate the 10gb link is with some sort of fast cache, and hopefully enough of it that you never fill it. Here are some pictures of my setup:

16gb


First, a benchmark of my PC (not the server) drive with caching turned off (970 Evo nvme):


Now this one is also my PC, but with my caching software enabled (ram cache over the nvme):

How are you turning cache on and off? Is it built into the 970 Evo NVMe?


But then I was thinking that I'd had this higher before. I realized I had reinstalled the OS on the server a month or two ago and never re-tuned the network. So I went ahead and tuned the driver settings and re-ran the tests.
This is the exact same drive on the server, with cache enabled, with the test done over my 80gb network (dual 40gb port LAG set to use both at the same time):

So what did you tune in the driver settings? I'm very curious...


Eww.....

So as you can see, without cache everything is stupid slow. With cache, everything is like running a local NVMe drive in speed, but over the network. Which isn't a coincidence, because I am using a 970 EVO 1TB plus 16GB of DDR-3600 RAM as the cache on the server. 9.5 times out of 10 the files I transfer fit within the RAM cache space, but even when I transfer 80GB files I never see the speed drop. This is because the 1TB NVMe has an SLC cache a few dozen GB in size that runs at 3.5GB/s, which is faster than my transfers go anyway. So I never really get any slowdown unless I am doing a mega-sized transfer of a few hundred GB all at once, which is only once or twice a year.

Eww.... is one way to put it.. HAH

So I'm still curious how you're using the Evo plus memory as cache... I just don't know how you're doing that, and yes, that could help me big time, as I am using 16GB of DDR4 memory and will add more if needed...

All my tests have been using a RAM drive that I disable after testing since I don't need it afterwards, so you're losing me when you say

turned cache on
turned cache off

using this stuff for cache....



Plus, that was the entire point of my thread: to learn the bottlenecks and how to get around them! Appreciate all thoughts...
 
Not that I want to question your results, but... copying a 3GB file over ~3GB/s of bandwidth takes about 1 second, which probably wouldn't even trigger the Windows copy progress dialog (it has around a 1-second threshold on copy/move time, and only beyond that will it pop up the progress dialog).
How did you manage to take that screenshot anyway?
I have my finger ready on Print Screen, and as soon as I click copy I wait about a quarter second and then take the screen cap. I usually get it in the middle there, as the window doesn't even really pop up until it is near the middle of the transfer bar, and then by the time I think "maybe I have enough time to take another one" the dialog box is gone. lol. It's actually a 3.2GB file, and the speed is a bit under 3GB/s, so it does pop up, and it is on screen for around 3/4 of a second. It should stay up longer, but Windows is a tiny bit slow to pop it up.

3 - NAS isn't a priority but more of a learning-lab thing... ESXi, and I want to have an iSCSI datastore and run my VMs from it off of FreeNAS
3.1 speed
3.2 speed
3.3 cost

For high speed while keeping costs down, you could do a group of 6-7 drives around 4-8TB and set them up as either RAID 5 if you want more space, or RAID 6 if you want a little extra redundancy. Then get one of those new 970 EVO Plus drives that are about to launch in a 1TB capacity (for $250) and use that as a cache for the RAID array. Just tell it to split the cache 50/50 for read/write, and set it to free the write cache on written data. That way the write cache doesn't really fill up and you keep the drive 25-50% empty at all times for good speed.


And that is running your Plex too, right? Maybe just use 2-4GB of the RAM for cache then, and use a solid-state drive as a much larger cache once the RAM is used up.


How are you turning cache on and off? Is it built into the 970 Evo NVMe?

I use PrimoCache software. Been using it since way back when it was called FancyCache in the beta.


All my tests have been using a RAM drive that I disable after testing since I don't need it afterwards, so you're losing me when you say

turned cache on
turned cache off

using this stuff for cache...

Plus, that was the entire point of my thread: to learn the bottlenecks and how to get around them! Appreciate all thoughts...

With the RAM disk, you are copying a file into the memory disk, then copying it to the server's memory disk, right? So basically it is just a super tiny but very fast drive that you have to use manually. It isn't really a cache at all. With actual software making a cache space over your drives, you don't have to do anything beyond setting it up; the cache works on its own behind the scenes, seamlessly. You could get an Optane drive (other SSDs should work too) and set up Intel's cache software, or get other third-party software like PrimoCache, ExpressCache, or MaxVeloSSD.


So what did you tune in the driver settings? I'm very curious...
In the advanced area of the driver in Device Manager, I have a TON of options. I turned up my receive-side scaling (how many processors the NIC uses) to make use of all my CPU threads, changed the transmit and receive interrupt type from normal to low latency, tuned my send and receive buffers, and turned jumbo frames back on. Jumbo frames help the most with sequential transfer speed, and the low-latency tuning and buffers help bring up the small-file transfers. Most people don't like jumbo frames, but I use them on my internal 80gb network and just let my router split the packets down to 1500 when necessary for external traffic. Works well and I have never had an issue. The router CPU never even goes above 10%, so it's no big deal to me to let the router do more work.
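(Side note: most of those same knobs can be flipped from PowerShell instead of clicking through Device Manager. A sketch only: the adapter name "Ethernet 2" and the DisplayName strings are just examples and vary by driver, so list the properties first and match what your Mellanox/Intel driver actually exposes.)

# list every advanced property the driver exposes and its current value
Get-NetAdapterAdvancedProperty -Name "Ethernet 2"
# make sure RSS is enabled so the NIC can spread work across CPU cores
Enable-NetAdapterRss -Name "Ethernet 2"
# example only: turn on jumbo frames if the driver exposes a matching property
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"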
 
For high speed while keeping costs down, you could do a group of 6-7 drives around 4-8TB and set them up as either RAID 5 if you want more space, or RAID 6 if you want a little extra redundancy. Then get one of those new 970 EVO Plus drives that are about to launch in a 1TB capacity (for $250) and use that as a cache for the RAID array. Just tell it to split the cache 50/50 for read/write, and set it to free the write cache on written data. That way the write cache doesn't really fill up and you keep the drive 25-50% empty at all times for good speed.

Well, currently my file server, as mentioned numerous times, hah:
Server 2016
16GB RAM
Xeon
128GB SSD - OS
500GB SSD - pool cache
3x 2TB WD Black - in pool
2x 4TB WD Black - in pool

StableBit DrivePool using all those drives in a pool with the SSD as a cache... new plugin that came out... love this software... cost me $20, and it can use different-size drives. Also, vs. other RAID, it is nice that you can set folder-level duplication...



And that is running your Plex too, right? Maybe just use 2-4GB of the RAM for cache then, and use a solid-state drive as a much larger cache once the RAM is used up.

nope .. just data repository... Plex is on another i7-6700...confirmed


I use PrimoCache software. Been using it since way back when it was called FancyCache in the beta.

SWEET!!! ill check this out...


In the advanced area of the driver in Device Manager, I have a TON of options. I turned up my receive-side scaling (how many processors the NIC uses) to make use of all my CPU threads, changed the transmit and receive interrupt type from normal to low latency, tuned my send and receive buffers, and turned jumbo frames back on. Jumbo frames help the most with sequential transfer speed, and the low-latency tuning and buffers help bring up the small-file transfers. Most people don't like jumbo frames, but I use them on my internal 80gb network and just let my router split the packets down to 1500 when necessary for external traffic. Works well and I have never had an issue. The router CPU never even goes above 10%, so it's no big deal to me to let the router do more work.

ahhh yeah ive got options but not like that
 
Last edited:
So here are two pics I took a bit ago, when I asked a question about the StableBit SSD Optimizer... no one has replied, but these pics show it...

[screenshots attached: pre and post]





and if you wanna hear my cruddy voice but also have a visual...

https://flic.kr/p/2dbPGNd
 
So here are two pics I took a bit ago, when I asked a question about the StableBit SSD Optimizer... no one has replied, but these pics show it...

[screenshots attached: pre and post]



and if you wanna hear my cruddy voice but also have a visual...

https://flic.kr/p/2dbPGNd

My guess would be the issue is with the StableBit software more than anything else. It is probably causing a lot more disk and CPU usage to software-pool data like that, and it could be that their SSD cache software just isn't very good. The behavior of their cache looks way off, like it isn't even truly being used.
Try installing PrimoCache and disabling the SSD cache plugin in StableBit. In Primo, make a cache task with that same SSD, test file transfer speed, and see if it hits the same cliff after the same size. Be sure to set defer-writes on; I usually do a 30-second time. The SSD part of the cache task in Primo is called "L2 storage", where RAM is considered L1. After you test the SSD performance in the new software, play around and add some RAM to it as well and see how things go. You can also mess with the ratio of read to write space in the cache, as well as slightly change how the write cache behaves (how full it lets things get).
 
Will do... not sure when tonight. At work now... 1:24pm... done at 3:30pm... the 65-mile drive home in this rain (downpour) will suck, so maybe home at 5:30pm... then kiddo has MMA (starts training for 2nd degree black belt soon!!!) so
maybe... 9pm... maybe... UGHHHHHHHHH

and thanks!

[:-P>
 
In the advanced area of the driver in Device Manager, I have a TON of options. I turned up my receive-side scaling (how many processors the NIC uses) to make use of all my CPU threads, changed the transmit and receive interrupt type from normal to low latency, tuned my send and receive buffers, and turned jumbo frames back on. Jumbo frames help the most with sequential transfer speed, and the low-latency tuning and buffers help bring up the small-file transfers. Most people don't like jumbo frames, but I use them on my internal 80gb network and just let my router split the packets down to 1500 when necessary for external traffic. Works well and I have never had an issue. The router CPU never even goes above 10%, so it's no big deal to me to let the router do more work.

ok so for me..
Receive Side Scaling - Enabled (was set)
Rx Interrupt - just set to low latency
Tx Interrupt - just set to low latency

So I've been pulling my hair out (what little I have left)...
Sitting here on my gaming rig now that I have the switch, and I can't get over 450MB/s...
Tried so much, then I said wait... all this testing I've been doing is from my Plex box to my server, as they are on top of each other.
So I went and tested that... yup, 750MB/s+...
GRRRRRRRRRR, so why 450MB/s?...

Another weird anomaly: my gaming rig is the machine I started my 10gb thread on that you've been helping me with... and I've done screenshots of a command that no longer works on my machine...
ibstat...
it just returns a command prompt... wonder if there is something odd there and that is hurting it.

I've tried drivers, etc., but maybe I need to uninstall it all and try again, as my Plex and Server do not have the WinMST pack installed and run fast...
 
Run some iperf tests on the computers and see what the network gets:


Will do... my machine that is only getting 400s is via ramdrive to ramdrive, so it's definitely the network on that one machine... most likely something with that machine. I even swapped cards, as I have a few, and it's still not as fast... albeit almost half as fast...
 
here we go...
this is from my machine that I say isn't going as fast as my other...


PS C:\temp\iperf-3.1.3-win64> .\iperf3.exe -c 10.60.1.1
Connecting to host 10.60.1.1, port 5201
[ 4] local 10.60.1.6 port 50102 connected to 10.60.1.1 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 1.08 GBytes 9.29 Gbits/sec
[ 4] 1.00-2.00 sec 1020 MBytes 8.56 Gbits/sec
[ 4] 2.00-3.00 sec 1.03 GBytes 8.81 Gbits/sec
[ 4] 3.00-4.00 sec 1.04 GBytes 8.90 Gbits/sec
[ 4] 4.00-5.00 sec 1.04 GBytes 8.92 Gbits/sec
[ 4] 5.00-6.00 sec 1.07 GBytes 9.19 Gbits/sec
[ 4] 6.00-7.00 sec 1.04 GBytes 8.94 Gbits/sec
[ 4] 7.00-8.00 sec 1.02 GBytes 8.75 Gbits/sec
[ 4] 8.00-9.00 sec 1.07 GBytes 9.20 Gbits/sec
[ 4] 9.00-10.00 sec 1.06 GBytes 9.10 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 10.4 GBytes 8.97 Gbits/sec sender
[ 4] 0.00-10.00 sec 10.4 GBytes 8.96 Gbits/sec receiver
iperf Done.
PS C:\temp\iperf-3.1.3-win64> .\iperf3.exe -c 10.66.1.230
Connecting to host 10.66.1.230, port 5201
[ 4] local 10.66.10.34 port 50104 connected to 10.66.1.230 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 112 MBytes 942 Mbits/sec
[ 4] 1.00-2.00 sec 112 MBytes 943 Mbits/sec
[ 4] 2.00-3.00 sec 112 MBytes 942 Mbits/sec
[ 4] 3.00-4.00 sec 112 MBytes 943 Mbits/sec
[ 4] 4.00-5.00 sec 112 MBytes 941 Mbits/sec
[ 4] 5.00-6.00 sec 112 MBytes 943 Mbits/sec
[ 4] 6.00-7.00 sec 112 MBytes 942 Mbits/sec
[ 4] 7.00-8.00 sec 112 MBytes 942 Mbits/sec
[ 4] 8.00-9.00 sec 112 MBytes 942 Mbits/sec
[ 4] 9.00-10.00 sec 112 MBytes 942 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 1.10 GBytes 942 Mbits/sec sender
[ 4] 0.00-10.00 sec 1.10 GBytes 942 Mbits/sec receiver
iperf Done.
PS C:\temp\iperf-3.1.3-win64>







then here is my machine I feel is faster...

PS C:\Test\iperf-3.1.3-win64> .\iperf3.exe -c 10.60.1.1
Connecting to host 10.60.1.1, port 5201
[ 4] local 10.60.1.2 port 61126 connected to 10.60.1.1 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 880 MBytes 7.38 Gbits/sec
[ 4] 1.00-2.00 sec 880 MBytes 7.39 Gbits/sec
[ 4] 2.00-3.00 sec 884 MBytes 7.42 Gbits/sec
[ 4] 3.00-4.00 sec 884 MBytes 7.41 Gbits/sec
[ 4] 4.00-5.00 sec 888 MBytes 7.45 Gbits/sec
[ 4] 5.00-6.00 sec 892 MBytes 7.48 Gbits/sec
[ 4] 6.00-7.00 sec 888 MBytes 7.45 Gbits/sec
[ 4] 7.00-8.00 sec 898 MBytes 7.53 Gbits/sec
[ 4] 8.00-9.00 sec 882 MBytes 7.40 Gbits/sec
[ 4] 9.00-10.00 sec 900 MBytes 7.55 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 8.67 GBytes 7.45 Gbits/sec sender
[ 4] 0.00-10.00 sec 8.67 GBytes 7.45 Gbits/sec receiver
iperf Done.
PS C:\Test\iperf-3.1.3-win64> .\iperf3.exe -c 10.66.1.230
Connecting to host 10.66.1.230, port 5201
[ 4] local 10.66.10.16 port 61129 connected to 10.66.1.230 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 112 MBytes 940 Mbits/sec
[ 4] 1.00-2.00 sec 112 MBytes 942 Mbits/sec
[ 4] 2.00-3.00 sec 112 MBytes 941 Mbits/sec
[ 4] 3.00-4.00 sec 112 MBytes 940 Mbits/sec
[ 4] 4.00-5.00 sec 112 MBytes 942 Mbits/sec
[ 4] 5.00-6.00 sec 112 MBytes 942 Mbits/sec
[ 4] 6.00-7.00 sec 112 MBytes 942 Mbits/sec
[ 4] 7.00-8.00 sec 112 MBytes 942 Mbits/sec
[ 4] 8.00-9.00 sec 112 MBytes 942 Mbits/sec
[ 4] 9.00-10.00 sec 112 MBytes 942 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec sender
[ 4] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec receiver
iperf Done.






So are we saying that the Windows screen that shows speed/time is just wrong? And that I am actually running faster on my 10gb gamer than my 10gb Plex box?
 
The network tests do in fact say you are getting 10gb speed on your network between your gamer PC and the server. The problem is drive related.


Did you install the PrimoCache software and try that? I would try putting the trial version of it on your gamer PC and the server. Set an 8GB cache on both computers if possible, and manually partition it as 4GB read and 4GB write.
Then leave the PrimoCache window open on both computers. Find a 3 to 3.5GB file and copy it to the server. As soon as you start sending it, watch to see if the free space in the read cache on your PC goes down, then check whether the free space in the write cache went down on the server. As soon as you are done transferring the file, delete it on the server and then copy the same file again. This time it should be in cache on your gamer PC, so the speed should easily max out your network.

If that worked, then it is simply a drive-related bottleneck on your PC, as SATA-based SSDs do not have the speed to saturate a 10gb network. The only thing faster than 10gb right now is newer NVMe drives.
 
I did install PrimoCache but didn't get to start messing with it. Got busy.

Thing is... 400 is ramdisk to ramdisk, so it's not a drive bottleneck.
I'll sit down and take my time to look at things...
 
If you are already transferring to and from RAM and it still runs 400-450MB/s, then honestly I don't know. It has to be something on the computer itself, as iperf shows you are clearly getting near 10gbit on it. It even shows higher than your Plex-to-server speeds. So it has to be something on the computer itself causing a major filesystem bottleneck.
 
May reinstall Windows. I've got a backup of my machine, so if it doesn't do anything I can just restore...
 
If you are already transferring to and from RAM and it still runs 400-450MB/s, then honestly I don't know. It has to be something on the computer itself, as iperf shows you are clearly getting near 10gbit on it. It even shows higher than your Plex-to-server speeds. So it has to be something on the computer itself causing a major filesystem bottleneck.

Thinking about this more... the only drives in my machine are NVMe on a Dell quad card, so it's not like they are slow. I'll be playing with this more tomorrow... during the Super Bowl (boycotting).
But got my HP ProCurve switch updated... woooot... and no new features.

there really isn't much to set in these things.
 
Set up a new box... it is now an i5-6600K vs. a Xeon... wanted the Xeon but had this handy...

Server 2019...
5x 500GB Crucial SSDs in a StableBit pool, and copying to that is a world of difference...


10gb

[screenshot attached]



1gb - same file

[screenshot attached]
 
Nice. That is your full 10gb speed right there.
yes it is!!!!!!!!!

Got enough SSDs to be happy with... and still enough spinning drives to offload to for redundancy. Not sure how I'm going to do that yet... contemplating just keeping my important fast-access stuff on SSD and, when I copy to SSD, robocopying over to the spinning drives to duplicate.
I have an SSD cache for StableBit on the spinning drives, so hopefully that helps the copying, but if not, I'll throw it into the SSD pool...
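If I go the robocopy route, something like this run on a schedule is probably all it needs (a sketch; the paths and log file are placeholders, and /MIR mirrors the SSD pool onto the spinning pool, so deletes carry over too):

# mirror the fast SSD pool folder onto the spinning-drive pool
robocopy D:\FastPool E:\SlowPool /MIR /R:1 /W:1 /LOG:C:\Logs\ssd-mirror.log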


antec 900... got life again... gonna replace fans as these are toast..

[photos attached]
 