Disk performance SLOWER over GigE network?

VulcaN

[H]ard|Gawd
Joined
Aug 22, 2001
Messages
1,917
I thought that over a gigabit network the hard-drive access speed should be the limiting factor, yet when I benchmark or transfer files over my network it's slower than doing a disk-to-disk copy on the server.

Setup:
Client: Windows 7, Intel CT gigabit NIC, Intel 80 GB G2 SSD
Server: Windows Server 2008 R2, Intel CT gigabit NIC, 2x Hitachi Deskstar 2 TB in JBOD

To simplify things I have removed the network and connected the client directly to the server using a Cat 6 crossover cable.
======================================

If I transfer a file on the server from one Hitachi to the other, I get an average of 120-125 MB/s once the initial burst has dissipated.

If I transfer a file from the SSD in the client computer to a Hitachi in the server, I get an average of 90-95 MB/s once the initial burst has dissipated.

======================================

If I run Crystal Disk Mark on the server, on one of the hitachi storage drives I get this:
[screenshot: CrystalDiskMark results, run locally on the Hitachi]


If I run it across the network on the same drive, I get slower results:
[screenshot: CrystalDiskMark results, run over the network]


I tried enabling jumbo frames (I ran a ping with an 8972-byte payload to be sure it was working) but actually got WORSE results!

[screenshot: CrystalDiskMark results with jumbo frames enabled]



What could be causing the inconsistency???
 
Very good speeds IMO. SMB/CIFS is not that high-performance for single data streams; it is virtually always slower than a simple FTP transfer. Still, your scores are quite high. The only thing that lags behind is the read speed.
 
So it's SMB that's slowing me down?? :mad: What happens if I put an SSD in the server? Will it still be that slow since SMB is the culprit, or does the slowness of SMB scale with the drive speed? (i.e., is it always a certain % slower than your actual drive speed due to overhead?)
 
I thought that over a gigabit network the hard-drive access speed should be the limiting factor, yet when I benchmark or transfer files over my network it's slower than doing a disk-to-disk copy on the server. [...] What could be causing the inconsistency???

Actually, you are hitting the limit of Gig-E. Even Gig-E is about 2.5 times slower than SATA II in maximum sustainable transfer speed. And Gig-E is not 1 GB/s but 1 Gbit/s, so the maximum sustainable read or write speed through Gig-E is just over 100 MB/s. Your read speed is quite a bit lower than that due to the use of SMB.
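To put rough numbers on that, here is a back-of-the-envelope sketch (assuming a standard 1500-byte MTU and plain IPv4+TCP headers; real traffic with TCP options lands a little lower):

```python
# Back-of-the-envelope ceiling for one TCP stream over gigabit Ethernet.
# Assumes a standard 1500-byte MTU and plain IPv4+TCP headers (no options).

LINE_RATE = 1_000_000_000   # gigabit Ethernet: 1 Gbit/s on the wire

MTU = 1500                  # IP packet bytes carried per frame
ETH_OVERHEAD = 38           # preamble+SFD (8) + header/FCS (18) + interframe gap (12)
HEADERS = 40                # IPv4 header (20) + TCP header (20)

payload = MTU - HEADERS     # usable file bytes per frame
wire = MTU + ETH_OVERHEAD   # bytes the wire is busy per frame

max_mb_s = LINE_RATE / 8 * (payload / wire) / 1_000_000
print(f"{payload / wire:.1%} efficient -> ~{max_mb_s:.0f} MB/s max")
# -> 94.9% efficient -> ~119 MB/s max
```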
 
Yeah, I just thought it would max out at ~125 MB/s, but I guess there is quite a bit of overhead and SMB only adds to that.

Any idea why reading is so much more taxing on SMB than writing? I always thought hardware took longer to write data than to read it.
 
Any idea why reading is so much more taxing on SMB than writing?

Because receiving data (Rx) is far more taxing than transmitting data (Tx), from a networking perspective.
 
If I transfer a file from the SSD in the client computer to a Hitachi in the server, I get an average of 90-95 MB/s once the initial burst has dissipated.

BTW, 90-95 MB/s to a single Hitachi is not bad. You could be on the slower portion of the disk, as those drives are not that fast.

Still, a 90-95 MB/s minimum is pretty darn good over GigE.
 
I have an HP ProCurve switch and my computers have gigabit NICs (Realtek). I also use high-quality Belden Cat 6 cables, and my pfSense box, which is the DHCP server, also has a gigabit NIC (Intel).

When I transfer files over the network, I only get around 20 MB/s. Is there something wrong with my setup? I was amazed that the OP reached 100 MB/s.
 
Try removing the crossover cable and using a straight-through. It may or may not make a difference (it shouldn't), but it is worth a try.

Still, your performance is quite good, considering it's a single drive in JBOD.
 
When I transfer files over the network, I only get around 20 MB/s. Is there something wrong with my setup?

Try updating your Realtek drivers. Those NICs are crummy, but you should be able to do better than that. I top 80 MB/s with the one in my workstation.
 
I have all updated drivers. That's weird; an HP ProCurve switch and an Intel gigabit NIC are, as far as I know, good devices. Why do I have such a low transfer speed?
 
If they are not RAIDed, then yes. Also, HDDs only sustain high transfer rates when the workload is sequential in nature; random I/O is far slower.

Also, if your network interface is on PCI, or the storage is on PCI (or worse, both on PCI), then your performance would be quite a bit lower as well.

But even without any hardware bottlenecks, SMB/CIFS still manages to perform poorly (courtesy of Microsoft, perhaps); every other protocol performs properly except SMB/CIFS. So you could try a simple FTP transfer; if that gets you much higher speeds, then the protocol is the weak link.
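If you want to time an FTP transfer rather than eyeballing a client's progress bar, here is a minimal sketch using Python's standard ftplib (the server address, login, and filename are placeholders for your setup; discarding the data as it arrives keeps the client's disk out of the measurement):

```python
# Quick-and-dirty FTP download throughput test using the standard library.
# SERVER, USER, PASSWORD, and REMOTE_FILE are placeholders for your setup.
import time
from ftplib import FTP

SERVER = "192.168.1.10"      # hypothetical server address
USER = "test"
PASSWORD = "test"
REMOTE_FILE = "bigfile.bin"  # a large file already on the server

ftp = FTP(SERVER)
ftp.login(USER, PASSWORD)

received = 0
def count(chunk):
    # Count bytes and throw them away, so the client disk is not a factor.
    global received
    received += len(chunk)

start = time.time()
ftp.retrbinary(f"RETR {REMOTE_FILE}", count, blocksize=64 * 1024)
elapsed = time.time() - start
ftp.quit()

print(f"{received / elapsed / 1_000_000:.1f} MB/s")
```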
 
I have all updated drivers. That's weird; an HP ProCurve switch and an Intel gigabit NIC are, as far as I know, good devices. Why do I have such a low transfer speed?

What are you copying? Big files, small files? Is your source disk generally defragmented?
 
But even without any hardware bottlenecks, SMB/CIFS still manages to perform poorly (courtesy of Microsoft, perhaps); every other protocol performs properly except SMB/CIFS. So you could try a simple FTP transfer; if that gets you much higher speeds, then the protocol is the weak link.

CIFS can be sensitive to things like TCP window scaling and the other RFC 1323 extensions in conjunction with some driver configurations, but when those are the issue, transfers drop to 1 MB/s or less for all loads. If he's getting 20 MB/s, it's because his storage can't go any faster or there's a network issue like a duplex mismatch.
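One way to take both the disks and the file-sharing protocol out of the picture is a raw TCP throughput test; iperf is the usual tool, but here is a minimal sketch of the same idea (the port number is arbitrary and the addresses are whatever your machines use):

```python
# Minimal raw-TCP throughput test: run "python tcptest.py server" on one
# box and "python tcptest.py client <server-ip>" on the other. The client
# sends zeros straight from memory, so no disk or SMB is involved.
import socket, sys, time

PORT = 5001                  # arbitrary test port
CHUNK = b"\0" * (64 * 1024)
SECONDS = 10                 # how long the client transmits

if sys.argv[1] == "server":
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(64 * 1024)
        if not data:         # client closed the connection
            break
        total += len(data)
    elapsed = time.time() - start
    print(f"received {total / elapsed / 1_000_000:.1f} MB/s")
else:
    cli = socket.create_connection((sys.argv[2], PORT))
    end = time.time() + SECONDS
    while time.time() < end:
        cli.sendall(CHUNK)
    cli.close()
```

If this shows ~110+ MB/s while SMB copies sit at 20 MB/s, the wire is fine and the problem is the storage or the protocol.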
 
What are you copying? Big files, small files? Is your source disk generally defragmented?

I'm usually copying Blu-ray files, around 30-50 GB in size. The Realtek NICs in my workstations are on-board, and I believe they are PCI-E, as indicated by the Realtek driver when I install it.

The Intel gigabit NIC in my pfSense box is also PCI-E.

Both source and destination are defragmented. The source workstation is running RAID 0 with Samsung Spinpoint F3s.

So the main culprit here would be the protocol that Win7 uses to transfer files?
 
You will never get 125 MB/s over gigabit Ethernet. The absolute best you can hope for is 115-117 MB/s, as you will cap out at around 940-950 Mbit/s (tops) after overhead.

I am doing about 100 MB/s with rsync to my OpenSolaris box; if it weren't for a CPU limitation that throttles writes every 5 seconds, that number would be more like 112 MB/s, which is just about best case. I would be happy with 95; as others have said, the Hitachi will not be able to go that fast once it gets to the slower section of the drive.
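Using the same header arithmetic as earlier in the thread, you can see why jumbo frames only buy a few MB/s even in theory (nominal header sizes, ignoring TCP options), which is one reason they are sometimes not worth the driver headaches:

```python
# Compare theoretical single-stream efficiency at standard vs jumbo MTU.
# Nominal 1500/9000-byte MTUs and plain IPv4+TCP headers assumed.
ETH_OVERHEAD = 38    # preamble + Ethernet header/FCS + interframe gap
HEADERS = 40         # IPv4 (20) + TCP (20), no options

for mtu in (1500, 9000):
    payload = mtu - HEADERS
    wire = mtu + ETH_OVERHEAD
    mb_s = 125 * payload / wire          # 125 MB/s raw gigabit line rate
    print(f"MTU {mtu}: {payload / wire:.1%} efficient, ~{mb_s:.0f} MB/s max")
# MTU 1500: ~94.9%, ~119 MB/s;  MTU 9000: ~99.1%, ~124 MB/s
```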
 
Try removing the crossover cable and using a straight-through. It may or may not make a difference (it shouldn't), but it is worth a try.

Just to expand on this comment: the GigE standard is supposed to handle crossover automatically (auto MDI-X detects the transmit and receive pairs), so no crossover cable should be required unless the network equipment is not following the spec.

I am surprised that anyone actually owns a Cat 6 crossover cable...
 
kevin:

I suggest you start a new thread, and provide more details about your setup.

Also, it would be a good idea to try a local copy (no network involved) to see what speed you get.
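If you want a number rather than watching Explorer's copy dialog, here is a quick way to time a local copy (the paths are placeholders; use a file larger than your RAM so the OS write cache doesn't flatter the result):

```python
# Time a local disk-to-disk copy to get a baseline with no network involved.
# SRC and DST are placeholders; put them on the drives you want to test.
import os, shutil, time

SRC = r"D:\test\bigfile.bin"   # hypothetical large test file
DST = r"E:\test\bigfile.bin"   # destination on the other drive

start = time.time()
shutil.copyfile(SRC, DST)
elapsed = time.time() - start

size_mb = os.path.getsize(SRC) / 1_000_000
print(f"{size_mb / elapsed:.1f} MB/s")
```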
 
Just to expand on this comment: the GigE standard is supposed to handle crossover automatically (auto MDI-X detects the transmit and receive pairs), so no crossover cable should be required unless the network equipment is not following the spec.

I am surprised that anyone actually owns a Cat 6 crossover cable...

Really?? I wonder if that puts any more (or less) stress on the devices, having them figure it out...

I bought it on Monoprice, but I was actually using it to connect a GigE switch to one of the LAN ports on a 10/100 router.
 
A 10/100 switch may not have auto-crossover, so I can see using a crossover cable for that. But what boggled me was a Cat 6 crossover cable. Cat 5 I could understand.

I don't think there is any overhead to auto-crossover (other than it probably taking a little longer to establish the link when you first plug in the cable). Once the link is negotiated and the TX/RX lines are connected properly, performance should be normal.
 
You can get InfiniBand cards and cables quite cheap on eBay if you're disappointed with GigE. The switches are super expensive, though, so that's a no. You can also bond multiple GigE connections together, but that's only useful for multiple streams.
 