crimpshrine
n00b
- Joined
- Jan 23, 2008
- Messages
- 58
Please help.. pulling my hair out here.
I have a server that I am using as my NAS. It is connected to my local switch (by a 4-foot Cat 6 cable) through an Intel gigabit NIC on the PCI Express bus. It runs CentOS 6.0; I have tried the newest stable Samba that ships with CentOS 6.0, and even the new 3.6.0 release with SMB2 enabled.
The NAS is configured as JBOD; no RAID involved.
The workstation I am testing from connects to the same switch with an Intel gigabit adapter (also on the PCI Express bus) by a 6-foot Cat 6 cable (so there are 12 feet of cable between the testing workstation and the server, plus a gigabit switch that supports jumbo frames).
Locally on the NAS I can copy 16+ GB files from one drive to another at around 95 megabytes per second for the entire transfer.
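For anyone who wants to reproduce that local drive-to-drive test, a sketch like this works (the paths are placeholders; point SRC and DST at directories on two different physical drives, and note that a file this small may be served from the page cache, so for a realistic number use a file larger than RAM, as in the 16 GB test above):

```shell
#!/bin/sh
# Rough sequential copy-throughput check between two drives.
# SRC and DST are hypothetical mount points; adjust for your layout.
SRC=${SRC:-/tmp/nas-src}
DST=${DST:-/tmp/nas-dst}
mkdir -p "$SRC" "$DST"

# Create a 256 MB test file (use something much larger on a real run).
dd if=/dev/zero of="$SRC/big.bin" bs=1M count=256 2>/dev/null
sync

# Copy it back out; GNU dd prints the throughput on stderr when done.
dd if="$SRC/big.bin" of="$DST/big.bin" bs=1M
```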
From the workstation I can write to ANY drive in the NAS over the network at a sustained 100 megabytes per second; it never drops below 98 MB/s and goes as high as 112.
Reading from the NAS to the workstation, the max I can hit is 40 megabytes per second. The workstation is a Windows 7 64-bit machine. Not even half the write performance!
I installed vsftpd on the NAS and can upload and download (tested with a 20 GB Ghost image) at 100 megabytes per second BOTH ways, for the entire transfer, from any drive on the NAS.
I have tried messing with jumbo frames and window sizes; nothing seems to increase the read performance from the NAS. You might say 40 megabytes per second is fast, but losing more than half the throughput is significant, especially when copying DVDs and other large media.
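For reference, these are the Samba 3.x [global] knobs people usually point at for asymmetric read speed. All of these are suggestions to test one at a time, not a known fix, and the buffer/threshold values are examples, not tuned numbers:

```ini
[global]
    # Raw reads/writes (normally on by default in 3.x, but worth confirming)
    read raw = yes
    write raw = yes
    # Socket tuning; the SO_RCVBUF/SO_SNDBUF sizes here are examples
    socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072
    # Async I/O thresholds in bytes (needs Samba built with AIO support)
    aio read size = 16384
    aio write size = 16384
    # Let smbd use sendfile() to push file data to the client
    use sendfile = yes
```

If one of these changes the read numbers, that narrows the problem to Samba's I/O path rather than the network or the disks.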
Here are some of my tests to show performance:
192.168.0.4 is the NAS
C:\Downloads>iperf -c 192.168.0.4
------------------------------------------------------------
Client connecting to 192.168.0.4, TCP port 5001
TCP window size: 63.0 KByte (default)
------------------------------------------------------------
[148] local 127.0.0.1 port 49883 connected with 192.168.0.4 port 5001
[ ID] Interval Transfer Bandwidth
[148] 0.0-10.0 sec 1.08 GBytes 930 Mbits/sec
C:\Downloads>iperf -c 192.168.0.4 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.4, TCP port 5001
TCP window size: 63.0 KByte (default)
------------------------------------------------------------
[176] local 192.168.0.182 port 49905 connected with 192.168.0.4 port 5001
[188] local 192.168.0.182 port 5001 connected with 192.168.0.4 port 54135
[ ID] Interval Transfer Bandwidth
[188] 0.0-10.0 sec 903 MBytes 757 Mbits/sec
[176] 0.0-10.0 sec 769 MBytes 644 Mbits/sec
ALL of my drives in the NAS perform the same as the results below (they are all the same model of SATA II drive):
[root@misc01 src]# hdparm -Tt /dev/sdc
/dev/sdc:
Timing cached reads: 2416 MB in 2.00 seconds = 1208.19 MB/sec
Timing buffered disk reads: 384 MB in 3.00 seconds = 127.87 MB/sec
Does anyone have a nicely tuned Linux NAS that manages to sustain 100 MB/s transfers both ways who can help me out with some ideas of where to go from here?
I have not run any packet captures yet; that will be my next step, to see if they show anything.
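For that packet-capture step, something along these lines on the NAS should grab the SMB traffic for offline inspection in Wireshark. The interface name, client IP, and duration are assumptions to adjust for the actual setup:

```shell
#!/bin/sh
# Sketch of a capture run on the NAS side. IFACE, CLIENT, and DUR are
# assumptions; change them to match the real interface and workstation.
IFACE=${IFACE:-eth0}
CLIENT=${CLIENT:-192.168.0.182}   # the Windows 7 workstation
OUT=${OUT:-/tmp/smb-read.pcap}
DUR=${DUR:-30}

if command -v tcpdump >/dev/null 2>&1; then
    # -s 128 captures headers only: enough to see TCP window sizes,
    # retransmissions, and SMB commands without writing a huge file.
    # Samba serves on 445 (and 139 for NetBIOS-session clients).
    timeout "$DUR" tcpdump -i "$IFACE" -s 128 -w "$OUT" \
        "host $CLIENT and (port 445 or port 139)" || true
    echo "capture attempt finished: $OUT"
else
    echo "tcpdump not installed; skipping capture"
fi
```

Start a large read from the workstation while the capture runs, then look for retransmissions or a window that never opens up; that distinguishes a TCP problem from an SMB-level one.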