gigabit switch question...

Boscoh

[H]ard|Gawd
Joined
Nov 25, 2003
Messages
1,159
Qwerty,

First of all, before you go and fork out the money for gigabit, evaluate whether you're even going to see an improvement.

What speed are the hard drives in your servers, and what's your controller card? I'm assuming your servers are running some form of SCSI. Ultra 160, or 320?

Are you serving up anything else other than static webpages? Do you have any big databases that programs pull from? Multimedia?

The reason I ask is because most SCSI Ultra 160 drives out there are going to transfer data anywhere from 20MB/Sec to 50MB/Sec. It depends on your particular drive, where the data sits on the platter, and what (if any) RAID type you're running. I've seen 320 drives that can transfer as fast as 80MB/Sec, but I'm sure there are faster.

100Mb Ethernet=12.5MB/Sec (Theoretical Max)
1000Mb (Gig)Ethernet=125MB/Sec (Theoretical Max)
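The two figures above are just the bits-to-bytes conversion. A quick sketch of that math, so you can plug in your own drive numbers:

```python
# Back-of-the-envelope: convert a link speed in Mbit/s to MB/s and
# compare it against a drive's sustained transfer rate. The 12.5 and
# 125 MB/Sec figures above come straight from this division.

def link_mbytes_per_sec(mbit_per_sec):
    """Theoretical max payload of a link, ignoring protocol overhead."""
    return mbit_per_sec / 8.0  # 8 bits per byte

for speed in (100, 1000):
    print(f"{speed}Mb Ethernet = {link_mbytes_per_sec(speed):.1f} MB/Sec theoretical max")

# A drive doing ~50 MB/Sec already outruns 100Mb Ethernet (12.5 MB/Sec),
# but nowhere near saturates GigE (125 MB/Sec).
```

Real-world numbers will be lower once you account for Ethernet/IP/TCP framing overhead, but the ratio is what matters.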

So don't assume that just by getting GigE you're going to see a massive performance increase hitting your servers. It's going to come down to the hardware on those servers and your access patterns. If you're running old stuff and just serving up 50-100KB websites all day long, you'd probably be better off going with 100Mb and perhaps doing link aggregation across two links, or doing DNS load balancing IF you need it. Your bottleneck here is probably going to be the server, not the network. Having GigE to the servers will, however, make sure that all the data requests make it to the server and get queued, in the case that 100Mb Ethernet would otherwise be maxed out.
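For a sense of what round-robin DNS load balancing buys you: the nameserver rotates the order of A records on each query, so successive clients land on different boxes. A toy sketch (the IPs are made-up placeholders, not from anyone's setup):

```python
from itertools import cycle

# Toy model of DNS round-robin: successive lookups get the server list
# in rotated order, spreading requests across the boxes. Real DNS does
# the rotation server-side; cycle() just mimics the effect.
servers = ["192.168.1.10", "192.168.1.11"]
next_server = cycle(servers)

for request in range(4):
    print(f"request {request} -> {next(next_server)}")
```

Note round-robin DNS is dumb: it doesn't know if a box is down or overloaded, it just rotates. That's usually fine for two identical web servers on a budget.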

If you've got top-of-the-line equipment and you're serving up tons of streaming video, Flash or Shockwave applets, Java, database transfers or replication, file system replication, or anything I/O intensive, you'll probably see a justification for moving to GigE.

The best thing for you to do right now would probably be to configure Performance Monitor on your Windows boxes to monitor the usage of the 100Mb NICs in your servers. I'd open three instances: one updating in real time, one every 15 seconds, and one every 30 seconds. That should give you both real-time stats and some historical figures to look at. If you see an average of anything less than 75% usage, I wouldn't even consider the move to GigE. Doing link aggregation or DNS load balancing across two or more 100Mb links will probably be more than enough for your needs, and buying an extra NIC for each box will probably end up being quite a bit cheaper than a Gig switch. Consider the cost of the cable necessary to run Gig too... if you're running Cat 5, you're going to need to move up to at least Cat 5e. Cable ain't cheap either when you're on a strict budget. Every dollar counts, man.
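The utilization math Performance Monitor is doing for you is simple enough to sketch: sample the NIC's byte counter twice and express the delta as a percentage of link capacity. The counter values below are invented for illustration:

```python
# Sketch of NIC utilization from raw byte counters, the same figure
# Performance Monitor's "% usage" gives you. Counter values are made up.

LINK_CAPACITY_BYTES_PER_SEC = 100_000_000 / 8  # 100Mb link = 12.5 MB/Sec

def utilization_pct(bytes_start, bytes_end, interval_sec):
    """Average link utilization over one sample interval, in percent."""
    rate = (bytes_end - bytes_start) / interval_sec  # bytes per second
    return 100.0 * rate / LINK_CAPACITY_BYTES_PER_SEC

# Example: 140 MB moved during a 15-second sample window.
print(f"{utilization_pct(0, 140_000_000, 15):.0f}% of a 100Mb link")
```

If that number sits under the 75% mark over a real workday, the 100Mb links aren't your bottleneck.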

Make sure you get the 48-port, unless you get 2x24s that have one or two gig uplink ports on them.
 
Originally posted by Boscoh

I couldn't have said it any better myself.
 