DermicSavage
[H]ard|Gawd
So I've hit an impasse with my boss regarding our cheaply built datacenter.
Currently the entire place is run on a 1GbE network all based on a Cisco 6500 core switch. We went this route because the hardware was stupid cheap, but we are now feeling the performance hit on our storage network because of it.
We have a Nimble SAN with four 1GbE links, and five hosts, each also with four 1GbE links.
The main issue we are hitting now is that we have a large number of SQL VMs in this cluster, and storage performance and latency are suffering badly under all the load. Our disk throughput is bottlenecked from an expected ~100MB/s down to 10-15MB/s for both reads and writes.
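For context on that ~100MB/s expectation, here's a rough per-path ceiling for iSCSI over a single 1GbE link (the overhead percentage is an assumption on my part, not a measured number):

```python
# Rough per-path iSCSI ceiling on one 1GbE link (assumed ~12% protocol overhead)
link_bps = 1_000_000_000              # 1GbE line rate in bits per second
raw_mbs = link_bps / 8 / 1_000_000    # 125 MB/s raw, before any headers
overhead = 0.12                       # assumed Ethernet/IP/TCP/iSCSI overhead
usable_mbs = raw_mbs * (1 - overhead)
print(round(usable_mbs))              # ~110 MB/s per path
```

So a single path maxing out around 100-110MB/s lines up with what we expected, and 10-15MB/s means the links are being fought over, not just full.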
I've been begging for the investment to put the whole system on a 10gig infrastructure, but have been largely knocked down due to cost. With this recent issue cropping up, they are considering allowing me to install 10gig on the SAN and have that service the hosts which will remain 1gig.
Will upgrading the SAN from 4x1Gb to 2x10Gb help reduce much of our issue? The SAN is clearly oversubscribed right now, and I'm hoping a bandwidth increase alone will alleviate the issue enough to get us through the year until we get budget to overhaul the network.
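My back-of-envelope math on the oversubscription, assuming every host can push all four of its links at line rate (worst case, ignoring protocol overhead):

```python
# SAN-side oversubscription: aggregate host demand vs. SAN uplink capacity
hosts = 5
host_links_gbps = 4 * 1                 # four 1GbE links per host

host_demand_gbps = hosts * host_links_gbps   # 20 Gb/s worst-case demand
san_now_gbps = 4 * 1                         # current SAN: 4x1GbE
san_new_gbps = 2 * 10                        # proposed SAN: 2x10GbE

print(host_demand_gbps / san_now_gbps)  # 5.0 -> SAN side oversubscribed 5:1 today
print(host_demand_gbps / san_new_gbps)  # 1.0 -> 2x10GbE matches aggregate host demand
```

If that math holds, 2x10Gb on the SAN at least removes the SAN side as the choke point, even with the hosts still on 1gig.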
Does anyone have any insights on the storage performance here? I'm no storage engineer and am trying to piece it all together.
As a note, the SAN is a Nimble with 680GB of flash cache, the protocol used is all iSCSI, and the storage network is all connected via layer 2 on the 6500 chassis.
Note 2: Any recommendations on switches that offer low latency, primarily 1000BASE-T ports, and 4x 10GbE SFP+ ports? I'm hoping to find something under $7-10k per switch. The Cisco 3850 provides the port counts, but I'm not sure about its latency. I'm welcome to any comments.