Question about ESX guests and SAN storage

idea

I'm drawing up plans for a new vSphere lab. I will have a storage server and multiple ESX hosts, tied together with an InfiniBand SAN. Obviously the storage will be used to store the VM guests.

My question is: How can the guests make use of SAN storage as well? Is it even possible? It must be, but I cannot visualize it.
EXAMPLE: I have a shared www directory on the storage system and 3 webserver VM guests. How can they access this data via the IB?

Another way for the guests to make use of the storage, since the storage system has a 1Gb LAN port, is to simply connect them to it as NAS (NFS/SMB). Unfortunately that is nowhere near the theoretical 40Gbps of QDR InfiniBand.
 
Spending money on InfiniBand is a waste. 1Gb Ethernet should be more than sufficient for your lab. Spend the extra cash on more memory, spindles, and SSDs (if possible).

And to answer your question, you would create additional shares on the NAS device and present them over CIFS/NFS.
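As a rough sketch of what that could look like on a Linux-based storage box (the paths, subnet and addresses below are made-up examples; a NAS appliance would do the same thing through its GUI):

Code:
# /etc/exports on the storage server -- export the shared web root to the lab subnet
/export/www   192.168.10.0/24(rw,sync,no_subtree_check)

# re-export everything after editing /etc/exports
exportfs -ra

# on each webserver guest, mount the share like any other NFS export
mount -t nfs 192.168.10.5:/export/www /var/www

The guests don't know or care what the storage network is underneath; they just see an NFS mount.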
 
Spending money on InfiniBand is a waste. 1Gb Ethernet should be more than sufficient for your lab. Spend the extra cash on more memory, spindles, and SSDs (if possible).

And to answer your question, you would create additional shares on the NAS device and present them over CIFS/NFS.

I will be spending less than $1000 on used IB equipment, and I think it's worth it for 20Gb/s (or 40Gb/s depending on the switch I get). What's the point of adding more spindles and SSDs if 1Gb/s of throughput will bottleneck me?

Code:
Worst-case prices based on eBay buy-it-now listings:
$300 2x IB QDR HCA Mezzanine Card for (2) ESX hosts
$200 1x IB QDR HCA for storage host
$200 1x IB switch (DDR is cheaper than QDR)
$100 Misc cables...
$800 TOTAL

*fingers crossed* hoping someone has a better idea on how to connect the guests to the SAN
 
Spending money on InfiniBand is a waste. 1Gb Ethernet should be more than sufficient for your lab. Spend the extra cash on more memory, spindles, and SSDs (if possible).

And to answer your question, you would create additional shares on the NAS device and present them over CIFS/NFS.

He said it.
 
WAIT a second, am I overcomplicating this? Is it just like setting up regular Ethernet? i.e. can you create guests with an InfiniBand adapter, and can you create an InfiniBand vSwitch?
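If that's the case, I'm picturing something like the following on an ESXi host once the Mellanox/OFED IPoIB drivers are in place, so the HCA just shows up as another vmnic (the vmnic and vSwitch names here are only placeholders, not something I've verified on IB hardware):

Code:
# list the host's uplinks and look for the one backed by the IB HCA
esxcli network nic list

# create a vSwitch and attach the IPoIB uplink to it
esxcli network vswitch standard add -v vSwitch-IB
esxcli network vswitch standard uplink add -v vSwitch-IB -u vmnic3

From there the guests would just get ordinary virtual NICs and port groups on that vSwitch, same as with Ethernet.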
 
I set up 10G InfiniBand for my setup for $323.

I don't have it fully up and running yet, but between two servers I am getting 3Gbit/s (quite low imho, I need to see what tweaks I need to make).
 
$800 is a waste of money to upgrade to 20Gbps? Tell me your thoughts

Just my opinion here. It's for a lab, so why do you need that much bandwidth and complication of setup? If you really need more bandwidth than 1GigE, then why not just get 10GigE?
 
You are overcomplicating this. Why do you need a high amount of throughput in your lab?

I have run over 200 guests on 1Gbps Ethernet. InfiniBand has its place. A "VMware lab" is not it.
 
I don't think the OP has a clear 'idea' of how these technologies work to begin with, so that's the biggest issue. However, a lab is where you set up stuff to mess around with. If that means you want to mess around with InfiniBand, then go for it. I don't see the issue here. No, it's completely unnecessary for a lab, but there are plenty of people around here who do completely unnecessary things with their home labs. OP, you need to do more research on how SAN storage and guests work with virtualization first. Play with other technologies after you have some fundamental knowledge.
 
Aaaand all of the above comments are why you don't ask questions on [H]. Too many assholes with their head in the clouds :rolleyes::rolleyes::rolleyes:
 
Aaaand all of the above comments are why you don't ask questions on [H]. Too many assholes with their head in the clouds :rolleyes::rolleyes::rolleyes:

So logical advice is in the clouds?

Faster is about a lot more than one link. A 1Gb link is plenty good for most modest-sized setups.

Shit, we run almost 1000 VMs on a 1Gb backbone and the network is not the performance limitation.
 
Aaaand all of the above comments are why you don't ask questions on [H]. Too many assholes with their head in the clouds :rolleyes::rolleyes::rolleyes:

The advice could have been conveyed better, but that doesn't mean it isn't correct. The OP asked a question that shows a general lack of understanding of a fundamental concept of virtualization with regard to storage and how it fits in. Understanding Ethernet technologies is the simplest and cheapest way to gain fundamental knowledge of storage protocols. After that, start messing with different protocols and connectivity types.
 
Aaaand all of the above comments are why you don't ask questions on [H]. Too many assholes with their head in the clouds :rolleyes::rolleyes::rolleyes:
Hey, that offends me!

My head isn't in the clouds! :)

He asked a question, and he got honest answers. It's up to the OP to figure out what he wants to do.
 
I've not personally messed with InfiniBand, but I read the same article you did about the whole kit and caboodle for less than a grand.

What you have to understand is that Fibre Channel/Ethernet/InfiniBand/etc. are all just pathways to shared storage, just like the data bus on your motherboard to your locally attached SATA/SCSI/SAS disk. Each has its own differences. You'll end up presenting LUNs to the ESXi hosts, and then those hosts will be able to use those LUNs as datastores.
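As a rough sketch (the addresses and datastore names below are just examples), attaching that storage to a host looks the same no matter what transport is underneath:

Code:
# rescan the storage adapters so a newly presented LUN shows up
esxcli storage core adapter rescan --all

# or mount an NFS export as a datastore
esxcli storage nfs add -H 192.168.10.5 -s /export/datastore1 -v ib-datastore1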

Here's a good PDF to get you started:

http://pubs.vmware.com/vsphere-51/t...here-esxi-vcenter-server-51-storage-guide.pdf
 
WTF happened to my thread? I almost want to delete it and start from scratch. It's my lab. It's my money. If I want to invest $800 in infiniband then I will. The $800 is an investment in my career, so I may gain skills to make me more valuable. So I don't need to hear anyone crying about how 1000mbit or 100mbit or 10mbit or 2800 baud is sufficient.

Can we focus on the question at hand now? Refer to post #1, thank you
 
WTF happened to my thread? I almost want to delete it and start from scratch. It's my lab. It's my money. If I want to invest $800 in infiniband then I will. The $800 is an investment in my career, so I may gain skills to make me more valuable. So I don't need to hear anyone crying about how 1000mbit or 100mbit or 10mbit or 2800 baud is sufficient.

Can we focus on the question at hand now? Refer to post #1, thank you

Slow your roll. You asked a question and some answered the question with what they thought would best help you and uncomplicate things. I did a quick search and found this: http://communities.vmware.com/docs/DOC-15437 It may or may not help you.
 
I'll refer back to my earlier point: I don't think you understand how storage and virtualization work. Your question is pretty fundamental. Start there.

Which is strange, considering from your signature it looks like you were able to get it set up...
 
Spending money on InfiniBand is a waste. 1Gb Ethernet should be more than sufficient for your lab. Spend the extra cash on more memory, spindles, and SSDs (if possible).

And to answer your question, you would create additional shares on the NAS device and present them over CIFS/NFS.

OP, ignore bad advice like this that steers you towards a solution without understanding exactly what your needs are through load testing and the like. A 1Gbps Ethernet connection to a NAS/SAN can handle some virtualization loads, but far from all of them. You didn't give us enough information, but this dude thought he could answer the question anyway. Most of the time, with just a few VMs, 1Gbps SAN/NAS storage is okay, but I can set up certain loads where 1Gbps won't perform well even with just one VM. It depends on many factors.

People here spew "IOPS, IOPS, IOPS" as if IOPS are all that matter, but they're wrong. That's like claiming FPS is all that matters in gaming. Reviewers are finally realizing that individual frame delay/latency data is important as well, not just overall FPS. Storage networking is like that, too: IOPS matter, but they're not the only thing that matters.

OP, you can do IP over InfiniBand (IPoIB) and use NFS or whatever if you need multiple hosts to access shared data, or you can run iSCSI over InfiniBand for exclusive access.
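A rough sketch of the Linux side of that (the interface, addresses, export path and iSCSI target name are all made-up examples; on the ESXi side you'd do the equivalent with a vmkernel port and the software iSCSI initiator):

Code:
# bring up IPoIB on the storage server and give it an address
modprobe ib_ipoib
ip addr add 10.10.10.1/24 dev ib0
ip link set ib0 up

# guests that need the shared data mount NFS over that IPoIB network
mount -t nfs 10.10.10.1:/export/www /var/www

# for exclusive block access, discover and log in to an iSCSI target instead
iscsiadm -m discovery -t sendtargets -p 10.10.10.1
iscsiadm -m node -T iqn.2013-01.lab.example:www-lun0 -p 10.10.10.1 --login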
 
...

OP, you can do IP over InfiniBand (IPoIB) and use NFS or whatever if you need multiple hosts to access shared data, or you can run iSCSI over InfiniBand for exclusive access.

Thank you. I will use these terms to google and read. Do you have any ideas on how you would personally set up IP over IB and iSCSI over IB?
 
If I want to invest $800 in infiniband then I will. The $800 is an investment in my career, so I may gain skills to make me more valuable.

If this is really the goal, then it would probably be a good idea to look into whether IB is actually widely used in the field and whether it has any potential of ever becoming mainstream in commercial deployments. Otherwise, whether it's your money or not, you are spending it on something of very little real-world value.

Whether 1Gb is enough or not depends on the workload, but you will find many actual real-world setups that never ever come close to saturating a 1Gb link.

Aaaand all of the above comments are why you don't ask questions on [H]. Too many assholes with their head in the clouds :rolleyes::rolleyes::rolleyes:

I think above all what you will find here at [H] is actual professionals handing out decent advice. Yes, some (including myself) are at times opinionated but that doesn't make the advice less valid.

There are tons of people around here who are willing to help and most (myself included) can relate to getting excited about a new project, buying new hardware, setting it all up, working with some new stuff. Still, people who say that you need to understand the basics before you spend a bunch of money aren't assholes, they aren't elitist either, they just speak from experience.
 
If this is really the goal, then it would probably be a good idea to look into whether IB is actually widely used in the field and whether it has any potential of ever becoming mainstream in commercial deployments. Otherwise, whether it's your money or not, you are spending it on something of very little real-world value.

Whether 1Gb is enough or not depends on the workload, but you will find many actual real-world setups that never ever come close to saturating a 1Gb link.



I think above all what you will find here at [H] is actual professionals handing out decent advice. Yes, some (including myself) are at times opinionated but that doesn't make the advice less valid.

There are tons of people around here who are willing to help and most (myself included) can relate to getting excited about a new project, buying new hardware, setting it all up, working with some new stuff. Still, people who say that you need to understand the basics before you spend a bunch of money aren't assholes, they aren't elitist either, they just speak from experience.

You're making a lot of sense, so let me give you my perspective. In addition to being very bored with gigabit Ethernet and wanting to learn something new, I also move a lot of big files around. Many of us with storage systems in our home labs use them for music, movies, the occasional pornographic cinema, etc. My storage is capable of 300-400MB/s, so why would I want to bottleneck it to gigabit speeds? I'm very surprised people are trying to convince me 1Gbps is the be-all and end-all. That's unusual around here.

This thread was doomed from the first reply onwards
 
Why would you need to present SAN storage to a VM that's already running on a vmdk/vhd on SAN? Just mount another virtualized volume and present it like local storage.

[edit] nvm, I missed your example of presenting shared storage for labbing a clustered environment.
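For the non-shared case it really is that simple: carve another vmdk out of the SAN-backed datastore and attach it to the guest, either through the vSphere client (Edit Settings > Add Hard Disk) or from the host shell, roughly like this (size, datastore and paths are only examples):

Code:
# create a 100GB thin-provisioned vmdk on the SAN-backed datastore
vmkfstools -c 100G -d thin /vmfs/volumes/ib-datastore1/web01/web01_data.vmdk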
 
You're making a lot of sense, so let me give you my perspective. In addition to being very bored with gigabit Ethernet and wanting to learn something new, I also move a lot of big files around. Many of us with storage systems in our home labs use them for music, movies, the occasional pornographic cinema, etc. My storage is capable of 300-400MB/s, so why would I want to bottleneck it to gigabit speeds? I'm very surprised people are trying to convince me 1Gbps is the be-all and end-all. That's unusual around here.

This thread was doomed from the first reply onwards

Beyond a few comments that were a bit blunt and went too far in the other direction, what most of us were trying to do is get the basic concepts out of the way and explain where money is well spent, so you can start learning and go from there.

I started on a $150 build to test the concepts, figured out where I wanted to go with it, and then went. Do I expect everyone to do it the same way? No. But I also see a lot of people who just want the best of the best out of the gate and never use 10% of it, when their money would have been better spent somewhere else.

For example, with my current setup I wanted to focus more on multiple hosts for live migration and DR. My hosts are modest with 16GB of RAM each, but that is perfectly sufficient, and I knew that going into it based on earlier dealings. I should also add that my current setups use less than 100W each, were built for less than the server-class mainboards often recommended around here cost, and run flawlessly. Again, I don't think everyone should do it my way, but I think it is a good service to make sure people understand what they are getting into before just buying and being unhappy with the results (some of us veterans around here have seen all too many cases of people coming back upset at how their setup does not do X, or performs like crap, when they just put down a huge chunk of cash).


All that being said, if you want to learn IB, go for it. I have yet to find a deployment of it beyond a lab in my dealings, but that does not mean you will have the same experience. Most of the places I work with are on 1Gb, 10Gb, or fiber backbones. I have also found it is difficult to use speeds much beyond gigabit in a small lab environment. Things like copies from your client machines are usually limited by their disk subsystems or network links, and inter-server traffic is usually not frequent enough to warrant the slight speed bump, or not relevant in the case of intra-server transfers. I have seen a few around here set up full ecosystems that can utilize the speed though, and that is damn cool; just know what you are getting into :)
 