More info on the hardware I will be working with this summer

AMD_Gamer

I started my part-time summer job/internship today at a non-profit, where I will be helping to virtualize their servers and redo their old network. I checked in today and took a look at the new hardware I will be installing.

This is the new hardware I get to work with:

Drobo B800i for the ESXi datastore over iSCSI http://www.drobo.com/products/drobosanbusiness.php
Dell PowerConnect 5448 48-port switch http://www.dell.com/us/en/enterprise/networking/pwcnt_5448/pd.aspx?refid=pwcnt_5448&cs=555&s=biz
2x Dell PowerEdge R515 2U servers, each with two AMD Opteron 4122 CPUs and 16GB RAM, for ESXi and HA/FT http://www.dell.com/us/business/p/poweredge-r515/pd

We will be putting 4-5 servers onto it: SQL Server, Exchange Server, and a DC/backup DC. We will also be moving from Server 2003 and Exchange 2003 to Server 2008 and Exchange 2010.

This should be a great learning experience. Has anyone worked with similar hardware or upgrades and have any tips?
 

The Drobo sucks: it's way too slow, and its proprietary virtualized storage format is not recoverable outside of using another Drobo, and even then it's still risky.
Google "Drobo failures" and you will see way too many horror stories.

At that price point, I'd prefer a QNAP TS-809U-RP if you need something simple, or an HP StorageWorks X1600 G2 system if you need something more.
 
Ugh, good luck with that Drobo. Their support people are flaky at best, and we had all kinds of performance issues with the one we demoed (at the time called the DroboElite, now the B800i). It is also a single-controller array, which means that if the controller goes down your array is totally hosed, and Drobo may be able to recover it... maybe... if they feel like it, because their RAIDs are set up in a proprietary format that only Drobo can recover.

IMHO, given that they are an NPO, I would have gone to Dell (we get special NPO pricing from Dell) and gotten an MD3200i for around the same price.

Other than that, it looks like you have a lot of learning and a lot of fun ahead of you!
 

Yeah, what we have is the DroboElite. They already had it, so we will make the best of it. They had the Dell people come in and recommend this hardware, so if we did not already have the Drobo I bet they would have recommended something else.
 
Having set up close to that exact same combination (2x R500 servers, DroboElite, HP ProCurve 2824), I can tell you: good luck. The Drobo is dog slow and has never been stable for us. When it gets choked on IOPS (which it does... a lot), it has a tendency to disconnect from the ESX boxes, requiring it to be shut down and brought back up again (which takes quite a long while with 8 2TB disks that are 2/3 full), and sometimes requires both of the ESX boxes to be restarted too, depending on what they were doing at the time.

We haven't had any device failures (as in, the device died), but lots and lots of issues. The Drobo support people are all useless as well; every question I've gone over with them has come back as either "read the Best Practices PDF" (which was written for ESX 3.5 back when we were already on 4) or "we couldn't replicate that issue in our test lab, sorry."

The device itself is also a PITA to administer: you can admin it via USB or via Ethernet, but if the USB is plugged in, the Ethernet doesn't work. Since both Ethernet ports on ours are VLAN'd off to isolate iSCSI traffic, to admin it I either have to take a client to the switch and plug into the iSCSI VLAN, or disconnect the array from the ESX boxes while I plug into the USB port to make a change. Either way, it's a terrible design.

I understand that you don't have any choice in the matter (I didn't either), but be prepared to spend some serious time working around the issues of what you've been given. If I had been given a choice, I'd have whiteboxed a SAN with ZFS (or any other system) instead of going with the Drobo (and I am in the process of making that conversion now). You'll end up carrying all the support burden anyway, so you might as well do it with a system that isn't so proprietary that you can't get any support for it.
 

Thanks for the info. Sounds like a nightmare. As I said, they already had the Drobo, and as a non-profit, money is tight. This is not a time-sensitive project; the guy I am working for just wants it done by the end of the summer, so I will have time to learn and fine-tune everything. I have never worked with a Drobo before, so this will be a good learning experience either way.

Do you have HA set up?

I tested out my servers this morning with the iDRAC; pretty cool stuff.
 
I had HA set up for testing at one point, but our cluster is loaded to the point where there aren't really any resources left for a failover anyway, so I ended up turning it off to simplify the configuration. I'm hoping to use the new budget in July to add a couple more servers to the cluster (along with the new SAN I'm working on) and go back to a full HA setup.

A few tips: the Drobo is very IOPS-bound for us, and judging from your listed servers it will be for you as well. If you haven't bought drives for it yet, stay away from 4K-sector drives; the system performs even worse with those, even if you are using aligned filesystems. Also, try to give the system some breathing room: I find it's far faster to start 1-2 VMs and wait for them to finish loading before starting the next than it is to just kick them all off at once and wait (see the sketch below). Dual-disk redundancy mode tends to magnify the performance issues we have, so we stuck with single-drive redundancy, YMMV. I could never get the thing to be stable with the Broadcom iSCSI TOE HBAs for more than a few minutes, so I ended up using the software HBA instead, again YMMV. Drobo claims they have no similar issues in their test lab, but it locks up every ~5 minutes or so on our configuration with the TOE HBAs, while the software HBA only does it once every couple of weeks when all the users randomly decide to flog the system at once.
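
If you end up scripting that startup order, here's a rough sketch of the stagger logic in Python. The start_vm/is_vm_ready helpers are placeholders for whatever tooling you actually use (PowerCLI, vim-cmd on the host, etc.), not a real VMware API, and the VM names are made up:

Code:
import time

# Placeholder helpers -- wire these up to whatever tooling you actually use
# (PowerCLI, vim-cmd on the host, etc.). They are NOT a real VMware API.
def start_vm(name):
    print(f"powering on {name}")

def is_vm_ready(name):
    # e.g. check for a VMware Tools heartbeat or a successful ping
    return True

def staggered_start(vm_names, batch_size=2, poll_seconds=30):
    """Power on VMs a couple at a time and wait for each batch to settle
    before starting the next, instead of kicking them all off at once."""
    for i in range(0, len(vm_names), batch_size):
        batch = vm_names[i:i + batch_size]
        for vm in batch:
            start_vm(vm)
        while not all(is_vm_ready(vm) for vm in batch):
            time.sleep(poll_seconds)

staggered_start(["DC1", "SQL1", "EXCH1"], batch_size=2)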
 
Stupid question, but do you NEED to run HA for all your VMs? What SLAs do you need to hit for those systems? Maybe you could go with local storage for some of them, the secondary DC for example.
 
That cluster runs a bunch of student project/programming boxes in a university environment, so the SLA is "best effort". We (read: I) make no guarantees on uptime or performance, due to both hardware issues (grossly oversubscribed; they gave me ~25% of the budget I asked for) and lack of staffing (just me for this one, and I'm not on call 24/7). If we were running actual DCs or similar on the system, the requirements would be much different. I'm stuck in a chicken-and-egg situation where the powers that be don't want to commit money to the infrastructure to host actual production-level boxes (read: faculty stuff instead of student stuff) due to the current performance/availability issues, but I can't fix those issues due to the current lack of budget planning by the powers that be. I've scraped enough money out of this year's funds to do a ZFS whitebox build that I'm going to add into the cluster to take the load off the Drobo. I'm hoping things will improve to the point where I'll be able to get the rest of the funds I requested and bring this up to a level that's actually enterprise-worthy.
 
Umm, if money is that tight, how about eBaying the Drobo and buying something else? Even at a loss you could get something that meets your storage requirements, AND it would make your life and that system phenomenally more functional/reliable.
 

I'd love to, but one of the wonderful things about working for a state agency is that all purchases either have to be from a vendor with a state purchase contract (with a couple of exceptions for smaller items) or put up for public bid. eBay is right out, even though that's just stupid a lot of the time. I ended up having to buy a replacement part for an old Dell server from Dell for $299 last summer that I could have gotten on eBay for $20. The whole reason we got shafted with the Drobo in the first place is that at the time it was by far the cheapest solution from a certified vendor that was on the HCL for ESX.
 
Is it OK to have the Windows server shares on the same Drobo datastore? What is the best practice for storing Windows shared folders with VMware?

We currently have a lot of shared folders on one of the Server 2003 machines.
 
Just leave them in the virtual disk... nothing wrong w/ that. Honestly, if you have problems with that, you're not going to fix it by using an RDM :|

An RDM will give you the ability to move the LUN to a physical Windows server without having to do a data copy, though, so there is SOME justification for it. I always rail against it; the tradeoffs aren't worth it to me.
 
The best thing we found with the Drobo was to use it as NFS rather than iSCSI storage and run all the VMs off that. Then create a VMDK that gets attached to your file server for file shares and storage.

AND FOR THE LOVE OF GOD, HAVE A BUDR (backup and disaster recovery) PLAN. It doesn't need to be anything fancy like Veeam (though if you have the budget for it, get it); it could be an rsync from the Drobo to a desktop with a 2TB external drive attached, but please put something in place.
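
If you go the rsync route, even something as dumb as this run nightly from cron (or Task Scheduler) on whatever box has the shares mounted is better than nothing. All the paths here are made-up examples, so adjust for your environment:

Code:
#!/usr/bin/env python3
"""Bare-bones nightly backup: mirror the file shares to an external drive.
All paths are hypothetical examples -- adjust for your environment."""
import datetime
import subprocess
import sys

SOURCE = "/mnt/fileshares/"   # hypothetical mount of the shared folders
DEST = "/mnt/usb2tb/backup/"  # hypothetical 2TB external drive

# -a: archive mode, --delete: mirror deletions, --stats: summary for the log
result = subprocess.run(
    ["rsync", "-a", "--delete", "--stats", SOURCE, DEST],
    capture_output=True, text=True)

with open("/var/log/share-backup.log", "a") as log:
    log.write(f"{datetime.datetime.now().isoformat()} rc={result.returncode}\n")
    log.write(result.stdout[-2000:] + "\n")

sys.exit(result.returncode)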
 

Of course. I have a Buffalo TeraStation sitting here, along with the old servers that will be freed up, that we will be using for some kind of backup solution.
 
What is the best practice for setting up an iSCSI target to be used as the shared datastore? Are you supposed to create a partition/LUN for each VM you have, or should you just create one large partition on your iSCSI target?
 

Definitely not one per VM; at least I don't...

I usually split them up into 700-800GB LUNs.
 
What he said. You make larger iSCSI LUNs and put several VMs on each. For a Drobo you'd probably just make one large LUN (up to 2TB minus 512 bytes) since you'll be spanning all the drives anyway.
 

Thanks. We only have 2.64TB available on the Drobo anyway, so one large 2TB partition is what I will do.
 

Remember, it's not 2TB; it's 2TB minus 512 bytes. A very small difference, but a big one when vSphere won't work with that LUN. ;)
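
For reference, the actual number, assuming the limit is 2 TiB minus one 512-byte sector (which is how the old VMFS/ESX 4.x LUN limit is usually stated):

Code:
# Old VMFS-era LUN size limit: 2 TiB minus one 512-byte sector (assumption
# stated above -- double-check against the vSphere docs for your version).
TiB = 2 ** 40
max_lun_bytes = 2 * TiB - 512
print(max_lun_bytes)                  # 2199023255040 bytes
print(max_lun_bytes / 10**9, "GB")    # ~2199.02 decimal GB
print(max_lun_bytes / 2**30, "GiB")   # just under 2048 GiB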
 
So will it be OK to run all the Windows shares, Exchange Server, and SQL Server from the iSCSI target with HA/DRS enabled? It seems like a lot of data will be going to and from the Drobo.
 
I am backing up a bunch of stuff that was on the Drobo to my Server 2008 workstation before I set it up for use with ESX. I am getting a steady 100MB/sec. Hopefully I get that performance with ESXi.
 

You're seeing sequential read speeds. You won't get that, and most likely won't need it, with the VMware environment.
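
To put rough numbers on that (the per-disk IOPS figure is a common rule of thumb for 7200rpm SATA, not a measurement of your Drobo):

Code:
# Why 100 MB/s of sequential copying says little about VM performance.
# Assumed numbers: 4 KB I/O size, ~125 random IOPS per 7200rpm SATA disk.
seq_throughput_mb = 100
io_size_kb = 4

iops_needed = seq_throughput_mb * 1024 / io_size_kb
print(f"{iops_needed:.0f} IOPS to sustain 100 MB/s at 4 KB I/Os")  # 25600

disks, iops_per_disk = 8, 125
random_iops = disks * iops_per_disk
print(f"~{random_iops} random IOPS from 8 SATA spindles")          # ~1000
print(f"=> roughly {random_iops * io_size_kb / 1024:.1f} MB/s "
      f"of 4 KB random I/O, before any RAID overhead")             # ~3.9 MB/s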
 
I set up both of my Intel NICs to connect to the iSCSI datastore on my Drobo following these instructions: http://www.techhead.co.uk/vmware-esxi-4-0-vsphere-connecting-to-an-iscsi-storage-target

I made the new vSwitch, assigned both NICs to it, and then added the datastore to my host. It looks like my host is using one of the Virtual Machine Port Group NICs and its IP address to access the iSCSI datastore, because the performance monitor does not show any activity on the NICs I set up for iSCSI. :confused:

I have not yet put everything in a separate VLAN. Do you have to do that, or will it default to the other NIC?
 
Post a screenshot of your network config.

[screenshot of the vSwitch/network configuration]
 
You have two VMkernel interfaces. It's always going to use vmk0 by default if it is on the same IP subnet as storage. That's why you need to put iSCSI/NFS on a different subnet.
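
A quick way to sanity-check the addressing once you pick subnets (these addresses are made up; the point is that the iSCSI VMkernel port and the Drobo should share a subnet that vmk0 is not on):

Code:
import ipaddress

# Hypothetical addressing -- substitute your own.
vmk0 = ipaddress.ip_interface("192.168.1.10/24")    # management VMkernel
vmk1 = ipaddress.ip_interface("192.168.50.10/24")   # iSCSI VMkernel
drobo = ipaddress.ip_address("192.168.50.20")       # iSCSI target

for name, vmk in (("vmk0", vmk0), ("vmk1", vmk1)):
    print(f"{name} {vmk.ip} -> Drobo {drobo}: same subnet = {drobo in vmk.network}")

# If the Drobo were on 192.168.1.x instead, vmk0 would match and ESX would
# happily send the iSCSI traffic out the management interface -- which is
# roughly what you're seeing now.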
 
Now that I have a minute...

You have 3 NICs. If you had 4 I'd just say put two for VMs and two for NAS, but you can't do that, so you need to get more creative. I'd put all 3 NICs in the same vSwitch and then go into the port groups (VM Network, VMkernel, etc.) and set NIC preference. So for VMs you could prefer vmnic0 but have vmnic1 and vmnic3 set as standby, so that if vmnic0 fails one of those will take over. Then on the NAS port group set vmnic1 and vmnic3 as active and just let them fail over for each other (roughly the layout sketched below).

Where is vmnic2? Just not put in a vSwitch? If that's the case, forget what I just said and put it in vSwitch0.
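
Assuming the 3-NIC case, the layout I mean looks roughly like this (the port group names are just examples, use whatever yours are called):

Code:
# The per-port-group active/standby layout described above, written out.
# Port group names are examples only; vSphere lets you override NIC teaming
# per port group on the same vSwitch.
teaming = {
    "VM Network":     {"active": ["vmnic0"], "standby": ["vmnic1", "vmnic3"]},
    "iSCSI VMkernel": {"active": ["vmnic1", "vmnic3"], "standby": []},
}

# Sanity check: every uplink in the vSwitch is used somewhere.
uplinks = {"vmnic0", "vmnic1", "vmnic3"}
used = {nic for pg in teaming.values() for nics in pg.values() for nic in nics}
assert used == uplinks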
 
Thanks for all the info. I do have 4 NICs. I was going to use two for iSCSI, one for VMs, and one for a dedicated management VLAN. I figured HA would take care of the VM NIC failing.

So my problem is that the iSCSI NICs need to be in a different subnet, otherwise vmk0 is used? When I configured them it only asked for one IP address; do they do automatic failover, or do I need to set that up?
 
For multipathing iSCSI, read the link I posted. If you don't care whether iSCSI uses both NICs and just want failover, you're fine. You don't need a dedicated management NIC in an environment that small, and you don't want HA to fail an entire node over a single port or cable failure.
 
I figured HA would take care of the VM NIC failing.

You don't want to rely on HA for NIC failure when it can be avoided up front. Configure it the way NetJunkie suggested; the VM network and the Management Network will be redundant.
 
How come I keep reading that you need to do some kind of port binding from the command line? Is that for older versions of ESXi?

Also, how come some tutorials and pictures I see show two VMkernel ports, each with its own IP address? The way I set it up now just does automatic failover in the event that one of the NICs goes down?
 