ESXi - do I need a hardware RAID card

BSDMan

I have a Supermicro X10SL7-F motherboard and this has an onboard LSI2308 controller with 8 SAS ports.

I want to install ESXi on this machine so that I can learn ESXi 5.5.

What I'm unsure of is: do I need a separate hardware RAID card, or can I use the onboard LSI 2308?

In other words, if I create a mirrored volume using drives connected to the LSI2308 SAS ports, will ESXi see the individual drives? Or will it see a single volume?

I don't want to purchase a hardware RAID card if there's no need to... :confused:

Thanks!
 
If that controller is supported by the version of ESXi you are using, it will see the mirror. Otherwise, it will not. Whether you want the mirror or not is up to you.
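
If you want to double check what the host actually sees, you can enable SSH and list the storage adapters and devices from the ESXi shell. A rough sketch (I'd expect the 2308 to show up under the mpt2sas driver, but treat that as an assumption on my part):

esxcli storage core adapter list   # lists HBAs and the driver bound to each (the LSI 2308 should appear here if supported)
esxcli storage core device list    # lists the disks/volumes ESXi can see behind those adapters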
 
As an alternative, a lot of users boot ESXi off of a single drive, containing one guest with Napp-IT or FreeNAS (or manual ZFS) using a Direct I/O forwarded SAS card, and then share the ZFS pool back to ESXi using NFS.

I don't do this (I have my datastore on a single SSD which I back up frequently, and use my freeNAS guest for - well - NAS storage) but it seems to be a rather popular solution, so it's worth mentioning. I *believe* that LSI controller you have is the same as the IBM M1015 which is very popular for this purpose on here, as it can be flashed with LSI's IT firmware, which turns it into a JBOD controller, perfect for use with ZFS.
 
Thanks for the reply!

I currently use the LSI2308 in IT mode for my FreeBSD/ZFS server. However, I plan on formatting the server and setting up an ESXi host. This will be my first time setting up, installing, and configuring ESXi.

Do I need to flash my LSI2308 with the IR firmware again so that I can RAID mirror two drives that will be used as a single volume in ESXi?

I also have some questions regarding VLAN tagging, port groups and vSwitches.

If I have two portgroups called:

DMZ
Production


and I assign VLAN ID 100 to DMZ and VLAN ID 200 to Production, how do I get these two VLANs to "talk" to each other?

I've started reading up about "trunk ports" but still don't fully understand them. I do understand that you need a router to connect the two VLANs, but I'm unsure of where/what/how the trunk port comes into it. Do I need to connect the physical NIC from the ESXi host to a "special" physical switch?

I'll probably start with a single physical NIC configured in my ESXi setup.
 
I'm not good with VLANs so I'll leave that to someone else.

As far as the controller goes, I am not sure if ESXi has any built-in software RAID, so you'd probably need to flash the controller back to its original RAID-capable firmware and use the option ROM at boot time to set up the mirror and install to it (that is, unless you go the ZFS-in-a-guest route described above).
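
(If you do reflash, it's the same LSI sas2flash utility you presumably used for IT mode, just pointed at the IR firmware from Supermicro's download page. Roughly like this - the firmware/BIOS file names here are placeholders, use whatever Supermicro ships for the X10SL7's 2308:

sas2flash -listall                         # confirm the controller is detected and note the current firmware
sas2flash -o -f 2308ir.bin -b mptsas2.rom  # -o = advanced mode for the IT->IR crossflash; file names are examples only

Then the LSI option ROM at boot, Ctrl-C if memory serves, lets you build the mirror.)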

I presume your CPU and motherboard support VT-D?
 
I've started reading up about "trunk ports" but still don't fully understand them. I do understand that you need a router to connect the two VLANs, but I'm unsure of where/what/how the trunk port comes into it. Do I need to connect the physical NIC from the ESXi host to a "special" physical switch?

I think you may have misunderstood trunking. (either that, or I have)

On my HP ProCurve switch - at least - trunking has nothing specifically to do with VLANs; rather, it is a synonym for link aggregation (bonding in Linux, or lagg on BSD, which you seem familiar with, based solely on your name). But I have been wrong before, and likely will be again :p
 
Zarathustra[H] said:
I'm not good with VLANs so I'll leave that to someone else.

As far as the controller goes, I am not sure if ESXi has any built-in software RAID, so you'd probably need to flash the controller back to its original RAID-capable firmware and use the option ROM at boot time to set up the mirror and install to it (that is, unless you go the ZFS-in-a-guest route described above).

I presume your CPU and motherboard support VT-D?

So once I have flashed the firmware back to IR mode and created a mirrored volume, will ESXi see the datastore as a mirrored volume? Or as single drives?

Yes, my CPU/mobo supports VT-D. I have the Xeon 1230v3 with 16GB ECC RAM.
 
Zarathustra[H] said:
I think you may have misunderstood trunking. (either that, or I have)

On my HP ProCurve switch - at least - trunking has nothing specifically to do with VLANs; rather, it is a synonym for link aggregation (bonding in Linux, or lagg on BSD, which you seem familiar with, based solely on your name). But I have been wrong before, and likely will be again :p

I think I am confused about this (been reading lots!).

I guess my question should be:

How does VLAN tagging work and what do I need to use it? (I'm not referring to Link Aggregation or bonding).

Lets use an example:

I have two VLANs in my ESXi host:

DMZ - VLAN 100

Production - VLAN 200

How do I get a VM in VLAN 100 to communicate with a VM in VLAN 200 (or vice versa)?

Do I need a router? A trunk port? I am confused over all this! :confused: :D

FYI: In the beginning I'll only have one PHYSICAL NIC connected in my server.
 
With VLANs you can transport physically separated networks over the same cable. To separate them you need a VLAN-capable switch or a VLAN-capable OS.

With ESXi you can use VLANs over your single physical NIC (connected to a VLAN-capable switch) and assign either one VLAN (untagged) or all VLANs (tagged) to a virtual NIC.

If you want to connect VLANs to each other, you must do the same as you would to connect physically separated networks - you need a router.
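
On the ESXi side the VLAN ID is just a property of the portgroup. A sketch from the ESXi shell (vSwitch0 and the portgroup names are examples only):

esxcli network vswitch standard portgroup add --portgroup-name=DMZ --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name=Production --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=DMZ --vlan-id=100
esxcli network vswitch standard portgroup set --portgroup-name=Production --vlan-id=200

The physical switch port that the ESXi uplink connects to must then carry both VLANs tagged, and for traffic between VLAN 100 and VLAN 200 you still need a router (a physical one or a router VM).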
 
So once I have flashed the firmware back to IR mode and created a mirrored volume, will ESXi see the datastore as a mirrored volume? Or as single drives?

Yes, my CPU/mobo supports VT-D. I have the Xeon 1230v3 with 16GB ECC RAM.

Since you have VT-D, you can use what is called Direct I/O forwarding (PCI passthrough) to take any PCIe device (and some motherboard-integrated peripherals) and assign it directly to one of your guest OSes. It then becomes statically assigned to that guest OS.

This is how many people do ZFS on ESXi: they forward the LSI controller to a guest (ESXi then no longer sees it for itself), and that guest uses the LSI driver directly and gets any drive attached to it. They boot ESXi off a different controller (usually the onboard SATA).
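
The rough flow, if it helps (the enable step lives in the vSphere Client under the host's Configuration tab, in the Advanced Settings / DirectPath I/O section, if I remember the menus right):

esxcli hardware pci list   # find the LSI 2308 entry and note its PCI address and whether it is flagged passthrough capable

Enable passthrough for that device in the client, reboot the host, then add it to the storage guest as a PCI device in the VM's settings.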

Sorry, can't help much on VLANs. I've confused myself on this subject a few times too. What are you trying to do? Why do you feel you need multiple VLANs? I always thought VLANs were used to isolate parts of the network from each other, but you want them to talk to each other?
 
I have that same board, currently running ESXi 5.5 with the onboard SAS in IT mode passed to OmniOS (ZFS All-In-One). Would recommend that route if you're comfortable with ZFS. I believe IR mode would be better if you just want to do RAID instead.

That board has two NICs - couldn't you give each network its own NIC for external communication? You could then use a dirt cheap router if you needed to (like Gea mentioned). Or if you want to go single cable and a software router, I think pfSense would be your best option. Run the incoming network to the WAN side of pfSense, and the LAN side of it would go to an internal vSwitch.

I'm still learning VLANs and pfSense myself, but I'm looking to do something similar. Have a "home" network and a "lab" network I'm putting together. Right now I'm looking to set up VPN access as the primary means to access the lab side as it should work for both remote and home access.
 
I have that same board, currently running ESXi 5.5 with the onboard SAS in IT mode passed to OmniOS (ZFS All-In-One). Would recommend that route if you're comfortable with ZFS. I believe IR mode would be better if you just want to do RAID instead.

That board has two NICs - couldn't you give each network its own NIC for external communication? You could then use a dirt cheap router if you needed to (like Gea mentioned). Or if you want to go single cable and a software router, I think pfSense would be your best option. Run the incoming network to the WAN side of pfSense, and the LAN side of it would go to an internal vSwitch.

I'm still learning VLANs and pfSense myself, but I'm looking to do something similar. Have a "home" network and a "lab" network I'm putting together. Right now I'm looking to set up VPN access as the primary means to access the lab side as it should work for both remote and home access.

Sounds like a nice setup. Didn't realize the onboard controllers could be crossflashed to IT as well!

What do you boot off of if the onboard SAS is forwarded to your ZFS guest? Are the Intel SATA controllers still operational once you use the onboard SAS? On the server boards I have seen with onboard SAS, SAS and SATA are an "either/or" proposition selectable in the BIOS.
 
My current setup is to boot ESXi 5.5 off a MicroCenter 16GB USB 3.0 thumb drive that's plugged into the port on the motherboard. I have a 256GB Crucial MX100 attached to the first motherboard SATA port (6Gbps), and on that I have OmniOS running.

The SAS controller is passed through to the OmniOS VM, on which I have a few hard drives set up as ZFS mirrors. Once passed through in ESXi, those ports can't be used by any other system, including ESXi. OmniOS then shares out the ZFS pool via NFS back to ESXi, which then uses it as a standard datastore.

Seems kind of convoluted to write it all out, but essentially you're using ZFS as a platform for your ESXi datastores. That way you get all the benefits of ZFS and ESXi just sees a raw datastore.
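
If it helps, the command-line shape of it is short. Pool/dataset names, device names and addresses below are made up - adjust to your setup (and FreeNAS does the equivalent through its GUI):

# On the OmniOS storage VM:
zpool create tank mirror c1t0d0 c1t1d0   # example device names
zfs create tank/vmstore
zfs set sharenfs=rw=@192.168.1.0/24,root=@192.168.1.0/24 tank/vmstore

# On the ESXi host (or via the vSphere Client):
esxcli storage nfs add --host=192.168.1.20 --share=/tank/vmstore --volume-name=zfs-vmstore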
 
Thank you all for your replies! I've been giving this much thought, and having just finished watching the ESXi 5.5 CBT Nuggets videos I have a bit more of an idea of what I want to do, but I still have some questions!

First some background. My current FreeBSD 10 server has the following hardware:

Mobo: Supermicro X10SL7-F
CPU: Intel Xeon E3-1230 V3 Haswell
RAM: 2 x 8GB DDR3 PC3-12800 Unbuffered ECC 1.35V
SSD: 2 x 128GB Samsung 840 PRO SSD
PSU: 550W Seasonic G-550
Case: Fractal Design Define Mini R4 (3 x 120mm fans and 1 x 140mm fan) (this case can accommodate 4 x 2.5" SSD drives and 6 x 3.5" SATA drives)

This currently hosts my email and a few other items but email is its primary role.

Moving forward, I would like to format the above machine (I can run my email on a spare machine temporarily) and install ESXi 5.5 on it. So I *think* I need the following hardware to make it run well for virtualisation:

1) An extra 16GB of RAM so that the total RAM is 32GB

2) Another two SSDs, mirrored - I'm thinking of going with the new Samsung 850 Pro drives, in either the 256GB or 512GB size. I want ALL my guest VMs' system drives to run off SSD storage.

3) Two Western Digital Red 6TB drives

The SSDs will be used for the guests system drives whereas the SATA Red drives will be used for mass storage (backups, file server etc).

I am still not 100% sure whether the LSI 2308 running in IR mode will be sufficient for my needs. I read somewhere that there is no caching on this controller, so performance is dreadful with ESXi - is this true? I think the person said they got under 10MB/sec using the LSI 2308 with SATA drives attached to it as an ESXi datastore?! :confused: Something about ESXi not doing its own write caching, hence the need for a hardware RAID card?

Also, can anyone recommend a good USB 2/3 drive I can use to install ESXi on that won't die after a few months' use? I don't want to install ESXi on any of the hard drives, so I can use them all for guest storage and backups.

I plan on running up to 8 or 10 guest VMs on this server. Will be running Windows Server 2012 R2, Exchange 2013, SQL 2014 and some VMware guests (vCenter and Data Protector). I'll be pushing this ESXi host to the max ;)

So am I on the right track? Although I have installed ESXi in a test VM, I have never actually installed it on a proper host. Hence my confusion and questions about the RAID card.

So should I use the LSI 2308 in IR mode or make the jump to a separate hardware RAID card? If the latter, which RAID card, considering my needs described above?

Thanks everyone for your help. If I have left anything out or you have any questions please let me know and I will do my best to answer them! :cool:

Edit: I thought I would leave out VLANs and networking for now!

Edit2: Here is the link re poor performance from onboard RAID:

https://communities.vmware.com/thread/488202
 
I don't see the point in mirroring the SSD drives ... that space is premium and you'd halve it with a mirror.
If anything .. mirror your 6TB drives that will hold your backups.

Most folks are buying this motherboard (X10SL7) for bare metal storage boxes or for virtualized storage with the controller passed through to a storage VM (like RaiderJ and Zarathustra mentioned).

Since VMware doesn't do caching, performance will be slow if the controller is used bare metal, which is why people pass the controller to a storage VM that does the caching (i.e. what RaiderJ does and Zarathustra mentions).

Alternative: IT mode, where each drive is its own datastore you can assign VMs to (spread the IO load as appropriate).

USB: I have used 4/8GB Sandisk Cruzer Blade and Kingston DataTraveler drives without issue up to ESXi 5.1 (not used 5.5 on bare metal yet).
 
Your storage concerns are exactly why I just put my onboard SAS into IT mode and used ZFS. If you don't want to mess with ZFS, then use the controller in IR mode.

Netwerkz101's suggestion of using the 6TB drives in a mirror but keeping the SSDs standalone has merit. Just keep in mind that if a 6TB drive fails, you're going to be waiting a LONG time for the replacement to resilver, and during that time you're unprotected from another failure. Make good backups off-system!

My plan for storage is to set up 4x256GB Crucial MX100 SSDs in a ZFS striped mirror set. Gives me 512GB of fault-tolerant storage, and is easy to expand by adding another mirrored pair if I need it.
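
In ZFS terms that's just a pool made of two mirror vdevs, and growing it later is a one-liner. Sketch with placeholder device names (they'll look like da0 on FreeBSD/FreeNAS or c1t0d0 on OmniOS):

zpool create ssdpool mirror da0 da1 mirror da2 da3   # four SSDs as two mirrored pairs, striped
zpool add ssdpool mirror da4 da5                     # later: add another mirrored pair to expand
zpool status ssdpool                                 # sanity check the layout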

Not sure why you'd need another RAID card if you put the 2308 in IR mode. I've never used it like that, but I can't imagine performance is that bad (~10MB/s).
 
I don't see the point in mirroring the SSD drives ...

To prevent the entire environment from shutting down due to a hard drive failure. If I have 8 VMs running off a single drive and it fails...ouch.

Most folks are buying this motherboard (X10SL7) for bare metal storage boxes or for virtualized storage with the controller passed through to a storage VM (like RaiderJ and Zarathustra mentioned).

I don't think this is an option for me. If I have a VM doing virtualised storage and it uses ZFS, then I need at least 8GB of RAM for that VM and that means sacrificing 25% of my total RAM for storage alone. I want to use ALL 32GB of RAM for guest VMs (I know the virtualised storage VM is a guest but you know what I mean)

Since VMware doesn't do caching, performance will be slow if the controller is used bare metal

Aaah, well I do want a fast performing ESXi host and I am willing to spend a bit of cash on it. I am going to be running up to 10 VMs (maybe more if I am careful with resource allocation) and the last thing I want is for disk I/O to be a bottleneck. If this means I need a RAID card with cache memory then so be it. Please make some recommendations if you don't mind!

USB: I have used 4/8GB Sandisk Cruzer Blade and Kingston DataTraveler drives without issue up to ESXi 5.1 (not used 5.5 on bare metal yet).

Thanks for that, I shall consider these.

Is this for production or home lab?

It's for production use at home. I host my own email as well as others' (I have a static IP). If there are spare resources on the ESXi host I shall have some test VMs running, but these are not important. I can always test software on my desktop with VMware Workstation.

So to summarise:

1) I don't think I want to use virtualised storage. In fact, you can ignore ZFS altogether as I want to have a local VMFS datastore.

2) Giving it some thought, maybe I can operate a single 512GB SSD and a single 6TB SATA drive. I can and will keep some backups on a separate machine in case I lose the drive with the guest VMs on it. The great thing about this is that it halves the amount I have to spend on drives. Hmmm, still not sure about running a live environment on non-RAIDed volumes!

3) For home use, what sort of ESXi compatible RAID cards with cache should I be looking at?

I think I pretty much have everything figured out for setting up an ESXi host; it's just the storage I am battling with.

Thanks again for everyone's input.
 
I believe ESXi should work with just about any RAID card, assuming it just presents the RAID volume to the host. Your costs on the card will be somewhat high, since you need a decent card that has some onboard cache, battery backup (BBU), and ideally can support an SSD cache.

I would still argue for a ZFS setup. Even if you just give it 8GB RAM, you should still be able to get really good performance out of it, have an SSD cache, and better data integrity. Not to mention easier backups/snapshots. It IS more complicated... which is why I've avoided it for work setups (since I'm the only ZFS guy around).
 
I believe ESXi should work with just about any RAID card, assuming it just presents the RAID volume to the host. Your costs on the card will be somewhat high, since you need a decent card that has some onboard cache, battery backup (BBU), and ideally can support an SSD cache.

I would still argue for a ZFS setup. Even if you just give it 8GB RAM, you should still be able to get really good performance out of it, have an SSD cache, and better data integrity. Not to mention easier backups/snapshots. It IS more complicated... which is why I've avoided it for work setups (since I'm the only ZFS guy around).

I appreciate your reply and it actually got me thinking about this some more :D

I'm beginning to think plain local ESXi storage on this controller is a bad idea (slow). I also don't think a RAID card is going to help me unless I go for some expensive option.

I did a quick (very rough) estimate of how my RAM will be used by the VMs, and I think I can spare 8GB of RAM for the FreeNAS server. Will 8GB give good performance? I know with FreeBSD 10 and 16GB of RAM it absolutely flies! But I can't give FreeNAS 16GB of RAM. Is 8GB OK? At a push I could do 10GB... maybe 12GB.

I don't mind that it's more complicated as long as it makes sense. So from what I understand it works as follows:

1) Leave LSI 2308 in IT mode
2) Connect SSD and SATA drives to LSI 2308 SAS ports
3) Create FreeNAS VM in ESXi
4) Pass through LSI 2308 controller to FreeNAS VM
5) Set up disks in FreeNAS in an appropriate ZFS RAID layout
6) Set up iSCSI to present the ZFS storage to the ESXi host
7) Create ESXi datastore using iSCSI storage

How does that sound? :)
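
From what I've read, step 6 is all done in the FreeNAS GUI (zvol + extent + target), and step 7 on the ESXi side would be roughly this - the vmhba number and IP are just examples, so please correct me if I've got it wrong:

esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.20:3260
esxcli storage core adapter rescan --adapter=vmhba33

Then create a VMFS datastore on the discovered LUN from the vSphere Client.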
 
I appreciate your reply and it actually got me thinking about this some more :D

I'm beginning to think plain local ESXi storage on this controller is a bad idea (slow). I also don't think a RAID card is going to help me unless I go for some expensive option.

I did a quick (very rough) estimate of how my RAM will be used by the VMs, and I think I can spare 8GB of RAM for the FreeNAS server. Will 8GB give good performance? I know with FreeBSD 10 and 16GB of RAM it absolutely flies! But I can't give FreeNAS 16GB of RAM. Is 8GB OK? At a push I could do 10GB... maybe 12GB.

I don't mind that it's more complicated as long as it makes sense. So from what I understand it works as follows:

1) Leave LSI 2308 in IT mode
2) Connect SSD and SATA drives to LSI 2308 SAS ports
3) Create FreeNAS VM in ESXi
4) Pass through LSI 2308 controller to FreeNAS VM
5) Set up disks in FreeNAS in an appropriate ZFS RAID layout
6) Set up iSCSI to present the ZFS storage to the ESXi host
7) Create ESXi datastore using iSCSI storage

How does that sound? :)

I've used ZFS with 8GB RAM and had no performance issues, but that was for my home media server (primarily reads). It's a different use case than what you have, but it will definitely work. You can always increase the RAM later if performance isn't up to speed, but it's very likely you'll be I/O-limited before you're RAM-limited.

In general, you give ZFS as much RAM as possible. Then try adding an SSD as L2ARC for additional performance. From there you can see how many hits the L2ARC is getting, which will tell you a bit more about your actual performance needs.

Your steps above look good, but personally I've found NFS to be the better share method. Can't say I'm very experienced with either iSCSI or NFS, so don't let my experience guide you there.
 
Sounds pretty good, but I would use NFS instead of iSCSI.

I find it performs better, and is more secure during power loss as iSCSI doesn't do sync writes (as I understand it).

Besides, NFS has the added bonus of being able to access the datastore from other machines, making moving data to and from it more convenient. iSCSI requires that you make an image which is then treated sort of like a physical drive, in that it is dedicated to one machine (in this case ESXi) at a time.
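
(That convenience is literally just a normal NFS mount from any other box on the network - made-up IP and paths:

mount -t nfs 192.168.1.20:/tank/vmstore /mnt/vmstore   # Linux
mount_nfs 192.168.1.20:/tank/vmstore /mnt/vmstore      # FreeBSD

and then you can browse or copy the .vmdk files directly.)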
 
Oh, one comment. Keep in mind that you will likely want to disable SYNC on any hard drive based ZFS pools. Without a SLOG your performance on those pools will be less than you want. You do give up some data integrity, but I'll assume you have a UPS to protect against hard shutdowns.
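
It's a per-dataset property, so you can turn it off just for the VM datasets - assuming a pool called tank:

zfs set sync=disabled tank/vmstore   # async only: fast, but the last few seconds of writes can be lost on a crash/power cut
zfs get sync tank/vmstore            # verify; 'standard' is the default, 'always' forces sync for everything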
 
Oh, one comment. Keep in mind that you will likely want to disable SYNC on any hard drive based ZFS pools. Without a SLOG your performance on those pools will be less than you want. You do give up some data integrity, but I'll assume you have a UPS to protect against hard shutdowns.

Yeah, I should clarify here as well.

By default NFS will use sync writes, iSCSI will not. Simply based on this, if you don't have a good dedicated SLOG (a ZIL drive) iSCSI will outperform NFS. If you either disable sync on NFS, or get a decent SLOG drive (an Intel S3700 seems to be the preferred choice on here) NFS will be faster.

The ZIL is the ZFS intent log. Some people call it a write cache, but that is really wrong, as during normal use it is never read from, just written to.

It works like this: in order to be more efficient, ZFS (and most RAID systems) groups writes in RAM and then writes them to disk once every cycle (the length of the cycle varies with the configuration).

This represents a typical async write. The system receives the data, keeps it in RAM, and then lies to the client, telling it the data has been committed to the stable drives and to carry on. This increases performance, but in the one or two seconds until the next write cycle the data exists in RAM only and can be lost if the system hangs or loses power.

This may seem trivial if you are just dealing with a file server. If you lose the last file or so, it can be annoying, but not a huge deal. When you are dealing with disk images (like a Guest operating system on an ESXi server) - however - the last second or two of writes being lost could easily result in a corrupt unusable disk image.

So, instead you can enable sync writes. Much more secure, but it easily bogs down your RAID array with frequent small writes and can absolutely kill performance.

Different RAID systems work around this in different ways (high end enterprise server systems have battery backup RAM on RAID controllers, but those can be pricey).

The way ZFS deals with this is by allowing you to add a SLOG (a separate log device) to house the ZIL (ZFS Intent log). This has to be on a fast, low latency SSD.

The way the ZIL works is that instead of lying to the client and telling it the data has been committed to stable media, the system writes a log of what it intends to write to the main array to this fast SSD. The data to be written to the hard drives remains in RAM and is committed to the drives at the next write cycle, and then removed from the SLOG. The SLOG drive is never read from in normal use - only if something happens and the server is restarted (power loss, crash, etc.) before the next write cycle is complete.

If this happens, on the next mount, ZFS then reads the ZIL from the SLOG, and reconstructs what should have happened during the write and saves the data.

Pretty much any modern SSD as a SLOG will be faster than doing sync writes directly to hard disks, but in order to approach the speed of async writes while sync writing you need an SSD that is optimized for single-queue-depth writes. This pretty much precludes most of the performance consumer SSDs we know and love on these forums, as they are usually optimized for higher queue depths, which is better for desktop use.

The 100GB Intel S3700 is a very popular model for this purpose on here. Since the SLOG only needs to hold a couple of seconds of writes, it only needs to be a few GB in size, so most people underprovision these drives by manually creating a small partition (2-3GB should be sufficient, but I did 15GB just to be safe) and adding that partition as the SLOG. This ensures the SLOG device lasts longer.
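
On FreeBSD/FreeNAS the whole thing looks roughly like this, assuming the S3700 shows up as da6 and the pool is called tank (sizes and labels are just examples):

gpart create -s gpt da6
gpart add -t freebsd-zfs -s 16G -l slog0 da6   # small partition = manual underprovisioning
zpool add tank log gpt/slog0                   # attach it to the pool as the SLOG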

Doing async writes can be risky if you really care about your data, but then again, many people do it for years without having any problems at all. You'll need to determine your own risk tolerance and weigh it against how much money you want to spend.

If you want to be REALLY safe you can mirror two devices for the SLOG, as there is still a small risk of data loss if you lose one of the SLOG SSDs AND lose power/crash within a few seconds of each other. Very small risk, but for critical data this is still considered best practice.

Anyway, the more you know... :p
 
To prevent the entire environment from shutting down due to a hard drive failure. If I have 8 VMs running off a single drive and it fails...ouch.

<snipped for brevity>

I think I pretty much have everything figured out for setting up an ESXi host; it's just the storage I am battling with.

lol ... seems like we go in circles, no??

Okay ... few options based on what you say you want and limitations given:

1. Buy a hardware RAID card w/cache + BBU (I've used (LSI-based) Dell PERC 5i/6i/H700 cards). There are much newer/more advanced controller offerings from LSI and others now if you want to spend the money - the newer ones don't need a BBU... very nice.

2. Buy a purpose-built NAS (i.e. QNAP/Synology).
3. Buy a new ESXi host (w/o storage controller) and turn the current system into a bare metal NAS.

I have done #1 and #2 - separating hosts and storage works better for me, but I have no special performance or IO requirements like you do (SQL). Maybe having local storage will be better for SQL/Exchange if they will be home "production".

I hit the ~125MB/s limit of Gigabit, but the NAS IO itself is no problem. Since I bought the X10SL7 for a bare metal NAS, I can upgrade to 10GbE when it's time (#3).
 
Thanks again to all for the comments and posts! Rather than respond individually I thought I would reply in one post :)

From reading this thread I have come to the conclusion that a hardware RAID card will not help performance in ESXi. That's OK - I will use the onboard LSI 2308 controller and pass it through in ESXi to a FreeNAS guest.

I'm still not sure whether to use NFS or iSCSI. I have Googled it but that didn't help! I don't mind which option I go with as long as I get good read/write performance from my SSD storage. I will be using a UPS with this ESXi host. I like that with NFS I can access my files from another machine. I don't really want to go the L2ARC route if I can avoid it. I don't have an L2ARC in my FreeBSD server and the speed is stupid fast, but then again this ESXi host will be performing a very different role.

Keep in mind that you will likely want to disable SYNC on any hard drive based ZFS pools. Without a SLOG your performance on those pools will be less than you want.

I have never done this before in ZFS. Can you expand on this a bit more, please? I do want the SSD (and SATA!) ZFS pools in FreeNAS that are presented to ESXi to run optimally.

I think what I am going to do to get started with this project is:

1) Upgrade RAM to 32GB
2) Re-use a 1TB SATA drive I have and use this for Veeam backups
3) Re-use a 2TB SATA drive I have for other data
4) Purchase a single 512GB Samsung 850 Pro drive to be used for guest OS drives (later on I will mirror this)

Since I will have 32GB of RAM I am planning on allocating it to guest VMs as follows:

Domain Controller 2GB
File Server/Veeam backups 4GB
Exchange 2013 (CAS/mailbox) 8GB
vCenter Storage Appliance 4GB
FreeNAS/ZFS 8GB
Exchange 2013 Edge 4GB
SQL 2014 Express 2GB

Total: 32GB

As you can see I will be using ALL the RAM on this host. I'm not even sure if this is ok? Does the ESXi host need some of its own RAM to manage the guests?

I was planning on passing the LSI 2308 controller to the FreeNAS guest and then using the 1TB, 2TB, two 128GB SSDs and the single 512GB SSD for guest VMs. The only thing I am unsure of is: where do I install FreeNAS to? :confused:

Overall, am I on the right track? This sure is going to be an interesting home project! :cool:
 
Now that you have decided to run FreeNAS as a VM ...
just want to pass on this link for your review: FreeNAS as a VM???

Basically ... it seems that if you virtualize FreeNAS, just don't go asking for help in the FreeNAS forums.

I'm new enough, dangerous enough, and scared enough to run it bare metal for my "production", but I do use it in a VM in my "non-prod" volatile lab (VMware Workstation).

Isn't this all so much fun??? :D
 
Now that you have decided to run FreeNAS as a VM ...
just want to pass on this link for your review: FreeNAS as a VM???

Basically ... it seems that if you virtualize FreeNAS, just don't go asking for help in the FreeNAS forums.

I'm new enough, dangerous enough, and scared enough to run it bare metal for my "production", but I do use it in a VM in my "non-prod" volatile lab (VMware Workstation).

Isn't this all so much fun??? :D

Yeah, some of the people over in the FreeNAS forums are kind of douchebags with strict recommendations.

I've been running FreeNAS in a VM since September 2012 without any issues.

FreeNAS IS based on BSD and thus the scheduler isn't the most efficient in a guest environment (though I hear that is solved in FreeBSD 10) but I find it to be fine.

I like FreeNAS, but if you are more comfortable with it, Napp-IT is also ZFS-based, fully supports virtualized environments, and even comes as a VMware appliance.

The FreeNAS argument is kind of stupid and goes something like this.

Don't virtualize FreeNAS because you might do something stupid (like trying to run it off of image files instead of a passed-through controller) rather than embracing proper ZFS virtualization techniques.

They are throwing out the baby with the bathwater if you ask me, and they have a grumpy-ass moderator over in their forums who tends to over-police and criticize anyone who doesn't toe the line.

I make a point of frequently posting things I know will annoy him (like lots about my VMware experiences) :p
 
How about I try approaching this from another angle:

What if I have two 6TB drives passed through to FreeNAS, mirror them with ZFS in FreeNAS, and then set up an L2ARC, a ZIL (SLOG), or both to improve the NFS performance to ESXi?

I have two Samsung 840 Pro 128GB SSDs which I could potentially use for this task, but they may not be ideal. I don't mind buying another SSD (or two) if needed.

Would this setup give me good read/write performance for the VMs running in ESXi?
 
How about I try approaching this from another angle:

What if I have two 6TB drives passed through to FreeNAS, mirror them with ZFS in FreeNAS, and then set up an L2ARC, a ZIL (SLOG), or both to improve the NFS performance to ESXi?

I have two Samsung 840 Pro 128GB SSDs which I could potentially use for this task, but they may not be ideal. I don't mind buying another SSD (or two) if needed.

Would this setup give me good read/write performance for the VMs running in ESXi?

Let's clarify one thing first: pass through your controller, not your individual drives. There are ways to pass through individual drives, but they are of questionable reliability, and I just wouldn't do it.

Then, let me say that I have never used my ZFS as a datastore, so I lack personal experience here. Mine serves files and is storage for my MythTV DVR.

Essentially, there is no "one size fits all" ZFS implementation. There are volumes of forum discussions on how to optimize performance of ZFS, and it is usually very workload specific.

That being said, if you plan on doing async writes (which we have determined carries some risk), a SLOG (separate log device for your ZIL) will not make any difference at all. The ZIL is only used during sync writes.

An L2ARC (read cache) device MIGHT help, though the help is usually rather small unless your working dataset fits within the combined ARC (RAM cache) and L2ARC.
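
L2ARC is at least cheap to experiment with, since you can add and remove it on a live pool (pool name and label are placeholders):

zpool add tank cache gpt/l2arc0   # add an SSD (or partition) as L2ARC
zpool iostat -v tank 5            # watch how much the cache device actually gets used
zpool remove tank gpt/l2arc0      # harmless to pull back out if it isn't earning its keep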

As far as mirroring two 6TB drives, I may have to defer to someone else on the performance considerations on this. My understanding is that mirrors tend to have the same performance as the slowest of the single drives in the mirror on writes, but have some increased speeds during reads due to being able to read from both drives at the same time. Not sure how this impacts guest datastore performance compared to a RAIDz or RAIDz2 vdev.

What I can discuss - however - is size efficiency. If you go with the two 6TB drives in a mirror, you will be paying for 12TB in drives, and getting 6TB in space. At almost $300 a piece for the 6TB drives, that's a lot of money spent, especially considering the "largest drive" penalty cost per TB.

If you already have a couple of 2TB drives or something like that (not sure what's in your parts pile), you might be able to do a 6-drive RAIDz2 with 2TB drives. You'll get 8TB available (four drives' worth of capacity) with two-drive redundancy. Your sequential speeds will be much higher than a mirror, but your IOPS may go down (which may or may not be relevant given the SLOG/L2ARC/async considerations above). Here I would have to defer to someone with datastore-on-ZFS experience.

The added bonus of a setup like this is that if you ever want to expand space, you can do so by pulling, replacing and rebuilding one drive at a time to grow the vdev, without losing data.
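
For reference, that layout and the grow-by-replacement dance are roughly this (placeholder device names):

zpool create tank raidz2 da0 da1 da2 da3 da4 da5   # 6 x 2TB -> ~8TB usable, any two drives can fail
zpool set autoexpand=on tank                       # let the pool grow once every drive has been upsized
zpool replace tank da0 da6                         # swap drives for bigger ones, one at a time, resilvering between each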

Also, mirrored drives have the same data integrity problem as RAID5/RAIDz arrays do, which is: once one drive is down and you are rebuilding, you are very vulnerable to a second drive failure or UREs, especially with a HUGE disk size like 6TB.

Consider this,

You have a mirror of two 6TB drives (or a RAID5/RAIDz volume of 6TB drives, where there is one redundant drive)

According to WD's specs, the unrecoverable read error rate can be as high as 1 in 10^14 bits read.

During normal operation, if there is a flipped bit, the system will correct it automatically during read from parity.

If, however, you lose one drive, you have no more redundancy, so while you are rebuilding, every read is vulnerable to bit errors.

In a mirror of 6TB drives you are reading 6*1000^4*8 = 4.8*10^13 bits to rebuild, so at that error rate you expect 0.48 unrecoverable errors on average - roughly a 38% chance of at least one bit being lost uncorrected during the process.

It gets even worse if - say - you have a 5-drive RAIDz volume of 6TB drives and a drive fails. Then you are reading 4*6*1000^4*8 ≈ 1.9*10^14 bits to rebuild it, so you can expect on average about two flipped bits per rebuild that won't get corrected.

This is why those of us obsessed with data integrity always recommend RAID6/RAIDz2, (and replacing failed drives immediately :p )
 
I think I need to take a step back as I am getting bogged down with all the nitty gritty details here :confused: Lets start from the beginning and then drill down into details.

The objective here is to turn an existing server into an ESXi host.

To do that I need some storage - high-speed SSD storage that can do 200+MB/sec. This is so that I can have a datastore in ESXi that holds all my guest virtual machines' VMDK (hard drive) files and so that the VMs are responsive and run well.

So if you were in my position what would you do?

If (big if) I were to just use the LSI 2308 in IR mode and RAID a couple of SSDs, what sort of speed (MB/sec) and IOPS could I expect in ESXi and the VMs?

Sorry if this sounds like a stupid post but I am battling to come to a conclusion to move forward :D

Appreciate everyones input and time.
 
Sorry for getting down into the weeds.

ZFS is very easy to set up in a basic configuration using one of the many appliance/frontends out there like FreeNAS and Napp-IT, but once you get into making a specific setup oriented towards an application, it can quickly get complex, and require some reading.

Very interesting reading if you ask me, but a lot of it nonetheless.

I'll leave your questions above to someone else, as I have not done that type of setup, so I don't have any experience to draw from.
 
No need to apologise :D I think I am just overwhelmed!

Surely the answer to all my questions is to just buy a decent hardware RAID controller with cache and have all my disks connected to it? Would this solve the ESXi performance issues?

I don't mind investing in a good RAID card but I have no idea where to start when choosing one. The great thing about this approach is that I won't need a virtual storage VM which would be GREAT for me as I would have more RAM for extra guests!

For the RAID card I would want to connect 4 SSD drives and 2 SATA drives.
 
No need to apologise :D I think I am just overwhelmed!

Surely the answer to all my questions is to just buy a decent hardware RAID controller with cache and have all my disks connected to it? Would this solve the ESXi performance issues?

I don't mind investing in a good RAID card but I have no idea where to start when choosing one. The great thing about this approach is that I won't need a virtual storage VM which would be GREAT for me as I would have more RAM for extra guests!

For the RAID card I would want to connect 4 SSD drives and 2 SATA drives.

How large of a datastore do you think you need?

For what it's worth, I run 6 guest OSes on my ESXi box (two BSD-based, four Ubuntu-based - granted, none with GUIs) and all of them run off a single 128GB Samsung 840 Pro SSD, and it never feels sluggish.

With spinning drives it can take some performance tweaking to get multiple guests moving smoothly, but I have found that using a single SSD makes all those performance woes go away.

I don't even bother with a mirror or redundancy, as I figure I can easily recreate OS installs and configs, and I back them up regularly anyway. All my data (the stuff I REALLY care about) resides on my redundant ZFS file server (one of my guests).
 
Zarathustra[H] said:
How large of a datastore do you think you need?

For what it's worth, I run 6 guest OSes on my ESXi box (two BSD-based, four Ubuntu-based - granted, none with GUIs) and all of them run off a single 128GB Samsung 840 Pro SSD, and it never feels sluggish.

With spinning drives it can take some performance tweaking to get multiple guests moving smoothly, but I have found that using a single SSD makes all those performance woes go away.

I don't even bother with a mirror or redundancy, as I figure I can easily recreate OS installs and configs, and I back them up regularly anyway. All my data (the stuff I REALLY care about) resides on my redundant ZFS file server (one of my guests).

For the datastore... or datastores, I'll need at least 500GB or so for guest system drives. Then I want to have a file server VM that'll have at least 2TB of space in it for data. I also want to keep disk backups, so that will be maybe 1TB.

I hear what you're saying about the SSD - I ran 5 VMs on my desktop in VMware Workstation and boy was it fast! But this server is a different animal... which brings me back to:

A hardware RAID card!

Been having a look around and I came across this card:


MegaRAID SAS 9260-8i


It looks **AMAZING**. Some specs:

Cache Memory 512MB 800MHz DDR II SDRAM
Host Bus Type x8 lane PCI Express® 2.0
Data Transfer Rates Up to 6Gb/s per port


It has 2 Mini-SAS SFF-8087 connectors, so I guess I can get up to 8 SATA drives connected with the breakout cable(s). What I am not 100% sure of is whether the SATA drives can connect at 6Gb/s. Some people say they only connect at up to 3Gb/s (which would be a shame). I have contacted LSI support asking for clarification on this.

Now, I'm assuming that with the above RAID card I would get awesome performance in ESXi thanks to the 512MB cache?

Looks like it supported by ESXi 5.x:

LSI ESXi 5.5 driver

and my motherboard has the "PCI-Express 2.0 x4 in x8" slot so I assume this is ok.

Appreciate any thoughts! On eBay I can get the card for under £200...
 
The LSI 9260-8i is what the Dell PERC H700 is modeled after. Good cards, but it is one of the "older" cards, as I mentioned previously.

I don't remember any issue with SATA drives not running at 6Gb/s speeds ..but it's been a while.
 