New home ESXi NAS build - advice much appreciated

Gnavox

n00b
Joined
Feb 25, 2012
Messages
43
Hi.

I am going to completely redesign my mini-ITX NAS and transform it into an ESXi/virtualization capable NAS and home server.

The mini-ITX NAS is currently a DIY "consumer-ish" machine with decent NAS drives, 8 GB of non-ECC RAM and a motherboard with an onboard E-350 CPU. On the software side, it runs Windows Home Server with DrivePool. The NAS serves media to my household and handles various automation tasks related to that same media. However, it has limited CPU power, and its OS sits on a partition on the worst of its drives - a WD Green that didn't have its head-parking issue fixed until it was too late.

What I want is a more powerful "enterprise-ish" machine designed according to best practices, which functions as a 100% reliable NAS and at the same time allows me to experiment with virtualization and virtual machines (as part of my education and line of work). I imagine that I will install ESXi, but I haven't decided which OS I will use for the NAS software.

Any advice is much appreciated as I am not very familiar with "real" servers or enterprise IT equipment. Please take a quick look and fill in the gaps.

Case: Zalman MS800 (changed)
Link

Motherboard: Supermicro X10SL7-F (changed)
Link

Processor: Intel Xeon E3-1230v3
Link

HDDs:
3x 3 TB Western Digital Red (already own them)
2x 4 TB Seagate NAS HDD (will be purchased)

HBA:
None. LSI 2308 onboard.

RAM:
2x 8 GB Samsung DDR3-1600 (PC3-12800) ECC (M391B1G73BH0-CK0)

SSDs:
Samsung 840 EVO 250 GB

Flash drive (USB, mSATA, SATA DOM):
Advice needed! How many do I need and what can be recommended? Is ONE for ESXi enough provided that the NAS operating system is a VM on the SSD?
 
Hi.

I am going to completely redesign my mini-ITX NAS and transform it into an ESXi/virtualization capable NAS and home server.

The mini-ITX NAS is currently a DIY "consumer-ish" machine with decent NAS drives, 8 GB of non-ECC RAM and a motherboard with an onboard E-350 CPU. On the software side, it runs Windows Home Server with DrivePool. The NAS serves media to my household and handles various automation tasks related to that same media. However, it has limited CPU power, and its OS sits on a partition on the worst of its drives - a WD Green that didn't have its head-parking issue fixed until it was too late.

What I want is a more powerful "enterprise-ish" machine designed according to best practices, which functions as a 100% reliable NAS and at the same time allows me to experiment with virtualization and virtual machines (as part of my education and line of work). I imagine that I will install ESXi, but I haven't decided which OS I will use for the NAS software.

Any advice is much appreciated as I am not very familiar with "real" servers or enterprise IT equipment. Please take a quick look and fill in the gaps.

Case: Inter-Tech IPC 4U-4310L
Link
A 10-bay NORCO-style case. X-CASE in the UK markets the same case, but it is out of stock. I do not need more than 10 bays, as I imagine I will begin replacing my drives with larger-capacity drives once I reach 30-40 TB.

Motherboard: Supermicro X10SLM+-F
Link
Supports VT-d and IPMI, and has dual gigabit LAN. The 4x SATA III and 2x SATA II connectors will suffice for now.

Processor: Intel Xeon E3-1230v3
Link
Requires no further explanation.

HDDs:
3x 3 TB Western Digital Red (already own them)
2x 4 TB Seagate NAS HDD (will be purchased)

HBA:
None. As far as I understand, it is not necessary until I have more than 6 drives connected to the motherboard.

RAM:
2x 8 GB ECC - please feel free to suggest a particular brand

SSDs:
Advice needed! Will ONE 256 GB SSD for my VM datastore suffice? Which SSD can be recommended? Do I need SSDs for caching or something?

Flash drive (USB, mSATA, SATA DOM):
Advice needed! How many do I need and what can be recommended? Is ONE for ESXi enough provided that the NAS operating system is a VM on the SSD?

I'm currently building a very similar setup to yours.

X10SLH-F + E3-1230v3 + 16GB Kingston ECC unbuffered DDR3.

Aerocool VS-9 case with 2x CSE-M35TQB and 1x CSE-M14TB backplanes.

The CSE-M14TB will hold 4x 2.5" SATA drives in RAID 10 for the host OS + VMs. I'm going to use mdadm + ext4 + LVM for this.

The first CSE-M35TQB will hold 5x 4 TB Seagate NAS HDDs OR Hitachi Deskstars (haven't decided yet) in RAID 6 for storage, using mdadm + XFS (I want to be able to add more disks later without sacrificing more capacity to parity).

The second CSE-M35TQB will hold 5x 1 TB Seagates in a RAID-Z2 ZFS pool. I'm going to use an OpenIndiana VM for this.

I'll have all my data divided into 3 layers:

1) Everything stored on raid6 XFS array, always available.
2) Important data backed up to ZFS pool. Only available when running backups. (And if I need to recover something of course.)
3) Data I can't live without gets backed up to the cloud.
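
For anyone weighing a similar layout, here is a quick back-of-the-envelope on what the three layers above yield in usable space (plain Python, raw TB, ignoring filesystem overhead; the 2.5" drive size isn't stated above, so 1 TB is just a placeholder):

Code:
def raid10_usable(n_drives, size_tb):
    # mirrored pairs striped together: half the raw capacity survives
    return n_drives * size_tb / 2

def double_parity_usable(n_drives, size_tb):
    # RAID 6 and RAID-Z2 both give up two drives' worth of space to parity;
    # the mdadm RAID 6 can later be grown a disk at a time, a RAID-Z2 vdev cannot
    return (n_drives - 2) * size_tb

print("VM layer,  4x 2.5\" RAID 10 :", raid10_usable(4, 1.0), "TB (assuming 1 TB drives)")
print("Storage,   5x 4 TB RAID 6  :", double_parity_usable(5, 4.0), "TB")   # -> 12.0 TB
print("Backup,    5x 1 TB RAID-Z2 :", double_parity_usable(5, 1.0), "TB")   # -> 3.0 TB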

I will run Gentoo on the host and the VMs under Xen. The host Gentoo will also be NATing my internet connection and sharing the XFS array.

It sounds really messy and stupid to mix so many filesystems, etc., but I found this suits my needs best.



I'd suggest getting two of those SSDs for the VMs and running them in RAID 1 for reliability. You could probably partition them and install ESXi on there too. The capacity of the SSDs obviously depends on your VMs. If you're running a ZFS pool, you could add an SSD for caching purposes if you really need the performance (I doubt it).

For the OS, if you don't want to expand the array(s) in the future, you can't go wrong with any BSD-based OS + ZFS combo.
 

I must admit that your thread about the above build inspired my selection of parts. However, I opted for the 4U rack case because, with the drive caddies, a cheaper tower case would end up costing about the same.

I think your setup is a bit too advanced for me :) Perhaps in particular because I am a Windows kind of guy. I would prefer to have the hypervisor on a flash drive or similar, the datastores on an SSD, and all remaining data on my mechanical drives. That's all I know at this point. I have little experience with virtualization, but I imagine that one VM will be the NAS OS (FreeNAS, unRAID or something) and the mechanical drives will be formatted accordingly.

Any advice with regards to specific RAM, SSDs or flash drives?
 

Any USB flash drive will work for ESXi. Try to spend a little bit more than the minimum, as I find the value USB drives are really slow. 4-8 GB in size is PLENTY.

I agree with the RAID 1 of the SSDs, assuming your motherboard supports it. If you're backing up the VMs and they aren't running anything mission-critical, you can get away with just a single SSD.

Prepare yourself for slow speeds on those spindle drives. Virtualized spinning SATA is pretty damn slow in ESXi (throughput-wise).
 
ESXi's own storage capabilities are lousy. If you intend serious ESXi use, think about shared NFS storage, either with a dedicated box or via an all-in-one (virtualized NAS/SAN).

For an all-in-one, you need a dedicated HBA that you can pass through to your NAS VM, or your performance will be quite bad.

You may read about or compare my all-in-one solution (check a board like the SM X10SL7-F with its included LSI HBA, comparable to an LSI 9207):
http://www.napp-it.org/doc/downloads/all-in-one.pdf
 

My data is not mission-critical, so I can live with backing up the VMs to the mechanical drives.

What do people normally do when using ESXi/virtualization for NAS purposes (among others)?


Thanks a lot for the feedback. I will definitely read the PDF ASAP.

I've been asking why people buy HBAs, and the answer I've been given is that they do so because they need more SAS/SATA ports than are available on their boards. But if I understand you correctly, using the onboard SATA ports on the motherboard (in spite of VT-d support) comes with a performance penalty?
 

If you intend to virtualize a ZFS NAS/SAN on ESXi with performance and reliability comparable to a dedicated hardware NAS/SAN, you must assign your storage (controller and disks) directly to your NAS VM via pass-through (VT-d) to give ZFS full disk control.

Because of the nature of VT-d, you cannot pass through single disks, only complete PCI devices. So you need your local SATA controller for ESXi and a local datastore (to put your NAS VM on), and a second (or more) controller for your disks that you can pass through.
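
To make that split concrete, here is a rough sketch of how the pieces end up divided in such an all-in-one box (plain Python, purely illustrative; the names are placeholders, not real ESXi or Solaris device IDs):

Code:
# Illustrative only: which side of the VT-d boundary each piece lands on.
all_in_one = {
    "stays with ESXi": {
        "boot": "USB stick or DOM with ESXi itself",
        "local datastore": "SSD on the chipset SATA ports - holds only the NAS VM",
    },
    "passed through to the NAS VM": {
        "controller": "LSI 2308 HBA as a whole PCI device (single disks cannot be passed)",
        "disks": ["3x 3 TB WD Red", "2x 4 TB Seagate NAS"],   # from the parts list above
        "exports": [
            "NFS datastore back to ESXi for the remaining VMs",
            "SMB/NFS shares to the rest of the household",
        ],
    },
}
for side, details in all_in_one.items():
    print(side, "->", details)

The point is that ZFS sees the raw disks on the passed-through HBA, while ESXi itself never touches them.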
 

Thank you for the explanation. Seeing as I want an all-in-one box, which is going to be the home server for all purposes, it seems that I will need a controller for my mechanical drives that I can pass through.

Would I be better off with the currently selected Supermicro X10SLM+-F board and an IBM M1015 flashed to IT mode, or the Supermicro X10SL7-F with the built-in LSI 2308? The latter solution is cheaper, but is it better?

http://www.supermicro.com/products/motherboard/xeon/c220/x10sl7-f.cfm
 

It mostly does not really matter, but the 2308 (used in the LSI 9207 HBA) is the newer and faster one compared to the IBM M1015 (comparable to an LSI 9211 HBA). You also need to reflash both of them to IT mode. (The LSI 9207 comes flashed to IT mode from stock.)

When I build all-in-ones, I normally use two pools: a fast one for VMs built from SSDs (for example a mirror) and a large pool built from slow 2-4 TB disks for storage and backup.
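
If it helps to picture it, the two-pool layout boils down to a couple of zpool commands inside the NAS VM. A sketch (Python is only used here to list them; the c*t*d* device names are placeholders for whatever the passed-through HBA actually exposes):

Code:
commands = [
    # fast pool: two SSDs mirrored, holds the VMs and is shared back to ESXi over NFS
    ["zpool", "create", "vmpool", "mirror", "c1t0d0", "c1t1d0"],
    # big pool: the slow 2-4 TB disks with double parity, for bulk storage and backup
    ["zpool", "create", "tank", "raidz2", "c2t0d0", "c2t1d0", "c2t2d0", "c2t3d0", "c2t4d0"],
]
for cmd in commands:
    print("would run:", " ".join(cmd))

If you start with a single SSD, the first pool is simply a one-disk vdev; a second SSD can be attached later to turn it into a mirror.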
 
Thank you once again. Do you happen to know if it is possible to reflash the LSI 2308 to IT mode?

I will do as you suggest and operate with two pools of drives. However, to begin with, and perhaps for the foreseeable future, my first pool will only consist of one SSD hosting 3-4 VMs.
 
I had an SSD ZFS pool for VMs mounted via NFS, but I got very low performance (even with sync disabled) and ESXi occasionally just dropped it if there was heavy load on the OmniOS VM. So I just moved it directly to ESXi as a local datastore.
I really don't see the point of having an SSD in an all-in-one setup for VMs only, mounted through another VM via NFS. If there are multiple drives involved that you can use in a ZFS RAID, then maybe. But in this situation, it's not worth it.
 

It is, and I have done it. However, it is a little more nerve-racking, because if you screw it up, that built-in device is never usable again. Also, at least with the Supermicro board I have, the ports for the onboard controller are on the edge of the board, which in my case is a very tight location, making getting the cable in and out very difficult.
 

Thanks - nice to know that it is possible.

I hope that space won't be a problem in the case that I link to above - I doubt it will be.
 
What do people normally do when using ESXi/virtualization for NAS purposes (among others)?

As Gea mentioned, ESXi sucks at storage. Yes, you can pass through your controller, but passthrough itself kind of sucks because you can't utilize vMotion, which is one of the best things about virtualization. Granted, you will only have one host, so it's probably not a big deal.

Regardless, I don't virtualize my storage; perhaps if you used HP's VSA, you could have semi-decent results.

If your plan is to use onboard RAID with 7200 RPM drives for your misc. file storage, expect throughput of around 40 MB/s. I believe that is about average unless you have a higher-quality RAID controller.
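
For perspective, some rough numbers (plain Python; round figures, and real-world results vary a lot):

Code:
def mb_per_s(gigabit_per_s):
    # 1 Gbit/s = 1000/8 = 125 MB/s on the wire, before any protocol overhead
    return gigabit_per_s * 1000 / 8

print("1 GbE ceiling: %.0f MB/s" % mb_per_s(1))          # ~125 MB/s theoretical

for label, rate in [("40 MB/s (virtualized local SATA)", 40),
                    ("~110 MB/s (well-tuned 1 GbE iSCSI/NFS)", 110)]:
    hours = 1e6 / rate / 3600                            # time to move 1 TB (10^6 MB)
    print("%-40s -> ~%.1f h per TB moved" % (label, hours))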

Shared storage is the way to go, via iSCSI, NFS or Fibre Channel. I.e., use a Synology for storage and iSCSI everything over.

Perhaps your budget won't cover it, which is understandable; just don't expect Windows-like performance from those 7200 RPM drives in ESXi.

The answer? Get a storage box, and get an ESXi box.

*EDIT: Also, why spend money on server hardware if you're just going to have 16 GB of RAM? Just use a cheap mobo, a decent i5 CPU, and two NICs. Spend the rest of your dough on the storage box. You don't need ECC RAM if it's a home lab. Only use server mobos if you want to go over the 32 GB limit most consumer mobos have.
 
file storage, expect throughput of around 40 MB/s. I believe that is about average unless you have a higher-quality RAID controller.

Or non-Windows software RAID.
 

WD Reds are only 5900 RPM, if I remember correctly.

So you're saying that running a NAS OS, such as FreeNAS, on ESXi is a bad idea and such an OS would be much better off running on a dedicated storage server? Would performance (in terms of throughput) be better for both the VMs on the ESXi server and client connections to its shares?

Also, how would you configure the aforementioned storage box? FreeNAS proponents, for example, advocate for server-grade hardware and ECC RAM.
 

It's not so much a bad idea, but in my opinion storage boxes shouldn't be virtualized. Enterprise storage is very, very rarely virtualized. Virtualization is great for many, many things, but everything has overhead, and I find the last place you want overhead is between you and your storage.

If you're set on FreeNAS (a good storage OS) and will be using ZFS, then there is a case to be made for using ECC memory. Does that mean you have to? Absolutely not. Does it mean you need a fairly large amount of memory for ZFS? Yes, if you care about performance, and it helps to use SSDs for your ZFS ZIL and L2ARC. An SSD-backed ZIL and L2ARC boost VM performance enormously (if you used it just for file storage, you could forego the SSDs).
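
If you do go ZFS, hanging those SSDs off the pool is just a couple of commands. Roughly (a sketch only; "tank" and the partition names are placeholders):

Code:
commands = [
    ["zpool", "add", "tank", "log", "ssd-slog-partition"],     # small partition as SLOG for sync writes
    ["zpool", "add", "tank", "cache", "ssd-l2arc-partition"],  # the rest of the SSD as L2ARC read cache
]
for cmd in commands:
    print("would run:", " ".join(cmd))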

The CPU doesn't need to be anything fancy, although you'll want at least an i3 with a bit of beef if you'll be doing iSCSI (you would be). The main thing you will need is ports for your HDDs, so you'll want a motherboard with at least six, and then possibly an HBA on top of that for more drives. Then the power supply, the case, etc.

Do you want my honest opinion? Get a nice 8-bay (or 5-bay) NAS (Synology or QNAP), put all your drives in it, and just iSCSI to it from the ESXi box that you will set up. Use the SSD in the NAS as well. You talk about 100% uptime, and that is your answer.

These brands of NAS have so many cool features now, are rock solid, and are decent storage boxes for ESXi (in a lab). They're quiet, have beautiful UIs, and consume very little power.

ZFS has its place, but since your drive count is so low, you seem like a good candidate for a NAS. Then just build a small ESXi host (decent CPU, a bunch of memory) and iSCSI (over NICs) your storage over to it. You'll learn a whole bunch about ESXi and iSCSI in the process, and your overall finished product will be more polished.
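
To give you an idea of how little is involved on the ESXi side once the NAS exposes an iSCSI target, the hookup is roughly this (a sketch from memory - double-check the exact options against the vSphere CLI docs; the adapter name and IP are placeholders):

Code:
commands = [
    ["esxcli", "iscsi", "software", "set", "--enabled=true"],          # enable the software iSCSI initiator
    ["esxcli", "iscsi", "adapter", "discovery", "sendtarget", "add",
     "--adapter=vmhba33", "--address=192.168.1.20:3260"],              # point it at the NAS
    ["esxcli", "storage", "core", "adapter", "rescan", "--all"],       # rescan so the LUN shows up
]
for cmd in commands:
    print("would run:", " ".join(cmd))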

*shrug*
 

I hear what you're saying, but have some concerns. First of all, I already have a mini-ITX DIY NAS in a stylish Lian Li PC-Q25 case, but its E-350 CPU is too slow. Furthermore, its hardware is limited in terms of future expansion (in the number of drives). I am concerned that I will have the same problems with a Synology or QNAP. Also, at least here in Europe, an 8-bay NAS is almost as expensive as the entire build that I have suggested in the OP. Furthermore, I would need a new dedicated ESXi box, seeing as my current NAS wouldn't suffice for the purpose. A Dell C1100, for example, would add further to the cost and is not as power-efficient as the Xeon E3-1230v3.
 
You should really consider running Linux + Xen with the VMs on top of it. You can build the array using ZFS on Linux (ZoL), or even a virtual FreeNAS if you want to. Xen isn't really THAT complicated. If you're new to Linux, just install some newbie-friendly distro (Ubuntu, Debian) and google for instructions on how to install Xen and get the VMs running.
 
Unless you have a teachable-moment use case, I would probably go the Linux route here too. I would do KVM + ZFS on Linux.
 
As Gea mentioned, ESXi sucks at storage. Yes, you can pass through your controller, but passthrough itself kind of sucks because you can't utilize vMotion, which is one of the best things about virtualization. Granted, you will only have one host, so it's probably not a big deal.

This affects only your storage VM, where you cannot move the storage hardware. You can vMotion all other VMs from the shared NFS storage.

The idea of all-in-one is to have all the features that you usually get with a dedicated high-end SAN, but in one box:

- easy, high-speed access to VMs via NFS or SMB for VM backup/clone/move/restore
- block-based storage virtualisation (iSCSI/FC)
- snapshots (without the speed degradation and the limits on their number that you get with ESXi)
- RAM/SSD cache
- secure sync writes
- checksums (ZFS) and scrubbing for data security
- a full-featured NAS/SAN as a filer/backup/media server with all ZFS features like LZ4 compression, and the best non-Windows file server if you use OmniOS
- the best platform to virtualize any OS, incl. OS X, Linux, Windows or Solaris, including those that need dedicated hardware like a sat media server or USB (video cards to come)
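
To make the NFS piece of that list concrete, the hookup between the NAS VM and ESXi is roughly the following (a sketch only; the filesystem name and IP are made up, and the NAS-side command assumes an OmniOS/Solaris-style ZFS):

Code:
commands = {
    "on the OmniOS NAS VM": [
        ["zfs", "set", "sharenfs=on", "tank/vmstore"],        # publish the filesystem over NFS
    ],
    "on the ESXi host": [
        ["esxcli", "storage", "nfs", "add",
         "--host=192.168.1.50",                               # NAS VM's IP on the internal vSwitch
         "--share=/tank/vmstore",
         "--volume-name=nfs-vmstore"],                        # shows up as a normal datastore
    ],
}
for where, cmds in commands.items():
    print(where)
    for cmd in cmds:
        print("  would run:", " ".join(cmd))

Once the NFS datastore is mounted, VM backup, clone and move are just file operations on the ZFS side.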
 
Just updated the list. I will order a Zalman MS800 instead of the Inter-Tech.

Can anyone recommend another or better SSD than the upcoming Samsung 840 EVO? The Kingston SSDNow KC300 seems to be in the same price range, but I cannot find any reviews of it.

Also, can anyone recommend a particular flash drive, mSATA disk or SATA DOM to host ESXi?
 

I've ordered the SL7 motherboard based on Gea_'s advice and will order the rest of the parts this evening, but I'm having trouble finding a flash drive to host ESXi. Any advice is appreciated.

I have been looking into MLC USB sticks and SLC SATA DOMs, but cannot really figure out what the majority uses.
 