ESXi AIO Critique

Nexillus

Hello everyone,

First, I wanted to thank those who helped me in my previous thread many months back. Now that I have a bit more time, I am moving on to purchasing and implementing the ESXi AIO. Here's the thread that gave me some direction.

I have not done an ESXi build or had much experience on the server side of computers; however, I have tried to do enough research to get caught up. Any critique of or recommendations for this list would be appreciated.

Hardware:
CPU: Xeon E3-1230V3 Haswell
SSD: Agility 3 256GB (re-used from another computer mainly for VMs)
HDD: HGST 4TB NAS drives (Already purchased 4)
RAM: 32GB ECC
MB: SuperMicro MBD-X10SL7-FO (Will reflash LSI 2308 to IT firmware)
PSU: Seasonic 650W Gold (re-used from another computer)
Tower: Rosewill RSV-L4411
LSI HBA: IBM M1015 (reflash with IT firmware)
File system: ZFS on OmniOS with napp-it (all-in-one under ESXi)
Good USB stick

Questions for the Pros:
I have 3 Seagate HDDs: 1x 2TB ST2000DM001 and 2x 3TB ST3000DM001. Do I even want to use them in the system, given that they are not NAS-grade drives and Seagate's failure rates are well known?
What is your preferred backup approach: 3-4TB external drives, or hot-swap drives used as cold storage?
For the motherboard and ECC memory, should you stick only with certified sticks that are on the manufacturer's QVL?

Thanks all for the help!

Respectfully,
Nexillus
 
I'll try and answer a few questions for you, though someone with a bit more knowledge will likely follow up and correct me if I'm wrong.

That list of hardware specs works perfectly... the 1230v3 is a decent processor for almost any application, and for home-use VMs it's a good choice. You're pretty well stuck with that kind of setup or going to an older LGA 2011 setup; for the money, I'd do exactly what you have now.
RAM is fine as is; more is always better, but 32GB will be good for some VM work.
The hard drives (4TB) are a good choice. I don't imagine you'll have any problems with them, and they should last you a good long while. Just be sure you have an idea of how you're going to set them up: RAID 5, RAID 10, etc. Personal preference matters more here; I tend to lean towards ZFS using raidz, but you'll need to decide which method is best for you. I use raidz (RAID 5 equivalent) on a 5-drive setup of 2TB drives. It works fine; I've lost a drive on each array at some point and it never gave me any hassle, though if I had lost a second it would have been toast. Just figure out what you'll need it for and work around that.
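For reference, the two common layouts with your four 4TB drives would look roughly like this on OmniOS (the pool name "tank" and the c1t0d0-style device names are just placeholders; napp-it builds the same thing from its GUI):

# raidz (RAID 5 equivalent): one drive of parity, roughly three drives of usable space
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

# striped mirrors (RAID 10 equivalent): roughly two drives of usable space, better random IO
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0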
The SSD is a good idea for VMs, definitely. Make sure to set up a backup solution that backs the VMs up onto the mechanical drives, so as to keep yourself safe.

On the note of the 2TB and 3TB Seagates: if you can spare the space, put the two matching drives in, put them in a RAID 1, and use them to back up your VMs to; that way you keep them separate from the storage setup you'll be using. Not that anything would be harmed if you did back up to the storage array, but personal preference applies again. You could even put the 2TB in as a spare to those. It depends how safe you'd like to be... Remember, though, that backing up locally is great, but it doesn't do you any good if a flood, tornado or any number of other things happens. You could keep one of the drives out and use it externally, then every week or two cross-ship with a friend somewhere, or take it to work and put it somewhere safe there. Like I said, it depends how safe you're looking to be.
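If you go that route, a rough sketch of what it could look like from the OmniOS side (the pool, dataset and device names here are made up; napp-it's snapshot/replication jobs do the same thing on a schedule):

# one-time: build the backup mirror from the two 3TB Seagates
zpool create backup mirror c2t0d0 c2t1d0

# recurring: snapshot the VM dataset and copy it to the mirror
zfs snapshot vmpool/vms@backup-1
zfs send vmpool/vms@backup-1 | zfs receive backup/vms

# later runs only send the changes since the previous snapshot
zfs snapshot vmpool/vms@backup-2
zfs send -i vmpool/vms@backup-1 vmpool/vms@backup-2 | zfs receive backup/vms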

On backing up to externals, I don't personally... but I have too much space used to do that easily. Instead, I abuse CrashPlan a bit and upload most of the important parts of my server there. It saves me externals at the cost of time to re-download, but it's still better than nothing. You could do what I recommended above, swapping drives every week or two or shipping back and forth with a friend. Personally, I'd put them at work in a locked drawer or someplace like it, then on Monday bring one home, back up, and bring it back Tuesday morning (that minimizes the time for things to go wrong versus taking it home for the weekend). Alternatively, you could hook that drive up there and back up daily or weekly over the network, assuming they'd let you and both sides can handle the bandwidth.

On ECC... I don't stick with the QVL. It's a nice thing to have, sure, and it's preferred when doing a setup where you want to be sure it'll work because troubleshooting isn't an option. It's mostly targeted at companies buying new hardware, and it usually costs a slight bit more. If it's a few dollars a stick, do it for sanity if nothing else. But with Amazon's return policy, I can return most anything if it doesn't work out (money was tight building my last server, and the extra $25 for QVL memory wasn't an option thanks to my wife at the time). Chances are it will work without issue so long as you're sticking with a known vendor. I've yet to have any ECC, registered or unbuffered, fail in a build. The closest I had to that was a DOA stick (tried it in 2 systems, it didn't work in either, and one of them had it on the QVL), but the other 3 matching sticks worked fine.

Any ideas what systems you're going to look at for VMs and what each will be targeted towards?
 
Not a bad idea using the 2x 3TB in a RAID 1 as a local backup for the main array. I really like this idea, and it gives those drives a good purpose. As for external backup, I'm working on a small M-ITX NAS build with roughly 4-6TB of space for important stuff to be backed up, and it will be located halfway across the United States.

Good to know on the QVL list. I know that with unbuffered non-ECC it's not too picky, but not having dealt with much ECC I wasn't sure if it is pickier. Basically, get some good unbuffered ECC DDR3 (which is what the E3/X10SL7 takes) and call it a day.

As for the VMs, it will be a web server, a Minecraft server, and a pfSense/OpenVPN server, and eventually I will implement an email server down the road.
 
One observation: if this is going to serve up storage for VMs, you don't want raidz* of any flavor, since you only get the random IOPS of a single vdev. Unless you have mongo storage requirements for the VMs, I would go with a 2x2 RAID 10 (striped mirrors), since you will get the random IOPS of roughly 2 drives for writes and 4 drives for reads (on average).
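Rough numbers to illustrate the point, assuming something like 100 random IOPS per 7200rpm drive (just a planning figure, not a benchmark):

raidz of 4 drives -> 1 vdev  -> ~100 random write IOPS, ~100 random read IOPS
2x2 mirrors       -> 2 vdevs -> ~200 random write IOPS, ~400 random read IOPS (either disk in a mirror can serve a read)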
 
Thanks all again, I ordered all the parts. They should all be here within the week. I ended up taking advantage of Newegg's Bitcoin deal and saved over $300 just from that, which was great on the MB/CPU and the HGST HDDs.
 
So I have hit a roadblock now that I have all my parts. I successfully flashed the onboard SAS (LSI 2308) to IT firmware; however, the M1015 is not detected in the SAS configuration or under DOS/FreeDOS using the following command:

sas2flash -o -listall

It only lists the onboard controller, therefore I am unable to flash the M1015, but I do see its ROM load, as I get two ROM pages, one for each controller (I have since disabled the onboard ROM, so now I see one).
 
That is because you would need an older motherboard for that to work.

Don't use FreeDOS; use the UEFI shell to flash it.
 
I have tried the EFI shell and both DOS/FreeDOS on the Supermicro board; the card doesn't show up, only the onboard controller does.

I put the card into another computer that is running Windows; its option ROM comes up at boot, however I could not get sas2flash to run, it would just sit at the DOS prompt.
 
I don't understand what you mean. Did you try EFI, or did you try DOS and FreeDOS?

I said to use the uefi shell, DO NOT USE DOS at all.

LSI supplies a UEFI shell flash utility, and it works great. The issue is that the BIOS runs out of space to map everything into the old 640K-1MB range, so the flash utility in DOS can't do it, because the card doesn't end up in the memory map. Using the new 64-bit UEFI BIOS, without DOS, removes this issue.

I have been flashing these using uefi for a few years now, works great.
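For what it's worth, a rough outline of the usual UEFI shell sequence (exact file names depend on which LSI firmware package you grab; sas2flash.efi, the 2118it.bin IT firmware and the mptsas2.rom boot ROM go on the USB stick you launch the shell from, and the address in the last step is whatever is printed on your card's sticker):

sas2flash.efi -listall                          <- confirm the M1015 shows up
sas2flash.efi -o -e 6                           <- erase the existing flash (don't reboot until the next step is done)
sas2flash.efi -o -f 2118it.bin -b mptsas2.rom   <- flash the IT firmware and boot ROM (leave off -b mptsas2.rom if you won't boot from the card)
sas2flash.efi -o -sasadd 500605bxxxxxxxxx       <- put the card's SAS address back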
 
The EFI shell is different than the DOS shell. You need to get a different version of the flash utility to flash in EFI.
 
IIRC from my crossflash, the M1015 was not detected until I cleaned the card with megarec. See this page.
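For anyone following along, the megarec cleanup those crossflash guides describe is roughly this, run from DOS/FreeDOS on a board that will actually run it (the "0" is the adapter index, and sbrempty.bin ships with the guide's download):

megarec -writesbr 0 sbrempty.bin    <- blank the SBR so the card stops identifying as a MegaRAID
megarec -cleanflash 0               <- wipe the flash; then reboot and flash the IT firmware with sas2flash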

I tried both methods and was finally able to flash it with the help of HammerSandwich; that is what it needed. sas2flash now lists both adapters, however the LSI configuration utility only sees the M1015 now and not the onboard adapter. Grrr...
 