New build ideas

sixpac (Limp Gawd, joined Jun 8, 2004)
Hey all, I am looking to do a new build for my lab. I want to go with two systems, probably a Supermicro build. My budget is around $5000 for everything.

Not sure if I should go DDR3 or 4, but I want to go with ECC. Hoping to get an Intel 6- or 8-core processor and a minimum of 64GB RAM per host.

My current ESXi 6 rig is connected via iSCSI to a Synology 1813+ with 8x 4TB WD Reds in RAID 6. IOPS is pretty low, which I am not happy about. All networking goes through a Cisco 2960 gigabit switch.

summary of wants:

DDR3 or 4?
64GB ECC memory or more
6-core Intel CPU
Supermicro motherboard
6 NICs per machine (Intel)
Chassis doesn't have to be rack mount
Gold power supply (500W)?
SSDs for datastores?

Thoughts? Suggestions?

thank you!
 
"It depends!!!"

I'm just going to focus on the "Thoughts?" portion of your post...
There are many questions to be answered...but I'm not asking them :p

Thought: I wish I had that budget ....though scope creep is leading me there.
Thought: $5000 = build a complete 3-node (maybe 4) VSAN assuming you stick with ESXi.
Thought: Maybe a combo of dual port 10Gbit NIC + 4 port 1Gbit ... throughput!!!!
Thought: SSD storage .... IOPS!!!
Thought: Power supply needs depend on the power draw of the system
(besides motherboard, CPU, and RAM, you'd need to specify what add-on cards).
500W could be overkill or just right ..... dunno with no info.
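As a rough sketch of that sizing exercise (every wattage figure below is an illustrative assumption, not a measured draw for any specific part):

```python
# Rough PSU sizing sketch. All wattage figures are illustrative
# assumptions; check the actual datasheets for your components.
cpu_tdp = 85           # e.g. a mid-range Xeon
motherboard = 40
ram_per_dimm, dimms = 4, 4
ssd_each, ssds = 5, 2
nic_card = 15          # one add-on dual-port NIC

peak_watts = (cpu_tdp + motherboard
              + ram_per_dimm * dimms
              + ssd_each * ssds
              + nic_card)

# Size the PSU so the estimated peak lands near 50-60% of its rating,
# where most Gold-rated units hit their best efficiency.
recommended_psu = peak_watts / 0.55
print(peak_watts, round(recommended_psu))
```

By this kind of estimate, a build like the one described peaks well under 200W, which is why a 500W unit can easily be overkill.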

Thought: If that 1813+ is upgradable to 10Gbit NIC, try that + one in your current rig + SSD storage and you might be happy.

The current re-incarnation of my lab uses the following:
Supermicro X10SRi-F motherboard
Intel E5-2620v3 CPU
64GB ECC DDR4..

Unfortunately...I am not completely done yet so can't put up any useless numbers for you. ;)
 
The E3 v5 Xeons just came out. These should be able to do 16GB x 4 = 64GB of ECC DDR4. I'd look into this to save probably quite a bit of money if I were you. These are only 4-core, however.

This avoids dual sockets and $400+ Xeons.

http://www.newegg.com/Product/Product.aspx?Item=N82E16819117611&ignorebbr=1
http://www.crucial.com/usa/en/ct16g4wfd8213

Also: DL360/380 G6/G7s are super cheap...
http://www.ebay.com/itm/HP-ProLiant...708402?hash=item58c87c4132:g:oU0AAOSwjVVVlD9B
http://www.ebay.com/itm/HP-ProLiant...223315?hash=item1c5cfdd513:g:tnUAAOSwLzdWTO-a
 

Ok thanks. I am going to go with the same CPU and motherboard that you picked out.

The 1813+ can't do 10G, which sucks, so I am probably going to go the SSD route. Gearing up and licensing for VSAN is going to be too rich for my blood.

Not really going to have extra add-in cards except for networking, so I should be able to get away with 400-watt power supplies?
 

You should investigate: https://www.vmug.com/evalexperience
You get access to a whole suite of VMware products - including VSAN.

I mentioned my own lab components because they seem to match what you mentioned: a 6-core processor and a Supermicro motherboard that can handle 64GB+ of RAM.

I would encourage you to dig deep and figure out what you want your lab to look like when you finish building it ...
doing this may make you realize you don't necessarily want the same components I have ...
so I guess I should lay down some of the questions I was hoping others would jump in and ask:

1. What's the purpose of the lab?
- Is it a true lab where you will constantly build/rebuild?
- Is it a home lab but will provide "production" work for your home?
- Mimic something in your work environment?

2. What will your work load look like?
- High CPU use?
- High IOPS?
- High bandwidth/throughput?

Those are the main ones...someone can add to it.


A little background on why I chose the components I did.
My original plan was to build my newest lab based on the Supermicro X10SDV-TLN4F.

While impressive, I could not get past the price tag for a limited mobo/CPU combo at ~$900+.

For about $500 more I had a complete node with 64GB DDR4 RAM by going
with the E5-2620v3 and SM X10SRi-F.

Lost 2 cores and 4 threads .... whoopie!!! I have never hurt in the CPU area ... only RAM and IO/IOPS.

I can always change out the processor if that becomes an issue ...can't do that on the X10SDV.

I doubted I would really need more than 32GB RAM .. and didn't until I started playing with VDI ... easy to max out if you're generous like me and give each VM 3-4GB RAM.
I can go up to 256GB RAM (512GB is possible, but I got the cheaper 32GB modules) if need be ... double the X10SDV limit.
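The RAM-runs-out-first experience is easy to sketch. The per-VM sizes here are just the 3-4GB example above, and the hypervisor overhead figure is my own assumption:

```python
# How many VDI-style VMs fit in a host's RAM? The hypervisor
# overhead is an assumed round number; real ESXi overhead varies.
def max_vms(host_ram_gb, ram_per_vm_gb, overhead_gb=4):
    return (host_ram_gb - overhead_gb) // ram_per_vm_gb

for host_ram in (32, 64, 256):
    print(host_ram, "GB ->", max_vms(host_ram, 4), "VMs at 4GB each")
```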

The main killer (and it took some forward thinking) was the one PCIe slot on the X10SDV.
I wanted to be able to change out NICs/controllers as needed, even if I never do it .. just wanted the capability.


I'm way past TLDR so I'll stop ....just trying to get the thoughts/conversation going.

ps....

I use 200W power supplies ... the only add-on cards I have currently are some Mellanox 10GbE cards ... so it is fine.
If I decide I want to test video card passthrough, I'll need to start looking for higher-rated power supplies. Not there yet.


So again....what do you want your lab to look like when finished (initial build of course ...we never really stop building)?
 
Yep, already subscribed to the evalexperience. I am actually running dual hosts right now, and this is a replacement build.

I throw the kitchen sink at mine and am maxed out on RAM and IOPS like you, but not so much on CPU; still, I would rather have more to use than not. It's not quite a true lab (I do use it for some home production stuff).

I could consider another box (you need at least 3 for VSAN), but I don't really want to run more than 2 hosts (too much energy).

Biggest issues with my current build are memory (only 32GB, maxed out) and IOPS. Almost all my VMs are sluggish except for some of the Linux VMs.

I am sold on the Supermicro board and CPU but still not sure what to do about the lack of IOPS and high latency.

Oh, and good to know about the 200-watt P/S. Didn't think 200 watts would be enough. I doubt I will be doing any GPU passthrough stuff at any time.
 
Demote your current hardware to iSCSI storage hosts and give each box a 10Gb NIC:
2x 10Gb Intel X540-T2 NICs: $400 x 2 = $800
2x eBay Gulftown 6-core servers = $800
2x 10Gb Intel X540-T2 NICs for the new servers: $400 x 2 = $800

That leaves $2600 for storage:

Samsung EVO 500GB x 17

Get rid of parity. Run RAID 10.

or
forget iSCSI altogether and go with direct-attached storage
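The parity-vs-RAID 10 point comes down to write penalty. A back-of-envelope comparison (the per-disk IOPS figure is an assumption for a typical 7200rpm drive):

```python
# Back-of-envelope random-write IOPS by RAID level.
# write_penalty = physical I/Os per logical write:
# RAID 10 -> 2, RAID 5 -> 4, RAID 6 -> 6.
def raid_write_iops(disks, iops_per_disk, write_penalty):
    return disks * iops_per_disk // write_penalty

hdd_iops = 150  # assumed per-disk figure for a 7200rpm drive
print("RAID 6, 8 disks: ", raid_write_iops(8, hdd_iops, 6))
print("RAID 10, 8 disks:", raid_write_iops(8, hdd_iops, 2))
```

With the same 8 spindles, dropping RAID 6 for RAID 10 roughly triples random-write IOPS by this estimate; moving to SSDs changes the per-disk figure by orders of magnitude on top of that.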
 
The Xeon Ds look very interesting. Check out this Supermicro board with an 8-core processor and 10Gb networking.

http://www.newegg.com/Product/Product.aspx?Item=N82E16813182964

You can custom build a nice FreeNAS SAN and use the 10Gb to connect directly to each ESXi host rather than spending the money on a 10Gb switch.

I am interested in this one.

http://www.newegg.com/Product/Product.aspx?Item=N82E16813182973

It is a quad-core Xeon, but this motherboard is less than $500 and comes with 2x 10Gb ports. It has a PCIe slot, so you could add a 2- or 4-port NIC if you wanted. I would build that with a SATA DOM and 64GB of memory for about $1,200. Figure two of them for $2,500, or two for $3,500 with the 8-core processors, and the rest goes toward the SAN.
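For anyone checking that estimate, the per-node math breaks down roughly like this (all prices are ballpark assumptions from the discussion, not vendor quotes):

```python
# Ballpark per-node cost for the quad-core Xeon D build.
# Every price here is a rough assumption, not a vendor quote.
node = {
    "motherboard_with_cpu": 500,  # Xeon D board, 2x 10Gb onboard
    "ram_64gb_ecc": 550,
    "satadom": 100,
    "case_and_psu": 100,
}
per_node = sum(node.values())
print("per node:", per_node)
print("two nodes:", 2 * per_node)
```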

I just built a FreeNAS SAN with this motherboard. http://www.newegg.com/Product/Product.aspx?Item=N82E16813157419

Adding 10Gb to it would make it perfect as it easily maxes out the 1Gb link.
 
Hmmm, the Xeon Ds look interesting. I could afford to get 3 of them, put them in a cluster with 32GB of memory each, and then use VSAN for my datastore.

I would love to play around with 10G right to the desktop, but the prices are still too high for my liking (switch-wise).

I am definitely going to look into putting together a build with these smaller nodes.
 
Any ideas on what memory and case you are going to use? I am trying to find the lowest-wattage PSU I can.
 

Hard to say, especially since I have no interest in VSAN. For my NAS I went with this case http://www.newegg.com/Product/Product.aspx?Item=N82E16811163255 since it holds 8x 3.5" hot-swap drives and 4x 2.5" drives. This is a nice case too http://www.amazon.com/gp/product/B0065SKNNK/ and a great price (Newegg is more than twice what Amazon is charging), but still kind of big for VSAN.

For memory, I haven't checked Supermicro's HCL yet, but probably something like this. http://www.newegg.com/Product/Product.aspx?Item=9SIA24G2U48163.
 
I can't seem to find the memory HCL for this board yet.

If I do VSAN, I am looking at SSDs, more than likely the EVOs. Any reason not to get the EVOs over the Pros? I am probably going with 2x 512GB SSDs per machine.
 
<snipped for brevity>

summary of wants:

DDR3 or 4?
64GB ECC memory or more
6-core Intel CPU
Supermicro motherboard
6 NICs per machine (Intel)
Chassis doesn't have to be rack mount
Gold power supply (500W)?
SSDs for datastores?
</snipped>

Let the scope creep begin.... :D
So the new want-list summary:

32GB DDR4 per host...
4-core Intel CPU
2x 10GbE NICs

If you are considering hardware VSAN, might want to review VMware's recommended hardware.

Once you add a controller on the HCL (filling the only slot), what do you do to get additional NICs?
Or if you add a NIC, what do you do about a supported controller?

If you do go with that Supermicro Mini-ITX board, then the perfect case/PSU for it:
CSE 504-203B

Since you are going with 2 SSDs, fitment is easy; just lay them in there,
or purchase the drive bracket option (I didn't bother).

Memory for that Supermicro X10 board should be same as mine:
M393A4K40BB0-CPB
 
I went down the 10GbE + shared storage (Synology RS3614RPxs) route and regretted it.

The servers, NICs, NAS, and switch were expensive, big, heavy, noisy, and power hungry (and performance wasn't near that of local storage).

I've now gone with a couple of incredibly low-power Supermicro shallow quad-core Xeon servers with local SSDs. Yes, I've lost the shared storage/HA and have a max of 32GB RAM per node, but they perform brilliantly, cost very little to buy, and even less to run.

Next step, I'm investigating options for the "next best thing to HA" (I think there's a Veeam product that can provide quite quick failover through some form of replication).
 

Not sure what your setup is.

Mine is 3-node: 2 compute + 1 storage, connected via direct-attach 10GbE.
The entire setup is quiet - not silent ... and each compute node idles at ~50W or less.

Still working on completing the storage node - it only has a single SSD in it.

Not sure I'll get around to a 3rd compute node for VSAN ... I think I prefer to keep my lab flexible and vendor-agnostic.
 
Just to throw out another idea...

You can get 2x E5-2670 8-core CPUs for around $280-320 (total).
You can get Supermicro V1/V2 2P motherboards for $100-$250 (each).
You can get DDR3 8GB RDIMMs for $14-25 (each).

Now, if you want to add 10Gig, you can try to find a motherboard that has it, and the same goes for an onboard LSI controller to turn into an HBA.
 