Help a noob, building a server

SedoSan

Limp Gawd
Joined
Aug 29, 2012
Messages
145
Hello all,

I've been using my personal everyday-use PC as a server (kill me :p), which has lots of disadvantages, and I'm looking to build a dedicated server. But I'm having a lot of trouble figuring out which OS to choose; through reading, researching, and asking around I was led to ESXi. Before continuing I'll lay out my current server and what I'm planning to do with it.

I have my normal (gaming) PC with an additional LSI 9261-8i RAID card, which has 7 x 3TB HDDs connected to it in RAID6, and a 4x GbE NIC. Obviously I want these transferred to the new server case.
As for the services I'm running on my server: a media server, an application server, FTP, VPN, a couple of online game servers, a VoIP server, and a torrent server.

Now here is what I'm struggling with. I'm very familiar with Windows 7 while being a complete noob with other OSes. I've heard about FreeNAS and gave it a try in a virtual machine, and I liked it, especially the ZFS features. Anyway, I can't say I'm too familiar with it, and I'm afraid to screw things up and lose 15TB+ of data. So here is my question.

What exactly is ESXi? From what I read, it is a hypervisor that allows you to set up multiple OSes and pass hardware through to them. So, can I install both Win7 and FreeNAS and run them simultaneously on the server machine? I want to set up my services and applications in W7 and manage my storage with FreeNAS. Does it work that way? Please guide me :)

Thanks.


*** Edited ***
The reason I want to run a hypervisor with pass-through is that I'm planning to switch to software RAID, where ZFS in FreeNAS can control my HDDs directly.
***************
 
Last edited:
so in other words, you recommend it?
Does it do what I want to do?

Yup, it should. I've never done it that way myself, but all the documentation on it points to it doing what you ask of it.
It's free to use. Just try it out and see if it does what you need it to.
 
What you are looking at is an ESXi all-in-one; there is one documented on this forum, I think.

If you do it right you will probably find it exceeds all your expectations, it certainly did for me.

You need to ensure both the motherboard and processor support VT-d to be able to pass through the required PCIe cards.

You will probably want to pass through the LSI controller and use that for the ZFS storage part.

Don't give up if it looks a little complex, you'll be well pleased if you persevere.

Also, as much RAM as you can fit in it.
 
I would get vmware player or workstation and start building vms before you buy hardware. You can also run a nested setup.
 
Hi to the OP:

I would strongly look at not running ZFS with your setup, or sell your RAID controller to buy server hardware. With ZFS you do not want extra RAID logic sitting on cards, since you will use RAID-Z, as you mention. Also, remember that you may want to use something much simpler, such as Windows 8 as your server with the current RAID controller/array controlling the disks. You can use Hyper-V on Windows 8 and run many Linux/Windows OSes very easily.

The main reason I am suggesting this is that you can literally remove the LSI controller from the workstation, place it, still hooked up to your drives, in the server chassis, and it will work. Other than a physical move, no data migration. If you swap to ZFS you will need to figure out how to move whatever is on your ~15TB usable array to the ZFS setup. Likely that will require buying more drives, and you already have a great RAID controller.
 
Hi to the OP:

I would strongly look at not running ZFS with your setup, or sell your RAID controller to buy server hardware. With ZFS you do not want extra RAID logic sitting on cards, since you will use RAID-Z. Also, remember that you may want to use something much simpler, such as Windows 8 as your server with the current RAID controller/array controlling the disks. You can use Hyper-V on Windows 8 and run many Linux/Windows OSes very easily.

The main reason I am suggesting this is that you can literally remove the LSI controller from the workstation, place it, still hooked up to your drives, in the server chassis, and it will work. Other than a physical move, no data migration. If you swap to ZFS you will need to figure out how to move whatever is on your ~15TB usable array to the ZFS setup. Likely that will require buying more drives.

This is a reason I'm switching to software RAID; I was planning to run both OSes and get enough HDDs to copy everything from my 15TB to the new array behind an HBA card, then add the original 15TB's drives to the new pool.

Would using Hyper-V on Win7/8 give me pass-through?
 
Perfectly reasonable idea. One other thing to consider: you will basically create a RAID-Z2 set (taking a guess here, based on the fact you were using RAID6), copy data to that set of drives, then add a second RAID-Z2 set using the drives from the current build. Lots of data protection there, so good!

One caveat: adding disks in RAID-Z is not exactly like what you are used to with LSI. Whereas you can add a new drive to the LSI adapter and use online capacity expansion to grow the array (which takes a long time, especially if you have a lot of data), in ZFS you will likely stripe the two RAID-Z2 sets. That means you will fill each RAID-Z2 set at a proportionally different rate going forward.

Not bad behavior by ZFS, just different, and one that should be considered in the migration plan.
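To illustrate that fill behavior, here is a toy sketch (not ZFS's actual allocator): new writes land on each top-level vdev roughly in proportion to its free space, so a freshly added RAID-Z2 vdev absorbs most of the incoming data until the vdevs even out. The TB figures below are made-up example numbers.

```python
# Toy sketch of ZFS's proportional write distribution across top-level
# vdevs (not the real allocator): writes go to each vdev roughly in
# proportion to its free space, so a fresh RAID-Z2 vdev fills faster
# than the old, nearly full one.

def spread_writes(free_tb, write_tb):
    """Split write_tb across vdevs proportionally to their free space (TB)."""
    total_free = sum(free_tb)
    return [write_tb * f / total_free for f in free_tb]

# Old vdev with 3 TB free, new vdev with 15 TB free, writing 6 TB of data:
print(spread_writes([3, 15], 6))  # [1.0, 5.0] -> the new vdev takes 5x more
```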

On Hyper-V, pass-through is a little different. Given your setup, you would likely create RAID volumes on the LSI controller, set them offline, and pass them through to guest VMs.

Again, ZFS is awesome; I was just thinking that given your current hardware there might be an alternative to at least consider before making the jump.
 
Running the server part of your plan is a good idea; I also have a FreeBSD/LSI/ZFS server based on ESXi 5.1. But I struggle to get pass-through working for graphics cards; for now I've kind of given up on that. There are success stories, but I can't reproduce them.
The server side I enjoy!
 
So, I tried ESXi in a VM to test it out and ran W7 and FreeNAS from it (VM-ception :p), which worked fine, but I wanted to test it on actual hardware. So I pulled out my super-mini-server, which I keep in the basement for the sole purpose of recording IP cams 24/7 for house security, and tried to install ESXi 5.1 on it. Doing so, I got an error that no NIC was detected... I searched online and saw all this complicated oem.tgz injection to customize the ISO file; in the end I managed to install the NIC driver using a patch-applying tool to make the process easy, but this required that I use ESX 4.1 instead.

After trying to install, I got another error, which I don't remember now. This mini server has an Intel Pentium G630 and 4GB of RAM; the NIC is a D-Link DGE-528T.
I'm wondering, was it a compatibility issue? Obviously this isn't the hardware I'm going to use in my actual server; I just wanted to test ESXi 5.1 on real hardware.
The actual server is going to be a Xeon mobo, probably a dual-socket one with an Intel 4x GbE NIC.

Another question: should I go with 5.1 or with 4.1? Is there a lot of difference?
 
So, I tried ESXi in a VM to test it out and ran W7 and FreeNAS from it (VM-ception :p), which worked fine, but I wanted to test it on actual hardware. So I pulled out my super-mini-server, which I keep in the basement for the sole purpose of recording IP cams 24/7 for house security, and tried to install ESXi 5.1 on it. Doing so, I got an error that no NIC was detected... I searched online and saw all this complicated oem.tgz injection to customize the ISO file; in the end I managed to install the NIC driver using a patch-applying tool to make the process easy, but this required that I use ESX 4.1 instead.

After trying to install, I got another error, which I don't remember now. This mini server has an Intel Pentium G630 and 4GB of RAM; the NIC is a D-Link DGE-528T.
I'm wondering, was it a compatibility issue? Obviously this isn't the hardware I'm going to use in my actual server; I just wanted to test ESXi 5.1 on real hardware.
The actual server is going to be a Xeon mobo, probably a dual-socket one with an Intel 4x GbE NIC.

Another question: should I go with 5.1 or with 4.1? Is there a lot of difference?

You don't need a dual-socket board, and it would probably be a waste unless you plan on buying an ESXi license for more than 1 processor. The free license for ESXi allows you to use 1 processor with as many cores as you want, and up to 32GB of RAM. As for ESX 4 vs ESXi 5.1: I'm using ESXi 5.1, but at work we're using ESX 4 and are just starting to upgrade to ESXi 5.1.

For hardware, I recommend the following (a little overkill and Intel-biased, but you can sub AMD parts to your liking/budget):
CPU: Intel E3-1230V2 - approximately the same performance as a 2600K, but lower TDP, and it has VT-d.
Motherboard: Supermicro X9SCL+-F; this is the micro-ATX one. The X9SCM is the full ATX if you would rather have a full board. The +- has two Intel 82574L NICs instead of one 82574L and one 82579LM, which the + has. ESXi has no stock driver for the 82579LM, but you can find custom drivers on this forum.
RAM: KVR1333D3E9SK2/16G x2 for 32GB of ECC RAM in total. The board supports 1600MHz RAM, but it's still expensive and won't give a noticeable performance boost.

You'll also want a USB drive to install ESXi to and boot from. You can install through IPMI, or put the installer on the USB drive and just overwrite it as you install.
 
With ESXi 5.1 free you can have dual processors; my current server has two Intel E5520s running ESXi 5.1 free without a problem. Reading around, you are not limited in the number of physical processors or cores, just vCPUs.

Screen shot of my server. I am running an all in one with ZFS.

 
With ESXi 5.1 free you can have dual processors; my current server has two Intel E5520s running ESXi 5.1 free without a problem. Reading around, you are not limited in the number of physical processors or cores, just vCPUs.

Oh thanks, I didn't know they got rid of the processor limit. Although I'm not running anything that processor intensive anyway; ZFS needs RAM more than anything.
 
Regarding the hardware in ESXi with multiple OSes: I can assign which guest OS has access to which piece of hardware, right?
If I'm running W7 and FreeNAS and I only have 1 physical LAN port (for example), do both W7 and FreeNAS share it? And if I install a new LAN card, can I choose which OS has which LAN port assigned to it? Am I correct?
 
So I think I've decided on the hardware:
http://www.supermicro.com/products/chassis/4U/846/SC846A-R900.cfm
Going with this chassis: 24 hot-swap bays, 4U, 900W redundant PSU.

As for the MOBO, this:
http://www.supermicro.com/products/motherboard/Xeon/C600/X9DRE-TF_.cfm
Dual Xeon socket 2011, Dual 10GbE LAN, more than enough RAM.

Budget isn't a problem for this, do you guys have any suggestions regarding these?

I'm planning to be future-proof, installing every OS I think I might need, and thus having as much RAM as I can add (512GB on this board).

Would I face any trouble running ESXi on this and then adding W7 and FreeNAS for starters?
 
So I think I've decided on the hardware:
http://www.supermicro.com/products/chassis/4U/846/SC846A-R900.cfm
Going with this chassis: 24 hot-swap bays, 4U, 900W redundant PSU.

As for the MOBO, this:
http://www.supermicro.com/products/motherboard/Xeon/C600/X9DRE-TF_.cfm
Dual Xeon socket 2011, Dual 10GbE LAN, more than enough RAM.

Budget isn't a problem for this, do you guys have any suggestions regarding these?

I'm planning to be future-proof, installing every OS I think I might need, and thus having as much RAM as I can add (512GB on this board).

Would I face any trouble running ESXi on this and then adding W7 and FreeNAS for starters?

That is an awesome-sounding setup.

Add 2 of these: E5-2687W and you're golden :D

What exactly are you doing with this beast? :)
 
That is an awesome-sounding setup.

Add 2 of these: E5-2687W and you're golden :D

What exactly are you doing with this beast? :)

To be honest I'm not sure that what I'm going to do needs all of this, but I always like to build an overkill setup if I can. I think I'll make a bullet list of my reasons for building such a server:
  • I love experimenting, building, and exploring, so this is kind of a hobby.
  • A better way to handle my storage and HDDs. I already have 7 x 3TB WD RED HDDs in my gaming rig, which isn't a good thing :p, so I want a dedicated case or server to hold them, with room to expand; I'm almost full on 15TB of RAID6.
  • I'm making an all-in-one server for my household that will serve all kinds of content, such as media (movies/TV shows/anime/music) and games run directly from the server so no user has to install a game on his/her machine (some online games such as TERA or Final Fantasy are over 15GB, so a user can just run them straight from the server). Also a backup server for all household computers (maybe a weekly backup of every PC).
  • Beyond the household, I'm planning to invite friends over for LAN parties where we can all hook up to the server and stream without saturating the network, since LAGG will provide 20Gb/s of bandwidth and all the cabling in the house is Cat6 at 1Gb/s per run.
  • Running all kinds of services, such as Serviio, SFTP, VPN, TeamSpeak, and CCTV.
  • Running game servers, such as Minecraft (100+ users) and a Counter-Strike server.
  • Making a future-proof system; even if it is overkill, it will serve me for years to come.
:D
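A quick back-of-envelope check on the LAN-party bandwidth point above (assuming the dual 10GbE uplink is aggregated with LAGG and each client sits on its own 1Gb/s Cat6 run; note a single client is still capped by its own link, not the aggregate):

```python
# Back-of-envelope check on the LAN-party bandwidth claim above.
# Assumption: the server's two 10GbE ports are aggregated with LAGG,
# and each client hangs off its own 1 Gb/s Cat6 run, so one client
# can never pull more than 1 Gb/s regardless of the aggregate.

server_uplink_gbps = 2 * 10   # dual 10GbE, aggregated
client_link_gbps = 1          # per-client Cat6 gigabit run

clients_at_line_rate = server_uplink_gbps // client_link_gbps
print(clients_at_line_rate)  # 20 clients can stream at a full 1 Gb/s each
```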

*** EDIT ***
Add 2 of these: E5-2687W and you're golden :D

I just checked this CPU; I really think it's over-overkill :p lol
I said budget isn't a problem, but not this much of a problem! :p hehehe. I wonder if I can grab these at a special price! Thanks for suggesting it :)

*** EDIT 2 ***
This idea just popped into my head: I used to do some 3D design on my PC with an i7-3930K (6 cores with HT, OC'd to 4.6GHz). It wasn't as fast as I hoped, and with an AMD GPU I couldn't really use it for rendering since I was using Blender. Anyway, is it possible to design on my personal PC but let the server do the rendering for me?
It would be really interesting.
 
Last edited:
The problem with ESXi is that it is a dedicated VM server without much of a local UI, but that is a strength as well. If you have any interest in Linux, I'd consider running KVM on a desktop install of CentOS.
 
What rack do you have or plan to get?
To be honest, I haven't thought about the rack yet, since I figured I'd get any good rack that is tall enough. >.> The thing is, I live in the United Arab Emirates (Abu Dhabi), so it is harder to get shipments from abroad and we don't have the versatile options someone in the US would have. So I'm going to have to stick with whatever I can get here, though I wouldn't mind some tips about good racks or things to look out for when getting one.

The problem with ESXi is that it is a dedicated VM server without much of a local UI, but that is a strength as well. If you have any interest in Linux, I'd consider running KVM on a desktop install of CentOS.

What do you mean by a local UI? I'm planning to put this server in a dedicated server room in my house and not go near it unless I add storage or physically change something; I will access ESXi remotely from the PC in my room. If this isn't what you mean, please explain it to me :)
 
If you plan to only access the server remotely to control the VMs, then ESXi with vSphere or similar should be fine.
 
I have a question regarding ESXi:
Is it possible to share resources among VMs?
Let's say I have a maximum of 20GB of RAM and run 3 VMs. If I give each of them 10GB, would that work? They probably won't all use the full 10GB at the same time. Also, for the number of processors or cores: if I assign each VM 8 cores and I only have 12, for example, would inactive cores be used by the active VMs?
I hope I made myself clear >.<
 
Honestly it sounds like a good setup, but that case and mobo are way overkill for a first-time setup and a general-purpose system.

You could get by with a much simpler setup to start and not necessarily sacrifice much power.

Just something to consider. Personally I chose multiple cheaper hosts over a single more powerful host. That way I have full hardware redundancy, and it is more tailored to what I use my systems for. None of my VMs have super-demanding specs, and I can easily fit within the memory constraints I put in place (I have 2x 16GB hosts) with plenty of overhead for playing and lab-type use. But I do have a couple of VMs that need good uptime, which host redundancy provides: thanks to Hyper-V replication, I can have the critical VMs back up in the time it takes to boot the VM (~3 min).

All I am trying to say is: make sure you think about your requirements. Sometimes you come from the world of powerful workstations and put all your eggs in one powerful basket, which is not always best, depending on your application.
 
streaming games directly from the server so no user has to install a game on his/her machine

How do you plan to do this? Are you going to buy an Nvidia Quadro card? :D
 
I have a question regarding ESXi:
Is it possible to share resources among VMs?
Let's say I have a maximum of 20GB of RAM and run 3 VMs. If I give each of them 10GB, would that work? They probably won't all use the full 10GB at the same time. Also, for the number of processors or cores: if I assign each VM 8 cores and I only have 12, for example, would inactive cores be used by the active VMs?
I hope I made myself clear >.<

For your RAM question: you are talking about over-provisioning, which on the whole is not the best idea for performance, though it can be done.

For the processor, it is not 1:1. For example, I have 5 VMs set up as dual-core on a host with a single dual-core processor. The hypervisor (host OS) handles delegating the CPU cycles.

In a virtual environment, you are better off keeping specs close to what is required, or else you can run into worse overall performance.
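To make the over-provisioning arithmetic concrete, here are the over-commit ratios for the hypothetical host from the question (12 cores, 20GB RAM, three VMs at 8 vCPUs / 10GB each); these numbers are just the example figures from the question, not a recommendation:

```python
# Over-commit arithmetic for the hypothetical host in the question:
# 12 physical cores, 20 GB RAM, three VMs each granted 8 vCPUs and
# 10 GB. Ratios above 1.0 mean the hypervisor must time-slice (CPU)
# or reclaim memory (RAM) when every VM peaks at once.

vms = [{"vcpus": 8, "ram_gb": 10}] * 3

cpu_ratio = sum(v["vcpus"] for v in vms) / 12   # 24 vCPUs on 12 cores
ram_ratio = sum(v["ram_gb"] for v in vms) / 20  # 30 GB granted of 20 GB

print(cpu_ratio, ram_ratio)  # 2.0 1.5
```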
 
streaming games directly from the server so no user has to install a game on his/her machine

How do you plan to do this? Are you going to buy an Nvidia Quadro card? :D

Oh, you just gave me a new idea! lol.
My plan was that the guest PC would have a decent GPU and just run the shortcut from the server share; the game loads and they play it normally, as if it were installed on their PC. I've already tested this and it works perfectly.
But your idea of processing the game on the server and streaming the screen to the guest sounds interesting :p
 
I'm not sure on rack suggestions; I was hoping this thread would help me find and pick one too :D

There have got to be a lot of affordable, quality options on eBay, I'm thinking.
 
I am no pro, but you don't need to pass through the RAID card; the performance difference would likely be something you won't notice.
 
I am no pro, but you don't need to pass through the RAID card; the performance difference would likely be something you won't notice.

I believe that if I am going to let ZFS handle RAID for me, it is better to pass the controller through so ZFS has full control, instead of going through the RAID card; that's why I'm probably getting an HBA card and a SAS expander as well.
 
Couple of additional questions, guys :)

Regarding how ESXi handles HDDs: do I assign which HDDs go to which guest VM?
Say I have 20 HDDs, and I want to set:
  • 2 HDDs in RAID1 for all guest VM OSes
  • 3 HDDs for VM storage (W7, FreeNAS, Ubuntu; probably not for FreeNAS). Is it possible to partition 1 HDD and divide it among all 3 guest OSes?
  • 15 HDDs in RAID-Z for FreeNAS storage, with a hot spare

So, is it possible to assign them this way? Won't there be a problem with one OS trying to share another OS's HDD? Won't an HDD be visible to all guest VMs and cause trouble?
 
Couple of additional questions, guys :)

Regarding how ESXi handles HDDs: do I assign which HDDs go to which guest VM?
Say I have 20 HDDs, and I want to set:
  • 2 HDDs in RAID1 for all guest VM OSes
  • 3 HDDs for VM storage (W7, FreeNAS, Ubuntu; probably not for FreeNAS). Is it possible to partition 1 HDD and divide it among all 3 guest OSes?
  • 15 HDDs in RAID-Z for FreeNAS storage, with a hot spare

So, is it possible to assign them this way? Won't there be a problem with one OS trying to share another OS's HDD? Won't an HDD be visible to all guest VMs and cause trouble?

The RAID1 for guest OS datastores is a good idea, but for storage you could just create a bunch of volumes in FreeNAS and share them with your other VMs.
With the 18 drives left over, you could do RAID-Z3 and have triple parity if you are really worried about drives failing while you resilver.
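Rough usable-space arithmetic for that RAID-Z3 suggestion (a sketch that ignores ZFS metadata overhead and the decimal-vs-binary TB difference, and assumes all 18 leftover drives are 3TB):

```python
# Rough capacity math for the RAID-Z3 suggestion above: 18 remaining
# 3 TB drives in one triple-parity vdev. Ignores ZFS metadata overhead
# and the usual decimal-vs-binary terabyte difference.

def raidz_usable_tb(drives, drive_tb, parity):
    """Usable data capacity of a single RAID-Z vdev, before overhead."""
    return (drives - parity) * drive_tb

print(raidz_usable_tb(18, 3, 3))  # 45 TB for data, 9 TB spent on parity
```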
 
So, I received my IBM ServeRAID M1015 and the HP 24-port SAS expander today, though both without manuals, and weirdly, the manuals available online are not detailed.

I'm a bit confused about the ports on the SAS expander.
There are 8 internal SAS ports and 1 external SAS port. From what I understand so far, 2 of the 8 internal ports are for input from the controller; I can use 1 or both (like LAGG, to increase the speed from 3Gb/s to 6Gb/s), then use the remaining 6 ports to connect 4 HDDs each. That much I understand, but I'm confused about the external SAS port: can I connect an additional 4 drives to it, or make it act as an output to a further expander?

I've made a couple of diagrams showing different setups; are these correct?
(Two setup diagrams attached as images.)
 
I remember reading those articles a while ago, but they don't really answer my question.
Is the external port on the SAS expander an input or an output? Or both?
 
Last edited:
Ah... I tried to clarify that in my message above. You can use it as either, but from a cost perspective I would suggest using it as an input.
 
Ah... I tried to clarify that in my message above. You can use it as either, but from a cost perspective I would suggest using it as an input.

Interesting. I know I have to do my own research, but it's hard to find that small detail in a 200-page thread (the SAS expander thread).

If I only connect an SFF-8088 cable to the external port, would it run at 6Gbps? And if I only connect via the external port, would *all* the internal ports be available, making it 8x4=32 HDDs?

That's my first question. The second: if I connect from the HBA/RAID card to the 2 internal ports with LAGG, can I then use the remaining 7 ports (6 internal and 1 external) to connect 7x4=28 HDDs?
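The port arithmetic in both questions can be written out explicitly (a sketch using the layout described above: 8 internal wide ports plus 1 external, 4 drives per non-uplink wide port; which ports serve as uplinks is a wiring choice):

```python
# Port arithmetic for the expander as described above: 8 internal wide
# ports plus 1 external, with each non-uplink wide port fanning out to
# 4 drives. Ports reserved as uplinks to the HBA carry no drives.

def max_drives(internal=8, external=1, internal_uplinks=0, external_uplink=False):
    """Drives reachable once some ports are reserved as uplinks to the HBA."""
    free = (internal - internal_uplinks) + (external - (1 if external_uplink else 0))
    return free * 4

print(max_drives(external_uplink=True))   # external as uplink: 8 * 4 = 32
print(max_drives(internal_uplinks=2))     # dual internal uplink: 7 * 4 = 28
```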

I know I have more questions about this, but these are the most important ones...
Thanks
 