storage for home lab

Orddie

Hey all,

I currently have two whitebox ESXi hosts running in my lab at home. They share an iSCSI connection to a QNAP 4-bay NAS. I'm looking at the datastore latency numbers and noticing some rather high values, which got me thinking.

Server one is an Intel i5-6500 with 32GB of RAM.
Server two is an Intel i7-5820K with 64GB of RAM.

With the number of VMs I have running and the RAM in server two, I know I can run everything on server two and still have room to grow.

What I'm thinking is: should I convert server one into a FreeNAS box? My concern is that both servers have a single gigabit network connection. I'll saturate the pipe when everything is booting, and data rates will most likely sit around 25% while everything is running.

The QNAP has two gigabit ports which I have port-channeled together to present a 2Gb link. However, looking at what VMware says the storage latency is, and then looking at the QNAP manager's disk latency numbers... I'm just thinking I could do better.

The disks are 4x 1TB WD Reds.
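As a rough sanity check on whether that single gigabit pipe is the ceiling, here's the back-of-the-envelope math (ballpark only; the per-disk throughput and overhead numbers are assumptions, not measurements):

```python
# Rough bottleneck check: one GbE uplink vs. the QNAP's 4-disk array.
# Assumptions (not measurements): ~110 MB/s sequential per 1TB WD Red,
# ~90% usable payload on GbE after TCP/iSCSI overhead.

GBE_LINE_RATE_MBIT = 1000        # megabits per second
PAYLOAD_EFFICIENCY = 0.90        # rough TCP/iSCSI overhead allowance
DISK_SEQ_MBS = 110               # MB/s per 1TB WD Red, sequential (assumed)
DISKS = 4

link_mbs = GBE_LINE_RATE_MBIT / 8 * PAYLOAD_EFFICIENCY   # 125 MB/s wire -> ~112 MB/s usable
array_mbs = DISK_SEQ_MBS * DISKS                          # ~440 MB/s best-case sequential

print(f"single GbE link, usable : ~{link_mbs:.0f} MB/s")
print(f"4x WD Red, sequential   : ~{array_mbs:.0f} MB/s")
print(f"network is the ceiling? : {link_mbs < array_mbs}")
```

And as far as I understand it, the 2x1Gb port channel on the QNAP doesn't help a single iSCSI session much either, since one TCP flow only ever hashes onto one physical link; iSCSI multipathing would be the way around that.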
 
I wouldn't recommend running FreeNAS / ZFS on non-ECC RAM, but it is certainly possible.

What if you ran mdadm or another software RAID on one system, then picked up two PCIe SFP+ NICs from eBay and used those as a dedicated 10Gbps pipe between the two computers?
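If it helps, here's roughly what that would look like on the Linux side. This is only a sketch: the device names (/dev/sdb..sde, /dev/md0, enp3s0), the RAID level, and the 10.10.10.0/30 point-to-point subnet are all placeholders to adjust for your hardware.

```python
#!/usr/bin/env python3
"""Rough provisioning sketch for the mdadm + direct 10GbE idea.
Everything here is a placeholder: the disk names, /dev/md0, the NIC name
enp3s0, the RAID level, and the point-to-point subnet. Run as root."""
import subprocess

DRY_RUN = True   # leave True to just print the commands

COMMANDS = [
    # RAID10 across the four data disks (level 10 is just an example)
    "mdadm --create /dev/md0 --level=10 --raid-devices=4 "
    "/dev/sdb /dev/sdc /dev/sdd /dev/sde",
    # filesystem on top of the array
    "mkfs.xfs /dev/md0",
    # point-to-point address on the SFP+ port; jumbo frames are optional
    "ip addr add 10.10.10.1/30 dev enp3s0",
    "ip link set enp3s0 up mtu 9000",
]

for cmd in COMMANDS:
    print("+", cmd)
    if not DRY_RUN:
        subprocess.run(cmd.split(), check=True)
```

On the ESXi side you'd give the matching X520 port a VMkernel adapter at 10.10.10.2/30 and point the NFS or iSCSI datastore at 10.10.10.1.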
 

I was already thinking this, but I'm very new to the whole PCIe SFP+ thing. Do you know what type of cable would be needed to cross them over? Also, I'd have to check the VMware HCL to make sure it will see the card.
 
A standard SFP+ cable would do the trick. I haven't used them yet for ESXi in my home lab, but they play very nicely with XenServer and Ubuntu.
 

It looks like I need to make sure the SFP+ modules match? Like if I use an Intel X520... I need Intel SFP+ modules or VMware will refuse to bring the NIC up. Is that your experience as well?
 
The Intel NICs are fairly tolerant of what SFPs are used. I'm using Brocade cables with mine because the BR-1020s I have will only connect with a Brocade cable. Just make sure it's an active cable and you should be good to go.
 

Are you running VMware?
 
Yes, BR-1020s in the VMware hosts and an Intel X520 in the FreeNAS box, because Brocade and BSD don't play nice together. Eventually the Brocades will be replaced by Intel, as support for them is getting kind of shaky.
 

Hrm. Very odd that so many users are having issues with the X520 and non-Intel SFP+ modules in VMware. I wonder what your experience would be if you put the X520 in a VMware host.
 

I'm guessing they are buying the passive Cisco SFPs and not the actives, since they're significantly cheaper. Or Brocade and Intel SFPs are the same.... I know the Brocades are picky; from what I read when I bought the X520, as long as it's an active SFP+ cable you're good to go.
 
I have been running FreeNAS for a few days now and am starting to see datastore latency creep back up. Per the FreeNAS forum, I set up the 6x 1TB drives in a 3x stripe layout, served over NFS. Keeping with best practices, I'm trying to keep my usable storage low: the pool has been set to a max use of 77% (2TB out of 2.5). My concern about using FreeNAS is the cap on usable space. Also, each VMware host has an SSD as local cache and VM swap file location.
[Screenshot: datastore latency graph]
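Rough math behind that cap, in case it's useful to anyone. Ballpark only, and the three-mirrored-pairs layout below is just an example; edit it for your own pool:

```python
# Ballpark math on the usage cap. The layout below (three 2-way mirrors
# of 1TB disks) is only an example; edit it to match the actual pool.
TB_TO_TIB = 1000**4 / 1024**4      # ~0.909: ZFS reports TiB, disks are sold in TB

mirror_vdevs = 3
disk_tb = 1.0

usable_tb = mirror_vdevs * disk_tb             # mirrors lose half of the raw 6 TB
usable_tib = usable_tb * TB_TO_TIB             # ~2.73 TiB before ZFS overhead

for cap in (0.50, 0.77, 0.80):
    print(f"cap {cap:.0%}: ~{usable_tib * cap:.2f} TiB writable, "
          f"~{usable_tib * (1 - cap):.2f} TiB kept free")
```

The cap exists because ZFS is copy-on-write, so write latency climbs as the pool fills up and free space fragments.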
 
I'd take that latency.... That's even on SSDs. I think the problem is in the network or something....

[Screenshot: latency graph]
 
Is the last graph the latency on local SSD storage? A single SSD? Using the onboard controller or an add-on card?
 
If you're referring to the one I posted, it's a RAID 5 of four Intel DC S3500 160GBs over NFS to FreeNAS. I don't think my server has the guts to push the I/O I'm asking of it.

Any write to any array on the box spikes like that; doesn't matter if it's SSD or spinning rust.
 

I can't 100% agree with it being a network problem. When I look at the switch, I don't see 50% usage. When I look at VMware, I don't see over 50% usage. When I look at FreeNAS, I don't see over 50% at the disk, network, or CPU level.
 

I was talking about my problem. All the stats look fine except the graph I posted, so either I'm asking for more I/O than the CPU and RAM can give me, or it's a network issue.

And just because none of the usage is high doesn't mean it's not a network problem; some older NICs have laughably small queues.
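If you want to rule that out, the ring sizes are quick to check. Something like this on a Linux box; the interface name is a placeholder, and on FreeNAS/FreeBSD the Intel driver exposes similar knobs through loader tunables instead:

```python
#!/usr/bin/env python3
"""Print a NIC's current vs. maximum RX/TX ring sizes via ethtool.
Linux-only sketch; the interface name ("eth0") is a placeholder."""
import subprocess
import sys

iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"
result = subprocess.run(["ethtool", "-g", iface],
                        capture_output=True, text=True, check=True)
print(result.stdout)

# If "Current hardware settings" show e.g. RX 256 while the pre-set
# maximum is 4096, bumping the rings is one command:
#   ethtool -G <iface> rx 4096 tx 4096
```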
 

What are your FreeNAS server specs?
 

An L5630 with 16GB of unbuffered ECC RAM and a couple of M1015s as HBAs. I have an FX-8320E and 32GB of RAM sitting around; I may try it to see if it makes anything better.
 
Double your RAM, first off. Also, what file system are you using and how much free space do you have? If you went above the 50% mark, you're causing your own delays.
 
mdraid is just as "risky" with non-ECC RAM as ZFS. People forget that md arrays need to be scrubbed too.
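For reference, kicking off an md check on Linux is just a sysfs write. A minimal sketch, assuming the array is /dev/md0 and you're root; many distros already schedule this monthly via a checkarray or raid-check job:

```python
#!/usr/bin/env python3
"""Start a scrub ("check") on an md array and watch it finish.
Assumes Linux, /dev/md0, and root privileges."""
from pathlib import Path
import time

md = Path("/sys/block/md0/md")

(md / "sync_action").write_text("check\n")    # start a read-and-compare pass

while (md / "sync_action").read_text().strip() == "check":
    progress = (md / "sync_completed").read_text().strip()
    if progress != "none":
        done, total = (int(x) for x in progress.split("/"))
        print(f"scrub progress: {done / total:.1%}")
    time.sleep(60)

# a non-zero count here means the check found inconsistencies worth investigating
print("mismatch_cnt:", (md / "mismatch_cnt").read_text().strip())
```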
 
Using Intel X520 cards in a pair of whitebox servers, with a single Cisco TwinAx from each host directly connected to a QNAP TVS-471 that also uses an Intel X520. Disks are 4x 512GB Toshiba Q SSDs in RAID 5. Rarely do I see latency go above 1ms. Only wish I had gone with 4x 1TB SSDs instead.
 