Storage configuration for two small All-In-One ESXi boxes

You are not at the ultra high end, so partitioning a log device (two partitions of at least 10 GB each) is an option.

Example with a 32 GB SSD and overprovisioning:
- use an HPA of 12 GB plus 2 x 10 GB partitions,
or three partitions of 10 + 10 + 12 GB

and add a 10 GB partition to each pool as a log device.
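
As a rough sketch of that layout (purely illustrative; the pool count and sizes are assumptions following the example above):

```python
# Sketch: carve a small SSD into fixed-size log partitions plus an
# overprovisioning / HPA reserve. Sizes follow the example above.

SSD_GB = 32          # total SSD capacity
LOG_PART_GB = 10     # one log partition per pool
POOLS = 2            # assumed number of pools that need a log device

log_total = POOLS * LOG_PART_GB
hpa_gb = SSD_GB - log_total   # leftover kept unused as HPA / overprovisioning

print(f"{POOLS} x {LOG_PART_GB} GB log partitions = {log_total} GB")
print(f"HPA / overprovisioning reserve = {hpa_gb} GB")
# -> 2 x 10 GB log partitions = 20 GB
# -> HPA / overprovisioning reserve = 12 GB
```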
 
For 10 Gbps ethernet, how much ZIL would I need: 6.25 GB or 12.5 GB? Should I account for 5 seconds or 10 seconds of continuous transaction writes (5 s * 10 Gbps * 1/8 = 6.25 GB, while 10 s * 10 Gbps * 1/8 = 12.5 GB)?
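
For reference, a quick sizing sketch under the worst-case assumption that the link is fully saturated with sync writes (real traffic will usually be lower):

```python
# SLOG/ZIL sizing estimate: seconds of buffered writes * line rate.
# Assumes the worst case of a fully saturated 10 Gbps link with 100% sync writes.

LINK_GBPS = 10  # nominal line rate in gigabits per second

def zil_size_gb(seconds: float, link_gbps: float = LINK_GBPS) -> float:
    """Return how many gigabytes can arrive in `seconds` at `link_gbps`."""
    return seconds * link_gbps / 8  # divide by 8: gigabits -> gigabytes

for window in (5, 10):
    print(f"{window:2d} s of writes at {LINK_GBPS} Gbps ~ {zil_size_gb(window):.2f} GB")
# ->  5 s of writes at 10 Gbps ~ 6.25 GB
# -> 10 s of writes at 10 Gbps ~ 12.50 GB
```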

I was thinking of buying the 32 GB SLC SSD for single-pool needs (1 ZIL) and the 64 GB SLC SSD for dual/triple-pool needs (2-3 ZILs required -> 2-3 partitions on a single SSD).
Although it's true that (ZIL) write performance will be limited using a single SSD partitioned into 2-3 parts :( An opinion on this?

The 32 GB partitioning you suggested (SSD used at ~65%) doesn't really convince me ...
 
btw
Do you need sync on your large pool of disks?
Regular filer use is async.

About the size:
I would calculate for up to 10 s of continuous writes from the network,
so 8-16 GB is what you need with 10 GbE.

This is similar to what HGST is using with their professional log devices
like the ZeusRAM (8 GB) and the S840Z (16 GB).
 
You're suggesting the HDD pools be left without a ZIL device, since their use is basically limited to file sharing (SMB mostly)?

(The HDD pool logs sync writes to a - very slow - in-pool ZIL anyway, even if there is no separate log device.)
 

CIFS/SMB is fast and async per default and uses the write buffer to convert many small random writes into a single large sequential write, so even the fastest ZIL would not help, as it is not used at all. You need sync only for transaction-safe actions (like databases, e.g. where you want to initiate a delivery only once and only when a payment is done; inconsistent data would be expensive) or if older filesystems like ext3/4 or NTFS are involved, e.g. with ESXi datastores. You may set sync to always even for a filer, but that would be stupid and is suggested only for doing a ZIL benchmark.

For a pure ZFS filer, your ZFS filesystem will always be consistent, and even the largest ZIL will not prevent a file (as opposed to the ZFS filesystem itself, which is protected by copy-on-write) from being corrupted by a power failure during the copy of a large file to the filer.
 

I'm going to use ZFS both as a filer and as an iSCSI target (probably on either Linux or FreeBSD, since I need full-disk encryption). Anyway, I take it you mean that corruption of the data would be caused by the network (or a client PC not using ECC memory, ...) rather than by ZFS itself, is that right?

So the bottom line is: a 32 GB SLC SSD will suffice in all cases (since I'm not going to use it for NAS functions anyway), right?
 
It's only important that you separate filesystems with filer use (no ZIL needed) from ESXi datastores or iSCSI targets with older filesystems (ZIL required; enable sync or disable the write-back cache on the targets).

A ZIL must be fast and secure with ultra-low latency - not big in size.
 
Do they really have to be separated? I mean, a ZIL should be used for a pure VM storage / iSCSI target, but also for a mixed iSCSI target / NAS, right?

Even though I admit it may be difficult to justify VM iSCSI performance on an HDD-only pool (which should still be sufficient for low-speed tasks like testing OSes, for instance), the ZIL should in that case be put into service, don't you agree?

Finally, what target would you use: NFS (sync by default - safer IMO) or iSCSI (where I'd have to enforce sync writes IIRC, since by default it is async)?

So, as for the ZIL size: 32 GB it is then ;)
 
Basically, you can keep the setting sync = default.
This means that the client decides, so ESXi requests are sync over NFS, and CIFS is fast and async.

Only with iSCSI on ZFS volumes must you manually disable the write-back cache to get sync write behaviour. With iSCSI logical units on files you must manually force sync.
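
As a small summary sketch of those rules (the dataset/zvol names are placeholders I made up; standard and always are the stock values of the ZFS sync property):

```python
# Sketch: sync policy per share type, following the rules above.
# Dataset/zvol names are placeholders; "standard" and "always" are
# regular values of the ZFS 'sync' property.

SYNC_POLICY = {
    "tank/smb_share":    "standard",  # filer use: SMB clients write async anyway
    "tank/nfs_vmstore":  "standard",  # ESXi over NFS already requests sync writes
    "tank/iscsi_volume": "always",    # zvol behind iSCSI: force sync here if the
                                      # target's write-back cache stays enabled
}

for dataset, mode in SYNC_POLICY.items():
    print(f"zfs set sync={mode} {dataset}")
```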
 
So which one would you suggest between NFS and iSCSI for an ESXi VM datastore?

iSCSI seems to be faster, although most results are biased since the transfers were async (compared to NFS's sync behaviour). NFS VM storage basically means that the VM is stored as files on the ZFS filesystem, whereas iSCSI means that the VM storage directly accesses blocks of a ZFS volume, right?
 
Use NFS for simplicity and because it is more tolerant of delays and timeouts.
For first steps, ALWAYS use NFS.

Use iSCSI if you need MPIO and some other HA features.
 

Ah, is it always possible to "convert" an NFS VM storage into an iSCSI one at a later time?
If that's the case then I'll use NFS without issues ;)

Furthermore, will NFS performance be very good even for, say, an Intel S3700 pool?
 

Everything is possible one way or the other.
You must now start with your setup. It is not possible to think of all possible problems in advance.
Mostly you must work with it to develop a feeling for what is good and possible and what is not.

Think of the old rule:
only a 30% difference in performance gives you the aha effect, otherwise you do not feel it.
 

Yep, I'm starting now ;)

Future steps: upgrade the RAM in one of the ESXi hosts (hopefully to 256 GB, otherwise to 128 GB).

Another important upgrade would be to go to 10 Gbps ethernet (at least between the two big ESXi hosts). What would you recommend as a NIC? Intel X540 (RJ45 / Cat7 cable) or Intel X520 (and which one: Intel X520-DA2 [SFP+], Intel X520-T2 [RJ45 / Cat7 cable] or Intel X520-SR2 [LC])? Sorry, but I don't know much about 10 GbE. In my opinion fiber is better, although the two hosts are right beside each other (the cable would be <1 m or so).

If going RJ45 I could get switches for ~800€ (Netgear XS708E), although the NICs are somewhat more expensive than the fiber ones. Any tips there?
 
SFP+ is the more pro and datacenter solution, with lower latency.
You can use short-range copper DAC cables and fiber up to 80 km.

10Gbase-T is more of a 10G-to-the-desktop solution.
Many of my media workstations and two of my computer pools are now on 10Gbase-T.

SFP+ is the more expensive solution, as you must add SFP+ transceivers or quite expensive SFP+ DAC cables. So if you want to go price sensitive, use 10Gbase-T and a low-cost switch like the Netgear. New SFP+ switches with 8+ ports are very expensive.

I have four 8/12-port Netgears to connect some workstations. They are OK but not as professional as the HP 10G equipment that I use in my server room and for inter-floor/building cabling.
 
And between the Intel X520 and X540 (RJ45 versions), which one would you use?

Last but not least, I may have to interconnect two rooms in my flat (at the moment ~12 Cat6 cables run between them) that are ~20 m apart. Could the Cat6 cables be used for 10Gbase-T (I think they support it up to 37 m or something), or should I add a Cat7 run? Or would you rather use a fiber (SFP+) link for this medium-length interconnection?

The cable conduit is almost full, so I'm not too sure how to go about it :(
 
In an ideal installation Cat6 can work up to 55 m.

Given that there is no certification for the Cat X cabling, it's difficult to tell if it would work (I still have to wire the patch panel ;)).

Given that the two locations are ~20-30 m apart, maybe it will work. Without buying expensive cable testers, is there a way to verify:
a) that it works correctly (without dropping packets, ...)? -> ping?
b) that it performs at 10 Gbps speeds? -> a test file transfer from ramdisk to ramdisk (see the sketch below)?

Still, I agree it's worth a try before messing up the cable infrastructure again ;) (even more so if it works correctly).
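
For (a), a long-running ping plus the interface error counters already tell a lot. For (b), a rough sketch that avoids the ramdisk entirely (hostname and port are placeholders; a single-threaded Python sender may top out below 10 Gbps, so a dedicated tool such as iperf gives more trustworthy numbers):

```python
# Rough link check without a cable tester: push a fixed amount of data over TCP
# and report the achieved throughput. Start "server" on one host, then run
# "client <server-ip>" on the other. Port and data volume are arbitrary choices.
import socket, sys, time

PORT = 5201                      # arbitrary test port
CHUNK = 4 * 1024 * 1024          # 4 MiB per send
TOTAL = 8 * 1024**3              # push 8 GiB in total

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        received = 0
        with conn:
            while data := conn.recv(CHUNK):
                received += len(data)
        print(f"received {received / 1024**3:.2f} GiB from {addr[0]}")

def client(host: str) -> None:
    payload = b"\0" * CHUNK
    sent, start = 0, time.time()
    with socket.create_connection((host, PORT)) as conn:
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)
    secs = time.time() - start
    print(f"sent {sent / 1024**3:.2f} GiB in {secs:.1f} s "
          f"~ {sent * 8 / secs / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```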
 
The twisted pair drivers consume significantly more power than DAC transceivers.
If it is for point-to-point and short range, the X520 is preferable.
If it is for a bit longer range, and you can use fibre and can get the SFP+ modules cheap on eBay, it is even better.
 
Thank you for your answer, omniscence.

The twisted pair drivers consume significantly more power than DAC transceivers.

What do you mean by "significantly"? And by "the twisted pair drivers" do you mean the RJ45 version rather than the fiber version?

If it is for point-to-point and short range, the X520 is preferable.
The X520 RJ45 version, you mean? Any specific reason for that?

If it is for a bit longer range, and you can use fibre and can get the SFP+ modules cheap on eBay, it is even better.
Does "a bit longer" mean >100 m, or which "range" are you talking about? And fibre meaning which X520 model, the X520-DA2 or the X520-SR2? Not sure I understand you correctly, since to me SFP+ is already fiber/optical.
 
RJ45 (10Gbase-T) has higher latency and eats more watts compared to SFP+.

http://www.missioncriticalmagazine....Home/Files/PDFs/WP_Blade_Ethernet_Cabling.pdf
