ESXi All-in-One Build with High I/O Performance SAN/NAS

Discussion in 'Virtualized Computing' started by xhyperamp, Nov 29, 2013.

  1. xhyperamp

    xhyperamp n00bie

    Messages:
    7
    Joined:
    Nov 27, 2013
    Hi All,

    I am looking to build an ESXi all-in-one server.

    Requirements

    • Virtual machines to run many different servers (Windows DC, Exchange, Linux and so on); at least 13 virtual machines
    • A storage pool with very high I/O performance and good redundancy, without spending $1,500 on a powerful RAID card. My aim is to run the virtual machines off this storage pool
    • Passthrough of PCI-E devices, e.g. RAID controller, quad gigabit NIC, etc.
    • 4x 4TB WD Red drives for now, with the ability to expand later using the extra 20 HDD bays
    • No limits on RAM or CPU for ESXi either; I have a license
    • Easy setup of group and user permissions on SAMBA/SMB shares against Windows Active Directory accounts (a rough sketch follows this list)
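    For the SMB/AD requirement, something like the following is roughly what I have in mind, assuming an illumos-based storage VM (which uses the kernel SMB server rather than actual Samba); the domain, pool and share names are just placeholders:

        # join the storage VM to the AD domain (illumos kernel SMB server)
        smbadm join -u administrator mydomain.local
        # share a dataset over SMB; per-user and per-group permissions can
        # then be managed from a Windows box via the share's Security tab
        zfs set sharesmb=name=data tank/data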

    I will list the different builds and designs below so you guys can give some suggestions and ideas.

    At the moment I'm new to all this high I/O storage configuration, so please tell me which is my best option, or suggest other software.

    Hardware Raid Option

    • I was thinking of buying an LSI Logic LSI00328 MegaRAID SAS 9271-4i ($515) and setting it up with RAID 10. The downside is that I don't want to have to buy another one of these RAID cards just to be able to access the data again in case the original card fails in the future.
    Software Raid Options

    • Windows Server Storage Spaces
      I was also thinking of setting up a Storage Spaces pool on WS 2012 R2 with some sort of SSD/RAM cache for better performance.
    • ZFS pool with Nexenta or similar software
      I was also thinking of setting up a Nexenta ZFS pool with about 8GB of RAM (a rough sketch of the pool layout follows this list).
    • StarWind iSCSI on Windows Server 2012 R2
      Last of all, I was thinking of setting up StarWind iSCSI on Windows Server 2012 R2.
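    As a rough sketch of what the ZFS option could look like with the four Reds (pool and disk device names are placeholders), a striped mirror is the RAID 10 equivalent:

        # two mirrored pairs striped together ("RAID 10" equivalent in ZFS)
        zpool create vmpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
        zpool status vmpool    # verify both mirror vdevs are online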


    VMware ESXi 5.5 Build 1


    • CPU: AMD FX 8350 [FD8350FRHKBOX], 8-CORE BLACK EDITION CPU, 4.0 GHz, Turbo Core up to 4.20 GHz $225 AUD
    • Motherboard: ASRock 970-Extreme4 $116 AUD
    • Memory: Corsair 32GB (4x8GB) DDR3 1600MHz Vengeance PRO [CMY32GX3M4A1600C9R] DIMM 9-9-9-24 4x240-pin $425 AUD
    • Case: Norco RPC-4224 Rackmount Server Chassis $500 AUD
    • PSU: Corsair HX-750 $200 AUD
    • Hard Drives: 4x 4TB Western Digital [WD40EFRX] 4x $250 = $1000
    • SSD: Intel 120GB SSD 530 $120
    • Raid Controller: IBM M1015 / New LSI MegaRAID 9240-8i (IN JBOD mode) $135
    • SAS Expander: CHENBRO CK23601 $415
    • Intel PRO/1000 PT Quad (Already have)

    Total $3136

    VMware ESXi 5.5 Build 2
    • CPU:Intel® Xeon® Processor E5-2620 v2 6-CORE (15M Cache, 2.10 GHz) $500 AUD
    • Motherboard: Supermicro MBD-X9SRL-F $355 AUD
    • Memory: 64GB 4x16GB Memory ECC REG PC3-12800 DDR3-1600 $715 AUD
    • Case: Norco RPC-4224 Rackmount Server Chassis $500 AUD
    • PSU: Corsair HX-750 $200 AUD
    • Hard Drives: 4x 4TB Western Digital [WD40EFRX] 4x $250 = $1000
    • SSD: Intel 120GB SSD 530 $120
    • Raid Controller: IBM M1015 / New LSI MegaRAID 9240-8i (IN JBOD mode) $135
    • SAS Expander: CHENBRO CK23601 $415
    • Intel PRO/1000 PT Quad (Already have)

    Total $3940

    This is a list of servers that I will end up running.
    1. Windows Server 2012 R2 Domain Controller
    2. Microsoft Exchange Server 2013
    3. Microsoft Lync Server 2013
    4. Microsoft Sharepoint 2013
    5. Ubuntu LAMP Server with Webmin
    6. Microsoft System Center 2012 SP1
    7. Microsoft SQL Server 2012 SP1
    8. Microsoft Dynamics CRM Server 2013
    9. Microsoft Dynamics NAV 2013
    10. Microsoft Project Server 2013
    11. Microsoft Biztalk Server 2013
    12. ManageEngine Desktop Central
    13. ManageEngine ServiceDesk Plus
    14. ManageEngine OpManager
    15. VMware vCenter Server v5.5
    16. XBMC with PVR and Plex Media Server in Linux
    17. pfSense Router/Firewall
     
    Last edited: Nov 29, 2013
  2. Vader

    Vader [H]ardness Supreme

    Messages:
    4,521
    Joined:
    Dec 22, 2002
    You are not going to get "high I/O" performance with 4 spindles, period, no matter what RAID level you put them on, without some sort of caching mechanism.

    I'm not sure if Microsoft fixed their caching in Windows Server 2012 R2, but last I read it provided sporadic and lower than expected results.

    I haven't used Starwind in a long time so I can't speak to that for caching.

    I currently have an all-in-one box running my infrastructure and Tier II storage using NexentaCE with the NFS VAAI plugin, with the motherboard's LSI 2008 6Gb SAS controller passed through to my Nexenta VM.

    I have assigned 2 vCPUs and 8GB of memory, and my disk drives are 4 Seagate Hybrid 2.5" 1TB drives that have 8GB of flash along with the spindle. They are running RAID-Z. This is my Tier II storage pool for my vCloud lab.

    Read and write caching duties are handled by 3 SSDs: one 120GB SSD for L2ARC and 2x mirrored 60GB SSDs for the log.
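    In zpool terms, a layout like that is built roughly as follows (the pool and device names here are placeholders, not my actual ones):

        # four hybrid drives in a single RAID-Z vdev
        zpool create tier2 raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
        # one SSD as L2ARC (read cache), two mirrored SSDs as the log device
        zpool add tier2 cache c2t4d0
        zpool add tier2 log mirror c2t5d0 c2t6d0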

    They are set up like the "BETTER" grouping in this diagram:

    [pool layout diagram]

    I get stellar I/O performance. Unfortunately, right now, I'm limited to 1Gb throughput, but it easily saturates that link. Hopefully, at some point, I'll be able to go 10Gb affordably.
     
  3. xhyperamp

    xhyperamp n00bie

    Messages:
    7
    Joined:
    Nov 27, 2013
    Awesome. I knew I was going to lack I/O performance, but this will help me work out exactly what needs to be configured for maximum I/O on 4 drives. I will be adding more drives later to improve speed once I start gathering everything in 1080p quality. I can also try PrimoCache and see what it can do for me.

    Thanks
     
  4. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
    Some basic options that you may consider

    - use socket 2011 boards (> 32 GB RAM) with a 4/6 core Xeon
    - use 32 GB+ ECC RAM
    - use 10 GbE
    - use ZFS software raid (best data security of all + very good performance)
    - use LSI HBAs (the RAID-less ones in IT mode)

    -> my best of all for these needs:
    http://www.supermicro.nl/products/motherboard/Xeon/C600/X9SRH-7TF.cfm

    - avoid expanders with SATA disks (use more HBAs), or buy SAS disks
    - split your storage between high performance and backup/other needs
    - use SSD only pools with enterprise class SSDs for performance (Raid-Z2, Intel S3500/3700)
    - with these SSDs (especially the S3700) you do not need a log device
    - with enough RAM, you do not need a L2ARC device
    - with SSDs you do not need mirrors/Raid-10 like with spindles, because the IO of SSDs is about 100x better than that of spindles

    - you do not need mirrored ZFS log devices with SSDs, unless you cannot accept a system failure combined with
    a simultaneous log device failure, where you may otherwise lose the last 5s of writes.

    - with All-In-One, you may even avoid sync writes (no need for a ZIL) and use a UPS instead
    (this needs regular backups/replications; there is a danger of losing the last 5s of writes, which may lead to a corrupted ESXi filesystem),
    because a crash mostly affects ESXi, the storage VM and the guests together (a short sketch follows this list)

    - for All-In-One, use the same amount of RAM for the storage VM that you would use on a dedicated storage box
    (RAM is used as read cache = performance)
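    A minimal sketch of the sync-write point for the dataset behind the ESXi NFS datastore (the pool and dataset names are placeholders; keep the data-loss caveat above in mind):

        # dataset that backs the NFS datastore for ESXi
        zfs create tank/esxi
        zfs set sharenfs=on tank/esxi
        # trade the last ~5s of writes after a crash for much faster writes
        zfs set sync=disabled tank/esxi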

    You may read my mini-HowTo. I use the free OmniOS with my napp-it, but most of it applies to the similar NexentaStor as well.
    http://www.napp-it.org/doc/downloads/all-in-one.pdf
    http://www.napp-it.org/doc/manuals/flash_x9srh-7tf_it.pdf
     
  5. Child of Wonder

    Child of Wonder 2[H]4U

    Messages:
    3,236
    Joined:
    May 22, 2006
    I can vouch for Windows Storage Spaces. For the last year I've run a single Storage Spaces pool of 10x 2TB 7200RPM drives and it did pretty well. All my vdisks were mirrors (RAID 10, basically) and I used a program called PrimoCache Beta to use my file server's 32GB of RAM and a 240GB Intel 330 SSD as read and write cache.

    However, these disks also hosted my file shares, which included nightly backups, streaming media, and shadow copies. As my lab grew to 15+ VMs running 24/7, I started to see significant latency jumps when I began pushing IOPS that weren't coming from cache.

    This week I just upgraded my box to Windows 2012 R2 and am using one pool comprised of 4x512GB SSDs and 8x600GB 10k Velociraptors for my VMs and a separate 5x3TB pool for my file shares. I expect this to solve my IO latency issues. :)

    For my VMware datastores I used the latest beta build of Starwind v8 which has run very well.

    The downside of Windows Storage Spaces is that parity vdisks have TERRIBLE write performance. While I can hit 660MB/s read and 495MB/s write on a mirrored vdisk on the 8 Raptor drives, a parity vdisk on the same drives can only manage 102MB/s writes by default, or 178MB/s if I tweak the pool's settings so writes are journaled asynchronously rather than synchronously. For a virtual environment you're basically forced to use mirrored vdisks (RAID 10).
     
    Last edited: Nov 29, 2013
  6. xhyperamp

    xhyperamp n00bie

    Messages:
    7
    Joined:
    Nov 27, 2013
    Thanks. Are there any other cases you would recommend looking at instead of the Norco 4224? I don't really want to spend over $600 for a hot-swap case. The 4224 is $500.

    LED failure notification for the hard drive bays would be nice, unless software can flicker the HDD status light on the Norco 4224 as an alternative way to identify the specific drive.

    Nexenta looks good but it's not an option as it has an 18TB limit. It would fit my needs for a month or two, but then I'll be filling the case up with more drives. I'll check out napp-it.

    I will also create a separate storage array with 8x 4TB drives just for archiving some data.
     
  7. dave99

    dave99 2[H]4U

    Messages:
    2,743
    Joined:
    Jan 20, 2011
    One thing to remember with LSI (and possibly others) is that they are pretty good at importing RAID configs from other LSI controllers. I've moved drives between different models of Dell PERC (LSI rebadge) controllers before, and they were visible and accessible on the new one.

    I also moved an array from a Dell PERC 5 to an IBM M5015 (both LSI rebadges), which are several generations apart tech-wise, and the new controller brought it in with no problems.
     
  8. xhyperamp

    xhyperamp n00bie

    Messages:
    7
    Joined:
    Nov 27, 2013
    The reason I picked the Supermicro MBD-X9SRL-F is that it has 7x PCI-E slots. I can use these for adding more quad NICs or a Fibre Channel card (two can be used for GPUs in case I want GPU passthrough and KVM over Ethernet for an HTPC VM on a TV).

    All the newer devices now use PCI-E, which makes the X9SRL-F a better option for me.

    The X9SRH-7TF has a built-in LSI HBA but only 3x PCI-E slots (two of which would be used for GPU passthrough for the KVM-over-Ethernet HTPC). The two 32-bit PCI slots aren't really used by any new devices, so those are wasted for me.

    X9SRH-7TF - PCI-Express device configuration (no extra PCI-E slots available)
    • PCI-E: Intel PRO/1000 PT Quad
    • PCI-E: LSI Logic LSI00244 SAS 9201-16i 16Port 6Gb/s SAS/SATA
    • PCI-E: Lowend ATI GPU (For HTPC VM with KVM over ethernet)

    On the X9SRL-F, I will get both of these LSI HBA adapters:
    • SAS/ SATA Controller: LSI Logic LSI00244 SAS 9201-16i 16Port 6Gb/s SAS/SATA Single Controller Card $470 AUD
    • SAS/ SATA Controller: LSI SAS 9211-8i 6Gbps 8 Ports SAS/SATA 8-Port PCI-e $125 AUD


    Instead of getting these (refer to my original post):

    • Raid Controller: IBM M1015 / New LSI MegaRAID 9240-8i (IN JBOD mode) $135
    • SAS Expander: CHENBRO CK23601 $415

    X9SRL-F - PCI-Express device configuration (3 extra PCI-E slots available) - chosen
    • PCI-E: Intel PRO/1000 PT Quad
    • PCI-E: LSI Logic LSI00244 SAS 9201-16i 16Port 6Gb/s SAS/SATA
    • PCI-E: LSI SAS 9211-8i 6Gbps 8 Ports SAS/SATA 8-Port PCI-e
    • PCI-E: Lowend ATI GPU (For HTPC VM with KVM over ethernet)

    Napp-it and OmniOS look like a good option.
    I don't have a 10 gigabit switch or network setup; it is currently all gigabit. I will LACP 4 gigabit ports to the switch for the extra performance, and the switch feeds the devices around the house.
    Within the ESXi server, all virtual NICs/switches will be configured at 10Gb speed.
    If I build another ESXi server in the future, I will use a Fibre Channel link between the napp-it VM and the other ESXi host.
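    For reference, a rough esxcli sketch of the uplink side (vSwitch and vmnic names are placeholders); note that a standard vSwitch only supports static link aggregation with IP-hash load balancing, so true LACP would need a distributed switch:

        # add the quad-port NIC's ports as uplinks and balance by IP hash
        esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2
        esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic3
        esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic4
        esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic5
        esxcli network vswitch standard policy failover set -v vSwitch1 -l iphash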

    Is this RAM any good? I want 64GB to start with and the option to upgrade to 128GB if needed.
    http://www.ebay.com.au/itm/NEW-64GB-4x16GB-Memory-ECC-REG-PC3-12800-DDR3-1600-for-Servers-/221212286363?pt=US_Memory_RAM_&hash=item3381479d9b
     
    Last edited: Nov 29, 2013
  9. Child of Wonder

    Child of Wonder 2[H]4U

    Messages:
    3,236
    Joined:
    May 22, 2006
    Don't bother with dedicated RAID cards. Just get some M1015s, SSDs, and 7,200RPM or 10,000RPM SATA drives to use as a "SAN", then build two ESXi hosts with commodity hardware.

    In short: build a file server and 2+ hosts with commodity hardware.
     
  10. xhyperamp

    xhyperamp n00bie

    Messages:
    7
    Joined:
    Nov 27, 2013
    So for my 24-bay storage, I should get:

    3x IBM M1015/LSI 9240-8i, $135 AUD each

    Instead of the following combination
    • SAS/ SATA Controller: LSI Logic LSI00244 SAS 9201-16i 16Port 6Gb/s SAS/SATA Single Controller Card $470 AUD
    • SAS/ SATA Controller: LSI SAS 9211-8i 6Gbps 8 Ports SAS/SATA 8-Port PCI-e $125 AUD


    The LSI 9201-16i doesn't appear to have much support, and I have read of issues with this card online, so I prefer to stick with only the IBM M1015/LSI 9240-8i.

    Hardware list.

    CPU:Intel® Xeon® Processor E5-2620 v2 6-CORE (15M Cache, 2.10 GHz) $500 AUD
    Motherboard: Supermicro MBD-X9SRL-F $355 AUD
    Memory: 64GB 4x16GB Memory ECC REG PC3-12800 DDR3-1600 $715 AUD
    Case: Norco RPC-4224 Rackmount Server Chassis $500 AUD
    PSU: Corsair HX-750 $200 AUD
    Hard Drives: 4x 4TB Western Digital [WD40EFRX] 4x $250 = $1000
    SSD: Intel 100GB SSD DC S3700 $287 AUD
    LSI HBA SAS/SATA Controller: 3x IBM M1015 / New LSI MegaRAID 9240-8i $135 Each (Flash to IT Mode)
    Intel PRO/1000 PT Quad (Already have) (Got it for $100 AUD)

    Total Cost: $3962 AUD (Excluding Quad Gigabit Adapter)

    As for software and drive configuration

    SAN/NAS software: set up napp-it with OmniOS.
    Storage pool: create a ZFS pool from the 4x 4TB WD Reds.

    I want to avoid using SSDs for caching. The enterprise ones are rated to handle 10 full drive writes per day for a 5-year period; they are good, but they simply cost too much, and using them for caching will eat up their write cycles over the years until one day they fail.

    So I'd rather run these tasks in RAM if possible. Any suggestions?
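    As a rough way to check that reads really are being served from RAM once OmniOS is up (the pool name is a placeholder):

        # current ARC (RAM read cache) size plus hit/miss counters
        kstat -p zfs:0:arcstats:size
        kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
        # confirm the pool caches both data and metadata in ARC (the default)
        zfs get primarycache tank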
     
    Last edited: Dec 1, 2013
  11. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
    The LSI 9201-16i and LSI 9211 are OK.

    But in my own backup systems (Chenbro 50-bay) I also use 6x IBM M1015 in each system (flashed to 9211 IT mode).
    They are trouble-free and unbeatable regarding price and performance.
     
  12. cw823

    cw823 n00bie

    Messages:
    33
    Joined:
    Mar 15, 2006
    ^^ What he said. And I've never paid more than $100 for an M1015 off eBay. You just have to be patient.
     
  13. somebrains

    somebrains Limp Gawd

    Messages:
    168
    Joined:
    Nov 10, 2013
    You guys are funny, you have lab budgets.
    A couple of us at work strip parts out of e-waste 3.5 and 4.0 clusters.
    Hosts, switches/routers, and shelves are cold spares off deprecated clusters.
    Sometimes the cage techs will turn us on to a pile of gear another DC tenant will leave in the dump pile.
    Every so often a guy will get tired of thrashed old gear and will add on something new like an R720 and our mgr will sign off some hours in exchange for the monthly lease.
     
  14. xhyperamp

    xhyperamp n00bie

    Messages:
    7
    Joined:
    Nov 27, 2013
    I have revised the build

    Hardware list.
    • CPU:Intel® Xeon® Processor E5-1650 v2 6-CORE (12M Cache, 3.50 GHz) $800 AUD
    • Motherboard: Supermicro MBD-X9SRL-F $355 AUD
    • Memory: 2x Crucial 32GB kit (16GBx2), 240-pin RDIMM, DDR3-1866 PC3-14900, CL=13, dual ranked, x4 based, registered, ECC, 1.5V, part number CT2K16G3ERSDD4186D, $900 AUD
    • Case: Norco RPC-4224 Rackmount Server Chassis $500 AUD
    • PSU: Seasonic Platinum Series 860W Version 2 $280 AUD
    • Hard Drives: 4x 4TB Western Digital [WD40EFRX] 4x $250 = $1000
    • SSD: Intel 100GB SSD DC S3700 $287 AUD
    • LSI HBA SAS/SATA Controller: 3x IBM M1015 / New LSI MegaRAID 9240-8i (Flash to IT Mode) (Got three of them for $135 Each, Total = $405)
    • Intel PRO/1000 PT Quad (Already have) (Got it for $100 AUD)

    I am reconsidering the RAM as this is very expensive.

    Could you suggest places to get ECC RDIMM memory?

    The motherboard has 8 DIMM slots and I want to be able to upgrade to 128GB of memory in the future.

    So 4x 16GB DIMMs are required.
    I would prefer 1866MHz ECC memory.

    Any suggestions on where to buy and what ram to get?


    Thanks
     
    Last edited: Dec 27, 2013
  15. peanuthead

    peanuthead [H]ardness Supreme

    Messages:
    4,205
    Joined:
    Feb 1, 2006
    All of the standard sites, or eBay. There is no magical place to get cheap RAM currently.
     
  16. xhyperamp

    xhyperamp n00bie

    Messages:
    7
    Joined:
    Nov 27, 2013
    Now for ZFS Configuration.

    I have a budget of $1,400 for storage drives. I need recommendations on hardware and configuration for ZFS.

    • I require a pool with extreme I/O performance for 30 virtual machines: 180GB of storage shared between hosts.
    • I also require a separate pool for NAS storage that gives me at least 12TB with 250MB/s read/write performance for now, with the ability to add more drives later to expand the capacity.

    Please list hardware and configuration examples of what I should use.

    Thanks
     
  17. _Gea

    _Gea 2[H]4U

    Messages:
    3,646
    Joined:
    Dec 5, 2010
    High-Performance Pool:
    Use a mirror of two 200 GB (or better, 400 GB if price is not the concern) Intel S3700 SSDs.

    Storage:
    Use a Raid-Z2 of 6 disks (each 3 or 4 TB).
    I would prefer 24/7-rated disks from Hitachi, WD or Seagate; avoid green disks.
    Be careful about the fill rate: pools slow down when nearly full.
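    As a rough zpool sketch of those two layouts (device names are placeholders):

        # VM pool: mirror of two S3700 SSDs
        zpool create vmpool mirror c3t0d0 c3t1d0
        # NAS pool: Raid-Z2 of six 3-4 TB disks (any two can fail)
        zpool create nas raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0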