PCIe NVMe SSD in production environment

Discussion in 'Virtualized Computing' started by sybreeder, May 8, 2018.

  1. sybreeder

    sybreeder Limp Gawd

    Oct 24, 2010
    I've got a Dell R730xd at work with 6 NL-SAS 7200RPM 8TB drives in RAID 10. It was fine for what we use it for - Hyper-V and main storage. OS is Server 2012 R2 DC with about 20 VMs. One of those is around 2TB and uses DFS. Recently I looked at perfmon and noticed that during heavier work, for example when I started a VM, the queue length got as big as 4.0. Normally it's around 0.020-0.100. Should I start to worry about latency and/or IOPS?

    It crossed my mind to purchase a PCIe NVMe drive as a cache. At home I've got an S3500, and I've noticed that Intel SSDs have dedicated cache software. I was thinking about an Intel 750, P3500 or P3600 as such a cache device. Has anyone used a PCIe drive as a cache device? Is it safe? The issue with PCIe drives is that there is really no way to RAID them other than software.
    Or maybe Intel drives are so good that RAID 1 isn't needed for them and a good backup is enough?
    Thanks in advance for suggestions!
  2. dgingeri

    dgingeri 2[H]4U

    Dec 5, 2004
    I have done this with ESXi before, but not with Hyper-V. I don't know if it has the necessary features for it.
  3. Grimlaking

    Grimlaking 2[H]4U

    May 9, 2006
    Here's my issue, from an enterprise redundancy standpoint: it might help alleviate your disk I/O wait state issue, but what is the cost? If this single device fails, what happens to the VMs you are running? How much performance does it buy you?

    If your system will maintain stability through a cache drive failure, then I say go for it. If a failure would actually cause the system to fail, then in my opinion I would avoid the NVMe cache drive.

    Just my opinion mind you, but thinking from an enterprise standpoint I would never want to introduce something that acts as a single point of failure. But in my environment we are extremely risk averse.
  4. Eickst

    Eickst [H]ard|Gawd

    Aug 24, 2005
    A queue length of 4 with a 6-disk RAID 10 isn't bad. Did you look at your actual latency and IOPS during that time?
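    For anyone wondering how queue length, latency, and IOPS relate: Little's Law ties them together (average outstanding I/Os = IOPS x average latency), so you can sanity-check one counter from the other two. A rough Python sketch - the 400 IOPS figure is just an assumed number for illustration, not from the thread; plug in your own perfmon readings ("Disk Transfers/sec" and "Avg. Disk Queue Length"):

    ```python
    # Little's Law for disk I/O:
    #   avg queue length = IOPS * avg latency (in seconds)
    # Rearranged to estimate latency from the two counters perfmon gives you.

    def avg_latency_ms(queue_length: float, iops: float) -> float:
        """Estimate average I/O latency in milliseconds from queue length and IOPS."""
        return queue_length / iops * 1000.0

    # During the spike: queue length 4.0 at an assumed 400 IOPS -> 10 ms average.
    print(avg_latency_ms(4.0, 400.0))   # 10.0
    # Baseline: queue length 0.05 at the same assumed 400 IOPS -> 0.125 ms.
    print(avg_latency_ms(0.05, 400.0))  # 0.125
    ```

    So the same queue length can mean very different latency depending on how many IOPS the array is pushing - which is why looking at queue length alone isn't enough.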