PCIe NVMe SSD in production environment

sybreeder

Limp Gawd
Joined
Oct 24, 2010
Messages
193
I've got a Dell R730xd at work with 6 NL-SAS 7,200 RPM 8TB drives in RAID 10. It was fine for what we use it for: Hyper-V and main storage. The OS is Server 2012 R2 DC with about 20 VMs; one of them is around 2TB and uses DFS. Recently I looked at perfmon and, after some heavier work, noticed a queue length as high as 4.0. I had started a VM at that time. Normally it's around 0.020-0.100. Should I start to worry about latency and/or IOPS?
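For anyone who wants to capture the same counters outside the perfmon GUI, here's a rough sketch using Python to drive Windows' built-in typeperf. The counter paths are the standard PhysicalDisk ones; the interval and sample count are arbitrary choices for illustration:

```python
import csv
import io
import subprocess

COUNTERS = [
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
    r"\PhysicalDisk(_Total)\Avg. Disk sec/Read",   # read latency (seconds)
    r"\PhysicalDisk(_Total)\Avg. Disk sec/Write",  # write latency (seconds)
    r"\PhysicalDisk(_Total)\Disk Transfers/sec",   # total IOPS
]

def sample_counters(interval_s=2, samples=30):
    """Run typeperf and return one list of floats per sample."""
    out = subprocess.run(
        ["typeperf", *COUNTERS, "-si", str(interval_s), "-sc", str(samples)],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = []
    for rec in csv.reader(io.StringIO(out)):
        # Data rows start with a timestamp; skip the header and status lines.
        if len(rec) == len(COUNTERS) + 1 and rec[0][:1].isdigit():
            try:
                rows.append([float(v) for v in rec[1:]])
            except ValueError:
                continue  # occasionally a sample comes back empty
    return rows

if __name__ == "__main__":
    data = sample_counters()
    for name, col in zip(COUNTERS, zip(*data)):
        print(f"{name}: avg={sum(col) / len(col):.4f}  max={max(col):.4f}")
```

The latency counters ("Avg. Disk sec/Read" and "sec/Write") are usually more telling than queue length alone, since they show whether I/O is actually waiting.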

It crossed my mind to purchase a PCIe NVMe drive as a cache. At home I've got an S3500, and I've noticed that Intel SSDs have dedicated caching software. I was thinking about an Intel 750, P3500, or P3600 as such a cache device. Has anyone used a PCIe drive as a cache device? Is it safe? The issue with PCIe drives is that there is really no way to RAID them other than software.
Or are the Intel drives perhaps reliable enough that RAID 1 isn't needed for them and a good backup is enough?
Thanks in advance for suggestions!
 
I have done this with ESXi before, but not with Hyper-V. I don't know if it has the necessary features for it.
 
Here's my issue, from an enterprise redundancy standpoint: it might help alleviate your disk I/O wait-state issue, but at what cost? If this single device fails, what happens to the VMs you are running? How much performance does it buy you?

If your system will maintain stability through a cache drive failure, then I say go for it. If a failure would actually take the system down, then in my opinion I would avoid the NVMe cache drive.

Just my opinion, mind you, but thinking from an enterprise standpoint I would never want to introduce something that acts as a single point of failure. Then again, in my environment we are extremely risk-averse.
 
A queue length of 4 on a 6-disk RAID 10 isn't bad. Did you look at your actual latency and IOPS during that time?
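To put a number on that, here's a back-of-the-envelope sketch using common rule-of-thumb figures (roughly 2 outstanding I/Os per spindle before worrying, and very roughly 75 random IOPS per 7,200 RPM NL-SAS drive), not anything measured:

```python
SPINDLES = 6          # 6-disk RAID 10
PER_DISK_IOPS = 75    # rough figure for a 7,200 RPM NL-SAS drive
QUEUE_LENGTH = 4.0    # observed peak in perfmon

per_spindle = QUEUE_LENGTH / SPINDLES
print(f"Outstanding I/Os per spindle: {per_spindle:.2f} (worry above ~2)")

# RAID 10: reads can be serviced by any disk; every write lands on
# two disks, so the write ceiling is roughly half the read ceiling.
print(f"Approx. random read IOPS ceiling:  {SPINDLES * PER_DISK_IOPS}")
print(f"Approx. random write IOPS ceiling: {SPINDLES * PER_DISK_IOPS // 2}")
```

That works out to about 0.67 outstanding I/Os per spindle at your observed peak, which is well within what the array should handle.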
 