Flash Virtualization in My Lab

NetJunkie

Upgraded the lab a bit yesterday. Took my 3rd host up to 32GB finally. Added a 128GB Samsung 840 Pro SSD to each host and put the current build of PernixData's FVP on there. Very nice. :) Taking a lot of I/O off my Synology.

Not hitting it hard yet... still testing. But so far it's great.

[Screenshot: PernixData.png]
 
Does it use an SSD local to the host for flash storage or can that flash storage be remote via iSCSI?
 

I think you need local flash. The whole point is turning server-side flash caching into a read/write cache instead of a read-only one.

I'd love to get their stuff running in the lab. We're a FusionIO partner and try to sell their cards whenever possible, but Pernix really seems like the logical next step to get server-side caching into more clients.
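
As a rough illustration of that read-only vs. read/write distinction, here's a minimal sketch (not PernixData's code; the class and method names are made up): a write-through cache only accelerates reads because every write still waits on the backend array, while a write-back cache acknowledges writes from local flash and destages them to the array later.

Code:
class WriteThroughCache:
    """Read-only acceleration: every write still pays array latency."""
    def __init__(self, array):
        self.array = array      # backend datastore (e.g. the Synology)
        self.flash = {}         # block -> data held on the local SSD

    def read(self, block):
        if block not in self.flash:
            self.flash[block] = self.array.read(block)   # warm the cache
        return self.flash[block]

    def write(self, block, data):
        self.array.write(block, data)   # ack only after the array has it
        self.flash[block] = data


class WriteBackCache(WriteThroughCache):
    """Read/write acceleration: writes complete at local flash speed."""
    def __init__(self, array):
        super().__init__(array)
        self.dirty = set()      # blocks not yet persisted to the array

    def write(self, block, data):
        self.flash[block] = data        # ack at SSD latency
        self.dirty.add(block)

    def destage(self):
        for block in list(self.dirty):
            self.array.write(block, self.flash[block])   # flush in background
        self.dirty.clear()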
 
Looks like it only leverages local server-side flash and competes with the likes of Fusion-io, although it seems more like "bring your own flash" and we'll work with it, rather than requiring proprietary Fusion-io cards. I assume the cache has to fill up and drain as VMs are vMotioned off hosts, like Fusion-io? Not really an issue given how fast flash is, but I assume it suffers from the same 'issue', if you can even call it that.

Edit: actually reading the site helps, I'm stupids

FVP accelerates both read and write operations. For write operations, FVP can be configured so that changes to the data are initially committed to the flash and later persisted on the backend storage. In such scenarios data loss is prevented via synchronous replication to flash devices on peer servers in the cluster.
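
To make the replication part of that quote concrete, here's a hedged sketch of the idea (hypothetical names and structure, not FVP's implementation): a write is acknowledged only after it lands on the local flash and on the flash of enough peer hosts, so losing a single host can't lose data that hasn't been destaged to the array yet.

Code:
class ReplicatedWriteBackCache:
    """Write-back cache that protects dirty data by copying it to peer hosts."""

    def __init__(self, local_flash, peer_flashes, copies=1):
        self.local_flash = local_flash      # dict-like store on this host's SSD
        self.peer_flashes = peer_flashes    # list of dict-like stores on peer SSDs
        self.copies = copies                # peer replicas required per write
        self.dirty = set()                  # blocks not yet on the array

    def write(self, block, data):
        if len(self.peer_flashes) < self.copies:
            raise IOError("not enough peer flash devices to protect the write")
        self.local_flash[block] = data
        # Synchronous replication: the ack waits until the peer copies exist.
        for peer in self.peer_flashes[:self.copies]:
            peer[block] = data
        self.dirty.add(block)               # now safe to ack at flash latency

    def destage(self, array):
        # Later, persist dirty blocks to the backend storage.
        for block in list(self.dirty):
            array.write(block, self.local_flash[block])
            self.dirty.discard(block)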
 

Pernix transforms any server-side flash cache (FusionIO included) into a read and write cache by replicating the write cache between hosts.

EDIT: You beat me to it! :)
 
I looked at FusionIO, and while it seems like a great product, I think it's just not economically viable outside of actual enterprise deployments that absolutely require that kind of I/O. For the SMB market, dropping in a ~$300 SSD to speed things up is much more doable. Even if you do a RAID 10 with four disks or some such, it's still a heck of a lot less expensive than FusionIO. Though I suppose it would really depend on the price of the FVP product.
 
HAH! I had the page open to reply to Thuleman and I completely missed it myself. :D

Want to play with!
 
It uses flash in the server... and as said, unlike Fusion-io you can use any flash. I'm using simple Samsung SSDs, but you'd obviously want better ones for production.

It does read AND write caching, unlike Fusion, and the install is simpler. It was started by the guy who created VMFS. No changes to the VMs; vMotion and everything works as before. It doesn't move the cache with the VM... instead, if a VM needs a block that was cached on another host, it'll pull it from that host's SSD rather than the array, since the SSD is assumed to be faster and it offloads the I/O from the array.

Very slick.
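
A loose sketch of that remote-fetch behavior (hypothetical, not FVP's actual protocol): the cache stays on the old host when a VM moves, and read misses on the new host are served from the old host's SSD over the network when possible, falling back to the array, while the local cache warms back up.

Code:
def read_block(block, local_cache, remote_caches, array):
    """Serve a read for a VM whose cached blocks may still live on another host."""
    if block in local_cache:            # fastest: this host's SSD
        return local_cache[block]

    for remote in remote_caches:        # next: a peer host's SSD over the network
        if block in remote:             # assumed faster than hitting the array
            data = remote[block]
            local_cache[block] = data   # re-warm the local cache as we go
            return data

    data = array.read(block)            # last resort: the backend array
    local_cache[block] = data
    return data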
 
Kind of rhetorical since there are NDAs and such, but I'm wondering how this will stack up against vFlash. Why would I go third party if VMware has it, or will have it, already?
 
I'm not telling any of my customers to stop looking at PernixData, for a reason. :)
 