I have a makeshift SAN node that a pair of VM hosts connect to, and I'm getting shit IOPS. Initially I had a RAID0 array of six 1TB 7200rpm HDDs on an Adaptec 3805 SATA II RAID controller, with 1GbE connectivity through a PowerConnect 2816 switch and 9k jumbo frames configured everywhere. I'm using StarWind Virtual SAN to handle the iSCSI connections and provide 5GB of RAM cache. That netted me about 800 IOPS... shit.
So I got myself a 500GB SSD (on the motherboard's SATA3 port) and an Intel quad-port NIC. Using NIC teaming, with static teams configured on both the switch and the OS (2×1GbE, so 2Gb aggregate to each host), I very occasionally see a spike into the 2,200 IOPS range... still shit.
Any idea what's causing such poor performance?
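For context on the raw numbers: at low queue depth, iSCSI IOPS are capped by per-I/O round-trip latency rather than disk or link speed, so I sketched the ceiling to sanity-check whether my benchmark could even exceed these figures. The latency values here are assumptions for illustration, not measurements from my setup:

```python
# Back-of-envelope: IOPS ceiling ~= outstanding I/Os / round-trip latency per I/O.
# Latency figures below are assumed for illustration, not measured on this SAN.

def iops_ceiling(rtt_seconds: float, queue_depth: int = 1) -> float:
    """Max IOPS when each I/O takes rtt_seconds and queue_depth I/Os are in flight."""
    return queue_depth / rtt_seconds

# Assuming ~1.25 ms round trip per small I/O over 1GbE iSCSI, at queue depth 1:
print(round(iops_ceiling(0.00125)))      # -> 800
# Same assumed latency with 8 I/Os in flight:
print(round(iops_ceiling(0.00125, 8)))   # -> 6400
```

If that assumption is anywhere near right, a queue-depth-1 benchmark would be latency-bound at numbers like mine regardless of the SSD, which makes me wonder whether I should be re-testing with more outstanding I/Os.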