IBM V9000 AFA & SQL workloads (I/O wait)

Is anyone running the IBM FlashSystem 900 or V9000 AFAs?

Can you speak to what happened to your SQL workloads after you switched from whatever you had before to the IBM array?

Specifically, did you see an increase in VM CPU consumption after switching?

As the story goes, SQL is experiencing high I/O wait; you put that workload on an AFA and BOOM, the I/O wait goes away and your CPU pegs. Makes sense at face value. The real question is whether the IBM solution, which cuts out all the SSD drive overhead by essentially using PCI cards, improves storage access times significantly more than SSD-based AFAs do.
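
For what it's worth, here's the kind of check I have in mind: watch iowait versus user CPU across the cutover. If the story holds, iowait should collapse and user time should climb after the move. A minimal sketch, assuming a Linux DB host with the psutil package installed (the script and its numbers are just illustrative, not vendor tooling):

```python
# Sample host CPU breakdown to watch the iowait-to-CPU shift across a
# storage migration. Assumes Linux (iowait isn't reported on all OSes).
import psutil

def sample(interval_s=5, samples=12):
    """Print %user, %system, %iowait, and %idle over a short window."""
    print(f"{'user%':>7} {'system%':>8} {'iowait%':>8} {'idle%':>7}")
    for _ in range(samples):
        # Blocks for interval_s and returns percentages over that window.
        t = psutil.cpu_times_percent(interval=interval_s)
        # On slow storage you'd expect high iowait and modest user time;
        # on an AFA the iowait should collapse and user% climb instead.
        print(f"{t.user:7.1f} {t.system:8.1f} {t.iowait:8.1f} {t.idle:7.1f}")

if __name__ == "__main__":
    sample()
```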
 
Hi, I'm not running an IBM AFA, but this result is expected. Quite often, what DB servers do is wait on storage to deliver or write data; storage is usually the bottleneck. Only sometimes, when you are lucky enough to have the data in memory or at least in cache, can the CPU use all of its power.

When this constraint is removed, the CPUs can do the work they should be doing most of the time instead of waiting for storage.
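
If you want to put a number on that storage wait in SQL Server specifically, the PAGEIOLATCH* rows in sys.dm_os_wait_stats are the usual place to look. A rough sketch via pyodbc, assuming a SQL Server host; the connection string is a placeholder for your own environment:

```python
# Quantify SQL Server's storage waits (buffer page I/O latches) from
# sys.dm_os_wait_stats. Connection details below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-host;DATABASE=master;Trusted_Connection=yes;"
)

query = """
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
ORDER BY wait_time_ms DESC;
"""

for row in conn.cursor().execute(query):
    # signal_wait_time_ms is time spent waiting for a CPU after the I/O
    # completed, so the storage share is roughly the difference.
    storage_ms = row.wait_time_ms - row.signal_wait_time_ms
    print(f"{row.wait_type}: ~{storage_ms} ms on storage over "
          f"{row.waiting_tasks_count} waits")
```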

I saw a similar result when I moved some DB workloads from an array shared with other servers and virtual workloads to an array dedicated to the DB only. It was a conventional spinning-rust array, but since all of the array cache, CPU power, and interface bandwidth were dedicated to one purpose, wait time decreased and CPU load increased significantly (about 30%, as I remember). And of course the time needed to run the longer nightly ETL and analytics scripts decreased greatly.
 
Have two V9000s about to go into our lab for testing. They are fast but don't have very good data services. Are they faster than an AFA using SSDs? Maybe... but we're splitting microseconds here. Not something 99.99% of SQL databases would ever benefit from.
 
We spent a little more time with IBM on the V9000. I agree that the performance delta between on-board flash and SSD-based flash doesn't matter in our (and probably most?) environments.

I like the way that capacity can be added in a very modular way, so the customer is never in the situation where they have to fork out a lot of cash for capacity they don't need at the time.

This is where XtremIO really falls down: the requirement to add two bricks during expansion if you want to keep things in the same cluster is decidedly meh. I also like that IBM doesn't nickel-and-dime one to death with licenses for the equivalent functionality of PowerPath, VPLEX, etc.

Lastly, it's pretty cool that the V9000 can be the flash tier in front of a whole bunch of supported third-party storage, allowing you to create your own SDS setup fairly easily.

Between "just in time" capacity upgrades and licensing savings the TCO of the IBM solution is significantly lower than the EMC solution. Having said all that, we are sticking with XtremIO.
 
Is there any technical reason, or is it just company politics? :)

The TCO of the array is only one factor in deciding which technology to purchase. We have a sizable investment in other EMC technologies (Isilon, VMAX, DataDomain, etc.). The XtremIO can sit in front of a VMAX3 if/when we decide to go that route, as the current VMAXes are aging.

Then there's the cost to the relationship we have with EMC if we run off and buy something else. Likewise, there's a cost to the very, very good relationship we have with our VAR (which really does add a lot of value for us) if we were to buy an AFA that they don't sell.

Ultimately, any AFA (StorTrends, anyone?) will do the job if the job is to deliver decent performance. If we can agree on that premise, then it becomes a matter of what other factors come into consideration.
 