HP BladeSystem c7000, Storage, Exchange 2013

What really drives me nuts is HP's website. It says, "HP 3PAR StoreServ 8000 Storage with a starting all-flash price of $19,000 USD." I mean, I don't want an all-flash solution, but it looks like the 8000 series is the starting model. The lowest "numerical" model is the 8200, which says "All Flash Starter Kit". How the hell do I price out just standard storage? And what is with HP and these "nodes"? Clearly I don't know anything about HP storage. I am about to just order a $30k EqualLogic and call it a day. :p
 
It's not that simple. On the VC Flex-10 modules (that is the 10GBE one), you'll be doing that through the virtual connect manager. In there you'll be creating profiles and then assigning them to each server (when they're powered off). You can set how many NICs appear to the OS, what VLANs they'll use, bandwidth allocation, etc. Each profile also lets you set which FC fabric to connect. This is all done after you configure the host ports themselves on each module.
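To make the profile idea concrete, here is a toy data model of what a Virtual Connect server profile captures (NIC count, VLANs, bandwidth shares, FC fabric). The class and field names are my own illustration, not the actual VC Manager API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EnetConnection:
    nic_index: int        # NIC number as it appears to the host OS
    vlan: str             # network/VLAN assigned in Virtual Connect
    min_gbps: float       # guaranteed share of the 10GbE uplink
    max_gbps: float       # burst ceiling

@dataclass
class ServerProfile:
    name: str
    enet: List[EnetConnection] = field(default_factory=list)
    fc_fabric: Optional[str] = None   # which FC fabric the profile attaches to

    def total_min_gbps(self) -> float:
        # the guaranteed slices must fit within the physical uplink
        return sum(c.min_gbps for c in self.enet)

profile = ServerProfile(
    name="esx-host-01",
    enet=[
        EnetConnection(1, "VLAN10-Mgmt", 0.5, 2.0),
        EnetConnection(2, "VLAN20-vMotion", 2.0, 10.0),
        EnetConnection(3, "VLAN30-VMs", 2.0, 10.0),
    ],
    fc_fabric="Fabric-A",
)
assert profile.total_min_gbps() <= 10.0
```

The point is simply that a profile is configuration that travels with the bay, so the OS sees whatever NIC/VLAN/bandwidth shape the profile declares.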

As for why you need additional mezzanine cards: the ones you currently have presumably can't support the extra capabilities that the VC affords.

If you'd like more specifics, I manage about a hundred C7000 chassis in a very similar configuration, so I'm somewhat versed on them. Have 10GBE for networking, FC for storage, and about 20 full racks of 3PAR storage.
 
The mezz cards are less important being that they don't cost that much. What I really need is no more than a $50k solution which also includes the 10Gb or FC switches with either LeftHand or 3PAR. I would prefer 10Gb iSCSI but it seems like that would be an additional cost on the 3PAR side.
 

All-flash is taking off, so the marketing folks lead with all-flash. You can get an all-HDD HP 3PAR StoreServ 8200 too (the previous-generation 7200 is still available as well). The 7000 and 10000 3PAR families have been updated to the 8000 and 20000. A 7200/8200 means a max of two nodes and is the entry 3PAR; a 7400/8400 has a max of four nodes and is available with either 2 or 4 nodes. There are also all-flash models in there too - they use beefier controllers to drive higher performance. Maybe this video of mine will help you understand the newest products: https://youtu.be/k3XG6R9l_Tw
 

Oh. So I guess these "nodes" are how much you can expand on the storage? I am used to NetApp. Buy a controller and add disk shelves.
 
So I realize I know nothing about 3PAR. I was told today that the most common RAID used is RAID 5 and that 3PAR doesn't have hot spares. Seems to be a whole different technology than traditional storage.

Although the part I didn't like is there is no dedupe or compression on HDD...You only get that with SSD?
 
3PAR is another enterprise platform - RAID groups, volumes, LUNs, etc. It's solid, with great engineers too, but still a traditional platform. What's the cost on a shelf going to be?

Actually, 3PAR is not at all a traditional array. The guys that designed it based the controller architecture on server clustering. It uses a custom ASIC that drives a lot of the value:
  • Internode communication: 3PAR is very different from other traditional two-controller arrays. It scales to 8 nodes, with the nodes interconnected via the ASIC. This allows any controller or port to service an IO from a host.
  • Cache IO: The ASIC is a data highway, and the caches across the nodes are interconnected.
  • RAID calculations: no load on the CPUs.
  • Mixed workload: The ASIC enables support of mixed workloads by using all disks in the system for all IO. The wide striping minimizes hotspots and is ideal for mixed workloads.
  • Data deduplication: I'm not aware of any other all-flash array that does hardware-based deduplication. Why is that important? When deduplication is done via CPUs and the IO workload is high, something gets sacrificed - either IO performance suffers or deduplication gets turned off.
  • Thin reclamation: UNMAP (reclaiming space) is done via the ASIC. The 3PAR ASIC detects patterns of zeros and can return that space to the free pool without scanning a LUN to find the free space.

The best summary I can give is that with the same architecture, HP 3PAR scales from an entry-midrange array through tier-1, all-flash, hybrid, and all-HDD with the exact same software, management, and in fact OS across all the boxes. 3PAR was built to be an IO-serving engine and to protect the IO. We don't care what the media behind it is. Proof is in the pudding - HP 3PAR is now the #1 midrange array and #2 all-flash in the market. It really isn't a "traditional" array.
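The thin-reclamation point is easy to picture with a toy model. In the real array the zero detection happens inline in the ASIC; this plain-Python sketch (the page size and function name are my own invention) just shows the idea of dropping all-zero pages back to a free pool without a full LUN scan:

```python
# Toy illustration of zero-detect thin reclamation over fixed-size pages.
PAGE = 16 * 1024  # assumed page size for the sketch

def reclaim_zero_pages(lun: bytes, page_size: int = PAGE):
    """Return (kept_pages, freed_page_indices): pages that are entirely
    zeros are dropped and their indices returned to the 'free pool'."""
    kept, freed = {}, []
    for i in range(0, len(lun), page_size):
        page = lun[i:i + page_size]
        if page.count(0) == len(page):     # page is all zeros
            freed.append(i // page_size)   # give the space back to the pool
        else:
            kept[i // page_size] = page    # real data stays mapped
    return kept, freed

# one zero page, one page with data, one more zero page
lun = b"\x00" * PAGE + b"data" + b"\x00" * (PAGE - 4) + b"\x00" * PAGE
kept, freed = reclaim_zero_pages(lun)
assert freed == [0, 2] and list(kept) == [1]
```

Doing this inline as data flows through is what lets the hardware approach avoid the background scanning a CPU-based implementation would need.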
 

Why would I purchase a 3PAR over say something like Tintri?
 

3PAR allows you to scale up and out. You can start with a 2-node entry array and the family scales up to 8 nodes. So yes, the capacity of those systems grows as nodes are added.

A key concept with HP 3PAR is that it stripes data across the entire array. With a traditional array, you pick a number of disks and configure those as a certain RAID level and then assign hot spares to those RAID groups.

With 3PAR, you pick the capacity you want for a RAID level and create pools of storage that you can then use when you provision storage. You're right, there aren't specific hot spares assigned. 3PAR stripes the hot-spare space across all the drives, so if a drive fails you don't rebuild onto one drive - the rebuild happens across the entire pool. Here's a video demo I did a couple years ago showing me configuring a 3PAR system that will help to understand some of these concepts: https://youtu.be/1FWjQOaS-s4
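The distributed-sparing idea can be illustrated with a toy simulation. This is not 3PAR's actual chunklet layout - the drive count and the 3-way peer choice are arbitrary assumptions - it just shows why a rebuild spreads across many drives instead of funneling through one dedicated hot spare:

```python
import random

def rebuild_sources(n_drives: int, failed: int, chunks_per_drive: int):
    """Each chunk on the failed drive has RAID peers scattered over the
    other drives (wide striping); count the distinct drives that end up
    participating in the rebuild."""
    rng = random.Random(0)  # seeded so the sketch is repeatable
    participants = set()
    for _ in range(chunks_per_drive):
        # peers of this chunk live on some other drives
        peers = rng.sample([d for d in range(n_drives) if d != failed], 3)
        participants.update(peers)
    return participants

# With wide striping, nearly every surviving drive takes part, so the
# rebuild runs in parallel instead of bottlenecking on one spare disk.
parts = rebuild_sources(n_drives=24, failed=5, chunks_per_drive=200)
assert len(parts) > 20 and 5 not in parts
```

With a dedicated hot spare, every rebuilt chunk would be written to the same single disk; here the read and write load both fan out across the pool.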

And yes, we aren't deduping HDD. The performance overhead to dedupe HDD is pretty high and we don't think it's worth it today. I know the team is looking at whether they can minimize the performance penalty, but for now it isn't worth it. At least one major storage vendor made a big deal of HDD dedupe several years ago, but if you go read their technical white papers, they basically say don't use it if you care about performance.

Lastly, I apologize for my delay in responding - I was on an international business trip to New Zealand and my team had me speaking at two events per day, so I had no time. I did speak at a couple of VMUGs about vVOLs, which was great because I think people don't have a clear understanding of what it is, how it works, and what the benefits are.
 

I don't know enough about Tintri to give you an honest assessment. They are a VMware-only platform. I think with the emergence of vVOLs, a lot of the advantages Tintri had (granular control of VM storage) go away. I've seen Tintri spending a lot of time bashing vVOLs, and that can only mean they see it as a risk. Also, I'd guess that Tintri doesn't have the scale or performance of 3PAR. Lastly, 3PAR as a mature platform has a ton of features that are hard for startups to develop, and develop well. Tintri has a good history, but I'm guessing they don't have things like QoS, multi-site replication, stretch cluster (with native OS MPIO support), etc. Advanced data services take a long time to develop.
 

Looking forward to someone's response. :D
 

We are not a VMware only platform - Virtualization only, yes, but VMware/Hyper-V/KVM/OpenStack/Xen are all fully supported ;)

We don't see VVOLs as a risk, we see it as affirmation of what we're doing, but the VVOL 1.0 spec didn't ~include~ 90% of what it was supposed to, or originally had in it (remember: I knew the spec from the original PRD), so we do point that out. Especially since you can't use it with DR or almost anything else right now.

As for features:
QoS is on a per-VM or dynamic group basis - minimum and maximum, in realtime, and adaptive to the entire environment (network latency, CPU states, etc)
Multi-Site replication: Chain supported, scale-out roadmapped.
Stretch Cluster: Roadmapped, coming soon.

And we have:
Realtime analytics on a Per-VM basis
Per-VM snapshot/restore/data migration
Per-VM state analysis and adaptation
Per-VM queuing and isolation
Per-VM latency control
ZERO management - no luns, raid groups, volumes - just VMs.
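A floor/ceiling QoS scheme like the one described above can be sketched in a few lines. This is my own toy allocator, not Tintri's actual algorithm (which is realtime and adaptive); it just shows minimums being honored first, then leftover capacity handed out under each VM's cap:

```python
def allocate_iops(vms: dict, capacity: int) -> dict:
    """vms maps name -> (demand, floor, ceiling); returns name -> IOPS."""
    # pass 1: everyone gets their guaranteed minimum (or less, if they
    # aren't even asking for that much)
    alloc = {name: min(d, f) for name, (d, f, c) in vms.items()}
    spare = capacity - sum(alloc.values())
    # pass 2: hand out leftover capacity, never exceeding a VM's ceiling
    for name, (demand, floor, ceiling) in sorted(vms.items()):
        want = min(demand, ceiling) - alloc[name]
        give = min(want, max(spare, 0))
        alloc[name] += give
        spare -= give
    return alloc

vms = {
    "sql01":  (9000, 2000, 6000),   # heavy, but capped at 6000
    "web01":  (1500, 1000, 4000),   # light workload
    "batch1": (8000,  500, 8000),   # greedy background job
}
alloc = allocate_iops(vms, capacity=10000)
assert all(alloc[v] >= min(vms[v][0], vms[v][1]) for v in vms)
assert sum(alloc.values()) <= 10000
```

A real implementation would redistribute continuously as demand shifts; the fixed two-pass split here is only to make the floor/ceiling mechanics visible.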
 

You present a LUN or a Volume to a host, carved from a RAID group of some kind; ergo, you're a traditional array. That's where I'm pointing the difference - the architecture is great, but the fundamental structure is traditional. As you yourself point out, you are an IO-serving engine. I don't care about IO - I care about the workload. IO is part of that, but so is much, much more: data types, command sizing, priority, protection, data management, synchronization, dev/test copies of the running data, CPU and memory states, network state, backup and replication state, commonality between other workloads, etc.

That's what Tintri does - IO serving is almost table stakes at this point; it's what you can do with the REST of the data that matters, and 90+% of that is contained in a virtual machine now. What I can do at ~that~ layer of abstraction cannot be done at the block layer without a shim like VVOLs, and VVOL isn't fully baked yet, nor is everyone ready to start trying to leverage it to get there - you need a lot more information not just about the data, but about the entire workload, to truly "manage" the application, and VVOLs only provides part of that. That's what we do - we pull in all the rest and use it. I'd love to sit down on a WebEx with you to chat at some point - don't get me wrong, I LOVE 3Par, I really do - but we're doing something really special.

And we don't bother writing Zeros either ;) But we also don't have a pool to manage ;)
 
From my experience as a user who has had some good experiences with HP 3PAR, I can agree with you both - 3PAR isn't a "traditional array" on the back end and clearly has a few very good features, but on the front end it isn't much different from the others - it presents non-VM-aware LUNs or VVols, which so far have limited capabilities.

I haven't had the opportunity to work with Tintri yet (I've only met one person with real-life hands-on experience, and it was a good one), but it seems it should be easier to set up and integrate with a compute/virtualisation stack.
I would consider it when building a new, or upgrading an existing, virtual environment.

However, with Tintri, VMware and Exchange there is one small problem - MS doesn't support virtual Exchange running on storage presented via NFS:
http://exchange.ideascale.com/a/dtd...ange-data-on-file-shares-nfs-smb/571697-27207

http://up2v.nl/2014/02/03/exchange-does-not-support-nfs-vote-and-you-might-change-that/

(even if it runs on NFS completely fine, as virtualisation abstracts the machine from the physical storage platform anyway - see: http://www.joshodgers.com/2014/02/1...k-on-nfs-datastore-look-like-to-the-guest-os/ )
 

Lop will probably have a better answer, but I wouldn't be surprised if NFS isn't supported simply to drive more customers to Hyper-V which supports iSCSI and SMB but not NFS. There really is no good reason for NFS not to be supported for Exchange. Managing block storage sucks. Once you start using NFS especially on storage like Tintri, you will never want to go back.

From a support perspective, there really isn't any way for MS to know how your storage is connected. With VMware, in the guest OS you will simply see SCSI disks and PVSCSI controllers.
 

The core issue is the inability to abort transactions on a file-based stack - not a problem when you have a virtualized scsi layer in there to handle the aborts for you :) Exactly what the blog you posted noted.
 

I know and understand; I've also seen Exchange running on NFS virtual storage without any issues. "Not supported" doesn't mean it won't work, but it has to be weighed by the organisation (that is why Nutanix re-introduced iSCSI support - to get the official 'seal of approval' from MS:
http://www.joshodgers.com/2015/09/30/ms-exchange-on-nutanix-now-a-ms-validated-ersp-solution/
http://vinfrastructure.it/2015/06/iscsi-strikes-back-in-nutanix-storage/ )
 
So we can agree that we don't have the same definition of "traditional array" and I will continue to assert that 3PAR isn't one. Good to have the perspective DontPanic! shared. We'll continue to talk about 3PAR as non-traditional.

Does Tintri support non-virtualized environments? Guessing not based on what you explained.

Also, it is interesting to hear you say vVOLs aren't a risk and then go on to talk about what they're missing. Clearly, Tintri does see it as a risk, and in fact there was a panel discussion at VMworld where someone from Tintri was bashing vVOLs.
 
I consider any array that maintains a simple initiator-to-target relationship at its core a traditional platform - for you, as much as for anyone else, it's the host that issues commands. But I don't care about the host.

As for VVOLs, that's because it ~is~ missing things. Claiming that a technology will solve problems, when it fundamentally breaks certain other ones, is a problem. Lack of replication, limits on the number of VVols, an untested API layer... I don't know anyone rolling it to production; it's a 1.0 technology.

And no, we don't support non-virtualized environments - no particular interest in doing so either. 85+% of all workloads are virtualized, moving to 100%, across hypervisors. Sure, there'll always be physical workloads, but I'd rather perfect the 85%+ than worry about those.
 
I love what Tintri is doing but we can't even look at them because of vendor lock-in due to insane discounts (disclaimer: I work for a big virtualization company in a SaaS division). We also deploy some DB workloads baremetal so that's another reason we can't.

I have extensive experience with 3Par (Most T and F series) from my previous job and do love the ASIC/controller architecture. I abhor the VNXs we currently utilize for a plethora of reasons.

I do have to agree with lopoetve about vVOLs... the stuff it doesn't support or work with, like replication, is a deal breaker for us utilizing it at this time.

EDIT: I've been around this website and forums since the early days but have finally decided to actually post... :)
 

Appreciate the perspective you have on 3PAR. I agree that there are some real issues with vVOLs in the short term - no question. Replication is a deal breaker, and that cascades into things like vMSC or SRM. It's a 1.0 implementation and it will get better.

Once vVOLs has that, the playing field will be level WRT granular VM visibility.
 
A friend of mine just bought four Tintri storage arrays to replace their 3PAR.
 
I wish. Unfortunately we are just the MSP and they don't listen so well about imposing limits.

Then talk money. Go all-out on the hardware estimate and put the numbers in front of them. If they have to choose between paying n times more or cutting quotas, most customers immediately take the path of least expense, regardless of their previous demands.

If you just tell them your disk is getting full they really couldn't care less. They don't understand what it means.
 