
View Full Version : EMC vs. Compellent - Let the debate begin!


RiDDLeRThC
01-22-2012, 08:09 PM
Okay, so we are getting ready to make a large (for us) SAN upgrade.

We are looking at two VNX 5300s or two Compellent Series 40's.

For us, we currently have EMC gear and while we love our CX4 we cannot stand our AX4.

The EMC arrays, we feel, lack some of the most basic performance reporting features even with Navi Analyzer. We also hate the fact that we cannot rebalance storage pools when we add additional disks, even though we hear that feature is coming in May.

As for the Compellent gear, it has everything you could ever want and need from a reporting aspect.


We have some other pros and cons of each but I'm interested to see what others have to say. Those two points above have been the biggest issues for us when it comes to the EMC gear we have now.


Both EMC and Dell are giving us deep discounts/buy back on our current EMC gear so we are getting pretty good pricing either way.

NetJunkie
01-22-2012, 08:17 PM
The AX4 is an odd beast. Don't base anything on that. There are also some oddities in the VNXe series..and we keep finding more...

Rebalancing pools IS coming. We're waiting on that as well. Compellent is easy. Simple to create LUNs. The downside I have seen and when I speak here I'm speaking as NetJunkie, not as my company, is that if you don't get the performance you want there aren't a lot of options. You roll in more spindles and hope for the best. With the current EMC stuff you can do FAST Cache (both read AND write), true tiering across 3 tiers...fast 24Gb back-end connections... FC, FCoE, iSCSI, NFS, and CIFS along with great vSphere integration.

I'm an EMC guy no question..but I do VMware integrations and deployments against everything.

RiDDLeRThC
01-22-2012, 08:28 PM
I'm right there with you, I asked my Compellent guys where the SSD's were in their quotes, the exact words were "We don't need SSD like EMC does, we do everything with our spindle count."

Just wish EMC had better performance reports, but as I was talking with my VAR... we have only wanted to see performance data out of our AX4 because, well, it sucks. I couldn't care less about seeing any more data out of our CX4 than we get from vSphere, because, well, it just works, and works really really well.

RiDDLeRThC
01-22-2012, 08:34 PM
Have you come across any good articles on the Compellent/EMC differences?

I really see myself going the EMC route with what I know today but I know one person I work with is really liking Compellent and while I don't need to swing him my way it would be nice to have him onboard.

Vader
01-22-2012, 08:58 PM
Unisphere has a boatload of features, including reporting. Along with what NetJunkie mentioned pertaining to all the protocol/connectivity and FAST support, there are also some very robust software packs and suites that make it even better, such as the replication technologies for continuous remote replication via RecoverPoint, Replication Manager, etc., which, I might add, are managed via one interface, again, Unisphere. Of course the vSphere integration is phenomenal, and I'm not just talking from the vSphere side, I'm talking about the Unisphere side as well.

There are also nice-to-have features like being able to house all my drive types within one DAE.

Are you looking at two for DR?

Honestly, I think any vendor who just throws spindles at I/O requirements in the age of EFD is crazy. EFD should be the de facto standard today, as it will eventually replace all spindle types. Soon you will not need mid-tier 15K SAS drive types; it will be large NL-SAS mixed with SSD for extended cache and tiering to provide the IOPS you need.

NetJunkie
01-22-2012, 10:33 PM
I don't have any competitive info to share..and don't have a lot not to share to be honest...

And yeah..saying you don't need EFD these days is just stupid. "Oh really? So how many spindles do I need to power and then cool to equal the 4,000 IOPS this one EFD will do?". Idiotic.
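NetJunkie's rhetorical question is easy to put numbers on. A back-of-envelope sketch, where the per-drive rating for a 15K spindle (~180 IOPS) is a common rule-of-thumb assumption, not a vendor spec:

```python
import math

# Rule-of-thumb per-drive ratings (assumed, not vendor-published specs)
EFD_IOPS = 4000      # what one EFD is claimed to do above
SAS_15K_IOPS = 180   # typical figure for a 15K RPM SAS spindle

# Spindles you'd have to power and then cool to match a single EFD
spindles = math.ceil(EFD_IOPS / SAS_15K_IOPS)
print(spindles)  # → 23
```

Two dozen drives' worth of power, cooling, and rack space to match one flash drive is the whole argument in one number.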

Dell reps were getting REALLY good spiffs last year to displace EMC after the relationship broke down...probably still are so they are trying REALLY hard to sell it. Probably going to beat it in another account this week. Customer said it's pretty much a done deal.

Child of Wonder
01-22-2012, 10:59 PM
I agree, the monitoring tools in EMC for Block are abysmal. True, one can continuously run Data Logging and migrate the NAR files off on a regular basis, but it would be nice to have something a little more intuitive and easy to use.

However, not knowing anything about Compellent, that EFD comment from them also turns me off. Spinning rust is the storage medium of the past and the current shortages and price increases are only going to serve to push consumers quicker towards SSDs.

Another issue with EMC I don't care for is the inability to avoid Storage Pool critical capacity errors even when you're only thick provisioning. For many deployments, I like the simplicity of throwing many disks into a large Storage Pool rather than creating traditional RAID Groups and MetaLUNing across them. However, the highest setting you can configure for capacity threshold is 84% on the Pool. For customers who don't want to run Thin/Thin in their VMware environment, this means 16% of the Storage Pool capacity is wasted.

RiDDLeRThC
01-22-2012, 11:24 PM
Wish I could get my hands on a demo Compellent unit. Will be making a call to the Dell sales team tomorrow to see what they can do. Maybe they at least have a local lab that I can spend some time in. I don't really want to fly out to TX just to eval some hardware.

SpaceHonkey
01-23-2012, 01:03 AM
I'm right there with you, I asked my Compellent guys where the SSD's were in their quotes, the exact words were "We don't need SSD like EMC does, we do everything with our spindle count."

Wow - I almost dirtied my monitor with that one...

V@nill@
01-24-2012, 02:00 AM
I'm biased (EMC'er) but my big bugbear with Compellent is the fact that it is still 32-bit.

Why does this matter? 4GB array cache. The end. No more.

The VNX natively goes up to 12GB per SP (I think) and on top of that you can add SSD drives as an extension of cache - this is an amazing feature. Tongue in cheek but when you have many TB of data and a 220GB+ cache array against a 4GB cache array I certainly hope the 4GB array has good performance reporting because you're going to need it!!

What else....VMware - VNX has a ton more VMware integration. Even beats the Tier 1 arrays.

http://wikibon.org/wiki/v/EMC_and_NetApp_lead_in_VMware_Storage_Integration_Functionality

Errr, I recall Compellent offers no compression or deduplication. EMC offers it for free.

Finally, I don't think Compellent offers NFS and CIFS in the same package. I think they can add a server or solution, but it has separate management. The VNX will come with two X-Blades which are managed via the same software (Unisphere; if you're not on FLARE 30 yet, check out the vids on YouTube).

Vader
01-24-2012, 08:13 AM
Very good points V@nill@, and to add to that... FAST Cache allows you to extend cache on the VNX5300 up to 500GB.

shabazkilla
01-24-2012, 08:39 AM
You have to feel almost bad for Compellent though. They have some great ideas and neat features, but SSDs are going to kill them. Day late, dollar short I guess.

lopoetve
01-24-2012, 11:12 AM
I'll just throw out that technically, Compellent isn't far off - enough spindles WILL get the job done.

Other than that, I can't comment here.

http://i.imgur.com/DgBKP.gif

gminks
01-24-2012, 04:36 PM
Before I get started full disclosure: I work for Dell. (out of curiosity - who else is from EMC on this thread besides v@nill@?).

Dell just released a new Compellent version that brings everything to 64 bit. Check out this blog post (http://www.boche.net/blog/index.php/2012/01/11/path-set-for-dell-storage-forum-2012-london/) for more info. Since Compellent has a perpetual license, existing customers who are on Series 40 hardware or higher don't pay for the upgrade to Compellent 6.0 (customers who have series 30 and lower will have to update hardware in order to update to the 64 bit software, but that software update to 6.0 is still free).

I'll stay out of the debate about SSDs...but I will say we support SSDs, we recommend them when they are necessary for the architecture that is designed to meet the business needs of the customer. If the need is for all SSD - we're happy to oblige. :)

One other point - Compellent currently supports all tiers in the DAE today.

NetJunkie
01-24-2012, 05:10 PM
Hey Gina! It's amazing who ends up on Hard Forum sometimes....

lopoetve
01-24-2012, 06:30 PM
hahaha! Lots of folk getting together here now.

V@nill@
01-24-2012, 10:06 PM
Hmmm I do remember a Gina Minks at EMC......just on email trails and watchers lists. The same?

fibroptikl
01-24-2012, 10:23 PM
I'm biased (EMC'er) but my big bug bear with Compellent is the fact that it is still 32bit.

Why does this matter? 4GB array cache. The end. No more.

As Gina mentioned, the latest version of Storage Center, is 64-bit.

http://www.compellent.com/About-Us/News-and-Events/Press-Releases/2012/120111-Storage-Center-6.aspx

V@nill@
01-25-2012, 01:06 AM
Ahhh, perfect timing on the 64-bit software. However, how about hardware to take advantage of the 64-bit OS and more memory?

I hear later this year :)

sam-sgc
01-25-2012, 09:02 AM
We currently have 4 EMC CX4, and we decided to start purchasing Compellent since last summer. I currently have 1k+ spindles on Compellent.

Compellent is so much better, it's not even fair.

Their reporting is miles ahead of Navisphere Analyzer.. and their Copilot support is the best.

Thin tiering actually works on Compellent.. it has so much performance impact on the CX4/VNX platform that it's unusable there.

As for disk performance / SSD: we compared a fully loaded CX4 with 5 SSDs to a Compellent system with half the spindle count, and guess who won? Simply because of their superior architecture.. since all writes go to RAID-10.. it's really fast. That's why you don't need SSD unless you start pushing some really high IO. I'm pushing around 30k on each system and I don't feel the need for any SSD yet.

Of course, EMC will push for SSD... and their tiering works in 1GB increments... so much for efficiency when you compare it to the 2MB tiering of the Compellent..

I could go on all day...

Now, when you get into high-end SAN.. EMC is still the clear winner.. their VMAX is still considered one of the best high-end systems. However.. the price of entry is not cheap!!

If you really need SSDs.. you should be looking at Nimbus or Pure Storage. They are pushing some serious IOPS per U.

NetJunkie
01-25-2012, 09:10 AM
Was the disk in the CX4 also RAID10?

sam-sgc
01-25-2012, 09:27 AM
We compared all scenarios:

RAID-10 fat, RAID-5 fat, RAID-10 thin and RAID-5 thin.

Of course.. Compellent is using all its drives.. so a write coming in will use 100+ spindles.. even for a small 100GB LUN.

With EMC, it would write to only 8 or 16 drives max.
You can try to simulate the Compellent behavior by using Storage Pools on EMC, but their performance is worse than traditional LUNs.
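The wide-striping point above is easy to illustrate with rough numbers. A sketch assuming a ~180 IOPS rule of thumb per 15K spindle (an assumption, not a measured figure):

```python
PER_SPINDLE_IOPS = 180  # assumed rule-of-thumb for one 15K SAS drive

# Compellent-style wide striping: even a small 100GB LUN spans all drives
wide_stripe = 100 * PER_SPINDLE_IOPS   # 100+ spindles behind every write

# Traditional RAID group: the LUN only touches its own 8-16 drives
narrow_raid = 16 * PER_SPINDLE_IOPS

print(wide_stripe, narrow_raid)  # → 18000 2880
```

This ignores RAID write penalties and cache effects, but it shows why the same total spindle count can perform so differently depending on how LUNs are laid out across it.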

Compellent_Junkie
01-25-2012, 09:45 AM
As my mother used to say

"the truth will always come out!!"

Child of Wonder
01-25-2012, 12:23 PM
lol

The Dell/Compellent marketing force is in full attack mode!!!!

danswartz
01-25-2012, 12:48 PM
ALL YOUR LUN ARE OURS!!!!

Vader
01-25-2012, 01:53 PM
Where is your comparison to VNX? You are comparing previous-generation tech to current-generation Compellent? I would certainly hope that Compellent would be better.


I don't think anyone here said that Compellent can't deliver IOPS.. heck, any vendor can throw spindles at the problem, no? The issue is twofold:

1. Extension of cache is far superior with VNX even with Compellent's 64-bit support; btw, EMC FAST Cache = 64k chunks. I do believe EMC needs to get more granular with the FAST tiering slice size, however.
2. Why would I want to manage 100 spindles to handle the required IOPS? Where is the efficiency there when I could have far fewer spindles, extend my cache up to 2.1TB, and tier between high-capacity NL-SAS, 10/15K SAS, and EFD?

You can also get an all-SSD VNX5500-F.

EPOQ
01-25-2012, 02:00 PM
Where is your comparison to VNX? You are comparing previous-generation tech to current-generation Compellent? I would certainly hope that Compellent would be better.


I don't think anyone here said that Compellent can't deliver IOPS.. heck, any vendor can throw spindles at the problem, no? The issue is twofold:

1. Extension of cache is far superior with VNX even with Compellent's 64-bit support; btw, EMC FAST Cache = 64k chunks. I do believe EMC needs to get more granular with the FAST tiering slice size, however.
2. Why would I want to manage 100 spindles to handle the required IOPS? Where is the efficiency there when I could have far fewer spindles, extend my cache up to 2.1TB, and tier between high-capacity NL-SAS, 10/15K SAS, and EFD?

I also agree that it should be a comparison between the current-gen EMC. I just went through a course on the VNX platform and it seems to have a lot to offer. Not to mention Unisphere being a much easier, more intuitive platform to operate than what they had previously.

reighnman
01-25-2012, 02:51 PM
Hi guys, I came across this thread while digging up information on Compellent SC6.0 (I'm upgrading this weekend), and while it doesn't really relate to EMC figured I'd post.

Last year I went through the painstaking process of evaluating SAN solutions for our growing VMware cluster. While we only have a couple dozen servers, we have been growing incredibly over the past couple years and needed a product with the reliability, performance and scalability of a large enterprise-class system. In the end it turned out to be between 3PAR, Pillar, Compellent and NetApp. As for EMC, I've had bad experiences with them when dealing with smaller clients/setups, so I just ignored them completely.

The goal was to have around 40TB of redundant (dual controller) tiered storage with 8Gb FC or 10Gb FCoE as a backend.

3PAR - I really liked 3PAR's F400 solution, but the problem we had was they were just bought out by HP and refused to work with us on pricing. I couldn't even get them to budge a dollar, so in the end they were easily twice as expensive as the other options and only supported 4Gb FC or 1Gb iSCSI.

Pillar - I guess they're owned by Oracle now?? They had some interesting features, but again we had issues with available connectivity options, and their software seemed a bit lacking. The GUI was outdated and kind of a pain to navigate, plus they were still the highest priced outside of 3PAR.

NetApp - Ahh, I liked NetApp and in the end it turned out to be a very close decision. Unfortunately the only way to keep the price competitive was to use FAS2240 controllers, which had a limit of 144 drives. I'd say what drove me nuts was all of the software/support options you have to dig through; I think the quote was like 10 pages long. Also they had limited availability of 8Gb/10GbE ports, and I could just see having to replace the controllers a couple years down the line (as well as the disk enclosures).

And the winner..

Compellent - In all I'd say it was the most well rounded of the bunch in terms of software and hardware, plus it was the cheapest which didn't hurt. They claimed it was due to the end of the year, if that's the case I'd suggest holding out till December ;)

Everything has worked as claimed in regards to tiered storage, data progression, volume replays, performance and reliability. The data progression is great when you have file servers with a ton of archive data that doesn't need to waste space on the 15k drives, while forcing other services like sql to use 15k drives only. I can't say how happy we've been and copilot support has been outstanding.

We ended up with dual/redundant Series 40 controllers (Supermicro), each with 2 quad-port 8Gb FC cards for failover domains, connected to Cisco MDS 9148 switches. 24 15K 450GB drives, 12 10K 600GB drives, 12 7K 2TB drives, with enclosure capacity for 12 more drives which we plan on purchasing this year.

Each VMware host in the cluster (DL380 G7) has 2 dual-port 8Gb FC HBAs configured for round-robin, allowing for a switch, card, or link failure.

http://i.imgur.com/faW2l.jpg

http://i.imgur.com/nxN89.jpg

More disk enclosures coming soon!

RiDDLeRThC
01-26-2012, 06:50 AM
reighnman - thanks for posting, I was hoping to find a few customer experience stories out of this post.

How has the hardware been?

Have you guys considered adding SSD's? How's the reporting?

Have you used co-pilot support? How was it?

reighnman
01-26-2012, 09:39 AM
reighnman - thanks for posting, I was hoping to find a few customer experience stories out of this post.

How has the hardware been?

Have you guys considered adding SSD's? How's the reporting?

Have you used co-pilot support? How was it?

No issues with the hardware, be it controllers or drives. The drives are typically Seagate (http://www.seagate.com/ww/v/index.jsp?name=st3450857ss-chta-15k.7-sas-450gb-hd&vgnextoid=ba96470bd8cc1210VgnVCM1000001a48090aRCRD&locale=en-US&pf=1#tTabContentSpecifications), though I know now they're using the newer drives with the 64MB cache.

We left 12 bays open in one of the enclosures for SSDs, but we've yet to need them; instead we're now considering adding additional 10K drives. Their SSDs are expensive; I believe we were quoted around $5,000 per drive (200GB), not including additional license costs. Real-time monitoring utilities are great (host/disk/volume/interface); they offer several utilities you can install to monitor performance metrics. As for long-term monitoring, only storage trends are tracked by the system, so you'd have to monitor your own performance trends via SNMP.

Copilot support has been onsite twice in the past year to replace cards that had updated revisions released for one reason or another, at no charge to us. They notify me that the updates are available, ship the new cards on-site, and schedule a tech to come out and install them. I get a call anytime there is an alert on the SAN, typically when I forget to flip it into maintenance mode when adding/rebooting hosts. They have remote access to the SAN (when enabled by the client) for troubleshooting; it's usually the first thing they request. Considering we've never had a major issue, I don't know how far you can go on that.

Configuration has been a breeze, they did send a tech out for the initial install but I had it setup before he even got here just going through the documentation. The customer portal on the website has tons of documentation, best practices for different environments, videos, etc.

I'm sure any solution would have been better than what we had before. I can only tell you what I like about the Compellent system since I have no experience with the other vendors.

x98gulinski
01-26-2012, 11:51 AM
I've loved the copilot support we have, but we're on the older hardware/software.

gminks
01-26-2012, 12:13 PM
hey v@nill@ - the very same.

kdh
01-26-2012, 01:00 PM
Okay, so we are getting ready to make a large (for us) SAN upgrade.

We are looking at two VNX 5300s or two Compellent Series 40's.

For us, we currently have EMC gear and while we love our CX4 we cannot stand our AX4.

The EMC arrays we feel lack some of the most basic performance reporting features even with Navi Analyzer. We also hate the fact that we cannot rebalance storage pools when we add additional disks even though we hear that feature it coming in May.

As for the Compellent gear, it has everything you could ever want and need from a reporting aspect.


We have some other pros and cons of each but I'm interested to see what others have to say. Those two points above have been the bigest issue for us when it comes to the EMC gear we have no.


Both EMC and Dell are giving us deep discounts/buy back on our current EMC gear so we are getting pretty good pricing either way.

Agree on the AX4.. it was a stopgap for small businesses that needed a safe but large footprint.

Agree on pool rebalance, but that's coming soon. You can rebalance the pool yourself by creating a new LUN in the same pool and then migrating to it. A hack, yes, but as long as you grow your pools out in large chunks of disks, you'll be fine.

Analyzer is 'ok'. You can download a few days' worth of NAR files, combine them, and then pull them back into Analyzer to get a picture of things. OR.. you can download the NAR file, crack 'em open with naviseccli, see they are CSV files, and do something awesome in Excel to get the info you need, or do something awesome in perl and gnuplot.
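kdh's NAR-to-spreadsheet workflow can be scripted. A minimal sketch, assuming the archive has already been dumped to CSV with naviseccli (roughly `naviseccli analyzer -archivedump`); the column names here are illustrative placeholders, not exact Analyzer headers:

```python
import csv
from collections import defaultdict

def avg_by_object(csv_path, metric_col, object_col="Object Name"):
    """Average one Analyzer metric per LUN/SP from a dumped NAR CSV."""
    totals, counts = defaultdict(float), defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                totals[row[object_col]] += float(row[metric_col])
                counts[row[object_col]] += 1
            except (KeyError, ValueError):
                continue  # skip rows missing or not reporting this metric
    return {obj: totals[obj] / counts[obj] for obj in totals if counts[obj]}
```

Feed the resulting dict into gnuplot, matplotlib, or just sort it, and you get the per-LUN trend view that Analyzer itself makes awkward.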

I personally like EMC.. But what I like, someone else might hate.

Why not get Dell to give you a demo unit and do a bake-off if you have the time? Really, you'll be the one stuck with the gear, so it makes the most sense to use what you like.

Also.. something to think about: has your EMC footprint dropped or killed any of your data? Unless EMC has betrayed you in some way in regards to data loss, wacky billing, or crappy support.. I'd stay on the same path.

best of luck.

RiDDLeRThC
01-26-2012, 01:03 PM
We will most likely be sticking with EMC, as you said... they haven't failed us (except for the AX4) and they are the safe bet.

I have already asked Dell for a demo unit; they don't have any, at least for our size business.

kdh
01-26-2012, 01:04 PM
Finally I don't think compellent offers NFS and CIFS in the same package. I think they can add a server or solution but it has separate management. The VNX will come with two X-Bldes which as managed via the same software (Unisphere - if you're not on Flare 30 yet check out the vids on youtube).

I have this solution on my 2 NS480s and 1 NS120.. it's pretty damn close to the VNX..

Yes, you can manage the devices from Unisphere, but I'm going to be 100% honest.. EMC's gateways are terrible. They have been and will be. I've been using them for easily 5 years now. EMC knows it's a terrible product, hence why they picked up Isilon. If you want NFS or CIFS, look at EMC's Isilon line.

RiDDLeRThC
01-26-2012, 01:05 PM
We are only interested in block, so that doesn't affect our decision.

kdh
01-26-2012, 01:08 PM
I'll just throw out that technically, Compellent isn't far off - enough spindles WILL get the job done.

Other than that, I can't comment here.



While the above statement is true..

The cost of adding 100 disks versus a few EFDs negates the above statement.

If you pay for data center power, having less do more for you is better. Getting opex down is key. Spending $50K on data center power because of large amounts of dumb drives is silly when you can spend $5K on data center power and get the same level of performance.
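kdh's opex point is simple arithmetic. A sketch with made-up but plausible numbers (the wattages, drive counts, and $/kWh rate are all assumptions for illustration, and cooling would roughly double the real bill):

```python
HOURS_PER_YEAR = 24 * 365   # 8,760
KWH_RATE = 0.10             # assumed electricity rate, $/kWh

def yearly_power_cost(drive_count, watts_per_drive):
    """Raw drive power cost per year, excluding cooling overhead."""
    kwh = drive_count * watts_per_drive * HOURS_PER_YEAR / 1000
    return kwh * KWH_RATE

# 100 x 15K spindles at ~15W each vs. 5 EFDs at ~8W each (assumed draws)
print(round(yearly_power_cost(100, 15)))  # → 1314
print(round(yearly_power_cost(5, 8)))     # → 35
```

The exact dollar figures matter less than the ratio: two orders of magnitude fewer drive-watts for the same IOPS target.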

fibroptikl
01-26-2012, 01:08 PM
We will most likely be sticking with EMC, as you said... They haven't failed us (except for the AX4) and they are the safe bet.

I have already asked Dell for a demo unit; they don't have any, at least for our size business.

What size unit are you looking at?

kdh
01-26-2012, 01:10 PM
lol

The Dell/Compellent marketing force is in full attack mode!!!!

The EMC/Dell divorce was ugly for us customers in the middle. =\

RiDDLeRThC
01-26-2012, 01:12 PM
EMC VNX 5300 - HQ

4 x 200GB FAST Cache Drives (400 GB useable FAST Cache)
4 x 600GB 15K RPM SAS Vault Drives
1,464 GB Useable (10 x 200GB Flash, RAID5 (4+1)) – Top Tier
10,736 GB Useable (25 x 600GB 15K RPM, RAID5 (4+1)) – Mid Tier
10,932 GB Useable (15 x 1TB 7.2k RPM, RAID5 (4+1)) – Low Tier
1,060 GB Useable (4 x 600GB 15K RPM, RAID5 (3+1)) – Available Vault Drive Space
1 x 200GB Flash SAS Drive (Hot Spare Drive)
1 x 600GB 15K RPM SAS Drives (Hot Spare Drive)
1 x 1TB 7.2K RPM NL‐SAS Drives (Hot Spare Drive)

Proposed Rated IOPS, Flash DD’s = 10 x 2500 IOPS = 25,000 IOPS – Top Tier
Proposed Rated IOPS, 600GB 15k HD’s = 25 x 180 IOPS = 4,500 IOPS – Mid Tier
Proposed Rated IOPS, 1TB 7.2k HD’s = 15 x 80 IOPS = 1,200 IOPS – Low Tier



EMC VNX 5300 - Datacenter

4 x 100GB FAST Cache Drives (200 GB useable FAST Cache)
4 x 600GB 15K RPM SAS Vault Drives
366.8 GB Useable (5 x 100GB Flash, RAID5 (4+1)) – Top Tier
10,736 GB Useable (25 x 600GB 15K RPM, RAID5 (4+1)) – Mid Tier
7,288 GB Useable (10 x 1TB 7.2k RPM, RAID5 (4+1)) – Low Tier
1,060 GB Useable (4 x 600GB 15K RPM, RAID5 (3+1)) – Available Vault Drive Space
1 x 100GB Flash SAS Drive (Hot Spare Drive)
1 x 600GB 15K RPM SAS Drives (Hot Spare Drive)
1 x 1TB 7.2K RPM NL‐SAS Drives (Hot Spare Drive)

Proposed Rated IOPS, Flash DD’s = 5 x 2500 IOPS = 12,500 IOPS – Top Tier
Proposed Rated IOPS, 600GB 15k HD’s = 25 x 180 IOPS = 4,500 IOPS – Mid Tier
Proposed Rated IOPS, 1TB 7.2k HD’s = 10 x 80 IOPS = 800 IOPS – Low Tier
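The "Proposed Rated IOPS" lines in both configs are just drive count times a per-drive rating, so they're easy to sanity-check using the same per-drive figures the quotes use:

```python
# Per-drive IOPS ratings exactly as quoted in the proposals above
RATED_IOPS = {"flash": 2500, "sas_15k": 180, "nlsas_7k": 80}

# Tiered drive counts for each proposed VNX 5300
configs = {
    "HQ":         {"flash": 10, "sas_15k": 25, "nlsas_7k": 15},
    "Datacenter": {"flash": 5,  "sas_15k": 25, "nlsas_7k": 10},
}

for site, drives in configs.items():
    tiers = {tier: count * RATED_IOPS[tier] for tier, count in drives.items()}
    print(site, tiers, "total:", sum(tiers.values()))
```

The per-tier products match the proposal lines, and summing them puts the HQ array at 30,700 rated IOPS versus 17,800 for the datacenter unit, before FAST Cache is even counted.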

RiDDLeRThC
01-26-2012, 01:13 PM
the emc/dell divorce was ugly for us customers in the middle. =\

Yes, and it's still ugly for us now! Our EMC array came from Dell last year.

fibroptikl
01-26-2012, 01:25 PM
EMC VNX 5300 - HQ
EMC VNX 5300 - Datacenter
What is the Compellent configuration you're looking at?

kdh
01-26-2012, 01:27 PM
Ugh, so you're still dealing with that Dell/black crap then, right?

Tell your EMC rep you want the rest of the maintenance moved to EMC. It's really just a paperwork reshuffle for Dell and EMC that they do on the back end. They should do it for free. If they bicker at all.. tell 'em both you're looking at NetApp. *wink*

RiDDLeRThC
01-26-2012, 01:27 PM
That would be helpful!

Both HQ and Datacenter would be the same.

• 2 active-active, shared-nothing series 40 controllers for high availability
• 2 – 2 port x 1 Gbps iSCSI card in each controller for host and replication connections
• 1 – SAS 6Gbps SAS I/O card for back end connections
• 12 – 7.2k – 2 TB SAS drives
• 36 – 15K – 600 GB SAS drives

RiDDLeRThC
01-26-2012, 01:28 PM
Ugh, so you're still dealing with that Dell/black crap then, right?

Tell your EMC rep you want the rest of the maintenance moved to EMC. It's really just a paperwork reshuffle for Dell and EMC that they do on the back end. They should do it for free. If they bicker at all.. tell 'em both you're looking at NetApp. *wink*

If we keep the CX4 we will be moving support contracts, we have the paperwork.

Right now, if we do Compellent they will be buying back the EMC gear for a nice price/discount on Compellent.

fibroptikl
01-26-2012, 01:36 PM
That would be helpful!

Both HQ and Datacenter would be the same.

• 2 active-active, shared-nothing series 40 controllers for high availability
• 2 – 2 port x 1 Gbps iSCSI card in each controller for host and replication connections
• 1 – SAS 6Gbps SAS I/O card for back end connections
• 12 – 7.2k – 2 TB SAS drives
• 36 – 15K – 600 GB SAS drives

Is it only iSCSI, or is there Fibre Channel connectivity as well?

RiDDLeRThC
01-26-2012, 01:39 PM
We are only using iSCSI

reighnman
01-26-2012, 02:13 PM
No 10gb iSCSI or FCoE?

RiDDLeRThC
01-26-2012, 02:14 PM
No 10gb iSCSI or FCoE?

Not right now. The upgrade path is there for us when we are ready.

V@nill@
01-27-2012, 01:16 AM
Proposed Rated IOPS, Flash DD’s = 10 x 2500 IOPS = 25,000 IOPS – Top Tier
Proposed Rated IOPS, 600GB 15k HD’s = 25 x 180 IOPS = 4,500 IOPS – Mid Tier
Proposed Rated IOPS, 1TB 7.2k HD’s = 15 x 80 IOPS = 1,200 IOPS – Low Tier



Your flash IOPS are by-the-guideline, but the latest ones do seem to average about 3200 IOPS. A number I've put in personally peak at 4000 IOPS.

shade91
02-04-2012, 12:21 AM
Having worked with Dell, EMC and Compellent SANs, NetApp blows them out of the water. I wouldn't recommend anything else.

RiDDLeRThC
02-04-2012, 12:25 AM
We decided to remain with EMC and move forward with at least two new VNX arrays.

We might decide not to use the CX4 for DR and instead put in a VNX identical to the one in our production site. Sucks, because the CX4 would be perfect for DR, but I don't get to make that decision; that's the CIO's.

NetJunkie
02-04-2012, 09:28 AM
We decided to remain with EMC and move forward with at least two new VNX arrays.

We might decide not to use the CX4 for DR and instead put in a VNX identical to the one in our production site. Sucks, because the CX4 would be perfect for DR, but I don't get to make that decision; that's the CIO's.

Smart choice. :)

lopoetve
02-04-2012, 01:39 PM
No 10gb iSCSI or FCoE?

For the love of God, no.

10G ain't ready yet. And once again, that's all I got to say on that :p

NetJunkie
02-04-2012, 03:31 PM
For the love of God, no.

10G ain't ready yet. And once again, that's all I got to say on that :p

Say more. Or email me if you can't say it here. :)

Child of Wonder
02-04-2012, 10:03 PM
Say more. Or email me if you can't say it here. :)

Yes I second that.

RiDDLeRThC
02-04-2012, 10:40 PM
Ditto, not that we are going 10G soon, but it's being eyeballed. I would love to have just two connections from each server, and 10G is the only way to do that. We won't ever do FC.

fibroptikl
02-05-2012, 02:32 AM
For the love of God, no.

10G ain't ready yet. And once again, that's all I got to say on that :p

I'm curious as well.