Where can I find reviews of servers? HP ProLiant BL460c and IBM BladeCenter HS21...

I'll throw out a warning about Insight Manager and Insight Control:

Make SURE your ESX boxes are patched. We have a whole lot of patches out for that product to aid compatibility, and you'll save yourself a lot of headaches that way :) Once it's patched, it works well.

Yeah, thanks Lopoetve. We are an EMC partner, and we actually have a VMware resident on staff right now helping to augment our team as we continue our 80/20 virtualization project.

Do you work for VMware or a VAR?
 

VMware - Technical Support Engineer in the Storage group :)
 
Cool. You might have talked to one of my engineers before. We are a big HP & EMC shop (26 datacenters), and have gotten very serious about virtualization in the last year. We use HP's virtualization (vPars/nPars) on our big-iron equipment (mostly Integrity Superdomes), but we are about 70% done moving 500 Windows hosts to ESX on c7000s with a DMX-4 back end.

EDIT: I work for a company called Healthways. We do disease management and whole health management for health plans and large employers.
 
Nice! I'll admit I always preferred IBM for big iron (partly from a background with some of their giant clusters and Blue Gene systems, and partly because SMIT > SAM ;)), but HP makes some solid boxes too. I miss my HP-UX box :(

Good to see you're going EMC though :) I think I've heard of Healthways - or at least heard it mentioned around the McKesson halls.
 
Still hate EMC; they're still overpriced crap. My EMCs sit powered down while I laugh at the tray of mixed SATA disks that says "OS - DO NOT REMOVE," right above 73GB 15K FC drives.
Yeah, I'm almost all IBM storage. It's faster, it's cheaper, it's better. Which raises the question: when's VMware going to actually support the IBM 2145, instead of pawning it off to IBM and OOS/RPQs? :p
 

When IBM certifies it, that's when. It's not our responsibility to write drivers for storage systems, it's the makers that have to get that job done. If it's not certified, it's because it doesn't accept certain commands that we try to give.

And what's your hate with EMC? :confused: PowerPath and Navisphere rock as management systems, and the OS being spread across a small portion of your RAID5 array means it's far more reliable than other options. And last I checked, they don't sell any serious arrays with SATA drives - SAS or FC only. Plus, if you're just adding disks, you CAN remove the OS portion of those five drives.

The only SAN gear I hate is Apple's - no offense, but they should stick to laptops and workstations. The Apple RAID is a freaking joke, and so is the Apple SAN. Oh, and EqualLogic, who just make total crap.
 
When IBM certifies it, that's when. It's not our responsibility to write drivers for storage systems, it's the makers that have to get that job done. If it's not certified, it's because it doesn't accept certain commands that we try to give.

Actually, it's not certified because VMware insists on writing their own drivers and refuses to honor Preferred Port. In theory, I could shoehorn SDD in there, but I'm not paid to do my vendor's work for them. Sorry, I guess you mistook me for a typical user who has no idea about their SAN.

And what's your hate with EMC? :confused: PowerPath and Navisphere rock as management systems, and the OS being spread across a small portion of your RAID5 array means it's far more reliable than other options. And last I checked, they don't sell any serious arrays with SATA drives - SAS or FC only. Plus, if you're just adding disks, you CAN remove the OS portion of those five drives.

... why the hell would I put my array on slow SATA disks? These are old 5400RPM Maxtrash disks. And I'm stuck with a 'patch' that has to be reinstalled every time I take the pseudo-head down. It's an entire separate shelf eating space in my cabinet that I could use for disk. I couldn't care less if they wasted a few megs of fast disk, but instead they want an entire shelf wasted on junk disk. The rest of the array is 15K 73GB FC disk, as I said. And it was slower than crap, to boot. Granted, it's a Clariion, but for what they want for a DMX? I could get double that in IBM, and probably an 8-node 2145. With so much less multipath headache.

The only SAN gear I hate is Apple's - no offense, but they should stick to laptops and workstations. The Apple RAID is a freaking joke, and so is the Apple SAN. Oh, and EqualLogic, who just make total crap.

I have an Xserve "Fiber" array attached to one box. It's not our box; I won't even work on it. Another department spent an absolute FORTUNE on it and tried to install it and the Xserve in their department. It overheated, so they made us put it in the data center. Then they tried to put both on the MDS 9500s - I'm sure you can guess how that went. So now it sits there grinding away, stuck in my backup schedules chewing up drives for literally days, because it's got no throughput.
You think EqualLogic's hardware is trash? Try their salespeople. I had to walk out of that meeting. Memorable quotes: "It's not Linux," while showing how every interface is eth0, eth1, eth2, etc. "It's faster than Fibre Channel!" while showing two GigE ports and a third 'management' port. "SATA is just as fast as SCSI disk!" Then they came back with a quote that was three times what it would cost me for a fully licensed IBM DS3400 with just as much SCSI disk, two FC switches, and four HBAs.
 
Actually, it's not certified because VMware insists on writing their own drivers and refuses to honor Preferred Port. In theory, I could shoehorn SDD in there, but I'm not paid to do my vendor's work for them. Sorry, I guess you mistook me for a typical user who has no idea about their SAN.
Negative. We let IBM/EMC/etc do the certifying. And you can't shoehorn anything in: There's no console access to install drivers even if you wanted to (No, the Console OS is not a console - it's a VM).
... why the hell would I put my array on slow SATA disks? These are old 5400RPM Maxtrash disks. And I'm stuck with a 'patch' that has to be reinstalled every time I take the pseudo-head down. It's an entire separate shelf eating space in my cabinet that I could use for disk. I couldn't care less if they wasted a few megs of fast disk, but instead they want an entire shelf wasted on junk disk. The rest of the array is 15K 73GB FC disk, as I said. And it was slower than crap, to boot. Granted, it's a Clariion, but for what they want for a DMX? I could get double that in IBM, and probably an 8-node 2145. With so much less multipath headache.
So pull it, put in new disks, and install the latest FLARE code. :confused: I don't see what the problem is. It's not like you're stuck with what's on there (although the FLARE update is a PITA) - at least, with any decently modern box you shouldn't be. What model is it? As for speed: got any numbers? Set up correctly, with decent switches and the right zoning, they're all pretty much equal on 2G fibre, at least from what we're seeing in house. That's what I saw at my prior job too - we had both an HP array and a Clariion.
I have an Xserve "Fiber" array attached to one box. It's not our box; I won't even work on it. Another department spent an absolute FORTUNE on it and tried to install it and the Xserve in their department. It overheated, so they made us put it in the data center. Then they tried to put both on the MDS 9500s - I'm sure you can guess how that went. So now it sits there grinding away, stuck in my backup schedules chewing up drives for literally days, because it's got no throughput.
You think EqualLogic's hardware is trash? Try their salespeople. I had to walk out of that meeting. Memorable quotes: "It's not Linux," while showing how every interface is eth0, eth1, eth2, etc. "It's faster than Fibre Channel!" while showing two GigE ports and a third 'management' port. "SATA is just as fast as SCSI disk!" Then they came back with a quote that was three times what it would cost me for a fully licensed IBM DS3400 with just as much SCSI disk, two FC switches, and four HBAs.

Yeah, the Xserve fibre ones are a fucking joke. LOL multipathing, LOL IDE drives, LOL management. As for EqualLogic - I just love how they like to eat partition tables for fun.
 
Negative. We let IBM/EMC/etc do the certifying. And you can't shoehorn anything in: There's no console access to install drivers even if you wanted to (No, the Console OS is not a console - it's a VM).

I'm going to refrain from comment rather than tip my hand.

So pull it, put in new disks, and install the latest FLARE code. :confused: I don't see what the problem is. It's not like you're stuck with what's on there (although the FLARE update is a PITA) - at least, with any decently modern box you shouldn't be. What model is it? As for speed: got any numbers? Set up correctly, with decent switches and the right zoning, they're all pretty much equal on 2G fibre, at least from what we're seeing in house. That's what I saw at my prior job too - we had both an HP array and a Clariion.

Um, because the 'latest FLARE' isn't persistent. Seriously. It has to be reloaded every time you reboot the call-home box, any time you crash the controller, any time you change the array... yeah, real quality there. We won't mention the DMX that lost a director and the untrained monkey who couldn't figure it out for nearly eight hours - the only thing I've seen EMC consistently get right is swapping disks.
And you're trying to compare Yugos to a Ferrari. The combo we have behind the SVC is what you get when you don't have the physical room for a DS8000, or only want open-systems access with no mainframes or FICON. HP storage has been a joke for years since they gutted it, and a Clariion? Forget it. You're going to need a DMX-3 at minimum to offer realistic competition. To state otherwise is flat-out wrong, period.
A CX620 has 4 fibre ports. An SVC has a minimum of 8 ports at a minimum of 2Gbit, on top of the ports on your storage arrays. E.g. a Clariion is 8Gbit theoretical, except that it's active/passive, so it's really 4Gbit; versus a 4-node SVC + 3 x DS4700 = 16Gbit to hosts, with a theoretical 48Gbit out of your arrays. And that's based on 2Gbit ports, not 4Gbit - the SVCs come in a 4Gbit flavor, giving you a theoretical 32Gbit SVC with 48Gbit of backing disk. Clariion, like NetCrap, is a toy. If you try to put real load on it, you will find this out very quickly.
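
(For reference, here's a quick back-of-the-envelope sketch of the bandwidth math above in Python. The port counts and the host-facing/back-end split are assumptions read out of this post, not vendor specs, so adjust them for your actual hardware.)

[code]
# Rough sketch of the aggregate-bandwidth comparison above.
# Port counts and the host-facing vs. back-end split are assumptions
# taken from the post, not official EMC/IBM specs.

def aggregate_gbit(ports, gbit_per_port):
    """Theoretical aggregate bandwidth: number of ports times link speed."""
    return ports * gbit_per_port

# Clariion as described: 4 fibre ports, active/passive, so only half the
# paths carry I/O for a given LUN at any one time.
clariion_theoretical = aggregate_gbit(4, 2)        # 8 Gbit
clariion_effective = clariion_theoretical // 2     # 4 Gbit after active/passive

# 4-node SVC cluster, assumed 8 host-facing ports at 2 Gbit (the post's figure),
# in front of 3 x DS4700, assumed 8 ports each at 2 Gbit.
svc_to_host_2g = aggregate_gbit(8, 2)              # 16 Gbit to hosts
backend_2g = 3 * aggregate_gbit(8, 2)              # 48 Gbit out of the arrays

# Same SVC cluster with 4 Gbit ports doubles the host-side figure.
svc_to_host_4g = aggregate_gbit(8, 4)              # 32 Gbit to hosts

print("Clariion: %d Gbit theoretical, %d Gbit effective" % (clariion_theoretical, clariion_effective))
print("SVC @ 2G: %d Gbit to hosts, %d Gbit from arrays" % (svc_to_host_2g, backend_2g))
print("SVC @ 4G: %d Gbit to hosts" % svc_to_host_4g)
[/code]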

Yeah, the Xserve fibre ones are a fucking joke. LOL multipathing, LOL IDE drives, LOL management. As for EqualLogic - I just love how they like to eat partition tables for fun.

Management? You can manage those things? News to everyone here. They got put on the "you delete any files, you are paying for the restore, in its entirety" plan, since I have to read a minimum of 8 tapes to restore one file. Makes me very grumpy, that.
EqualLogic was the only array I've seen worse than the IBM DS300, which is a rebranded Adaptec 1250. Joke? Not even. The DS300 claims redundant power supplies - they're not redundant. Fill the disk to 95% on your iSCSI 'controller' box and the array crashes and goes into complete rebuild mode. Slowest U320 I've ever seen in my life, but typical for Adaptec. At least you eventually get access to your data again, though. Then again, the EqualLogic crashed and burned badly when they put load on it, then completely died when they tried to demonstrate hot-swap...
 
I'm going to refrain from comment rather than tip my hand.
Trust me - no matter how much you want to shove, you're not getting a real console. We specifically built it so that the COS (after ESX 2.5) is truly a VM. It might look like you have real hardware access, but that's because we made the HAL transparent. It's a VM - if you're installing drivers and we figure it out, we'll just laugh when you call for support. How those drivers interact with the VM is totally undefined.
Um, because the 'latest FLARE' isn't persistent. Seriously. It has to be reloaded every time you reboot the call-home box, any time you crash the controller, any time you change the array... yeah, real quality there. We won't mention the DMX that lost a director and the untrained monkey who couldn't figure it out for nearly eight hours - the only thing I've seen EMC consistently get right is swapping disks.
And you're trying to compare Yugos to a Ferrari. The combo we have behind the SVC is what you get when you don't have the physical room for a DS8000, or only want open-systems access with no mainframes or FICON. HP storage has been a joke for years since they gutted it, and a Clariion? Forget it. You're going to need a DMX-3 at minimum to offer realistic competition. To state otherwise is flat-out wrong, period.
A CX620 has 4 fibre ports. An SVC has a minimum of 8 ports at a minimum of 2Gbit, on top of the ports on your storage arrays. E.g. a Clariion is 8Gbit theoretical, except that it's active/passive, so it's really 4Gbit; versus a 4-node SVC + 3 x DS4700 = 16Gbit to hosts, with a theoretical 48Gbit out of your arrays. And that's based on 2Gbit ports, not 4Gbit - the SVCs come in a 4Gbit flavor, giving you a theoretical 32Gbit SVC with 48Gbit of backing disk. Clariion, like NetCrap, is a toy. If you try to put real load on it, you will find this out very quickly.
OK, yeah, that's not normal :p EMC support? And ESX doesn't support that form of multipathing, so I can't combine bandwidth for what I'm doing. As far as we're concerned, the most you can get is 4Gbit at the moment - although with more paths, each host can have a full 4Gbit. It doesn't matter as much for what we do. When I was working with the big iron (a Blue Gene/L system, as well as a 78-node p575 cluster) we had stuff similar to what you're talking about. We really don't see it anymore, though.
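
(To illustrate the multipathing point: with the Fixed/MRU path policies in ESX of that era, only one path is active per LUN, so a single LUN never goes faster than one link, while separate hosts can each drive their own full path. A minimal sketch, with the 4Gbit link speed and host count as illustrative assumptions:)

[code]
# Minimal sketch of the ESX path-policy point above: Fixed/MRU keep one
# active path per LUN, so per-LUN throughput is capped at a single link's
# speed no matter how many paths exist. Link speed and host count below
# are illustrative assumptions, not measurements.

LINK_GBIT = 4  # assumed 4 Gbit FC links

def per_lun_ceiling(num_paths, link_gbit=LINK_GBIT):
    """One active path per LUN: extra paths add failover, not bandwidth."""
    return link_gbit if num_paths >= 1 else 0

def per_fabric_ceiling(num_hosts, link_gbit=LINK_GBIT):
    """Separate hosts can each drive their own active path concurrently."""
    return num_hosts * link_gbit

print(per_lun_ceiling(4))       # 4  -- four paths, still one active link
print(per_fabric_ceiling(10))   # 40 -- ten hosts, each with its own full 4 Gbit
[/code]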
Management? You can manage those things? News to everyone here. They got put on the "you delete any files, you are paying for the restore, in its entirety" plan, since I have to read a minimum of 8 tapes to restore one file. Makes me very grumpy, that.
EqualLogic was the only array I've seen worse than the IBM DS300, which is a rebranded Adaptec 1250. Joke? Not even. The DS300 claims redundant power supplies - they're not redundant. Fill the disk to 95% on your iSCSI 'controller' box and the array crashes and goes into complete rebuild mode. Slowest U320 I've ever seen in my life, but typical for Adaptec. At least you eventually get access to your data again, though. Then again, the EqualLogic crashed and burned badly when they put load on it, then completely died when they tried to demonstrate hot-swap...

Yeah, you can... sorta :p They love to eat partition tables, though >_<. I'll give the Apple some credit - rebuild the table and the VMs will still be there.
 