EMC... just saw an email that our Exchange cluster got hit with the '80 day EMC bug'. At least I'm not on call.
Nice, not a bad idea to use blades for dedicated servers, probably more dense than individual 1U/2U boxes and probably less power consumption when you look at the big picture... or does it end up being more?
Mind sharing why the switch from EMC to Hitachi? What EMC hardware are you migrating from?
I am replacing a CX4-240 with the HUS130 and an AX4 with the HUS110. It was a mix of budget, technology, and overall impression of the companies. We looked at EMC, Dell, Nimble, Hitachi and Exagrid, but we definitely got the most bang for buck with Hitachi.
EMC just assumed that because they were the incumbent, they were the only choice. They didn't listen when I wanted to make tweaks to their solution and frankly acted insulted when I asked why they did things a certain way. They gave us their solution, and it was evident that it was driven by dollars and cents rather than by our requirements.
I really liked Nimble and their technology, but ultimately, their time on market made me go another route. I like to be on the leading-edge, not the bleeding-edge for our production environment. Maybe next upgrade if they are still going strong.
I purposely gave Hitachi our goals in broad strokes and they came back the first time with a solid offering, but CommVault was not in our budget. After some tweaking, we ended up with close to double the usable space on both the primary and D2D arrays and more than enough performance for our growth over the next 3-4 years.
We are replacing EMC Networker with PHDVirtual backup software as well so I get to revamp our entire storage and backup environment from the ground up.
Needless to say, I can't wait.
Just got done building this wall rack. I am not in the IT biz, so it is what it is.
Zyxel USG 100 with Blue Coat WebPulse feed/proxy, plus AV and IDS running on old definitions.
Sophos/Astaro Home UTM in bridge mode on an 8-core Atom (22nm Rangeley) in a Supermicro SuperServer (maybe; the custom-built Kingston RAM is due this Friday. Let's hope the ISO loads).
Zyxel GS1900 smart switch (no 16-port ProCurve was available, and I'm only a GUI guy, so no Cisco).
Everything, including the cable power amp, draws 0.6 A idle to 0.8 A at full load, roughly 70 to 95 watts.
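Out of curiosity, here's a minimal sanity check of that amps-to-watts math. Only the 0.6-0.8 A readings are from the post; the 120 V mains and ~0.95 power factor are assumptions on my part, and it lands right around the quoted 70-95 W:

```python
# Rough sanity check of the rack's power draw.
# Only the 0.6-0.8 A readings come from the post; the mains voltage and
# power factor below are assumptions for illustration.
MAINS_VOLTS = 120.0   # assumed North American circuit
POWER_FACTOR = 0.95   # assumed; small switch-mode PSUs are usually 0.9-1.0

for label, amps in [("idle", 0.6), ("full load", 0.8)]:
    watts = MAINS_VOLTS * amps * POWER_FACTOR
    kwh_per_month = watts / 1000 * 24 * 30
    print(f"{label}: ~{watts:.0f} W, ~{kwh_per_month:.0f} kWh/month")
```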
It should use much less power.
Actually that's another thing, if you buy a blade enclosure, how hard is it to find blades in the future? I'm guessing there is no standard to these and each vendor does it their own way and changes it every few years?
yep, but this can also be good for people that don't need the latest... since older stuff quickly becomes worthless
I am fighting to replace all the Netburst garbage with 55/56-series Xeon stuff since it's soooooo cheap on eBay.
Way less. Blades are extremely power efficient; I wish we had moved to them sooner. I'd say nearly 50% less power used on our blade systems than on rack-mount servers.
VisBits, are those 10Gb uplinks? Are those links to the network or FC? I would assume they are network links; are they LACP links? Looks like the top two and the bottom have 2, and the second one from the bottom has 6?
10G Ethernet with FCoE. No, LACP is for the birds. We do a lot of active-active software-defined networking and VMware.
Blades have their place, but they are not an end-all solution.
Initial cost is considerably higher, since you need a minimum of two enclosures unless you don't mind having downtime.
Power is more efficient, but you almost always need an external data store since most blades only hold 1-4 drives, though the new Dells do have a storage module now.
Your upgrade path can also be limited: once you spend $30k on the enclosure (usually the starting point), you'd better hope the socket and chipset it uses aren't near end of life, or your upgrade options within your 3-year cycle will be narrow.
Our DCs are 90% HP blades; G1 blades fit into the same chassis as the G8 blades. For VMware they are excellent. For individual servers they're not as useful, since they max out at 2 disks, although our SQL clusters are all physical blades with iSCSI for storage.
Details?
A friend used to use EMC for a lot of large migrations but is now using
http://www.nimblestorage.com/
Basically, a team of the original data storage gurus formed this company.
Nimble is awesome, we have a CS440 and a CS460 with max cache.
It's funny, these look like little systems but their capacity and IO performance blow our VNX 5300 out of the water.
I would have had less reservation if it was not ALL of our production data migrating off the CX4. I just couldn't justify that to myself. It sounds like you have more storage diversity than us. I am sure that it would have been fine, but Murphy's Law is usually derived from IrishMLK's Law... Hence, I went with Hitachi.
If I end up doing a VDI project this year, then Nimble is the definite choice. Goodbye XP!
Trying to get a CS240 at work, local storage only for Hyper-V sucks.
Still deciding if we are going with 10gb or just 1gb on the CS240.
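For what it's worth, a quick back-of-the-envelope on the 1Gb vs 10Gb question. The link speeds are the only real numbers here; the ~70% efficiency factor and the 2 TB example job are assumptions just to show the shape of it:

```python
# Back-of-the-envelope for 1 GbE vs 10 GbE to the array.
# Link speeds are line rate; the efficiency factor and example job size
# are assumptions for illustration only.
EFFICIENCY = 0.70   # assumed real-world iSCSI efficiency after protocol overhead
JOB_GB = 2000       # assumed size of a big storage migration / backup job

for name, gbit_per_s in [("1 GbE", 1), ("10 GbE", 10)]:
    mb_per_s = gbit_per_s * 1000 / 8 * EFFICIENCY
    hours = JOB_GB * 1000 / mb_per_s / 3600
    print(f"{name}: ~{mb_per_s:.0f} MB/s usable, ~{hours:.1f} h for a {JOB_GB} GB job")
```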
I disagree with everything you said; in fact, I don't think you have any idea what you're even talking about.
1. Blades are a fantastic solution for almost anything nowadays. There are full-, half- and quarter-height blades, and the full-height blades support up to 2 PCIe x16 3.0 GPUs, with the power to back it up! You should do some research.
There are quarter-height blades with 6 RAM slots, dual procs and 2 disks... got density? With 960GB SSDs being so cheap now, it's a no-brainer to use them.
2. Two enclosures? Everything in an M1000e is active-active and you have 2 ports for everything in the chassis; a backplane failure is damn near unheard of. We've never had a problem, and I have 20 enclosures now.
3. We use very little external storage with our blade systems; most blades have 240GB, 460GB or 960GB mirrors for customer data, and you will never need more IOPS than that.
4. Enclosures can be had BRAND NEW for $3500 from good Dell vendors; you just have to shop around and know the right things (rough break-even math below). Switches for them have a huge price range depending on the brand and features you want; we use a lot of Dell M6220 switches because our network is heavily layer 2 at the access level.
5. The service life of the M1000e is awesome; only a few years ago they upgraded the backplane to support 10G on the A fabric, and B and C support 10G and 40G IB!
6. Talking about the sockets of a blade? They're the same as in a rack-mount server... they don't magically make different parts for these; it's just a different form factor.
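Since the cost argument keeps coming up, here is a rough break-even sketch. The $3500 chassis price is the one quoted above; every other number (blade price, 1U price, switch costs) is an assumption I made up for illustration, so treat it as a shape, not a quote:

```python
# Rough blade vs. rackmount cost comparison for the debate above.
# The $3,500 M1000e figure is from this thread; every other price here is an
# assumption made up for illustration, so only the shape of the curve matters.
ENCLOSURE = 3500 + 2 * 4000   # chassis plus two assumed chassis switches
BLADE = 4500                  # assumed half-height blade
RACK_1U = 5000                # assumed comparable 1U server
TOR_PER_1U = 700              # assumed redundant top-of-rack ports per 1U box

for n in (4, 8, 16):
    blades = ENCLOSURE + n * BLADE
    racks = n * (RACK_1U + TOR_PER_1U)
    print(f"{n:2d} servers: blades ${blades:,} vs rackmount ${racks:,}")
```

With these made-up numbers the chassis overhead stops hurting at around ten blades, which is roughly where both sides of this argument land.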
Never buy bleeding-edge blades; the prices drop super fast and the performance isn't really that much faster. Check
Do you need more than 1TB of working data?
It's mostly people from NetApp; I know the CEO, Suresh, very well!
Some more pictures of our NAS and VNX.
Wonder if anyone else on [H] aside from me supports these. Sadly I don't think I'll be able to post much more than this:
(image attached)
Over the past 18 months I have started to go off NetApp; we had a set of disks snap their own latches, and because the disks are spring-loaded they auto-ejected from the tray!
I appreciate the info from someone who has the experience. I have zero experience with blades and have gone solely on info from numerous articles and blogs from TechRepublic and others I have found, as well as from our parent company's head sysadmin (who loves blades).
Sure, SSDs would be awesome, but our company is not that large, so dropping $2k per SSD for systems is not always something I can sell convincingly enough (it doesn't help that the parent company's IT says SSDs are a bad idea because some tool at HP said so).
My main concern was the backplanes failing, hence the minimum of two. I wasn't aware they could be had so cheap and were that reliable; when I spoke with HP they quoted $30k for the base enclosure, and if you spend $60k or more they will give the enclosure for free.
I am currently looking at options for a new co-location, and while I considered blades, the upfront costs from the info I had were almost 2-3x what it would cost for individual servers and SANs.
We will have 2 MSSQL 2012 boxes in replication, also replicated offsite to another country.
8 application servers running custom applications, split 4 and 4
4 IIS servers I want load-balanced
2 reverse proxies
and a partridge in a pear tree
I want to virtualize as much as I can, and my worry is HA failover speed in ESXi/vCenter in case something goes down (rough sizing sketch below).
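If it helps with the HA worry: vSphere HA just restarts VMs on the surviving hosts, so the usual move is to size the cluster for N+1. Here's a very rough sizing sketch for the list above; the VM counts are yours, but every per-VM resource figure, the host spec, and the consolidation ratio are assumptions purely for illustration:

```python
# Very rough host-count sizing for the co-lo build-out above.
# The VM counts come from the post; per-VM resources, host spec and the
# consolidation ratio are assumptions purely for illustration.
import math

vms = {                       # name: (count, vCPUs each, GB RAM each) - resources assumed
    "MSSQL 2012":      (2, 8, 64),
    "app servers":     (8, 4, 16),
    "IIS":             (4, 4, 8),
    "reverse proxies": (2, 2, 4),
}
HOST_CORES, HOST_RAM_GB = 16, 256   # assumed dual-socket ESXi host
VCPU_PER_CORE = 3                   # assumed consolidation ratio

total_vcpu = sum(count * vcpu for count, vcpu, _ in vms.values())
total_ram = sum(count * ram for count, _, ram in vms.values())
hosts = max(math.ceil(total_vcpu / (HOST_CORES * VCPU_PER_CORE)),
            math.ceil(total_ram / HOST_RAM_GB))
print(f"{total_vcpu} vCPUs, {total_ram} GB RAM -> {hosts} hosts, +1 spare for HA (N+1)")
```

With those made-up numbers it fits on two hosts plus a spare, and the HA restart time ends up being mostly guest OS boot time rather than anything vCenter does.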
10Gig goodies arrived!
These were a steal on eBay, 50 USD a pop WITH 2 10G GBICs!
I'll have to admit, seeing you guys go back and forth about blades vs. not going blades cracks me up. Blades will always have their place, and not have their place. I have a buddy who just got done ripping out 4+ blade centers in favor of 2U servers. Justification: he could get more out of the physical servers by not having to share the backplane than by keeping them.
Long and short: do your homework. If a blade center makes sense, buy it. Odds are it probably doesn't always make sense.