Network pics thread

8c973e9cbee30302db8b2d9ac69786e4.jpg



The upper disks are part of a Dell MD3620F, which is used to export block devices for NAS. It has 56 3TB SAS disks.

The bottom EMC array is a VNX 5300 with 20 2TB 7.2k drives and 50 300GB 15k drives.
 
The blades are dedicated servers for customers. I work with a hosting provider.

Back of them:
8381da190edce85b7bd6f3108a6ac0ee.jpg


They are kind of hard to cable; the power cables are generally too long, and because they are 12-gauge we don't have too many options. Each PSU in the blade center is 2,700 watts.
 
Nice, not a bad idea to use blades for dedicated servers, probably more dense than individual 1U/2U boxes and probably less power consumption when you look at the big picture... or does it end up being more?
 
We are getting our new Hitachi kit delivered on Monday. An HUS130 with ~72TB usable (10 200GB SSD, 42 600GB SAS, and 42 2TB drives) as the primary SAN and an HUS110 with ~40TB for D2D backup.

I will post pics when we get them installed.
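
Quick sanity check on that ~72TB figure, as a rough Python sketch; the RAID-6 (6D+2P) groups and one hot spare per tier below are my assumptions, not the actual HDS layout:

# Back-of-envelope usable capacity for the HUS130 drive mix above.
# RAID-6 (6D+2P) and one spare per tier are assumptions.
tiers = {
    "ssd_200gb": (10, 0.2),   # (drive count, TB per drive)
    "sas_600gb": (42, 0.6),
    "nl_2tb":    (42, 2.0),
}
raid6_efficiency = 6 / 8
spares_per_tier = 1

usable_tb = 0.0
for count, size_tb in tiers.values():
    usable_tb += (count - spares_per_tier) * size_tb * raid6_efficiency

print(f"approx usable: {usable_tb:.1f} TB ({usable_tb * 1e12 / 2**40:.1f} TiB)")

That lands around 81 TB decimal (~74 TiB) before system overhead, so ~72TB usable is in the right ballpark.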
 
EMC... just saw an email that our Exchange cluster got hit with the '80-day EMC bug'. At least I'm not on call.
 
Nice, not a bad idea to use blades for dedicated servers, probably more dense than individual 1U/2U boxes and probably less power consumption when you look at the big picture... or does it end up being more?

It should be much less power use.
 
VisBits, are those 10Gb uplinks? Are those links to the network or FC? I would assume they are network links; are they LACP links? Looks like the top two and the bottom one have 2, and the second one from the bottom has 6?
 
Mind sharing why the switch from EMC to Hitachi? What EMC hardware are you migrating from?

I am replacing a CX4-240 with the HUS130 and an AX4 with the HUS110. It was a mix of budget, technology, and overall impression of the companies. We looked at EMC, Dell, Nimble, Hitachi and Exagrid, but we definitely got the most bang for buck with Hitachi.

EMC just assumed that because they were the incumbent, they were the only choice. They didn't listen when I wanted to make tweaks to their solution, and frankly they acted insulted when I asked why they did things a certain way. They gave us their solution, and it was evident that it was driven by dollars and cents rather than by our requirements.

I really liked Nimble and their technology, but ultimately, their time on market made me go another route. I like to be on the leading-edge, not the bleeding-edge for our production environment. Maybe next upgrade if they are still going strong.

I purposely gave Hitachi our goals in broad strokes and they came back the first time with a solid offering, but CommVault was not in our budget. After some tweaking, we ended up with close to double the usable space on both the primary and D2D arrays and more than enough performance for our growth over the next 3-4 years.

We are replacing EMC Networker with PHDVirtual backup software as well so I get to revamp our entire storage and backup environment from the ground up.

Needless to say, I can't wait.
 
EMC just assumed that because they were the incumbent, they were the only choice. They didn't listen when I wanted to make tweaks to their solution, and frankly they acted insulted when I asked why they did things a certain way. They gave us their solution, and it was evident that it was driven by dollars and cents rather than by our requirements.

Interesting, I cannot say my experience was the same with EMC. I have a great VAR and a great sales team. We just did somewhat the opposite of you. I'm replacing a VNX5300 with a VNX5800 and my DPM backup with Avamar/Networker.

This will bring our array ownership to 2x 5300s, 1x CX4-120, 1x AX4, and now 1x 5800.
 
Just got done building this wall rack. I am not in the IT biz, so it is what it is.

Zyxel USG 100 with Blue Coat WebPulse feed/proxy, old-definition AV and IDS.

Sophos/Astaro Home UTM in bridge mode = 8-core Atom (22nm) Rangeley processor in a Supermicro SuperServer. (Maybe; the custom-built Kingston RAM is due this Friday. Let's hope the ISO loads.)

Zyxel GS1900 smart switch. (No 16-port ProCurve available, and I am only a GUI guy = no Cisco.)

Everything, including the cable power amp, uses 0.6 amps to 0.8 amps (full load) = 70 watts to 95 watts.
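
For anyone checking the math, that is just P = V x I; the 117 V mains figure below is my assumption, not a measurement:

# Convert the measured current draw to watts.  117 V is assumed.
volts = 117.0
for amps in (0.6, 0.8):
    print(f"{amps} A x {volts:.0f} V = {amps * volts:.0f} W")

0.6 A works out to ~70 W and 0.8 A to ~94 W, which lines up with the figures above.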

Small server rack
Wall server rack
Home server rack

Spec or model number of the UTM server?
 
It should be much less power use.

Blades have their place; they are not an end-all solution.

Initial cost is considerably higher, as you need a minimum of two enclosures, unless you don't mind having downtime.

Power is more efficient. You almost always need an external data store, as most blades only have 1-4 drives, but the new Dells do have a storage module now.

Your upgrade path can also be more limited: once you spend $30k on the enclosure (the usual starting point), you have to hope the sockets and chipsets used are not near end of life, or your upgrade path will be constrained within your 3-year cycle.
 
Actually that's another thing, if you buy a blade enclosure, how hard is it to find blades in the future? I'm guessing there is no standard to these and each vendor does it their own way and changes it every few years?
 
Actually that's another thing, if you buy a blade enclosure, how hard is it to find blades in the future? I'm guessing there is no standard to these and each vendor does it their own way and changes it every few years?

Yep, but this can also be good for people that don't need the latest... since older stuff quickly becomes worthless.

I am fighting to replace all the NetBurst garbage with 55/56-series Xeon stuff since it's soooooo cheap on eBay.
 
Our DCs are 90% HP blades; G1 blades fit into the same chassis as the G8 blades. They are excellent for VMware, but not so useful for individual servers since they have a max of 2 disks, although our SQL clusters are all physical blades with iSCSI for storage.
 
Yep, but this can also be good for people that don't need the latest... since older stuff quickly becomes worthless.

I am fighting to replace all the NetBurst garbage with 55/56-series Xeon stuff since it's soooooo cheap on eBay.

That's a good point. Looking on eBay real quick, there actually are some decent deals. Too bad eBay sellers like to gouge on shipping, though. Some are asking over $500. The issue with that is I don't think shipping is refundable should something go south with the purchase.
 
EMC just assumed that because they were the incumbent, they were the only choice. They didn't listen when I wanted to make tweaks to their solution, and frankly they acted insulted when I asked why they did things a certain way. They gave us their solution, and it was evident that it was driven by dollars and cents rather than by our requirements.

Sad to hear that :(. The things you hear from customers sometimes make you want to punch the sales team ;).
 
Nice, not a bad idea to use blades for dedicated servers, probably more dense than individual 1U/2U boxes and probably less power consumption when you look at the big picture... or does it end up being more?
It should be much less power use.
Way less; blades are extremely power efficient. I wish we had moved to them sooner. I'd say nearly 50% less power used on our blade systems than on rack-mount servers.
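
Purely illustrative numbers on where a savings like that comes from; none of these wattages are measured from anyone's racks, they are assumptions for the sketch:

# Hypothetical comparison: 16 standalone 1U boxes vs 16 blades sharing
# one chassis worth of PSUs and fans.  All wattages are assumed.
servers = 16
watts_per_1u = 300            # assumed average draw per standalone 1U server
watts_per_blade = 120         # assumed per-blade draw with shared cooling/PSUs
chassis_overhead = 600        # assumed shared fans/PSUs/management draw

standalone = servers * watts_per_1u
blades = chassis_overhead + servers * watts_per_blade
print(f"standalone: {standalone} W, blades: {blades} W, "
      f"savings: {100 * (1 - blades / standalone):.0f}%")

Plug in your own measured numbers; the shared PSUs and fans are where the gap comes from.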


VisBits, are those 10Gb uplinks? Are those links to the network or FC? I would assume they are network links; are they LACP links? Looks like the top two and the bottom one have 2, and the second one from the bottom has 6?
10G Ethernet with FCoE. No; LACP is for the birds. We do a lot of active-active software-defined networking and VMware.

Blades have their place; they are not an end-all solution.
Initial cost is considerably higher, as you need a minimum of two enclosures, unless you don't mind having downtime.
Power is more efficient. You almost always need an external data store, as most blades only have 1-4 drives, but the new Dells do have a storage module now.
Your upgrade path can also be more limited: once you spend $30k on the enclosure (the usual starting point), you have to hope the sockets and chipsets used are not near end of life, or your upgrade path will be constrained within your 3-year cycle.

I disagree with everything you said; in fact, I don't think you have any idea what you're even talking about.

1. Blades are a fantastic solution for almost anything nowadays. There are full-, half-, and quarter-height blades, and the full-height blades support up to two PCIe x16 3.0 GPUs! They have the power to support it, too! You should do some research.

They have quarter-height blades with 6 RAM slots, dual procs, and 2 disks... got density? With 960GB SSDs being so cheap now, it's a no-brainer to use them.

2. Two enclosures? Everything in an M1000e is active-active; you have 2 ports for everything in the chassis. A failure of the backplane is damn near unheard of... we've never had a problem, and I have 20 enclosures now.
3. We use very little external storage with our blade systems; most blades have 240GB, 460GB, and 960GB mirrors for customer data. You will never need more IOPS than that.
4. Enclosures can be had BRAND NEW for $3,500 from good Dell vendors; you just have to shop around and know the right things. Switches for them have a huge price range depending on the brand and features you want; we use a lot of Dell M6220 switches because our network is heavily layer 2 at the access level.
5. The service life of the M1000e is awesome. Only a few years ago did they upgrade the backplane to support 10G on the A fabric, and B and C support 10G and 40G IB!
6. Talking about the sockets of a blade? They're the same as in a rack-mount server... they don't magically make different parts for these; it's just a different form factor.

:rolleyes:


Actually that's another thing, if you buy a blade enclosure, how hard is it to find blades in the future? I'm guessing there is no standard to these and each vendor does it their own way and changes it every few years?
Yep, but this can also be good for people that don't need the latest... since older stuff quickly becomes worthless.
I am fighting to replace all the NetBurst garbage with 55/56-series Xeon stuff since it's soooooo cheap on eBay.
Never buy bleeding-edge blades; the prices drop super fast and the performance isn't really that much faster. Check


Our DCs are 90% HP blades; G1 blades fit into the same chassis as the G8 blades. They are excellent for VMware, but not so useful for individual servers since they have a max of 2 disks, although our SQL clusters are all physical blades with iSCSI for storage.
Do you need more than 1TB of working data?



Details?

A friend used to use EMC for a lot of large migrations but is now using

http://www.nimblestorage.com/

Basically, a team of original data storage gurus formed this company.

It's mostly people from NetApp; I know the CEO, Suresh, very well!


I am replacing a CX4-240 with the HUS130 and an AX4 with the HUS110. It was a mix of budget, technology, and overall impression of the companies. We looked at EMC, Dell, Nimble, Hitachi and Exagrid, but we definitely got the most bang for buck with Hitachi.
I really liked Nimble and their technology, but ultimately, their time on market made me go another route. I like to be on the leading-edge, not the bleeding-edge for our production environment. Maybe next upgrade if they are still going strong.

Nimble is awesome, we have a CS440 and a CS460 with max cache.


e929cd7c22751d4bc985b11feea23ec6.jpg

57b4250892b38f143c034512e77504b0.jpg

e548bdc97e2d1b1c3afb32dbdedbb9de.jpg



It's funny, these look like little systems, but their capacity and IO performance blow our VNX 5300 out of the water. :)




Some more pictures of our NAS and VNX.

4c69fddc4ab1a3f08594c80e8ba1b203.jpg

0ce5cf14944455297a7aa44678495ab7.jpg
 
Nimble is awesome, we have a CS440 and a CS460 with max cache.


It's funny, these look like little systems, but their capacity and IO performance blow our VNX 5300 out of the water. :)

I would have had less reservation if it was not ALL of our production data migrating off the CX4. I just couldn't justify that to myself. It sounds like you have more storage diversity than us. I am sure that it would have been fine, but Murphy's Law is usually derived from IrishMLK's Law... Hence, I went with Hitachi.

If I end up doing a VDI project this year, then Nimble is the definite choice. Goodbye XP!
 
I would have had less reservation if it was not ALL of our production data migrating off the CX4. I just couldn't justify that to myself. It sounds like you have more storage diversity than us. I am sure that it would have been fine, but Murphy's Law is usually derived from IrishMLK's Law... Hence, I went with Hitachi.

If I end up doing a VDI project this year, then Nimble is the definite choice. Goodbye XP!

Let me give you some advice. Disk-only systems are VERY reliable and have guaranteed performance: you know how many spindles are available and what sort of IOPS that data set will get at all times. With a cache-accelerated system like Nimble, Tintri, etc., your data is only as fast as the 10 7.2k disks or the hot data in flash. If you do a lot of boot storms (all VMs at once), you will have issues with any cache-accelerated system.
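
To put rough numbers on the boot-storm point (rule-of-thumb figures, not Nimble's actual back end; the per-drive IOPS, VM count, and per-VM cache-miss I/O below are assumptions):

# Back-of-envelope: random reads that miss the flash cache land on the
# spinning tier.  ~75 random IOPS per 7.2k drive is a rule of thumb.
spindles = 10
iops_per_7k2_drive = 75
vms_booting = 200
cache_miss_iops_per_vm = 25

backend_iops = spindles * iops_per_7k2_drive
demand = vms_booting * cache_miss_iops_per_vm
print(f"disk tier sustains ~{backend_iops} IOPS")
print(f"boot storm asks for ~{demand} IOPS if the cache is cold")

A few hundred IOPS of spindle against thousands of demanded IOPS is why an all-at-once boot storm hurts any cache-accelerated box when the working set is not already in flash.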

If you're looking to deploy mad VDI, Tintri is the shit! It's fast, with good performance just like Nimble, and it dedupes the data on the flash, so your VDI deployment has almost zero impact. And you can make clones very, very rapidly.

Nimble is good, but for VDI Tintri is a clear choice. I've tested both; for our workload Tintri wasn't a good fit, as we needed block storage.
 
I would have had less reservation if it was not ALL of our production data migrating off the CX4. I just couldn't justify that to myself. It sounds like you have more storage diversity than us. I am sure that it would have been fine, but Murphy's Law is usually derived from IrishMLK's Law... Hence, I went with Hitachi.

If I end up doing a VDI project this year, then Nimble is the definite choice. Goodbye XP!

Same boat we were in; we just couldn't put all of our data on some up-and-coming all-SSD array company. We had a 60TB all-flash quote from Nimbus Data for $160k, $140k less than what we paid for our VNX5800, and we still didn't pull the trigger.

Like you said, if we head down the VDI route as well, we would be all over one of them!
 
Trying to get a CS240 at work, local storage only for Hyper-V sucks.
Still deciding if we are going with 10gb or just 1gb on the CS240.
 
Trying to get a CS240 at work, local storage only for Hyper-V sucks.
Still deciding if we are going with 10gb or just 1gb on the CS240.

Depends what you're doing: if you have a lot of sequential IO, the 10G will help; if you do a lot of IOPS, multiple 1G ports with multipathing will be just fine.
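
Rough line-rate math behind that; the per-link throughput numbers are approximations after framing/TCP overhead, and the four-port MPIO layout is an assumption:

# Sequential throughput ceilings for the two CS240 uplink options.
mb_per_s_1g = 118             # ~usable MB/s per 1 GbE link (approximate)
mb_per_s_10g = 1180           # ~usable MB/s per 10 GbE link (approximate)
mpio_1g_links = 4             # assumed number of 1 GbE ports with MPIO

print(f"4x 1GbE MPIO: ~{mpio_1g_links * mb_per_s_1g} MB/s sequential ceiling")
print(f"1x 10GbE:     ~{mb_per_s_10g} MB/s sequential ceiling")

Small random I/O rarely saturates either option, which is why IOPS-heavy workloads do fine on multiple 1G paths.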

If you need quotes on Nimble stuff, PM me; I can help you get a good deal!
 
Wonder if anyone else on [H] aside from me supports these. Sadly I don't think I'll be able to post much more than this:

IMG_00000384.jpg
 
I disagree with everything you said; in fact, I don't think you have any idea what you're even talking about.

I appreciate the info from someone who has the experience. I have zero experience with blades and have gone solely on info from numerous articles and blogs from TechRepublic and others I have found, as well as our parent company's head sysadmin (who loves blades).

Sure, SSDs would be awesome, but our company is not that large, so dropping $2k on an SSD for each system is not always something I can sell convincingly (it doesn't help that the parent company's IT says SSDs are a bad idea because some tool at HP said so).

My main issue was the backplanes failing, so a minimum of two enclosures seemed necessary. I wasn't aware they could be had so cheap and were that reliable; when I spoke with HP, they said $30k for the base, and if you spend $60k or more they will give you the enclosure for free.

I am currently looking at options for a new colocation, and while I considered blades, the upfront costs from the info I had were almost 2-3x what it would cost for individual servers and SANs.

We will have 2 MSSQL 2012 boxes in replication, replicated offsite to another country,
8 application servers running custom applications (4 and 4),
4 IIS servers I want load balanced,
2 reverse proxies,

and a partridge in a pear tree.

I want to virtualize as much as I can, and my worry was HA speed in ESXi/vCenter in case something goes down.
 
Wonder if anyone else on [H] aside from me supports these. Sadly I don't think I'll be able to post much more than this:

IMG_00000384.jpg

Ahhh - Integrity - the legacy of Tandem Computer! Brilliant stuff, actually. They are one of the few systems from the 70s that successfully transitioned from a proprietary processor design to more generic Intel while preserving their core design advantages. Wonderful stuff.

Unfortunately, they chose Intel's Itanium line instead of mainstream x86. They chose...poorly.
 
Man, I wish I had access to the filers and our Cisco kit, etc.

These days I deal mainly with the VMware clusters and the underlying blades; I can't provision storage, can't set up iSCSI LUNs, and can't even access any of our Cisco kit.

All I can do is expand the volumes, check the aggregates, control the qtrees (yes, it is NetApp), and babysit the basic side of the filers. We have about 8 heads in our live environment, another 4 in DR, and 6 in our overflow DC, so it's not a small environment.

Some noob allowed 2 of our aggregates to hit 90% full, and that was on SATA disks as well. It's a real nightmare with NetApp; once you start hitting that level on a set of disk shelves, it fucks the entire filer head. We also have issues with inodes, mainly because some of our software generates millions of 1KB files.
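
A quick sketch of why millions of 1KB files hurt; the inode ceiling here is illustrative, not NetApp's actual maxfiles default:

# File-count pressure from tiny files: every file needs an inode,
# no matter how small it is.
dataset_tib = 1
avg_file_kib = 1
illustrative_inode_limit = 30_000_000   # assumed per-volume ceiling, for scale

files = dataset_tib * 1024**3 // avg_file_kib
print(f"{dataset_tib} TiB of {avg_file_kib} KiB files = {files:,} files/inodes")
print(f"that is ~{files // illustrative_inode_limit}x an (assumed) "
      f"{illustrative_inode_limit:,}-inode volume limit")

So a volume full of tiny files runs out of inodes long before it runs out of space, which is exactly the pain described above.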

Over the past 18 months I have started to go off NetApp; we had a set of disks snap their own latches, and because the disks are spring-loaded, they auto-ejected from the tray!
 
Over the past 18 months I have started to go off NetApp; we had a set of disks snap their own latches, and because the disks are spring-loaded, they auto-ejected from the tray!

Sounds like a new meaning for "kicked from the raid pool" :D
 
Sure, SSDs would be awesome, but our company is not that large, so dropping $2k on an SSD for each system is not always something I can sell convincingly (it doesn't help that the parent company's IT says SSDs are a bad idea because some tool at HP said so).

We use all Crucial M500 / Samsung 830 / 840 Pro SSDs. We've only had a few failures over 2 years, and 99% of them were within the first few hours.
 
Way less; blades are extremely power efficient. I wish we had moved to them sooner. I'd say nearly 50% less power used on our blade systems than on rack-mount servers.

I'll have to admit, seeing you guys go back and forth about blades vs. not going blades cracks me up. Blades will always have their place, and not have their place. I have a buddy who just got done ripping out 4+ blade centers for 2U servers. Justification: he could get more out of the physical servers by not having to share the backplane than by keeping them.

Long and short: do your homework. If a blade center makes sense, buy it. Odds are it doesn't always make sense.
 

Define "share the backplane"? Do you need more than 6x 10G ports per server?
 