Network pics thread

Might want to invest the time to bring that wiring up to code.

This is in my house. I'm not too worried about it since the loose cables are mainly Cat5, and everything else is grounded/breakered/terminated correctly. I had an inspector here 6 months ago to check some work that was done; he looked at it long and hard but didn't say anything. Meh.
 
Yeah ... we use Riverbed/Palo Alto too ... not too many people know about those.

Nice Juniper gear by the way -- I'm assuming they finally let you stack the 4500 with the 4200? When I bought them at my last gig that wasn't supported.

The 4200 was rock solid ... it ate any advanced routing I threw at it. Just be careful with the limited RIB size. Also, make sure you stick to the S releases (e.g. 10.0S10.1 was the one I used way back when)... stay away from the R's ... those are like T trains.

Yeah, you can VC the 4200s with the 4500s. TCAM/RIB limitations shouldn't be an issue here; it's a small DC deployment. The customer picked the code level (rofl). They got a recommendation from an SE.

This is the first time I've touched Juniper gear in a few years. It went pretty smoothly, no real complaints. Seeing the price on the BOM was shocking too.
 
This is in my house. I'm not too worried about it since the loose cables are mainly Cat5, and everything else is grounded/breakered/terminated correctly. I had an inspector here 6 months ago to check some work that was done; he looked at it long and hard but didn't say anything. Meh.
Odd to find an inspector who won't complain about Article 334.30.
 
That's why I bought all new servers when I bought my stuff, instead of scrounging on eBay for them.

My Asus server was $500 plus proc & HDDs...

1 Asus server, Xeon quad core & 2 x 500 GB HDDs
1 Asus Intel Core 2 Duo & 4 x 160 GB HDDs
1 Dell PowerConnect 5224 managed switch
1 Dell 24-port gigabit switch (non-managed)
1 Linksys 24-port 10/100 switch
1 SonicWall TZ210
1 15″ Dell LCD
1 WD 2 TB NAS drive
1 WD 1 TB USB drive
1 APC 1500 UPS
2 PoE adapters, 1 for a VoIP phone & 1 for a WAP
1 Shuttle computer, Intel Core 2 Duo 2.2 GHz, 2 hard drives inside

All this, running as we speak, and pulling 332 watts. I'm trying to figure out exactly what my power consumption for a single month is for this server rack full of goodies. 332 watts isn't really too bad considering I'm running a pile of gear.
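
If anyone wants to run those numbers, here's a quick sketch of the monthly math for a steady 332 W draw (the $0.12/kWh rate is just an assumed example; plug in your own utility rate):

```python
# Rough monthly energy/cost estimate for a rack pulling a steady 332 W.

watts = 332
hours_per_month = 24 * 30     # ~720 hours in a month
rate_per_kwh = 0.12           # assumed electricity rate, USD per kWh

kwh_per_month = watts / 1000 * hours_per_month
cost_per_month = kwh_per_month * rate_per_kwh

print(f"{kwh_per_month:.0f} kWh/month, about ${cost_per_month:.2f}/month")
# -> 239 kWh/month, about $28.68/month
```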


https://lh5.googleusercontent.com/-e-J3LaNzUxk/TqoAXSdZImI/AAAAAAAAONw/gVP8efZ48jM/s640/DSCN3062.JPG

That's not bad at all considering I have a single dual P3 rackmount HP server that draws that much by itself (which is why I don't even run the thing anymore).
 
That's not bad at all considering I have a single dual P3 rackmount HP server that draws that much by itself (which is why I don't even run the thing anymore).

Yeah, I know. I bought a few extra watt meters; when I get a chance I'm going to plug in some old Dell servers, take pictures, and blog how much power they take just to show people.

Buying old equipment that works sometimes isn't the best idea..

Newer server = less power
Older server that works = more power + more heat = more $$ in the long run..

I should start a thread: how much does your server/setup consume?
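
To put rough numbers on the old-vs-new point, here's a quick sketch; the wattages and the electricity rate below are assumed example figures, not measurements of anyone's actual gear:

```python
# Rough sketch of the "older server = more $$ in the long run" math.
# All numbers here are assumed examples, not real measurements.

rate_per_kwh = 0.12           # assumed electricity rate, USD per kWh
hours_per_year = 24 * 365

def yearly_cost(watts):
    """Yearly electricity cost for a box drawing a steady `watts`."""
    return watts / 1000 * hours_per_year * rate_per_kwh

old_server = 400   # hypothetical older dual-socket server, watts
new_server = 150   # hypothetical newer replacement, watts

delta = yearly_cost(old_server) - yearly_cost(new_server)
print(f"Old: ${yearly_cost(old_server):.0f}/yr, "
      f"New: ${yearly_cost(new_server):.0f}/yr, "
      f"difference: ${delta:.0f}/yr")
# -> Old: $420/yr, New: $158/yr, difference: $263/yr
```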
 
I think you should. My rack with 2/3 servers running, KVM, and a network switch is almost 600 watts.
 
What are the EX4500s for? You don't have fiber on the EX4200s.

The 4500s are for 10g connectivity (storage, blade chassis, etc).

All the switches are part of the "Virtual-Chassis". The 4500s are the "routing engines" in the stack while the 4200s are "line cards". No fiber is needed on the 4200s as everything is using Juniper's stacking ports (VCPs) in the back.
 
Yeah, I caught up with the rest of the thread. The VC setup is interesting, but seems of dubious value.
 
Cost vs. value, mainly.

It's just stacking. If you need the density, the cost doesn't change (much) to add VC. You end up with the same density, but a single management point.

I'm not crazy about stacking in the core, but it's not related to cost.
 
On a side note, I know how much you guys love pretty cabling.

Mess01.jpg


Mess02.jpg
 
Yeah, I caught up with the rest of the thread. The VC setup is interesting, but seems of dubious value.

At my last gig, I needed fiber and copper in the datacenters ... so I deployed 4200-48Ts with 4200-24Fs in a VC. The 4500 wasn't ready in terms of code to have it stack on the 4200.

I didn't really care about the single point of management. In the colos where I put these, I only had a few 30A circuits to work with (which also supported all the servers). My requirements were 8 ports of 10gig, at least 20 fiber connections, and 60 copper connections. Most of these were A/B connections (e.g. if A/core1 blows up, B/core2 takes over).

So instead of buying 2 6504s each with a Sup720, 6704, 6748, and 6724 ... I bought 2 48Ts, 2 24Fs, and 4 two-port 10gig modules. Each stack had 1 48T and 1 24F. Saved a lot on power, space, and cost for the same performance the 6504 would have given me (I only needed OSPF/BGP/PIM sparse mode/BFD features ... and only ~2K routes).
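
Quick sanity check on that shopping list against the requirements above (a rough sketch; the per-model port counts are the standard ones for these boxes: 48x1G copper on the 48T, 24x1G fiber on the 24F, 2x10G per uplink module):

```python
# Tally the purchased port counts against the stated requirements.

gear = {
    "EX4200-48T": {"qty": 2, "copper": 48},
    "EX4200-24F": {"qty": 2, "fiber": 24},
    "2-port 10G uplink module": {"qty": 4, "tengig": 2},
}

totals = {"copper": 0, "fiber": 0, "tengig": 0}
for item in gear.values():
    for port_type in totals:
        totals[port_type] += item["qty"] * item.get(port_type, 0)

required = {"copper": 60, "fiber": 20, "tengig": 8}
for port_type, need in required.items():
    have = totals[port_type]
    print(f"{port_type}: need {need}, have {have}, ok={have >= need}")
# copper: need 60, have 96, ok=True
# fiber: need 20, have 48, ok=True
# tengig: need 8, have 8, ok=True
```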

Worked great ... and I didn't have to burn 10gig ports for uplinks to a fiber switch -- I used them instead for cross-links between core1 and core2 in each DC.

The 4500 stacked with the 4200 gives a lot of 1G/10G flexibility in a small amount of space, with a pretty much non-blocking 128 Gbps backplane (the VC protocol is IS-IS based) between the switches. Think of the value vs. a full chassis solution. Having said that, I would never put more than 2 or 3 in a stack.
 
It's just stacking. If you need the density, the cost doesn't change (much) to add VC. You end up with the same density, but a single management point.
The pricing doesn't change much, but the ongoing maintenance and management costs do.

If you have an application (like just2cool did) where it wins, then it can be a great solution. For us, it was really a wash and the added complexity, management, and specificity weren't helpful.
 
The pricing doesn't change much, but the ongoing maintenance and management costs do.

How? I guess I'm missing something, but your hardware isn't changing (other than some VCP cards in the 4500s). Maintenance costs should be the same.

You could even argue that you're saving overhead costs due to fewer points of management and less per-device licensing for management/monitoring tools. Even without considering that, though, the costs are virtually the same.
 
I don't think the management is any simpler; at least, that's dependent on the application and deployment. In some ways, the management is more complicated -- keeping every switch at the same Junos revision level, for instance. It's also more costly because you'll need to find someone who knows the Junos-specific VC details. Those details are pretty vendor-specific, not common knowledge, and Junos engineers are a lot less common than Cisco engineers. While some of the concepts are similar between vendors in this area, the details and implementation idiosyncrasies are things that only come from experience. And those details and idiosyncrasies are what usually cause outages.

If you compare VC to plain switching over Ethernet, the latter is a simpler, more manageable, and more inspectable solution, because you're just doing the back-haul switching over a regular, well-understood technology.

In simple situations, VC seems great: two EX4500s VCed together to share routing engines, back each other up, and so on. In such an application, the value proposition is a little clearer.

In more complicated situations -- the ones the vendors push as reference solutions, such as making campus-wide VCs -- I think the value is really dubious. It's easier to diagnose a plain Ethernet or TCP issue than it is to figure out the secret sauce in the VC mix.
 
I think you're reaching a bit. VCP (like StackWise) is very simple to understand and configure. I do agree on campus-wide or stretched stacks (VSS, VCP, IRF, StackWise, etc.) though. Very bad idea. But that isn't really what we're discussing here, nor does it cost more. The value add is pretty good too, with, again, fewer points of management, throughput gains, etc.
 
It's simple to understand and configure if you neglect the in situ problems that I'm mentioning. With those blinders on, you're right: no problems at all and everything's rosy.
 
It's great for a home switch, though I think it has a fan. I particularly like the gigantic white background; it goes with any decor.
 
Cool, is that a console port I see? How's the CLI on those things? I picked up ProCurve 2520s for my remote sites because I had never worked with 3Com before. Naturally I didn't want to deploy 16 switches from a brand I've never worked with.

I don't use the CLI because it would mess up my Cisco CLI knowledge.
 
Been there. I'm learning CLI stuff every day at my work, then they throw in a Juniper to configure and it's all backwards...

I'm a Cisco guy, too. Having to configure Extreme switches was a pain in the ass. I felt like a complete beginner (which I am at Extreme). At least my Cisco knowledge let me understand what needed to be done, what was being done, and why. It's just that the commands are different to get the same results... A good learning experience, though.
 
I don't use the CLI because it would mess up my Cisco CLI knowledge.

I find it both annoying and useful to use different CLIs. I've found that branching out gives me a better understanding of networking. It also keeps me from being a fanboy. :) (because every vendor has shit that really pisses me off)
 
It's great for a home switch, though I think it has a fan. I particularly like the gigantic white background; it goes with any decor.


lol. It's silent. :D Fully manageable. The interface is very simple; I was kinda disappointed that it's not as nice as the HPs or Ciscos, lol.
 
lol. It's silent. :D Fully manageable. The interface is very simple; I was kinda disappointed that it's not as nice as the HPs or Ciscos, lol.

I wish my D-Link was silent. The fan makes a little noise, but it's not that bad considering the whole rack makes noise. Maybe a little 3-in-1 oil will silence it. Nice and configurable, though.
 
Hello everyone, I'm new here, so I thought I would show the start of my network at home.

Rack.jpg


Picked up this 7 ft rack for 50 bucks while I was in Virginia on business.

Will have some servers and a CCNA lab in it soon.
 