Network pics thread

Well, since we've been on the EdgeMAX train here recently...

We recently moved into our new datacenter, and our small 3-4 person office has a direct multimode fiber drop from the core switch. Now, I hear what you're saying: why even install a router? Just give your office workstations public IPs and call it a love story. Well, 1) I'd rather not waste the IPs, and 2) it'd be nice to have things like guest wifi where I'm not setting static IPs on guest phones and laptops. And I refuse to run DHCP or NAT on one of my core routers, as I'm sure most of you would agree.

Enter the office router. We used an RB493G (RouterOS) for the first few weeks, but from what I could tell it would only pass about 300 Mb/s (almost on the dot) with the CPU pegged at 100%. Remember, it's doing NAT and such. I'm not going to argue that 300 Mb/s worth of internet wasn't enough; it was hard to even find places that could push that much bandwidth. But it really killed me transferring ISOs around when I knew that fiber should do a full gig.

On a whim, I swapped in the EdgeMAX, programmed it as a direct replacement, and I'm impressed. Running NAT I can still achieve full wire-speed transfers. Now, obviously the graph is a bit off, as Gigabit Ethernet isn't going to pass 1.43 Gb/s, but still... holy hell. For 99 bucks, sign me up for a rackmount version when they come out.
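For anyone curious, the config that replaced the RB493G is nothing exotic: masquerade NAT out the fiber-facing port plus a DHCP scope for the office and guest gear. A rough sketch in EdgeOS (Vyatta-style) CLI; the interface names and the 192.168.10.0/24 addressing below are just placeholders, not our actual layout:

configure
# eth0 faces the core switch, eth1 faces the office LAN (example addresses)
set interfaces ethernet eth0 address 203.0.113.2/30
set interfaces ethernet eth1 address 192.168.10.1/24
# masquerade everything leaving eth0
set service nat rule 5010 description "office NAT"
set service nat rule 5010 outbound-interface eth0
set service nat rule 5010 type masquerade
# DHCP and a DNS forwarder so guest phones and laptops don't need statics
set service dns forwarding listen-on eth1
set service dhcp-server shared-network-name OFFICE subnet 192.168.10.0/24 start 192.168.10.100 stop 192.168.10.200
set service dhcp-server shared-network-name OFFICE subnet 192.168.10.0/24 default-router 192.168.10.1
set service dhcp-server shared-network-name OFFICE subnet 192.168.10.0/24 dns-server 192.168.10.1
commit
save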

 
Here is a network rack at one of our elementary schools that I re-cabled a few weeks ago.
Before: (I started unplugging cables before I decided to take pictures)
ncesrack01.jpg


After:
ncesrack02.jpg

ncesrack03.jpg
 
Heat from propane is easy to divert.

A piece of drywall or OSB held under the block would save it from damage.

Yeah, we've just had bad luck with plumbers lighting our buildings on fire. Common sense isn't common enough sometimes. Homeowners always want to go with the lowest bid, ugh.
 
Had some new switches wired up last week and just noticed them in the equipment room today (I'm not in the network group). Nice job again (I've no idea which company it was outsourced to, but they do a bang-up job).

Rack1.jpg

Rack1-wide.jpg
 
Man, I need to get a pic of the datacenter I'm in sometime; you guys would love the fiber patch panels and switches.

The datacenter has Alcatel 7550's that feed other 7550's and then 7330's if you know what I mean. It provides HSIA, VOIP, VOD and IP-TV, you can do the math. lol
 
What does your power situation look like for that UCS rack? Sungard only allowed 2 chassis per rack (plus FEX) before we maxed out on power.
 
I'd suggest populating the UCS from the bottom of the rack up.

From experience, it became a pain to work with when we added more to it.
 
I attended a CDW/Cisco seminar earlier this week on the UCS. We have 2 UCS C200 M2 servers running our Call Manager environment. I did not know about all the other rack and blade server options. They are doing some pretty interesting stuff!
 
The two units at the top are converged UCS fabric extenders, which are pretty much just stripped-down Nexus switches.

As far as power goes, the chassis has 4 dedicated drops from the UPS and the FEX are powered by 2 Z-Line PDUs not pictured, both on separate circuits.

We will be hard-pressed to outgrow the single chassis once the old ESX and legacy servers we have left get decomm'd. With 4 spaces left to populate in the chassis, we have room for VDI, Call Manager, or anything else that may get thrown our way.

On a side note, I really think that a couple Nimble boxes would make the rack even sexier... That's for FY-2014. I hope. :D
 
I see a lot of UCS stuff lately; looks pretty slick! So is it basically a virtualization platform like VMware? Guessing it can virtualize the networking portion too?
 
I see a lot of UCS stuff lately; looks pretty slick! So is it basically a virtualization platform like VMware? Guessing it can virtualize the networking portion too?

UCS boxes are basically servers just like you'd pick up from Dell, HP, etc. They tend to sell them into environments where people use them for virtualization, but they are not a "virtualization platform like VMware". VMware sells the software...
 
They just look like rebranded Sun servers... surely with a nice Cisco logo markup.
 
Still working on this, but half the office is now in.
Almost finished.

In terms of colours:

Blue - Normal Data
Yellow - VoIP
Pink - WAPs
Green - AMX (automation for TVs, meeting rooms, lighting, etc.)
Red - Servers
Purple - Management
White - WAN

Network1.JPG

Network2.JPG
 
Is it not the normal practice to run one line to a desk and daisy chain it through the VOIP phone to the desktop? This isn't the first rack I have seen with separate VOIP and data lines and it makes me curious. Most DoD networks that I have worked with run VOIP and computers on the same line, allowing CDP to work out the devices. It works fine with 802.1x security as well. Maybe this is just a Cisco thing?

That is a colorful rack. I like the angled patch panels. I've never seen those before.
 
We have separate switches for data (gigabit) and voice (100 Mb PoE) here. It's been like that since before we had our Cisco phone system.
 
Is it not the normal practice to run one line to a desk and daisy chain it through the VOIP phone to the desktop? This isn't the first rack I have seen with separate VOIP and data lines and it makes me curious. Most DoD networks that I have worked with run VOIP and computers on the same line, allowing CDP to work out the devices. It works fine with 802.1x security as well. Maybe this is just a Cisco thing?

That is a colorful rack. I like the angled patch panels. I've never seen those before.

Yeah, that's how I have always seen it done, though I use LLDP/LLDP-MED.
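For anyone who hasn't set it up, the daisy-chain arrangement is usually just an access port with a voice VLAN on it: the phone learns the voice VLAN over CDP or LLDP-MED and tags its own traffic, while the PC hanging off the back of the phone stays untagged on the data VLAN. Something like this on a Cisco switch (the VLAN numbers are made up for illustration):

! vlan 10 carries data for the PC, vlan 150 carries voice; the phone learns 150 via CDP/LLDP-MED
interface GigabitEthernet0/10
 description desk drop - PC daisy-chained behind the phone
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 150
 spanning-tree portfast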
 
I've done VoIP both ways; in the past, gigabit pass-through phones were significantly more expensive than 100 Mb ones, but they have come down over the years.

I usually re-purposed old 10/100 Mb PoE switches for VoIP phones.
 
Is it not the normal practice to run one line to a desk and daisy chain it through the VOIP phone to the desktop? This isn't the first rack I have seen with separate VOIP and data lines and it makes me curious. Most DoD networks that I have worked with run VOIP and computers on the same line, allowing CDP to work out the devices. It works fine with 802.1x security as well. Maybe this is just a Cisco thing?

That is a colorful rack. I like the angled patch panels. I've never seen those before.

It is, but we've noticed that if we lose a switch, or do any work, it also disconnects their PC. This way we can avoid that, and also visually see what is what, rather than just a big blue jungle. We had the capacity, so we thought screw it, just plug everything in separately. Obviously not cost effective in all situations, especially when using Ciscos. Also, if the phone reboots from a Call Manager change, the PC also loses network briefly. Not a major issue if you're running good wireless access throughout as well, for laptops at least. Desktops would still have a problem.

Also, some of our IP phones aren't gigabit pass-through, so daisy-chaining would cost us the use of our gigabit ports, which sucks. So in our case, every PC connected via cable is running at a true 1 Gbps.

There are another 2 fully loaded 3560's at the back of the comms rack.
 
Not the greatest picture, I know. This is my home rack that is for labbing and personal use. I was moving it around, so there is really nothing connected in the picture.



Digi CM32 Console Server
Dell PowerConnect 2824
3x Cisco 2651XM
2x Cisco 2611XM
Cisco 4006
2x Cisco 2950
2x Cisco 3550
Custom 4u Server (NAS)
Dell PE 2950 (ESXi)
 
Is it not the normal practice to run one line to a desk and daisy chain it through the VOIP phone to the desktop? This isn't the first rack I have seen with separate VOIP and data lines and it makes me curious.

I've been in shops that do both daisy-chain and separate lines for the phone and PC. In my situation it has been the cost of pulling double the number of drops for each cube/office that is the deciding factor, yea or nay.

In fact at my last gig, one office had it one way, and the other office had it the other. Cost was the factor.
 
How much extra is it to pull two cables rather than one at the same time, though?

I would have said the main cost would have been associated with having to double your switching capacity, especially for large environments.

It's always good to pull 2 drops through anyway, if you have an issue with 1 cable 2 or 3 years down the track, at least you have 1 spare :)
 
We daisy-chain our phones as we only have 10/100 PoE switches anyway.

Picked up a Bluetooth barcode scanner recently for doing inventory, and my manager wanted a printed copy of the manual. A device this small should not have a manual this large, lol.
EDIT: This is English only.
manualtoobig.jpg
 
Please tell me that manual contains multiple language versions!

The DoD networks I have worked with so far only use gigabit access on the server side and the trunks. The lines to the users are only 100 Mb, with a few exceptions. So it never occurred to me that gigabit would be a deciding factor in using one line or two for VoIP/desktop access.

I just accepted a position at a college that is gearing up to rebuild their infrastructure. I can not wait until I get to play with the new hardware.
 
Bar code scanning is serious business, only to be used by trained professionals. :D


This came in not too long ago:



I put in the mobo, cpu and ram:



Waiting for a fan splitter/extension cables and a bracket to mount an SSD for the OS, then I'll be all set. Not 100% sure yet if I want to use it as a NAS or a SAN, but it will be dedicated storage, and the drives from my current main server will be going in here so I can offload the disk I/O from the other server. Eventually I want to build one or more servers for VMs; this is more or less phase 1 of that. It cost $3,500, so it's a lot to swallow. Either way, any server I build now won't have its own storage, but will use storage off this new server. Whether I use iSCSI or NFS for that, I still have to decide. I'm leaning towards treating it like a SAN where it's on its own network, but I may use a hybrid of NFS and iSCSI.
 
Either way any server I build now wont have it's own storage, but use storage off this new server. Whether I use iSCSI or NFS for that, I still have to decide. I'm leaning towards treating it like a SAN where it's on it's own network, but may use a hybrid of NFS and iSCSI.

If you are a Windows guy, have a look at Microsoft's iSCSI target. I have a Windows 2008 R2 server that I use as a workstation (email, web browsing, office docs, etc., so I don't have to fire up an additional PC or laptop for quick work since it's on 24/7), and I also use it for centralized storage of media files and data for access around the house.

I ended up adding another RAID card and a dual-port NIC to it and building a dedicated volume that serves up iSCSI to my ESXi cluster and Windows DB cluster on a dedicated iSCSI VLAN.

This MS iSCSI target software performs as well as openfiler and blows FreeNAS out of the water. And it's free.

So you can have your typical windows file server that also acts as an iSCSI SAN in a single box.
 
I'm a Linux guy, but yeah, I have played with the Windows iSCSI target, and it works pretty well; I may end up using it for the few Windows VMs I have.

For the OS I'll probably just use CentOS and do everything from the command line. I've played with Openfiler and FreeNAS before, but I found myself often dropping to the command line anyway, so I figured I'll just do it that way.
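If you do end up serving iSCSI straight from CentOS, the in-kernel LIO target is all command line anyway. A rough sketch with targetcli (the backing device, IQNs, and initiator name below are placeholders, and on older CentOS 6 boxes you'd be using tgtd/scsi-target-utils instead):

# expose an existing block device (LV, partition, whole disk) as a backstore
targetcli /backstores/block create name=vmstore dev=/dev/vg_storage/vmstore
# create the target and map the backstore as a LUN
targetcli /iscsi create iqn.2013-04.lan.storage:vmstore
targetcli /iscsi/iqn.2013-04.lan.storage:vmstore/tpg1/luns create /backstores/block/vmstore
# listen on the dedicated storage network and allow the ESXi initiator
targetcli /iscsi/iqn.2013-04.lan.storage:vmstore/tpg1/portals create 192.168.50.10 3260
targetcli /iscsi/iqn.2013-04.lan.storage:vmstore/tpg1/acls create iqn.1998-01.com.vmware:esxi01
targetcli saveconfig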
 
Bar code scanning is serious business, only to be used by trained professionals. :D

This came in not too long ago:

I put in the mobo, cpu and ram:

Waiting for a fan splitter/extension cables and a bracket to put a SSD for the OS, then I'll be all set. Not 100% sure yet if I want to use it as a NAS or a SAN but it will be dedicated storage, and the drives from my current main server will be going in here so I can offload the disk IO task from the other server. Eventually I want to build one or more servers for VMs, this is more or less phase 1 of that. It cost $3,500 so it's a lot to swallow. Either way any server I build now wont have it's own storage, but use storage off this new server. Whether I use iSCSI or NFS for that, I still have to decide. I'm leaning towards treating it like a SAN where it's on it's own network, but may use a hybrid of NFS and iSCSI.

Ahh, Starboard Storage... they've come a long way since the days of being RELData and just pushing out iSCSI gateways. We use those same arrays in one of our enterprise SAN rollouts, and I have one in my little home network as well.
 
How much extra is it to pull 2 cables than one at the same time though?

I would have said the main cost would have been associated with having to double your switching capacity, especially for large environments.

It's always good to pull 2 drops through anyway, if you have an issue with 1 cable 2 or 3 years down the track, at least you have 1 spare :)

Twice the cost. Industry runs around $150/run in my area.
 
Twice the cost. Industry runs around $150 /run in my area.

Negotiate better... the wire is about 10% of the cost of doing the drop; it's the labor that dominates. It should cost you about 1.1x the cost, not 2x. And years from now you'll be very glad you did.
 
Finally got a chance to SSH into my friend's EdgeMAX.

I was surprised as hell the second I ran a few commands to find it is almost identical to JunOS. I really, really want one now.
 
I wish there was an EdgeRouter Lite Plus (with more ports), but I don't think that's going to happen, and they are taking too long to bring other models out. :( So depressing.
 
That sucks! Here it's more just the first run that costs... if you are bundling 9 more cables with it, it's not 9x the cost, just the extra labour to terminate 9 more at each end, and obviously the cost of 9 more lengths of Ethernet, which is pretty cheap: $100 or so for 305 metres.
 
Negotiate better...the wire is about 10% of the cost of doing the drop. Its the labor that dominates. It should cost you about 1.1x the cost, not 2x. And years from now you'll be very glad you did.

Plenum-rated Cat6, plus faceplates, jacks, Velcro, and other materials, is certainly going to cost more than $15 for a run of any significant length, plus labor.

Yes, some jobs do come in at less per drop, but this is a rate I've seen pretty often. I handle a large number of cabling jobs in my position.

While I do negotiate jobs, I don't have nearly as much flexibility, since anyone I use has to be on state contract.
 
Negotiate better...the wire is about 10% of the cost of doing the drop. Its the labor that dominates. It should cost you about 1.1x the cost, not 2x. And years from now you'll be very glad you did.

"Negotiate better." Hah hah hah. Is that all it takes? :rolleyes:

Around here (SF Bay Area) electrical/data contractors have a flat rate per drop: it doesn't matter if it's one, two, four, or a hundred cables into the same cube, it's still 1x, 2x, 4x, or 100x the cost.

And nevermind the cost per drop in a union controlled building. :mad:
 