Network pics thread

I can do you one better...

IMG_0807.jpg
now disassemble it and show us the circuitry pr0n
 
You. do. not. understand. IT.

Get out of here. You don't belong in [H].
T.h.e.r.a.p.y.

Truth! Guess I should be taking more pics of any old 2924XL's that I rip out. :rolleyes:
BAM! Nailed it. I think I have some AUI transceivers and some 10BaseFL media converters around. Apparently the kiddies like the old stuff. :eek: Alas, I did not keep any of the 10Base5 vampire taps from back in the day.

http://img190.imageshack.us/i/img0205yn.jpg/


 
be prepared to cream your pants
DSC01817.jpg

sure, it was like 7 years ago but whoa! Good stuff here!
 
yeah this is at home. The ISM-SRE-300 is basically a small blade server which runs one of several applications available from Cisco. I'm running AXP (Application eXtension Platform), which is essentially a virtualized Linux (customized CentOS) environment, so I can have it do whatever I'd like. It can also interface directly with the router CLI to pick up data and modify the running configuration, though I don't have a need for any of that. I have mine running Asterisk 1.8, FreeRADIUS, BIND, dhcpd and Cacti (plus MySQL and Apache). It's good because it's reliable and eliminates the need for another server running those basic functions. You can also run Cisco Unity Express on it, and apparently they're coming out with wireless LAN controller and network monitoring applications for it.
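In case anyone's curious how little it takes to keep an eye on a consolidated box like that, here's a rough Python sketch of the kind of check that works: probe the TCP-reachable services and see what answers. The address and port list are just placeholders (nothing AXP-specific, and not my actual config), and the UDP-only daemons like dhcpd and freeradius won't show up this way.

import socket

HOST = "192.168.1.2"           # placeholder address for the AXP guest
TCP_SERVICES = {
    "DNS (bind)": 53,          # bind answers TCP as well as UDP
    "HTTP (apache/cacti)": 80,
    "MySQL": 3306,
    "SIP (asterisk)": 5060,    # only if TCP is enabled in sip.conf
}

def tcp_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in TCP_SERVICES.items():
    state = "open" if tcp_open(HOST, port) else "closed/filtered"
    print(f"{name:22s} port {port:5d}: {state}")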

What made you want to get this setup and how did you afford it?
 
T.h.e.r.a.p.y.

BAM! Nailed it. I think I have some AUI transceivers and some 10BaseFL media converters around. Apparently the kiddies like the old stuff. :eek: Alas, I did not keep any of the 10Base5 vampire taps from back in the day.





rofl

As part of my quest to clean up around the house:


Sorry about the crap cell phone pics - it's a 2900XL and that's a 2948G (and fellow [H] members have recently informed me that it's useless).

Still don't know why I have this Cisco stuff around, I've never used it. :eek:
 
T.h.e.r.a.p.y.

BAM! Nailed it. I think I have some AUI transceivers and some 10BaseFL media converters around. Apparently the kiddies like the old stuff. :eek: Alas, I did not keep any of the 10Base5 vampire taps from back in the day.





Where is that Cisco coffee cup from?
 
What made you want to get this setup and how did you afford it?

I used to have individual dedicated devices for each network function and I grew tired of continually upgrading this/upgrading that, "playing with things" and dealing with the inherent mess (and I don't ever want a rack in my house). I wanted to simplify as much as possible with an all-in-one box that did everything I currently wanted/needed (ADSL, PoE gigabit switching, routing, firewall, IPS, VPN, etc.) and left room for flexibility in the future (terminating various media with different modules), and that would also satisfy my occasional thirst for tinkering and experimenting. This fit the bill perfectly. The ISM module let me further simplify my setup so that the only "server" I have is purely for storage and could even be replaced with a dumb NAS in the future if need be. Getting my Asterisk system onto something reliable and integrated was another huge plus. Also, since this thing is modern and fast for a home network, it should last me for quite a few years (everything currently has a 3-year SmartNet contract). I almost went for the wireless version but it doesn't support the ISM (the wireless card uses the slot), and I can imagine a newer/faster wireless technology will be commonplace before this unit is past its sell-by date. Selling some of my old components helped out but I also work...
 
I've started virtualizing everything at work. This is a work in progress at the moment.

Two server racks. The right one will house the VM servers (four so far), the SQL cluster and the SAN.
The two switches at the top right are HP ProCurve 2910al 48-port gigabit L3s.
The servers are SuperMicro builds with dual quad-core 3.06GHz processors with HT enabled. Each system has 48GB of RAM.
The servers are running Citrix XenServer and the hypervisor runs on 10k rpm drives in RAID 10.
Each node is connected to the production and storage network via bonded gigabit adapters using LACP.
IMG_0364.JPG


More Detail for the right rack.
IMG_0365.JPG


The rack on the left side currently has an HP ProCurve 2910 48-port gigabit L2 switch and two network-attached storage systems, both SuperMicro.
Both NASes are running FreeNAS with ZFS as the file system. The 1U NAS is 8TB raw and the 3U NAS is 32TB raw.
The top is used for repository and archiving. The bottom is used for application storage, ISO repository for XenServer, Citrix XenApp profiles, etc.
Each of these systems is connected to the storage network via bonded gigabit connections using LACP (a quick bond sanity check is sketched at the end of this post).
IMG_0367.JPG
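A quick way to sanity-check those LACP bonds from the host side, assuming they show up as standard Linux bonding interfaces (that's an assumption about this setup, and the bond name below is a placeholder), is to parse /proc/net/bonding and confirm 802.3ad mode plus that every slave NIC is up:

from pathlib import Path

BOND = Path("/proc/net/bonding/bond0")   # placeholder; use the actual bond name

def check_bond(path):
    """Return (lacp_mode_ok, list_of_down_slaves) for a Linux bonding interface."""
    text = path.read_text()
    lacp = "IEEE 802.3ad" in text        # LACP / dynamic link aggregation mode
    down, slave = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Slave Interface:"):
            slave = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and slave is not None:
            if "up" not in line:
                down.append(slave)
            slave = None
    return lacp, down

lacp, down = check_bond(BOND)
print("802.3ad (LACP) mode:", "yes" if lacp else "NO")
print("slaves down:", down or "none")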
 
They are P4000 G2 7.2TB SAN, two nodes. So yeah, basically LeftHand SAN in there. HP hardware running LeftHand software from their acquisition. Everything in the storage network runs through iSCSI; all load balanced and aggregated.
 
nice, what kind of throughput do you see from them? I will have a P4000 and 2910-24 here for testing in a few weeks.

I am building an SQL Cluster.
 
Between the two nodes, I see roughly 1,000 IOPS through iSCSI. That was using a program called IOMeter, I believe. I haven't really done much other testing on them; suffice to say that for what I have used them for, I haven't had any issues.

The SQL cluster that is about to be moved into that rack runs on two Dell R710 machines both direct connected to an MD3000. That MD3000 only serves those two machines for SQL data.
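If you want a ballpark number without setting up IOMeter, a crude single-threaded Python test like this gets you in the neighborhood. The path is a placeholder for a test file on the iSCSI-backed volume; since it's queue depth 1 it will read lower than IOMeter does with deeper queues, and the file needs to be much bigger than the array cache or the number means nothing.

import os, random, time

PATH = "/mnt/iscsi_test/testfile.bin"    # placeholder test file on the SAN volume
BLOCK = 4096                             # 4K random reads
DURATION = 10.0                          # seconds to run

size = os.path.getsize(PATH)
blocks = size // BLOCK
ops = 0

fd = os.open(PATH, os.O_RDONLY)
try:
    end = time.time() + DURATION
    while time.time() < end:
        os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
        ops += 1
finally:
    os.close(fd)

print(f"~{ops / DURATION:.0f} IOPS (single thread, queue depth 1)")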
 
Why RAID10 on the hosts? Planning to virtualise our domain soon and will be investing in SAN spindles and leaving the hosts on SD cards...
 
Each host has four 10k RPM SATA drives. I just put them in RAID 10 to have the speed and redundancy on the hypervisor. They were cheap, so it was inconsequential to me to do it. Just be aware that the only thing the host hard drives do is run the hypervisor. I do not enable any storage on them whatsoever, as that is handled by the SAN.
 
I've started virtualizing everything at work. This is a work in progress at the moment.
Why are you mounting the switches and patches backward? That is, why aren't the cable connections towards the hot side of the rack, where the NIC cable connections are?
 
Why are you mounting the switches and patches backward? That is, why aren't the cable connections towards the hot side of the rack, where the NIC cable connections are?

They look the right way to me :confused:

Switch front and patch panel front in the... front, the patch panel routes the cables to the back or to wherever else they go, you use short cables from the switch to the patch panel to connect devices to the switch.

Least, that's how I've always done it and seen it done in the few places I've worked at.
 
I've always done the opposite. The patch panel goes "up" in the infrastructure to a central patch or a central switch. The switch and the patch panel's ports face the hot aisle, and patches from the machines go up the wiring gutter in the rack to the switch or patch panel, as needed.

In this scheme, where does the cable from a particular server go? It seems that it must wrap around from the hot aisle (where it connects to the server), back to the top of the rack on the cold aisle, through the body of the rack. This means the cable isn't accessible as the rack fills up. I think this scheme causes lots of problems, like the sloppy green cables that make U35-36 useless; or the blue and purple cables that look like they're going cross-cabinet someplace.
 
corge: which version of FreeNAS are you running? Did you do any testing/comparison to see what throughput you got with various other operating systems?

I've recently done a lot of research and found speeds to be quite limited in FreeNAS, although I wasn't running it on a very powerful machine.
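For reference, this is roughly how I've been timing sequential reads when comparing boxes: just time a big read off the mounted share. The path and chunk size below are placeholders, and you have to outrun (or drop) the client-side cache between runs for the numbers to mean anything.

import time

PATH = "/mnt/nas_share/bigfile.bin"   # placeholder: a multi-GB file on the share
CHUNK = 1024 * 1024                   # read in 1MB chunks

total = 0
start = time.time()
with open(PATH, "rb", buffering=0) as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.time() - start

print(f"{total / 1024**2:.0f} MB in {elapsed:.1f}s = {total / 1024**2 / elapsed:.0f} MB/s")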
 
I've always done the opposite. The patch panel goes "up" in the infrastructure to a central patch or a central switch. The switch and the patch panel's ports face the hot aisle, and patches from the machines go up the wiring gutter in the rack to the switch or patch panel, as needed.

In this scheme, where does the cable from a particular server go? It seems that it must wrap around from the hot aisle (where it connects to the server), back to the top of the rack on the cold aisle, through the body of the rack. This means the cable isn't accessible as the rack fills up. I think this scheme causes lots of problems, like the sloppy green cables that make U35-36 useless; or the blue and purple cables that look like they're going cross-cabinet someplace.

I don't think you understand how front-mounting patch panels and switches works. As for the green and purple cables, they look temporary.

You set up the patch panel so that all the cables go out behind it and then down (or up) one of the corners of the rack (the wiring gutter), branching out to any servers if any.

As for the switch, it hooks into the patch panel exclusively, hence why I say the purple and green cables look temporary. Rarely do I ever connect a device directly to the switch without at least one patch panel in between.

To be honest I've seen both methods used; I prefer to front-mount it, and at least from what I've seen, it's the more commonly used approach.
 
You set up the patch panel so that all the cables go out behind it and then down (or up) one of the corners of the rack, branching out to any servers if any.

As for the switch, it hooks into the patch panel exclusively, hence why I say the purple and green cables look temporary. Rarely do I ever connect a device directly to the switch without at least one patch panel in between.
If I have it right, to add a connection in such a configuration, access must be available to the patch panel, which is at the crowded end of the rack where the backs of the keystones or sub-panels aren't accessible. Or you end up with cables hanging slack. I guess one could also leave a couple of U under the patch panel, but that wastes space.

You've got to pay for the patch panels, the extra patch cables, and you lose a U or two of space in the rack. There's also a multitude of additional points of failure in such a setup -- the jack, the punchdown on the back of the keystone, and both ends of the short patch cable, specifically.

What are the benefits that overcome these disadvantages? Why do you prefer this method?
 
The front-mounted patch panel thing is strange. I use patch cables to go from the back of the servers to the switches, mounted at the rear top of the rack. Above the switches, in the back of the rack, I have patch panels that run from each rack to my relay rack, which I only really use to uplink my switches to the core.
 
Finished 90% of it this weekend; just have to move the HDHomeRun over to it.

dscf03flf.jpg


top:

Network
WHS
Server 2008R2 Testbox
Cisco Lab
UPS
 
Ok, I'm going to kind of go at this in the order I've read them.
1) I'm using WD Velociraptor 10k SATA drives in the hosts. I'm using 15k SAS drives in the SAN. The hosts do nothing on those hard drives except run the hypervisor. The hypervisor needs very little hard drive access speed and the 10k drives were on sale. I would have used them anyway just in case.

2) The switches are mounted forwards due to the fact that the space behind the racks will be limited. I have the cables from the servers coming up the cable management tray in the back into the back of the patch panels above the switches. They are all pre-made cables, so I'm using Panduit CAT6 couplers. They aren't really cheap, but easy to access, replace and they work like a champ.

3) The green cables are management cables only. They are temporarily put in like that until I move our core switch in from the old room to this new one. The purple cables in the right rack are port-aggregated (HP lingo) or trunked (in Cisco terms, minus the EtherChannel, as that isn't needed for dual-port trunking on ProCurves). That is between switch one and switch two for that rack. There is a blue cable from each of those switches that goes back to the core switch at the moment. The reason there are purple cables between those two is so that traffic from server storage NICs to SAN storage NICs doesn't have to go back through the core switch. I eliminate that hop in the network and the load on the core switch. The purple cables from the left rack to the right rack are temporary as well.

4) I will take a picture of the back and upload it soon.
 
Is there a standard for how to mount the switches for wiring? I have seen it both ways; some have all the ports facing the rear, some the front of the rack.
 
2) The switches are mounted forwards due to the fact that the space behind the racks will be limited. I have the cables from the servers coming up the cable management tray in the back into the back of the patch panels above the switches.
that's too bad. How are you solving the cooling issues?
Is there a standard for how to mount the switches for wiring? I have seen it both ways; some have all the ports facing the rear, some the front of the rack.

Meaningful standards are pretty rare, and it should be no surprise that there's not a standard for how to populate an equipment rack.

I think the first fulcrum in the decision is the air flow through the switch. The switch should be mounted so it's pulling cool air from the cold aisle, and exhausting hot air towards the hot aisle. Some people build racks with ducted cooling and so on, so that might not directly apply. If a back-to-front cooling kit is available, you might be able to install the switch "backwards" without affecting cooling when the kit is used.

For me, the next issue is reliability. If I can use a patch cable to connect a server to a switch, I only have two points of failure: the crimp at each end of the cable. If I'm going through a patch, I have to worry about the punch down on the keystone plus both ends of the other patch cable.

Adding or removing keystones requires access to the panel, and that might not be convenient. Making it convenient can result in wasted space. Fiddling with the connections might disrupt adjacent connections, or channels that share the same keystone panel.

With the switch ports facing the hot aisle, these problems don't exist and it's easy to route the patches in the gutter of the cabinet. It's trivial to remove, add, or replace patches wired this way, and there's no chance of wires being pinched by the insertion or removal of a server.

If there's no room on the hot aisle, then I guess there's no other choice; but when there's not much room around the rack, cooling issues aren't far behind.
 
Is there a standard for how to mount the switches for wiring? I have seen it both ways; some have all the ports facing the rear, some the front of the rack.

In enterprise situations it really depends on your datacenter layout. Normal practice would be to put all the network connections to the rear of the rack, but many times rear access might not be available. Either one is OK.

For many Home or SOHO installations you are putting the racks in a closet with limited access or even up against a wall. In those cases putting all the network in front is pretty much mandatory.
 
that's too bad. How are you solving the cooling issues?

I have none. In that room there is a dedicated 5-ton A/C unit that was in there before, but for a much larger space. I say that because this is an office converted into a server room. All of it has been redone. The A/C unit was for the office here and the customer service bullpen next to the office. Now all of the ducts have been removed and replaced. This room has three ducts from that one rooftop A/C directly into it. The air returns are actually placed in the ceiling right behind the server racks, so the hot air exhausted out the back rises to the A/C unit's returns. Cold air in the front, hot air up and out the back.

This room also has a dedicated 3-ton Mitsubishi mini-split system. This is the main A/C for the room because it will run in winter as well; the rooftop A/C, like most, is supposed to do heating and cooling, not cooling to 65 degrees when it's 15 degrees outside with snow.
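For a rough sense of the headroom: a ton of cooling is 12,000 BTU/hr, or about 3.5kW of heat removal, so the quick arithmetic looks like this (the rack draw below is a made-up example number, not what these racks actually pull):

BTU_PER_TON = 12_000        # 1 ton of cooling = 12,000 BTU/hr
WATTS_PER_BTU_HR = 0.293    # 1 BTU/hr is roughly 0.293 W

def tons_to_kw(tons):
    """Cooling capacity in kW of heat removed."""
    return tons * BTU_PER_TON * WATTS_PER_BTU_HR / 1000

rooftop   = tons_to_kw(5)   # ~17.6 kW
minisplit = tons_to_kw(3)   # ~10.5 kW
it_load   = 6.0             # hypothetical: two racks drawing ~3 kW each

print(f"rooftop unit : {rooftop:.1f} kW of cooling")
print(f"mini-split   : {minisplit:.1f} kW of cooling")
print(f"assumed load : {it_load:.1f} kW -> covered by the mini-split alone")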
 