Network pics thread

Since I clearly don't have enough network appliances already:

P1100648.jpg


P1100650.jpg
 
Outside of paying ~$20k retail? Probably nowhere, unless you're really lucky. I haven't seen them popping up anywhere since they're not EOL.
 
How's the 3750X working for you as a core switch? We killed those things with iSCSI traffic. I wouldn't trust them beyond the access layer.

Yeah, pretty good; this is a small-ish site. The 3750X stack has 6 switches. The first two are for the ESX servers, with NICs spread across both for redundancy, etc. Switch 1 has the primary WAN and switch 2 has the backup WAN for further resiliency.
The rest of the switches in the stack are a combination of access ports, WAPs, etc.

No constant heavy iSCSI traffic on the core; we're using 8Gb Fibre Channel switches for the SAN.

Having said all that, one of the switches in the stack did die a few weeks ago. It was receiving power and powered on, but all the LEDs were off and it wouldn't come back to life. Other than that, I've never seen another 3750X die.
 
Material-wise, 6a does typically cost substantially more than 6. However, labor costs should be the same. Not sure how that would total out per-run for a professional install, but I'd think it shouldn't be 2x.

Given the additional cost, limitations, and lack of real benefit of 6 over 5e, it doesn't make sense to me to use it.

The cable alone is about 2-2.5x the cost, and the patch panels and jacks cost more as well.

Both are certified for 10GBASE-T for the same distances.


It's also even more of a PITA than cat6 to work with. Terrible. Any place that knows what they're doing would charge more to install it because it's more labor.
 
Since I clearly don't have enough network appliances already:

Too bad Celestix won't help you do anything with them... I tried to get info on driving the LCD on those RSA boxes and they were zero help.
 
Just a couple of 3560G access switches in one of the smaller DCs I look after. Fibre port channels to a 3750X core stack.

Sorry for the massive size; this forum does not auto-size images. :(


You could benefit from some real cable labels. Those look like they're falling off.
 
Too bad Celestix won't help you do anything with them... I tried to get info on driving the LCD on those RSA boxes and they were zero help.
I have images of the disks as they came from the factory, so I have the LCDs working on both the Celestix and RSA 1Us. If you want a copy, I can upload it for you. Unfortunately, I don't have one for this 2U, but I'll be working on that (this one runs 2008 instead of 2003, so it's not as straightforward).
 
Sorry for such a dumbass question, but what's so special about 3PAR? I keep seeing them more and more; I've tried to look at them but still can't figure out what their claim to fame is.

3PAR doesn't use RAID in the common sense of adding whole drives to an array. It uses chunklets and builds RAID across the chunklets. You never have a dedicated hot spare/global spare drive; instead you have spare chunklets on all drives. You can also set the availability so that, once chunklets are grouped, you can lose an entire cage, magazine, port, etc.

I work with quite a few 3PAR arrays (in the double digits), and the technology in them and the amount of brainpower that went into designing them is really pretty slick.
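
If it helps to picture it, here's a rough sketch of the chunklet idea in plain Python. It's only an illustration (the 1GB chunklet size, the 5% spare reservation, and the exact cage-availability rule are my own assumptions for the example), not anything out of 3PAR:

[code]
from dataclasses import dataclass

CHUNKLET_GB = 1  # assumption: every physical disk is carved into 1 GB chunklets

@dataclass
class Drive:
    cage: int
    slot: int
    size_gb: int
    used: int = 0    # chunklets already handed out to RAID sets
    spare: int = 0   # spare chunklets reserved on *this* drive

    @property
    def total(self):
        return self.size_gb // CHUNKLET_GB

    @property
    def free(self):
        return self.total - self.used - self.spare

def reserve_spares(drives, pct=0.05):
    """No dedicated hot-spare disk: every drive donates some spare chunklets."""
    for d in drives:
        d.spare = int(d.total * pct)

def build_raid_set(drives, members):
    """Pick one chunklet per member, never reusing a cage ("cage availability")."""
    chosen, cages_used = [], set()
    for d in sorted(drives, key=lambda d: d.used):   # spread load evenly
        if d.free > 0 and d.cage not in cages_used:
            d.used += 1
            cages_used.add(d.cage)
            chosen.append((d.cage, d.slot))
            if len(chosen) == members:
                return chosen
    raise RuntimeError("not enough cages with free chunklets for this set")

# 4 cages x 8 drives; a RAID-5 (3+1) set laid out so a whole cage can fail
drives = [Drive(cage=c, slot=s, size_gb=900) for c in range(4) for s in range(8)]
reserve_spares(drives)
print(build_raid_set(drives, members=4))   # e.g. [(0, 0), (1, 0), (2, 0), (3, 0)]
[/code]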
 
Just another remote site I've been working on; it's pretty much finished now.

Does the job.

Below: Some old servers/hardware being retired. Replaced with new HP Gen 8's and HP 3PAR.



Our new core infrastructure (servers)
2x ESX Hosts
1x Backup Server with 30TB Disk Tray
1x Autoloader
HP 3PAR 7200

IMG_20131113_115352.jpg

You really ought to get the other two hard drives and have the DO license consume them into your current CPGs. The nodes need to be balanced if you're running all of those node pairs as one. The 3PARs will RAID up... so cage 0, drive 0, then cage 1, drive 0, etc. I'm just assuming you have it set up like this. Balancing the CPGs across all nodes is very important on the 3PAR arrays. If you need help or have questions, let me know; I manage these things for a living.
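
To make "balanced" a bit more concrete, here's a trivial illustration of the kind of per-cage check I mean. It's not 3PAR tooling and the numbers are made up; in practice the counts would come from the array itself (e.g. parsed showpd output), and you'd look at node pairs the same way:

[code]
def check_balance(counts, tolerance=0.20):
    """counts: {cage_id: used_chunklets}. Flag cages more than 20% off the mean."""
    mean = sum(counts.values()) / len(counts)
    skewed = {cage: n for cage, n in counts.items()
              if abs(n - mean) > mean * tolerance}
    return "balanced" if not skewed else f"rebalance needed: {skewed}"

# Cage 2 has far fewer chunklets in use, so new volumes will land unevenly:
print(check_balance({0: 1150, 1: 1175, 2: 700, 3: 1160}))
[/code]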
 
3PAR doesn't use RAID in the common sense of adding whole drives to an array. It uses chunklets and builds RAID across the chunklets. You never have a dedicated hot spare/global spare drive; instead you have spare chunklets on all drives. You can also set the availability so that, once chunklets are grouped, you can lose an entire cage, magazine, port, etc.

I work with quite a few 3PAR arrays (in the double digits), and the technology in them and the amount of brainpower that went into designing them is really pretty slick.

Good to know. It's something that I've always wanted to look at, but nobody I know has them.
 


My baby: Dell PE 2950, 2x quad-core Xeons @ 3.16GHz, 32GB RAM, 5x 146GB SAS drives <-- running ESXi 5.5

Also a Dell PE R200. Specs: single dual-core Xeon @ 3GHz, 4GB RAM, 1x 250GB for Windows, 1x 500GB for backups <-- running Windows Server 2008 R2 Enterprise. It's also running my Exchange Server 2010.
 
That's neat to see. I need to get on graphing my environmental stuff; I have all the data but haven't coded a graph feature yet. Would be neat to see.

And yeah, you definitely need to fix your airflow there. Maybe add some fan holes in the back door and exhaust fans on top, with intakes at the bottom in front.
 
I graph a few bits, to be honest; here are a few of them.



It also generates this every 24 hours. I uploaded the first one to YouTube:

http://www.youtube.com/watch?v=y3cEhaJqXIg

Nice! What commands are you sending to RRDtool, and how did you set it up? I tried so hard to get that to work and failed. It seems really confusing to use and the documentation is not all that great. If I could figure it out, I could get my monitoring app to use it instead of rolling my own graphs with libpng. Though I might also do what I did for this site: http://www.uogateway.com/ (click on any shard entry to see graphs). The graphs aren't as nice, but I could make them better.
 
Nice! What commands are you sending to RRDtool, and how did you set it up? I tried so hard to get that to work and failed. It seems really confusing to use and the documentation is not all that great. If I could figure it out, I could get my monitoring app to use it instead of rolling my own graphs with libpng. Though I might also do what I did for this site: http://www.uogateway.com/ (click on any shard entry to see graphs). The graphs aren't as nice, but I could make them better.

Use Observium or Cacti (I prefer Observium myself). It'll give you graphs like this:

MZgoek2.jpg
 
Oh, so this is part of Cacti? I thought they were just being manually generated with RRDtool. I want to incorporate graphs into my custom monitoring app. I think Cacti is open source, though, so I can always check how they do it. Worst case, I can always make my app feed all the data to Cacti, but I want it self-contained since I will eventually release it to the public.

Observium looks like an interesting app too.
 
Oh, so this is part of Cacti? I thought they were just being manually generated with RRDtool. I want to incorporate graphs into my custom monitoring app. I think Cacti is open source, though, so I can always check how they do it. Worst case, I can always make my app feed all the data to Cacti, but I want it self-contained since I will eventually release it to the public.

Observium looks like an interesting app too.

They both use RRDtool. They auto-create graphs, and you can create them manually as well.
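
If you do want to drive it by hand, the cycle is basically create / update / graph. Here's a minimal sketch using the python-rrdtool bindings; the filenames, data-source name, and intervals are just placeholders:

[code]
# Minimal RRDtool round-trip, assuming the python-rrdtool bindings
# are installed (pip install rrdtool). Names and intervals are examples only.
import rrdtool

# 1. Create the database: one GAUGE data source sampled every 300s,
#    keeping 288 five-minute averages (one day) and 365 daily averages.
rrdtool.create(
    "temp.rrd",
    "--step", "300",
    "DS:temp:GAUGE:600:U:U",      # name:type:heartbeat:min:max
    "RRA:AVERAGE:0.5:1:288",      # cf:xff:steps-per-row:rows
    "RRA:AVERAGE:0.5:288:365",
)

# 2. Feed it a sample ("N" = now); in practice this runs from cron or a poller.
rrdtool.update("temp.rrd", "N:23.4")

# 3. Render a PNG of the last 24 hours.
rrdtool.graph(
    "temp.png",
    "--start", "-1d",
    "--title", "Rack temperature",
    "--vertical-label", "deg C",
    "DEF:t=temp.rrd:temp:AVERAGE",
    "LINE2:t#FF0000:temperature",
)
[/code]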
 
Some core gear just racked in my new DC: a Cisco Nexus 7710, with a pair of Nexus 6001s on top of that. Above that is a 2901 for OOB console access and a 3850 for misc. connectivity. The other half of the DC core is on the other side of the room. Not pictured are 40 Nexus 5548s and 32 2248 FEXes. The 7710s are missing several F3 modules since those just started shipping.

datacenter1.jpg
 
Use Observium or Cacti (I prefer Observium myself). It'll give you graphs like this:

MZgoek2.jpg

Thanks for the tip re: Observium. I have just started using it myself based on your post and so far I'm very impressed with it! :D

Jay, what device are you using to generate the temps and humidity? Is it a rack monitoring device like a NetBotz?
 
Some core gear just racked in my new DC: a Cisco Nexus 7710, with a pair of Nexus 6001s on top of that. Above that is a 2901 for OOB console access and a 3850 for misc. connectivity. The other half of the DC core is on the other side of the room. Not pictured are 40 Nexus 5548s and 32 2248 FEXes. The 7710s are missing several F3 modules since those just started shipping.

datacenter1.jpg

About time someone posted something worth looking at. What modules will the 7710 be populated with? What is the line of business?
 
About time someone posted something worth looking at. What modules will the 7710 be populated with? What is the line of business?

Two F324FQ-25 modules (24 40 Gig ports each) per chassis
One F348XP-23 module (48 10 Gig ports) per chassis

This is for a health care provider.
 
Took possession of my new house last week and started installing gear. I have a Juniper SRX, Ubiquiti APs, an APC UPS, and a few other goodies to go in yet. I'll also be rebuilding my home theater and doing some home automation, so lots of work to do.

RWbeDArl.jpg
 
Took possession of my new house last week and started installing gear. I have a Juniper SRX, Ubiquiti APs, an APC UPS, and a few other goodies to go in yet. I'll also be rebuilding my home theater and doing some home automation, so lots of work to do.

RWbeDArl.jpg

Good luck, man. We took over our house last weekend too. Loads of good stuff to come... hopefully.
 
Finally have some pictures from work; things are moving to the point that I'm more proud of it. I inherited a complete trainwreck: SBS, around 500ft of cat5 just sprawled at the bottom of the server racks, etc.

The random desktop sitting at the bottom of the server rack is my ESXi machine, which I use to run Juniper Firefly to build labs and so forth.

2014-03-11%2015.40.23.jpg

2014-03-11%2015.40.54.jpg

2014-03-11%2015.40.58.jpg

2014-03-11 15.41.33.jpg

2014-03-11%2015.41.21.jpg


weathermap.png
 
It's been a while since I've posted in here, but I thought I'd share a few pics of my "permanent" lab rack at work. I rotate other gear in and out as needed, but this stuff stays put:

GLC4rbxl.jpg
k4SYWGql.jpg


Top to bottom:
Aruba S3500-24P PoE Switch
Cisco 2950-24 Switch (Management Net)
Aruba 3200 Controller
Aruba 3400 Controller
Cisco 4948-10GE
Aruba 3200 Controller
Fortigate 100D
Fortigate 100D
SonicWall NSA2400
Cisco ASA 5510
Cisco 6509E, Sup720-3BXL, 6748, 6548, 2x6000W AC PSUs (Edge Router, Full BGP Tables from ISP, pretty much doing nothing else)

I don't have pics of the adjacent rack, but the gear is:

2x IBM X3560 M2 (ESXi Hosts)
1x Nexsan SATABeast (42x2TB NL-SATA Disks, FC/iSCSI SAN)
1x EMC VNXe 3100 (12x600GB 15K SAS Disks, iSCSI/NFS/CIFS SAN)
1x Brocade 300 24-Port 8Gb FC SAN Switch
2x Brocade VDX 6740 FCoE Fabric Switch (48x10GE, 4x40GE)
1x Brocade VDX 6710-54 Fabric Switch (48x1GE, 6x10GE)
 