Network pics thread

Are you going to shock mount the rack? I worked on a high mobility unit that used "wire rope isolators" and it helped a lot. http://www.vibrationmounts.com makes some nice ones.

Then again, when I put a full Comm Suite in the bottom of a tour bus, I used 1-inch rubber isolators and it worked fine.

That company makes some really awesome stuff- thanks for the link.
 
PSU on that HP pop on you yet?

I'm assuming you mean me :)

No, not yet. Are they known to go bad?

Although.....

I had HP send me a new one within the first week due to incredible electric noise or "coil whine" which drove me nuts. This one has been working fine.
 
We just moved our Corporate Headquarters into a building with a data center on premises, which has been REALLY nice compared to where we were.

A/B power, A/B cooling, Gen-set, controlled access, real-time security monitoring, fiber risers, raised floor, hundreds of feet above street, etc.

So we consolidated a lot of equipment onto a new R930 server, a couple of R910's, an R710, our existing EqualLogic SAN (white cables) and a new 10Gb/40Gb Force10 switch. This pic is a week or two old; there is some more telecom equipment in there now. We also still haven't added the new 10Gb fiber NIC's to the old servers.

Went with a wide 48U rack (the old 42U rack is upstairs) - it was a good move, as we're able to actually use all of the U's for equipment instead of giving some up to cable pass-through.

Everything is running on 220V from 2 metered, switched-by-outlet PDU's. All equipment has redundant power too, so everything is on A (+ Gen-set)/B.

The new R930 also has some of the new PCIe SSD's and 256GB of DDR4.

The move went 95% unnoticed- I've been planning it out for a year. We built a DR site at one of our other offices and failed over nearly everything the week before we moved. Had a couple glitches with SQL mirroring and telecom fail-over, but besides that, it worked as planned.

Now that we are on the new site, it's quite a relief to have proven that our DR site works and to be on new hardware, software, and configurations in a new data center - we should be home free at this point.

DC_1.jpg


My new R930:

22-July-2015_16-34.png


22-July-2015_16-35_1.png
 
Very nice and very clean, Jen4950. It's always good to see someone moving to a better location and new hardware. Site moves are always a pain and create that moment when you force the swap and everyone looks around waiting for the world to end.
 
The following is our LAN party setup (part of it).

Our internet connections come in on a general switch; this switch can merge multiple WAN connections into one fat connection, or keep them separate for redundancy.
From this switch there are two cables to our WatchGuard firewalls/routers, which run pfSense and provide a fast, redundant connection.

4.jpg


From these firewalls/routers, several cables go to our (10Gbit-ready) Cisco backbone, which hangs below the firewalls. This backbone is the center point of the network and connects all the table switches and the servers.
The servers are largely virtualized on our ESXi platform. This platform consists of four Intel Core i5 servers with 16GB of RAM each, and one Intel Core i3 server with 8GB running VMware vCenter.

1.jpg


All of these servers are connected at 1Gbit/s for management traffic to vCenter Server.
Because these VMware nodes don't use local storage, they use a 4Gbit/s optical fiber connection through our switch to the SAN, and have a dedicated 2Gbit/s connection for traffic to the backbone.

2.jpg


3.jpg


The SAN uses an SSD RAID 6 configuration for optimal performance.

6.jpg


7.jpg


Greetz Justin
 
Why do you need all this for a lanparty? :eek:

Also wouldn't RAID10 give you better performance?

Not always. On my Adaptec controllers with a bunch of SAS-attached SSDs, RAID10 was within the margin of error of RAID60, which was only slightly faster than RAID6, and considering the $/GB of SSDs, the more capacity-hungry RAID levels make less and less sense.
 
I'm sorry, I thought the images were scaled automatically. I just changed that. RAID 10 might provide better performance, but I would give up a lot of capacity, wouldn't I?
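Roughly what I'd be giving up, in a quick Python sketch (drive count and size below are just example numbers, not our actual SAN):

Code:
# Usable capacity for the same set of drives under RAID 6 vs RAID 10.
# Example numbers only: 8 SSDs of 480 GB each.
n_drives = 8
drive_gb = 480

raid6_usable = (n_drives - 2) * drive_gb     # two drives' worth of parity
raid10_usable = (n_drives // 2) * drive_gb   # half the drives hold mirror copies

print(f"RAID 6 : {raid6_usable} GB usable, survives any two drive failures")
print(f"RAID 10: {raid10_usable} GB usable, survives one failure per mirror pair")
# With 8 x 480 GB: RAID 6 = 2880 GB, RAID 10 = 1920 GB, so RAID 10 gives up a third here.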
 
For SSDs, RAID10 is stupid honestly; there is this "RAID10 is the only RAID" attitude going around among a lot of tech folks which I just don't understand...

with a good controller any RAID level should work just fine for SSDs
 
That's what I was wondering as well. Using the recent pic post as an example: those look like Cisco-badged HP DL380 G5s, which means most likely a P400 or a P800 RAID card. Will those even perform fast enough to make a RAID10 of SSDs a better option than, say, four RAID1 SSD arrays? Given the assumed age of the unit, I'd be more worried about the RAID card dying than any of the SSDs. I haven't personally tested SSDs on anything of that generation, but as FLECOM alluded to, I think that might be a little overkill as well. RAID 6 seems right to me given that hardware.
 
I can confirm that they are Cisco-branded DL380 G5 machines. I've got around 8 of them at work; they are Cisco MCS 7800s for voice applications. I've also got a couple of IBM servers that are Cisco-branded.
 
7tac7Mql.jpg


One: IPFire Router, i5 3330s, 16GB memory, 1TB SSHD, quad gigabit Intel NIC
Two: Ubiquiti Edgeswitch 48-port 500w (mounted backwards)
Three: Hypervisor, 2*E5520 xeons, 48GB ECC Memory, XenServer 6.5, quad gigabit Intel NIC Active/Active
Four: Hypervisor, 2*L5520 xeons, 48GB ECC Memory, XenServer 6.5, quad gigabit Intel NIC Active/Active
Five: File Server, G540 Celeron, 16GB ECC Memory, 32GB SSD OS, 32GB SSD Swap, 8*3TB HDs RAIDz2, Ubuntu Server 14.04 + ZOL, 10gbps Fiber NIC
Six: File Server, 2*L5640 xeons, 72GB ECC Memory, 20*2TB HDs in ZFS effective RAID10, Ubuntu Server 14.04 + ZOL, 10gbps Fiber NIC
UPS: I have two 2U 750VA APC batteries in the mail, which should go a long way toward making this look more normal.
 
Home Setup:

This is under the stairs to my basement.

Got a killer deal on a small cheap rack ($15 shipped). Pulled everything off the wall and moved the equipment to one location. I came up with what I think is a really good idea: I had three rolls of velcro strap (5 yards each), so I cut some of it into little strips about 5" long and used a screw through the middle of each strip to attach it to the framing in my basement ceiling. This keeps everything neat and in a bundle. It carries the cable line and my first-floor network drops over to the network location. It will also make it easy to add more drops while keeping them bundled, which I'll be doing when I remodel my second floor soon. I'll be terminating all runs to the punch-down at that time.

I also ran a new dedicated 20 amp circuit with 4 outlets over to the location.

For an amateur with no professional networking experience at all, I think I did a decent job. I don't have rack-mountable cases that would fit my 2 ESXi hosts, but since it's only a 2-post rack it probably wouldn't be a good idea due to weight anyway. So it goes for a poor hobbyist, though.

Wish I could score an IT job, sigh :\


ifhYa4N.jpg


1tGlJe4.png
 
I came up with what I think is a really good idea: I had three rolls of velcro strap (5 yards each), so I cut some of it into little strips about 5" long and used a screw through the middle of each strip to attach it to the framing in my basement ceiling.
Very nice job. I bought a special staple gun and have been stapling my wires to the boards in my crawlspace... I like your setup a whole lot better!
 
I'm actually working on a small trailer with a 24U rack inside (will post pictures later).

At work I built a flight case for a project, with 2 XW6600s, a DL380 G5, a switch, an E1 multiplexer, some custom hardware and a firewall.

As promised, pictures of the flight case :) The trailer project is moving forward fast, and I will post a picture series when I have the rack mounted inside the trailer.

Front:
flightcase_1.png


Back:
flightcase_2.png


The last 4U on the back were filled with some custom hardware that I designed.

The case consists of the following:
2 x HP XW6600
1 x RAD Megaplex 2100
1 x HP DL380 G5
1 x Cisco ASA
1 x HP ProCurve 24-port switch (can't remember the model number, but it was 10/100Mbit)

The cabling is not that neat because we need to "easily" change the setup. The space is so tight in this case that only small hands can reach the backs of the PC's.
 
So this is my slightly different take on the AIO. I didn't go entirely AIO, just used the principle to save some power overall. Coming from what I was using, I saved 450W at idle.

First, the rack:


Top to bottom:
  • Brocade SilkWorm 200e 16 Port FC Switch
  • Quanta LB4M
  • 4x Rackable ESXi nodes
  • Rackable SE3016 SAS expander
  • Rackable 2u ESXi AIO (Storage server + Routers)
  • Rackable SE3016 SAS Expander

Closeup of the 200E and LB4M:


Cable Managers from Dantrak:


D-ring cable managers:


And lastly the PDU's:


Now for some specifics on the storage server:


First SE3016:
  • 13x Hitachi 500GB UltraStar drives
  • 2x RaidZ2 VDEV's + Hot spare
  • 2x STEC Mach16IOPS 50GB SSD's for sLog

Storage Server:
  • ESXi 5.5U2
  • 2x Xeon x5560
  • 48GB DDR3 ECC REG
  • LSI 9200-8e (passthru to OmniOS+Nappit)
  • Qlogic qle2462 FC HBA (passthru to OmniOS+Nappit for FC Target)
  • Adaptec ASR-5805 256MB Cache + BBU (4x 72GB 10,000RPM drives RAID 0+1 for ESXi boot, OmniOS+Nappit boot and pfSense VM storage)
  • ConnectX2 10Gb adapter for routing VLAN's and serving CIFS shares from the storage server
Second SE3016:
  • 5x 2TB drives RaidZ1 for media storage

Nodes 1-4:
  • ESXi 5.5U2
  • 2x Xeon L5420
  • 16GB DDR2 ECC REG
  • Qlogic QLE-2460 FC HBA for shared storage
  • Sandisk Fit Ultra 16GB for ESXi boot

I'm very happy with the result. I achieved my power savings of 450w at idle, the entire rack now idles at 1000w. I also have another LB4M, PowerDsine POE injector, DVR, SlingBox and RG from my ISP running on another BBU for a total of 1120w idle. I was at over 1500w idle before.
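Just for fun, that 450w works out to a fair bit over a year (the electric rate below is only an example, not my actual bill):

Code:
# Rough yearly impact of shaving 450 W of always-on idle load.
watts_saved = 450
kwh_per_year = watts_saved * 24 * 365 / 1000          # ~3,942 kWh

rate_per_kwh = 0.12                                   # example rate in $/kWh, adjust for your utility
print(f"~{kwh_per_year:,.0f} kWh/year, roughly ${kwh_per_year * rate_per_kwh:,.0f}/year at ${rate_per_kwh}/kWh")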

The storage server now hosts the 2 pfSense routers that were external boxes before. I also now have faster-than-1Gb inter-VLAN routing, which I enjoy. The site-to-site connections that I run also get higher throughput thanks to the faster CPUs and RAM they are running on now.

The ESXi nodes have no trouble maxing out their 4Gb connection. I have the storage server serving a single LUN on both FC connections. The ESXi nodes are set up in round-robin mode to spread the load over both ports. I also stuck another QLE-2462 (2x FC connections) in a server I am building to test the throughput and saw 750MB/sec on reads. Round robin of course. That's about the maximum you will see out of that configuration.
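That lines up with the napkin math, assuming the commonly quoted ~400MB/sec of usable payload per 4Gb FC link:

Code:
# Rough read ceiling for one LUN spread across two 4Gb FC ports in round-robin.
usable_per_link_mb = 400   # ~400 MB/s per 4Gb FC link after 8b/10b line coding
links = 2

ceiling = usable_per_link_mb * links   # ~800 MB/s
observed = 750
print(f"Ceiling ~{ceiling} MB/s, observed {observed} MB/s ({observed / ceiling:.0%} of ceiling)")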

For anyone who is wondering, FC is a super-cheap option for shared storage. I paid $15 per single-port HBA and $100 for the switch. The switch is fully licensed and came with 14 transceivers. Beats the pants off the price of any 10Gb or IB setup.
 
Can we get specs on the HTPC's and how they connect to TV's or other devices?
I built the HTPC's a while ago, before I moved into a new house, so the specs are about 2 generations old, but they get the job done well for any 1080p content.

HTPC specs (3 of each):
MSI P67A-G43 Motherboard
Intel i3 2100 Processor
PNY 4GB (2x2GB) DDR3-1600 Memory
Corsair Force 3 60GB SSD
Galaxy GTS 450 Graphics Card
NMediaPC Pro-LCD 5.25'' Screen
Corsair CX430 Power Supply
Rosewill R4000 4U Rackmount Case
All HTPC's run Windows 8.1 x64 MCE with Emby Theater

Connection to the TV's:
USB --- Monoprice USB --> Ethernet adapters, which go to MCE IR receivers under the TV's. You could also use a wireless keyboard/mouse, as I did for initial configuration. As it stands now, I have the HTPC's in sleep mode and use the remote to wake them and put them back to sleep.
HDMI --- Monoprice HDMI --> Ethernet adapters (dual CAT6), which go to the TV's. They can also be connected to an AVR for full 7.1 TrueHD/DTS-HD surround sound. They also work with 3D Blu-ray; my one 60" LG plasma has 3D and it works very well. No picture dropouts/pixelation yet after 2 years of using them.
 
Why are there some things "blacked/ scratched out"?
I was curious about the hardware config of your pfSense box and your ISP bandwidth.
 
MAC addresses. Internet speed is 105/20.

The pfSense box hardware:
Jetway JNF9HG (Intel Celeron N2930 Quad Core, Quad Intel Gigabit NICs)
1x8GB Kingston HyperX DDR3-1600 SODIMM
SanDisk 128GB x110 mSATA
Silverstone PT13 Slim-ITX
60w FSP power brick
 
Uh, MAC addresses don't need hiding from others... They only matter to devices on the same network.
 
Uh, MAC addresses don't need hiding from others... They only matter to devices on the same network.
The MAC address of the cable modem could be used for a social engineering attack on a Comcast account, or spoofed on Comcast's network with ForceWare. The other individual MAC addresses I blacked out are like that on purpose because I only allow certain MACs on my LAN. I don't need someone in the vicinity of my home getting onto my WLAN if they were to break the wireless encryption. Both are real threat vectors, and you'd learn this in any basic network security course.
 
Fine, I'll grant you the first one, but MAC address filtering on wireless is pointless. It provides no security.
 
It's security through obscurity as classified by some, but is a good method to keep out people who might be newbs at exploits in the neighborhood and don't know to sniff the traffic. Of course, anyone who actually knows what they're doing can easily bypass it by spoofing it to a known client.
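To show how little effort that spoofing step takes, this is roughly what a tool like macchanger does under the hood (just a sketch; it only prints a candidate address and doesn't touch any interface):

Code:
import random

def random_local_mac():
    """Build a random locally-administered, unicast MAC address."""
    first = (random.randint(0, 255) | 0x02) & 0xFE   # set the locally-administered bit, clear the multicast bit
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

# To beat a MAC filter you wouldn't even randomize - you'd sniff an allowed
# client's address and assign that to your own interface instead.
print(random_local_mac())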
 
@bds1904 what do you think about that LB4M? I've been looking at managed switches with 10gig on ebay and keep seeing those for about a third of the next cheapest option but it looks good.
 
@bds1904 what do you think about that LB4M? I've been looking at managed switches with 10gig on ebay and keep seeing those for about a third of the next cheapest option but it looks good.

They are fast and cheap BUT they use about 65w of power and the web interface is a bit clunky. You can configure everything through the web interface just fine; it just has some quirks you have to work around, nothing major. The CLI is easy.

I actually run 2, one for the access network and another for the server network. 10gb multi-vlan trunk between them over fiber to electrically isolate the rack.
 
omfg, that is beautiful. What are you actually doing with all that? Hosting? Seems like you have pretty dope firewall setup with all that Juniper gear.
 
omfg, that is beautiful. What are you actually doing with all that? Hosting? Seems like you have pretty dope firewall setup with all that Juniper gear.

Hosting a bit, yeah, but it's mostly an R&D network at this point. With dual VDSL connections there's not all that much fun you can have; the Junipers in the picture are only 100Mbit, and in a few months I'm getting 8 strands of fiber with 2 active gigabit connections, so the Junipers will be pretty much useless unfortunately :*( The new layout plans for a central pfSense firewall and virtualizing the other instances around the grid. So it will be a sad time to see those Junipers go. It will be a very sad day indeed :(

That's awesome! That air pipe job is rather interesting. Kind of gives it that super villain movie command centre look.
Haha, you are totally right, I was going for that look! My favourite superhero is Dr. Thaddeus Venture Jr. of the Venture Brothers, so all I need at this point are some protagonists so I can join the Guild of Calamitous Intent! Beyond that, though, it gives me a PUE of less than 1.05, which is better than Facebook :)
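For reference, PUE is just total facility power divided by what the IT gear itself draws, so it works out roughly like this (illustrative numbers, not my actual meter readings):

Code:
# PUE = total facility power / IT equipment power.
it_load_w = 1000      # what the gear itself draws (illustrative)
overhead_w = 45       # cooling fans, distribution losses, etc. (illustrative)

pue = (it_load_w + overhead_w) / it_load_w
print(f"PUE = {pue:.3f}")   # 1.045, i.e. under 1.05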

That organization and cleanliness on the rack....
Cheers :D
 
[ Infected ] said:
Hosting a bit, yeah, but it's mostly an R&D network at this point. With dual VDSL connections there's not all that much fun you can have; the Junipers in the picture are only 100Mbit, and in a few months I'm getting 8 strands of fiber with 2 active gigabit connections, so the Junipers will be pretty much useless unfortunately :*( The new layout plans for a central pfSense firewall and virtualizing the other instances around the grid. So it will be a sad time to see those Junipers go. It will be a very sad day indeed :(

Those fibers sound expensive! Who will deliver the connection, Google?

What about looking out for a Juniper SSG550?
 