Network pics thread

I have a lot of Sonicwall units out there. It really depends on what the client needs. If it's a small branch office and you don't expect to ever have to deal with VLAN *tags* then I usually push the Sonicwall TZ100 or TZ210 TotalSecure units. They include Premium Content filtering, Gateway AV, IPS, and rudimentary anti-spam all in one unit for a decent price.

If you need full "industry standard" VLAN support then it's minimum NSA240.

I've also dealt with the TippingPoint units and they're really nice. We deploy them in scenarios where PIM-DM multicast over VPN tunnels is necessary (certain VoIP solutions like the 3Com NBX require this).

I really do like the Sonicwall units themselves. Easy to use and configure and seem to work well, but the company and their tactics are shady.

Riley

I'm looking to buy a unit for my house. I had the TZ100, hated it, it never worked properly, so I ditched it and bought an Astaro 220. I'm going back to the Sonicwalls because I work with a company that sells them like hot cakes, and I would like a unit at home that can protect all the computers.

I also wish to set up 2-3 VLANs. I work from home, so I need a VPN so I can connect to my servers behind the unit. I also have home automation at home that I don't want to have access to my servers, and I don't want the wireless in the house to have access to my servers either, EXCEPT for my laptop and the wife's laptop; the other machines I want isolated from my servers & NAS units.

Jason
 
The TZ series' PortShield system will allow you to separate each port on the unit into its own zone, and you can set up firewall rules to manage the flow of traffic between zones. As long as you don't need 802.1Q VLAN tagging then this may be sufficient. But if you need VLAN trunks, or have a VLAN-aware WAP, then this won't work.

Riley
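
For illustration, here's a minimal sketch in plain Python (not SonicWall syntax) of the kind of zone policy being described: each PortShield port group is treated as a zone, inter-zone traffic is denied by default, and a per-host exception covers the "only the laptops may reach the servers" case from the post above. The zone names and IP addresses are invented.

[CODE]
# Plain-Python sketch (not SonicWall syntax) of zone-based filtering:
# each PortShield port group becomes a zone, traffic between zones is
# denied by default, and specific flows or hosts are allowed explicitly.
# Zone names and IP addresses below are invented for illustration.

# (source zone, destination zone) pairs that are allowed wholesale.
ALLOW = {
    ("SERVERS", "WAN"),
    ("WIRELESS", "WAN"),
    ("HOME_AUTOMATION", "WAN"),
}

# Per-host exceptions, e.g. only the two laptops may reach the server zone.
HOST_EXCEPTIONS = {
    ("WIRELESS", "SERVERS"): {"192.168.20.10", "192.168.20.11"},  # the laptops
}

def is_allowed(src_zone, dst_zone, src_ip):
    """True if traffic from src_ip in src_zone may reach dst_zone."""
    if src_zone == dst_zone:
        return True  # intra-zone traffic is not firewalled here
    if (src_zone, dst_zone) in ALLOW:
        return True
    return src_ip in HOST_EXCEPTIONS.get((src_zone, dst_zone), set())

print(is_allowed("WIRELESS", "SERVERS", "192.168.20.10"))        # True: a laptop
print(is_allowed("HOME_AUTOMATION", "SERVERS", "192.168.30.5"))  # False: isolated
[/CODE]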
 
Cool, thanks. The unit I'm looking at is a TZ210, with the AV software for viruses & spyware, and VPN.

I hope to have about 3 VLANs / separated networks for the house; if it works at my house then I will be able to sell it at work too.

j'
 
The Juniper does something like 2.6 Tbps for the chassis and 120Gbps per card.

I forget what the current specs on the backplane of the Alcatels are, but it's like 50 Gbps per slot.

Wow nice to see some 7750 :D

I think those are 1 Tbps total and, as you said, 50 Gbps per slot.
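
Taking the figures quoted above at face value (they're from memory, not datasheets), the slot math works out roughly like this:

[CODE]
# Back-of-the-envelope fabric math using the figures quoted in this thread
# (forum recollections, not official specs).
juniper_chassis_gbps = 2600   # ~2.6 Tbps claimed for the chassis
juniper_per_card_gbps = 120   # ~120 Gbps per card
alcatel_chassis_gbps = 1000   # ~1 Tbps total for the 7750
alcatel_per_slot_gbps = 50    # ~50 Gbps per slot

print(juniper_chassis_gbps / juniper_per_card_gbps)  # ~21.7 cards' worth of fabric
print(alcatel_chassis_gbps / alcatel_per_slot_gbps)  # 20.0 slots' worth of fabric
[/CODE]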
 
Lots of really great setups here. I love looking through these and checking out everyone's configurations!

A quick question for those of you more versed in network architecture: I would like to have two separate cabinets, a shallow wall-mounted cabinet for switches, patch panels, and the router, plus a second larger cabinet on casters to house several servers. My question is about connecting the server cabinet to the switch cabinet. Should I run cat5 from the patch panel in the switch cabinet to the server cabinet for each server? If so, should I use a patch panel inside the server cabinet as well to make those connections? Or should I throw a switch into the server cabinet and run a single fiber link between the two cabinets? What would be the "best practices" approach? :)

And, of course, a photo: This is some of my older equipment sitting in my bedroom closet. Everything else is strewn about the living room pending retooling.

102110191601.jpg


Thanks! :)
 
Switch + fiber is always nice; however, Cat5e or Cat6 + a switch in the cabinet would probably do the trick just as well. Generally fiber would be for above-GbE speeds or longer distances. Might as well avoid the fragility of fiber and go with copper. Since it's so close anyway, it's not like it matters if you choose copper now and decide on fiber later, i.e. it's not a cost concern, as in "whoops, I'm out $5 because I used copper," as opposed to "whoops, I'm out $20-$30 for this fiber cable, plus these GBICs or SFPs I paid for, and I don't really need fiber." So after having typed all of this: stick with GbE copper; it's cheaper :)
 
If you're planning on doing structured cabling between racks (with the patch panels), it's going to be much easier to work with copper. Fiber is good for long distances or in extremely high EM fields.
 
Definitely go with a patch panel, because you can always punch down extra cable or install fiber termination points ahead of time, so if you need it later you can just attach an end or plug it in. The best is when you patch the entire panel to a switch, whether it be a 24- or 48-port switch. The hard part is now over and you can focus on the design, or retool.

Desk---Patch panel----Patch panel----Switch: Never have to unplug a thing from it and just use management to disable ports. :)
 
I really do like the Sonicwall units themselves. Easy to use and configure and seem to work well, but the company and their tactics are shady.

This.

I have a few sonicwalls deployed in our environments (Pro 2040's and Pro 4500's) and they work great.

That is, if you are a windows shop and never, ever plan on connecting a tunnel to any other hardware vendor.

Juniper endpoint? Cisco endpoint? Mac or Linux client? Forget it. Any performance issues or connectivity issues and their tech support will throw it over the fence, blame the other company, and flat out refuse to troubleshoot it.

Sure, they'll offer some suggestions on what has worked for other customers in the past, but they won't take ownership at all.
 
My question is about connecting the server cabinet to the switch cabinet. Should I run cat5 from the patch panel in the switch cabinet to the server cabinet for each server? If so, should I use a patch panel inside the server cabinet as well to make those connections? Or should I throw a switch into the server cabinet and run a single fiber link between the two cabinets? What would be the "best practices" approach? :)

It depends on how much money you have and how much space you have. The cheapest way is to run cat-5 from the back of the servers in their cabinets straight to the switch in the network cabinet. It will work just fine, but if you have a really dynamic hardware environment with gear in and out of the server cabinets, it can get messy.

The next way involves structured wiring between patch panels in the server cabinets and network cabinets. That way when you swap gear in either cabinet, it's a patch cord swap instead of having to unwrap cable bundles.

Fiber and GBIC connections between switches in the server racks and network racks simply distributes the network core into the server racks (since the server cabinets now require switches connected via fiber to the core), complicating your physical switch layout and requiring more switch gear.

I'd suggest going with structured wiring with patch panels if you have the money, and direct cat-5 from the network cabinets to the server gear if you don't.
 
This.

I have a few sonicwalls deployed in our environments (Pro 2040's and Pro 4500's) and they work great.

That is, if you are a windows shop and never, ever plan on connecting a tunnel to any other hardware vendor.

Juniper endpoint? Cisco endpoint? Mac or Linux client? Forget it. Any performance issues or connectivity issues and their tech support will throw it over the fence, blame the other company, and flat out refuse to troubleshoot it.

Sure, they'll offer some suggestions on what has worked for other customers in the past, but they won't take ownership at all.



I won't be able to connect a MacBook Pro to a Sonicwall with a client or VPN? Serious?
 
I have yet to figure out how to connect to the NSA 4500 I have in the office from my Mac. If there is some magical configuration or client I am missing out on let me know.
 
L2TP over IPsec with a passphrase?
 
I have yet to figure out how to connect to the NSA 4500 I have in the office from my Mac. If there is some magical configuration or client I am missing out on let me know.

We are using a third party client called VPN Tracker 5 but they're on v6 now.

Be prepared to build all kinds of special and unique snowflake configurations for your Mac users using this. All of our Windows users authenticate via an LDAP connection to our Active Directory, but we had to make all kinds of exceptions and additional rules to make our Mac users connect properly. And Sonicwall was singularly unhelpful in the setup. We have the Mac user configuration established purely through trial and error, after literally days of me sitting there trying config after config.

F*ck Macs but f*ck Sonicwall even more. :mad:
 
Definitely go with a patch panel, because you can always punch down extra cable or install fiber termination points ahead of time, so if you need it later you can just attach an end or plug it in. The best is when you patch the entire panel to a switch, whether it be a 24- or 48-port switch. The hard part is now over and you can focus on the design, or retool.

Desk---Patch panel----Patch panel----Switch: Never have to unplug a thing from it and just use management to disable ports. :)

The next way involves structured wiring between patch panels in the server cabinets and network cabinets. That way when you swap gear in either cabinet, it's a patch cord swap instead of having to unwrap cable bundles.

You've all been a great help! Thanks for the advice. It does seem much cleaner and more flexible to use a patch panel in the server cabinet to make the connections. As far as cable management inside the server cabinet, what would be the best way to route cat5 patch cables coming from the patch panel to the servers? Just pull them all to one side and bundle them with zip ties and route them to the rear of the cabinet?

Thanks again for the help, I am excited to see how it turns out! :)
 
As far as cable management inside the server cabinet, what would be the best way to route cat5 patch cables coming from the patch panel to the servers? Just pull them all to one side and bundle them with zip ties and route them to the rear of the cabinet?

Pretty much. But never, ever use zip ties. Always use velcro strips. They are easier to adjust if you need to add or remove a cable from the bundle and they don't pinch cat-5 like zip-ties can do.


There's a debate on using cable management arms or not. Some say it blocks airflow and others say it's worth using them so you don't have to unplug all the cables each time you want to slide the server out.
 
Why not have a top of rack switch?

Because now you are distributing network gear over all of your cabinets instead of having a single "networking gear" area in your space. Plus now you need some method of connecting all of your switches using fiber GBICs or trunked copper links if you have more than one cabinet, as opposed to a nice clean daisy chain of short-run Amphenol links in a single cabinet.

By having a distributed network gear layout, chasing down network issues can be a hassle since you now have to potentially touch multiple cabinets for a single issue instead of just working on all of your gear in a single cabinet or cluster of cabinets. This works fine if you have four cabinets in a line in a single cage, but in larger deployments your space might be broken up into chunks on the floor in different spaces and walking between them gets to be a hassle (not to mention securing your spaces, etc).
 
Plus now you need some method of connecting all of your switches using fiber GBICs or trunked copper links if you have more than one cabinet, as opposed to a nice clean daisy chain of short-run Amphenol links in a single cabinet.

One or two trunks versus "nice clean daisy chains" of links for each server in the cabinet?

I wasn't aware that today was April fools day. :eek:

If you're worried about troubleshooting being difficult because a switch is in a different rack, you're doing it wrong. Do you have any experience working on large networks?
 
Yeah, I know. He just went from VPN to DNS, and a lot of what he says makes it very clear he doesn't know what he's talking about... but maybe I just need to Google some more.
 
OpenDNS does have a blacklist service that will return NXDOMAIN (ideally, it's probably just a goddamn advertising page) on domains they tag as virus infected. I think the bigger question is how do they determine what domains are infected. Although it's just for viruses, I doubt it would protect against an XSS attack or the like.

OpenDNS is a security measure, but it is hardly an ironclad one (unless you have user management set up so that the user cannot change DNS servers, or a box with incorrect DNS servers cannot get internet access; safe*connect will do the latter).

OpenDNS makes for good icing on the cake, however, there are much better security practices: not running as an administrative user/root, setting up a good spam filter to block viruses and general crap coming in via email, maintaining some sort of antivirus, etc.
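
If you want to see that blacklist behaviour for yourself, one quick check is to resolve the same name against OpenDNS and against another resolver and compare the answers. A rough sketch, assuming the third-party dnspython package is installed; the domain below is just a placeholder:

[CODE]
# Compare how OpenDNS answers for a domain versus another resolver.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

OPENDNS = ["208.67.222.222", "208.67.220.220"]  # OpenDNS public resolvers
OTHER = ["8.8.8.8"]                             # comparison resolver

def lookup(domain, nameservers):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = nameservers
    try:
        return ", ".join(r.to_text() for r in resolver.resolve(domain, "A"))
    except dns.resolver.NXDOMAIN:
        return "NXDOMAIN"
    except Exception as exc:  # timeouts, SERVFAIL, etc.
        return "error: %s" % exc

domain = "example.com"  # substitute a domain you suspect is on the blacklist
print("OpenDNS:", lookup(domain, OPENDNS))
print("Other  :", lookup(domain, OTHER))
# If OpenDNS returns NXDOMAIN or a different A record (their block page)
# while the other resolver returns the real address, the domain is filtered.
[/CODE]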

One or two trunks versus "nice clean daisy chains" of links for each server in the cabinet?

I wasn't aware that today was April fools day. :eek:

If you're worried about troubleshooting being difficult because a switch is in a different rack, you're doing it wrong. Do you have any experience working on large networks?

I don't think that he's referring to literally connecting uplink -> switch a -> switch b -> switch c -> switch d (as they like to do at the university I attend); I would hope that he is talking about a single rack of switches, with each access switch connected back to a core. Although the core switch would be 'some method of connecting all of your switches using fiber GBICs or trunked copper links'. Perhaps unmanaged switches are involved?

As I recall, Mr. agrikk has a network of 100+ PCs, there was a mostly full telecom rack of patch panels (with blinking lights) a ways back in this thread.
 
Yes... if you're talking about "security" as in an end-user content filter and a known-virus blacklist. Berg0 mentioned using the OS X VPN with a Juniper device, which had nothing to do with blocking web pages via DNS.

I know what OpenDNS is; it just didn't relate to what was said.
 
I don't think that he's referring to literally connecting uplink -> switch a -> switch b -> switch c -> switch d (as they like to do at the university I attend); I would hope that he is talking about a single rack of switches, with each access switch connected back to a core.

I think we're on the same page. My point was to address the notion that it's neater to run individual runs from each server to a central rack, rather than having a top of rack switch with 1 or 2 uplinks.

My company does this in a small DC that has 9 or 10 racks full of servers with a patch panel at the top and 40 something cat5 runs over to the "network rack." It's an absolute nightmare if you have to plug in a new device or trace a cable. Yes it works, no it's not worth the down-time for us to change it.

OP asked how to neatly manage cables between a server and network rack, and the cleanest way to do that is to have a top-of-rack switch with uplinks to a distro switch. :cool:

My $0.02
 
My company does this in a small DC that has 9 or 10 racks full of servers with a patch panel at the top and 40 something cat5 runs over to the "network rack." It's an absolute nightmare if you have to plug in a new device or trace a cable. Yes it works, no it's not worth the down-time for us to change it.

The company that I used to work for did this. I don't remember how many people said it was stupid. It only became a royal pain in the ass when they started their hosting business and all of the hosting gear was in a different row than the networking rack. Running all that cable through the overhead cable trays was a royal pain in the ass.

Actually, if I remember, they had 1 rack for patch panels and then another one for networking gear. Patch into a patch, essentially.
 
Another one here for top-of-rack switches instead of stringing together patch panels. The network rack goes at the end of the aisle, then each cabinet gets 2x fiber / copper runs, servers plug into the switch at the top of the rack, and that's it. This makes identifying network problems stupid simple. If you have a server with network issues it can only be in one of 4 places: the cable to the TOR switch, the TOR switch itself, the trunk to the commander switch (only if the whole cabinet is having issues), or the commander switch itself (only if a whole cabinet is having issues, or the whole row).
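
That fault-isolation logic is simple enough to write down explicitly. A minimal sketch, assuming one TOR switch per cabinet uplinked to a core/"commander" switch; the component names are generic placeholders:

[CODE]
# Sketch of the top-of-rack fault-isolation reasoning described above:
# the scope of the outage narrows down which component to suspect.
# Component names are generic placeholders.

def suspects(whole_row_down=False, whole_cabinet_down=False):
    """Return the components to check for a server with network issues."""
    if whole_row_down:
        return ["commander/core switch"]
    if whole_cabinet_down:
        return ["trunk from TOR switch to commander switch",
                "commander switch port",
                "TOR switch"]
    # only a single server is affected
    return ["server's cable to the TOR switch", "TOR switch port"]

print(suspects())                          # single server affected
print(suspects(whole_cabinet_down=True))   # whole cabinet affected
[/CODE]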

Come to think about it, patch panels would be more of a PITA. You would have your 48-port patch in the rack, then another 48-port patch at your network rack, then your runs between the distribution rack and the network rack. I see cable trays getting very overcrowded and a lot of time and effort spent / wasted on a more complex, less functional solution.
 
Yes... if you're talking about "security" as in an end-user content filter and a known-virus blacklist. Berg0 mentioned using the OS X VPN with a Juniper device, which had nothing to do with blocking web pages via DNS.

I know what OpenDNS is; it just didn't relate to what was said.

A Juniper firewall and OS X PLUS OpenDNS gives a very good, secure setup; OpenDNS will help filter out all the unwanted "CRAP", hence why I said more secure.

I wasn't talking about the DNS resolution features so much as the other features OpenDNS provides.
 
But for VPN I still fail to see how OpenDNS's filtering is a benefit. You are creating a secure tunnel between points.

Then again, for the enterprise, for content filtering and intrusion prevention I can think of many better solutions than OpenDNS.
 
It depends on whether it is site-to-site, or a client (laptop) to site, and whether the laptop is sending and receiving all of its traffic through the site.
 
But even if it is laptop-to-site, and the laptop is piping all its traffic through the VPN, you would be on the company's proxy or content filtering solution. So I should already have an internal content filtering solution, and then the firewall will take care of threat detection and prevention, maybe even gateway AV. This is a much better solution, as you can also tier your access control. For instance, call center drones can get nowhere except a few sites, but C-level associates can get anywhere they want.
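
A minimal sketch of that tiered access control idea, with the group names and content categories invented for illustration:

[CODE]
# Sketch of tiered web-access policy: different user groups get different
# sets of allowed content categories. Group names and categories are
# invented for illustration.

POLICY = {
    "call_center": {"crm", "ticketing", "intranet"},   # a few sites only
    "engineering": {"crm", "ticketing", "intranet", "dev_tools", "news"},
    "c_level":     {"*"},                              # anywhere they want
}

def allowed(group, category):
    categories = POLICY.get(group, set())
    return "*" in categories or category in categories

print(allowed("call_center", "news"))  # False
print(allowed("c_level", "news"))      # True
[/CODE]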

For a home user or small business this may be a practical solution, but there are many other, better setups that I can think of than OpenDNS.
 