Network pics thread

Internet connection at home is decent, but yeah...we have 20gbit here. Pretty much exclusively used for internal stuff too.
 
ya I am at a small (tiny) colo but it's a nice place... rent an office and a tiny corner to stick my 24U rack in lol :)

it's so hard to go home when you have gig internet at your desk!

That's awesome. I wish there was a colo place here; I would host all my stuff from it. My only option for online stuff is leasing, since if I do colo and something goes down I'm kind of screwed. Remote hands can be expensive.

For my internal stuff it's all at home though. Wish my ISP would allow servers and offer stuff like multiple IPs and static IPs.
 
sorry X4170... not sure, they don't seem too loud after initial power up, but I'm in a data center so it's insanely loud in there anyway...

I don't know if I would buy them again. The whole ILOM thing is such a huge pain: they sometimes decide they just don't want to turn on... you press the power button or give it a power-on command via the ILOM and it says it's on, but it's not. The only way I can get them to turn on is to reflash the ILOM, unplug them, clear the config, and reseat the memory, and then they usually boot. I have no idea which, if any, of those steps is actually necessary; I was just losing my mind trying to get it to power up last night and it finally came up... although it seems if you just never unplug them they are happy... they just seem to hate being unplugged...
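One way to sanity-check what the chassis is actually doing, instead of trusting the ILOM web UI, is to ask the service processor directly over IPMI. This is just a rough sketch, assuming IPMI over LAN is enabled on the ILOM and ipmitool is installed; the address and credentials below are placeholders, not from my setup:

Code:
# Rough sketch only: query the real chassis power state over IPMI instead of
# trusting the ILOM web UI. Assumes ipmitool is installed and IPMI over LAN is
# enabled on the service processor; host/user/password below are placeholders.
import subprocess

ILOM_HOST = "192.168.1.100"   # placeholder ILOM address
ILOM_USER = "root"            # placeholder credentials
ILOM_PASS = "changeme"

def ipmi(*args):
    """Run an ipmitool command against the ILOM over lanplus and return its output."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", ILOM_HOST,
           "-U", ILOM_USER, "-P", ILOM_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

status = ipmi("chassis", "power", "status")
print(status)  # e.g. "Chassis Power is off"

if "off" in status.lower():
    # Ask for power-on and re-check, since the web UI sometimes reports "on"
    # while the host never actually comes up.
    print(ipmi("chassis", "power", "on"))
    print(ipmi("chassis", "power", "status"))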

installed some UPSes in the bottom of the rack... 2x 2700W high-voltage online double-conversion UPSes

so far drawing about 1300W... one more server coming in tomorrow hopefully

I wish they weren't so big (4U EACH!!!) but the price was right... got the pair refurb (new old stock with new batteries) with a set of high-voltage PDUs from a local UPS/electrical place
 
how good are those Dell UPSes?

they are rebadged Powerware units... the menus are identical to my Eaton units...

so in other words, very good

Dell used to rebadge APC, then they started rebadging Powerware, now they are back at APC...

I will *NEVER* buy an APC again

after using some Powerware and Eaton units and seeing how many years of life you get out of the batteries and how well built they are, the APC units are garbage in comparison
 
Can you elaborate on your dislike for APC? I have used them for 15 years and seldom had any issues, I get the batteries replaced in the units when they die and they keep on working. I am not working with data center level UPS's so perhaps that is where our experience differs.
 
Wow those are some sexy looking UPSes, though at that size I would expect more run time, but guess it does not really matter much in a data centre environment, it's just to have a buffer to switch over to generator.
 
Hey FLECOM,

If you are in a data center, what is the need for UPS protection? Doesn't reliable power come with your hosting agreement?
 
Can you elaborate on your dislike for APC? I have used them for 15 years and seldom had any issues, I get the batteries replaced in the units when they die and they keep on working. I am not working with data center level UPS's so perhaps that is where our experience differs.

I have had several Smart-UPS 3000 units fail on me, and they overcharge their batteries to increase runtime, so the batteries only last like 2 years or so... We have Powerware units with 6-year-old batteries that have only lost a couple minutes of runtime... we change batteries because of schedules, not because the UPS cooked them to the point we need a crowbar to pull them out

their big stuff like the Symmetras is good though... their old units are indestructible but really hard on batteries... but they had a run there of a couple years where they were just garbage... which is why Dell and IBM went to Powerware... Dell later went back to APC since their newer stuff is allegedly better, but I can't put stuff that's allegedly better on production equipment... IBM still rebrands Powerware AFAIK

Hey FLECOM,

If you are in a data center, what is the need for UPS protection? Doesn't reliable power come with your hosting agreement?

they have generators (which I am on), but I am not on the building UPS... I don't have an official agreement, I help them out with some projects and they let me stick a rack in the corner (quite literally)
 
Agree on APC battery issue, have had issues with rackmount as well as standalone, in both cases batteries seldom last more than a couple of years.
 
In the 'wtf were they thinking' category, I present the following:

This is a brand-new install in a 3,000 sq ft remodeled office; they moved in last Wednesday. Sigh...
 
^^^ That is just sad. A wiring closet only days old and looking like that?

Strangest thing is how they put in U brackets for wiring, so they did have an eye towards structured wiring, but they still ended up with a crappy looking 2 post.

they have generators (which I am on), but I am not on the building UPS... I don't have an official agreement, I help them out with some projects and they let me stick a rack in the corner (quite literally)

This is the best arrangement. Free colo space and bandwidth? Hells yeah...

Even though I'm working at AWS these days and literally have ridiculous amounts of storage and compute at my disposal for free, I still can't get myself to drop my arrangement with my (now ex-) brother-in-law for half a cabinet of powered and free-bandwidth goodness.
 
Agrikk, it was doomed from the start. Nobody seems to have any pride in their work.
The company that did the cabling used about 1 cable tie every 3" on the back of that rack, no bushings on the conduit stubs in the walls, and various other stupidities.

This was their 'completed' termination:
 
Agree on APC battery issue, have had issues with rackmount as well as standalone, in both cases batteries seldom last more than a couple of years.

Ditto here too. I swear APC is adopting the HP scheme of ensuring recurring revenue...
 
I have had several Smart-UPS 3000 units fail on me, and they overcharge their batteries to increase runtime.

The really old ones from the late '90s / early 2000s? I know I had those issues. My newer ones have been great so far.
 
heck, at least they used a cabinet and a 2-post... I've seen a heck of a lot worse... I've learned to give up trying to make it look pretty unless the customer requests it
 
Got called back up to the tower again. Brought my DSLR this time.
The view is pretty nice up here:


I have no idea wtf these things are:



This is a Maine PBN relay station or something. It also looks like a Maine forestry service lookout point:

Back in 2001-2004 when I worked for a small WISP in central Maine, we had some radios and antennas colocated on a tower right next to an MPBN tower (like 30ft away). We had a phone number that we could call into MPBN Ops and they would turn down their transmitter to half output so it was safe to climb the tower we were on. Occasionally we would see those guys working on their equipment in their building. One of the techs gave us a tour one day. Pretty neat stuff.
 
Ditto here too. I swear APC is adopting the HP scheme of ensuring recurring revenue...
unfortunately I think this is kind of true. I have a few newer but not super new units; they are better than some of the older ones, but I think it's just cheap-ass batteries and the lack of cooling (most units do not have any sort of active cooling when they are not in an active mode (boost/convert/on-battery)).

I have resorted to putting a few older silent 80mm fans on top of a couple of the units with a wall-wart 12V router PSU to force some air through them, and it seems to have kept the batteries in good shape...

However, the internal fan in each unit has had issues and needed to be replaced after only a couple of years of use (now 5-year-old units, still on original batteries).

EDIT: regarding the HP reference: My employer uses HP towers and servers exclusively and we have not had that bad of a failure rate considering the 9000-something computers we have across all the sites. Some of the newer units have come with HDDs that fail an extended SMART test, but HP next-day shipped replacements at no charge. Can't really blame HP for Seagate's failures.
 
EDIT: regarding the HP reference: My employer uses HP towers and servers exclusively and we have not had that bad of a failure rate considering the 9000-something computers we have across all the sites. Some of the newer units have come with HDDs that fail an extended SMART test, but HP next-day shipped replacements at no charge. Can't really blame HP for Seagate's failures.


You must not use their servers....

The hardware itself is fine. But I've yet to see one show up as ordered and now you have to have a support contract to download drivers and bios updates.
 
You must not use their servers....

The hardware itself is fine. But I've yet to see one show up as ordered and now you have to have a support contract to download drivers and bios updates.

This. This bullshit right here makes me completely nuts.

What possible reason, besides filthy lucre, would you have for putting drivers and updates behind a warranty roadblock?

F*ck you, HP.
:mad:
 
This. This bullshit right here makes me completely nuts.

What possible reason, besides filthy lucre, would you have for putting drivers and updates behind a warranty roadblock?

F*ck you, HP.
:mad:
Dell does the same exact thing with their SAN products
 
This. This bullshit right here makes me completely nuts.

What possible reason, besides filthy lucre, would you have for putting drivers and updates behind a warranty roadblock?

F*ck you, HP.
:mad:

Cisco even lets you download their firmware and drivers for their UCS servers without a support contract. You do have to login with a Cisco account, but that is it.
 
Back in 2001-2004 when I worked for a small WISP in central Maine, we had some radios and antennas colocated on a tower right next to an MPBN tower (like 30ft away). We had a phone number that we could call into MPBN Ops and they would turn down their transmitter to half output so it was safe to climb the tower we were on. Occasionally we would see those guys working on their equipment in their building. One of the techs gave us a tour one day. Pretty neat stuff.

Nice to see other Mainers in here! Here is some of our stuffz.

Our UCS chassis and some other servers.

UCS. Maxed out now; just added the bottom two with 192GB RAM each, which will join our ESX cluster.

ESX.

NetApp shelves.

What we are chucking.

Networking hell. We're getting a new core soon and will rewire EVERYTHING.

PBX :-( also getting ripped out soon.

Rack at our other location. Mmm, pretty.

Some other fun stuff here!

iPods for our customer ordering devices. We have about 100 of these around New England, managed on Meraki.

New stuff to hand out.

My rig at work.

Then the not so fun stuff...

UPS for the phone system. Almost every pack was like this; this was just the worst.

This is what the inside of most of the warehouse PCs looks like...
 
Dell does the same exact thing with their SAN products

And that's bullshit, as well. :)

They are basically trying to cash in on aftermarket resale of their products, punishing people for trying to reclaim residual value from obsolete gear by selling on eBay or CL or whatever.

As if Joe Blow, resident IT hobbyist, is going to spend $100 on a dual dual-core Xeon server for his home lab and then go spend hundreds on a support contract?

All companies are doing by implementing these blocks is guaranteeing that these servers end up in an e-waste pile or landfill instead of getting some extra use.

I suppose that doing this allows them to reduce the length of time old drivers are available on their site. But still...
 
You must not use their servers....

The hardware itself is fine. But I've yet to see one show up as ordered and now you have to have a support contract to download drivers and bios updates.
My employer has over 50 HP servers in production at one site alone (not including off-site datacenters and the other 10 sites), ranging from G3 through G8, 1U/2U/5U, including external SAS arrays.

The team that manages them doesn't seem to be unhappy with them.

Probably has something to do with being under a large health care corporate umbrella where they only use HP products *shrugs*
 
My employer has over 50 HP servers in production at one site alone (not including off-site datacenters and the other 10 sites), ranging from G3 through G8, 1U/2U/5U, including external SAS arrays.

The team that manages them doesn't seem to be unhappy with them.

Probably has something to do with being under a large health care corporate umbrella where they only use HP products *shrugs*

We've got, I think, around 300 of them (there are 2000 servers, but most these days are virtual) from G1 to G8, AMD and Intel, 1U, 2U, and 5U. Windows, Linux, VMware, and Hyper-V. Gotta love the variety in higher ed. :D

Like I said above, for the most part we can't complain about the hardware. The support of the hardware is awful, though.

The hardware guys are pretty set on moving to Cisco UCS.
 
This. This bullshit right here makes me completely nuts.

What possible reason, besides filthy lucre, would you have for putting drivers and updates behind a warranty roadblock?

F*ck you, HP.
:mad:

can't be as bad as Sun, they hide their firmware behind support contracts but some of the older firmwares are just plain broken... like seriously they don't work, absolute garbage

oh and fuck you oracle

I didn't have an issue downloading a bios for my Dell R610 back when I needed it, don't know if that's changed

edit: just checked and ya you can download the BIOS for the R610 off Dell's website? I remember finding the C1100/C6100 BIOSes without issue also
 
And that's bullshit, as well. :)

They are basically trying to cash in on aftermarket resale of their products, punishing people for trying to reclaim residual value from obsolete gear by selling on eBay or CL or whatever.

As if Joe Blow, resident IT hobbyist, is going to spend $100 on a dual dual-core Xeon server for his home lab and then go spend hundreds on a support contract?

All companies are doing by implementing these blocks is guaranteeing that these servers end up in an e-waste pile or landfill instead of getting some extra use.

I suppose that doing this allows them to reduce the length of time old drivers are available on their site. But still...

it also means added TCO for customers... if I tell a client you can have an HP with a support pack for x, or you can have a Supermicro for a lot less, guess what they are going to buy?

really sad that HP went that route... guess I won't be buying any more ProLiants... this makes me genuinely sad :(

guess I need to add HP to the no-buy list along with Cisco and Sun/Oracle and Dell SAN products... who else?
 
If you desperately need a specific HP SPP or the like, I should be able to help you out with that. The HP Datacenter Care support package at work means I have access to just about everything, so far as I can tell. Actually, for the most part HP has been pretty good with support in my experience, though the POD 240as that we have certainly have their quirks and have caused some headaches. The irony is that while HP supports Superdomes in them, they'll experience thermal events twice a day due to their lower thermal threshold than ProLiant servers or 3PAR SANs. Same problem with the rest of the Integrity line. I doubt many, if any, of you are running Itanium or PA-RISC stuff though, so at least you won't have that problem.
 
I haven't had an issue, and for personal stuff I won't care but for clients it makes me uneasy...
 
I miss being in Maine I really do. Such a lovely state.

<-- Born and raised in Brownville Jct, ME, lived in Presque Isle and Bangor for quite a while. Sister lives in Charleston (near Dover-Foxcroft) and another sister in Portland.

Now living in crappy, muggy, hot Brunswick, GA.
 
Racked this DL320e the other day which basically filled out one of my secondary racks (not located in server room):



It's an HP ProLiant DL320e Gen8 with a single quad core E3-1240V3, 24GB of RAM, Server 2012 R2, and a 240GB Samsung 840DC SSD. Pretty awesome little server for the money. This will eventually be a backup domain controller but is currently a single use application server for Equorum Plot Station.

Other things pictured are an HP ProCurve 2510G-24 connected to the main server room with 1Gb fiber, a Synology DS412+ (with 4x 3TB HDDs in RAID10 for on-site backup storage), and an Iomega IX4-200D which I just use for storing random unimportant things that don't need backup (I think it has 4x 1TB HDDs in RAID5). The rack is an APC AR100HD NetShelter WX and the UPS is a Tripp Lite SMART500RT1U.
 
...DL320e Gen8 with a single quad core E3-1240V3, 24GB of RAM, Server 2012 R2, and a 240GB Samsung 840DC SSD...
Very pretty little rack. Nice and clean.

Unless you're supporting a really big data center and/or several hundred end users, that box seems a bit overkill for a backup DC. But what the heck, this is [H] and overkill is what we celebrate.
 
Very pretty little rack. Nice and clean.

Unless you're supporting a really big data center and/or several hundred end users, that box seems a bit overkill for a backup DC. But what the heck, this is [H] and overkill is what we celebrate.
That's right! Actually, as of right now this server is being used as an application server running Equorum Plot Station. Basically, what the software does is act as an intermediary between SolidWorks and the print driver to be able to print SolidWorks prints very efficiently and automatically (a process that would take a human literally hours can now be done in a minute). The server has to be able to open big models, so the specs need to be similar to our engineering workstations (HP Z420s). That's the immediate need this server fulfilled; then, when we replace our existing virtual infrastructure (which today isn't ballsy enough to handle this software), I'll move Plot Station to the VMware host and use this as a backup DC. By then this server will have already "paid for itself". All in all I have $1,800 into the whole server including the OS.

I am only supporting about 60 regular users and 100 total FTEs.

Using Fiber for 1 meter of 1Gbps is also overkill ;)
Lol, the fiber is the link between the main server room and this room. The run is about 400 feet, so copper Ethernet (with its 100 m / ~328 ft limit) was not feasible.
 