Network pics thread

You're adding a genset to an existing install, then? Is that possible without downtime? How do you install the switchgear to cut over to the generator source? Or was that switchgear already installed before the site went live?

We come in on a weekend when we're closed, preferably a Sunday, and have them wire it in.

After that's all wired and tested, it has its own ATS. That's what I meant by cut-over: actually wiring it into the circuits.
 
Oh, so you're not 24/7. That's lots easier.

Yeah, luckily we get Sundays. Granted, sometimes people complain when I down things and they can't VPN in on the weekend. :( I just tell them to STFU (in nicer terms); it's necessary. :)

Also, our server batteries can carry us about 1.5 hours, and our network gear about 30-45 minutes. So we could probably cut over one circuit at a time without dropping power. However, I'd rather do a full power-down and have them fully test the wiring first.
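A quick sanity check on those numbers (a toy sketch; the runtimes are the ones quoted above, but the per-circuit cutover times passed in are made-up placeholders):

```python
# Rough check: can a per-circuit cutover finish before the shortest
# UPS runtime runs out? Runtimes from the post; worst case for network gear.
SERVER_RUNTIME_MIN = 90    # server batteries: ~1.5 hours
NETWORK_RUNTIME_MIN = 30   # network gear: worst case of 30-45 minutes

def cutover_fits(cutover_min: float) -> bool:
    """True if the cutover finishes before the shortest battery is drained."""
    return cutover_min < min(SERVER_RUNTIME_MIN, NETWORK_RUNTIME_MIN)

print(cutover_fits(20))   # True  - a 20-minute rewire per circuit would fit
print(cutover_fits(45))   # False - 45 minutes would outlast the network UPS
```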
 
Oh, so you're not 24/7. That's lots easier.

Tell me about it. Try having a blown transfer switch that caught fire rebuilt, while still hot, without losing even a leg of power... and the parts aren't made anymore, so they had to be fabricated and custom fitted. The bypass mechanisms in the older transfer switches are a joke.
 
Tell me about it. Try having a blown transfer switch that caught fire rebuilt, while still hot, without losing even a leg of power... and the parts aren't made anymore, so they had to be fabricated and custom fitted. The bypass mechanisms in the older transfer switches are a joke.

Would this be Colo4 by chance?
 
I'm lucky to have two feeds into my room, UPS's on both feeds, but only have a genset on one.
 
Would this be Colo4 by chance?

Nope.

I'm lucky to have two feeds into my room, UPS's on both feeds, but only have a genset on one.

Doesn't matter; so do we. But when you have to bring part of one feed down and can't even get a 60-second service window, it sucks no matter what. Even with transfer switches and all the devices in the world, critical customers will go nuts.
 
4507-1.jpg


1 of 2 Cisco 4507R+E's. Dual 7L-E supervisors with 10Gb fiber SFPs, dual 2800-watt power supplies, and two 48-port GbE PoE line cards. Can't wait to get them up and running.
 
Not too much to post here.

Got this dual-gig card today that I bought off eBay for $50. It's going into my Hyper-V server; going to see if I can get teaming going for iSCSI.

P1020021.JPG


Cleaned up the rack a little bit.

P1020023.jpg
 
Not too much to post here.

Got this dual-gig card today that I bought off eBay for $50. It's going into my Hyper-V server; going to see if I can get teaming going for iSCSI.

You never team or link-aggregate iSCSI; iSCSI uses multipathing (MPIO) instead of teaming.
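The distinction is worth spelling out: MPIO keeps each path as its own independent iSCSI session and schedules I/O across them, instead of hiding the NICs behind one bonded link. A toy round-robin path selector (purely illustrative; the path strings are made up and this is not any real initiator's API):

```python
from itertools import cycle

# Toy model of MPIO round-robin: each I/O is dispatched down one of
# several independent iSCSI sessions (paths), not a single bonded link.
paths = ["10.0.1.10->target", "10.0.2.10->target"]  # two separate NICs/subnets
selector = cycle(paths)

def dispatch(n_ios: int) -> list[str]:
    """Return the path chosen for each of n_ios I/Os (round-robin policy)."""
    return [next(selector) for _ in range(n_ios)]

print(dispatch(4))  # alternates: path 1, path 2, path 1, path 2
```

If one path dies, the initiator just stops scheduling it; there's no LACP negotiation or switch-side config involved, which is why teaming buys you nothing for iSCSI.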
 
You're adding a genset to an existing install, then? Is that possible without downtime? How do you install the switchgear to cut over to the generator source? Or was that switchgear already installed before the site went live?

If you're fast enough, and manage to unplug and replug within a single AC phase, there is no downtime. :D Preferably when the voltage is at around 0.
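For a sense of how tiny that window is, the zero crossings on 60 Hz mains come around every half cycle (a trivial calculation; 60 Hz assumed, it's 50 Hz in much of the world):

```python
# Timing of AC zero crossings on 60 Hz mains.
FREQ_HZ = 60
cycle_ms = 1000 / FREQ_HZ                  # one full cycle: ~16.67 ms
zero_crossing_interval_ms = cycle_ms / 2   # voltage hits 0 twice per cycle

print(round(cycle_ms, 2))                   # 16.67
print(round(zero_crossing_interval_ms, 2))  # 8.33
```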

Dash, how do you find FreeNAS compared to OpenFiler? I want to build a SAN eventually; not sure what will be powering it yet, though.
 
If you're fast enough, and manage to unplug and replug within a single AC phase, there is no downtime. :D Preferably when the voltage is at around 0.

Dash, how do you find FreeNAS compared to OpenFiler? I want to build a SAN eventually; not sure what will be powering it yet, though.

FreeNAS has been flawless for me, running on a two-year-old USB stick. Funny enough, I accidentally unplugged that server the other day; plugged it back in, it turned back on, and there were no problems at all. 2008 R2 reconnected and my VMs were still in good shape.

I might be going to the new QNAP device next week, though.

Dash..
 
Nice. These going in a closet?

Yes. One in each of our server/networking rooms. They will be connected by 10Gb fiber.

We are waiting on a quote from a contractor to extend fiber from a demarc in each of our buildings to our server rooms. For our WAN setup, Verizon ran fiber between our two buildings and there are plenty of spare pairs for us to use, so might as well take advantage of it.
 
FreeNAS has been flawless for me, running on a two-year-old USB stick. Funny enough, I accidentally unplugged that server the other day; plugged it back in, it turned back on, and there were no problems at all. 2008 R2 reconnected and my VMs were still in good shape.

I might be going to the new QNAP device next week, though.

Dash..

Did you have any problems with OpenFiler?

I am running 2.3 w/ NFS & ESXi 5 and get constant disconnects or 'pauses' in my VMs.

I like the interface of OpenFiler, and the fact that it's Linux.

I don't like the slowness of CIFS on FreeNAS, and don't need ZFS, but the problems with OpenFiler are enough to make me want to switch.
 
Did you have any problems with OpenFiler?

I am running 2.3 w/ NFS & ESXi 5 and get constant disconnects or 'pauses' in my VMs.

I like the interface of OpenFiler, and the fact that it's Linux.

I don't like the slowness of CIFS on FreeNAS, and don't need ZFS, but the problems with OpenFiler are enough to make me want to switch.

I had the opposite experience, in that OpenFiler was rock solid for me, and FreeNAS gave me nothing but problems once I enabled iSCSI. Also, it pisses me off that you have to reboot FreeNAS every time you make a network change, and you can't stack them. (For instance: remove the IP from a NIC, bond two interfaces, re-IP the new lagg interface = 3 reboots.)
 
I had the opposite experience, in that OpenFiler was rock solid for me, and FreeNAS gave me nothing but problems once I enabled iSCSI. Also, it pisses me off that you have to reboot FreeNAS every time you make a network change, and you can't stack them. (For instance: remove the IP from a NIC, bond two interfaces, re-IP the new lagg interface = 3 reboots.)

Same experience. OF was rock solid. I've since moved to OI+ZFS, but I tried the latest FreeNAS and it was the slowest of everything I tested for iSCSI/ZFS.
 
Mike, what size drives are those?


I wish I could get a 2.5" SAS drive for free to add to my wall of goodies (looking for a non-working one).
 
2.5-inch dual-port SAS, 10K RPM, 300GB. I guess I'd offer to send you a dead one, but we always shred them since they have customer data.

Also, I upgraded my desktop to 24 gigs:

DesktopUpgrade.png
 
2.5-inch dual-port SAS, 10K RPM, 300GB. I guess I'd offer to send you a dead one, but we always shred them since they have customer data.

Also, I upgraded my desktop to 24 gigs:

I won't re-quote the image, but what does the red mean in the CPU graph?
 
Red is kernel time.
I believe it's the amount of time spent in kernel mode (running drivers and the OS), while the green shows user mode. Someone correct me if I'm wrong.

Close. The green line is the total CPU time on that processor, inclusive of kernel time.
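You can see the same split programmatically; Python's os.times() reports user-mode and system (kernel) CPU time for the current process (a minimal illustration, not tied to Task Manager's exact counters):

```python
import os

# Burn some CPU in pure user mode, then read the process CPU counters.
# 'user' is time spent in user mode; 'system' is time in kernel mode
# (the red line in Task Manager's graph).
_ = sum(i * i for i in range(2_000_000))

t = os.times()
print(f"user: {t.user:.2f}s  system: {t.system:.2f}s")
```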
 