Data Center Move - lots of pics : no 56K

cyr0n_k0r

Because the actual move could only be completed at night from 2am to 8am I wasn't really in the mood to take pictures of the physical move (unplugging, transit, hookup).
I do however have lots of server pics for people that enjoy them.

I will start with my oldest pictures of our equipment and move from there.

1.jpg
2.jpg

Our old rack in our old data center. Our ugly LCD was tiny, very hard to read, and very blurry. We also had only 2 arrays at this point and only a few servers.



1.jpg

This is our old network: a PIX 515E with a 48-port 10/100 Dell switch, plus the patch panel.



2.jpg

2 of the servers in this picture have since been retired, and we only had 3 drive arrays at the time. Our old UPSes round out the bottom: a single 1500VA and a 2200VA.



2.jpg
3.jpg

Our new rack with our new pull-out LCD unit. This one is MUCH nicer and a lot sharper. I also raised it a bit to make it more comfortable when standing; it's now at eye level. Servers = naked.



4.jpg
5.jpg

Our new network (same PIX 515E), though we are now on gigabit switches. I've left slots open to add a failover PIX and a secondary switch when I get some more money.
The back of the drive arrays and our new 5000VA UPS with 2 step-down transformers. (Our data center has a 100% SLA on power, but sh*t happens.)



6.jpg
7.jpg

Naked and Non-Naked drive arrays. We now have 4 (the 4th has the capacity of 2) with room for 2 more. Cold spares peek out at the top.



1.jpg
2.jpg

From this to THIS



4.jpg
3.jpg

From this to THIS



bandwidth.jpg

We peak at about 40 Mbps. Average is usually around 25-30 Mbps.
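
For anyone curious how that translates to monthly transfer, here's a quick back-of-the-envelope sketch (Python). The 25-40 Mbps figures come from the post above; the assumption that the average holds around the clock is mine, so treat the output as a rough order of magnitude only.

```python
# Rough monthly transfer estimate from a sustained average bitrate.
# Assumes the quoted averages hold 24/7, which is an approximation.
SECONDS_PER_MONTH = 30 * 24 * 3600

for avg_mbps in (25, 30, 40):
    terabytes = avg_mbps * 1e6 / 8 * SECONDS_PER_MONTH / 1e12
    print(f"{avg_mbps} Mbps sustained ~= {terabytes:.1f} TB/month")
```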
 
Why are you using a Rack UPS system? Does your facility not provide you with UPS protected power?
 
Why are you using a Rack UPS system? Does your facility not provide you with UPS protected power?
Our facility uses centrifugal force generators that constantly run via grid power, but can be instantly transferred to diesel. The facility has a 100% uptime guarantee outlined in our contract, but I've been burned before by other data centers.

While I realize my system would not keep everything running for more than a few minutes, it is designed mostly to absorb ANY kind of accidental power interruption lasting a second or even a few seconds. If our facility lost power for more than a few minutes, there would be bigger problems.
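
As a sanity check on that "few minutes" figure, here's a minimal runtime sketch (Python). The 5000VA rating is from the post above; the power factor, usable battery energy, and inverter efficiency are assumptions made up for illustration, not specs of the actual unit, and real runtime curves are non-linear.

```python
# Very rough UPS ride-through estimate: usable battery energy divided by load.
# All battery/load numbers are illustrative assumptions, not the unit's specs.
ups_va = 5000            # nameplate rating from the post
power_factor = 0.9       # assumed
usable_wh = 500          # assumed usable battery energy
inverter_eff = 0.9       # assumed

for load_fraction in (0.5, 0.75, 1.0):
    load_w = ups_va * power_factor * load_fraction
    runtime_min = usable_wh * inverter_eff / load_w * 60
    print(f"{load_fraction:.0%} load (~{load_w:.0f} W): ~{runtime_min:.0f} min")
```

Even with generous assumptions it's minutes, not hours, which matches the "bridge short blips only" reasoning.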
 
In terms of that 100% uptime, what happens when the power goes out, even if it is only for a second?
 
It would technically be a material breach of contract. I would be able to leave without paying for the remaining time left on my term.
 
Our facility uses centrifugal force generators that constantly run via grid power, but can be instantly transferred to diesel. The facility has a 100% uptime guarantee outlined in our contract, but I've been burned before by other data centers.

While I realize my system would not keep everything running for more than a few minutes, it is designed mostly to absorb ANY kind of accidental power interruption lasting a second or even a few seconds. If our facility lost power for more than a few minutes, there would be bigger problems.

What exactly is a centrifugal force generator? I kinda figured out what it does from the name; is there any sort of info about them online? I'm curious.
 
Neat pics. Care to give a generalized idea of what you're pushing out of those servers?
 
Very nice layout.

Out of curiosity, what is this all for? Is it for your company or some type of public hosting?
 
It is a company I own.
Current storage capacity is right around 25TB. With space for 2 more arrays I can hold another 20TB before I need to begin upgrading the smaller-capacity drives in the older arrays.

The servers host about 40 virtual servers spread across 4 machines, each a dual-socket quad-core.
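
For a rough sense of the consolidation and headroom those numbers imply, here's a small sketch (Python). Everything comes from the figures quoted above except the per-VM vCPU count, which is an assumption since the post doesn't say.

```python
# Consolidation math from the figures above: 40 VMs on 4 dual-socket quad-core hosts.
vms = 40
hosts = 4
cores_per_host = 2 * 4          # dual socket, quad core
vcpus_per_vm = 1                # assumption; not stated in the post

vms_per_host = vms / hosts
vcpu_per_core = vms_per_host * vcpus_per_vm / cores_per_host
print(f"{vms_per_host:.0f} VMs per host, ~{vcpu_per_core:.2f} vCPU per physical core")

# Storage headroom: ~25 TB today plus ~20 TB of array space still open.
print(f"capacity before drive upgrades: ~{25 + 20} TB")
```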
 
So it is data for the company? Like remote hosted storage for branch offices or something?

Just curious, since you are obviously in a remote data center you are going to have more limited access depending on what sort of pipe(s) your office(s) are using.
 
No cable management for your servers? No offense, but you don't ever have hardware issues that require you to pull those puppies out while online? I assume with virtualization it probably doesn't matter, because you could just VMotion (assuming you're running VMware) off to another host and power it down to work on it, but any reason why? I know it looks prettier; just wondering about function. Either way it kinda looks like our server room at work. Thanks for the pics, looking good. :)
 
What are you using for the virtualization?
VMware Infrastructure
ESX 3.5

So it is data for the company? Like remote hosted storage for branch offices or something?

Just curious, since you are obviously in a remote data center you are going to have more limited access depending on what sort of pipe(s) your office(s) are using.
Offsite backup. And the data center is a 10-minute drive from my office. It's our primary facility that handles all of our east coast and international backups.

No cable management for your servers? No offense, but you don't ever have hardware issues that require you to pull those puppies out while online? I assume with virtualization it probably doesn't matter, because you could just VMotion (assuming you're running VMware) off to another host and power it down to work on it, but any reason why? I know it looks prettier; just wondering about function. Either way it kinda looks like our server room at work. Thanks for the pics, looking good. :)
I have all the Dell cable arms, but I didn't put them on. Since all the servers are 1U, it REALLY restricts airflow to have them on there. And the 1Us don't really have much in the way of hot-swappable parts in the chassis, so I'm not really gaining anything by having the arms on there.
 
I have all the Dell cable arms, but I didn't put them on. Since all the servers are 1U, it REALLY restricts airflow to have them on there. And the 1Us don't really have much in the way of hot-swappable parts in the chassis, so I'm not really gaining anything by having the arms on there.

Fair enough. :)
 
Not a bad little setup :). Gotta migrate those hosts to ESX 4.0 though; RC2 should hit the streets soon, then RTM.
 
Interesting setup. I design high-voltage substations, and I've got some vendors trying to pitch me flywheel UPS systems. Very similar in theory to the centrifugal force generator: energy stored in rotating mass. It's used just as it is here, to ride out the gap until the prime mover picks up the load.
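
Since the whole idea is energy stored in rotating mass, here's a minimal sketch of that math (Python). The flywheel mass, radius, speed, and protected load are made-up numbers chosen only to show the E = 1/2·I·ω² relationship, not specs of any real rotary UPS.

```python
import math

# Kinetic energy of a spinning disc: E = 0.5 * I * omega^2, with I = 0.5 * m * r^2
# for a solid cylinder. All parameters below are illustrative assumptions.
mass_kg = 600          # assumed flywheel mass
radius_m = 0.5         # assumed radius
rpm = 3600             # assumed spin speed

inertia = 0.5 * mass_kg * radius_m ** 2        # kg*m^2
omega = rpm * 2 * math.pi / 60                 # rad/s
energy_j = 0.5 * inertia * omega ** 2          # joules

load_kw = 250                                  # assumed protected load
ride_through_s = energy_j / (load_kw * 1000)
# In practice only part of this energy is usable before the speed drops too low,
# so real ride-through is shorter; the point is it's seconds, not minutes.
print(f"stored energy ~{energy_j / 1e6:.1f} MJ, ~{ride_through_s:.0f} s at {load_kw} kW")
```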
 
reading is good! :cool::p

A 100% SLA means jack if the SLA doesn't spec UPS-backed sources. Some facilities do not provide UPS power and it's up to the client to do so. While it's rare, it was worth asking.

Our facility uses centrifugal force generators that constantly run via grid power, but can be instantly transferred to diesel. The facility has a 100% uptime guarantee outlined in our contract, but I've been burned before by other data centers.

While I realize my system would not keep everything running for more than a few minutes, it is designed mostly to absorb ANY kind of accidental power interruption lasting a second or even a few seconds. If our facility lost power for more than a few minutes, there would be bigger problems.

Are you being fed dual path UPS sources or are you single sourced? If you are single sourced, I can see the need for a second UPS to give you the balance of redundancy and cost effectiveness.

In terms of that 100% uptime, what happens when the power goes out, even if it is only for a second?

It depends on the SLA. Sometimes seconds could mean a free week of hosting; sometimes an hour could mean a free month of hosting and/or contractual breach rights.
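
For context on what different uptime tiers actually allow, here's the standard availability arithmetic (Python). The tiers listed are generic examples, not the terms of anyone's actual contract in this thread.

```python
# Allowed downtime per month for common availability tiers (30-day month).
MINUTES_PER_MONTH = 30 * 24 * 60

for sla in (0.999, 0.9999, 0.99999, 1.0):
    allowed = MINUTES_PER_MONTH * (1 - sla)
    print(f"{sla * 100:.3f}% uptime -> {allowed:.2f} min/month allowed downtime")
```

A literal 100% SLA allows zero downtime, which is why even a one-second blip is technically a breach; the real question is what remedy the contract attaches to it.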

VMware Infrastructure
ESX 3.5


Offsite backup. And the data center is a 10-minute drive from my office. It's our primary facility that handles all of our east coast and international backups.


I have all the Dell cable arms, but I didn't put them on. Since all the servers are 1U, it REALLY restricts airflow to have them on there. And the 1Us don't really have much in the way of hot-swappable parts in the chassis, so I'm not really gaining anything by having the arms on there.

If you are ever looking for the Midwest region or Florida region, let me know.


Those Dell arms are the most useless things on earth. We throw all of ours away. Not only do they block air, they are a PITA when routing many cables, and an even bigger issue with fiber.
 
Are you being fed dual path UPS sources or are you single sourced? If you are single sourced, I can see the need for a second UPS to give you the balance of redundancy and cost effectiveness.
The feed going to the UPS is a single feed (A path only); however, the A path is fed from 2 diverse generator feeds. If I had a B path I would be fed from 4 generators, 2 for each path.
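
Purely to illustrate why the second path matters, here's a toy redundancy calculation (Python). The per-generator failure probability is a made-up number, and the model assumes failures are independent, which real-world incidents (fuel, controls, human error) often aren't, so read it as a shape rather than a prediction.

```python
# Toy model: a path stays up if either of its 2 generators is running.
# p is an assumed, illustrative per-generator failure probability per event.
p = 0.01

a_only_outage = p ** 2               # both generators behind the A path down
a_and_b_outage = a_only_outage ** 2  # all 4 generators behind A and B down
print(f"A-path-only outage chance:   {a_only_outage:.6f}")
print(f"A+B dual-path outage chance: {a_and_b_outage:.10f}")
```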
 
Interesting setup. I design high-voltage substations, and I've got some vendors trying to pitch me flywheel UPS systems. Very similar in theory to the centrifugal force generator: energy stored in rotating mass. It's used just as it is here, to ride out the gap until the prime mover picks up the load.

Somewhat unrelated, but a test of this concept is what caused the Chernobyl accident. They tried to see if the steam turbines' momentum could generate enough electricity to keep the reactor's cooling running during an emergency shutdown. It worked, but they had let the power drop too far, tried to bring it back up without waiting a while for it to "cool down", ended up pulling out too many control rods, and it rapidly ran away and melted.

Sorry for the off topic... Nice setup by the way, any recent upgrades?
 
Have you thought about scrapping those 515s for 5510s or 5520s (if you want the gigabit interfaces)?
 
I am really surprised they let you take pictures... they have "NO CAMERA" signs all over the place at Terremark here in Miami.
 
I am really surprised they let you take pictures... they have "NO CAMERA" signs all over the place at Terremark here in Miami.
I asked at one point about it and they said they didn't care as long as I was only taking pictures of MY equipment. As soon as I tried to take pictures of other stuff they wouldn't allow it.

Have you thought about scrapping those 515s for 5510s or 5520s (if you want the gigabit interfaces)?
It's on the roadmap.
 
I am really surprised they let you take pictures... they have "NO CAMERA" signs all over the place at Terremark here in Miami.

They have the signs here in Irvine, California as well, but they've never had a problem with me taking pics of my own cabinet.
 
So these are your servers in a hosted datacenter pretty much then? You don't own the whole facility?
 
Just out of curiosity, how many amps do you have designated to each cabinet? Assuming you have 2 power strips per locker?
 
I don't know about others, but I'm using dual-fed 30-amp 208V circuits.

Who makes your whips? I've been pretty happy with the ServerTech CW series. Are you running per-cabinet transfer switches, or does your colo handle this? I'm still looking for a better solution for my dual-fed 120/208 stuff (xfer-wise).

So much easier going 3-phase for clusters, IMO. In the past I've run single-source 120A 120V to one cabinet. PITA running 4 whips in 1 cabinet.
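
To put those feeds in perspective, here's the usable-power math (Python). The 80% continuous-load derate is the usual rule of thumb; the circuit list just mirrors the examples mentioned in this thread.

```python
import math

# Usable power per circuit at an assumed 80% continuous-load derate.
DERATE = 0.8

circuits = [
    ("30 A / 208 V single-phase", 30, 208, 1),
    ("30 A / 208 V three-phase",  30, 208, 3),
    ("120 A / 120 V single-phase", 120, 120, 1),
]

for name, amps, volts, phases in circuits:
    factor = math.sqrt(3) if phases == 3 else 1.0
    watts = amps * volts * factor * DERATE
    print(f"{name}: ~{watts / 1000:.1f} kW usable")
```

Roughly, that's why a pair of 208V whips covers a dense 1U cabinet more neatly than several 120V runs delivering similar total power.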
 
Ya know... I agree with him on the battery backups. I was using a very well-known data center in NJ and they had great uptime... until one day something blew up (really BLEW UP) in the data center, kicking on every fire suppression system (including the one that fills the room with non-breathable fire-suppression gas) and eventually powering the entire building down... our server went down also.

Shit happens like he said... generators fail, things blow up, better to have redundancy.
 