Inside Google's Data Centers

This is so awesome. Google seems like an awesome place to work. In one of the pics there's even a random bike inside the data center. Where else can you ride a bike around at work? :p It's all the little things that make work more fun. The colors and stuff make it pretty cool too.
 
Another FAKE :mad:

fakegoogle2.jpg
 
The top portion doesn't look fake, but the rest does. Are these really fake, or did they actually design some of their stuff this way to screw with us? Even the LEDs and everything on one side are mirrored, yet the cable tray on top is not symmetric.

I'm still going through all of these. It's totally an awesomeness overload. Even the mechanical stuff is awesome, like all the cooling and such.
 
The top portion doesn't look fake, but the rest does. Are these really fake, or did they actually design some of their stuff this way to screw with us? Even the LEDs and everything on one side are mirrored, yet the cable tray on top is not symmetric.

Definitely fake, and the second one I posted is very obvious; a very bad chop :D
 
Sounds like they are taking water in and cycling it back out. I'd like to see how dumping that heat into the ocean/gulf has changed the ecosystem, whether the heat has made life flourish or whether it's too much and has killed things off.

Not to mention the ecological cost of generating that amount of electricity...
 
Definitely fake, and the second one I posted is very obvious; a very bad chop :D

The floor panels are different from one side of the center to the other. I say they are likely not fake (but I practically live in datacenters every day, so I obviously don't know).
 
Oh please. They couldn't generate enough heat to dump into an ecosystem to affect it in any meaningful way. One volcanic eruption over a period of years, like Kilauea, has a much larger impact on ecosystems, and even that isn't saying much.

I 100% agree. I see only two minor ecological impacts:

1.) Since water has a higher specific heat capacity than air, it is more efficient at cooling, so less power is required for cooling (positive impact: saves electricity);
2.) Animal life will gather around heat sources no matter where, which is another positive impact.
 
The floor panels are different from one side of the center to the other. I say they are likely not fake (but I practically live in datacenters every day, so I obviously don't know).

You seem to be suggesting that Google has laser synchronized the droops of the cables on the left side of their DC to be in the exact mirrored position as the droops of the cables on the right side of their DC. For what purpose? Is that a standard datacenter practice?
 
Since water has a higher specific heat capacity than air, it is more efficient at cooling, so less power is required for cooling (positive impact: saves electricity)

1 BTU/lb-°F for water vs. about 0.24 BTU/lb-°F for air (and air only weighs about 0.075 lb/ft³ at sea level). HVAC engineering can be a bitch sometimes :D
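
If anyone wants to put rough numbers on that, here's a quick back-of-envelope sketch in Python using those figures. The 1 MW heat load and 10 °F coolant temperature rise are made-up values just for illustration:

# Coolant needed to absorb a heat load: Q = m * c * dT  ->  m = Q / (c * dT)
BTU_PER_KWH = 3412.14        # BTU per kWh
heat_load_kw = 1000.0        # assumed 1 MW of server heat (illustrative only)
delta_t_f = 10.0             # assumed 10 degF coolant temperature rise

q_btu_per_hr = heat_load_kw * BTU_PER_KWH

c_water = 1.0                # BTU/(lb*degF), specific heat of water
c_air = 0.24                 # BTU/(lb*degF), specific heat of air
air_density = 0.075          # lb/ft^3 at sea level

water_lb_per_hr = q_btu_per_hr / (c_water * delta_t_f)
air_lb_per_hr = q_btu_per_hr / (c_air * delta_t_f)
air_cfm = air_lb_per_hr / air_density / 60.0   # cubic feet per minute

print(f"water: {water_lb_per_hr:,.0f} lb/hr (~{water_lb_per_hr / 8.34 / 60:,.0f} gal/min)")
print(f"air:   {air_lb_per_hr:,.0f} lb/hr (~{air_cfm:,.0f} CFM)")

Same heat load, roughly four times the mass of air, and a huge volume of it to move, which is the whole argument for water-side cooling.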
 
Quoted from the Google page:

"Denise Harwood diagnoses an overheated CPU. For more than a decade, we have built some of the world's most efficient servers. "

I'm guessing this is a stupid question for those who know, but is it common for companies who operate server farms like this to build their own? For some reason I always assumed they would just buy Dell or whatever.
 
Another FAKE :mad:

fakegoogle2.jpg

I worked in a datacenter for 5 years and those pics looked ok to me at first glance. And BTW that is neater cabling than what I was used to dealing with.

At first I thought MindBuster was full of it, but as I look at the second pic I think he's right. Look at the racks on the left.... the LEDs on the back of the servers are on the left. Now look at the racks on the right... the LEDs are on the right! WTF! Those are servers and the lights will all be one way or the other, they don't make LEFT and RIGHT hand servers.

So bravo, MindBender, for pointing out the fraud.
 
Quoted from the Google page:

"Denise Harwood diagnoses an overheated CPU. For more than a decade, we have built some of the world's most efficient servers. "

I'm guessing this is a stupid question for those who know, but is it common for companies who operate server farms like this to build their own? For some reason I always assumed they would just buy Dell or whatever.

There was an article a couple of years ago that said they built their own, and put 12v batteries in each one to boot, so each server had its own little UPS. I called BS on that, but maybe they do build their own. But honestly, why would they do that instead of just buying HP/IBM/Dell like any other company that uses servers?
 
You know what's even sexier? NO DUST. OCD cleanliness in an imagined server room is perfect. (Meanwhile, here in reality, dust is everywhere...)

Where I worked they mopped daily. If you keep a controlled environment and the floors are kept clean, dust isn't that big of a problem. It still happens but at like 1/1000 of what you would experience in your home.
 
There was an article a couple of years ago that said they built their own, and put 12v batteries in each one to boot, so each server had its own little UPS. I called BS on that, but maybe they do build their own. But honestly, why would they do that instead of just buying HP/IBM/Dell like any other company that uses servers?

Why? Because they buy in such bulk that they can save significant money having their own systems made. And yes, they really are run directly off 12v, with a small battery on each.

Why would they buy a commodity HP/IBM/Dell when they can get the same CPU/RAM horsepower for half the price by leaving out all the bits they don't use?

Here's a Google server from a few years ago: http://news.cnet.com/8301-1001_3-10209580-92.html And here are a couple Google-official pages on them: http://www.google.com/about/datacenters/efficiency/internal/index.html#servers http://www.google.com/green/efficiency/#data-centers (See the third picture down.)
 
Quoted from the Google page:

"Denise Harwood diagnoses an overheated CPU. For more than a decade, we have built some of the world's most efficient servers. "

I'm guessing this is a stupid question for those who know, but is it common for companies who operate server farms like this to build their own? For some reason I always assumed they would just buy Dell or whatever.

Yes. It was a trade secret of most datacenters for a long time, but Google talked about it publicly a few years ago, then Facebook not only talked about it, they even open-sourced their platform!

http://opencompute.org/
 
Why? Because they buy in such bulk that they can save significant money having their own systems made. And yes, they really are run directly off 12v, with a small battery on each.

Why would they buy a commodity HP/IBM/Dell when they can get the same CPU/RAM horsepower for half the price by leaving out all the bits they don't use?

12v? Nah. http://www.google.com/about/datacenters/gallery/#/tech/12
Those are 208v plugs, industry standard. That means power supplies converting AC to DC.

Now I'm not saying they couldn't hire staff to hand-build the servers... but factor in the salary of said staff to build and maintain those hand-built servers and you just negated the cost savings vs. buying prebuilt rack servers from HP/Dell (I don't want to mention IBM again because I hate them). Also, it would be next to impossible to maintain consistency with the hardware. HP can buy 10 bajillion motherboards and they will all be exactly the same (or close enough). No way you could buy that many off-the-shelf components and have them all be the same. Ditto for CPUs/HDDs/RAID controllers/etc. Consistency is important, since once something is tested and found acceptable you want to limit the potential variables for problems.

I am not an electrical engineer. But I am a server engineer for a major corporation and have worked, lived (and sometimes slept) inside big datacenters.

BUT then again this is GOOGLE we are talking about, so why wouldn't they throw out industry approved practices and just do whatever the hell they wanted? I suppose anything is possible.
 
Data center staff are not highly paid. The people who assemble the servers are low-wage, largely contract employees.
 
You seem to be suggesting that Google has laser synchronized the droops of the cables on the left side of their DC to be in the exact mirrored position as the droops of the cables on the right side of their DC. For what purpose? Is that a standard datacenter practice?

Thanks for noticing :)

Umm.. Wrong.

No....

I worked in a datacenter for 5 years and those pics looked ok to me at first glance. And BTW that is neater cabling than what I was used to dealing with.

At first I thought MindBuster was full of it, but as I look at the second pic I think he's right. Look at the racks on the left.... the LEDs on the back of the servers are on the left. Now look at the racks on the right... the LEDs are on the right! WTF! Those are servers and the lights will all be one way or the other, they don't make LEFT and RIGHT hand servers.

So bravo, MindBender, for pointing out the fraud.

Thanks :)

To all others, the pics ARE fake.
Where are all the old [H] chopgawds when you need them. :p
 
Now I'm not saying they couldn't hire staff to hand-build the servers... but factor in the salary of said staff to build and maintain those hand-built servers and you just negated the cost savings vs. buying prebuilt rack servers from HP/Dell (I don't want to mention IBM again because I hate them). Also, it would be next to impossible to maintain consistency with the hardware. HP can buy 10 bajillion motherboards and they will all be exactly the same (or close enough). No way you could buy that many off-the-shelf components and have them all be the same. Ditto for CPUs/HDDs/RAID controllers/etc. Consistency is important, since once something is tested and found acceptable you want to limit the potential variables for problems.

I am not an electrical engineer. But I am a server engineer for a major corporation and have worked, lived (and sometimes slept) inside big datacenters.

BUT then again this is GOOGLE we are talking about, so why wouldn't they throw out industry approved practices and just do whatever the hell they wanted? I suppose anything is possible.

I don't understand. Do you think Google has a Newegg account or something that they use to buy parts? I couldn't claim to know how they do their procurement, but I can guarantee it's not a standard affair.
 
Congratulations google you win....
The prize.....Internets......
The competitors.....Nobody..........
 
Those colorful pipes remind me of a building near/on the MIT campus in Cambridge, MA.

It's some sort of physical plant and it has lots of pipes running through it, and they are painted in similar bright colors.

Been that way since long before anyone heard of Google though :p
 
Thanks for noticing :)

To all others, the pics ARE fake.
Where are all the old [H] chopgawds when you need them. :p

They're not *ALL* fake, and it's not like they're 100% contrived-just-for-photography photos. A few were photoshopped for who-knows-what reason (probably something like the end of a server aisle, or the servers on the other side were "ugly", or some such nonsense), but that doesn't make the photos "fake". I can confirm one of them wasn't even a "setup"; it was an employee actually working. And that person says another one they can confirm was only a part-setup (they cleaned up a little before having the picture taken).
 
Since water has a higher specific heat capacity than air, it is more efficient at cooling, so less power is required for cooling (positive impact: saves electricity)

1 BTU/lb-°F for water vs. about 0.24 BTU/lb-°F for air (and air only weighs about 0.075 lb/ft³ at sea level). HVAC engineering can be a bitch sometimes :D

OK... you are arguing my point for me?
 
12v? Nah. http://www.google.com/about/datacenters/gallery/#/tech/12
Those are 208v plugs, industry standard. That means power supplies converting AC to DC.

Now I'm not saying they couldn't hire staff to hand-build the servers... but factor in the salary of said staff to build and maintain those hand-built servers and you just negated the cost savings vs. buying prebuilt rack servers from HP/Dell (I don't want to mention IBM again because I hate them). Also, it would be next to impossible to maintain consistency with the hardware. HP can buy 10 bajillion motherboards and they will all be exactly the same (or close enough). No way you could buy that many off-the-shelf components and have them all be the same. Ditto for CPUs/HDDs/RAID controllers/etc. Consistency is important, since once something is tested and found acceptable you want to limit the potential variables for problems.

I am not an electrical engineer. But I am a server engineer for a major corporation and have worked, lived (and sometimes slept) inside big datacenters.

BUT then again this is GOOGLE we are talking about, so why wouldn't they throw out industry approved practices and just do whatever the hell they wanted? I suppose anything is possible.

With the way the big server makers gouge you on everything, I can actually see it being cheaper to hire guys to build servers from standard parts, especially once they're up to speed and can put one together in like 10 minutes. When you think about it, putting together a computer is fast if all the parts are right in front of you and you've built that specific system before. They're probably very efficient at it, and it ends up costing less. I'm guessing they must also have special contracts where they can get high volumes of the same parts.
 
Where I worked they mopped daily. If you keep a controlled environment and the floors are kept clean, dust isn't that big of a problem. It still happens but at like 1/1000 of what you would experience in your home.

My server rooms are dust free and we only clean once a quarter.
 
I think the most damning evidence on the second pic has to be the barcode labels on the various equipment. They're exact mirrors from side to side.

Actually, both pics. I hadn't looked at the first one closely enough the first time.
 
I think the source images are stored on here somewhere... dunno how to grab 'em though. There's an image source reference to the subdirectory:

Data%20centers%20%E2%80%93%20Google%20Data%20centers_files/[IMAGE_NAME_HERE].jpg

but it doesn't seem to work when I try to access it.
 
Bear in mind I know how to get the 1024x versions of the photos, but not the originals...(?)
 
Where are the chambers where they harvest human energy while we dream our lives away?
 
Thanks for the info about the inner workings of a data center everyone, very interesting.
 