53rv3r pr0n

Wow. I had no idea it took all that to run [H]ardOCP and the forums. Then again, I hadn't really thought about it. It's really very cool to see behind the scenes. I too pictured all this in your house for some reason, much like my servers. Kyle, keep up the awesome work. Millions of geeks couldn't live a day without a lil [H]ardness.

As for the pics, giggity for the servers, giggity for the rack, and giggity for all the rest of the equipment. GIGGITY GIGGITY GIGGITY GOO!
 
FreeBSD is your host OS? That is really interesting. Our needs here at the university are much smaller than yours, yet we run Server 2003 as our host OS (not my decision, by the way) with VMware as the virtualization software. The university's hosting service got hacked several months back, and due to the number of issues we had with them, we switched over to our own hosting. We were still in the process of testing and our setup was not quite ready for prime time, but we deployed anyway because of the number of issues we were having by not having control over anything. We've had zero downtime since then.
 
Are you saying that Oracle performance dropped even with all of those resources, or only if we add a few more VMs, which we are not intending to do? I have to play DBA as we can't afford one, which is why I am asking.
Yes, performance didn't match up with what VMware promised, primarily on I/O, but more importantly Oracle is not certified to run on VMware. Oracle has their own VM product and they want you to use it. Running Oracle on a non-certified platform would be a very, very big mistake. Problem is that Oracle's VM is not a mature product and they had little expertise with it internally-- our sales rep didn't even want to push for it. So our choices were a) go with dedicated servers, which actually work, b) go with VMware, which tested slower, and lose our Oracle support, or c) go with Oracle VM, which was an unknown quantity. We chose a, and you should too. Just say no to VMs for Oracle.
 
VMware is made of Adamantium and Win :D

The ability to maintain a virtual machine instance indefinitely, and start it on any hardware, makes everything so much easier. Seeing as how the operating system guys left the door wide open, I'm so thrilled to see VMware (and *Xen, Softricity, and the others) filling the operational void.

<dream>Maybe some day it will be possible to run isolated applications on PC hardware without multiple OS instances - like mainframes have been doing since back in the day.</dream>

* Ignoring Citrix - open source Xen still lives ;)
 
FreeBSD is your host OS? That is really interesting. [...] We've had zero downtime since then.

Several years ago, we were making our plans to move to FreeBSD because of the performance we had seen it deliver in our environment, while offering better security. Before we got ready to deploy, one of our RedHat boxes got hacked and the intruder got into every box in the cab. He locked us out at the box itself. Guy was awesome. He was a phisher. REALLY good at what he did. So we moved to FreeBSD before we were ready, which caused a lot of pain, but it has paid off in the long run. Our environment has been very secure.

We moved off our last CentOS boxes here a few months ago. They were a couple of ad servers from "Spinbox" (NEVER USE THESE GUYS) that kept getting hacked, and the damn boxes were being used for attacks on DOD computers and NASA boxes. Talk about a pain in the ass. Spinbox would not fix their issues and I finally just went down and pulled the network cables out of them. Idiots. We had been with Google's Ad Manager beta for about 9 months at that time, and we fully migrated over to them in 6 hours. Google's system was still lacking at the time, but in the last few months they have fixed their software. Good stuff now.
 
[...] We chose a, and you should too. Just say no to VMs for Oracle.
Well yes, there is that. You can always throw more hardware at the performance issue - especially if you use Intel 10 gig and virtual I/O to offload the CPU and (more importantly) lower the latency. In my experience, brute-forcing I/O adds about 100 microseconds to each buffer cache op that has to go to the disk controller - I like Areca with 1 gig of write-back cache and a battery if you use host RAID, or something iSCSI if you don't trust host RAID.

Throwing more Oracle licenses at a performance issue is cost prohibitive, and the support issue is a non-starter for most commercial deployments.

...but running a production Oracle environment in an unsupported configuration doesn't mean it won't work - it just means you're hardcore :cool:
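
To put that 100 microsecond figure in perspective, here's some quick napkin math. Every workload number in it is an assumption I picked for illustration (the 5,000-read query, the 200 microsecond baseline), not a measurement from anyone's environment:

```python
# Back-of-the-envelope cost of ~100 us of added latency per physical I/O.
# All workload numbers below are illustrative assumptions, not measurements.

added_latency = 100e-6    # assumed virtualization overhead per I/O (seconds)
base_latency = 200e-6     # assumed bare-metal device + controller latency
reads_per_query = 5_000   # hypothetical I/O-bound report query

# Extra wall-clock time one query eats if every read pays the penalty:
print(f"extra time per query: {added_latency * reads_per_query:.2f} s")

# For a single synchronous I/O stream, latency directly caps IOPS:
for label, lat in [("bare metal", base_latency),
                   ("virtualized", base_latency + added_latency)]:
    print(f"{label}: ~{1 / lat:,.0f} IOPS per stream")
```

With those assumptions, a 5,000-read query picks up half a second of wall time, and a synchronous I/O stream drops from ~5,000 to ~3,333 IOPS - which is why the latency matters more than the bandwidth.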
 
Kyle, are you fairly experienced with FreeBSD? I haven't played with it much, but I have been thinking about fiddling with it. Would you recommend it over a Linux server distro such as Ubuntu Server? I've had great luck with Ubuntu Server security-wise but haven't ever messed with FreeBSD.

Please understand, I am not the admin. I know shit about actually using FreeBSD. But obviously the admin and I work tightly together. He is the software guy, and I am the hardware guy. That all said, we use FreeBSD for very specific applications: web servers, and MySQL (HardForum) and Postgres (HardOCP) database servers. FreeBSD has afforded us great performance and tremendous levels of security (we are not exactly low profile) in what we use it for. FreeBSD still gives us a very small install footprint compared to the Linux distros out there. We just feel as though FreeBSD has not been sacrificed to the bloat gods. FreeBSD is still "lean and mean," if you will. Everything we need, and very little that we don't. You will not, however, find as much support for it, either in the community or from applications, because it is not as popular. I would suggest a good bit of research before moving to it, depending on exactly what you want to do. Also, and we have been doing this for over a year now, we test on an old server Cliff keeps at home using a virtualized environment. That has made deployment a lot easier, and now that we are moving to a virtualized environment, testing in the actual environment gets pretty easy as well.
 
Cool that [H] runs on Postgres and MySQL; both are great projects with a lot of install base and community support. FreeBSD was definitely the leader in connection scaling for several years, and its kqueue still has better maturity than Linux's epoll - both in-kernel and in the applications that use it (see the sketch at the end of this post).

Agreed :D

I wish vBulletin would give us the option to use Postgres rather than MySQL. We would if we could.
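
For anyone curious what that event API stuff actually looks like from userland, here's a minimal sketch using Python's selectors module, which sits on top of kqueue on FreeBSD and epoll on Linux. The port number and echo behavior are arbitrary choices for the demo, not anything [H] actually runs:

```python
# Minimal non-blocking echo server on the kernel event APIs discussed above:
# selectors.DefaultSelector uses kqueue on FreeBSD and epoll on Linux.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, _addr = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(4096)
    if data:
        conn.send(data)   # echo back; a real server would buffer short writes
    else:                 # empty read means the peer closed the connection
        sel.unregister(conn)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8888))
server.listen(128)
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _events in sel.select():  # blocks in kevent()/epoll_wait()
        key.data(key.fileobj)          # dispatch to accept() or echo()
```

One kernel call wakes you up for exactly the sockets that have work, which is what lets a single process hold tens of thousands of mostly-idle forum connections without burning CPU.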
 
I greatly prefer RHEL to FreeBSD for MySQL, and of course FreeBSD can't run Oracle at all, so it's a non-issue. A lot of that is just my greater familiarity with Linux. I do admin several MySQL sites on BSD and they generally run OK, but I'm not a big fan of being forced to recompile libraries like linuxthreads, as well as the database binaries themselves, to get decent performance. Rolling your own also blocks enterprise support from the mysql.com guys, which, while rarely needed (MySQL support is largely google.com), is of truly excellent quality.

I haven't had any security issues that could be attributed to Linux, but again, we're talking about funded enterprise environments behind firewalls, with both host- and network-based IDS running, as well as people whose job it is to keep up on Bugtraq and patches. The only times environments we support have been penetrated, it was attributed to application logic faults, input checking, etc., which we can't control.

As for the support issue-- I don't care if you have Tom Kyte working as your DBA, you want support for Oracle. It's not optional.
 
They used to offer very different feature sets with different philosophies, leading to a Windows vs. MacOS type religious war. These days, it's more analogous to Linux vs. FreeBSD. One is several orders of magnitude more popular than the other, but either one will do what you need it to. But keep in mind that popularity really matters with free open-source software, because the community is your support structure via Google.

I will say that the last time I checked, replication in Postgres was still horrible, and MySQL has come a looooong way with 5.0.
 
A friend of mine used to host websites on The Planet, and then one day he lost everything. He called The Planet and they told him they had just had a massive fire and that they were not responsible for users not backing up their own data.

While I agree that users should back up their work just in case The Planet fucks up, I'd like to think that a professional server farm would run a collective backup server as well.

My friend would also no longer suggest The Planet as a host to anyone.

I would not host with them either if that happened (fire), for the simple reason that a data center isn't supposed to burn down. Someone f8cked up bad. I have been to The Planet (Dallas, TX) and I noticed that their facility was a tier 2 style facility (attached facility), which would explain fire issues. However, if this did happen, they are not responsible for lost data. If they had offered backup as a service and failed to provide it adequately, then you would have a case.

Oh yeah, tell your friend to host with my company ;)
 
Dell.

Well, so much for the idea that [H] was upgrading. Good luck getting service on those. Been down that road, with a much more substantial purchase than [H]'s by about a factor of four, and did nothing but spend all our time doing tech support's job for them, then arguing and fighting for the promised SLAs. Couldn't even get a replacement drive the next day, forget anyone on the phone who spoke English competently. We finally ejected Dell and put them on the blacklist. They don't even dare send us fliers any more, and we're worth a lot of sales and PR.

Meantime, IBM's been nothing but reliable, prompt, and courteous. They have yet to miss an SLA at all. Dead motherboard for an out-of-production server? Next morning, it's there. Blade down at 4:45 PM on Friday? Replacement installed before 6:00 PM. Box failed three times? Here, have a permanent loaner with twice the cores and another two gigs of memory.
 
My company is officially hardware- and software-agnostic, but we do have preferences, and we prefer Dells. The trick to getting good service is to establish a relationship with your sales rep. Even if you don't do a ton of business with them, calling the guy up and saying hello makes a real difference. It also impacts procurement cost in a huge way. Dell may look huge and impersonal, but it's run by people just like anything else.
 
Well, so much for the idea that [H] was upgrading. Good luck getting service on those. [...] Meantime, IBM's been nothing but reliable, prompt, and courteous. [...]

I've always joked that Dell's customer support sucks so bad because their hardware is too good: the support technicians have little to no experience helping their customers.

I've had one issue that resulted in calling Dell customer support for my company, and that was when Dell sold me the wrong Citrix license pack: I wanted Access Essentials, but they gave me the concurrent package for the MetaFrame servers. It took a while to convince them that they're not the same, and that you can't have concurrent licenses for Access Essentials, only Named licenses. Our Dell servers have had zero glitches thus far.
 
What is ESXi, and what do you mean you got it for free?

Also, when you say "We handle all of our own domain name purchases and DNS servers," you mean you "pay" for them, right? I imagine that kind of service is usually the focused business of specialized companies.

I basically just wanted to get a better understanding of this, since I realized one day, when someone asked me to take care of setting up a website, that I really didn't fully understand how that worked (again, beyond the layman's basic understanding of some dude in his home hooking up a machine to host a website on some personal domain).
 
Nice. Does that building have backup power? I'd hate to see all these machines struggle if the power went out for any length of time in the summer :eek:
 
Nice. Does that building have backup power? I'd hate to see all these machines struggle if the power went out for any length of time in the summer :eek:

Yes, power is important to us too.

Fully redundant (2N) electrical power is supplied via four independent TXU substation feeders. Full power reliability is provided through six in-building main switchgear rooms. Four main switchgear rooms feed each quadrant with 480-volt service through multiple 4,000-amp electrical risers.
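
For a sense of scale, here's some napkin math on what one of those risers can carry. It assumes the 480 V / 4,000 A service is three-phase, which is typical for a facility like this but is my assumption, not something from the datacenter's spec sheet:

```python
# Napkin math on the feeder capacity quoted above. The three-phase
# assumption is mine; the facility's actual rating may differ.
import math

volts = 480.0   # service voltage from the spec above
amps = 4000.0   # riser ampacity from the spec above

single_phase_kva = volts * amps / 1000                 # ~1,920 kVA
three_phase_kva = math.sqrt(3) * volts * amps / 1000   # ~3,326 kVA

print(f"per riser, single-phase: {single_phase_kva:,.0f} kVA")
print(f"per riser, three-phase:  {three_phase_kva:,.0f} kVA")
```

Either way, a single riser is megawatt-class, and there are multiple risers per quadrant before you even get to the 2N redundancy.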
 
I didn't see any sort of KVM solution for your racks. Do you just crashcart it when you need local access?
 
The most important thing when picking a KVM (IMO anyway) is making sure whether any machines absolutely require PS/2 or USB. I find it frustrating when you discover that a box that shouldn't be picky about USB... is in fact so picky about it that using USB is a waste of time. Let's not forget boxes that have no PS/2 ports. There are plenty of KVMs that do both; I just like to be safe rather than sorry and make sure I can get into CMOS setup and have the right cables the first time around.

With that said, I don't know how Dell servers behave or are equipped port-wise with reference to this as we're an IBM shop. :) All I know is my two x3650s don't have PS2 ports.
 
I might have something for you. We consolidated another datacenter recently, and I believe I recovered some KVM equipment. If there is a good 16+ port IP KVM switch in there, I'll hook you up.

Let me look tomorrow and see what I have.
 
Awesomeness. Help is always appreciated. None of our boxes have PS/2; it's all USB nowadays.
 
KVM over IP is nice, and with the right system you can use PS/2 or USB dongles.
Our standard is the Avocent DSR series, but it is rather spendy. Most Dell servers have their own ILO interface, which might be a stopgap option to ditch the crashcart if you are headed towards an all-Dell environment anyway.
 
Honestly, I have not looked into this much as I assigned it to Cliff, but I don't see an IP solution working. We are using our NICs for other things besides keyboards. :)
 
I realize the standard for servers is pre-built stuff, but I can't resist commenting on the irony of an enthusiast x86 hardware site that consistently dogs Apple users who can't build their own systems, running on a pre-configured Dell, the computer company that I recommend for the grandmothers of my friends. :D

Of course, I type this on a Dell laptop (that has given me nothing but trouble), as the BYO options for lappies are pretty limited.

This virtualization stuff is interesting, and I am surprised you don't take a bigger performance hit. I guess it is just the leanness of BSD? I understand how virtualization works; it's the why I don't. Does it really come down to price?
 
I realize the standard for servers is pre-built stuff, but I can't resist commenting on the irony of an enthusiast x86 hardware site that consistently dogs Apple users who can't build their own systems, running on a pre-configured Dell [...]

for "enterprise" level solutions though, you generally have to go with Dell. What do you want them to do? build their own sever blades from sratch???

On top of that most botique places that are often thrown around here dont make server solutions for the most part either.

furthermore, hardocp has had some positive articles on dell anyway (albeit i have found them highly questionable, since outside of my previous statement i do beleive dell sucks ball and that it was more of a puff piece, especially from their continually pathetic mainstream user business practices and tech support).
 
P.S. I own a computer made by Maingear, but my S-IPS LCD monitor is a Dell ;) (since Maingear's monitor offerings were indeed nothing special and the Dell I bought was in fact one of the best monitors in its size class).
 
This virtualization stuff is interesting, and I am surprised you don't take a bigger performance hit. I guess it is just the leanness of BSD? I understand how virtualization works; it's the why I don't. Does it really come down to price?

There are many reasons to virtualize, and price is a big component, but by no means the only component. Physical servers typically do not use 100% of their resources 100% of the time. Virtualization allows you to leverage your total resource pool across a larger number of servers which drives down your unit cost.

Then you get into space, power, and cooling which are costly in the datacenter world. By reducing your number of physical servers, you can reduce your required rack space, which in turn can reduce your space, power, and cooling needs. We virtualized over 500 physical servers this year onto 64 HP blades. That's 80% of our physical environment at one datacenter. Just the effect on operational costs alone was enough to make our accounting department faint in shock.

After that, you add in the effects of high availability and D.R. We make use of VMotion, which allows VMware to move virtual servers across physical hosts without shutting them off. If eight virtuals are sitting on one host and suddenly one of them demands enough resources to impact the other seven, VMotion will migrate that VM to another host automatically and behind the scenes. There is no noticeable impact on the server other than that it was able to ramp up to the demand unhindered. You can lose a physical host, take one down for upgrades or maintenance, whatever you need.

Since the VM's reside on a large SAN array as "files", we are also able to replicate that data from one set of storage frames to another at a different datacenter in near real-time. Using Site Recovery Manager, we can actually set those VM's to come up on cold hardware that is just sitting there waiting for a disaster. Once it occurs, the VM's will come online and assume their old identities at the new site.

Using slick tools like the VMware converters or PlateSpin PowerConvert (what we use), you can even perform physical-to-virtual conversions without even turning off the server. We converted many of our physicals on the fly using PlateSpin. Once the conversion takes place, the old server shuts down and the new virtual assumes its place. Very cool stuff.

I could go on, but I'm probably starting to bore you at this point. As you can see, though, virtualization is very cool technology and has uses that aren't readily apparent at first.
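
To make the consolidation arithmetic above concrete, here's a quick sketch. The 500 physicals and 64 blades are from this post; the utilization and blade-capacity figures are assumptions I picked for illustration:

```python
# Rough consolidation math. Only the 500-physicals and 64-blades numbers
# come from the post; everything else is an illustrative assumption.

physicals = 500        # physical servers virtualized (from the post)
blades = 64            # HP blades they landed on (from the post)
avg_util = 0.10        # assumed average CPU use of an old physical box
target_util = 0.70     # assumed safe sustained utilization per blade
blade_capacity = 4.0   # assumed compute per blade, in old-server units

print(f"consolidation ratio: {physicals / blades:.1f} : 1")  # ~7.8 : 1

demand = physicals * avg_util                   # total average demand
supply = blades * blade_capacity * target_util  # usable blade capacity
print(f"average demand {demand:.0f} vs. usable supply {supply:.0f} "
      f"(old-server units)")
```

With those assumptions the blades still have over 3x headroom at an average day's load, which is exactly the slack VMotion needs to absorb one VM spiking or one host dropping out.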
 
Using a prebuilt/configured vendor solution (HP, Dell, IBM, etc.) versus a "homebrew" server is, from a reliability standpoint, a no-brainer. :cool:

While it might seem nostalgic to run the [H] on overclocked/modded hardware, the reality is that that kind of setup will not give you the 99.9% uptime that this site demands. Time is money, and downtime is lost ad revenue that will go to another site.
[H] is a business and has to make business decisions. Revenue is key here.

It's kind of like automobiles: you may drive a Ford or a Saturn or whatever, and it works for you because you only have to worry about driving yourself and your friends/family around town, maybe on a road trip every now and then. Your car is "designed" to meet those kinds of needs.
[H]ardOCP is driving around a LOT more people than your little car; they need the computer equivalent of a big rig semi truck/bus/train/(insert your own metaphor here) that is made to carry a lot of stuff (content) all over the globe constantly with very little downtime.

OEM-built servers are designed and tested (some more thoroughly than others) to much higher standards than desktops are, and subsequently perform more reliably in much higher stress environments.
I would never trust a homebrew server in a production enterprise environment; it's just asking for trouble.
 
[...] running a production Oracle environment in an unsupported configuration doesn't mean it won't work - it just means you're hardcore :cool:

We are already running active/active in a cluster, but on our new setup we really only want to virtualize for disaster recovery anyway. Guess we will try it out, since we've got some spare time with the other system still up and humming along.

Thanks for the answers guys, great forum as usual!
 
[...] Since the VM's reside on a large SAN array as "files", we are also able to replicate that data from one set of storage frames to another at a different datacenter in near real-time. [...]


This is the entire reason we went to VMware. Yes, virtualization is nice, but the disaster recovery component is just plain badass.
 
I realize the standard for servers is pre-built stuff, but I can't resist commenting on the irony of an enthusiast x86 hardware site that consistently dogs Apple users who can't build their own systems, running on a pre-configured Dell [...]

Much like any other enthusiast might, I personally built ALL of our servers until it made more sense to purchase them prebuilt, in terms of cash, reliability, and the time to configure and deploy. Until you have built your own enterprise-class servers and deployed them in a heavily loaded environment, you will not understand. I have, and I do understand. Today, and the market has greatly changed in those 10 years, there are good reasons to buy fully tested and configured hardware in an "enterprise" environment. The simple fact of the matter is that I cannot build servers even close to as good as Dell can. Desktops... that's another story.
 
That's exactly how we do it too: KVM using a dongle (2x USB (or 2x PS/2) + 1x VGA = 1x RJ45) and hitting Print Screen to switch servers.
 
Not always the case. If you have lights-out management or a DRAC, then you are connecting via a management NIC directly to your network via IP. Some NICs, such as those on Supermicro boards, are dual-personality, in that they can do both the IP KVM function and general networking functions for communications.

We have DRACs in all of our servers. So we don't need dongles? Just the KVM switch, or not even that? Any documentation you can point me to on this would be great. I could not find anything at Dell.
 
Plug your DRAC directly into your switch.

Boot your server (I forget which key you hit during boot, but there should be a prompt for DRAC configuration), then in the DRAC configuration set your IP and your settings for your DRAC access card... once this is done, save and finish booting.

Go to another system and punch the IP into your browser; the default user is root, default password is calvin.

You now have full remote access and full desktop and remote media access.

You are the effin man! I did not know the DRAC even had this function. We bought them so we could reboot a locked box without driving 50 miles. :) Obviously we have them set up to access these functions through the browser interface, but I did not know the KVM ability was available.
 
That's right, I forgot that Dell calls it DRAC, Dell Remote Access Controller. On HP machines it's called iLO, Integrated Lights-Out. Both do the same functions, usually using a separate "management" port.
 
It's a full-blown KVM: you can do reboots, configure the BIOS, do a full remote OS installation, boot from remote media, even remote-boot a floppy; you have full console functions, and you can monitor fans, temps, and drives, set alarms, etc. It's pretty nice to play with; it's complete remote hands for your administration.

Just did not occur to me to use it inside the data center. DOH!
 
for "enterprise" level solutions though, you generally have to go with Dell. What do you want them to do? build their own sever blades from sratch???

No. Call a company that isn't absolutely worthless, deceptive, and a complete and total ripoff. For example, IBM or HP.

Dell has been thrown out of so many midsize shops for their deceptive practices and antics, it's not even funny. There are fewer than two mid-to-large-cap businesses in the area here which still permit Dell to even bid on anything other than desktop systems. For a reason.

Dell is not enterprise; they are the booby prize. Period. People may not like my delivery, but it is my job to know this better than anyone here. Enterprise systems are what I have done for a living for over 15 years - and I don't mean just the extreme low end, that being all things x86.

Also, Kyle, you cannot do DRAC the way you think you can. Or want. As usual, the people who claim you can are divorced from the reality of purchasing. DRAC is a separately licensable feature, which Dell charges exorbitantly for, just like HP. What you get for free is just about nothing. Only IBM does not charge additional licensing on the RSA (Remote Supervisor Adapter).
 
No. Call a company that isn't absolutely worthless, deceptive, and a complete and total ripoff. For example, IBM or HP. [...]

Dell has yet to deceive me. I don't know what you're on about.
 