[H]orde build underway!

sabregen:
To all,

I have been pounding away at many projects over the last few months, not the least of which are HD movie rips, transcoding files for archival storage, bringing up a MyMovies database, building a new HTPC, and playing with ESX and ESXi in prep for my VCP exam in a few weeks.

About 2 weeks ago, Robstar decided it would be a good idea to post a find on eBay: a pair of Opteron Socket 1207 HE 8346 CPUs for $149 + shipping. Well, I couldn't resist, and the aftermath of that purchase has been selling off my existing ESXi box and buying new parts to replace it.


Here's what I had:
Rocketfish full tower
Enlight 470-MP EPS12v ATX PSU
Supermicro H8DC8 motherboard
2x Opteron 275 CPUs
4x4GB PC2700 ECC Registered
1x 36GB SCSI U160 10K HDD (boot)
1x HP NetRAID 1-M SCSI U160 PCI RAID controller
1x HP NetStorage 12 (with 1x 73GB U160 [ISO storage] and 7x 18GB UW drives in RAID-5 [VMs])
Ran 7 VMs using ESXi
(Photos of the old box: IMG_1729.jpg, IMG_1730.jpg)

Here's my new build:
Rocketfish full tower
Enlight 470-MP EPS12v ATX PSU
ASUS KFN32-D SLI motherboard
2x AMD Opteron Socket 1207 HE 8346 quad-cores
4x 2GB Kingston PC2-5300 ECC Registered DDR2
2x Scythe Andy Samurai HSFs
1x Dell SAS 5/iR 4 port SAS HBA (does RAID-0/1)
1x Fujitsu 10k 2.5" SAS (boot volume for now, going to add another 3)
1x Athena Power 4-in-1 5.25" hotswap enclosure for boot volume
6x WD 320GB AAKS (RAID5 for file storage/VM storage)
1x eVGA 8800GTX 768MB

So, it's quite the overhaul. The eventual plan (most likely in order):
* Add 3x more Fujitsu 73GB 10k SAS drives to the Dell SAS 5/iR for boot volume
* Add 4-in-1 5.25" hotswap enclosure for boot volume [Purchased this morning 10/17/08]
* Short-stroke (SS) the boot volume to 1/4 size = 73GB usable (quick math in the sketch after this list)
* Replace the SAS 5/iR with at least an 8-port SAS controller, and something with a lot more features
* Add another 4x73GB set of 10k 2.5" SAS drives + hot swap 4-in-1 enclosure
* SS the second 4x73GB volume to 1/4
* Second 4x73GB volume for scratch space only
* Add another 8GB of RAM
* Add PCI-E SATA controller, 8/16 port with hardware RAID capabilities
* Purchase 2x 5-in-3 hot swap enclosures for SATA drives
* Add 1TB SATA drives
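
For anyone who wants to check my math on the storage plan, here's a rough back-of-the-napkin calc. The assumptions are mine: the 4x73GB SAS boot set ends up as RAID-0 before short-stroking (the SAS 5/iR only does 0/1 anyway), and the 1TB SATA side eventually fills both 5-in-3 enclosures (10 drives) in RAID-5.

Code:
# Rough capacity math for the planned arrays (assumptions noted above).
def raid5_usable(num_drives, drive_gb):
    # RAID-5 gives up one drive's worth of capacity to parity.
    return (num_drives - 1) * drive_gb

def short_stroke(total_gb, fraction):
    # Short-stroking: only the outer fraction of the platters gets used, for speed.
    return total_gb * fraction

current_vm_array = raid5_usable(6, 320)         # 6x 320GB WD AAKS in RAID-5
boot_set = short_stroke(4 * 73, 0.25)           # 4x 73GB SAS in RAID-0, short-stroked to 1/4
future_sata = raid5_usable(10, 1000)            # hypothetical 10x 1TB SATA in RAID-5

print(current_vm_array, boot_set, future_sata)  # 1600 73.0 9000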

I've debated whether to just keep this a straight VMWare box and, later down the road (after the VCP test), migrate over to Vista x64 or Server 2k8 in workstation mode (through OS modification), but that would essentially require me to move at least 2TB of data 3x over GbE...so I canned that idea.

Here's what I've decided to do, at least, for now:
* Vista x64 Ultimate
* 6x320GB drives in RAID5 for VMs/file shares (until I get more disks)
* VMWare Workstation 6.5
* Virtualize 2x ESX servers + Openfiler iSCSI + Virtual Center Server (environment in a box, basically)
* Game on the machine
* day to day usage is a requirement
* transcode movies, and store them for the MyMovies database/HTPC usage/general streaming to other machines

I don't know what to call this project. It's not a real server, I know. It doesn't go in a rack, and it doesn't have gobs of storage like Ockie's stuff does. However, it's the best I can do for now. I plan to take pictures, and post up a worklog of the whole process.

I have specifically chosen all of the components in the system to accomplish my goals, and planned my upgrade path to achieve what I am looking for. The HSFs are under 30dB, the SATA drives are quiet, the SAS drive is quiet. I want this machine to basically be the ultimate do everything box (at least, on my budget). I want it to be big (the case helps :)), and quiet. A gentle giant under my desk...until you use it, and then it just crushes everything else I've ever used before.

I am looking for suggestions on the project name. It kicks off on Friday night, and should be mostly complete by Sunday morning. If you guys can think of anything, please let me know. Thanks for any help you might be able to offer.


Info on the environment that it's going into:
Upstairs computer room in a 2 story house. Room is painted dark grey, with blackout shades. There's brushed aluminum (silver) track lighting. Furniture is primarily black (shelves, flush mount wall organizers, storage, tv stand). Entire room is primarily black/grey/silver/white. Black and white art on the walls.

About me:
Geek. Love hardware. Love to build and mod. Love to game. Recently very into virtualization technologies, with a newfound love for storage solutions (especially fibre channel RAID arrays). Into guns, and blowing shit up in real life.

NAMES UP FOR CONSIDERATION
BigIron -ultatryon@ecc
Octoron -ultatryon@ecc
Octopus -ultatryon@ecc
Optimus Prime -nitrobass24
Opteron Prime -me (yes, I stole a little)
super-plasma-frag-some-b****es-with-eight-core-opty-server -GenesisFactor
BOHICA - [H]eaVy B
The Gibson - w1retap
CubeZilla - TechFre@k
Fustercluck -TechFre@k
[H]orde - TechFre@k
 
A few off the cuff:

Project BigIron
Project OMGWTFBBG
Octopus (get it? 8 cores!)
Octoron (8 Opteron cores)
My Other Wife (as you are going to pay out the nose from here on out making it better)
 
* Vista x64 Ultimate
* 6x320GB drives in RAID5 for VMs/file shares (until I get more disks)
* VMWare Workstation 6.5
* Virtualize 2x ESX servers + Openfiler iSCSI + Virtual Center Server (environment in a box, basically)
* Game on the machine
* day to day usage is a requirement
* transcode movies, and store them for the MyMovies database/HTPC usage/general streaming to other machines

Are you going to run ESX under VMWare? Isn't that a little counterintuitive?
 
BigIron - I like it...that old nostalgic, muscle-in-IT kind of thing. Also, it doesn't tie me to a specific set of hardware

OMGWTFBBG - OMGWTF

Octopus - kinda sweet, but I might get tied down to doing things in 8's that way, right...worth thinking about though

Octoron - that's pretty sweet...good job. Concerned that it ties me to 8 Opteron cores, though...but I guess if I upgraded, I could shoot for 8 physical CPUs and keep the name

My Other Wife - I don't think my current one will like that (although it is appropriate)
 
Are you going to run ESX under VMWare? Isn't that a little counterintuitive?

Yes, and actually, I just got the ESX 3.5 Update 2 VM (first one, I'm gonna do 2) built and running on my laptop (Q6600/4GB/3x160's in RAID-5). It is up, and I can hit it in VI Client...sweet. I'm building all the VMWare stuff to prep for the VCP exam in a few weeks. The idea of virtualizing a bare metal hypervisor, and creating VMs for those virtual hosts to host, is intriguing. Plus, it lets me do all of the stuff I need to do, and I can even create an OpenFiler VM to boot other VMs from on the 2x ESX hosts.

I'll have to draw up the logical view of how this all happens for those that don't know much about virtualization, and show the differences between the physical and the virtual views. It made my head hurt the first time someone explained it to me.

Last night when I did the installation, I hadn't configured the ESX server VM properly, got a PSOD, and it took forever to boot. I've got it figured out now, and the boot process completes, from the time I hit go to ready-to-connect in VI Client, in under 2 minutes, and that's on my laptop...not bad.
 
Hell yea sabregen now you got something working for ya.

First off, I think you should call this box Optimus Prime, like the Transformer, because this is anything but your standard box... you said it: it's a workstation, fileserver, VM server, HTPC, and encoding box rolled into one beast. BTW, I like what you're gonna put into it.

Some advice...IDK what kind of transcoding you will be doing, but I know that for me I noticed a significant slowdown going from my 15K array to even a 4x1TB RAID-0 array, so you might want to make that second SAS volume for encoding and not scratch, and just use the 1TB drives as storage.

Can't wait to see some pics
 

Optimus Prime...that's badass!

I hear ya on the array issues. I know there are going to be some limitations on speed, and I had already thought of doing the second array as scratch space for conversions.

I did mention the HTPC, and believe me, it will be playing plenty of movies, but the actual HTPC box is downstairs in the living room. This machine will be feeding the HTPC MyMovies install with its content.

Thanks for chiming in, good to see you here.
 
Since I know this guy and we drink beer together...it just came to me, after seeing what he is spending to get this together:

Project Food Stamp
Project Soup Kitchen
Project Pizza Delivery
.....I lol'ed when I thought of that!

In all seriousness, I can't come up with a good name, but I am sure it will kick A$$!
 
(Diagram: vmmap.jpg - physical vs. logical view of the VM setup)


Okay, let me explain. I'm not the best at diagrams, and I suck even worse at MS Paint. A few disclaimers - the colors don't mean shit, they're just there to differentiate. The left side is obviously the physical view, and the right is the logical (although ESX maps it out much prettier).

Physical view -
* Vista Ultimate x64 as the host OS - running on the Dell SAS 5/iR and a 73GB 10K 2.5" Fujitsu. The RAID-5 array is 6x 320GB WD AAKS 7200RPM 3.5" drives on the nForce Professional chipset

* VMWare Workstation is running inside of Windows Vista Ultimate x64 OS

* VMWare Workstation is running 4 virtual machines itself. These are called "single nested VMs" because they sit at the first layer of virtualization. There's some work to be done to get ESX virtualized (there's a rough sketch of the .vmx tweaks after this list). If anyone's interested, I will link it up. The virtual machines running directly under VMWare Workstation are as follows:

1.) ESX Host 1
2.) ESX Host 2
3.) XP based Virtual Center Server
4.) Openfiler Linux iSCSI target OS - ESX can run virtual machines from iSCSI arrays. This OS turns your basic run-of-the-mill box into an iSCSI target...something that is normally expensive to buy. I'm going to virtualize it...so it really is free, and I am free from hardware requirements (basically, what I've done is kept from having to use any more hardware at all).

* All of the single nested VMs are stored in a folder on the RAID-5 array, let's just call it "Virtual Machines." It really doesn't matter. All that matters is that you understand that it's a bunch of files, inside of folders, sitting on an NTFS-formatted RAID-5 array...and VMWare Workstation knows where these files are, and has added them to its view, so that I can run them.
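
About the "work to be done" part: ESX won't just install inside Workstation without a few edits to the VM's .vmx file. The little Python helper below is only a sketch of the idea, and the settings are the ones I remember from the guides floating around (guestOS set to 64-bit RHEL, which is what I defined the ESX VMs as; an e1000 NIC; and the monitor_control.restrict_backdoor flag that keeps ESX from noticing the outer hypervisor) - double-check them against a current nested-ESX write-up before trusting them. The file name at the bottom is just a placeholder.

Code:
# Sketch: append nested-ESX tweaks to an existing Workstation .vmx file.
# Setting names/values are from memory - verify before use.
NESTED_ESX_SETTINGS = {
    "guestOS": "rhel5-64",                        # ESX VMs defined as 64-bit RHEL
    "ethernet0.virtualDev": "e1000",              # ESX expects an e1000 NIC
    "monitor_control.restrict_backdoor": "TRUE",  # hide the outer hypervisor from ESX
}

def patch_vmx(path):
    # Read the existing .vmx, drop any old copies of these keys, then append ours.
    with open(path) as f:
        keep = [line for line in f
                if line.split("=")[0].strip().strip('"') not in NESTED_ESX_SETTINGS]
    with open(path, "w") as f:
        f.writelines(keep)
        for key, value in NESTED_ESX_SETTINGS.items():
            f.write('%s = "%s"\n' % (key, value))

# patch_vmx("ESX-Host-1.vmx")   # placeholder path - point it at your own VM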

Logical View -
* The 4 machines at the center, right-hand side are the important ones in this diagram. There are the two ESX servers (notice the orange thick bar between them?) that can talk to one another, and are aware that they are each one of two ESX host machines. The Virtual Center Server is the VM that maintains what the ESX hosts are doing, which host is running which VMs, and how everything is going. Although the ESX VMs are the brawn, the Virtual Center Server VM is the brains. Features like HA, DRS, VMotion, and SVMotion do not function without a Virtual Center Server. Then there's the Openfiler VM. All of the other boxes in this diagram (excluding the 4 in the middle) actually have their configuration and disk files stored on the Openfiler VM's hard drive (which is actually a virtual drive). The ESX hosts are mapped to the Openfiler VM's iSCSI storage, and each host can see the files/VMs that the other host sees.

* Virtual Center maintains the environment of the ESX hosts. It ensures that it can see each host by monitoring each host's heartbeat signal, and in the event that a host goes down, Virtual Center is the one that alerts the administrator of the failure/failover (depending on how you have it configured). Virtual Center is also the resource that determines what conditions trigger a dynamic VM move from one ESX host to another. In the real world, Virtual Center is the lifeblood of the ESX/VMWare environment. One Virtual Center server should manage no more than 1000 ESX hosts.

* Each ESX host, in this case, is a VM. Each has 1 CPU, 1 GB of RAM, an 8GB SCSI HDD, and a CD-ROM. The ESX VMs have been set up as Red Hat Enterprise Linux x64 machines, and Intel/AMD virtualization has been turned on. USB/Floppy/Sound has been removed. Each ESX host is managed by Virtual Center, and each ESX host is also mapped to the Openfiler VM that is storing all of the double nested VMs.

* Double Nested VMs are housed on the Openfiler VM. Their configuration files and disk files reside inside a virtual hard drive inside the Openfiler VM. The double nested VMs think, of course, that they are real physical machines, with real physical hardware, running on local disk. In actuality, they are not, and they are two layers deep in the virtualization environment. I plan to store the following VMs on the Openfiler VM:

1) Windows Server 2k3 domain controller - pretty self-explanatory on this one, just a basic AD. It'll run DNS/DHCP for all of the double nested VMs.

2) Windows XP File Server - because it will be a member of the domain, it'll use AD credentials. I doubt that I will join any of my physical machines to the domain. This is more of a test. It'll be sharing out the media that's on the RAID-5 array (that it's running on)

3) Orb Streamer - Exactly what it says. This will also be a member of the 2k3 AD domain, but because it's a streaming machine and it's using Orb, I can actually tell this double nested VM to look at the RAID-5 array (that all of this will be running on), and use my existing media library as the source media to stream out over the internet. It kind of makes your head hurt...but I can tell you that it works like a champ. I've streamed BSG episodes to my PSP while I was in the Phoenix airport. I live in Albuquerque :D

4) No-IP.com dynamic DNS updater - Yes, this program is stupid small. Yes, I can run it on another VM, but...but...but...THIS IS MY DAMNED PROJECT! It will also be an AD Domain member. It's gonna be running on a very stripped XP installation. If you don't know what this is, it's an application that you'd typically throw on a machine with internet access that's on 24/7. As long as it has an internet connection, it reports your external IP address to No-IP.com's servers, and makes sure that DNS everywhere in the world has a record of your IP...and you get to pick the DNS name. Perfect for home users that don't want to get a business line/contract with their ISP to get a static IP. All I have to remember is the DNS name. I've been using this for 8 years now. My IP can change all it wants; every 5 minutes, this program updates the rest of the world. The result is that I can remote into any of my machines (real or virtual) from anywhere in the world. This is great when I am out of town, and my wife breaks something. (There's a little sketch of the idea after this list.)

5) Network Packet sniffer - This will be an AD Domain member. I like Wireshark. This will be a stripped down XP VM that will be monitoring the entire network (double nested VMs, single nested VMs, and the real network) in promiscuous mode. I can pop into this VM and capture packets, see collisions, ARPs, all that good stuff. Very useful for network issues.
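
For anyone curious what a dynamic DNS updater like that actually does under the hood, here's a toy sketch of the loop. This is not No-IP's client or API - the check URL and update URL are made-up placeholders, just to show the idea of "notice the external IP changed, tell the DNS provider."

Code:
# Toy dynamic-DNS updater loop - placeholder URLs, not No-IP's real API.
import time
import urllib.request

CHECK_URL = "http://example.com/whats-my-ip"                             # hypothetical "what is my IP" service
UPDATE_URL = "http://example.com/update?host=myhouse.example.net&ip=%s"  # hypothetical update endpoint

def current_external_ip():
    return urllib.request.urlopen(CHECK_URL).read().decode().strip()

def main():
    last_ip = None
    while True:
        ip = current_external_ip()
        if ip != last_ip:
            # Only bother the DNS provider when the ISP actually hands out a new address.
            urllib.request.urlopen(UPDATE_URL % ip)
            last_ip = ip
        time.sleep(300)   # check every 5 minutes, like the post describes

# main()   # left commented so this stays a sketch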

So, there it is, in a nutshell. Please let me know if you guys have any questions on the planned layout for the VM infrastructure. I know this is not really on the topic of the build (hardware), but it is a big part of the planned use.
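
And if the single nested vs. double nested thing still makes your head hurt, here's the same layout boiled down into a little Python structure - nothing official, just the nesting spelled out (the names are mine):

Code:
# The whole "environment in a box," one entry per layer of nesting.
lab = {
    "physical_host": "Vista x64 Ultimate + VMWare Workstation 6.5 on the new Opteron box",
    "single_nested_vms": {   # run directly by Workstation, stored on the RAID-5 array
        "esx_host_1": "ESX 3.5 VM (1 CPU / 1GB RAM / 8GB disk)",
        "esx_host_2": "ESX 3.5 VM (1 CPU / 1GB RAM / 8GB disk)",
        "virtual_center": "VM managing both ESX hosts (HA/DRS/VMotion)",
        "openfiler": "iSCSI target holding the double nested VMs",
    },
    "double_nested_vms": [   # run by the ESX hosts, stored on the Openfiler LUNs
        "Windows Server 2k3 domain controller (AD/DNS/DHCP)",
        "XP file server",
        "XP Orb streamer",
        "XP No-IP.com updater",
        "XP Wireshark packet sniffer",
    ],
}

for layer, contents in lab.items():
    print(layer, "->", contents)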
 
Arcy...send me your WC stuff! Hehe, I love your rigs man, so it's good to see ya in here.

I agree. Pics o plenty during the build.

Oh, totally subbed :D

I love the big hardware systems ;)

I've got a few maze 4 mounting kits you can have, but that's about it :p
 
Instead of a Project name, I am quite interested in what beer you will be drinking when this is being put together
 

You only want to know that, so you can plan ahead on whether you need to bring your own or not...


...YOU DO
 
I am thinking that this will call for....let's see.........Boddington's?

Maybe, for the name, something of a combination of stuff you have worked on, like the Roadrunner supercomputer and other cool stuff; can't name any 'cause I have had some beer already...:eek:
 
Boddingtons sounds great.

Update: ESX hosts and VC Server installed/configured. Datacenter and cluster configured. HA/DRS configured. OpenFiler installed and configured...but I goofed something up. The ESX hosts are not seeing targets. Process is halted for tonight. Anyone know anything about iSCSI? I know just about jack shit. After I get OpenFiler functional, then I just have to install 5x VMs on that space.

OpenFiler iSCSI target LUNs, 25GB virtual disk for the double nested VMs:
* 4GB for XP packet sniffer
* 4GB for XP No-IP.com Dynamic DNS Updater
* 4GB for XP Orb Streamer
* 4GB for XP File Server
* Remaining for the 2k3 domain server (~9GB)
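
Quick sanity check on that carve-up - nothing clever, just making sure the numbers add up:

Code:
# 25GB Openfiler virtual disk carved into LUNs for the double nested VMs.
total_gb = 25
xp_luns = 4 * 4                        # four XP VMs at 4GB each
remaining_for_2k3 = total_gb - xp_luns
print(remaining_for_2k3)               # 9 - matches the "~9GB" above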
 

I never could get Openfiler to work the way I wanted it...particularly the integration with a domain controller for single sign-on.
 

I have a second vSwitch on both ESX hosts. I have created a second virtual disk for the Openfiler VM (which it saw just fine). My IPs are set. The ESX hosts are allowing iSCSI through the firewall, and the iSCSI service is turned on in Openfiler. All of the DNS/gateway information on Openfiler and the ESX hosts is correct. I have created the volume, the volume group, and the partitions, allowed access to the ESX hosts in Openfiler, and have mapped the LUNs. The ESX hosts can see that there is an iSCSI server, but they get no targets. It's really frustrating me. Do you use an iSCSI program? If so, which one?
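
In case anyone else hits the same wall: a quick thing to rule out is plain connectivity to the target portal. A throwaway check like the one below, run from any box that can reach the Openfiler VM (the IP is a placeholder), only proves the iSCSI port answers - it says nothing about whether the LUNs are actually mapped to the initiator, which is where my problem seems to be.

Code:
# Quick-and-dirty check that an iSCSI target portal answers on TCP 3260.
# This does NOT prove LUN mapping/ACLs are right - just basic reachability.
import socket

OPENFILER_IP = "192.168.1.50"   # placeholder - use your Openfiler VM's address
ISCSI_PORT = 3260               # standard iSCSI target port

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(3)
try:
    sock.connect((OPENFILER_IP, ISCSI_PORT))
    print("Portal is listening on %s:%d" % (OPENFILER_IP, ISCSI_PORT))
except OSError as e:
    print("No answer from %s:%d - %s" % (OPENFILER_IP, ISCSI_PORT, e))
finally:
    sock.close()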
 
UPDATE: ESX servers are now seeing the iSCSI target LUNs...don't know what I did wrong, but I suspect it was an FQDN/DNS/IP configuration issue on the ESX hosts. Damned VMWare is picky about that stuff. Gotta go to work now...building double nested VMs tonight!
 
Just an FYI, the PERC 5/i is going to be a bottleneck with more than 4 10K SAS drives on it. My 5/i topped out at roughly 250MB/s with 4 15K drives... when I switched to an Adaptec 5805, my speeds increased to about 350MB/s.
 
Also, on a hardware-related note, how do you justify spending $400 on a motherboard for a pair of $75 CPUs?
 
[H]ea\/y B said:
Project WOPR

Project BOHICA

Dude...BOHICA? As in Bend Over, Here It Comes Again? Appropriate, for the amount of $ this is requiring.

What the hell is WOPR?
 
The Gibson

Gibson...as in 1/8 of an onion in a rocks glass full of Vodka? ;) Good to see you in here, w1retap. That new box you built is really sweet. Who'd have thought MDF could look so damned pretty?
 