clayton006
[H]ard|Gawd
Joined: Jan 4, 2005
Messages: 1,089
Hello,
I'm a DBA by trade, and I'm consolidating my two i7 boxes into a single box with two 6-core Xeons. I've been thinking about playing around with VM technologies, but before I do I'd like to have some opinions thrown at me.
Currently:
I had two i7 boxes with 12GB of RAM apiece, each with 3x 74GB drives dedicated to virtual machine VMDKs. (I was running VMware Server 2 on a Windows 7 host OS.) Each i7 box ran two Red Hat Enterprise Linux VMs running Oracle RAC 11gR2. All four of these VMs combined chewed up a total of 20GB of RAM (including the host OS).
My new motherboard will be an EVGA SR-2 board. I'll be doing some gaming on this setup, but I'll also dual-boot with either ESXi or Hyper-V (or triple-boot if necessary). I'll have 48GB of RAM eventually (starting out with 24GB).
Question 1:
I know I should be using a bare-metal hypervisor of some kind. Would ESXi be up to the task, or should I go with W2K8 R2 Hyper-V? (I have a TechNet account, so I already have the Windows software.) VMware Server 2 gets the job done and performs OK, but I don't know whether I could better utilize my hardware with a bare-metal hypervisor.
Question 2:
I'll have 6x 74GB 10k RPM Raptor HDDs in one computer now, plus a HighPoint RocketRAID 4-port RAID card. Should I put four of those Raptors in RAID-5 and leave the other two as separate drives? With RAC, the more spindles the better, and I don't know whether the main RAID-5 array would have enough I/O to house all four VMs' voting, OCR, and data volumes.
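For what it's worth, here's the rough math I'm working from. The per-disk IOPS figure (~125 for a 10k RPM drive) and the 50/50 read/write mix are assumptions, not measurements, and RAID-5's write penalty of 4 comes from the read-old-data/read-old-parity/write-new-data/write-new-parity cycle:

```python
def raid5_effective_iops(spindles, iops_per_disk=125, write_fraction=0.5):
    """Rough effective IOPS for a RAID-5 set under a mixed workload.

    Assumes ~125 random IOPS per 10k RPM spindle (a guess, not a
    benchmark) and the classic RAID-5 write penalty of 4 back-end
    I/Os per logical write; reads cost 1.
    """
    raw = spindles * iops_per_disk
    cost_per_io = write_fraction * 4 + (1 - write_fraction) * 1
    return raw / cost_per_io

print(raid5_effective_iops(4))  # four Raptors in one RAID-5 set
print(raid5_effective_iops(6))  # all six Raptors in one set, for comparison
```

By that estimate a 4-disk RAID-5 set gives noticeably less write throughput than the raw spindle count suggests, which is why I'm hesitant to put voting, OCR, and data all on one array.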
Question 3:
When I look to upgrade storage, should I be making multiple RAID-5 arrays (say, 12 SSDs total split into 3 RAID-5 arrays) for testing, or should I leave the drives separate to avoid I/O contention?
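One trade-off I worked out for that layout is the parity overhead. The 100GB-per-SSD figure below is just a placeholder, but the parity math holds for any size:

```python
def usable_capacity(disks, disk_gb, arrays):
    """Usable GB when `disks` drives are split evenly into RAID-5 arrays.

    RAID-5 loses one disk's worth of capacity per array to parity,
    so more (smaller) arrays means more capacity given up.
    """
    per_array = disks // arrays
    return arrays * (per_array - 1) * disk_gb

print(usable_capacity(12, 100, 3))  # three 4-disk RAID-5 arrays
print(usable_capacity(12, 100, 1))  # one big 12-disk array
```

So three 4-disk arrays burn three disks on parity versus one for a single big array, but they'd let me isolate each RAC setup's I/O on its own spindles.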
Sorry, I know I'm asking a lot of questions. The bottom line is that I want to use my new hardware as efficiently as possible to run at least two complete RAC setups (8 VMs total: 2 test prod clusters and 2 test DR clusters).