farscapesg1
2[H]4U
- Joined
- Aug 4, 2004
- Messages
- 2,648
Ugh... I've been going back and forth on this for a while and just can't make a decision. I'm currently running an installation of OpenIndiana with the following:
A pool of 6 3TB drives in mirrored pairs shared for general storage (music, video, documents, etc.)
A pool of 6 320GB drives in mirrored pairs shared via iSCSI as a VMware datastore
A pool of 4 240GB SSD drives in RAIDZ1 shared via iSCSI as a VMware datastore
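For reference, rebuilding that pool layout on a fresh OS would look something like the sketch below. This is just a rough outline, not exact commands from my setup -- the pool names and device names (da0, da1, etc.) are placeholders, and they'd need real device IDs on whatever platform ends up hosting it (OmniOS/FreeBSD/Linux):

```shell
# Sketch only -- pool names and device names are placeholders.

# Three mirrored pairs of 3TB drives for general storage:
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5

# Three mirrored pairs of 320GB drives for a VMware datastore:
zpool create vmpool \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11

# Four 240GB SSDs in RAIDZ1 for the SSD datastore:
zpool create ssdpool raidz1 da12 da13 da14 da15

# Block volumes (zvols) to export over iSCSI -- sizes are examples:
zfs create -V 800G vmpool/esxi-lun0
zfs create -V 600G ssdpool/esxi-lun1
```

The actual iSCSI target setup differs by platform (COMSTAR on the illumos side, ctld on FreeBSD), so I've left that part out.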
This is currently being run as a standalone server in a Norco 4224 case, using a Supermicro X8SIL-F, Xeon X3430, 16GB RAM, 2 M1015 storage cards, and a QLogic 2462 providing fiber for two ESXi hosts (only one is currently in use).
Due to some issues with the OS (which has been limping along), I need to rebuild and I'm currently looking at the following options.
1) I have a spare X8SIL-F, Xeon X3440, and 32GB RAM. I can get everything set up (using OmniOS/Linux/FreeBSD/whatever). The downside to this is the PCIe slot limitation on the motherboard. With the QLogic card and two M1015 controllers, I'm out of slots and stuck with only the two onboard NICs, unless I replace the two M1015s with something like an LSI 9201-16i, which is too pricey right now.
2) I have two unused Dell R710 servers (one with a single X5550 and 64GB RAM, the other with dual X5650s and 64GB RAM). I would need to pick up an external HBA (looking at an LSI 9201-16e) and a couple SFF-8088-to-SFF-8087 adapter brackets so I can re-purpose the Norco case as a JBOD case for the drives (I already have a JBOD board to control the power).
With either option, I'm leaning towards moving back to an "all-in-one" setup so I can run a few other essential VMs alongside the storage (storage, AD, vCenter, and maybe Plex).
I guess the pros/cons I'm weighing are:
X8SIL/x3440
- Pros = lower power (about 80-90W average usage), storage and processing in same box
- Cons = 3 PCIe slots max, limited to 32GB RAM, only 2 onboard NICs
Dell R710
- Pros = 4 PCIe slots, 4 onboard NICs, dual proc support, 64GB RAM (have memory to increase up to 96GB), able to run additional virtual systems
- Cons = no 3.5" storage, need external SAS cabling, more power (170W with the dual X5650s or 140W with the single X5550), plus additional power to run the hard drives in a JBOD case.