FreakinAye
n00b
- Joined
- Mar 14, 2006
- Messages
- 23
I'm looking at upgrading an old VMware environment (a bunch of Win2k3 x64 boxes running VMware Server 1.x on local storage) to something more... modern.
I'm going to start with a pair of ESXi5 boxes with shared storage, and have been going back and forth on what to do for the storage.
The new storage needs to be able to run 25-50 WinXP desktops (1GB RAM each) with moderate use. CPU and RAM should be fine with the ESX boxes I have planned.
Initially I was looking at a QNAP or Synology 8-bay NAS for about $1000 diskless, using NFS or iSCSI with 8x 1TB WD Black drives. The more I read about these, the more I think they aren't up to the task... I'm concerned about both performance and reliability, so I'm back to looking at a homebuilt NAS.
Here is my current plan for OI/napp-it:
MB: X9SCM-F-O - $200
CPU: Xeon E3-1230 - $240
RAM: 4x4GB of Kingston unbuffered ECC - $140
HBA: ServeRaid M1015 reflashed - $100
HDD: WD Black * ?????
Case: Probably a tower case with 2 ICY DOCK (or similar) bays, but I may just get a Norco instead if I'm going over 8-10 disks
That's about as far as I've gotten with hardware. I'm really not sure how many drives I'm going to need to meet the performance requirements of the VMs.
*EDIT* Planning on using RAIDZ2
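To put rough numbers on the drive-count question, here's my back-of-envelope math (the per-desktop and per-drive IOPS figures are just my guesses, not measurements):

```python
# Back-of-envelope IOPS sizing for the desktop pool.
# ASSUMPTIONS: per-desktop and per-drive IOPS are ballpark guesses.

DESKTOPS = 50            # worst case from the 25-50 range
IOPS_PER_DESKTOP = 15    # assumed "moderate use" XP desktop
DRIVE_IOPS = 100         # assumed random IOPS for a 7200rpm SATA drive

required_iops = DESKTOPS * IOPS_PER_DESKTOP   # 750

# Rule of thumb: a RAIDZ2 vdev delivers roughly ONE drive's worth of
# random IOPS regardless of how many disks are in it, so what matters
# is the vdev count, not the raw spindle count.
vdevs_needed = -(-required_iops // DRIVE_IOPS)  # ceiling division

print(f"required ~{required_iops} IOPS -> ~{vdevs_needed} RAIDZ2 vdevs")
```

If that rule of thumb holds, a single 8-disk RAIDZ2 vdev looks badly short on random IOPS at the 50-desktop end, which is why I'm unsure how many drives (or how many vdevs / mirrored pairs) I actually need.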
Questions
1) Will 8x1TB WD Black drives handle this, or do I need more/faster drives?
2) Will I be able to benefit from deduplication in this scenario, and is 16GB enough RAM for it (2-3GB per TB was the recommendation I saw)?
3) Is iSCSI or NFS generally preferred with ZFS on ESXi5?
4) Do I dedicate a NIC port to storage traffic, or can 2x 1GbE ports be teamed for throughput and reliability?
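For question 2, here's the dedup RAM math as I understand it, using the 2-3GB/TB rule of thumb I mentioned (the usable-capacity figure assumes an 8-disk RAIDZ2 loses two drives to parity):

```python
# Dedup table RAM estimate for 8x 1TB in RAIDZ2.
# ASSUMPTION: "usable = raw minus two parity drives" and the
# 2-3GB-per-TB dedup rule of thumb from the post.

raw_tb = 8 * 1.0
usable_tb = raw_tb - 2.0          # RAIDZ2: two drives' worth of parity

ddt_low = 2 * usable_tb           # low end of the rule of thumb
ddt_high = 3 * usable_tb          # high end
total_ram_gb = 16

print(f"dedup table: {ddt_low:.0f}-{ddt_high:.0f} GB "
      f"of {total_ram_gb} GB total RAM")
```

By that estimate the dedup table alone could eat 12-18GB of my 16GB, leaving little or nothing for ARC caching, so I suspect the answer is "don't bother" unless I add RAM.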
Any other advice is greatly appreciated! I've been putting this off for too long so I wouldn't have to make a decision, but Gea's work has put a ZFS build over the top for me and makes it a clear winner.