Storage Server:
Chassis: SuperMicro - 846E26-R1200B
http://www.supermicro.com/products/chassis/4u/846/SC846E26-R1200.cfm
Motherboard: SuperMicro - H8SGL-F
http://www.supermicro.com/aplus/motherboard/opteron6000/sr56x0/h8sgl-f.cfm
CPU: AMD - Opteron 6212
http://products.amd.com/en-us/OpteronCPUDetail.aspx?id=765
SAS HBA: LSI SAS 9211-8i
http://www.lsi.com/channel/products/storagecomponents/Pages/LSISAS9211-8i.aspx
(2) FCoE CNAs: Brocade 1020
http://www.brocade.com/products/all/adapters/product-details/1010-1020-cna/index.page
Storage: Lots of 3TB disk drives & 6x SSD (4x cache & 2x ZIL)
OS: Solaris 11.1
Application Server:
Bare Bones Server: SuperMicro - 2022G-URF
http://www.supermicro.com/Aplus/system/2U/2022/AS-2022G-URF.cfm
CPU: AMD - Opteron 6272
http://products.amd.com/en-us/OpteronCPUDetail.aspx?id=761
FCoE CNAs: Brocade 1020
http://www.brocade.com/products/all/adapters/product-details/1010-1020-cna/index.page
OS: ESXi 5.1
The final project (fingers crossed):
I hope to direct-connect two ports of the Brocade CNA on the Application Server to two ports of a Brocade CNA on the Storage Server with twinax cables.
After that is functional, I hope to use the other CNA on the Storage Server to connect to a cheap fiber switch that is yet to be picked out, most likely something off eBay. I want to connect about 5 workstations to this switch so they can boot off of this home-brew SAN.
The last thing I want to do is on a back burner so far back I may never get to it, but one day I would like to learn how to do some sort of netboot over my normal 10/100/1000 network for things like MythTV front ends. Other, much more important projects need to come first.
Question number 1: What protocol would work best between the Storage and Application servers? I need to know what to be searching for. At first I wanted to use IP so I could use iSCSI targets and NFS. I've been reading as much as I can, and it seems others on this site are using something different, but I'm new at this so I'm not sure what it is. Maybe FC? Is that even a protocol? Another wrinkle is that I want some of the servers to have access to the same files through this high-bandwidth connection.
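For the iSCSI side of this, the part I think I understand so far is Solaris 11's COMSTAR framework. A rough sketch of what I'd try (pool name `tank` and volume names are just placeholders I made up; the LU GUID comes from the `create-lu` output, shown here as a stand-in value):

```shell
# Enable the COMSTAR framework and the iSCSI target service
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default

# Carve a ZFS volume (zvol) out of the pool to serve as block storage
zfs create -V 50G tank/appserver-boot

# Turn the zvol into a SCSI logical unit and expose it to all hosts
stmfadm create-lu /dev/zvol/rdsk/tank/appserver-boot
stmfadm add-view 600144F0AABBCCDD0000000000000001   # GUID printed by create-lu

# Create the iSCSI target the initiators will log in to
itadm create-target
```

I gather NFS would sit alongside this for the shared-files case, since iSCSI LUs generally can't be safely mounted by multiple hosts at once without a cluster filesystem.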
Question number 2: ZFS ZIL / cache drives... At this point I'm pretty sure the most RAM I will ever have in the Storage server is 64GB. From what I've read, this means the ZIL will never need more than 32GB? If that's true, does it make sense to partition (4) 128GB SSDs? I could take a 16GB partition on each of the 4 drives for a mirrored + striped ZIL. That still leaves quite a lot of space left over for cache across 4 drives that have some real bandwidth.
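To make sure I'm describing the layout right, this is what I picture running (device names like `c1t0d0` are placeholders for my actual SSDs, with slice s0 as the 16GB ZIL partition and s1 as the remainder, created beforehand with format(1M)):

```shell
# Mirrored + striped SLOG: two mirror pairs across the four SSDs' s0 slices
zpool add tank log mirror c1t0d0s0 c1t1d0s0 mirror c1t2d0s0 c1t3d0s0

# L2ARC cache striped across the leftover s1 slices
# (cache devices can't be mirrored, and losing one is harmless)
zpool add tank cache c1t0d0s1 c1t1d0s1 c1t2d0s1 c1t3d0s1
```

Is sharing the same physical SSDs between SLOG and L2ARC like this a bad idea, or is the bandwidth enough that it doesn't matter?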
Question number 3: I have been bouncing back and forth between RAIDZ and mirrored + striped. This is mostly because some of the data will be written once and then who knows when it will ever be accessed again. I have mp3s, for instance, that I haven't listened to in years, or movies in a media center. Some of the data will be from MythTV or other DVR software... that data I couldn't care less about. I recently found out about the "zfs set copies=N" property. So now I'm thinking about just adding all my 3TB drives into a pool with no mirroring or RAIDZ, then using that property to adjust how much redundancy I want per dataset. For family photos I may use 3 or 4 copies, but a boot-drive target for a TeamSpeak or Minecraft server only needs 2, while for DVR data I'm comfortable with 1. If I want a show that got lost that badly, I can buy it on Amazon Video or something. Anyway, what are your thoughts on this?
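Concretely, the per-dataset scheme I have in mind looks like this (dataset names are just examples):

```shell
# Family photos: keep 3 copies of every block
zfs create tank/photos
zfs set copies=3 tank/photos

# Game-server boot LUs: 2 copies
zfs create tank/gameservers
zfs set copies=2 tank/gameservers

# DVR recordings: default single copy, it's all replaceable
zfs create tank/dvr
zfs set copies=1 tank/dvr
```

One thing I did pick up in my reading: copies only applies to data written after it's set, and since all copies can land on the same disk, it protects against bad sectors but not against losing a whole drive. Is that enough of a hole to sink the idea?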
Please take it easy on the flames. I've been doing a lot of reading, but as I'm new at this stuff it's hard to put the pieces together.