I wasn't really sure if this should go here or in the storage section, so feel free to move it.
I work for a school that operates entirely off of LTSP terminals. We have three schools, a school office, a town office, a fire station, and a maintenance building all using terminals. At the center of this is a single Red Hat Enterprise Linux 3 (or maybe 4) server with a 12-bay SAS JBOD that stores all user files on an NFS share. I know the drives in the JBOD are 250GB SATA drives in a RAID 5 configuration, but that's about it; no one wants to touch it for fear that it will break. It connects to our network with a single gigabit NIC and has been happily humming along for a little over 10 years.
We had a lot of performance issues with the NFS server over the past school year, so I'd like to see if we can replace it with something a little nicer. We upgraded almost all of our terminals to thick clients, which means there are far more simultaneous connections to the NFS server now.
Due to our budget, we have been looking at off-lease and used servers on eBay, like the Dell C2100, HP DL180, and Dell R610. We don't use a lot of space (the current size of /home is under 600GB), so I'm thinking we could see a performance increase with something like a Dell R610 with six 500GB SSDs in RAID 10. The 4x gigabit ports on the R610 have me wondering if we could use some sort of link aggregation to increase total throughput.
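For what it's worth, the back-of-envelope math on that build (this is just arithmetic on the numbers above, not a sizing guide):

```python
# Rough numbers for the proposed R610 build: six 500GB SSDs in
# RAID 10 and four bonded 1Gb NICs. Both figures are best-case.

def raid10_usable_gb(drives, drive_gb):
    """RAID 10 mirrors pairs of drives, so usable space is half the raw total."""
    return drives * drive_gb // 2

def bonded_throughput_gbps(ports, port_gbps=1):
    """Best-case aggregate across many clients; any single client
    stream is still capped at the speed of one link."""
    return ports * port_gbps

usable = raid10_usable_gb(6, 500)      # 1500 GB usable vs. ~600 GB in /home
aggregate = bonded_throughput_gbps(4)  # 4 Gbit/s total, 1 Gbit/s per flow
print(usable, aggregate)
```

So capacity-wise there's plenty of headroom; the interesting question is whether the bonding actually helps, per concern 3 below.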
Am I on the right track here?
My primary concerns are:
1. Whether we will lose any performance by dropping from 12x 7200RPM SATA drives in the JBOD to only 6x SSDs.
1.a Or should we go with a new server and a new JBOD? Our existing JBOD has a single 3Gb/s SAS connection.
2. SSD reliability. I have seen equal numbers of people swear that SSDs fail too often in servers as swear that SSDs are the only way to go.
3. Is 4-port link aggregation feasible/worth it? Or should we be looking into connecting all our servers via 10GbE or fiber?
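On concern 3, the kind of setup I have in mind is LACP bonding via ifcfg files, roughly like the sketch below (interface names, the address, and the hashing policy are placeholders I made up, not anything from our current config, and the switch ports would need to be configured for LACP too):

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0  (RHEL-style sketch)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
IPADDR=192.168.1.10
PREFIX=24
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeated for eth1..eth3)
DEVICE=eth0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

My understanding is that each client flow hashes onto one physical link, so bonding raises total throughput across many thick clients but doesn't make any single client faster than 1Gb; please correct me if that's wrong.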