SAS confusion

15k drives are overkill for me, and I don't want 40 drives. The FC is going to be for storage, nothing IP-related. I don't need a switch. Other users will get their files shared over the normal network. Management will be done over the network. But again, that's not something I need right away, only if I run out of storage in my tower too quickly.
 
I figured that since you were talking about SSDs, 15k mechanical drives would be a good compromise between cost and RAID rebuild times until SSDs hit the price/size point that you want.

So most connections will be over regular ethernet, and you'll have some kind of shared block storage over FC that both the server and your workstation will be able to simultaneously access?
 
Rebuild times are going to be HBA limited actually...disks won't really play a part.
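
For what it's worth, here's a back-of-the-envelope sketch of that; the 4 x 6 Gb/s wide port and drive throughput numbers are just assumptions for illustration, not figures from this build:

```python
# Rough sketch: why rebuild time ends up limited by the HBA link rather than
# the disks once enough drives share it. The 4 x 6 Gb/s wide port and the
# drive speed below are assumed values, not anything measured.

def rebuild_hours(drive_tb, drives_on_hba, hba_gbps=24, drive_mbs=200):
    """Hours to move one drive's worth of data during a rebuild."""
    hba_mbs = hba_gbps * 1000 / 8            # total HBA bandwidth, MB/s
    share = hba_mbs / drives_on_hba          # bandwidth each drive can get
    effective = min(share, drive_mbs)        # bottleneck: HBA share or the disk
    return drive_tb * 1_000_000 / effective / 3600

# With a wide array, the per-drive share drops below what even a slow disk can
# stream, so 15k (or SSD) media stops shortening the rebuild.
print(f"{rebuild_hours(1.2, drives_on_hba=40):.1f} h")  # HBA-limited
print(f"{rebuild_hours(1.2, drives_on_hba=8):.1f} h")   # disk-limited
```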
 
Ahh, guess I've been spoiled by ZFS. I haven't had a hardware RAID failure in 6 years, and only one drive failure under ZFS.

Anyways, my Blu-ray encoding is virtualized with 8 cores and has no problems with its single gigabit link to storage on cheap 3.5" drives. Replacing the 4-core Xeons with faster 6-core Xeons would be the only way to improve my encoding times.
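
A rough sanity check on that link, with assumed figures rather than anything I've measured on my setup:

```python
# Rough sanity check with assumed numbers: a single gigabit link has far more
# headroom than a Blu-ray encode needs, which is why the job stays CPU-bound.

LINK_MBS = 1000 / 8 * 0.94   # ~117 MB/s usable on gigabit after overhead (assumed)
BD_READ_MBS = 54 / 8         # ~6.75 MB/s at the maximum Blu-ray stream rate (assumed)

# Reading the source and writing the output at full stream rate still uses
# only a small slice of the link, so more/faster cores are the real lever.
print(f"link utilisation: {2 * BD_READ_MBS / LINK_MBS:.0%}")  # roughly 11%
```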
 