I'm really not sure how file transfers work at the OS level when there are many requests for the same file at the same time. I'm looking at file sizes from 50 MB to 5 GB and anywhere from 50 to 5,000 simultaneous downloaders, and I'll be running Linux or FreeBSD. Example: 50 people request the same 5 GB file, and each person downloads it over the Internet via FTP or HTTP.

I'm trying to figure out how the OS handles something like this. Some HTTP download clients allow a file to be split into parts (parallel connections/range requests?), and FTP seems to allow only one stream per connection. So if the same file is being accessed by 50 users, will the hard drive be seeking to 50 different locations on disk (or however many streams are open), one for each user? That seems like a LOT of load on the drive. OR - would/could the file be read from disk once into RAM and then served to everyone from RAM? If it isn't served from RAM, is an SSD a necessity for transfers like this?

I've also used RAM drives a few times, where a segment of RAM is provisioned to act as "hard drive" storage; it obviously copies/accesses/transfers much faster. Does anyone know if these are used in high-capacity servers these days?

So, does anyone know how this type of setup would work, and have any suggestions on OS/filesystem/server setup/software (Apache, nginx, IIS, etc.) to handle loads such as this?
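
In case it helps explain what I mean by the RAM question, here's roughly how I was planning to test the "read once into RAM" idea myself: read the same file twice and compare timings (the file path is just a placeholder for one of my large files):

```python
import os
import time

# Placeholder path to one of the large files I'd be serving
PATH = "/srv/files/big.iso"

def read_through(path, chunk=1024 * 1024):
    """Read the whole file sequentially, the way a server would stream it out."""
    total = 0
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    return total

size = os.path.getsize(PATH)

# First pass: presumably comes off the disk (cold cache)
t0 = time.time()
read_through(PATH)
cold = time.time() - t0

# Second pass: if the OS keeps the file cached in RAM,
# this should be much faster and cause little or no disk I/O
t0 = time.time()
read_through(PATH)
warm = time.time() - t0

print(f"size: {size / 1e6:.0f} MB, cold: {cold:.1f}s, warm: {warm:.1f}s")
```

If the second pass comes back dramatically faster, I assume that means the OS kept the file in memory after the first read - but I'd still like to know whether that holds up when 50-5,000 clients are pulling the file at once.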