RAID5 has some often-ignored side effects. While it can achieve the throughput of n-1 disks, its random write IOPS are roughly those of a single disk (namely the slowest one), because every small write incurs a parity read-modify-write. Adding more disks will not increase those IOPS. The same principle extends to RAID6 (n-2). RAID50 and RAID60 improve on this in proportion to the number of RAID5/6 groups striped across. Ex: if you have 6 disks in RAID50, you'd have a RAID0 of two 3-disk RAID5s, so you'd achieve roughly the IOPS of the two slowest HDDs in the whole RAID50.
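To make that arithmetic concrete, here's a rough back-of-the-envelope sketch. The ~150 random write IOPS per disk and the helper names are made up for illustration; it just encodes the simplification above that each RAID5/6 group is limited to its slowest member:

def raid5_group_write_iops(group):
    # One RAID5/6 parity group ~= its single slowest disk for small random writes
    return min(group)

def raid50_write_iops(groups):
    # RAID50/60 stripes across groups, so the groups' write IOPS add up
    return sum(raid5_group_write_iops(g) for g in groups)

disks = [150] * 6                                   # six identical hypothetical HDDs
print(raid5_group_write_iops(disks))                # RAID5 of 6 disks -> 150 (1 disk's worth)
print(raid50_write_iops([disks[:3], disks[3:]]))    # RAID50 as 2x3    -> 300 (2 disks' worth)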
RAID10 yields roughly the throughput, capacity, and IOPS of n/2 disks. Some implementations let reads be serviced by either disk in each mirror pair, so read IOPS can approach n disks' worth, i.e. as fast as a RAID0 of those disks. In all cases, write IOPS will be slightly less than n/2 disks' worth. Ex: if you have 6 disks in RAID10, you'd have the capacity and IOPS of 3 disks. Your IOPS will be significantly greater than with RAID5 or RAID50: roughly 300% of the RAID5's IOPS, or 150% of the RAID50's.
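Continuing the same made-up numbers (six disks at ~150 random write IOPS each), a quick sketch of the comparison, assuming RAID10 writes land on both halves of each mirror while reads can be spread across all members:

disk_iops = 150
n = 6

raid10_write = (n // 2) * disk_iops   # ~450: writes hit both mirror halves, so n/2 disks' worth
raid10_read  = n * disk_iops          # ~900: best case, reads split across all members
raid5_write  = 1 * disk_iops          # ~150: single-group limit from above
raid50_write = 2 * disk_iops          # ~300: two 3-disk groups striped

print(raid10_write / raid5_write)     # 3.0 -> ~300% of the RAID5
print(raid10_write / raid50_write)    # 1.5 -> ~150% of the RAID50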
RAID5, 6, 50, and 60 all have very long rebuild times; RAID10 does not.
For this reason, I prefer RAID10 to any other RAID level.
That's as general a statement as the initial post. It doesn't take into consideration the controllers, the drives (as Iopoteve said), or even the access method to the array: internal via PCI/PCI-X/PCI-E/eSATA/SAS, or external via FC/FCoE/iSCSI/NFS/etc. It's nowhere near that simple...