houkouonchi
Out of curiosity, what is the expected writes-to-failure on these 1TB drives? I mean, that is a wicked setup and all, but say you were running a heavy Exchange and SQL Server install on some VMs. What kind of burn-through would you expect on the TLC cells? Also, just from a learning standpoint: if I wanted to set up something similar, and assuming raw performance was not a concern, would it make sense to dedicate maybe one SSD per VM (or place them strategically) to reduce the wear on the drives?
I wouldn't want to do heavy VMs or SQL on this, as yeah, that is write-heavy. My usage is primarily read access and I do not do heavy writes. In 2 months I have done:
Code:
root@方向音痴: 01:31 PM :~# df -h /ssd
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd        5.5T  2.2T  3.3T  40% /ssd
root@方向音痴: 01:31 PM :~# uptime
 13:31:25 up 62 days, 13:15,  7 users,  load average: 9.88, 8.88, 9.15
root@方向音痴: 01:31 PM :~# iostat -m sdd
Linux 2.6.39.4-houkouonchi-web10g-ioat-vlan (houkouonchi)    08/14/2014

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          11.52    1.60    4.56    7.75    0.00   74.57

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdd              72.73         9.44         0.02   51015953     101307
So in those 62 days of uptime, only ~101 GB written but ~51 TB read.
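
To put a rough number on the burn-through question, here is a back-of-envelope sketch. The 150 TBW endurance rating and the write rates are illustrative assumptions, not specs for these particular drives; plug in the real numbers for whatever you buy.

Code:
#!/bin/bash
# Days until an assumed endurance rating is exhausted at a given
# sustained write rate. Both inputs are placeholders -- substitute
# the drive's actual TBW rating and the MB_wrtn/s figure from iostat.
TBW=150      # assumed rated endurance, in TB written
RATE=50      # assumed sustained write rate, MB/s (heavy DB workload)
echo "scale=1; $TBW * 10^6 / $RATE / 86400" | bc

At an assumed 50 MB/s of sustained Exchange/SQL writes, that works out to roughly 35 days per 150 TBW of rated endurance, which is exactly why I keep write-heavy workloads off these drives. At the 0.02 MB/s measured above, the same arithmetic gives a couple of centuries.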
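If you want to cross-check wear against the drive's own counters rather than iostat, something like the following works. This is a sketch assuming smartmontools is installed and the SSDs show up as plain SATA devices; behind a hardware RAID controller, smartctl needs a -d option to reach the member disks, and the attribute names vary by vendor.

Code:
# Dump SMART attributes and pull out the wear-related ones.
# On Samsung TLC drives these are typically Wear_Leveling_Count (177)
# and Total_LBAs_Written (241); other vendors use different IDs/names.
smartctl -A /dev/sdd | grep -Ei 'wear|lbas'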