Raid Write > Read?

This is sort of weird: last night I noticed on my test array that I was getting 210 MB/s writes and 170 MB/s reads when I set it up with RAID 0. Now I have it set up as a rebuilt RAID 5 (8-hour rebuild, weaksauce) and I'm getting about 130 MB/s writes and 110-115 MB/s reads. The CPU is absolutely pegged and might even be a real bottleneck here, but still, the write speeds have been significantly faster in each test with 8 disks. When I had only 4 disks, the read speeds were faster in each scenario. Does this seem weird to anybody else?

Code:
llama raid # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid5 sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
      2173866688 blocks level 5, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]

unused devices: <none>
llama raid # ls
llama raid # dd if=/dev/zero of=1gbfile bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 7.86339 s, 137 MB/s
llama raid # dd if=1gbfile of=/dev/null
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 9.18942 s, 117 MB/s
llama raid # dd if=1gbfile of=/dev/null
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 9.37025 s, 115 MB/s
llama raid # dd if=/dev/zero of=10gbfile bs=1024k count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 82.4836 s, 130 MB/s
llama raid # dd if=10gbfile of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 93.3376 s, 115 MB/s
llama raid # dd if=/dev/zero of=10gbfile bs=1024k count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 83.7587 s, 128 MB/s
 
The read speeds with RAID 5 are faster than the writes because when it writes data it has to calculate the parity and write it as well.
 
underdone said:
The read speeds with RAID 5 are faster than the writes because when it writes data it has to calculate the parity and write it as well.

Thank you for not reading the numbers - or my post whatsoever. The write speeds in this test are clearly faster than the read speeds. I did find the problem, though: I wasn't using the same block size for reads and writes, and dd's default block size (512 bytes) is apparently so small that it kills performance. The lowest I tested was a 256k block size for dd, and that jumped performance right up. So overall the numbers are about 150 MB/s reads and 130 MB/s writes, which does have some sanity to it.
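For anyone who finds this thread later: the gotcha is that dd only uses the block size you pass with bs=, and with no bs= it falls back to 512-byte requests, so my reads were issuing tiny I/Os while the writes used 1 MiB ones. A quick sketch of the fix (run against a scratch file here rather than the actual array, so the paths and sizes are just illustrative):

```shell
# Work somewhere disposable instead of the real array mount point.
cd "$(mktemp -d)"

# Write benchmark style: explicit 1 MiB block size (this is what I had).
dd if=/dev/zero of=testfile bs=1024k count=16 2>/dev/null

# Read it back with the SAME bs=1024k -- omitting bs= here was the bug,
# since dd would then read in 512-byte chunks.
dd if=testfile of=/dev/null bs=1024k 2>/dev/null
```

Same idea applies to the 1 GB / 10 GB tests above: just tack bs=1024k onto the read commands so both directions move data in matching chunks.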
 
hokatichenci said:
Thank you for not reading the numbers - or my post whatsoever. The write speeds in this test are clearly faster than the read speeds. I did find the problem, though: I wasn't using the same block size for reads and writes, and dd's default block size (512 bytes) is apparently so small that it kills performance. The lowest I tested was a 256k block size for dd, and that jumped performance right up. So overall the numbers are about 150 MB/s reads and 130 MB/s writes, which does have some sanity to it.
nice catch. Good to know for future reference. Thanks for posting your solution.
 