Hi,
It seems like I'm having some performance issues with my PERC 5/i card.
I have a Dell PE 2950 with a PERC 5/i installed, and 2x 300 GB Seagate Cheetah T10 15K RPM drives (ST3300555SS) in a RAID 1 setup.
While copying files or running dd tests, the system is almost unusable:
Code:
jocke@noshut:~$ time echo "lolol" > testfile
real 0m3.668s
user 0m0.000s
sys 0m0.000s
jocke@noshut:~$ time echo "lolol" > testfile && time echo "lolol" > testfile2
real 0m8.387s
user 0m0.000s
sys 0m0.000s
real 0m0.000s
user 0m0.000s
sys 0m0.000s
jocke@noshut:~$ time ls -al / | grep lolol
real 0m2.420s
user 0m0.000s
sys 0m0.004s
jocke@noshut:~$ time ls -al /etc | grep lolol
real 0m1.143s
user 0m0.008s
sys 0m0.004s
jocke@noshut:~$ time cat testfile | grep kek
real 0m2.012s
user 0m0.000s
sys 0m0.004s
Is this normal/expected behavior?
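One way to narrow down where those multi-second stalls come from is to time a single small synchronous write, so the number reflects one round trip through the controller's write path rather than cached throughput. A minimal sketch (the `latency_probe` filename is just a placeholder):

```shell
#!/bin/sh
# Time one small write that must reach stable storage before dd returns
# (oflag=dsync). With working write-back cache this should complete in
# milliseconds; multi-second times point at the controller or its cache.
dd if=/dev/zero of=latency_probe bs=4k count=1 oflag=dsync
rm -f latency_probe
echo probe-done
```

On a controller with a healthy battery-backed write-back cache, the write is acknowledged from cache, so this also gives a quick before/after check when swapping the BBU.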
Furthermore, I'm getting really slow results from the dd tests I've been running:
Code:
jocke@noshut:~$ ./ddtest.sh
Testing 128k bs
time sh -c "dd if=/dev/zero of=ddfile1 bs=128k count=262144 && sync"
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 550.545 s, 62.4 MB/s
real 10m6.725s
user 0m0.108s
sys 0m50.055s
dd if=/dev/zero of=ddfile2 bs=128k count=131072
131072+0 records in
131072+0 records out
17179869184 bytes (17 GB) copied, 250.725 s, 68.5 MB/s
time dd if=ddfile1 of=/dev/null bs=128k
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 447.795 s, 76.7 MB/s
real 7m29.888s
user 0m0.120s
sys 0m19.697s
Testing 64k bs
time sh -c "dd if=/dev/zero of=ddfile3 bs=64k count=524288 && sync"
524288+0 records in
524288+0 records out
34359738368 bytes (34 GB) copied, 564.332 s, 60.9 MB/s
real 10m20.132s
user 0m0.184s
sys 0m49.859s
dd if=/dev/zero of=ddfile4 bs=64k count=262144
262144+0 records in
262144+0 records out
17179869184 bytes (17 GB) copied, 263.477 s, 65.2 MB/s
dd if=ddfile3 of=/dev/null bs=64k
524288+0 records in
524288+0 records out
34359738368 bytes (34 GB) copied, 487.615 s, 70.5 MB/s
real 8m10.336s
user 0m0.120s
sys 0m19.329s
Testing 8k bs
time sh -c "dd if=/dev/zero of=ddfile5 bs=8k count=4194304 && sync"
4194304+0 records in
4194304+0 records out
34359738368 bytes (34 GB) copied, 602.128 s, 57.1 MB/s
real 11m10.026s
user 0m0.904s
sys 0m53.283s
dd if=/dev/zero of=ddfile6 bs=8k count=2097152
2097152+0 records in
2097152+0 records out
17179869184 bytes (17 GB) copied, 279.494 s, 61.5 MB/s
time dd if=ddfile5 of=/dev/null bs=8k
4194304+0 records in
4194304+0 records out
34359738368 bytes (34 GB) copied, 546.709 s, 62.8 MB/s
real 9m10.107s
user 0m0.696s
sys 0m21.037s
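For anyone reproducing this, dd can fold the flush into its own timing, which avoids the separate `time sh -c "dd ... && sync"` wrapper. A sketch (file names and the small 64 MiB size are just for illustration; `oflag=direct` may fail on some filesystems):

```shell
#!/bin/sh
# conv=fdatasync: dd flushes the file before reporting, so its MB/s
# figure already includes the sync time.
dd if=/dev/zero of=ddfile_sync bs=128k count=512 conv=fdatasync 2>&1 | tail -n1
# oflag=direct: bypass the page cache and hit the controller directly,
# so the result reflects the array rather than RAM.
dd if=/dev/zero of=ddfile_direct bs=128k count=512 oflag=direct 2>&1 | tail -n1
rm -f ddfile_sync ddfile_direct
```

For a real measurement the write size should be well beyond the controller's cache and the machine's RAM, as the 34 GB runs above already are.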
Taking the sync into account, the write speeds were as follows:
Code:
128k bs: 54.0 MiB/s
64k bs: 52.8 MiB/s
8k bs: 48.9 MiB/s
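For reference, those figures are just total bytes divided by the `real` wall-clock time of the dd + sync pair; worked out that way the result comes out in MiB/s (binary), whereas dd's own 62.4 MB/s is decimal. For the 128k run:

```shell
# 34359738368 bytes over real time 10m6.725s = 606.725 s,
# converted to binary mebibytes per second.
awk 'BEGIN { printf "%.1f MiB/s\n", 34359738368 / 606.725 / 1048576 }'
# -> 54.0 MiB/s
```

The gap between dd's 62.4 MB/s and this 54.0 MiB/s is partly the unit difference and partly the sync time that dd's figure leaves out.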
Virtual Drive information:
Code:
root@noshut:~# megacli -LDInfo -Lall -aALL
Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name :system
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 278.875 GB
Mirror Data : 278.875 GB
State : Optimal
Strip Size : 128 KB
Number Of Drives : 2
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, Write Cache OK if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, Write Cache OK if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Is VD Cached: No
I think my battery needs replacing (I've ordered a new one), so I've forced Write Back even with a bad BBU. This is just for testing purposes; the system is not "in production" yet, so it doesn't matter if the volume gets corrupted should the power fail.
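Related to that, these are the MegaCli invocations one could use to watch the battery and check the drives' own write cache (a sketch; it assumes the binary is on PATH as `megacli` as in the output above, and note that "Disk Cache Policy: Disk's Default" may mean the SAS drives' on-disk cache is disabled):

```shell
#!/bin/sh
# Sketch: inspect BBU health and the per-disk write-cache setting.
if ! command -v megacli >/dev/null 2>&1; then
    echo "megacli not installed; commands are illustrative"
    exit 0
fi
megacli -AdpBbuCmd -GetBbuStatus -aALL     # battery charge and state
megacli -LDGetProp -DskCache -LAll -aALL   # is the drives' own cache on?
# To enable the disk cache while benchmarking (unsafe on power loss,
# just like forced write-back with a bad BBU):
# megacli -LDSetProp EnDskCache -LAll -aALL
```

Enabling the disk cache carries the same power-loss risk as the forced Write Back setting, so it's only reasonable while the box is out of production.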
So, to conclude, shouldn't I be seeing higher numbers than this? Is there something I'm doing wrong? Is there something else I could do to get better results?