Dell PERC 5/i performance issues

Hi,

It seems like I'm having some performance issues with my PERC 5/i card.

I have a Dell PE 2950 with a PERC 5/i installed, and 2x 300GB Seagate Cheetah T10 15K RPM drives (ST3300555SS) in a RAID 1 setup.

While copying stuff, or when doing dd tests, the system is almost unusable:

Code:
jocke@noshut:~$ time echo "lolol" > testfile
real 0m3.668s
user 0m0.000s
sys 0m0.000s

jocke@noshut:~$ time echo "lolol" > testfile && time echo "lolol" > testfile2
real 0m8.387s
user 0m0.000s
sys 0m0.000s

real 0m0.000s
user 0m0.000s
sys 0m0.000s

jocke@noshut:~$ time ls -al / | grep lolol
real 0m2.420s
user 0m0.000s
sys 0m0.004s
jocke@noshut:~$ time ls -al /etc | grep lolol
real 0m1.143s
user 0m0.008s
sys 0m0.004s
jocke@noshut:~$ time cat testfile | grep kek
real 0m2.012s
user 0m0.000s
sys 0m0.004s


Is this normal/expected behavior?

Furthermore, I'm getting really slow results from the dd tests I've been running:

Code:
jocke@noshut:~$ ./ddtest.sh
Testing 128k bs
time sh -c "dd if=/dev/zero of=ddfile1 bs=128k count=262144 && sync"
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 550.545 s, 62.4 MB/s

real 10m6.725s
user 0m0.108s
sys 0m50.055s
dd if=/dev/zero of=ddfile2 bs=128k count=131072
131072+0 records in
131072+0 records out
17179869184 bytes (17 GB) copied, 250.725 s, 68.5 MB/s
time dd if=ddfile1 of=/dev/null bs=128k
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 447.795 s, 76.7 MB/s

real 7m29.888s
user 0m0.120s
sys 0m19.697s
Testing 64k bs
time sh -c "dd if=/dev/zero of=ddfile3 bs=64k count=524288 && sync"
524288+0 records in
524288+0 records out
34359738368 bytes (34 GB) copied, 564.332 s, 60.9 MB/s

real 10m20.132s
user 0m0.184s
sys 0m49.859s
dd if=/dev/zero of=ddfile4 bs=64k count=262144
262144+0 records in
262144+0 records out
17179869184 bytes (17 GB) copied, 263.477 s, 65.2 MB/s
dd if=ddfile3 of=/dev/null bs=64k
524288+0 records in
524288+0 records out
34359738368 bytes (34 GB) copied, 487.615 s, 70.5 MB/s

real 8m10.336s
user 0m0.120s
sys 0m19.329s
Testing 8k bs
time sh -c "dd if=/dev/zero of=ddfile5 bs=8k count=4194304 && sync"
4194304+0 records in
4194304+0 records out
34359738368 bytes (34 GB) copied, 602.128 s, 57.1 MB/s

real 11m10.026s
user 0m0.904s
sys 0m53.283s
dd if=/dev/zero of=ddfile6 bs=8k count=2097152
2097152+0 records in
2097152+0 records out
17179869184 bytes (17 GB) copied, 279.494 s, 61.5 MB/s
time dd if=ddfile5 of=/dev/null bs=8k
4194304+0 records in
4194304+0 records out
34359738368 bytes (34 GB) copied, 546.709 s, 62.8 MB/s

real 9m10.107s
user 0m0.696s
sys 0m21.037s
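
For reference, ddtest.sh is nothing fancy; it's essentially a wrapper around the commands echoed above. A rough sketch of it (the actual script may differ slightly):

Code:
#!/bin/bash
# For each block size: a timed 32 GiB write (sync included in the timing),
# an untimed 16 GiB write to push the first file out of the page cache,
# then a timed 32 GiB read-back of the first file.
i=1
for bs in 128k 64k 8k; do
    case $bs in
        128k) count=262144 ;;
        64k)  count=524288 ;;
        8k)   count=4194304 ;;
    esac
    echo "Testing $bs bs"
    time sh -c "dd if=/dev/zero of=ddfile$i bs=$bs count=$count && sync"
    dd if=/dev/zero of=ddfile$((i + 1)) bs=$bs count=$((count / 2))
    time dd if=ddfile$i of=/dev/null bs=$bs
    i=$((i + 2))
done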

Taking the sync into account, the write speeds were as follows:

Code:
128k bs: 54.0 MB/s
64k bs: 52.8 MB/s
8k bs: 48.9 MB/s
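
(For clarity: those figures are just the 32 GiB divided by the full "real" time, so the trailing sync is included. For the 128k case, for example:)

Code:
jocke@noshut:~$ awk 'BEGIN { printf "%.1f MB/s\n", 34359738368 / 606.725 / 1048576 }'
54.0 MB/s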

Virtual drive information:
Code:
root@noshut:~# megacli -LDInfo -Lall -aALL

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name :system
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 278.875 GB
Mirror Data : 278.875 GB
State : Optimal
Strip Size : 128 KB
Number Of Drives : 2
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, Write Cache OK if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, Write Cache OK if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Is VD Cached: No

I think my battery needs replacing (I've ordered a new one), so I've forced Write Back even though the BBU is bad. This is just for testing purposes; the system is not in production yet, so it doesn't matter if the volume gets messed up should the power fail.
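
For what it's worth, this is roughly how the battery state can be checked and Write Back forced with megacli; flag spellings vary a bit between MegaCli builds, so treat this as a sketch rather than exact syntax:

Code:
# battery/BBU state
megacli -AdpBbuCmd -GetBbuStatus -aALL
# current cache policy for all logical drives
megacli -LDGetProp -Cache -LAll -aALL
# keep write-back enabled even with a bad/missing BBU (risks data loss on power failure)
megacli -LDSetProp CachedBadBBU -LAll -aALL
megacli -LDSetProp WB -LAll -aALL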

So, to conclude, shouldn't I be seeing higher numbers than this? Is there something I'm doing wrong? Is there something else I could do to get better results?
 

Phew. Good. Then I know there is a problem. I was starting to wonder if I was trying to solve a problem that didn't exist (-:

Yes you should see much higher numbers. Are you using a recent 3.X.X kernel or some ancient 2.6.X kernel?

Yes, "ancient" kernel (running Debian Squeeze).

Code:
root@noshut:~# uname -a
Linux noshut 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux
root@noshut:~# cat /proc/version 
Linux version 2.6.32-5-amd64 (Debian 2.6.32-45) ([email protected]) (gcc version 4.3.5 (Debian 4.3.5-4) ) #1 SMP Sun May 6 04:00:17 UTC 2012


I can upgrade the kernel to see if that has any effect.
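
Probably something along these lines, pulling a 3.2 kernel from squeeze-backports (the exact repository line and package name may differ):

Code:
echo 'deb http://backports.debian.org/debian-backports squeeze-backports main' >> /etc/apt/sources.list
apt-get update
apt-get -t squeeze-backports install linux-image-3.2.0-0.bpo.1-amd64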
 
So, I tried with a new kernel (from backports, though; not sure if that makes any difference):

Code:
root@noshut:~# uname -a
Linux noshut 3.2.0-0.bpo.1-amd64 #1 SMP Sat Feb 11 08:41:32 UTC 2012 x86_64 GNU/Linux
root@noshut:~# cat /proc/version 
Linux version 3.2.0-0.bpo.1-amd64 (Debian 3.2.4-1~bpo60+1) ([email protected]) (gcc version 4.4.5 (Debian 4.4.5-8) ) #1 SMP Sat Feb 11 08:41:32 UTC 2012

Still the same behavior regarding the "lockups" of the system while doing dd tests:

Code:
root@noshut:~# time ls -alh /usr | grep lolol

real	0m1.141s
user	0m0.000s
sys	0m0.000s
root@noshut:~# time ls -al /var | grep lolol

real	0m1.167s
user	0m0.000s
sys	0m0.000s
root@noshut:~# time echo "lolol" > /home/jocke/testfile1

real	0m8.585s
user	0m0.000s
sys	0m0.000s

Sometimes it's really extreme:

Code:
root@noshut:~# time echo "lolol" > /home/jocke/testfile

real	0m18.576s
user	0m0.000s
sys	0m0.000s

**wait for a few minutes**

root@noshut:~# time rm /home/jocke/testfile

real	0m20.424s
user	0m0.000s
sys	0m0.000s

And the same dd test on the new kernel:
Code:
jocke@noshut:~$ ./ddtest.sh
Testing 128k bs 
time sh -c "dd if=/dev/zero of=ddfile1 bs=128k count=262144 && sync"
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 405.094 s, 84.8 MB/s

real    7m20.289s
user    0m0.088s
sys     0m45.815s
dd if=/dev/zero of=ddfile2 bs=128k count=131072
131072+0 records in
131072+0 records out
17179869184 bytes (17 GB) copied, 186.29 s, 92.2 MB/s
time dd if=ddfile1 of=/dev/null bs=128k
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 423.814 s, 81.1 MB/s

real    7m3.856s
user    0m0.104s
sys     0m21.001s
Testing 64k bs
time sh -c "dd if=/dev/zero of=ddfile3 bs=64k count=524288 && sync"
524288+0 records in
524288+0 records out
34359738368 bytes (34 GB) copied, 424.446 s, 81.0 MB/s

real    7m42.460s
user    0m0.088s
sys     0m46.067s
dd if=/dev/zero of=ddfile4 bs=64k count=262144
262144+0 records in
262144+0 records out
17179869184 bytes (17 GB) copied, 206.187 s, 83.3 MB/s
dd if=ddfile3 of=/dev/null bs=64k
524288+0 records in
524288+0 records out
34359738368 bytes (34 GB) copied, 456.613 s, 75.2 MB/s

real    7m37.132s
user    0m0.144s
sys     0m21.093s
Testing 8k bs
time sh -c "dd if=/dev/zero of=ddfile5 bs=8k count=4194304 && sync"
4194304+0 records in
4194304+0 records out
34359738368 bytes (34 GB) copied, 470.894 s, 73.0 MB/s

real    8m31.198s
user    0m0.540s
sys     0m48.915s
dd if=/dev/zero of=ddfile6 bs=8k count=2097152
2097152+0 records in
2097152+0 records out
17179869184 bytes (17 GB) copied, 227.969 s, 75.4 MB/s
time dd if=ddfile5 of=/dev/null bs=8k
4194304+0 records in
4194304+0 records out
34359738368 bytes (34 GB) copied, 510.307 s, 67.3 MB/s

real    8m31.188s
user    0m0.740s
sys     0m21.585s

Taking the sync into account:
Code:
128k bs: 74.5 MB/s
64k bs: 70.9 MB/s
8k bs: 64.1 MB/s

At least I'm getting better results (about 20 MB/s more on write and 5 MB/s more on read), but I'm not sure whether I should be seeing even higher numbers?
 
So, without any IO at all:

Code:
jocke@noshut:~$ time ls -alh /var | grep lolol

real	0m1.275s
user	0m0.000s
sys	0m0.000s

It's not consistent; i.e. it doesn't happen every time, but often enough.

Could this be caused by badly aligned partitions or something? Or just bad drives?
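
If alignment is the suspect, I guess something like this would show it (assuming the virtual drive shows up as /dev/sda; the device name is just an example):

Code:
# print partition start sectors
fdisk -lu /dev/sda
# ask parted whether partition 1 is aligned to the device's optimal I/O boundaries
parted /dev/sda align-check optimal 1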
 
Check the log files to see whether the kernel or the driver module has thrown up any error messages.

I usually use dmesg or tail /var/log/messages
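
Something like this, for example; the PERC 5/i should show up under the megaraid_sas driver, if I remember right:

Code:
dmesg | grep -i -E 'megaraid|megasas|error|timeout'
tail -n 200 /var/log/messages | grep -i -E 'megaraid|megasas|scsi'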
 
Hi,

Thanks for the feedback. I've done several more tests now, and neither dmesg nor /var/log/messages produces any errors while these tests are running.
 
Your speeds look about right. For how fast the spindles spin, 10K/15K drives don't have as much sequential throughput as modern drives; the platters are not as dense.

Here are 4x 146GB 15K Seagate drives in RAID 10 on an Adaptec card from several years ago, and really it's not very impressive.
 
Since they were 300GB, I assumed these would be somewhat recent, or at least as good as a VelociRaptor, since they would most likely be 2.5-inch SAS drives.
 
The platter density of the 2.5" VelociRaptors is much higher than that of the 3.5", 300GB drives, which hurts the latter's sequential performance quite a bit.
 
So I would actually be better off buying some newer drives? Could I expect the "lockups" during high IO to go away as well, or is that more related to the controller than to the disks?
 
Those 15K Cheetahs have the performance of 10K drives, and fairly old ones at that. That's what the T10 bit stands for.

I looked this up before, as a customer of mine has these drives. :)


Speeds look fairly normal for just a RAID 1 TBH. Does the PERC have caching turned on for reading / writing?
 
The platter density of the 2.5" VelociRaptors is much higher than that of the 3.5", 300GB drives, which hurts the latter's sequential performance quite a bit.

I expected the SAS drives to be 2.5 inch as well. I guess if I had bothered to look up the product, I would not have wasted bandwidth here.
 
Those 15K Cheetahs have the performance of 10K drives, and fairly old ones at that. That's what the T10 bit stands for.
Ah, I see. I was kinda confused about the 10K vs 15K RPM thing, yes. (-:

Does the PERC have caching turned on for reading / writing?
'Write Back' and 'Adaptive Read Ahead' are enabled, so I think so. I'm not sure if there are any other cache settings that need to be set?
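
The only other cache-related knob I can see in the LDInfo output is the drives' own write cache, which is at "Disk's Default". I guess it could be queried/toggled with something like this, though I haven't tried it and the flags may differ between MegaCli builds:

Code:
# show / enable the physical drives' own write cache for this logical drive
megacli -LDGetProp -DskCache -LAll -aALL
megacli -LDSetProp -EnDskCache -LAll -aALL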
 
So, I got my storage disks today: 4x 2TB WD2002FAEX.

I initialized two of them as RAID 1 and ran some tests on it.

Using the 2.6 kernel:

Code:
jocke@noshut:~$ ./ddtest.sh
Testing 128k bs 
time sh -c "dd if=/dev/zero of=ddfile1 bs=128k count=262144 && sync"
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 390.852 s, 87.9 MB/s

real    7m14.423s
user    0m0.100s
sys     0m48.347s
dd if=/dev/zero of=ddfile2 bs=128k count=131072
131072+0 records in
131072+0 records out
17179869184 bytes (17 GB) copied, 185.633 s, 92.5 MB/s
time dd if=ddfile1 of=/dev/null bs=128k
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 348.634 s, 98.6 MB/s

real    5m48.760s
user    0m0.100s
sys     0m21.365s
Testing 64k bs
time sh -c "dd if=/dev/zero of=ddfile3 bs=64k count=524288 && sync"
524288+0 records in
524288+0 records out
34359738368 bytes (34 GB) copied, 402.961 s, 85.3 MB/s

real    7m21.077s
user    0m0.172s
sys     0m49.999s
dd if=/dev/zero of=ddfile4 bs=64k count=262144
262144+0 records in
262144+0 records out
17179869184 bytes (17 GB) copied, 189.68 s, 90.6 MB/s
dd if=ddfile3 of=/dev/null bs=64k
524288+0 records in
524288+0 records out
34359738368 bytes (34 GB) copied, 343.573 s, 100 MB/s

real    5m43.575s
user    0m0.124s
sys     0m21.549s
Testing 8k bs   
time sh -c "dd if=/dev/zero of=ddfile5 bs=8k count=4194304 && sync"
4194304+0 records in
4194304+0 records out
34359738368 bytes (34 GB) copied, 404.313 s, 85.0 MB/s

real    7m22.135s
user    0m1.012s
sys     0m52.867s
dd if=/dev/zero of=ddfile6 bs=8k count=2097152
2097152+0 records in
2097152+0 records out
17179869184 bytes (17 GB) copied, 185.731 s, 92.5 MB/s
time dd if=ddfile5 of=/dev/null bs=8k
4194304+0 records in
4194304+0 records out
34359738368 bytes (34 GB) copied, 343.383 s, 100 MB/s

real    5m44.631s
user    0m0.660s
sys     0m21.885s

A lot better numbers. The lockup issues I had also seem to be more or less gone (that is, they still show up sometimes, but not nearly as often as during the previous tests).

And with the 3.2 kernel:

Code:
jocke@noshut:~$ ./ddtest.sh
Testing 128k bs
time sh -c "dd if=/dev/zero of=ddfile1 bs=128k count=262144 && sync"
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 316.665 s, 109 MB/s

real    5m42.946s
user    0m0.056s
sys     0m45.907s
dd if=/dev/zero of=ddfile2 bs=128k count=131072
131072+0 records in
131072+0 records out
17179869184 bytes (17 GB) copied, 146.632 s, 117 MB/s
time dd if=ddfile1 of=/dev/null bs=128k
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 339.492 s, 101 MB/s

real    5m40.922s
user    0m0.096s
sys     0m22.561s
Testing 64k bs
time sh -c "dd if=/dev/zero of=ddfile3 bs=64k count=524288 && sync"
524288+0 records in
524288+0 records out
34359738368 bytes (34 GB) copied, 315.238 s, 109 MB/s

real    5m43.151s
user    0m0.176s
sys     0m45.535s
dd if=/dev/zero of=ddfile4 bs=64k count=262144
262144+0 records in
262144+0 records out
17179869184 bytes (17 GB) copied, 151.175 s, 114 MB/s
dd if=ddfile3 of=/dev/null bs=64k
524288+0 records in
524288+0 records out
34359738368 bytes (34 GB) copied, 340.231 s, 101 MB/s

real    5m41.556s
user    0m0.164s
sys     0m22.277s
Testing 8k bs
time sh -c "dd if=/dev/zero of=ddfile5 bs=8k count=4194304 && sync"
4194304+0 records in
4194304+0 records out
34359738368 bytes (34 GB) copied, 325.439 s, 106 MB/s

real    5m54.724s
user    0m1.144s
sys     0m48.927s
dd if=/dev/zero of=ddfile6 bs=8k count=2097152
2097152+0 records in
2097152+0 records out
17179869184 bytes (17 GB) copied, 154.258 s, 111 MB/s
time dd if=ddfile5 of=/dev/null bs=8k
4194304+0 records in
4194304+0 records out
34359738368 bytes (34 GB) copied, 336.814 s, 102 MB/s

real    5m37.950s
user    0m0.608s
sys     0m23.453s

Even better. And I guess I can assume that these values are more like what I can expect? Or should I really be seeing even higher numbers?
 
Looks correct to me; the only software or hardware RAID that I know of that will double read speed in RAID 1 is ZFS. With everything else you basically get the performance of a single disk.

WD Black drives usually have issues with RAID controllers because of their lack of TLER. Don't be surprised if you see a drive drop out of the array.
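
If you want to check for yourself, recent smartmontools can query and set the drives' error-recovery timeout (SCT ERC, the same mechanism WD brands as TLER); desktop drives usually just report it as unsupported or disabled. A sketch, with a made-up device name:

Code:
# read the current SCT ERC (error recovery control) timeouts
smartctl -l scterc /dev/sda
# try to set read/write recovery to 7 seconds (values are tenths of a second)
smartctl -l scterc,70,70 /dev/sda
# behind a MegaRAID-family controller you'd likely need the passthrough option, e.g.:
smartctl -d megaraid,0 -l scterc /dev/sda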
 
Looks correct to me; the only software or hardware RAID that I know of that will double read speed in RAID 1 is ZFS. With everything else you basically get the performance of a single disk.
Yeah, ok. Good.

WD Black drives usually have issues with RAID controllers because of their lack of TLER. Don't be surprised if you see a drive drop out of the array.
Oh, snap. I'll have to see if I can get them returned in exchange for some proper SAS disks then. Thanks for the heads-up! (-:
 
Yeah, ok. Good.


Oh, snap. I'll have to see if I can get them returned in exchange for some proper SAS disks then. Thanks for the heads-up! (-:

I dropped a single WD Green 1.5TB drive into an LSI2008 RAID card and it dropped out within 2 minutes. Tried it again, then remembered that it was missing TLER.
 
I dropped a single WD Green 1.5TB drive into an LSI2008 RAID card and it dropped out within 2 minutes. Tried it again, then remembered that it was missing TLER.
Ouch.

I haven't really decided what to do yet.

Currently I have 4x2TB WD2002FAEX, and 2x1TB Seagate ST1000DM003.

The original plan was to use the 2x1TB in RAID 1 as system disks, then put the 4x2TB in RAID 5 for storage.

However, with this TLER thing, it seems I'm stuck with two alternatives (other suggestions are welcome):

1) Set up all 6 disks as individual single-drive RAID 0s (that is, 6x RAID 0 in total), and then use md-raid on top (2x1TB as RAID 1, 4x2TB as RAID 5). This way I would avoid the TLER issue, right? (Rough sketch further down.)

2) Swap all 6 disks for RAID Edition/SAS disks and use the controller as originally planned.

If I go down the software route on top of RAID 0 disks: in case the controller fails, can I put the disks in any other machine with a different controller and get the RAID going again? Or does the controller do something to the drives even when it's just a single disk in RAID 0?
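
For what it's worth, on the md side option 1 would boil down to something like this (device names are just examples, not what they'd actually be called here):

Code:
# 2x1TB as a mirror for the system, 4x2TB as RAID5 for storage
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdd /dev/sde /dev/sdf /dev/sdg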

Any suggestions are very welcome.
 
I believe you can get around this using Hitachi drives if you want hardware RAID 1. They do not seem to be as much of a problem as Seagate or WDC.

Also, mdraid does not have the TLER issue; I have over a dozen WDC 1TB+ Black drives in Linux software RAID 6 at work, although I am buying Hitachi 7K3000s for all new arrays.
 
Also, mdraid does not have the TLER issue; I have over a dozen WDC 1TB+ Black drives in Linux software RAID 6 at work.
Would that be the case even if the drives are single-drive RAID 0s on the controller? I mean, the issue with TLER is between the drive and the controller, right? Or does mdraid actually figure these things out even if the drive is running as RAID 0?

Maybe I'm asking stupid questions here, now, but...
 
The drives should be fine if the controller is not doing the RAID. At work I use a few different LSI-based SAS 1 and 2 cards. Some of them are flashed to IT firmware instead of the RAID firmware the controller ships with; the others just use the disks unassigned to any RAID array, so they show up as individual disks in Linux using the Fusion MPT SAS driver.
 
The drives should be fine if the controller is not doing the RAID. At work I use a few different LSI-based SAS 1 and 2 cards. Some of them are flashed to IT firmware instead of the RAID firmware the controller ships with; the others just use the disks unassigned to any RAID array, so they show up as individual disks in Linux using the Fusion MPT SAS driver.
Well, that's the core of the problem: this is a PERC 5/i, not a SAS 5/i, hence it's not an MPT card.

In order to use this controller with md-raid, I have to set up the 6 drives as 6 individual RAID 0s. That's what I'm concerned about, since the controller would still be doing RAID against the 6 drives, and would probably be affected by the "TLER issue"...?
 
Okay, so I've bought a Dell SAS 6/iR. It has pass-through support for devices not in any array.

I'll stick with the WD2002FAEX, I think. I bought 2 more, and I'm going for 6x2TB in RAID6 using md-raid.

Thanks for all the help and inputs in this thread! (-:
 
So, the 6x2TB drives are installed, using the Dell SAS 6/iR (yes, it supports pass-through of devices not in any array).

Partitions on all 6 drives:
Code:
6x1MB: bios_grub
6x100MB: use as physical RAID
6x25GB: use as physical RAID
6x~2TB: use as physical RAID

mdraid using the above partitions (except bios_grub):
Code:
md0: 6x100MB RAID1
md1: 6x25GB RAID6
md2: 6x~2TB RAID6

Partitions on the RAID:
Code:
md0: 1x100MB /boot (ext2)
md1: 1x100GB LVM
  32GB: swap
  30GB: /usr (ext4)
  10GB: /var (ext4)
  ~28GB: / (ext4)
md2: 1x~7TB LVM
  200GB: /home (ext4)
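
Putting the arrays and volumes together went roughly along these lines (partition numbers, device names and VG/LV names here are illustrative, not necessarily exactly what I typed):

Code:
# on each disk: p1 = bios_grub, p2 = 100MB, p3 = 25GB, p4 = rest (~2TB)
mdadm --create /dev/md0 --level=1 --raid-devices=6 /dev/sd[a-f]2   # /boot
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[a-f]3   # system LVM
mdadm --create /dev/md2 --level=6 --raid-devices=6 /dev/sd[a-f]4   # storage LVM
pvcreate /dev/md1 /dev/md2
vgcreate vg_sys /dev/md1
vgcreate vg_data /dev/md2
lvcreate -L 200G -n home vg_data
mkfs.ext4 /dev/vg_data/home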


And the results are far better than before (as expected, though, since I'm going from a 2-disk RAID 1 to a 6-disk RAID 6).

Code:
jocke@noshut:~$ uname -a
Linux noshut 3.2.0-0.bpo.2-amd64 #1 SMP Sun Jun 3 21:40:57 UTC 2012 x86_64 GNU/Linux

jocke@noshut:~$ time sh -c "dd if=/dev/zero of=ddfile1 bs=128k count=262144 && sync"
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 107.128 s, 321 MB/s

real	1m51.058s
user	0m0.028s
sys	0m55.395s

jocke@noshut:~$ time dd if=ddfile1 of=/dev/null bs=128k
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 63.9567 s, 537 MB/s

real	1m3.972s
user	0m0.072s
sys	0m22.193s
 
You can't have individual drives as RAID 0s. You can pass single drives in JBOD mode, but RAID 0 requires at least two devices to stripe across.
 
You can't have individual drives as RAID 0s. You can pass single drives in JBOD mode, but RAID 0 requires at least two devices to stripe across.

Actually, on older RAID cards [like the PERC 5/i], this is how you had to pass single drives that were plugged into the RAID card through to the OS: as single-drive "RAID-0" arrays.
 
Well, that's the core of the problem: this is a PERC 5/i, not a SAS 5/i, hence it's not an MPT card.

In order to use this controller with md-raid, I have to set up the 6 drives as 6 individual RAID 0s. That's what I'm concerned about, since the controller would still be doing RAID against the 6 drives, and would probably be affected by the "TLER issue"...?

I was checking on the Dell website and there is non-RAID firmware for the PERC 5/i available for download, but I haven't managed to flash it yet because the updater says it is not compatible. I've done the same in the past with a SAS H200, clearing out the flash and then uploading the IT firmware. I'm not sure whether the same procedure would work with the PERC 5/i. Do you know?
 
What size of DDR stick are you using in the PERC 5/i card? They officially support 512MB sticks, which gives a bit of a performance boost over 256MB.

When I was building up a server for a client, I also managed to get some 1GB sticks that work in the card. The card still only shows 512MB, but the 1GB stick did give a slight performance increase compared to the 512MB stick.

The setup I did was a 4-drive RAID 10 with Seagate 320GB 7.2K drives.

I also flashed to the compatible LSI BIOS, since it is quite a bit newer than the Dell BIOS. The newer BIOS doesn't let you force write-back on; it will only run in write-back mode if the battery is good.

Here is a link to a forum thread dedicated to the PERC 5/i cards.
http://www.overclock.net/t/359025/perc-5-i-raid-card-tips-and-benchmarks
 
Just the standard 256MB stick that came with it. What I really wanted was to use the card in IT mode and run ZFS on it.

I'll probably eBay the 2950; not much use for it with that RAID controller.
 
You would be surprised; it can probably run ESX 5.0 just fine. Takes some CPU and memory upgrades.

Power piggy, though.

I'm still using several (including one with 6x2TB Hitachi 7K2000 disks) that are pretty busy at work.
 
It can, but then an R515 can do the job just fine with 12 bays :)

Damn cheap server for storage. I have a few with H700s and CacheCade; now I want to get one with an H200 and implement ZFS. The 12-bay version also has two 2.5-inch internal bays, perfect for the price range.
 
I dropped a single WD Green 1.5TB drive into an LSI2008 RAID card and it dropped out within 2 minutes. Tried it again, then remembered that it was missing TLER.

I have six WD20EARS and four WD15EADS drives in an Areca 1882ix-12 with no issues at all. They hit 900/950MB/s and 440/380MB/s respectively.
 