Question: Napp-it (HDD array) poor and non-linear benchmark results (without SSDs)

skalada

n00b
Joined
Jun 3, 2014
Messages
7
Hello,

I'm testing Napp-it on OmniOS with a 48 x 1 TB HDD array, without SSDs. I created several test pools (4, 20 and 40 HDDs) in a RAID 10 (striped mirror) configuration.

With the optimal block size, a single HDD delivers about 76 MB/s sequential write and about 102 MB/s sequential read, and all drives are the same model.

The benchmark results give me what is, in my opinion, slow performance with the 4-HDD pool, but even worse, the performance does not increase linearly (as the online RAID calculators* suggest it should) when I add more HDDs to the pool.

For example, the default bonnie++ run gives me:



1 x HDD pool ------> 76 MB/s Sequential Write - 102 MB/s Sequential Read (Basic pool)

4 x HDD pool ------> 154 MB/s Sequential Write - 258 MB/s Sequential Read (RAID 10)
8 x HDD pool ------> 308 MB/s Sequential Write - 458 MB/s Sequential Read (RAID 10)
16 x HDD pool -----> 364 MB/s Sequential Write - 573 MB/s Sequential Read (RAID 10)
24 x HDD pool -----> 397 MB/s Sequential Write - 671 MB/s Sequential Read (RAID 10)
34 x HDD pool -----> 384 MB/s Sequential Write - 692 MB/s Sequential Read (RAID 10)

bonnies.jpg
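To rule out bonnie++ itself, I can also cross-check with a plain dd run (the path and size below are only an example; the test file must be much bigger than RAM so the ARC cache does not hide the disks, and compression must stay off on the test filesystem). Dividing the file size by the elapsed time gives the MB/s:

root@storage1:~# zfs create tank4/bench
root@storage1:~# ptime dd if=/dev/zero of=/tank4/bench/ddtest bs=1024k count=32768
root@storage1:~# ptime dd if=/tank4/bench/ddtest of=/dev/null bs=1024k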


* Online RAID calculators:
http://wintelguy.com/raidperf.pl
http://www.wesworld.net/raidcalculator.html

I think something must be wrong here. Can somebody give me a tip about these results?

Thanks in advance. Regards,
 
Last edited:
Hello, here I attach the HW info and some more details about the configuration. I don't think it is a HW limitation (I disable the napp-it monitor before running the benchmarks), but when I don't, the only reading that goes red is Disk:
Monitor: inactive Pool Cap Disk Net CPU Job

And in System > Statistics I can see that, even while a benchmark is running, the CPU idle stays high:

98 % CPU idle (waiting, nothing to do)
457.74 MB free physical memory
2.29 GB free virtual memory (swap)
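While a benchmark runs I can also watch the individual drives with iostat (standard illumos tool, 5-second samples). If the %b (busy) column of the data disks stays well below 100%, the disks themselves are not the limit and the bottleneck sits above them (controller, cabling, expander):

root@storage1:~# iostat -xn 5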


HW info:

1 x SuperMicro X8DT3-F
http://www.supermicro.com/products/motherboard/QPI/5500/X8DT3-F.cfm

-----> with 1 x LSI SAS3801E SAS HBA connected via PCI Express: http://www.lsi.com/downloads/Public... Bus Adapters Common Files/lsisas3801e_pb.pdf

2 x Sun Storage J4400
http://docs.oracle.com/cd/E19928-01/

48 x HDDs: SEAGATE ST31000N and HITACHI H7210CA3

Extra info:

root@storage1:~# isalist
amd64 pentium_pro+mmx pentium_pro pentium+mmx pentium i486 i386 i86
root@storage1:~# isainfo
amd64 i386
root@storage1:~# isainfo -b
64
root@storage1:~# uname -a
SunOS storage1 5.11 omnios-6de5e81 i86pc i386 i86pc

  pool: tank4 (SYNC=disabled)
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM    CAP    Product
        tank4         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c4t29d0   ONLINE       0     0     0    1 TB   SEAGATE ST31000N
            c4t32d0   ONLINE       0     0     0    1 TB   SEAGATE ST31000N
          mirror-1    ONLINE       0     0     0
            c4t27d0   ONLINE       0     0     0    1 TB   SEAGATE ST31000N
            c4t33d0   ONLINE       0     0     0    1 TB   SEAGATE ST31000N
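While bonnie++ runs, the per-vdev view shows how the throughput is spread over the mirrors (standard ZFS command, 5-second samples). If every mirror pair stays far below the ~76 MB/s a single disk can write while the pool total is flat, the limit sits above the disks:

root@storage1:~# zpool iostat -v tank4 5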

Thanks
 
Last edited:
These systems are quite old.
Have you ever seen better values from them?

You could connect as many disks as possible directly to the internal SAS or SATA controller in AHCI mode and compare a similar directly connected RAID 10 setup against the external Sun boxes.
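For example something like this (the device names are only placeholders for whatever the internal controller enumerates), then run the same bonnie++ against it:

zpool create testint mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0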
 
Thank you... No, I have never seen better values from them. I cannot connect more than 8 HDDs to the internal controller.

I was thinking about it and I believe the problem could be here:
-----> with 1 x LSI SAS3801E SAS HBA connected via PCI Express: http://www.lsi.com/downloads/Public... Bus Adapters Common Files/lsisas3801e_pb.pdf
This SAS controller has only 8 SAS ports, and it is right above 8 HDDs that the performance stops scaling linearly as I add disks:


bonnies_conclusion.jpg


If I understood the idea of SAS ports properly, I think the problem could be here. Does that sound reasonable? I also tested with 2 HBA controllers; here are the results.

resultados_2_hba.jpg



Thanks! Regards.
 
Last edited:
If you connect your systems over a single miniSAS cable that combines 4 SAS ports, you have a maximum possible data transfer rate with SAS devices of 4 x 3 Gb/s = 12 Gb/s = around 1500 MB/s. (With 1.5 Gb/s SATA, halve that value.)
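In a little more detail, the 8b/10b line coding of 3 Gb/s SAS costs about 20%, so per 4-lane miniSAS link:

4 lanes x 3 Gb/s = 12 Gb/s raw
12 Gb/s x 0.8 = 9.6 Gb/s after encoding = about 1200 MB/s of usable payload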

The only interesting question in your config, where you reach about half of what is possible, is whether the external expander is the limiting factor or not.

Have you used daisy-chain cabling, or a separate miniSAS cable to each Sun JBOD box to allow that transfer rate to each Sun box?
 
I'm using miniSAS cables, connected as the Sun array manuals indicate.

SASMultipath-5.gif


Now I have updated the test result graphics (in the last post) with 1 and 2 miniSAS cables (one for each SAS controller in the host).

I enabled the multipath configuration on the host (with the stmsboot -e command) and the results were the same as with it disabled.

I don't know whether a bad Sun array multipath configuration causes the bottleneck, or whether the 2 SAS controllers (8+8 ports) are the bottleneck.
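To see what MPxIO actually did, I can list the renamed devices and the multipathed LUs with the standard illumos tools (mpathadm show lu <device> then shows the number of paths per LU):

root@storage1:~# stmsboot -L
root@storage1:~# mpathadm list lu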

SAS.GIF


4 x 3 Gb/s = 12 Gb/s
Does this calculation apply to a SAS controller, to a SAS expander, or to both?

Thanks!
 
Last edited:
Does this calculation apply to a SAS controller, to a SAS expander, or to both?

Thanks!

You can connect up to 8 disks directly to your controller over 2 miniSAS SFF 8087 cables.

Whether you connect 4 single disks directly per cable or 128 disks over an expander does not matter: the capacity of the miniSAS link is limited to 4 x 3 Gb/s.
You can use newer LSI SAS controllers that offer 6 or 12 Gb/s per port but that requires new disks/expanders as well.
 
Thanks for the help.

After reading some SAS controller reports, I think there is a bottleneck here: I think I connected one of the SAS HBAs to the wrong PCI slot.

mboard_4.jpg


And if the information in those same SAS controller reports is right, the results in the graphs I got would make more sense.

limit.jpg
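If I have the card data right (the SAS3801E is a PCI Express 1.1 x8 card, and PCIe 1.x carries roughly 250 MB/s per lane and direction), the slot arithmetic would be:

x8 slot: 8 x 250 MB/s = about 2000 MB/s to the host
x4 slot: 4 x 250 MB/s = about 1000 MB/s to the host

so a card that ends up in a slot running only x4 electrically loses half of its host bandwidth.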


Will post results after testing, regards!
 
Last edited:
This is interesting. I hope that this is the bottleneck; I would think that you are just saturating your controller, but I might be wrong. An interesting test.
 
It has been a long time without testing! :)

As I explained here, my benchmarks with Napp-it on OmniOS (without SSDs) seemed to give problems and poor, non-linear results, so I started using OpenIndiana because the results were better.

Since I added solid state drives (SLC for the ZIL write log, MLC for the read cache) it does not happen anymore, so I went back to OmniOS because of the websocket-based monitoring it supports.
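For reference, adding them boils down to the usual ZFS commands (the device names here are only placeholders):

root@storage1:~# zpool add tank4 log c5t0d0
root@storage1:~# zpool add tank4 cache c5t1d0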

The remaining doubt is about the multipath options. Once multipathing (stmsboot -e) is enabled, the results get worse, and irreversibly so:



I think this is not my biggest problem, but just out of curiosity: any ideas about what may be happening?
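One thing I still want to confirm is that multipathing is really off again after disabling it; stmsboot -d (plus a reboot) should switch MPxIO off, and afterwards mpathadm should no longer list any multipathed LUs:

root@storage1:~# stmsboot -d
root@storage1:~# mpathadm list lu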

Regards!
 