root@OmniOS:/root# cd smartmontools-6.4
root@OmniOS:/root/smartmontools-6.4# ./configure --prefix=/usr --sysconfdir=/etc
checking for a BSD-compatible install... /usr/gnu/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/gnu/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking for g++... no
checking for c++... no
checking for gpp... no
checking for aCC... no
checking for CC... no
checking for cxx... no
checking for cc++... no
checking for cl.exe... no
checking for FCC... no
checking for KCC... no
checking for RCC... no
checking for xlC_r... no
checking for xlC... no
checking whether the C++ compiler works... no
configure: error: in `/root/smartmontools-6.4':
configure: error: C++ compiler cannot create executables
See `config.log' for more details
root@OmniOS:/root/smartmontools-6.4# make && make install && make clean && cd $HOME
make: Fatal error: No arguments to build
I'm having trouble installing smartmontools on OmniOS r151016 - napp-it v16.01f. I even tried the 5.4 and 6.3 versions. Do I need other packages installed? Thanks.
OmniOS 151016 switched from gcc48 to gcc51.
The compile routine does not handle that yet.
If you can't wait, install from
http://scott.mathematik.uni-ulm.de/
pkg set-publisher -O http://scott.mathematik.uni-ulm.de/release uulm.mawi
pkg search -pr smartmontools
pkg install smartmontools
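If you would rather compile from source, the configure failure above just means that no C++ compiler was found in the PATH. A minimal sketch, assuming the gcc 5.1 package is named developer/gcc51 and installs under /opt/gcc-5.1.0 (verify the name and path with pkg search gcc first):

pkg install developer/gcc51
export PATH=/opt/gcc-5.1.0/bin:$PATH    # put g++ on the PATH so configure can find it
cd /root/smartmontools-6.4
./configure --prefix=/usr --sysconfdir=/etc
make && make install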
root@OmniOS:/root# pkg info -r smartmontools
Name: system/storage/smartmontools
Summary: Control and monitor storage systems using SMART
State: Installed
Publisher: uulm.mawi
Version: 6.3
Branch: 0.151012
Packaging Date: Mon Sep 29 13:22:53 2014
Size: 1.83 MB
FMRI: pkg://uulm.mawi/system/storage/[email protected]:20140929T132253Z
# Defaults for smartmontools initscript (/etc/init.d/smartmontools)
# This is a POSIX shell fragment
# List of devices you want to explicitly enable S.M.A.R.T. for
# Not needed (and not recommended) if the device is monitored by smartd
#enable_smart="/dev/hda /dev/hdb"
# uncomment to start smartd on system startup
start_smartd=yes
# uncomment to pass additional options to smartd on startup
smartd_opts="--interval=1800"
/dev/rdsk/c1t50014EE2B6A43F25d0s0 -a -d sat,12
/dev/rdsk/c1t50014EE2B6AC03AFd0s0 -a -d sat,12
/dev/rdsk/c1t50014EE2B6B9E8F4d0s0 -a -d sat,12
/dev/rdsk/c1t50014EE20B9C56B6d0s0 -a -d sat,12
/dev/rdsk/c1t50014EE20B82D7B5d0s0 -a -d sat,12
/dev/rdsk/c1t50014EE20BFFC2BEd0s0 -a -d sat,12
/dev/rdsk/c1t50014EE20BFFD1F3d0s0 -a -d sat,12
/dev/rdsk/c1t50014EE20C070DE3d0s0 -a -d sat,12
/dev/rdsk/c1t50014EE20C168388d0s0 -a -d sat,12
/dev/rdsk/c1t50014EE2614E6475d0s0 -a -d sat,12
/dev/rdsk/c1t50014EE2614EFAD1d0s0 -a -d sat,12
/dev/rdsk/c1t50014EE26154D4CEd0s0 -a -d sat,12
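To verify that smartctl can actually talk to one of these disks, a direct query with the same device type as in the config above is a quick test (first disk from the list used here):

smartctl -a -d sat,12 /dev/rdsk/c1t50014EE2B6A43F25d0s0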
Thanks, I installed smartmontools, but napp-it still says it is not installed and the smartctl command doesn't work?
I have updated the wget installer to compile the newest smartmontools 6.4,
and napp-it v16.01p now includes a tuning panel for IP, system, NFS, nic and vnic (vmxnet3s) settings.
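If napp-it still reports smartmontools as missing after that, re-running the online installer should pull in the new build; the usual napp-it install command (check napp-it.org for the current one) is something like:

wget -O - www.napp-it.org/nappit | perl

If the smartctl binary exists but the shell cannot find it, list where the package put it:

pkg contents smartmontools | grep bin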
Both are Illumos distributions, so this is easy. Think of OpenIndiana 151a as an OS that is nearly identical
to the first OmniOS release some years ago. Indeed, OmniTI originally intended to use OpenIndiana as the
base for their minimalistic enterprise storage OS, but decided against it as there were too many
dependencies on the full-size desktop version. So they built their own distribution from Illumos, and it is now
the ZFS storage server with the smallest footprint. Not "just enough OS" but "just enough ZFS storage OS" -
the main reason why OmniOS has had a stable release for years while OpenIndiana has not.
If you want to migrate on the same machine, just install OmniOS and import the pool. If you want to keep
OpenIndiana, use a new system disk, e.g. a small 30GB+ SSD. If you want to transfer data between pools,
do a ZFS send, or sync files with rsync or robocopy (Windows), which keeps permissions intact.
If everything is OK, update the pool for newer features.
zfs snapshot -r tank/data@fullbackup
zfs send -R tank/data@fullbackup | ssh 192.168.1.143 zfs recv -vFd WD5TBX12/WD5TBX12DATA
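For the same-machine path, the pool move itself is only an export and import; the pool name "tank" below is just an example, and the upgrade at the end is what enables the newer features mentioned above:

zpool export tank      # on OpenIndiana, before shutting down
zpool import           # on OmniOS, lists pools available for import
zpool import tank      # add -f only if the pool was not exported cleanly
zpool upgrade tank     # irreversible: older releases can no longer import the pool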
KexAlgorithms [email protected],diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group-exchange-sha256,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521
I added another round to my 10G benchmarks with SMB2 on OmniOS
Are the multi-user write degradations due to the copy-on-write nature of ZFS or to SMB 2.1?
_Gea, you've often recommended Xeon E3s, citing their per-thread performance. This testing used 2x E5520s, which are significantly slower per thread. Did CPU load limit any of the results?
Have you tested with E3s or newer E5s?
The very old OpenIndiana 151a does not support a 1M recordsize, while the current OpenIndiana Hipster does.
I have not tried whether zfs send preserves the blocksize or uses the parent settings.
Try it.
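A quick way to check after a receive, using the target dataset from the send example above; the SOURCE column tells you whether the value was received with the stream or inherited from the parent:

zfs get recordsize WD5TBX12/WD5TBX12DATA

Note also that streams containing blocks larger than 128K may need the large-block send flag (zfs send -L) where the platform supports it.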
Memory size: 16384 Megabytes
write 12.8 GB via dd, please wait...
time dd if=/dev/zero of=/hdd/dd.tst bs=2048000 count=6250
6250+0 records in
6250+0 records out
12800000000 bytes transferred in 25.665755 secs (498719016 bytes/sec)
real 25.6
user 0.0
sys 3.1
12.8 GB in 25.6s = 500.00 MB/s Write
wait 40 s
read 12.8 GB via dd, please wait...
time dd if=/hdd/dd.tst of=/dev/null bs=2048000
6250+0 records in
6250+0 records out
12800000000 bytes transferred in 1.669509 secs (7666924629 bytes/sec)
real 1.6
user 0.0
sys 1.6
12.8 GB in 1.6s = 8000.00 MB/s Read
Memory size: 16384 Megabytes
write 12.8 GB via dd, please wait...
time dd if=/dev/zero of=/ssd/dd.tst bs=2048000 count=6250
real 2:08.5
user 0.0
sys 1.5
12.8 GB in 128.5s = 99.61 MB/s Write
Thank you. I have used the above hardware as it was available.
I have not done CPU tests, but I doubt they would be very relevant.
If you have the option, newer CPUs are faster, and frequency
is more important than the number of cores.
I would say the smaller part is due to network collisions.
Mostly it is due to concurrent reads/writes to storage.
Running some dd benchmarks, can anyone confirm if these are good?
6 x 4TB WD Se in RAID Z2
Code:
Memory size: 16384 Megabytes
write 12.8 GB via dd, please wait...
12.8 GB in 25.6s = 500.00 MB/s Write
wait 40 s
read 12.8 GB via dd, please wait...
12.8 GB in 1.6s = 8000.00 MB/s Read
And 2 x Samsung 840 Pro 512GB in a Mirror. Seems really slow for some reason
Code:
12.8 GB in 128.5s = 99.61 MB/s Write
Finally, I'm running an all-in-one on ESXi. I have a few questions about that too:
- Would iSCSI perform better than NFS as a datastore? I currently have a ZIL on each zpool, which has worked wonders in reducing slowdowns on my VMs (25 min vs. 1 min for a simple system update on Ubuntu).
- Is it worth enabling jumbo frames at all (and where should I enable them) for faster transfers between my VMs? No physical devices I own actually support 10GbE, only the virtual machines running a VMXNET3 NIC.
I don't mean to derail the thread, but I would have thought that NVMe could handle many requests for both reads and writes.
So to move my 10 x 2TB disks from OI (I should do an export) and install them in OmniOS (I should do an import), the pool should be accessible? Without an export, will the pool get corrupted?
I have 20 disks that used to be a part of a zpool but did not have the zpool destroy command used on them. Are they safe to use with a new zpool? Or is there metadata left behind on them that should be wiped?
2. About writes to the SSD mirror:
I would check the SSD pool with sync=always vs. sync=disabled;
the difference shows the sync write quality of the pool or the slog device.
If these are sync=disabled values, there is another problem.
Memory size: 16384 Megabytes
write 4.096 GB via dd, please wait...
time dd if=/dev/zero of=/ssd-Bd0/dd.tst bs=4096000 count=1000
1000+0 records in
1000+0 records out
4096000000 bytes transferred in 113.319963 secs (36145441 bytes/sec)
real 1:53.3
user 0.0
sys 0.8
4.096 GB in 113.3s = 36.15 MB/s Write
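A minimal way to run that comparison, assuming the pool is named "ssd" as in the dd paths above (set sync back to standard afterwards):

zfs set sync=disabled ssd
# run the dd write benchmark and note the MB/s
zfs set sync=always ssd
# run it again; a large gap means the pool (or slog) handles sync writes poorly
zfs set sync=standard ssd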