OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Possible: yes
Helpful: I doubt it (only new or modified files are written twice)

Better:
use a second disk (or two fast 16GB+ USB sticks with OmniOS/OI server) and ZFS mirror them

Can't I use USB sticks with OI desktop? I just saw that it doesn't even take 10GB, and I've installed most of what I'm expecting to use. I could get two 32GB sticks.
 
Can't I use USB sticks with OI desktop? I just saw that it doesn't even take 10GB, and I've installed most of what I'm expecting to use. I could get two 32GB sticks.

Solaris is not really optimized for USB sticks, but if you
- use two modern and fast (USB3) sticks
- disable atime (access time logging)
- ZFS mirror them (doubles read performance and reliability)


you can use them. My own "napp-it to Go" distribution is based on 16 GB sticks and OmniOS.
I would prefer a minimal distribution like OI server or OmniOS.

(OmniOS is currently the most up-to-date free distribution and is stable - optionally with support.)
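
For reference, a minimal sketch of those settings from the shell; the pool name (rpool) and the device names (c2t0d0/c3t0d0) are only assumptions, check yours with format:

Code:
zfs set atime=off rpool                   # disable access-time logging
zpool attach rpool c2t0d0s0 c3t0d0s0      # attach the second stick as a mirror
zpool status rpool                        # watch the resilver finish
# on a boot pool you typically also put the boot loader on the new stick, e.g.
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t0d0s0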
 
I've already completed an initial seed replication and some incremental replications of two ~2TB iSCSI zvols. Now I'd like to try the replication extension as an easy way to automate ongoing remote replication.

1) Can I start with the seed already in place, or does the extension need to start with its own seed?

When I run this from the shell, replication works:

zfs send -I @daily-1365112602_2013.04.15.06.00.01 r10/B1@daily-1365112602_2013.04.17.09.00.02 | ssh 192.168.1.186 zfs receive -vF r10/B1

But I get this from the job created on the receiving host with the napp-it extension:
settings: nappit1 192.168.1.184 r10/B1 nc -I r10/B1 port 51175
info: job-repli 299, source-dest snap-pair not found, check target ZFS and repli_snaps

Snap names match on source/target hosts but I do generally have to use -F to rollback on target.
 
Hi _Gea,

I think I've come across a bug in the latest Napp-It (v.0.9a9 nightly Mar.04.2013).

It looks like when importing a zvol LU via COMSTAR the system cannot handle a LU with an underscore ( _ ) in the name. The GUI errors out with a "metadata" error and the LU import fails. The command shown in the live log window at the bottom is cut off at the underscore in the zvol name.

Running the command as it is shown but with the rest of the zvol name after the underscore is successful and the LU is imported.

Thanks!!
Riley

thanks
Comment out line 130 in the file "/var/web-gui/data/napp-it/zfsos/09_comstar iscsi/02_logical units/05_import LU/action.pl":

$lu=~s/\_.*//; # remove end statement

modify it to

# $lu=~s/\_.*//; # remove end statement (disabled, it breaks LU names containing _)

fixed in next release
 
I've already completed an initial seed replication and some incremental replications of two ~2TB iSCSI zvols. Now I'd like to try the replication extension as an easy way to automate ongoing remote replication.

1) Can I start with the seed already in place, or does the extension need to start with its own seed?

When I run this from the shell, replication works:

zfs send -I @daily-1365112602_2013.04.15.06.00.01 r10/B1@daily-1365112602_2013.04.17.09.00.02 | ssh 192.168.1.186 zfs receive -vF r10/B1

But I get this from the job created on the receiving host with the napp-it extension:
settings: nappit1 192.168.1.184 r10/B1 nc -I r10/B1 port 51175
info: job-repli 299, source-dest snap-pair not found, check target ZFS and repli_snaps

Snap names match on source/target hosts but I do generally have to use -F to rollback on target.

- The extension has its own snap naming. While you may rename your snaps, it's easier to do a new initial sync after deleting or renaming the target ZFS

- If you touch the target filesystem, even read-only, a rollback is needed

- napp-it replication is up to twice as fast as replication over ssh
 
- The extension has its own snap naming. While you may rename your snaps, it's easier to do a new initial sync after deleting or renaming the target ZFS

Found the rename command: zfs rename tank/home/cindys@083006 today
Is it just a matter of changing to a supported name? I'm sure it's more complicated or you would have said so, but a reseed will take at least a day even with netcat, so I want to be sure.

- If you touch the target filesystem, even read-only, a rollback is needed

I will touch it periodically for test restores. This is a remote DR setup for vSphere Data Protection. Can napp-it replication set the -F flag? Is it possible to edit the command if it's not available in the GUI?

- napp-it replication is up to twice as fast as replication over ssh

Sweet! napp-it is a great tool! I'm pimping you out big time over on the VMware Backup and Replication forums. VDP is VMware's free included backup appliance. It does great dedup and lets you easily back up your whole ESXi environment, but they didn't include any way of safely storing the data off-site or recovering it in a disaster! Enter ZFS and napp-it!
 
- The extension has its own snap naming. While you may rename your snaps, it's easier to do a new initial sync after deleting or renaming the target ZFS

Found the rename command: zfs rename tank/home/cindys@083006 today
Is it just a matter of changing to a supported name? I'm sure it's more complicated or you would have said so, but a reseed will take at least a day even with netcat, so I want to be sure.

- If you touch the target filesystem, even read-only, a rollback is needed

I will touch it periodically for test restores. This is a remote DR setup for vSphere Data Protection. Can napp-it replication set the -F flag? Is it possible to edit the command if it's not available in the GUI?

- napp-it replication is up to twice as fast as replication over ssh

Sweet! napp-it is a great tool! I'm pimping you out big time over on the VMware Backup and Replication forums. VDP is VMware's free included backup appliance. It does great dedup and lets you easily back up your whole ESXi environment, but they didn't include any way of safely storing the data off-site or recovering it in a disaster! Enter ZFS and napp-it!

napp-it zfs receive always uses -F and additionally sets the received filesystem to read-only. If you want to transform an existing replication set to napp-it, you may:

- rename the target ZFS
- create an appliance group (Extension - Appliance Group) with the source server
- create a replication job (only possible when the target does not exist), remember the jobid
- rename the target back to its original name

- now you need a last replication snap on both sides with the same running number
and the jobid (create a test replication to check the naming), rename the snaps

- now you should be able to start the incremental replications in napp-it
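
A hedged sketch of those steps from the shell, reusing the names already in this thread (r10/B1, host nappit1, jobid 1366409769) purely as examples - verify the exact snap naming with a test replication first:

Code:
zfs rename r10/B1 r10/B1.old        # 1) move the target aside so the job can be created
# 2)+3) create the appliance group and the replication job in the napp-it GUI, note the jobid
zfs rename r10/B1.old r10/B1        # 4) rename the target back
# 5) give the newest common snapshot the napp-it name on BOTH hosts:
zfs rename r10/B1@daily-1365112602_2013.04.19.11.51.24 r10/B1@1366409769_repli_zfs_nappit1_nr_1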
 
gave it the old college try but I'm just gonna reseed over the weekend. Does it look like I got close?
zfs rename -r r10/B1@daily-1365112602_2013.04.19.11.51.24 1366409769_repli_zfs_nappit1_nr_1
 
Hi Gea, if source and target have multiple network adapters, can napp-it replication take advantage of them and run multiple concurrent replication jobs on discrete network paths?

10.10.2.201->10.10.2.211
10.10.2.202->10.20.2.212

thanks,
jb
 
gave it the old college try but I'm just gonna reseed over the weekend. Does it look like I got close?
zfs rename -r r10/B1@daily-1365112602_2013.04.19.11.51.24 1366409769_repli_zfs_nappit1_nr_1

on sender side like
tank/data@1364671967_repli_zfs_backup_nr_1

on receiver side like
tank/backup/data@1364671967_repli_zfs_backup_nr_1

hostname of target server: backup
 
Hi Gea, if source and target have multiple network adapters, can napp-it replication take advantage of them and run multiple concurrent replication jobs on discrete network paths?

10.10.2.201->10.10.2.211
10.10.2.202->10.20.2.212

thanks,
jb

You can run several replications at the same time.
One or more NICs/networks does not matter, as long as the hostname and IP stay constant.
 
Solaris is not really optimized for USB sticks, but if you
- use two modern and fast (USB3) sticks
- disable atime (access time logging)
- ZFS mirror them (doubles read performance and reliability)

you can use them. My own "napp-it to Go" distribution is based on 16 GB sticks and OmniOS.
I would prefer a minimal distribution like OI server or OmniOS.

(OmniOS is currently the most up-to-date free distribution and is stable - optionally with support.)

I looked into OmniOS but OI desktop is closer to my current philosophy. I won't use it as a desktop computer per se but for many operations I'll be using a (remote) desktop, and am not too concerned so far with virtualization (well, I did install several windows programs thanks to wine). From what I understand I can change the OS later without trouble.

So it would be USB2 sticks not USB3 (or USB3 limited by USB2).

Also, now that I'm hands-on, I'm discovering lots of things. For example, I expected that the checksums at all levels would take some space, but I didn't know about the 1/64th of total space reserved by the filesystem. Does that value change with the size of a pool?

Next, I had filled the pool without realizing it, and in that state, it's not possible to rename files. Strange really.

I did a (manual) send/receive of a 3TB RAIDZ1 pool to a 3TB drive and that worked fine, although there is no progress indication, which is a little annoying for so long an operation (I did zpool status on another terminal to see what was going on).

Then, a stupid question: if I just copy the files, through the provided file explorer, from the 3TB drive to a new pool, will data consistency be guaranteed?

The data came from NTFS drives (I installed ntfs-3g) and I had CRC32 hashes of all the files, so once copied I checked them all, and am wondering if I should do it again when copying from ZFS to ZFS or if it's 100% safe. I might add that I have ECC memory.
 
You can run several replications at the same time.
One or more NICs/networks does not matter, as long as the hostname and IP stay constant.

I expected to be network limited based on my Bonnie scores for sequential reads and writes and thought I might be able to increase total throughput by using two source/target NIC pairs. Disk utilization during replication is around 40% in the source pool, 15% on the target. Processor utilization is less than 10%. But I only got around 51 MB/s on the initial replication using the management IPs. There are two other NICs on each box. Is there any sense in trying to utilize these? Right now I'm doing a LAN replication. Ultimately I'll be WAN constrained to about 20Mbps, so it may only be of academic interest.

Separate question: If I'm doing nightly replication, should I now discontinue my nightly auto-snap job?

thanks,
jb
 
Looks like the replication target only keeps the last 2 snapshots. Is there an option to keep more as desired, e.g. 7?
 
Looks like the replication target only keeps the last 2 snapshots. Is there an option to keep more as desired, e.g. 7?

If you create a replication job with the -I option, it transfers all intermediary snapshots and clones; otherwise you can create an independent autosnap job with the desired snap history.
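
For reference, a minimal sketch of the difference (dataset and snapshot names are made up):

Code:
# -i sends only the delta between the two named snapshots
zfs send -i tank/data@snap1 tank/data@snap2 | zfs receive backup/data
# -I also transfers every intermediary snapshot between snap1 and snap2
zfs send -I tank/data@snap1 tank/data@snap2 | zfs receive backup/data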
 
I looked into OmniOS but OI desktop is closer to my current philosophy. I won't use it as a desktop computer per se but for many operations I'll be using a (remote) desktop, and am not too concerned so far with virtualization (well, I did install several windows programs thanks to wine). From what I understand I can change the OS later without trouble.

So it would be USB2 sticks not USB3 (or USB3 limited by USB2).

Also, now that I'm hands-on, I'm discovering lots of things. For example, I expected that the checksums at all levels would take some space, but I didn't know about the 1/64th of total space reserved by the filesystem. Does that value change with the size of a pool?

Next, I had filled the pool without realizing it, and in that state, it's not possible to rename files. Strange really.

I did a (manual) send/receive of a 3TB RAIDZ1 pool to a 3TB drive and that worked fine, although there is no progress indication, which is a little annoying for so long an operation (I did zpool status on another terminal to see what was going on).

Then, a stupid question: if I just copy the files, through the provided file explorer, from the 3TB drive to a new pool, will data consistency be guaranteed?

The data came from NTFS drives (I installed ntfs-3g) and I had CRC32 hashes of all the files, so once copied I checked them all, and am wondering if I should do it again when copying from ZFS to ZFS or if it's 100% safe. I might add that I have ECC memory.

- You can upgrade to any OS that supports your ZFS version
- ZFS is copy on write; you need free space even to rename or to delete snaps or reservations
- Nothing is safe in this world besides the fact that everything has an end,
but regarding storage all other options are far behind
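
A sketch of how that usually plays out on a completely full pool (dataset and snapshot names are assumptions): free some space first, then renames and deletes work again.

Code:
zfs list -r -t snapshot -o name,used -s used tank   # find the largest snapshots
zfs destroy tank/data@old-snap                      # reclaim their space
zfs set reservation=none tank/reserve               # or release a reservation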
 
Hello from Spain, I am a new user who is testing Nexenta at home to gain experience.

I installed the latest beta 4.0, and I have a problem. My hardware is an HP MicroServer N40L with a modified BIOS to support hot-swap. This works correctly on Windows Server or Linux.

In Nexenta, when I hot-remove a HDD and insert it back, it is no longer recognized by the system.

I've tried "Rescan all HBAs and refresh device links" through the web GUI, but the disk is still not detected. The only option is to restart the server.

Thanks in advance, and excuse my bad English.
 
I tested with the napp-it to Go version (thanks _Gea) and the disk is not detected there either.

This is the only information displayed for the controller:

No disks found!
Disks on Interface
Interface Type/Online Busy Phys_Id Modell
sata1/0 connected unconfigured unknown Mod: ST500DM002-1BD142 FRev: KC45 SN

root@to-go-13a:~# cfgadm
Ap_Id Type Receptacle Occupant Condition
sata1/0 disk connected unconfigured unknown
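
For reference: when cfgadm reports a port as connected/unconfigured like above, the disk can usually be brought back online from the shell (port name taken from the output above, not verified on this box):

Code:
cfgadm -c configure sata1/0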
 
Hello,
I am using napp-it latest stable version on OmniOS stable.
I had problems installing TLS for email but I haven't looked into it yet; something else caught my eye: napp-it takes a continuous 0.6-1% of CPU (0.6% for the first few hours, then after a couple of days a steady 1%) with the process

/usr/bin/perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/socketserver.pl

I have a 1.6-3.1 GHz Xeon E3-1220 and 1% is enough to increase its power consumption because it kicks it out of idle too often. I mean, with a watt meter I can tell whether napp-it is running or not; there is about a 3-5 W difference. This is a home fileserver: most of the time it is idle, and therefore napp-it is the small addition that makes the processor get out of idle. On a normal machine the processor would already be out of idle and 1% does nothing.

I already disabled the monitor (what exactly is the monitor monitoring?).
Since napp-it, in my configuration, is now basically expected only to make some periodic snapshots, why 1% continuous? I already restarted it after disabling the monitor.
If the 1% issue cannot be solved, I will have to implement the snapshot thing in cron manually and disable napp-it, but I'd like to be able to get some stats and statuses via the browser.

Side note: I suggest implementing this script to get an idea whether a dedicated ZIL is actually useful for performance or not (too many people overrate it):
http://www.richardelling.com/Home/scripts-and-programs-1/zilstat
 
Hey guys, I was able to get OpenIndiana and napp-it installed and configured with RAID-Z2 with 8 × 2 TB drives, but I feel like my knowledge ends there. Are there any learning resources to really understand the system and configuration settings in napp-it/OpenIndiana? Thanks
 
I've solved it by running the following command:



How would this be solved in Nexenta without having to access bash?

This is RUDE, why are you hijacking this thread with Nexenta questions? Many of us are subscribed to this thread because we have a specific interest in the topic; we have no interest in Nexenta questions, or certainly not within this thread.

Please start your own. I myself have come across this exact issue but I would never answer within this thread.
 
This is RUDE, why are you hijacking this thread with Nexenta questions? Many of us are subscribed to this thread because we have a specific interest in the topic; we have no interest in Nexenta questions, or certainly not within this thread.

Please start your own. I myself have come across this exact issue but I would never answer within this thread.

Look at the topic, nexenta is included. lol
 
Besides questions about licensing, the Web-UI and some add-ons, most questions and solutions are common
between Illumian, NexentaStor, OmniOS and Solaris 11.

Indeed, NexentaStor 3 is OpenSolaris build 134 + some bugfixes/backports + Web-UI + some add-ons like HA.
The upcoming NexentaStor 4 is Illumos-based + Debian-like packaging, more or less the same as OI or Omni,
but with the Nexenta Web-UI + commercial add-ons.

The main difference to the free alternatives is the paid service option, a thing that is missing
in the CE edition, which is available for noncommercial home use as well.
 
The socketserver is an HTML5 websocket server that delivers realtime page updates and realtime monitoring, like the traffic-light status indicator that shows the current state.
If you need to get rid of this single percent of load, you can kill the process without problem.
(You can disable it completely in /etc/init.d/napp-it.)

btw.
zilstat is in napp-it under Menu System - Statistic - ZIL
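
A sketch of killing it by hand (process name as shown above):

Code:
pkill -f socketserver.pl
# to keep it off permanently, disable its start in /etc/init.d/napp-it as mentioned above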
 
_Gea,

I'm hoping you or anyone else can give me some insight into my problem. In a nutshell, my goal beyond anything else is to be able to saturate a 10Gb link. I'm trying to reach this goal by using SSDs for storage. Right now I'm just doing testing. As I stated in a previous post, I first tried the "all in one" approach but realized that the vNIC would be the limiting factor with regard to bandwidth and stability. I then installed OI directly onto the same hardware, created a quick mirror vdev of two 256GB Crucial M4s and started testing.

SAN
MBD-X9DRL-3F-O
64GB ECC Memory
2 X E5-2603
IBM M1015
SuperMicro JBOD 837E16-RJBOD1
Intel 10Gb 82598
Boot 256GB Crucial M4
2X 256GB Crucial M4 (mirror)

Test Machine – 2008 R2
MBD-X9SCL-O
8GB ECC Memory
E3-1220
Intel 10Gb 82598
60GB Vertex Boot

Network
HP 2910AL Switch

For this test machine I'm using a 4GB RAM disk for testing. With this setup I was getting about 350MB/s transfers over SMB in both directions, with NFS being even slower. Things were a little faster when transferring from the SAN to the test machine. I've tried removing the switch from the equation, changing the MTU and disabling sync. When I use the benchmark tools they report amazing numbers, none of which I can duplicate across the network. With these disks and that much RAM I would expect much higher numbers out of the box. Does anyone have any insight into what I might be doing wrong or what I can try or do?

Thanks in advance!
 
350 MB/s (3× Gigabit) is not that bad for SMB. I have not seen better values myself.
I would expect FTP, NFS or AFP to be faster (with sync disabled), as well as iSCSI with a larger blocksize.
Replication via buffered netcat should also be faster.

Tuning options are mainly on the IP side like
Jumbo frames (MTU 9000) or
intr_throttling=1;

both in the /kernel/drv/ixgbe.conf.

You may also read my tuning page http://napp-it.org/manuals/tuning.html or
http://blog.cyberexplorer.me/2013/03/improving-vm-to-vm-network-throughput.html
http://docs.oracle.com/cd/E19963-01/html/821-1450/giozw.html
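
A sketch of what those two settings might look like; exact parameter names can vary with the driver version, so treat this as a starting point only:

Code:
# /kernel/drv/ixgbe.conf
intr_throttling=1;

# jumbo frames can also be set per link with dladm:
dladm set-linkprop -p mtu=9000 ixgbe0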
 
Thanks for the reply. NFS using Windows was slower for me for whatever reason, and right now I do not have the ability to do 10Gb on my Mac. I was able to adjust the MTU to 9k but it gave me worse numbers than 1500. I'll try the intr_throttling=1; entry and report the results. Again thanks for the quick response!
 
In a nutshell, my goal beyond anything else is to be able to saturate a 10Gb link.
With a single reader? With many readers? Over CIFS? With what application on the receiving end? A gigabyte per second isn't something that most software is capable of handling.
SuperMicro JBOD 837E16-RJBOD1
It might be a good idea to try some local benchmarks; some configurations of SAS controller to SAS expander lead to using a single lane between them, limiting you to 600 MB/s.
With this setup I was getting about 350MB/s transfers over SMB in both directions, with NFS being even slower. Things were a little faster when transferring from the SAN to the test machine. I've tried removing the switch from the equation, changing the MTU and disabling sync. When I use the benchmark tools they report amazing numbers, none of which I can duplicate across the network.
What kind of benchmark, network or disk? Try both and report your numbers. Watch "vmstat 1" during this process. If you're getting a lot of interrupts, the interrupt throttling setting is a good start.
Thanks for the reply. NFS using Windows was slower for me for whatever reason, and right now I do not have the ability to do 10Gb on my Mac. I was able to adjust the MTU to 9k but it gave me worse numbers than 1500. I'll try the intr_throttling=1; entry and report the results. Again thanks for the quick response!
In my experience "jumbo frames" are a waste of time. You need to run them on a separate network where everything supports them, and performance when you actually get them working isn't any better than 1500 MTU.
 
With a single reader? With many readers? Over CIFS? With what application on the receiving end? A gigabyte per second isn't something that most software is capable of handling.

Great questions. Honestly the speed is intended for my VMWare cluster but at the same time I want to be able to hook up a 10Gb card to my Mac and be able to transfer files over AFP as fast as possible. Beyond that I don't really have a particular usage in mind. This is sort of a proof of concept for me as well.

It might be a good idea to try some local benchmarks; some configurations of SAS controller to SAS expander lead to using a single lane between them, limiting you to 600 MB/s.

That's another great point. I sort of just hooked up two SFF-8088 cables from my JBOD to my HBA and assumed everything would work. With two lanes per cable I should be able to hit a theoretical peak of 2400 MB/s.

Am I able to do that with the built-in benchmarks that are installed with Napp-It? By chance, is there any way I can turn off RAM caching so I don't get inflated numbers?

What kind of benchmark, network or disk? Try both and report your numbers. Watch "vmstat 1" during this process. If you're getting a lot of interrupts, the interrupt throttling setting is a good start.

The benchmark tool I used was iozone. I'll run it again this weekend and watch vmstat 1.

In my experience "jumbo frames" are a waste of time. You need to run them on a separate network where everything supports them, and performance when you actually get them working isn't any better than 1500 MTU.

Yeah I've experienced pretty much the same thing. Thanks so much for your reply unhappy_mage.
 
Am I able to do that with the built-in benchmarks that are installed with Napp-It? By chance, is there any way I can turn off RAM caching so I don't get inflated numbers?
Try "bonnie++", which is in the menu structure somewhere. It automatically does a large enough benchmark to reduce the impact of RAM caching.
 
Or you can run a test with iozone, but the sample file must be at least 2× RAM size...
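
A hypothetical invocation, assuming 32 GB RAM and a pool mounted at /tank (adjust -s to at least twice your RAM):

Code:
iozone -i 0 -i 1 -r 128k -s 64g -f /tank/iozone.tmp   # sequential write + read tests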

MAtej
 
I seem to have run into a strange issue:

I ordered 6 new drives to add a 3rd vdev, and when the disks are in, napp-it seems to hang on the following:

errors: No known data errors
exe (get-desk.pl 28): parted -lm

I suspect a drive is DOA, but the drive that sits at the front of my chassis with a solid activity light seems to change around, across separate backplanes, SAS cabling, and even RAID cards.

Any idea how to increase the timeout of Napp-it so it can complete? Or else, how to positively identify the failed disk and remove it so I can RMA it?
 
I seem to have run into a strange issue:

I ordered 6 new drives to add a 3rd vdev, and when the disks are in, napp-it seems to hang on the following:

errors: No known data errors
exe (get-desk.pl 28): parted -lm

I suspect a drive is DOA, but the drive that sits at the front of my chassis with a solid activity light seems to change around, across separate backplanes, SAS cabling, and even RAID cards.

Any idea how to increase the timeout of Napp-it so it can complete? Or else, how to positively identify the failed disk and remove it so I can RMA it?

On new disks without a valid partition, the parted command that reads partition info sometimes hangs.
In such a case, you must initialize your disks (e.g. via napp-it, using the rollover menu Disks - Initialize).
 
You sir are my hero. That was it! Odd that this is the first time I've had to do that, I think the others worked fine. I could be mistaken however.

BTW, from my research there's no way to convert from ashift 9 to ashift 12 on a pool with data on it, correct? The best I can do is move the data to a new pool, then re-initialize the drives in the old pool as 4k (modifying sd.conf) and add them as new vdevs to the new pool?

All my drives are 3TB 512e drives, but the first 2 vdevs are using 512-byte sectors. It looks like I have a failing disk in one of my RAIDZ2s, and I'm running into an issue with replacing the drive, saying the sectors are different. I get a "cannot replace <olddrive> with <newdrive>: devices have different sector alignment". There's no way to force a single drive to use 512-byte sectors is there?
 
ashift is a vdev property and yes, you are right -
there is no way to modify it after creating a vdev besides destroying and recreating the pool.
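
For reference, a quick way to check the ashift of an existing pool (pool name is an example):

Code:
zdb -C tank | grep ashift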
 
@_Gea

I'm almost in the same boat as MistrWebmastr, I'm experiencing my first drive fail since running my All-In-One ESXi setup with Napp-IT.

(screenshots: zn2zaCc.png and sdntq6V.png)


Code:
  pool: vol01
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
  scan: scrub repaired 0 in 19h25m with 0 errors on Tue Apr 23 22:40:11 2013
config:

        NAME                       STATE     READ WRITE CKSUM
        vol01                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c6t50024E9002A47376d0  ONLINE       0     0     0
            c6t50024E9002A47B3Fd0  ONLINE       0     0     0
            c6t50024E9002A48158d0  ONLINE       0     0     0

errors: No known data errors

The disk is part of a 3-way mirror (vol01); the disks are all 1 TB and were created with ashift=9.

I have 3 questions:

1) Can I replace the disk that is dying with a 2 TB Seagate ST2000DM001 (without needing to fix the ashift=9)?

2) What would be the correct procedure? I've read the manual, which states to shut down the whole ESXi machine, which is fine, but before I shut down, do I first tell napp-it that I'm going to replace or remove the drive?
And then, should I use a different cable to connect the drive to my M1015?

3) I'm still running 0.8k2 (was planning to upgrade this weekend) but I'm not sure if I should first upgrade to 0.9 before attempting to replace the drive?

EDITED (added question 3 since it might be relevant)
 
Since it's a 3-way mirror, I'd say offline the drive (command line: zpool offline <pool> <device ID>), pull the drive, then go to Disks > Replace. So long as it doesn't report as a 4k drive you should be gold. If it gives you the error I got, then you need to find another older-model drive that reports 512-byte sectors, or else use another set of drives to rebuild the pool with 4k sectors (ashift 12). You can always put a 512-byte drive in a 4k vdev, but you cannot place a 4k drive in a 512-byte vdev.

EDIT: DOH! You can use Disks > Hotswap > Set-Offline to prep the disk for removal.
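
A rough sketch of the command-line equivalent, using the device IDs from the zpool status above purely as placeholders:

Code:
zpool offline vol01 c6t50024E9002A47376d0                   # take the failing mirror member offline
# physically swap the disk, then:
zpool replace vol01 c6t50024E9002A47376d0 <new_device_id>   # resilver onto the new disk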
 
Isn't the max read speed of an M4 disk around 350 MB/sec?

There is a good amount of tuning you need to do to get good 10-gigabit speeds. My system would fall over itself around 300 MB/sec until I tuned it; then I was able to max out 20Gbit without any issues.

Jumbo frames were mainly to solve CPU issues. With RDMA and large offload on NICs, this isn't a huge deal anymore, as long as you're using new enough NICs and drivers. That said, I do run jumbo frames on a lot of stuff and have never had issues with them; just set the MTU to 9000 on everything and make sure the router is doing PMTU correctly. Though most of the time I leave this network unrouted and completely separate.
 