Synology and VMware tricks

Any of you guys using Synologies have tips or tricks for running one as a VMware datastore? I just got a DS1812+ yesterday.

I've got mine set up right now as an NFS datastore with two SSDs as cache and five drives in RAID 5 as a single volume. I was expecting to be able to set quotas on my NFS shares so I could create multiple NFS exports, only to find that isn't an option in the GUI; it only supports user quotas.

I was also disappointed that I can't create virtual interfaces on the physical NICs to trunk VLANs. My hope was to use it to serve up both NFS and iSCSI now that the DSM 5.0 Beta has improved iSCSI performance. But since I can only assign a single IP to each interface, and my iSCSI and NFS traffic is on completely separate VLANs and subnets, that isn't an option.

Otherwise the box is a really snappy little workhorse as an NFS datastore; it just needs some management enhancements.
 
What's the benefit of putting iSCSI on a different VLAN? I just run them both over one.
 
Just for the sake of traffic segregation. Not really necessary in a home lab.

My goal with the Synology was to assign one NFS IP per physical port, both in the same subnet, and connect my hosts to the first NFS share on the first IP and the second NFS share on the second. Then I'd add an iSCSI IP in iSCSI subnet A to one interface and an iSCSI IP in iSCSI subnet B to the other, giving me MPIO for iSCSI. Then another virtual interface on each physical port for management connectivity.

I know the underlying Linux is capable of doing this, but the GUI doesn't support it.
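For what it's worth, the underlying Linux can do this with stock iproute2 commands. A sketch, assuming eth0 is the first NIC; the VLAN IDs and addresses below are placeholders, and since DSM doesn't support this it may not survive a reboot or update:

```shell
# Hypothetical VLAN IDs and addresses -- adjust for your network.
# Tag VLAN 20 (iSCSI subnet A) onto eth0.
ip link add link eth0 name eth0.20 type vlan id 20
ip addr add 10.0.20.5/24 dev eth0.20
ip link set eth0.20 up

# Tag VLAN 30 (management) onto the same physical port.
ip link add link eth0 name eth0.30 type vlan id 30
ip addr add 10.0.30.5/24 dev eth0.30
ip link set eth0.30 up
```

Again, this is exactly the kind of unsupported change that a major DSM update could wipe out.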
 
Don't over-engineer your home lab. Trust me. :) It becomes a pain later.

Yes, you could do it in the underlying Linux system, but there's no telling what may happen on the next major DSM update.
 
> Don't over engineer your home lab. Trust me. :) Later that becomes a pain.
>
> Yes, you could do it in the underlying Linux system but no telling what may happen on the next major DSM update.

Then I'll just send them a feature request and cross my fingers they include it in a future update.

In the meantime, I can just put NFS traffic on the same subnet as management and use one physical port for NFS, one for iSCSI and call it good.
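For that simplified layout, mounting the Synology's NFS export on an ESXi host is a one-liner with esxcli. The IP, share path, and datastore name below are made-up examples:

```shell
# Mount the Synology NFS export as a datastore
# (192.168.1.50, /volume1/vmstore, and syn-nfs1 are placeholders)
esxcli storage nfs add --host=192.168.1.50 --share=/volume1/vmstore --volume-name=syn-nfs1

# Verify the mount
esxcli storage nfs list
```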
 
> What's the benefit of putting iSCSI on a different VLAN? I just run them both over one.

Wow, please tell me you don't do this in production networks. iSCSI, and really ANY storage traffic, should be on its own network. Some network engineers are beyond paranoid (I call them stupid morons) and will literally pitch the idea of separate switches for iSCSI/storage traffic. Storage and vMotion traffic should be contained in their own non-routable VLANs, not only for security but because of the nature of the traffic. I've seen plenty of production networks crash in horrible fashion because the admins didn't have an ounce of network education and were running iSCSI and vMotion on the same VLAN as production traffic.
 
> Wow, please tell me you don't do this in production networks. iSCSI, and well really ANY storage traffic should be on its own network. Some network engineers are beyond paranoid (I call them stupid morons) and will literally pitch the idea of separate switches for iSCSI/storage traffic. Storage and vMotion traffic should be contained in their own unroutable vlans. Not only for security but due to the nature of the traffic. I've seen plenty of production networks crash in horrible fashion due to the admins not having an ounce of network education so they were running iscsi and vmotion on the same vlan as production traffic.

You can rest assured that NetJunkie probably doesn't do this in a production environment.

We're talking about a home lab here :)
 
> Wow, please tell me you don't do this in production networks. iSCSI, and well really ANY storage traffic should be on its own network. Some network engineers are beyond paranoid (I call them stupid morons) and will literally pitch the idea of separate switches for iSCSI/storage traffic. Storage and vMotion traffic should be contained in their own unroutable vlans. Not only for security but due to the nature of the traffic. I've seen plenty of production networks crash in horrible fashion due to the admins not having an ounce of network education so they were running iscsi and vmotion on the same vlan as production traffic.

Hi! I don't believe we've met... ;)
 
> You can rest assured that NetJunkie probably doesn't do this in a production environment.
>
> We're talking about a home lab here :)

You never know... I am C-level now....

Right now in my home lab I have two NICs on each host dedicated to iSCSI, and NFS just goes over the first vmkernel port. On my 1813+ I have all four NICs in an LACP LAG; the iSCSI target and NFS use the same IP.
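If anyone wants the two-dedicated-NIC setup, the ESXi side is just binding both iSCSI vmkernel ports to the software iSCSI adapter and setting round-robin pathing. A sketch, where vmk2/vmk3, vmhba33, and the device ID are all placeholders for your own values:

```shell
# Find your actual adapter and vmkernel names first:
#   esxcli iscsi adapter list
#   esxcli network ip interface list

# Bind both dedicated iSCSI vmkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --nic=vmk2 --adapter=vmhba33
esxcli iscsi networkportal add --nic=vmk3 --adapter=vmhba33

# Round-robin pathing on the LUN for MPIO (device ID is a placeholder)
esxcli storage nmp device set --device=naa.6001405xxxxxxxxx --psp=VMW_PSP_RR
```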
 