Is it worth setting up 802.3ad for a DiskStation 1812 and an ESXi server?

Modboxx

Hardware
DiskStation 1812, 8x2TB WD Greens
T5500, 2x Xeon X5650, 48GB RAM

I'm debating purchasing a cheap managed switch so I can configure 802.3ad and jumbo frames to increase performance. I'm not looking for raw speed, just looking to run multiple VMs off the DiskStation (AD, a file server, SQL, Exchange client access/hub, an Exchange mailbox server, RDS in probably a two-server setup, and possibly OWA and SharePoint).

The ESXi server only has one built-in NIC. Do I just need to purchase another NIC, or do I actually have to find NICs that specifically support 802.3ad? I'm having trouble finding this information.

The main question is: will any of this actually do anything for performance, or am I just wasting my time?
 
Is this just for a lab? I wouldn't worry about it unless for some reason you were doing a lot of file transfers. I currently run only one NIC from my DiskStation to my ESXi box, and unless I'm doing a large file transfer I don't really run into bottlenecks.
 
It will help when running multiple VMs. I recently added a Cisco SG200 switch for my Synology NAS and everything is snappier. They're pretty cheap to buy anyway.
 
Under most loads, especially in a lab that size, you'll rarely go above 1Gbps of throughput on a NAS like that. The only time I really use the LAG going to my Synology is when doing svMotions or migrations. Also, to really use more than one NIC's worth of throughput you'd need to run iSCSI (with multipathing) on that Synology. NFS is faster on it, so I would just do NFS and a single connection.
 
I'd look at getting a couple of SSDs instead. They will make the biggest difference. I easily have 8 VMs on a single SSD and they are all very quick.

Jumbo frames really wouldn't make THAT big of a difference with your usage, TBH. With modern CPUs/NICs the need for them really isn't there anymore.

LAG is more for redundancy than speed.
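To put a rough number on the jumbo frame point, here's a back-of-the-envelope sketch (just my assumptions: plain TCP/IPv4 over gigabit, with the usual header and framing sizes):

# Rough ceiling on gigabit goodput at MTU 1500 vs 9000 (TCP/IPv4 assumed).
# Per frame: 40 bytes of TCP/IP headers, plus 38 bytes of Ethernet overhead
# (14 header + 4 FCS + 8 preamble + 12 inter-frame gap).
LINK_BYTES_PER_SEC = 125_000_000  # 1 Gbps
ETH_OVERHEAD = 38
TCPIP_HEADERS = 40

def goodput_mb_per_s(mtu):
    payload = mtu - TCPIP_HEADERS
    wire = mtu + ETH_OVERHEAD
    return LINK_BYTES_PER_SEC * payload / wire / 1e6

std = goodput_mb_per_s(1500)    # ~118.7 MB/s
jumbo = goodput_mb_per_s(9000)  # ~123.9 MB/s
print(f"MTU 1500: {std:.1f} MB/s, MTU 9000: {jumbo:.1f} MB/s "
      f"(+{100 * (jumbo / std - 1):.1f}%)")

Best case you move the ceiling from roughly 119 MB/s to 124 MB/s, around 4%. The bigger win used to be lower CPU/interrupt load, which modern NICs mostly take care of anyway.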
 
Jumbo frames are very hit and miss depending on the network hardware and servers. Always test and benchmark before deploying them anywhere, production or test lab, to see if there's actually a benefit.
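If you want a quick way to A/B test it, iperf is the usual tool; a bare-bones Python stand-in (sketch only, the port number is arbitrary) would be something like:

# Minimal throughput test: run "server" on one box, "client <ip>" on the other.
import socket, sys, time

PORT = 5001           # arbitrary test port
CHUNK = 1 << 16       # 64 KiB sends
DURATION = 10         # seconds of sending

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.time()
        while (data := conn.recv(CHUNK)):
            total += len(data)
        elapsed = time.time() - start
        print(f"received {total / 1e6:.0f} MB in {elapsed:.1f}s "
              f"= {total * 8 / elapsed / 1e6:.0f} Mbit/s")

def client(host):
    payload = b"x" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + DURATION
        while time.time() < end:
            conn.sendall(payload)

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])

Run it once at MTU 1500 and once at 9000 (set on both hosts and the switch) and compare the Mbit/s numbers before committing to jumbo frames everywhere.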
 
More or less, you're wasting your time. If you want to increase bandwidth, you should create an additional datastore and connect to it using a separate IP/NIC on your Synology. Obviously use a separate NIC within VMware as well.

Your issue is going to be IOPS, not bandwidth, with WD Green drives. The best way to gain more IOPS is with SSDs, which is why others are suggesting them.
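If you want to see the IOPS point for yourself, here's a crude random-read sketch you can run from a Linux VM sitting on that datastore (the file path is a placeholder, and it needs to point at a file much bigger than any cache in the path for the numbers to mean anything):

# Crude 4 KiB random-read test against an existing large file.
import os, random, time

PATH = "/mnt/testfile.bin"   # placeholder: a big file on the NAS-backed disk
BLOCK = 4096
DURATION = 10

fd = os.open(PATH, os.O_RDONLY)
blocks = os.fstat(fd).st_size // BLOCK

ops, end = 0, time.time() + DURATION
while time.time() < end:
    os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
    ops += 1
os.close(fd)
print(f"~{ops / DURATION:.0f} random-read IOPS")

Spinning drives give you a small number here and SSDs a much bigger one, and that gap is what your VMs will actually feel.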

http://www.vmware.com/files/pdf/VMware_NFS_BestPractices_WP_EN.pdf

Caveat: NIC teaming provides failover but not load-balanced performance (in the common case of a single NAS datastore)

It is also important to understand that there is only one active pipe for the connection between the ESX server and a single storage target (LUN or mount point). This means that although there may be alternate connections available for failover, the bandwidth for a single datastore and the underlying storage is limited to what a single connection can provide. To leverage more available bandwidth on an ESX server that has multiple connections from the server to the storage targets, one would need to configure multiple datastores, with each datastore using separate connections between the server and the storage. This is where one often runs into the distinction between load balancing and load sharing. The configuration of traffic spread across two or more datastores, configured on separate connections between the ESX server and the storage array, is load sharing.
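To make the "multiple datastores on separate connections" idea concrete, here's a rough pyVmomi sketch that mounts two NFS exports using two different IPs on the Synology (hostnames, IPs, paths, and credentials are all placeholders, and it assumes the second NIC and both exports already exist):

# Sketch: mount two NFS datastores over two separate Synology IPs so traffic
# can be spread (load sharing) across both links. Assumes pyVmomi is installed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.lab.local", user="root", pwd="password", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ds_system = host.configManager.datastoreSystem

# One datastore per Synology interface/IP (placeholder addresses and shares).
exports = [
    ("192.168.1.10", "/volume1/ds1", "nas-ds1"),
    ("192.168.1.11", "/volume1/ds2", "nas-ds2"),
]
for remote_ip, remote_path, name in exports:
    spec = vim.host.NasVolume.Specification(
        remoteHost=remote_ip, remotePath=remote_path,
        localPath=name, accessMode="readWrite")
    ds_system.CreateNasDatastore(spec)

Disconnect(si)

Spreading VMs across nas-ds1 and nas-ds2 then gets you the load sharing the doc is talking about, not true load balancing.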

just my 2 cents
 

How many disks are in your NFS group? I'm in a pickle right now: I have five drives used directly for MKVs, 6.5TB worth, and another three drives dedicated to crap. I was going to dump the three drives and make a 1TB LUN for a datastore, but if NFS is better I might just go that route.

I ordered this NIC for the ESXi box and a Cisco SG200-08.

Edit: BTW, have you tried DSM 5? I hear that iSCSI performance is much better.
 

I'd clear off those three drives, add them to the five-drive volume you have to increase IOPS, and just connect via NFS. You want IOPS, not throughput.

I have 4 Synology boxes. The one I use for my lab is an 1813+ that I just moved from 8x1TB 7200RPM drives to 8x200GB Kingston E100 SSDs. Yes, I'm using the DSM 5 beta. Yes, iSCSI is better, but it's still not always as good as NFS. My testing has shown anywhere from iSCSI matching NFS to iSCSI being half the performance of NFS, and it mainly starts to lag behind as I ramp up the tests: some tests at 4K IOPS were 1:1, while tests ramped up to 17K IOPS had iSCSI at 50% of the NFS performance.
 

That's a nice setup right there :eek::D What RAID are you running it in? Does TRIM work?

In your opinion, what would be the better option: grabbing a couple of SSDs for cache, or just putting them in RAID 0 and dropping the NFS share on them?
 

If you can, make a volume out of SSDs. That will work better than read caching. Caching is okay, but you don't always get a cache hit, and it won't help on writes (unless you have a big XS model and the new DSM 5).

Mine is RAID 5. TRIM works, but it depends on the drives.
 

What SSDs would you recommend? I can't afford the enterprise stuff, but I could swing two or three 120GB Samsung EVOs.
 
Where this topic is heading is what led me to sell my Synology and start using ZFS. Synology is great, but for more enterprise storage features ZFS is superior. I run a pool of four 256GB M4 SSDs in a RAID10-like setup with dedupe, and performance is awesome.
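For anyone curious, a RAID10-like ZFS pool is just a stripe of mirrored pairs; here's a quick sketch of setting one up (device names are placeholders, and it just shells out to the standard zpool/zfs tools):

# Sketch: create a striped-mirror ("RAID10-like") pool from four SSDs and
# enable dedup. Device names are placeholders; run as root on the ZFS box.
import subprocess

DEVS = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # placeholder devices

subprocess.run(
    ["zpool", "create", "tank",
     "mirror", DEVS[0], DEVS[1],     # first mirrored pair
     "mirror", DEVS[2], DEVS[3]],    # second pair, striped with the first
    check=True)
subprocess.run(["zfs", "set", "dedup=on", "tank"], check=True)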
 