Private school on a budget: What model NAS works for this usage?

**Tried to break up the paragraph so I don't get too many TLDRs.**

I am currently running three ESXi hosts with nine VMs on iSCSI storage served by FreeNAS for our private K-8 school. Having no support and being completely dependent on this open-source software is keeping me up at night.

I am trying to get an idea of which model fits our environment. I have browsed through the home lab thread hoping that seeing everyone's configurations would give me a better idea, but I can't seem to find the numbers I am looking for.

I am leaning toward Synology as we have a DS1512 already being used for file backup and PC image storage, and we love it!

I called Synology and spoke with someone, but they ended up recommending a $5,000 (without disks) unit, which I feel might be overkill for what we do.

We run 9 VMs in total:
- 2 domain controllers
- file server
- app server, for things like network monitoring and a helpdesk (Spiceworks)
- print server
- WSUS server
- library server (SQL database for checking out books, not heavily used)
- computer class lab server (never been used yet)
- test VM for whatever; this gets changed out a lot for Linux/Windows testing, never heavily used

Our Active Directory consists of 100 staff and 450 students. Our file server contains 3-4TB of data.

I am considering the 1815+, and I am hoping to run NFS with SSD caching (and add some RAM to it) for a total of 12TB or so, which would let us double our current usage. I am just not sure if the 1815+ can handle this workload...

Should the 1815+ work for us, or do I really need to be looking at a $4,000-$5,000 model?

P.S. Any other input or recommendations you have for our situation are greatly welcomed and appreciated!
 
So unless I glossed over it... What is the real issue here? Are you having slowness in the VMs? Low IOPS? etc.

I just finished building a C2100 with 12x 3TB WD drives and a 10Gb Brocade 1020 card, directly connected to two C1100 ESXi hosts. Everything gets backed up to a Synology DS1513+.
 
He's worried about the lack of support on the FreeNAS box (for good reason).

Any of the Synology ones would do you fine like you specced. :)
 
Why are you using SAN storage in the first place?

You didn't mention a vCenter server (unless it's physical), so I'm guessing your three hosts are standalone and you aren't really making use of any shared storage features (HA, vMotion, etc.).

If you're not doing HA and SAN support is keeping you up at night, it doesn't get simpler than local storage/DAS.

Having a SAN allows you to easily power on a VM from a failed ESXi host on another host, but it also leaves you fully down if your storage array is what fails.
 
He's worried about the lack of support on the FreeNAS box (for good reason).

Any of the Synology ones would do you fine like you specced. :)

Good point, I totally missed that. However, I am not sure how having "support" from Synology is going to make things better.
 
I'm no great fan of FreeNAS, but having said that, I have been using it as shared storage for a couple of years now with no problems.

If you choose to buy a commercial solution, it's highly likely you will get much less hardware for your $$$.

If I were to move from FreeNAS, it would be to an illumos-based system like napp-it, but IMHO, now that I have used ZFS, anything else, commercial or otherwise, is a step backward.
 
Good point, I totally missed that. However, I am not sure how having "support" from Synology is going to make things better.

If the OP gets two Synology units to work around that single point of failure, perhaps? But I imagine a second FreeNAS server for replication would also solve that problem.
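For what it's worth, scheduled replication on the FreeNAS side is really just a snapshot plus a send/receive over SSH, which is what the GUI wraps anyway. A minimal sketch of that job in Python (the pool, dataset, and hostnames here are made up for illustration; a real job would track the previous snapshot and send incrementally):

```python
import subprocess
from datetime import datetime

# Hypothetical names -- substitute your own pool/dataset and second box.
DATASET = "tank/vmstore"
TARGET = "root@freenas2.example.local"
TARGET_DATASET = "backup/vmstore"

def run(cmd):
    # Run the shell command and raise if it exits nonzero.
    subprocess.run(cmd, shell=True, check=True)

# 1. Take a timestamped snapshot of the dataset.
snap = f"{DATASET}@repl-{datetime.now():%Y%m%d-%H%M%S}"
run(f"zfs snapshot {snap}")

# 2. Ship it to the second box. For anything past the first run you
#    would use an incremental send (zfs send -i <previous> <new>).
run(f"zfs send {snap} | ssh {TARGET} zfs receive -F {TARGET_DATASET}")
```

Cron that every hour and the second box is never more than an hour behind.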
 
Good point, I totally missed that. However, I am not sure how having "support" from Synology is going to make things better.

They're on the HCL - you'll get help from VMware (if this is a paid environment) and from Synology as well.
 
Thanks so much for all of the input!

Yes, the biggest concern is lack of support, and knowing that if I have a problem I can call either VMware or Synology is a huge bonus for me, especially considering I do not plan on being here forever, and I doubt anyone they hire going forward will have much experience on a VMware platform.

I just need to get this basic storage in place, and then hopefully next year I will be able to work on replication. (We have a location across the street and I am hoping to replicate there, but that is for another day.)

I have had two instances where I had problems and needed support, and my only option was to go to the FreeNAS forums and look for help. While that would be fine for my own personal home storage, it cannot be my only means of support in a production environment.

As far as the Synology NAS models go... will the 1815+ be enough for what we do? Is it overkill? Not enough?

Hopefully, if I get some confirmation from you all, I can get this ordered and perhaps move all of my VMs over during winter break. I just need to know if an 1815+ or equivalent model can handle this type of workload.

What model Synology do you have and what type of load do you put on it? Thanks again everyone for your help!
 
If you can get budget for the 1815+, that would be plenty of hardware (I'd max out the RAM, personally), and it should easily give you the storage space you're looking for. The only other one I could think of, and would recommend against, would be the 1515+. You lose three drive bays with it, so hitting the space you want would most likely require the expansion unit right off the bat.

Getting a 2nd Synology in the 2nd building and setting up HA or a DR target is pretty dang simple. Any time I've had to contact Synology (very few times, mind you) it's been mostly painless.

Overall, from my experience, your use case is pretty tiny as it's been presented. If you have vCOps, it'd be worth at least looking in there to see what kind of use profile you have - I'm betting it's pretty low though :).

How are you currently presenting the storage (iSCSI? NFS?), and how are you planning on presenting the new storage?
 
I am currently presenting it via iSCSI, but I plan on switching to NFS. NFS just seems simpler and easier to manage and troubleshoot. In my testing before, iSCSI was 15% faster on average with FreeNAS, but I really don't think we will notice a difference like that in production.

The only servers that really get hit during the day are the print server and file server, plus the domain controller for authentication, which is basically nothing.
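(Side note for anyone doing the same cutover: once the NFS export exists on the NAS, attaching it to a host is a single call. A rough pyVmomi sketch - every name below is a placeholder, and it assumes connecting straight to the host:)

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical names throughout -- substitute your host, NAS, and export.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi1.example.local", user="root",
                  pwd="secret", sslContext=ctx)

# Connected directly to a host: first datacenter, first compute
# resource, first (only) host.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

# Mount the NAS export as an NFS datastore on this host.
spec = vim.host.NasVolume.Specification(
    remoteHost="synology.example.local",  # the NAS
    remotePath="/volume1/vmstore",        # the NFS export on it
    localPath="nfs-vmstore",              # datastore name the host will show
    accessMode="readWrite",
)
host.configManager.datastoreSystem.CreateNasDatastore(spec)
Disconnect(si)
```

(On the ESXi shell, the equivalent is a single `esxcli storage nfs add`.)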
 
I would also move to NFS for simplicity, but only with a solution that supports remote replication (with open files), snapshots (with open files), and checksums for data security, so you have a proven backup from which you can import and start (or move/clone) VMs.

You can have these features only with ZFS, and ZFS is far better than the ext4 used in Linux boxes like Synology. Two FreeNAS boxes, or a faster Solaris-based NFS solution with OmniOS (free) or NexentaStor (commercial), is technically the best and fastest way.

I would look for a good SuperMicro store (or, as a second-best option, use a standard server from Dell or HP with hardware support), use two of them, and prepare a second cloned boot disk. Nothing is easier than booting from that disk and importing a ZFS pool; NFS sharing is immediately active (it's a ZFS property on Solaris).

I would add an SSD pool (2 x 512GB+) for VMs and a regular RAID-Z2 pool for file serving and backup. The web UI on FreeNAS, NexentaStor, or my napp-it should be usable by anyone who is able to manage your VMs, as you only need to set things up in a default way and manage it just like any NAS.

Even one or two ultra-cheap HP MicroServers, each with a RAID-Z2, are an option, especially for backups at a different physical location. Both BSD and Solaris work fine on a MicroServer.
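As an illustration of how little the cloned-boot-disk recovery involves (the pool and dataset names here are invented), the whole step is two commands, sketched below; you could just as well type them by hand:

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

# Hypothetical pool/dataset -- boot from the cloned disk, then:
sh("zpool import tank")                  # pick up the data pool again
sh("zfs set sharenfs=on tank/vmstore")   # the NFS export is live immediately
```

ESXi should then remount the datastore from the same IP and share path, and you can register and start the VMs.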
 
So if I don't go with Synology, is there another hardware solution WITH SUPPORT and on the VMware HCL that you can recommend?

I am running FreeNAS on a Dell C6100 blade, using ZFS in RAID-Z2. It works, but again, having a problem and not having support is the big obstacle I'm trying to overcome.
 
You can have these features only with ZFS, and ZFS is far better than the ext4 used in Linux boxes like Synology. Two FreeNAS boxes, or a faster Solaris-based NFS solution with OmniOS (free) or NexentaStor (commercial), is technically the best and fastest way.

Your first part is only true of free/open-source systems. There's nothing keeping a commercial company from designing a new filesystem and plugging it into a Linux kernel. This also doesn't solve his support problem.
 
So if I don't go with Synology, is there another hardware solution WITH SUPPORT and on the VMware HCL that you can recommend?

I am running FreeNAS on a Dell C6100 blade, using ZFS in RAID-Z2. It works, but again, having a problem and not having support is the big obstacle I'm trying to overcome.

Nexenta is on there, but only in very specific configurations and support levels, and it has to be the commercial product for full cooperation (obviously).
 
Your first part is only true of free/open-source systems. There's nothing keeping a commercial company from designing a new filesystem and plugging it into a Linux kernel. This also doesn't solve his support problem.

You can use NetApp. They have a feature set similar to ZFS and full support on storage hardware and software.

On the other side, who in the price-sensitive edu sector has full support on Windows, Adobe, Office, ESXi, or any other standard software? You usually have hardware support: if a server (HP, Dell, or Synology) fails, you send it in and get a new one. The rest is mostly yours (or mine, as I work in the edu sector as well). A standard storage installation of FreeNAS, OmniOS, or Nexenta on a hardware base like those suggested here at hardforum and elsewhere gives you no more problems than any other server installation. Problems are often hardware problems; a service that mostly replaces a server is fine in that case, but it does not help if you do not know the basics of your own setup.

I would look for someone in your area that you can pay and ask. Do not really count on any manufacturer support below the real enterprise level to help you with more than very simple questions that you should be able to answer yourself.

And last but not least, buy two systems that are redundant: two ESXi boxes and two NAS boxes, or two All-In-Ones that combine ESXi with a shared NFS NAS.
 
NetApp is running a BSD kernel, but that's one of them for certain - there are others as well.

I deal with edu all the time, and most of the medium-and-up edu markets or groups are buying commercial storage as well as servers, as most vendors will discount significantly in that market (and they too need software support beyond just the hardware, especially the larger ones - universities - with actual production environments that tie revenue to uptime and reliability). Smaller ones, sure - but that's where Synology and the like come into play if you want that extra guarantee.
 
I know; the mainstream is either NetApp (really expensive, even with a massive edu discount) or something like Synology. But between these extremes - expensive, secure, high-performance, expandable to petabytes with a 12-hour SLA on one side; cheap, easy, slow, with reduced expandability and lower data security on the other - there is a third way:

- use open source
- demand real high-end features like those of ZFS
- use standard server-quality hardware (Dell, HP, or SuperMicro)
- use ready-to-use web-based standard storage appliances like FreeNAS, napp-it, or NexentaStor

This gives you basically near-NetApp features and hardware/software quality for a few thousand Euro/$, with either high performance or high capacity up to petabytes, at a fraction of the price of NetApp and Co.

Of course, you need to know about your resources and needs, but you should know those anyway.
 
And have no software support, which is a deal-killer for many places. That's the problem with open source in the storage market - and why places like Red Hat and SUSE sell software support licenses as well (it's not that CentOS or openSUSE is any different or worse, but when you need a specialized kernel patch, you have to write it yourself or provide a REALLY compelling reason to the OSS community - the RH/SLES folks will write it if you're paying for support and it's a real problem).

And don't forget Tintri, Nimble, and a few others out there that can easily undercut NetApp pricing-wise and offer better performance with the same features too.
 
Tintri, Nimble, etc. are in the upper price level, like NetApp.
Lower-priced with an open-source base are Nexenta, iXsystems, and others; without an SLA, even my napp-it on OmniOS.

But the main point is:
for many use cases nowadays, what is the difference between a standard storage server and a Windows AD, a pfSense firewall, an Apache webserver, MySQL, or TYPO3 on standard hardware? Nothing.

It's just run-of-the-mill Intel server hardware with an LSI HBA and an Intel NIC.
Many "high price" vendors use SuperMicro, as I do as well.

For sure, you need someone who can tell you which hardware is OK and which is not, but you do not need that "certified disk" at five times the price. And you need a qualified local seller - always.

The reason: 100TB of high-quality storage, or 10TB of high-performance SSD storage, from 10-15k Euro/$ instead of 100k.
 
Tintri, Nimble, etc. are in the upper price level, like NetApp.
Lower-priced with an open-source base are Nexenta, iXsystems, and others; without an SLA, even my napp-it on OmniOS.
Um, no. Not even close to the same price point if you're comparing like specs to like specs. Can you get a NetApp for the same price as a C200/T540/etc.? Sure - but they're massively different in terms of capability. Compare capability at a given price point, and the differences become clear. If you're only shopping on price for hardware/software/storage, you're going to have much bigger problems.

SLA matters. That's my key argument. And so does HCL support for whatever vendor you're working with.
The reason: 100TB of high-quality storage, or 10TB of high-performance SSD storage, from 10-15k Euro/$ instead of 100k.

100k for 10TB of high-performance SSD? Maybe if you're only looking at list price (and even then, that's not what it goes for from a major vendor) - street price is nowhere near that.

I'm done here - we'll have to agree to disagree. I've seen too many places that spun their own and ceased to exist as a result to ever go that way. I recommend the Synology for the OP's use case.
 
Yes, I am fairly certain I do want to go with one of those "pay more because it says certified" types of devices. I will be moving to a part-time contract position with the school (12 hours a month), and I will not be there immediately if there is a problem. To be fair, on the FreeNAS we were at 250 days of uptime without a single issue, and I am hoping to surpass that by a large margin if possible.

Because this is a production environment, I cannot be telling people, "Sorry, we will fix it in 12 hours when I get off work and can come in tonight." So I am hoping that the person who IS here, who knows the setup but not how to troubleshoot ESXi datastore connection issues, can call VMware or Synology and get some troubleshooting help from them until I can get in to look at it myself.

The fact is, I have about $2,200 I can spend on this little upgrade of mine, and I really think that as long as the 1815+ can handle this workload, I can add six 3TB WD Red drives and two 240GB Intel SSDs for caching, and get a great little setup with full support if needed and plenty of storage with room to grow.

So far I have had two people say the 1815+ will do what I need it to do. Does anyone else have input on that model? Thanks for all your help - it is so refreshing to post on a forum and get real input.
 
Synology DS1815+ will be fine.

However, I would look at getting 3 or 4x 480GB SSDs to run in RAID 5 for your VMs' OS and data VMDKs, then 3 or 4x 3TB WD Reds for your file share. Don't run all those VMs on the WD Reds; SSDs are so cheap.

Right now on Amazon:

$1,050 - DS1815+
$920 - 4x 480GB Intel 530
$456 - 4x 3TB WD Red

$2,426 total

1.3TB of SSD for the VMs
8.1TB of SATA for the file shares
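(Those usable numbers are just RAID 5 minus one disk, with the vendors' decimal TB converted to the binary TiB the OS reports. Quick sanity check, assuming one disk of parity per volume:)

```python
# RAID 5 usable space = (disks - 1) * disk size, then decimal GB -> binary TiB.
def raid5_usable_tib(disks: int, size_gb: float) -> float:
    usable_gb = (disks - 1) * size_gb   # one disk's worth goes to parity
    return usable_gb * 1e9 / 2**40

print(f"4x 480GB SSD:  {raid5_usable_tib(4, 480):.1f} TiB")   # ~1.3
print(f"4x 3TB WD Red: {raid5_usable_tib(4, 3000):.1f} TiB")  # ~8.2
```

The small gap against the 8.1TB quoted above roughly accounts for filesystem overhead.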
 
That sounds like it would work well. One problem I have is that at the moment, I have my file storage on the datastore along with all of my VMs. It is presented to the VM as a separate disk (call it D:).

There are two disks: "fs1.vmdk" is the C: OS drive, and "fs1_1.vmdk" is the D: files drive. Would I just move that .vmdk to the WD Reds' NFS location? Essentially I will have two separate NFS datastores, one on the SSDs and one on the WD Reds... correct?
 
You could also do:
- Present the 1.3TB SSD volume as a datastore to the ESXi host(s)
- Present the 8.1TB HDD volume as iSCSI to a Windows file server VM

Or, as you said, present both to the host(s) as datastores.

How you tackle the build will determine the best method for moving things around.
I assume the current VMDKs for the file server are larger than the 1.3TB of SSD storage?

Not knowing your full setup or anything, my initial hunch would be to spin up a new file server VM, present the 8.1TB HDD volume to it via iSCSI, and share that out. Then move the data over from the old file server (leaving it running on the present datastore).

Of course, there are multiple ways to tackle it, but that would be near the top of how I'd do it with the information given :).
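If you do end up with both volumes as datastores, moving just the D: disk is a per-disk relocate. A rough pyVmomi sketch (the disk label and datastore are placeholders; with the VM powered off this works even against a standalone host, while doing it live is Storage vMotion through vCenter):

```python
from pyVmomi import vim

def move_disk(vm, disk_label, target_ds):
    # Find the virtual disk by its label, e.g. "Hard disk 2" (fs1_1.vmdk).
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk)
                and d.deviceInfo.label == disk_label)

    # Relocate only that disk to the target datastore (the WD Red NFS
    # volume, say); everything else stays where it is.
    locator = vim.vm.RelocateSpec.DiskLocator(diskId=disk.key,
                                              datastore=target_ds)
    return vm.RelocateVM_Task(vim.vm.RelocateSpec(disk=[locator]))
```

You'd fetch `vm` and `target_ds` from the usual inventory walk (as in the NFS-mount sketch earlier in the thread) and then wait on the returned task.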
 