NAS for small business

Hi Everyone,

Currently speccing out my first NAS & wanted to run it by some experts before buying everything. It's pretty vanilla: I don't want this to be a project... just something quick and reliable. It's for my graphic design studio. I want a shared file drive plus an archive for files that are currently spread across 5 different machines (3 Win10, 2 macOS).

Synology DS918+ $550

2x 6TB (raid 1) WD Red Pro archive $213 each (the Pro is cheaper than a regular Red for some reason)

2x 1TB (raid 1) WD black NVMe SSD - active jobs/working folders $220 ea

Automatic cloud backups of everything using Backblaze (a rough sketch of how that could be scripted follows this list)

Sync the SSDs with Dropbox for remote access (or w/o Dropbox if possible with built-in DSM software... I'm still researching this).
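For the Backblaze piece, I'm assuming the NAS-friendly route is a Backblaze B2 bucket (via DSM's Hyper Backup or a script) rather than their desktop backup product. Purely to illustrate what's going on underneath, here's a rough, untested sketch using Backblaze's official Python SDK (b2sdk); the bucket name, credentials and paths are placeholders:

Code:
import os
from b2sdk.v2 import InMemoryAccountInfo, B2Api

def upload_folder(local_root, bucket_name, key_id, app_key):
    # Authorize against B2 and look up the target bucket
    b2 = B2Api(InMemoryAccountInfo())
    b2.authorize_account("production", key_id, app_key)
    bucket = b2.get_bucket_by_name(bucket_name)
    # Walk the archive and upload each file under its path relative to the root
    for dirpath, _dirs, files in os.walk(local_root):
        for name in files:
            local_file = os.path.join(dirpath, name)
            remote_name = os.path.relpath(local_file, local_root).replace(os.sep, "/")
            bucket.upload_local_file(local_file=local_file, file_name=remote_name)

if __name__ == "__main__":
    # Placeholder path, bucket and credentials -- Hyper Backup would replace all of this
    upload_folder("/volume1/archive", "studio-archive",
                  os.environ["B2_KEY_ID"], os.environ["B2_APP_KEY"])

In practice I'd let Hyper Backup handle the scheduling, versioning and retries; the script is just to show the idea.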

Questions:

I've read mixed reviews of SSD read/write cache performance, which is why I thought I'd get 1TB SSDs and use them for active file storage. Good idea or bad idea? Smaller SSDs are cheaper obviously, but I don't mind moving up to 1TB if that nets me consistently faster read/write speed over a caching setup.

Should I downgrade to a NAS that supports SATA SSDs instead of NVMe drives? I'd like to get a long lifespan out of this by using newer/faster tech, but if the speed difference won't be noticeable I'd rather save the cash.

Can I use the NAS to back up the other computers on the network, then have the NAS back everything up via Backblaze? Or do I need to back up each computer individually? I want to set this up properly out of the gate so I can quickly restore data if something goes down.

Appreciate any insight you can share! Hopefully I'm on the right track...
 
I see where you're going with this, but I don't see the real application here.

You'd need a 10Gbps network to really make use of any 'SSD caching'. For that purpose the Black drives are fine, but they also don't need to be 1TB.

Beyond that- the reason to get a Synology unit is for their software; their hardware is overpriced and underpowered. By a lot.

However, they're still popular because they work, and that's important for storage. You just don't need a 'Plus' version for a fileserver; the Plus line is largely aimed at media streaming on top of fileserving. A basic 418j would be just as fast on a 1Gbps network. With that savings, and by dropping the SSDs, buy two more drives. Run them in RAID5 (or the DSM equivalent) and roll out.

Now, if you want it actually faster, then factor in 10Gbit: a switch, new cabling, and new cards for the clients.
 
Thanks for the response. I was just reading up on network speed & didn't realize it was a common bottleneck (I'm new at this).

I'm a little concerned that serving files off the NAS will slow down our workflow, which is why I wanted to include SSDs. Will do some digging to see if going to 10GbE is realistic. We're moving locations next month, so now would be a pretty good time to plan cabling if needed. That said, we could always try a 418j and HDDs and see if speed is an issue, then upgrade later if needed.
 
Well, you have two bottlenecks with remote storage: actual speed, which will top out at about 115MB/s on gigabit, and latency, which is the sum of all of the extra infrastructure between your systems and the storage array.

You will notice both; 10Gbit really only helps with the former. The latter requires enterprise-level investment into infrastructure.

Further, moving up to 6+ drives can saturate 10Gbit. Here SSDs might be helpful in reducing latency a little, but not much; and HDDs are still better at sustained writes than most SSDs. If you have to move very large files, you won't see a difference.
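If it helps to see the arithmetic behind those figures, here's a rough sketch; the ~90% protocol efficiency and ~180MB/s per-drive sustained rate are assumptions, not measurements:

Code:
# Back-of-envelope link vs array throughput.
# Assumptions: ~90% usable after TCP/SMB overhead, ~180 MB/s sustained per 7200rpm-class HDD.
def link_mb_s(gbit, efficiency=0.90):
    return gbit * 1000 / 8 * efficiency      # Gbit/s -> MB/s after overhead

def hdd_array_mb_s(drives, per_drive=180):
    return drives * per_drive                # sequential reads scale roughly with drive count

print(f"1GbE  ~ {link_mb_s(1):.0f} MB/s")    # ~112 MB/s, i.e. the ~115MB/s ceiling above
print(f"10GbE ~ {link_mb_s(10):.0f} MB/s")   # ~1125 MB/s
for n in (2, 4, 6, 8):
    print(f"{n} HDDs ~ {hdd_array_mb_s(n)} MB/s sequential")   # 6-8 drives start to saturate 10GbE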

The biggest problem is that Synology charges tremendously for 10Gbit. You can get an ASUSTOR box for far less that will also do fileserving duties, just with a less slick interface.

But really, the cost is going to be in the network hardware. The switches start at ~US$600 for 10Gbase-T (uses CAT6 up to 55m and CAT6A up to 100m), and the NICs are about $80/each, assuming that you have a PCIe 3.0 x4 slot for them. If not, stuff gets complicated.
 
Great info, thank you. Good call on the 1G limit of most Synology units, I hadn't noticed that before. Will do more research, I'm learning a ton fast. But it's looking more likely I'll go the HDD/raid 5 route as you suggested.
 
If you're going simplistic, check out some of QNAP's units as well. They're along the same lines as Synology, but a lot of the units have 10GbE or SFP+ built in (I have one of the 1635AX models and it has dual 10Gb SFP+ ports).
Something like the TS-431X2 has an SFP+ port built in and is $600 for the 8GB version.

Keep in mind that with the smaller, cheaper units you might be limited by the CPU before the disk/network if you're going for the lowest cost.


Additional note: your total storage cost for 2x 6TB drives and 2x 1TB SSDs is about $866.
Depending on your expected workload, you could consider getting 4x 2TB SSDs and putting them in RAID 5, since everything will be backed up via Dropbox/Backblaze anyway.
It would be about the same storage size and a slightly higher cost than the two platter drives + two NVMe SSDs, but you'd get substantially better performance and latency.
Based on Newegg it's $250/ea for the WD Blue 2TB and $290/ea for the Crucial MX500 2TB.
 
Good idea to look into entry level QNAP boxes with 10G. I wish Synology had anything in that price range with similar capabilities. To take that idea a step further, for another couple hundred you can get a QNAP Thunderbolt unit which opens up even more possibilities (more on that below).

Instead of sleeping last night I did lots more reading (so much reading), here's where I think I'm gonna land:

Super simple 1G - $670

Synology DS218+ $300

1G switch - $30

2x 6TB WD Red HDDs in raid 1 $340

Pros: Nothing exotic about it. Everything has a warranty. Synology software. Gets us up & running the quickest. Cons: 1G might be a bottleneck, in which case we'd need to replace the entire setup eventually.

Basically if I get 2-3 years out of this setup I think I'm good with that. I can revisit then and by then 10g might be more consumer-friendly. (Built into mobos, on lower-end NASes, etc). The more I read, the more I felt that my use case is right on the edge of needing 10g, but for most of my work I'll be ok without it. If any workflow ends up being too slow to work directly off the NAS I can do that work on a local SSD & archive it afterwards.

Here are some other ideas I looked at but like less for now. I like the idea of getting used to the new workflow on a basic setup and I can always upgrade later.

-----

Cheap(ish) way to get the Macs Thunderbolt connections now, and I can bring in 10G later as needed. - $1,140

QNAP TS-453-BT3 $800

2x 6TB WD Red HDDs in raid 1 $340

Pros: Save a ton because no 10G switch or Tbolt adapters. Room to grow into more drives or 10G later. Cons: No Synology sw. Direct connect limits setup options & # of clients.

---------

10G aka Yep Guess it's a Project - $2,250

DS1817 10GBE NAS - $800

QNAP Thunderbolt 3-to-10G RJ45 adapters x3 - $510 (good god)

Mikrotik or Aruba S2500 (used) 10G switch <$150

2x 6TB WD Red HDDs in raid 1 $340

2x 1TB Samsung Evo SSD (sata) in raid 1 $300

Cable/fittings (fiber or DAC) - $150

Pros: Speed. Scalable. High-end hardware. Cons: Cost, more involved setup. All this $$$ and still can't take advantage of NVMe or Thunderbolt speeds.

 
You don't need 10GbE for SSD caching to help. It really depends on your workload. Hard drives are fine with large sequential reads/writes, but if you have a lot of random IO, then even with 1GbE an SSD will help a fair bit.

Honestly I wouldn't worry about 10GbE for now. Even in large companies with money to spare, you will seldom see it on workstations. The DS918+ should serve you quite well. Just get an inexpensive switch that supports link aggregation and use both NICs on it.
 
Super simple 1G - $670
Is this the cost for a basic switch or does it have PoE? That seems really pricey if it's basic; what brand/model is it?
You can pick up a 48-port managed Ubiquiti switch (with two 10Gb SFP+ ports) for about $400.
https://www.amazon.com/Ubiquiti-EdgeSwitch-Managed-Gigabit-ES-48-Lite/dp/B013JDNN3K
If you want a centrally managed one that requires their software to provision, there's this too:
https://www.amazon.com/Ubiquiti-switch-Managed-gigabit-US-48/dp/B01LZZ6DQ9

Pros: Nothing exotic about it. Everything has a warranty. Synology software. Gets us up & running the quickest. Cons: 1G might be a bottleneck, in which case we'd need to replace the entire setup eventually.
1Gb wouldn't really be much of a bottleneck with only 2 drives in RAID 1.
With only 2 drives you'd likely have to replace it all long term to expand anyway unless you pay through the nose for large drives.

I agree with Blue Fox overall; for only 2-4 non-SSD drives I would skip 10Gb altogether for now.
A secondary note: is it likely multiple users will be using it simultaneously for heavy work (not just copying files to/from)? If it's primarily for archive/backup then 1G is fine.

Keep in mind for your 10Gb future expansion, 10Gb Ethernet (10Gbase-T) switches tend to be a lot more expensive than 10Gb SFP+ ones (and there generally aren't transceivers to go from SFP+ to 10Gbase-T).

That being said, the way to really take advantage would be to have multiple people able to use the NAS at full gigabit speed each (with 5+ drives).
There are many switch options under $500 with 48 gig ports and 2-4 10Gb SFP+ ports that you could connect the NAS to like the Ubiquiti switch I noted above.
 
Thanks guys for the guidance. I'm definitely still considering all options at this point, and now understand a little better how many variables there are.

The plan is for 3 users to simultaneously be working directly off the NAS. Files will primarily be 1MB - 200MB Adobe CC files, with the occasional 1GB+ layered Photoshop file. Not a ton of batch processing, but it comes up from time to time. Some video editing as well but nothing too intense. Right now we probably have 2-3 TB of data, so I'd like some headroom on the new NAS so that I can set it & forget it for at least a couple years.
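To gut-check whether working directly off the NAS will feel slow, I ran some rough save/open time numbers for our file sizes, assuming the network is the bottleneck (~115MB/s usable on 1GbE, ~1100MB/s on 10GbE; real numbers will vary with SMB overhead and how the Adobe apps actually write files):

Code:
# Rough network-bound transfer times for our typical files (sizes in MB).
FILES = {"small layout": 1, "typical PSD/AI": 200, "big layered PSD": 1000}
LINKS = {"1GbE": 115, "10GbE": 1100}          # assumed usable MB/s per link

for label, size_mb in FILES.items():
    times = ", ".join(f"{link}: {size_mb / mb_s:.1f}s" for link, mb_s in LINKS.items())
    print(f"{label} ({size_mb} MB): {times}")
# Note: link aggregation (LACP) doesn't speed up a single save -- one flow is
# still capped at ~115 MB/s; it just lets several clients hit that at once.

So a 200MB save is roughly 2 seconds over gigabit, and even the occasional 1GB+ file is under 10 seconds, which seems livable.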

10G seems a little spendy for me, but if you guys think it's a good idea I could piece something together. I was looking at a used Aruba switch or the Mikrotik to save $$ vs a $500+ switch.

I'd also totally consider SSD caching, and the DS918 has M.2 slots for that. A full SSD RAID array is too cost-prohibitive at this point... that was my original fantasy build. So in the OP I dialled it back to an SSD tier for active jobs and a larger archive using spinners.
 
Is 'graphics design' an IOPS-bound workload?
Depends, are they working on the files on the NAS directly or are they downloading them locally, editing, and then re-uploading? (normally downloading them locally is more common and then uploading varying revisions)
 
I wouldn't worry about ssd caching for your workload. Assuming user PCs have SSDs, it's easy and fast enough to just transfer projects to and from the local drives when they need the added speed.

I have a customer using a DS916 and 2x6tb Reds with a similar work load and no speed complaints. 4-5 designers on iMacs and ~10 PC general office users.

They have a second smaller unit (DS216 I think) for NAS and computer backups. Time Machine backups can do a pretty good job of saturating the network, but after the large initial sync it's not too bad.
 
SSDs are really, really not going to be of great utility for a file server. If you were running databases or VMs (or both) off of it like a small SAN, sure- that's no different than if you were running the same on your own desktop.

With respect to 1G or 2G with LACP (combining links), the disparity is that with two drives in a mirror you'll see ~350-400MB/s from the array, but at most ~230MB/s (split between multiple clients!) due to the network limitation, and only ~115MB/s to any individual client.

Now, the reason I suggested 10G was two-fold. First, as you say you're at 2-3TB of data, a mirror of 6TB drives is only going to give you ~5.5TB usable; that's why I recommended four drives, which gets you ~11TB usable. Second, four drives can hit read speeds in excess of 700MB/s, which gigabit can't come close to delivering. Speaking from experience, I'm running this at home right now, albeit with a custom 'fileserver'.
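For reference, here's a quick sketch of the usable-capacity math with 6TB drives; which four-drive layout you pick (striped mirrors vs RAID5) is what sets the final number:

Code:
# Usable capacity for 6 TB (6e12 byte) drives under common layouts.
# The OS reports binary TiB, which is why a "6TB" drive shows up as ~5.5.
def tib(tb):
    return tb * 1e12 / 2**40

layouts = {
    "2-drive mirror (RAID1)":           6 * 1,   # one drive's worth of usable space
    "4-drive striped mirrors (RAID10)": 6 * 2,   # half the drives
    "4-drive RAID5":                    6 * 3,   # all but one drive
}
for name, usable_tb in layouts.items():
    print(f"{name}: ~{tib(usable_tb):.1f} TiB usable")
# ~5.5 / ~10.9 / ~16.4 TiB respectively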

The basics are a 4+ bay NAS with 10Gbit and support for SMB (aka Samba). They all have SMB; it's the 'Windows' network file-sharing protocol.

Depends, are they working on the files on the NAS directly or are they downloading them locally, editing, and then re-uploading? (normally downloading them locally is more common and then uploading varying revisions)

Generally, the software does this as well- the files will be downloaded to the client system upon access and stored in memory and local cache. That's why I'm also not big on SSDs in NAS devices used only as local fileservers.
 
They really don't need 10GbE for workstations. They will be just fine with 1GbE (which has been the standard at every Fortune 500 I've worked for). 100MB/s will not be inadequate. You're also assuming large sequential workloads, which is not always the case; SSDs do help with caching random IO.
 
I have a customer using a DS916 and 2x6tb Reds with a similar work load and no speed complaints. 4-5 designers on iMacs and ~10 PC general office users.

This is great context. If it works for them it should work for my smaller team.

Depends, are they working on the files on the NAS directly or are they downloading them locally, editing, and then re-uploading? (normally downloading them locally is more common and then uploading varying revisions)

I'd prefer that my group is able to work directly off the NAS. It's better for version control and file structure, and less time needs to be spent moving files around. That's why I'm so interested in transfer speeds... if it takes several seconds every time someone hits Save it's going to be an issue.

With respect to 1G or 2G with LACP (combining links), the disparity is that with two drives in a mirror you'll see ~350-400MB/s from the array, but at most ~230MB/s (split between multiple clients!) due to the network limitation, and only ~115MB/s to any individual client.

Right now I'm limiting the search to LACP-capable hardware (NAS and switch) for the added bandwidth. 10G is basically a $450 add-on not counting wiring and associated hardware (so realistically that becomes $1k+ in add-ons). At this point, based on cost and complexity I'd only be buying a 10G-capable box to allow for upgrading later.
 
Right now I'm limiting the search to LACP-capable hardware (NAS and switch) for the added bandwidth. 10G is basically a $450 add-on not counting wiring and associated hardware (so realistically that becomes $1k+ in add-ons). At this point, based on cost and complexity I'd only be buying a 10G-capable box to allow for upgrading later.

It's not a bad idea. A used Aruba S2500 (I have one) with 24 1G ports and no PoE would let you set up LACP now and provides 10G for the future. If the NAS you select has the expansion option, such as the higher-end Synology and some of the QNAP units, you can grab fiber interfaces and a cable (both cheap) or a DAC (you'll want one customized from FiberStore for the device on each end to make sure that it will work).
 
Just a quick update on my game plan. I decided to wait 6 months or so to take the plunge, for budget reasons. And when I do buy everything I'll get a 10G-capable NAS after all. It looks like cheaper 10G gear is starting to hit the market (example: I found an entry-level unmanaged switch for $150). The current build looks like this:

DS1817 or DS1618+ with PCIe NIC

3x shucked WD drives of whatever size is on sale

I still need to research wiring & connectors. Initially I'll probably just rock Cat5e until I can afford to upgrade everything. Best case scenario the extra speed isn't needed and I can put that off indefinitely.
 
If you have a basic layer 2/3 switch you can get away with one of the Synology units that has several 1Gbps ports and do link aggregation.

That way, several people working on single files at a time won't be bottlenecked by a single gigabit connection.
 
The plan is for 3 users to simultaneously be working directly off the NAS. Files will primarily be 1MB - 200MB Adobe CC files, with the occasional 1GB+ layered Photoshop file. Not a ton of batch processing, but it comes up from time to time. Some video editing as well but nothing too intense. Right now we probably have 2-3 TB of data, so I'd like some headroom on the new NAS so that I can set it & forget it for at least a couple years.
I just found this thread, but wanted to add some ideas from my experience building storage for SMBs.

What kind of file-save times are the users used to right now? And how many would be saving at the same time? The idea is to understand how long they're used to waiting on IO, as this is their baseline tolerance--faster will always be better, but slower will make them want to avoid using the NAS.

If they currently have SSDs in their computers, a NAS will be the bane of their existence, as they will end up waiting on files to save over 1Gbit. 10Gbit would almost be a must, along with an array of drives or large SSDs.

And speaking of arrays of drives, you need to really think about integrity and how 'safe' you want your data to be. Drive arrays like raid 5 can recover from the odd 'bit rotted' bit, but they also are vulnerable to catastrophic failure due to multiple points of failure like the raid controller (which can also fail) or a multiple drive failure. Raid 1 is better for catastrophic failure, but 'bit rot' will go undetected. Bit rot isn't as much of an issue with video or photos as a bit changed here or there won't even be noticeable, but for data files like pdf, etc, a bit changed in the wrong place will render a file completely corrupted and useless.
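If you do end up on a RAID/filesystem combo with no checksumming (plain RAID1 or RAID5 on ext4), a poor man's bit-rot check is to keep a hash manifest of the archive and re-verify it periodically. A rough, untested sketch of the idea (paths are placeholders):

Code:
# Hash every file under a root and compare against the previous manifest.
# A file whose hash changed while its mtime did not is a bit-rot suspect.
import hashlib, json, os, sys

def sha256(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def scan(root):
    result = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            result[rel] = {"hash": sha256(full), "mtime": os.path.getmtime(full)}
    return result

if __name__ == "__main__":
    root, manifest_path = sys.argv[1], sys.argv[2]   # e.g. /volume1/archive manifest.json
    current = scan(root)
    if os.path.exists(manifest_path):
        with open(manifest_path) as f:
            previous = json.load(f)
        for rel, info in current.items():
            old = previous.get(rel)
            if old and old["hash"] != info["hash"] and old["mtime"] == info["mtime"]:
                print(f"POSSIBLE BIT ROT: {rel}")
    with open(manifest_path, "w") as f:
        json.dump(current, f, indent=2)

It's no substitute for ZFS/btrfs-style checksums (it only detects, never repairs), but it will at least tell you which archived file to pull from a backup.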

Sometimes simple works well, so don't discount that. A single computer with dual 10G and a couple of high-capacity SSDs in RAID1 would give you a centralized storage solution with cloud backup pretty quickly. And being off the shelf also allows quick replacement in the event of a major failure. Of course the potential downside to this is cost.

But a similar solution using HDDs could be RAID1+0, which in the worst case gives you the same catastrophe resistance as RAID5, but in the best case can survive the failure of half the drives (see the quick enumeration below). RAID1+0 should give you enough speed as well with enough striped drives.
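Here's the quick enumeration behind that worst/best case claim, assuming four drives with mirror pairs (0,1) and (2,3):

Code:
# Which two-drive failures does each 4-drive layout survive?
from itertools import combinations

MIRROR_PAIRS = [(0, 1), (2, 3)]       # assumed RAID1+0 pairing

def raid5_survives(failed):
    return len(failed) <= 1           # any second failure kills RAID5

def raid10_survives(failed):
    # Survives as long as no mirror pair has lost both of its members
    return all(not set(pair) <= set(failed) for pair in MIRROR_PAIRS)

two_drive_failures = list(combinations(range(4), 2))
survived5 = sum(raid5_survives(f) for f in two_drive_failures)    # 0
survived10 = sum(raid10_survives(f) for f in two_drive_failures)  # 4
print(f"RAID5  survives {survived5} of {len(two_drive_failures)} two-drive failures")
print(f"RAID10 survives {survived10} of {len(two_drive_failures)} two-drive failures")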

Another solution would be a pair of RAID1 (mirrored) drives for each user in that single computer with 10G. Full redundancy in case of a failure, and with Windows' built-in software RAID it's not vulnerable to a controller failure. Hard drives would be slower than local SSDs though, so that could be an issue.

Lots of variables to think about! :eek:

This all being said, I actually like your original configuration. You'll have speed from the SSDs and can upgrade them in size as SSD prices fall, and the larger HDD can be upgraded as you need more storage. The only single point of failure would be the synology itself, but you'd have enough other backups to get to your data if you're down for a week or so. And if your downtime costs a lot, the 918+ also supports high availability, which basically is a second 918+ waiting in standby in case the primary fails:
https://www.synology.com/en-us/know..._availability_configuration_with_Synology_NAS
 
Thanks for the advice, I have to admit even after thinking on this for a few weeks I feel like I'm much more informed but still back & forth almost daily about what setup to get.

I also have a new requirement to add two security cameras, and the easiest way to do that seems to be to use the NAS as an NVR as well. I'll leave the NVR and cam specifics out of this thread for simplicity's sake, other than to say that it does add to the need for bandwidth if I use PoE cams.

What kind of file-save times are the users used to right now? And how many would be saving at the same time? The idea is to understand how long they're used to waiting on IO, as this is their baseline tolerance--faster will always be better, but slower will make them want to avoid using the NAS.

If they currently have SSDs in their computers, a NAS will be the bane of their existence, as they will end up waiting on files to save over 1Gbit. 10Gbit would almost be a must, along with an array of drives or large SSDs.

Great point, and why I originally thought 10GbE might be necessary. I personally work off an SSD storage drive, so I will be the guinea pig for how fast this thing is. Once I'm happy with it I'll integrate the other users. They won't be as picky as me, but I can't have users working off their local drives just because it's more convenient. I used to work on a team of 8-10 designers, and our server wasn't capable of keeping up, so files were everywhere and version control was an issue.

And speaking of arrays of drives, you need to really think about integrity and how 'safe' you want your data to be. Drive arrays like raid 5 can recover from the odd 'bit rotted' bit, but they also are vulnerable to catastrophic failure due to multiple points of failure like the raid controller (which can also fail) or a multiple drive failure. Raid 1 is better for catastrophic failure, but 'bit rot' will go undetected. Bit rot isn't as much of an issue with video or photos as a bit changed here or there won't even be noticeable, but for data files like pdf, etc, a bit changed in the wrong place will render a file completely corrupted and useless.

What do you (and others) think about using Synology's hybrid RAID? That's what I would prefer as it seems like it works well out of the box and provides an easy path to add capacity later. I'm starting with 3x 6TB drives (caved & bought em on sale already).

This all being said, I actually like your original configuration. You'll have speed from the SSDs and can upgrade them in size as SSD prices fall, and the larger HDD can be upgraded as you need more storage. The only single point of failure would be the synology itself, but you'd have enough other backups to get to your data if you're down for a week or so. And if your downtime costs a lot, the 918+ also supports high availability

The problem I have with the 918+ is the lack of 10GbE even as an option, which is why I'm leaning toward the 1618+: I can hook it up with LACP and an HDD array and test performance. If it's good enough, end of project! If not, I can see where it bottlenecks and either add a smaller SSD array in the 3 spare bays for 'live' files and/or add a 10GbE card, switch and cables.
 
Unless you really need the security cameras connected to the NAS, I would just get a dedicated NVR for that, since they are cheap and you'll be able to view the cameras without any additional load on the NAS.

If you're also going to be using it, something dead simple to start may be to add a 10Gb card to your workstation and put a drive in your system and share it. If that seems to work, then the 1618+ or qnap should fit the bill.

I don't like the idea of proprietary systems too much as they create vendor lock-in and can be a real problem if you have to do any data recovery. That being said, it is an interesting concept for sure. And adding capacity is never really an issue--it's only an issue if you need that capacity in a single volume. And honestly there are very, very few use cases that absolutely require a single volume, especially when being used as archive. You can always have different volumes for different archives--2007-2010, 2010-2018, 2018-2019--as opposed to a single volume that has 2007-2019. And as you upgrade the storage, you can always combine the data from volumes if and when it makes sense. Keep in mind that you should be refreshing your drives about half-way through their warranty. We typically get enterprise class drives with a 5 yr warranty and cycle them out of production after 2-3yrs--still keeping them as backups.

I agree with you on the move to the 1618+, as 10Gb will be important. And you can even test 10Gb locally without a switch by putting a 10Gb card in your workstation and connecting it directly to the NAS.
 
Synology may have proprietary motherboards, but on the software side, it's all mdadm and recovery is no different than any other Linux system. They even provide a guide on how to do it on their website.
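For the curious, the recovery boils down to assemble-and-mount on any Linux box. A rough, read-only sketch of those steps (the LVM path shown is a typical Synology default, not a guarantee; follow their guide for the real procedure):

Code:
# Rough, read-only recovery sketch for Synology drives on a generic Linux box:
# assemble the md arrays, activate LVM if present, and mount read-only.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["mdadm", "--assemble", "--scan"])        # detect and assemble the RAID arrays
run(["vgchange", "-ay"])                      # activate LVM volume groups (SHR volumes use LVM)
# /dev/vg1000/lv is a common Synology volume path -- verify with lvdisplay first
run(["mount", "-o", "ro", "/dev/vg1000/lv", "/mnt/recovery"])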
 
Synology may have proprietary motherboards, but on the software side, it's all mdadm and recovery is no different than any other Linux system. They even provide a guide on how to do it on their website.
This is good to know. This means that most recovery houses should have no problem either. (y)
 
Let me give you an alternative focused mainly on performance and data security, which means using a ZFS appliance.
ZFS is the current state-of-the-art storage filesystem and gives you:

- crash resistance:
no corrupt filesystem on a crash during a write
no corrupt RAID on a crash during a write

older filesystems like ext4, HFS+ or NTFS cannot guarantee this, see http://www.raid-recovery-guide.com/raid5-write-hole.aspx


- checksums on data and metadata to detect all sorts of data errors
- ZFS RAID to repair them on access or during a pool scrub

older filesystems like ext4, HFS+ or NTFS cannot detect data corruption (no checksums)

- superior RAM-based read and write caches, with an SSD read-cache extension as an option
"best in town"


- secure writes (protecting the content of the RAM-based write cache) via a ZIL/SLOG device
older filesystems like ext4, HFS+ or NTFS can only do this with the help of a hardware RAID card with BBU/flash protection, and the RAID write-hole problem persists there


- ECC RAM
Cheaper NAS devices from Synology or QNAP do not offer it. A RAM error can corrupt data
on any filesystem, even on ZFS.


- ransomware-safe read-only snaps/versioning
Create as many as you want (every 15 min, hourly, daily, weekly, monthly, yearly, etc.); a minimal rotation script is sketched at the end of this post.


My suggestion:
- use an entry-class server (with a server-class Intel chipset + ECC) from Dell, HP, Lenovo, etc.
The CPU is not critical; use at least 8GB RAM (Dell T30 etc.).

or

If you can build your own or have an IT specialist/vendor around:

For example:
- case: SilverStone CS380 (backplane with 8 SATA / multipath SAS bays, removable backup disks)
- mainboard: Supermicro X11SSH-CTF (10G + SAS/SATA) or X11SSH-TF (10G, SATA)
https://www.supermicro.com/products/motherboard/Xeon/C236_C232/X11SSH-CTF.cfm

- 8GB of ECC RAM, or more if you want to work off the NAS with Adobe apps
- a mid-range CPU: Pentium G44xx, i3 or Celeron, or a Xeon



For ZFS you need web-managed ZFS appliance software, which can run on most Intel systems.
FreeNAS is the common choice. I prefer Solaris-based options, as Solaris is the origin of ZFS and has the best integration of OS, ZFS and the SMB server.

- e.g. OmniOS (a free Solaris fork). For this I offer napp-it, free web-based appliance software with a Pro option (support and extra features)
https://omniosce.org/
https://napp-it.org/doc/downloads/napp-it.pdf
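As an illustration of the snapshot/versioning point, rotating snapshots is just one zfs command per dataset. A minimal sketch of a cron-driven rotation (dataset name and retention are placeholders; napp-it/FreeNAS schedule this for you from the GUI):

Code:
# Create a timestamped snapshot of one dataset and prune old automatic ones.
import subprocess
from datetime import datetime

DATASET = "tank/projects"     # placeholder dataset name
KEEP = 96                     # e.g. 24 hours of 15-minute snapshots

snap = f"{DATASET}@auto-{datetime.now():%Y%m%d-%H%M%S}"
subprocess.run(["zfs", "snapshot", snap], check=True)

# List this dataset's snapshots oldest-first and destroy surplus "auto-" ones
listing = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-s", "creation", "-r", DATASET],
    check=True, capture_output=True, text=True).stdout.split()
auto_snaps = [s for s in listing if s.startswith(f"{DATASET}@auto-")]
for old in auto_snaps[:-KEEP]:
    subprocess.run(["zfs", "destroy", old], check=True)

Because the snapshots are read-only, a ransomware-infected client can overwrite the live share but cannot touch the older versions.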
 
I would 100% recommend against ZFS for what you want. The above poster pretty much just pushes ZFS and their software at every opportunity (in which they have a commercial interest), with no regard for what your needs might be. You don't want to be stuck supporting it when something goes wrong. I would recommend sticking with Synology or the likes.
 
ZFS is by far superior to ext4 or even btrfs.
ZFS runs on FreeBSD, Linux or OmniOS, all open source, and gives you real enterprise storage features
nearly comparable to a high-cost NetApp storage appliance. QNAP also has ZFS boxes (very expensive).

The ZFS web-management tools like FreeNAS or my napp-it are free.

For both you can buy support/extras, but you don't need to. There is no limit on OS functionality or capacity.
 
The technical merits don't really matter when the OP is looking for something that is easy to use, setup, and maintain. What you suggest isn't that. Maybe consider that ZFS is not a good fit for most people? Pushing something that you have a commercial interest in doesn't exactly help your case either.
 
Even though I don't use it myself, I see ZFS all the time as 'the' go-to in the enterprise world. It's solid beyond solid and quite proven now, but it's usually touted as 'overkill' (although personally I don't think there can be too much overkill when it comes to data integrity).

Besides, on a NAS the filesystem no longer has to match the clients and can be something different from the native filesystem on a local drive. That's one of the benefits of a NAS. (y)
 
Hah I'm on the fence about Qnap boxes because it seems like their support sucks and I'm concerned setup & maintenance will be too difficult. No effin way I'm building a ZFS/freenas box. ;)
 
QNAP used to be the name in NAS until Synology came along, and I don't think they would still be as strong as they are today if the product didn't deliver.
 
When I first started looking, the TS-453BT3-8G-US was on sale; the connectivity and hardware put the 1618+ to shame.

I think one reason this search is so difficult is that Synology and QNAP have some of the absolute worst naming schemes I've ever seen. Seriously, outside of QNAP's marketing department, who is supposed to be able to decipher that? They even throw search algorithms for a loop and show me a random assortment of products every time I search.

Sorry /rant
 
No effin way I'm building a ZFS/freenas box. ;)

Well, when talking about 'support', the nice thing with ZFS is that you can just shove the drives into another box. You're not dependent on a vendor to access your data; if a QNAP or Synology box breaks, you're reliant on them to get to your data.

With ZFS, something breaks, put the drives in something else. Boot up a USB stick, hell, put them into Windows and boot up a ZFS-supporting VM.

But then there's the whole building it and stuff... so yeah, Synology.
 
I am using a FreeNAS box now. There was a learning curve, and it is still not 100% bug-free. While there are many positive qualities to my setup, for a commercial application I'd look elsewhere for a solution with support.
 