NAS for small business

For sure, if it were for personal use I'd consider it. Maybe if the work NAS goes well I'll take what I learn and build a custom home media server.
 
I am using a FreeNAS box now. There was a learning curve, and it is still not 100% bug-free. While there are many positive qualities to my setup, for a commercial application I'd look elsewhere for a solution with support.

Was running FreeNAS in a Hyper-V instance on Server 2016... then migrated to CentOS 7 :). All commandline here, no worries. Tried UniversalMediaServer and found the UI for ZFS to be lacking.

I'd want ZFS on anything I use because I can, but if someone else has to use it, I'd want to have them logging into a Synology. I'd want a backup plan either way, as ZFS redundancy is about availability, not a substitute for backups.
 
Well, when talking about 'support', the nice thing with ZFS is that you can just shove it into a box. You're not dependent on a vendor to access your data; if a QNAP or Synology box breaks, you're reliant on them to get to your data.

With ZFS, if something breaks, you put the drives in something else. Boot up a USB stick; hell, put them into a Windows box and boot up a ZFS-supporting VM.
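To put it concretely, the whole recovery on a fresh Linux or FreeBSD live environment with ZFS available is roughly this (just a sketch; 'tank' is a made-up pool name):

zpool import                              # scan the attached drives for importable pools
zpool import -f -d /dev/disk/by-id tank   # import it (-f if the old box never exported it cleanly)
zpool status tank                         # confirm the vdevs all came back healthy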

But then there's the whole building it and stuff... so yeah, Synology.
That's just not true. There is no vendor lock-in. I already addressed this earlier: https://hardforum.com/threads/nas-for-small-business.1979773/#post-1044177514

If anything, recovering from one of those appliances is arguably easier than ZFS due to the larger availability of tools for Linux on account of it being far more used. Synology even provides a guide: https://www.synology.com/en-us/know...I_recover_data_from_my_DiskStation_using_a_PC
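For reference, that guide roughly boils down to a few commands on any Linux live environment with mdadm and LVM installed (a sketch only; the vg1000/lv names are just a common Synology layout, so check what lvs actually reports):

mdadm --assemble --scan           # reassemble the md RAID sets from the moved drives
vgchange -ay                      # activate whatever LVM volume groups sit on top
lvs                               # list the logical volumes that came up
mount -o ro /dev/vg1000/lv /mnt   # mount read-only and copy your data off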
 

Versus ZFS, where you just plug it in?

Sorry. ZFS is like LVM but far less pissy. If being able to recover from a host failure and get back to work is a priority, I'd pick it every time: with Synology you're using their OS to manipulate LVM and an underlying filesystem, while ZFS handles the volume management and the filesystem in one layer. And since you still have to check that the 'Linux tools' are installed (which you'd do with ZFS anyway), I see more potential complications.
 
And really this is only a question of budget. Any type of downtime costs money, and depending on how fast you need to be back online, having a vendor to point at, yell at, and throw money at can actually be the cheapest and easiest solution in the long run.

I saw a thread where someone was cooling $600k in server gear with a $600 portable AC that broke. They were upset at the shoddy design, but in another thread I saw someone doing the same thing who priced out a 1-ton HVAC unit that would cost $5k to install--and in their use case that would pay for 17 years of the portable units (assuming one broke every year), and they could even keep spares and get them at the local Home Depot.

Every business case is unique, and finding the right technology mix for the business is what matters, not the technology itself. My NAS? 3 external USB cases with enterprise drives in them, replicating the main drive to the other two every 15 minutes--all hanging off a thin client acting as a file server. Why? Because it's low power, low tech, and cheap, and my downtime is minutes since I can just plug a drive into any computer and be back up and serving. Speed isn't a concern compared to reliability, so this works perfectly. But I'd never recommend this to anyone who needs speeds beyond USB 2.0, as even USB 3.0 is slower than native SATA drives. And I love SAS drives compared to SATA, but then there's a controller to consider in your failure mix along with everything else.
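The replication itself is something you could do with nothing more than a couple of cron entries and rsync (paths made up for the example; not claiming this is the only way to do it):

*/15 * * * * rsync -a --delete /mnt/main/ /mnt/copy1/   # mirror the main drive to copy #1 every 15 minutes
*/15 * * * * rsync -a --delete /mnt/main/ /mnt/copy2/   # and to copy #2; --delete keeps the copies exact mirrors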
 
Synology doesn't have any proprietary OS. It's just Linux, and LVM/mdadm are the same as anywhere else. You can be back up and running in a few minutes from a live CD/USB image.
 
But to those of us who don't breathe Linux/Unix, it might as well be proprietary.
 
Side topic I'd like to bring up... I bought 3 6TB WD externals with the intent of shucking them and putting them in the NAS (whichever one I end up with). In doing more research, however, it looks like they will be Blue drives, and I'm concerned about failure rates and the short warranty on these (1 year) vs. the 8 and 10TB WDs (2 years IIRC). So I will probably return them and get the 8 or 10TB white-label drives instead.

I bought 3 drives so that I can run RAID 5. But is that really needed? I really only need 4-6TB of storage... what if I just buy two 8-10TB drives and use them in RAID 1? Any reason not to do that?
 
If your data is important, I would only buy drives with a 5yr warranty--period. I've found that drives with shorter warranties are lesser drives.

Also, I wouldn't run RAID 5 unless it was necessary, as 2 simultaneous drive failures (in any size array) will lose all your data. I really like the idea of running 8-10TB drives in RAID 1--just be sure to get 5yr warranty enterprise-class drives, or again you're in the same boat where a simultaneous 2-drive failure means 100% data loss. Since the external versions of these are cheap too, I'd get 2+ of those and use them as off-site backups in rotation.
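If you ever end up doing the mirror by hand on plain Linux instead of inside a NAS appliance's GUI, it's only a couple of commands anyway (sketch only; the device names are assumptions, double-check with lsblk first):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc   # build the RAID 1 mirror
mkfs.ext4 /dev/md0                                                     # put a filesystem on it
mdadm --detail /dev/md0                                                # both members should show 'active sync'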

I like to mix and match manufacturers too. I'll get an HGST, a WD, and a Seagate product. This way, you don't have to worry about bad batches of drives or design issues.
 

I go a different route than Brot[H]er SamirD on this. While I agree, and in fact only use enterprise drives for spinning storage, that also means every drive I have ever had fail was an enterprise drive, of which there have been several. I'm not sure it exactly translates to longer life in practice, but if you are curious, there is the HDD tester group - what's it called? They test thousands of them.

I do not, however, mix and match manufacturers. All of my drives are matched per array. If I get any drive acting funny, it gets changed out with another matched drive - but that one, of course, not being of the same vintage. He's correct that the same drive, same lot, same vintage, and same entered-into-service date yields a risk of simultaneous failure, especially under the stress of rebuilding an array. It's a risk, I suppose, I am willing to take.
 
It's interesting that you've seen a lot of failures of enterprise drives. It all depends on a lot of factors--temperature, use, vibration, etc. all play a part. There's some really great data out there from Backblaze, et al. on which drives are the most solid and which are not in their data centers, and that can really help you steer clear of drives with a higher failure rate. Although if your luck is bad, you could be that one unlucky person who gets the 0.01% failure drive. :D

I don't think it's a bad thing to keep the same drives in an array--in fact, I think that's the best thing to do when striping. (Back in the days of RAID 2, that was a must, as all the spindles were actually lock-synced.) But for mirroring (RAID 1), since there are less complicated mechanisms at work at the RAID level (no parity calculation, no stripe calculation, etc.), drives from different manufacturers don't pose as much risk of RAID problems.
 
Two things

1) LACP and unmanaged switches don’t mix

2) 1GbE will be your limiting factor, not the drive speeds. With that in mind, you can shuck external hard drives and use cheap storage that way. 160 bucks for 10TB helium-filled Western Digital drives, and they work fine. That's what I'm using in my QNAP NAS. My NAS is a budget model (231-P), but for small household use it "feels" as fast as an internal local platter hard drive. As fast as an internal SSD? No way. I've got mine set up with LACP and a managed OfficeConnect 1820 switch, which was a pain in the butt, but only because I had a lot to learn too. Ultimately I got it working; I just didn't understand how LACP was supposed to work at first (there's a rough sketch of what the bond config amounts to at the end of this post). https://hardforum.com/threads/recom...with-lacp-for-nas-use-at-lan-parties.1961076/



If your devs are used to internal SSDs, they won't be working off the 1GbE NAS... they'll move the files locally to work on them, and then copy them back for versioning and archival.
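About that sketch: on the NAS itself it's all done through the GUI, but this is roughly the equivalent Linux-side bond config, shown with nmcli (interface names are assumptions, and the switch ports have to be in a matching 802.3ad/LACP group or nothing will link up):

nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100"
nmcli con add type ethernet con-name bond0-port1 ifname eth0 master bond0
nmcli con add type ethernet con-name bond0-port2 ifname eth1 master bond0
nmcli con up bond0

And remember LACP gives you more lanes, not a faster single lane: one client transfer still tops out at 1GbE.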
 
Well, I wouldn't call it a lot in absolute terms, but as a percentage it's a thing. I have probably 32 drives performing storage duty in total, so if even one drive fails, that's an unacceptably high failure rate on paper.

In the last 7 years, I've lost 3 drives. One was DOA, so that's a thing. One got hot because I made a mistake. I'll take the blame on that one. The other died earnestly. Still, that's 3 of 32. One through my own fault.
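Back-of-the-envelope, 3 of 32 over 7 years is about 9% of the fleet, or somewhere around 1.3% per drive per year annualized, which, if memory serves, isn't far off the fleet-wide averages Backblaze publishes.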

Edit: the one that got hot died two months later.
 
Just how much storage is that?
 
Hmmm... based on that metric, I've also seen some high failure rates on a percentage basis--one WDC RE out of a batch of 6. It was replaced under warranty, and then those drives were cycled out to some HGSTs, which are about to be cycled out to another set of HGSTs and maybe some Seagates (haven't decided on this one yet since the HGSTs seem to be quite nice).

Yep, I've seen a DOA at some point too, since I've been buying drives since the 1990s (Steve at Sound Electro Flight was my HD and storage main man until they shut down shop). And I've killed my fair share of drives with excessive heat; most recently I thought I'd toasted an older 3.5" WDC Raptor that was left running in a system in 100F heat inside a case with little or no ventilation. It ended up with some bad blocks from the experience but survived otherwise, which really shocked me. Ah, and then there was the Quantum Atlas 4G UW SCSI back in the day--that thing got cooked too. :(
 
Decided to pull the trigger! Final build:

Synology DS1618+
Mellanox ConnectX-3 CX354A NICs
Finisar FTLX8571D3BNL-E5 transceivers
Eaton 5S1000LCD UPS
Aruba S2500-24P switch

And I found enough old 1-2 TB external hard drives to use for a 4TB RAID array, plus a newer 4TB to use as a USB backup.

All in cost is under $1k! Hopefully I can figure out the switch...
 
Started setting everything up & may have an issue with my eBay-sourced S2500. When I plug it in, the fans spin up and all port LEDs flash, but I get nothing from the power, status, or stack LEDs or the LCD screen. The fans just spin as long as it's plugged in. Does that mean it's DOA? Or is there anything I can try to see if it's responsive?
 

Do you get any response from the front panel? You should be able to do a complete factory reset from there. If that doesn't work I'd consider requesting a return or a refund. Mine worked out of the box.

I also followed this guide.

I'm traveling in Sweden right now so I may not have time to reply frequently. That said, even if I did, I may not have much more advice than this.

I would give it a few minutes though. First time I booted the switch, it took quite a while to boot, and during that time it did have high fan speed.
 
Nice, thanks guys. Yep, it might be DOA, but the seller has been responsive & helpful as well. I'll try opening the cover to see if anything is loose, then button it up and let it run for 20-30 minutes to see if it will fix itself. If not, it's return time.
 
Take a look at the new Asustor 5304T, 2 x 2.5GbE with link aggregation.
I just built one with a 1TB SSD as a cache and 3 WD Red 10TBs in RAID 5, using a Netgear Nighthawk SX10 switch.
Smooth as butter.
 
Please do not use RAID 5 or equivalent; use RAID 10 or RAID 6. RAID 5 is dead for spinning rust. You will likely lose data if you ever need to rebuild an array over 12TB.
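For anyone who wants the math behind that claim: rebuilding means re-reading roughly 12TB, which is about 9.6 x 10^13 bits, off the surviving drives. At the consumer-class spec of one unrecoverable read error per 10^14 bits, the odds of getting through with zero UREs are about (1 - 10^-14)^(9.6 x 10^13) ≈ e^-0.96, i.e. only around 38%, and on a lot of RAID 5 controllers a single URE mid-rebuild kills the whole array. Enterprise drives rated at one per 10^15 bits turn the same rebuild into roughly a 91% proposition, which is why RAID 6 or mirrors get recommended instead.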
 