Advice on a data server for my main PC

Ikasu

Alright. Not too much of a networking guy, aside from home networking, router configuration, NAS use, etc.

I'm trying to offload my plethora of mechanical hard drives from my main PC to a secondary system. ATM I have 8 hard drives in an acrylic mounting rack with a PSU sitting on top of it, with SATA cables running to eSATA backplates in my tower, then SATA from those to my board. Safe to say, not the most pleasant-looking setup.

Here come the questions. What would the performance comparison be between a secondary PC with network-mapped drives on my main PC, versus something like a TrueNAS or alternative setup? I currently have a QNAP TS-851, and although it does the job, I absolutely HATE being tethered to proprietary file systems, or being forced to format an NTFS drive with tons of data on it just to accommodate the NAS. I also get random weird quirks from time to time: initializing the drives takes forever, or there are weird spurts of slowdowns switching between folders on drives that have already been woken. Is there any open-source DIY NAS system out there that can use NTFS-formatted drives with data already on them? I have nearly 70TB spanned across 8 drives (and just bought two new 14TB Easystores to shuck), mostly full, and do not want to have to migrate data off and on to any NAS or DIY NAS system. Plus the thought of having a NAS die (my QNAP), or a proprietary DIY NAS system die, and leave me with data that isn't easily recoverable unless the system is rebuilt, or having to jump through hoops to get my data back, is unappealing and not worth my time.

Here's my plan....

Build a secondary system to house my drives. Small form factor with hot-swap storage bays, or just lots of bays that aren't hot-swap, is fine as well. I've got a Ryzen 5 1600 mATX setup laying around, as well as an i3-8300, i5-6500, etc., so I have the parts to build mITX or mATX. Load it up with all my drives, buy some cheap 10Gb RJ45 network cards (one for my main rig and one for the storage server) along with a 10Gb switch, map the drives, and use it as my personal drive storage. I would literally be the only one using it, and that's fine by me. I don't have experience with Linux, so I'm hoping to either just do it with Windows or a DIY NAS OS that lets me keep NTFS. How feasible would the performance of this route be? Anyone have experience with the speed of this type of system? I've shared folders over the network to other Windows PCs before and mapped them, but mostly over wireless, so I can't really get a good sense of how wired speeds would compare. I'm curious how file interaction for MANY small files would be over the network compared to my current SATA connections. I know large files will be no problem, as I've already done that to mapped drives, but I'm curious how this will affect performance with tiny files.
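For what it's worth, the drive-mapping part of this plan is only a couple of commands once the shares exist on the storage box. A minimal sketch, wrapping the built-in `net use` command from Python; the server name, share names, and drive letters below are all made up:

```python
# Minimal sketch: map shares from a hypothetical storage box ("STORAGEPC")
# as drive letters on the main PC by wrapping the built-in "net use" command.
# The server name, share names, and drive letters are placeholders.
import subprocess

SHARES = {
    "X:": r"\\STORAGEPC\media",
    "Y:": r"\\STORAGEPC\archive",
}

for letter, unc_path in SHARES.items():
    # /persistent:yes re-creates the mapping at every logon
    subprocess.run(["net", "use", letter, unc_path, "/persistent:yes"], check=True)
```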

Which also leads to another question, if I'm on the right path: any advice on two 10Gb network cards and a 10Gb switch? Looking to get them cheap, so I don't mind fleabaying used hardware from data center pulls... I just don't have ANY experience with the brands and their reliability, so any advice on this would be grand.
 
So I completely hear you, and this is one of the reasons I stayed away from NAS units for so long--proprietary lock-in.

However, now that I've used several and am comfortable with them, I really do like them as "plug and forget" storage. My solution to the proprietary issue is to simply not use their RAID features (just one large drive in each, or each drive as its own volume) and just have multiple NAS units. If one goes down, who cares? The data is on another, and you just rebuild the bad one at some point.

Now, one thing you can do is use external drives with NAS units to quickly get them on the network, but that usually isn't the best for speed. Building a system and just sharing the drives will be your best bet; the processor doesn't even have to be fast, and the more memory it has, the better the caching will be. Wired speeds in this type of scenario with 10GbE will be literally as fast as the drives can go, so no lag at all.

As far as small files, that's really a file system issue, as smaller files simply have more per-file overhead. The only way to really speed that up is by using an SSD instead.
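To put a rough number on that small-file overhead before committing to hardware, a sketch along these lines can compare one large file against a folder of tiny ones over the link; the UNC path and local test folders are placeholders:

```python
# Rough sketch: time one large file versus a folder of small files copied to a
# network destination, to see how much per-file overhead the link adds.
# The UNC path and local test folders are placeholders.
import os
import shutil
import time

DEST = r"\\STORAGEPC\scratch"

def timed_copy(paths, dest):
    start = time.perf_counter()
    for p in paths:
        shutil.copy2(p, dest)
    return time.perf_counter() - start

large_file = [r"C:\bench\one_big_file.bin"]
small_files = [os.path.join(r"C:\bench\small", name)
               for name in os.listdir(r"C:\bench\small")]

print(f"1 large file : {timed_copy(large_file, DEST):.1f} s")
print(f"{len(small_files)} small files: {timed_copy(small_files, DEST):.1f} s")
```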

Used 10Gb enterprise switches are generally managed, but a simpler way to get 10Gb is to just use Direct Attach Cables, or DACs. This will give you point-to-point links at 10Gb without a switch. You can even use SFPs and fibre cables, as the pre-terminated ones are pretty cheap used. Now, I say all this but haven't done a 10Gb setup myself, just the research and reading on others' setups, so someone else may be able to advise on the caveats of doing this.
 
Thanks for the input. I feel you on the proprietary lock-in. It's made me want to stay away from using them as my permanent storage solution. ATM I have my 8-bay QNAP simply as a media-serving solution for movies, shows, etc. I don't put any data on there that I have on my main system, so if I do lose the data on the NAS, it's not a big issue for me atm. I don't think I could do the multiple-NAS route; it just takes up so much space to have multiple units.

In terms of external drives, I just avoid them as much as I can, hence this 8-bay acrylic tower monstrosity with all of my drives passing through SATA to my board... lol. After a quick Google search on DAC and SFP, I think that's a good solution, although I've never set up something like that before, so I may be getting in a bit over my head; hence why I was looking at the convenience of 10Gb RJ45 with a switch. Regardless, it gives me something to think about. I'm currently using a system in the other room that mines 24/7 with a 3080; I also use it as my Foundry VTT rig, since the CPU isn't being used. So I just Wi-Fi over to it for transferring files via a Windows share. I'm probably going to hardwire a system to my network and test transferring between systems, as well as between drives on my PC, to get an idea of the difference. If it's good enough, I'll just build a Windows file server. It seems to be the only method that doesn't have the downsides of non-NTFS file systems. From my limited research online, many posts raise longevity concerns with NTFS in a NAS OS (for the ones I found that will do it). So I'm going to just Windows 10 it up, or start messing around with Windows Server for once and see how that goes.
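For the hardwired test, something like iperf3 takes the drives out of the equation entirely and shows what the raw link can do. A small sketch, assuming iperf3 is installed on both machines and already running as a server (`iperf3 -s`) on the other box; the hostname is a placeholder:

```python
# Raw link test with iperf3, which removes the drives from the equation.
# Assumes iperf3 is installed on both machines and "iperf3 -s" is already
# running on the storage box; the hostname is a placeholder.
import subprocess

SERVER = "storagepc.lan"

# -P 4 runs four parallel streams, -t 30 runs the test for 30 seconds
subprocess.run(["iperf3", "-c", SERVER, "-P", "4", "-t", "30"], check=True)
```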

In terms of cases... man, there aren't any decent cases with lots of hard drive support in a small footprint. The ones that are out there are like 220 bucks for a Silverstone, but have heat issues. I think I'm going to take the time and 3D print a case of my own design. I haven't done it before, but it seems like a fun little project: a mini-ITX rig using the hardware I have laying around, and a modular cube-compartment design that houses a 120 mm fan for each set of 3 drives. That'll let me add another cube each time I want to add three more drives, with a cable-feeding system that makes sense. Hmmm... might have to go mATX for the extra SATA ports and the expansion possibility for SATA cards. Guess I gotta spend my free time blueprinting this up... lol.
 
I think the Synology stuff uses file systems you can recover on other things, BUT that's neither here nor there.

You're probably not going to find open-source solutions that use NTFS, and if so, not right out of the box. The problem is licensing. I think Microsoft never opened NTFS, and I wish they had [aside: because why the fluff are the file systems for EFI limited to 4GB files! Even something like ext3/4 would be nice, but no, we're stuck with FAT32], and I know FATX was specifically made in a way to give them something they can control. There are open-source NTFS implementations, but they are all sort of legally grey, and I don't know if anything but Windows works with their software RAID.

That said, I like your idea; I think it will work just fine for you, and as long as you don't need software that requires an actual drive letter, you're fine. If you do, I'd look into iSCSI. I've not had Windows host it and never looked into that side, but I know it can connect to targets. It's just a protocol, so instead of Windows networking you'd be using iSCSI; the benefit is that Windows treats it as a local hard drive. I had OpenNAS as my source, but again, that's going the route you don't want to go, with converting drives and so on. You don't need a switch if you're just going to have one device connected to another. I've not used 10GbE over copper, just fiber; you could use a short fiber cable or what's called a direct attach cable. You'd get SFP+ cards for your machines (they have slots for different modules), and a DAC is basically two permanently attached SFP+ modules joined by a cable. As far as the system is concerned, it's a network port, so no surprises there.
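For reference, attaching the built-in Windows initiator to an existing target is only a few commands. A hedged sketch driving the iSCSI PowerShell cmdlets from Python; the portal address and IQN are placeholders, and an elevated prompt is assumed:

```python
# Hedged sketch: point the built-in Windows iSCSI initiator at a target by
# driving the iSCSI PowerShell cmdlets. The portal address and IQN are
# placeholders; run from an elevated prompt against your actual target.
import subprocess

PORTAL = "192.168.1.50"                        # placeholder target IP
IQN = "iqn.2023-01.lan.storagepc:drives"       # placeholder target IQN

ps_script = f"""
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress {PORTAL}
Connect-IscsiTarget -NodeAddress {IQN} -IsPersistent $true
Get-Disk | Where-Object BusType -eq 'iSCSI'
"""

subprocess.run(["powershell", "-NoProfile", "-Command", ps_script], check=True)
```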
 
I should have mentioned that the proprietary stuff isn't all that proprietary either, as most Linux live CDs will read drives pulled from a NAS just fine. It's one of the reasons I don't mind the NAS units: even if a unit dies, the storage is still accessible, and vice versa.

The DAC SFP+ route is quite easy and much, much cheaper, like 10x cheaper. 10Gb RJ45 still commands a hefty premium, and once you hook everything up, the software setup is the same (drivers, etc).

If you can't find a case, I'd look into an external SATA solution. I forgot that I have an older, eSATA-only one of these set up for testing drives:
https://www.sansdigital.com/tr8utplusbn.html

You can basically connect this to any existing system (like your mining one), add your drives, share them and you're done. :)
 
Personally, I don't like NTFS because data recovery places don't like NTFS. The other problem I have with it is that unlike FAT, which allows lost clusters, NTFS will just 'terminate' them in order to keep the disk 'consistent' (MSFT's words, not mine), whereas with a FAT volume you can recover the clusters and the data inside them. Personally, I like NAS units now because they're just a volume to a system, and it no longer matters what file system is underneath.

Not sure what you mean about the 4GB file limit on NTFS--I've only found that to be true for FAT32.

I'm glad you mentioned iSCSI--that is another way, except that the iSCSI target is usually running some sort of 'layer' between the initiator and the drive, so the drive wouldn't be native NTFS, and again you're back into something proprietary.
 
I use file servers for just about all of my storage these days. It's really nice not having tons of mechanical drives in my main system. I have two file servers at the moment: my main file server is on 24/7 and has all my best drives in it, and my secondary file server has mostly smaller, older drives with less important data and is turned on only when needed. I have every drive in both servers assigned as a mapped network drive on my main computer. Both file servers run headless on Server 2019 (until I upgrade to Server 2022 soon), and I log in via Remote Desktop as needed.

I also like it because I now run my BitTorrent client directly on my file server instead of my desktop; since almost all the files I'm seeding are on the file server anyway, it just keeps things simpler that way. I also run a DLNA media server directly on the file server to make all its media more easily accessible to more devices.

There is really no need for expensive 10GbE or even 2.5GbE hardware. Starting with SMB 3.0, Windows will automatically combine all available network connections and use them simultaneously, even for individual file transfers. This is known as SMB Multichannel (often called multipath SMB). So you can just get a cheap 16+ port gigabit switch and a few dual- or quad-port Intel gigabit adapters, so that both your main computer and your file server have multiple gigabit connections, all connected to the same switch, and it will all just work automatically. It's flexible, too: for example, if you got a switch with 16 gigabit ports and one 10GbE port, you could hook your file server to the 10GbE port, connect 4x 1Gb connections to your main computer, and get 4x gigabit speeds. Being able to use regular old gigabit switches makes things a LOT easier than trying to fully commit to expensive 10GbE equipment all around.

An example, transferring a single file (Windows 10 ISO) from my main computer to my file server via 3x Gigabit Ethernet connections using multipath SMB:

[Attached screenshot: multipathSMB.png]
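If you want to confirm that Multichannel is actually spreading a transfer across the NICs rather than riding a single link, a quick sketch like this (querying the SmbShare cmdlets from Python on the client while a copy is in flight) can show the per-NIC connections:

```python
# Quick check from the client side that SMB Multichannel is really spreading
# traffic across the NICs while a copy to the file server is in flight.
import subprocess

ps_script = """
# NICs that SMB considers usable, with speed and RSS capability
Get-SmbClientNetworkInterface
# The per-NIC connections SMB has opened to the server during a transfer
Get-SmbMultichannelConnection
"""

subprocess.run(["powershell", "-NoProfile", "-Command", ps_script], check=True)
```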
 
Some great suggestions on that multipath SMB--I forgot how well that works. Makes for scaling nice and easy too--just add another card and plug it in. :)

The only problem I've ever run across when having a lot of drives (this was on a local computer) is that I ran out of drive letters. :oops: I always use the UNC path for my network drives, so that's not an issue, but if I mapped them all I would be out for sure, lol.
 
All of this information has been SUPER helpful. I really do appreciate it. I was looking at OMV before I checked in on all the updates; my plans are evolving a bit after reading all your posts. Thanks again for the info.

The multipath SMB approach honestly seems like a great idea, in terms of ease of setup as well, which is great. I already have a 16-port gigabit switch hooked up to my Asus AX11000, and another 8-port TP-Link sitting in my closet. I currently remote desktop into my mining rig, which also doubles as my Foundry tabletop server, when I need to access files and whatnot, so this approach wouldn't change much about how I currently use that machine, which is nice. But from what I've seen, this method is still read as a network drive, correct? Meaning any software that doesn't allow writing to a network drive will be limited? This wasn't a problem before, but I'm trying to weigh my options to see which path is the most fitting. I like this idea though; definitely putting it down as one of my solutions.

After the mentions of iSCSI, the allure of it is quite nice, considering my main PC itself will see it as a lettered drive with no network limitations placed upon it. I've used software in the past that had that limitation, although not so much right now, but it's probably bound to pop up eventually. Also, having it as a direct lettered drive would be nice for game libraries and whatnot, as long as I make sure network speeds are in order. The in-between layer is what I'm concerned with, though. If I run this method with a Windows Server install as the iSCSI server, would that server be able to read and write the files as well? Or is it a virtual disk with no way to interact with the data on the disk from the server end?
 
Glad we've been helpful. :) I've even learned (and remembered) some stuff from the great posts. (y)

Yep, multipath is still a network drive--the protocol just detects all the paths between the source and destination and uses them all in tandem automatically. :) I don't know if you've tried mapping a drive letter to get around the network-drive limitation, but that usually fools almost anything I've had to do this with.

I don't know if there is a Windows iSCSI target instance (aka server, as you put it). I do believe the initiator driver is available in Windows, so that part would be easy. As far as the server being able to access its own iSCSI volume... hmmm... my guess is that it too would need an initiator driver and would then be able to access it the same as a client, even though it is local. The abstraction layer, though, will more than likely mean the drive isn't a 'regular' NTFS volume if you remove it and put it in another system. But this is just theory for me, so it would be helpful to read some answers on these issues from experience.
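For what it's worth, Windows Server does ship an iSCSI Target Server role, so a Windows-hosted target is possible. A rough, hedged sketch of standing one up (the cmdlets come from the built-in IscsiTarget module, but every name, path, size, and initiator IP below is a placeholder) might look something like this, run on the server from an elevated prompt:

```python
# Rough shape only: install the Microsoft iSCSI Target Server role and expose
# a VHDX-backed virtual disk to the main PC's initiator. Every name, path,
# size, and IP here is a placeholder.
import subprocess

ps_script = r"""
Install-WindowsFeature FS-iSCSITarget-Server
New-IscsiServerTarget -TargetName "MainPC" -InitiatorIds "IPAddress:192.168.1.10"
New-IscsiVirtualDisk -Path "D:\iscsi\mainpc.vhdx" -SizeBytes 12TB
Add-IscsiVirtualDiskTargetMapping -TargetName "MainPC" -Path "D:\iscsi\mainpc.vhdx"
"""

subprocess.run(["powershell", "-NoProfile", "-Command", ps_script], check=True)
```

Note the VHDX sitting in the middle: that container file is the abstraction layer mentioned above, so a drive pulled out of the server would hold a VHDX rather than a bare NTFS volume.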
 
I've been doing more research as I go back and forth with what everyone has said. Both seem like the right solutions for me. A huge advantage of the SMB method is that the drives stay native NTFS in a Windows environment, so I can access the data readily if the system dies. I currently use SMB for sharing between my devices, so there's less time spent learning new stuff, which is nice... BUT... learning new stuff is fun though =P... lol. Mapping the drive will get the letter, but it's still just file-level access. I'd have to pick up a couple of quad NIC cards for the server and my main PC... although honestly, the thought of fiber and the DAC/SFP combo is appealing as well, considering the costs are quite low on used hardware for both.

I like the idea of iSCSI though; it's very appealing that my main PC would be using the drives at the block level. From what I've read online, having multiple computers access it is a big no bueno, which is fine by me, as I can just set up an SMB share to the drive on my main PC to act as an intermediary if need be, I believe, and my main PC will be handling the reading/writing of data, which shouldn't mess things up. But my only concern is whether I'll have access to it on the server itself if I ever need to get at the files on that side, or remote desktop in to access the files. Google searching isn't providing any clear info, as everything keeps talking about just the initial setup of the initiator and target.
 
The cool thing with multipath is that you could have multiple 1Gb links, then add a direct 10Gb link, and it would use all of them (to the extent that it could, as drive speeds become the bottleneck once you get above roughly 1GB/sec transfer speeds).

I've always had mixed feelings about iSCSI, and that's what's kept me from implementing it. If only a single device can access it, like a block device installed in the client system, then why not just install the device in the client system? I'm sure iSCSI and SANs make more sense in the enterprise, but I still haven't found a good reason for them in even an extensive home setup.

My gut is that any initiator that wants to access an iSCSI target will need to simply access it via the network the same way. On the server itself it might be possible to use the loopback IP of 127.0.0.1, but that would be more semantics than anything else, I think. Again, someone's personal experience would be the definitive answer, but I've recently been researching a SAN unit that is pure iSCSI, and that's essentially how a client/initiator would have to access it since it's an actual SAN unit, and I think the same approach would apply here.
 
Yeah. I'm probably going to load up Windows Server on some of the hardware I have laying around and do a test to see what the transfer numbers look like for SMB and iSCSI. Considering I have two new 14TB Easystores ready to be shucked, I've got empty drives to test with. I'm probably going to go the DAC route with SFP+ and fiber, as it seems pretty cheap on fleabay. But if I go SMB, I might go the quad NIC route, as it's just easier and requires very little setup. My only concern atm with iSCSI is data/connection loss from power outages and whatnot. Since I'm unfamiliar with it, how would it react to power loss mid-write? If the motherboard dies in the server and I swap in a new one and set up the configuration again, will my data still be there to access? If anyone has experience with iSCSI and can shed some light on these aspects, I would greatly appreciate it.
 
I'd say use the SMB multipath first, as a 4-port gigabit NIC is really cheap. Then you can add the 10Gb as you like. Or you could do what I would do and try both at the same time. :D

The iSCSI power-loss factor is something I didn't think about. On a single system/NAS it's pretty easy to mitigate with a UPS, but with the network in between, you'd want a UPS on the network gear as well, since if it goes down, it would probably be the same as a drive disappearing from a local system, with the resulting havoc. In terms of power loss after a write, it's the same 'did the cache get written / does it have a battery backup' issue faced even with local disks or RAIDs--it will depend on the hardware used. Good question on the motherboard--that's another point of failure. That's one of the reasons we use external drives for backups--very easy and universal retrieval. (y)
 
There is really no need for expensive 10GbE or even 2.5GbE hardware. Starting with SMB 3.0, Windows will automatically combine all available network connections and use them simultaneously, even for individual file transfers. This is known as SMB Multichannel...
That's pretty awesome. I wish MS would fix their network stack in other ways though! (rant: I've got a 12ms ping on a 1Gbps connection, which kills it to about 10% of its potential; even a 2ms ping cuts it by 50%!)
I've not looked into the SMB version differences except security-wise. I wonder if it's supported by third-party implementations.
 
Is there a way to make symbolic links point to a network path without needing to use a mapped drive?
 
Alright. So I got a server up and running as a test bed, with two identical, brand-new shucked drives: one set up with SMB, the other with iSCSI.

Performance-wise, the drives are performing nearly identically, although SMB transfers around 10-15 MB/s slower on unqueued 1MB sequential. It's so close that it doesn't seem to matter which route I go. But there is one peculiar aspect. This is my first time using a Windows Server environment. I got everything working correctly, but I noticed something odd about the iSCSI solution. I pasted a large file over to the SMB drive, and it copied over at a consistent 113 MB/s through and through (I'm testing on a 1Gb network atm; I'll upgrade the network with faster hardware once I build the actual server). But if I transfer to the iSCSI drive, it ZOOMS quickly to roughly 80 percent. What took roughly a minute and 30 seconds on SMB takes roughly 30 seconds on iSCSI. It shows a transfer rate of 103 MB/s, yet finishes quicker. Opening Task Manager on both machines, I notice that once the transfer window closes, network traffic is still running in the background on both machines; it ends up taking roughly the same total time as the SMB transfer would.

Is this normal? I could see this causing issues if I transfer something over and try to open the file right away. It just doesn't make sense for 8GB to be "transferred" within a few seconds on a gigabit network, then slow down to gigabit speeds for the rest... lol. Interestingly enough, I opened a video file transferred this way, and it opened and scrubbed through perfectly. RAM usage on the server did not change either. This perplexes my mind.
 
Sorry, I thought I had replied to this. I haven't looked closely into this, as my use was server-based, so you set it up and kind of just let it rip. It was over 24TB, so it was going to take time anyway. I'm thinking maybe this is due to caching?
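One way to test the caching theory from the main PC is to time a copy that forces a flush to the iSCSI volume before calling it done, instead of trusting Explorer's progress dialog. A rough sketch; the paths and drive letter are placeholders:

```python
# One way to test the caching theory: copy the file manually and fsync the
# destination handle before stopping the timer, so "done" means the data has
# actually been pushed out to the iSCSI volume. Paths are placeholders.
import os
import time

SRC = r"C:\bench\big_8gb_file.bin"     # placeholder source on the main PC
DST = r"I:\big_8gb_file.bin"           # placeholder path on the iSCSI drive

start = time.perf_counter()
with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while chunk := src.read(8 * 1024 * 1024):   # 8 MiB chunks
        dst.write(chunk)
    dst.flush()
    os.fsync(dst.fileno())   # block until the cached writes reach the disk
elapsed = time.perf_counter() - start

size_mb = os.path.getsize(SRC) / 1_000_000
print(f"{size_mb / elapsed:.0f} MB/s end to end over {elapsed:.0f} s")
```

If the flushed copy lands back around the same ~110 MB/s as the SMB share, that points at Windows write-behind caching on the "local" iSCSI disk, which would also explain why the video played back fine and why the server's RAM didn't move: the cache lives on the client side.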
 