Best option for adding more SATA ports

This is exactly what I just did tonight. I wiped out my VM and reformatted the drives, then switched over to DrivePool + SnapRAID. It was very easy to set up. I did a sync and a scrub on some files as a test, then created a registry key to hide the original drives in Explorer. I haven't automated syncs or scrubs yet, but I plan to.
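For reference, hiding drive letters in Explorer is done via the NoDrives policy value (a documented REG_DWORD bitmask under the Explorer policies key). Here's a rough sketch that computes the mask for a given set of letters and prints the corresponding reg command; the U/V letters are just an example for the setup described above, and you'd still run the printed command yourself:

```python
# Sketch: compute the NoDrives bitmask Explorer uses to hide drive letters.
# NoDrives lives under HKCU\Software\Microsoft\Windows\CurrentVersion\
# Policies\Explorer; bit 0 = A:, bit 1 = B:, ..., bit 25 = Z:.

def no_drives_mask(letters):
    """Return the NoDrives bitmask for an iterable of drive letters."""
    mask = 0
    for letter in letters:
        mask |= 1 << (ord(letter.upper()) - ord("A"))
    return mask

mask = no_drives_mask(["U", "V"])  # 0x300000 for U: and V:
print(f"reg add HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer "
      f"/v NoDrives /t REG_DWORD /d {mask} /f")
```

Explorer needs a restart (or log off/on) before the change takes effect.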
You don't have to hide the drives via the registry; just don't assign them drive letters, or remove the letters in Disk Management (DrivePool accesses the NTFS volumes directly, not via drive letters). Make a folder on the C: drive called drivepool and mount all your drives as folders inside it, so you can run SnapRAID against each disk rather than just the pool; running SnapRAID against the DrivePool letter won't check the duplicated copies.

https://wiki.covecube.com/StableBit_DrivePool_Q4822624
 
SnapRAID requires drive letters in the config file, so it's got nothing to do with DrivePool. And I didn't have to hide the letters; I just did it so it's cleaner and less cluttered. I have SnapRAID configured at the drive level, not the pool or folder level. I don't even know if you can configure it at the pool level.

My exact configuration: drives U and V are the data drives in the pool and are also listed as data drives in the SnapRAID config file. Drive P is the parity drive for SnapRAID and therefore isn't used by DrivePool at all. Drive F is the pool of U and V, but it isn't referenced in the SnapRAID config file. I hid U and V, and may hide P later since I have no real use for the parity drive being displayed.
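That layout maps to a snapraid.conf along these lines. The exact file locations are illustrative, but the shape matches the description above: only U: and V: appear as data disks, P: holds the parity file, and the pool letter F: is never mentioned:

```
# snapraid.conf sketch for the setup described above (paths illustrative)
parity P:\snapraid.parity

content C:\snapraid\snapraid.content
content U:\snapraid.content
content V:\snapraid.content

data d1 U:\
data d2 V:\
```

Running `snapraid sync` then `snapraid scrub` against this file checks the individual disks, which is exactly why the pool letter stays out of it.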
 


If you plan to go really big, this is a good start. You can eventually get a retired SAS expander board (ex: https://www.ebay.com/itm/NEW-IBM-46...864157&hash=item213c7914cd:g:JSoAAOSwD31ayGAc ) for about $30 to pair with it, bringing the total number of ports to 24 [1x 4, 1x 20]. That's how I have my home server set up. If you do two SAS expander boards, it can be 40 [2x 20], I believe.



The expander boards work wonders if you're doing a Just a Bunch Of Disks setup, as you can mount the board in the JBOD box and have a single thick cable linking the server with the box. The expanders only use the PCIe slot for power, so I used an old mining PCIe extender to inject power from the JBOD PSU.

One note: the expanders have a total bandwidth capacity that is shared. That model is 6 Gbps per port (~96 Gbps total, IIRC) when set in normal mode (2x inputs from an HBA card, leaving 4x outputs [4x4 = 16 ports]), or it can operate with one of the inputs as an extra output (1x input, 5x outputs [5x4 = 20 ports]).
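The port arithmetic above works out as follows; this is just a quick sketch using the per-lane link rate from the post (each SFF-8087 port carries 4 lanes):

```python
# Port/bandwidth arithmetic for the two expander modes described above.
LANES_PER_PORT = 4   # each SAS wide port carries 4 lanes
GBPS_PER_LANE = 6    # SAS2 link rate per lane

# Normal mode: 2 input ports to the HBA, 4 output ports for drives.
normal_drive_lanes = 4 * LANES_PER_PORT   # 16 drive-facing lanes
# Alternate mode: 1 input port, 5 output ports.
alt_drive_lanes = 5 * LANES_PER_PORT      # 20 drive-facing lanes

print(normal_drive_lanes, normal_drive_lanes * GBPS_PER_LANE)  # 16 96
print(alt_drive_lanes, alt_drive_lanes * GBPS_PER_LANE)        # 20 120
```

Note the uplink to the HBA is the real ceiling: 2 input ports is 8 lanes (48 Gbps shared), and dropping to 1 input halves that in exchange for the extra 4 drive ports.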


I replaced all my home stuff with a single powerful server (a retired Dell from eBay) that runs XCP-ng (virtualization). It takes longer to set up, but you can then virtualize/containerize (Docker, etc.) all your home services (Plex, file services, development stuff, IoT stuff, cameras, etc.). It makes updating and rollbacks painless.
 
As an eBay Associate, HardForum may earn from qualifying purchases.
I think that's a ways down the line for me. I can't afford that size expansion, ha ha. Once I get all my data transferred from my Blue to my Gold array, I'll be at less than 6 TB/20 TB, so I've got some time.
 
PCIe slots are wired straight to the motherboard or CPU; PCIe to SATA add-in cards are quite common and the traditional way to add more storage.
PCIe slots are just that: an extremely fast interface to add input/output to a motherboard.
 
If this was in reply to the OP, I got this figured out a while ago, lol. Thanks though 😁
 
I assume that DrivePool works the same as Drive Bender: you do not have to have drive letters assigned. You can point SnapRAID at the drives' mount points instead.
Snapraid requires drive letters in the config file, so it's got nothing to do with drivepool. And I didn't have to hide the letters, i just did it so it's cleaner and less cluttered. I have snapraid configured at the drive level, not pool or folder levels. I don't even know if you can configure it at the pool level.

My exact configuration is drives U & V are the data drives in the pool and are also listed as data drives in the snapraid config file. Drive P is parity drive for snapraid and therefore is not used by drivepool whatsoever. Drive F is the pool of U & V, but it's not called in the snapraid config file at all. I hid U & V and may hide P later since I really have no use for the parity drive being displayed.
Instead of drive letters, you can point SnapRAID at mount points for your drives. If you point SnapRAID at the folder mount points as suggested in the previous post, you don't have to mess with the registry stuff.
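Concretely, if the disks are mounted as NTFS folder mount points (e.g. under C:\drivepool\, assigned in Disk Management) rather than letters, the config can reference those paths directly. The folder names here are hypothetical:

```
# snapraid.conf sketch using NTFS folder mount points (names hypothetical)
parity C:\drivepool\parity1\snapraid.parity

content C:\snapraid\snapraid.content
content C:\drivepool\disk1\snapraid.content
content C:\drivepool\disk2\snapraid.content

data d1 C:\drivepool\disk1\
data d2 C:\drivepool\disk2\
```

With no letters assigned at all, there's nothing for Explorer to show, so no registry hiding is needed.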
 
I'll keep that noted, but the registry thing works fine for me. Yes, it's my understanding that Drive Bender works the same as DrivePool; I just went with DrivePool because it seems to be more widely used. All is good with my setup now. Everything is up and running and I got my files copied over.
 
Awesome News! Glad you found a solution that works for you.
 