Best option for adding more SATA ports

Everyone is new at this at some point. Frankly, I did not know much about this specific area before quarantine life. But, I had plenty of time to tinker and figure things out.

It's odd that, for as much info as there is on YouTube, coverage of this area is sparse.

If you are looking for an overview of cards, this is a good place to start. This guy also has videos on other topics (including SAS cables). He also sells HBA cards flashed into IT mode on eBay, though I find his prices a little high.


The card I have can handle 8 internal drives. Unfortunately, I don't have a recommendation for a card with both internal and external ports; I don't even know if that's a thing. For internal cards, you probably want to look for an LSI 9211-8i. They are the most cost-effective cards that meet your needs. There are three variants that most people get: the Dell H200, Dell H310, and IBM M1015. I personally have the H310. I bought it on eBay for around $50, already flashed into IT mode. There are a couple of variants of each card, and those differences are discussed in the video above.

SAS cables really are not complicated, but you can mess it up if you aren't paying attention. There are different cables for internal and external connections; the external cables are more substantial, with heavy-duty connectors and wiring. There are also forward breakout and reverse breakout cables. If you are connecting the HBA directly to the drives, you need forward breakout cables. My drives are internal and the cables connect directly to the drives, so I ordered these:
https://www.amazon.com/gp/product/B018YHS8BS/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1

Not sure which case you are using, but if you have an older case with 5.25" bays, you could consider something like this item. There seem to be some quality issues with some of the low-end vendors; this vendor seems to avoid those issues.
https://www.amazon.com/gp/product/B004IMKTX4/ref=ox_sc_saved_title_8?smid=A23NVCSO4PYH3S&psc=1

Moving on to recovery of data. With both hardware and software RAID, you can recover the data if a drive is lost. However, if you have a system-wide failure, the entire array could be lost. There are ways to work around this, and those avenues vary depending on the solution you are using. With something like DrivePool or Drive Bender, the disks can simply be plugged into another system and they will instantly be readable. That does not work with RAID solutions.

I cannot speak to your specific issues with DrivePool as I have not used it; I use Drive Bender. The two seem fairly comparable. If you search for people's suggestions on the internet, most point towards DrivePool. It seems there was a period of time when Drive Bender was not being developed, but that is no longer true; it has been consistently updated for several years now. I chose Drive Bender mostly because it seemed just a bit easier and I liked the UI much better. It also has a trial version.
https://www.division-m.com/drivebender/

Your experience with system manager showing both the physical hard drives and the pooled drive is normal. There are settings to hide the physical drives in Explorer if that is your preference. In my case, Explorer only shows the pooled drive, not the 4 other drives that contribute to that pool.

I hope this info helps. Happy to help more along your journey!

Thank you for taking the time to explain all of that!

That video you linked is actually the last video I was watching on the topic. It's a long video that I was watching while getting ready for work, so I kind of jumped around a little. I need to rewatch it.

Getting a card with all internal connectors isn't the end of the world. If it came to it, I could rig some way to turn a few of the SATA cables into external connections. I have a decent amount of space in my case. It's a Phanteks full tower, but I forget the model number at the moment. I intentionally bought a case with 5.25" bays because I wanted to add a 4K BD burner, so one of my 3x 5.25" bays has a 4K-friendly BD optical drive with custom firmware from cloner alliance that enables 4K burning. The other 2 bays are empty. The case also has 6x 3.5" bays and 4x 2.5" mounts for SSDs. At the moment, I have the top tray for 3x 3.5" drives removed and a few wires running through the area lol.

[Attached image: IMG_20200608_220714.jpg]


When you say a system-wide failure would make RAID data unrecoverable, what kind of failure are you talking about? Like if my mobo dies, or my CPU craps out, or Windows takes a shit and needs a complete rebuild, am I SOL on that data when using software RAID? That would be an unacceptable level of protection in my opinion. I need some way to be able to recover data from any type of failure, except of course incidents of physical damage or the highly unlikely multiple-drive failure. I now have everything on a UPS with surge protection, but who knows what could happen to components inside the case. Currently looking at options for expanding the storage, and I see WD Red Pro and WD Gold apparently have firmware features that help with recovery efforts. Is this all marketing hype, or could these actually help in this type of failure scenario?

I guess what I don't get about DrivePool showing both the drive and the pool is, how does it handle file transfers between the two? They'd have to somehow be locked to the same file structure in order for both to be accessible and usable, no?
 
With software RAID, you should be fine as long as the drive(s) with the metadata are functioning and the drive with the RAID configuration is intact. Theoretically, you could recover or move a RAID array where the configuration was lost or corrupted if you have a backup of the configuration and the actual array is intact, but that's not guaranteed. This is why RAID isn't considered a backup solution, and should only be used when you need minimal system downtime, or fast data access (or both).
 
Maybe I don't know enough about RAID yet. I thought RAID was more or less just a concept for spreading data across multiple drives for redundancy or speed. Like if I had a RAID 1 array, could I not just take one of those drives, connect it to a different computer, and access all the data? I get that RAID 5 spreads data at the bit level (or is it byte level?), but I always assumed RAID 1 was essentially a mirror image. Is this not true?
 
Yes, but depending on the software implementation, the configuration may be required in order to access the data. Technically, it's just two (or more) images of one virtual drive, so recovery should be simple. But whether it actually is depends on the implementation.
 
OK. Guess I need to figure out a good software solution then.
 
Decided on the LSI SAS 9211-8i. Thanks, drescherjm.
I have the 9300-8i, works but slows startup a bit, adds about 2-5 seconds. Not an issue if you rarely power down the machine.

Let us know what you think!
 
Good luck with everything. Post and let us know how it goes!
Thanks!

Been researching RAID options for Windows, and now I'm getting stuck there. Are there no alternative software RAID programs that run in Windows? From what I'm gathering, FreeNAS and Unraid basically need to be installed as their own independent OS, either on a drive or a USB stick. I imagine this would make running and managing these programs difficult, if not impossible, on a machine that needs to be running Windows 24/7/365 for surveillance.

I've seen more than enough warnings about using Storage Spaces for anything besides mirroring drives. And I can't seem to get StableBit to detect my files or fully cooperate with Windows. So that just leaves RAID in the BIOS, unless someone has a better idea. Can you even do BIOS RAID from a PCIe card, or does it need to be the SATA ports on the board?
 
I have the 9300-8i, works but slows startup a bit, adds about 2-5 seconds. Not an issue if you rarely power down the machine.

Let us know what you think!
Yeah, it's not being powered down much. I had initially set Windows to reboot itself weekly, but I think Windows somehow undid that setting. Either way, waiting a few seconds for the card to initialize is no big deal for my situation. I intend on keeping the surveillance drive on the board for now anyway, so it would only affect the Plex storage. Don't need to get my dose of The Office THAT quickly lol.
 
If you want/need to keep Windows going, you can try Drive Bender as I suggested above. Unraid might be able to do the surveillance as well; something to look at if you are interested in tinkering with it. I was looking hard at Unraid, but I ultimately decided I didn't want to spend the money on a second system, since this machine is my daily driver. I know I could have run Windows on top of Unraid in a VM, but I didn't care to delve that deep. Not sure if you can do motherboard RAID through an HBA card; my gut tells me you cannot. Even if you could, I would worry that if something happened to your mobo, you would be SOL for recovery. Another option would be SnapRAID. It is a command-line program that computes parity snapshots of your drives and stores them on a dedicated parity drive, so if you lose a drive, SnapRAID can rebuild it. I actually plan to mess with SnapRAID in the near future. I ran into a stumbling block using it in conjunction with Drive Bender; I know it can be done, I just need to mess with it more. Anyway, there are a couple of things for you to look at. Hopefully one of them will meet your needs.
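To give you an idea of how simple SnapRAID is, the config is basically just a list of your data drives plus a parity location and a few content-file paths. Something roughly like this (drive letters and paths here are made up purely for illustration; check the SnapRAID manual for the real details):

    parity P:\snapraid.parity
    content C:\snapraid\snapraid.content
    content D:\snapraid.content
    disk d1 D:\
    disk d2 E:\
    disk d3 F:\

Then you run "snapraid sync" whenever you want the parity brought up to date, and "snapraid fix" if you ever need to rebuild a lost drive.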
 
BTW, what is your use case for this system? You mentioned surveillance. What is the goal of the storage you are adding and want to put into RAID?
 
Ok, I'll look into those two options. Ideally, I'm looking for data integrity first, ease of replacing or adding drives second, capacity third, and speed last. I doubt I'd need anything beyond standard transfer rates for what I'm doing here. I liked the concept of StableBit in that it distributes whole files across the drives, so any drive can be taken out and its data is intact as-is; it's not really reliant on the software. Just sucks that it's not working for me. Maybe I'll try again with the new drives when they arrive.

To answer the question in your second post, this machine is both a surveillance PC and a Plex server. I originally built it for surveillance, but then I stumbled across Plex and got kinda sucked in lol. Plex is much more flexible, so the surveillance aspect really controls everything. The current surveillance software I use is Windows-only, I believe, but I've actually been searching for alternative software for several reasons. I haven't gotten a chance to play around with it yet, but I'm leaning towards DW Spectrum, which is available on Windows, Mac, and Linux. I don't know Linux and don't even have a Mac, so back to Windows it is again lol.

Even though the surveillance requirements control the OS and other aspects of this build, surveillance storage isn't as critical. The chance of losing my surveillance drive at the same time I need footage off it is so small it's basically zero, so I don't need redundancy on that storage, and I cache 24 hours of footage on my SSD which later gets written to disk, so speed is no issue either. I originally wanted the HBA simply to expand storage since I'm already out of SATA ports. But then I started thinking about my Plex storage, which is currently on a single WD Blue, and I thought it's probably a good idea to back that up. Not only do I not currently have any kind of redundancy, I'm running the WD Blue 24/7, which it's not designed for. Now maybe it's not spinning 24/7, but it's still powered up and generating some amount of heat, I would imagine.

I don't know that I necessarily need to have the RAID disks on the card. I could put the surveillance storage, my old Seagate 2.5" drive, and maybe the optical drive on the card instead. I want to keep the OS on the board, but I also picked up a 970 Pro, so that's going straight on the board once I get it, and I'll mirror the current OS SSD to it. Haven't decided what to do with the leftover SSD after I switch to NVMe... maybe a Plex cache drive, or maybe I'll just create a pool to extend the capacity of my other SSD.
 
If you've decided that a "server-class" HBA is really your best choice, I think you would be better off with an LSI 9207 instead of a 9211. The 9207 uses PCIe 3.0 (vs 2.0 on the 9211). That means you can get the same throughput with the 9207 in your x4 slot (x16 physical) as you would get with a 9211 in an x8 slot (which you really don't have without slowing down your graphics card). Plus, the 9207 tends to be available for a similar price as the 9211. PLUS, the 9207 is available in a -4i4e version, which gives you 4 internal and 4 external ports without screwing with adapter plates/cables, and the 9207-4i4e tends to be even cheaper than the 9207-8i.
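If you want to sanity-check the lane math yourself, here's the back-of-the-envelope version (these are the nominal per-lane spec rates after encoding overhead, not measured throughput):

    # PCIe 2.0: 5 GT/s per lane with 8b/10b encoding  -> ~500 MB/s usable per lane
    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~985 MB/s usable per lane
    pcie2_lane = 5e9 * (8 / 10) / 8 / 1e6
    pcie3_lane = 8e9 * (128 / 130) / 8 / 1e6

    print(f"9211 (PCIe 2.0) in an x8 slot: ~{8 * pcie2_lane:.0f} MB/s")  # ~4000 MB/s
    print(f"9207 (PCIe 3.0) in an x4 slot: ~{4 * pcie3_lane:.0f} MB/s")  # ~3938 MB/s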

Do have a look at SnapRAID. It's good software, runs well on Windows & Linux.
 
Damnit, seriously? I didn't know I would be restricting my GPU with the 9211.
 
So I'm looking into these 9207 cards, and they're still spec'd at 8 lanes of PCIe 3.0. This is probably a stupid question, but how do you run them on only 4 lanes?
 
Put it in a 4-lane slot, or configure an x4/x8/x16 slot as 4 lanes, if you have the option for that in the BIOS.
 
These are the PCIe specs of my board:

SLOTS
  • 3 x PCIe 3.0 x16 slots (support x16/x0/x4, x8/x8/x4 modes)
  • 3 x PCIe 3.0 x1 slots
So if my GPU is x16, then I have to use the bottom x16 slot and I'm limited to 4 lanes? I need to configure this in the BIOS? I imagine it must be configurable if it doesn't automatically select lanes itself, no?
 
Yeah, if it doesn't do it automatically, it's almost definitely in the BIOS. Some boards use jumpers for things like that, but it's rare.

From what you have posted, yeah, it looks like the bottom slot is always x4. You could verify by looking at which pins on the slots have traces running to them on the board, but there's a good chance that's the case and you won't have to configure anything.
 
So assuming it's an x4 slot, do I need to do anything to get full functionality out of the card? The PCIe 3.0 card still states x8 in its specs, so am I losing functionality/speed by running it at x4? Or do I need to find a card spec'd at x4?
 
It should function just fine as long as you don't saturate the lanes. You'd probably need a pretty serious RAID to do that, though. SATA tops out at a 6Gb (gigabit) transfer rate, and x4 PCIe 3.0 is nearly 4 GB (gigabytes) per second. 6Gb is ~750MB, so you'd need 5 drives going full tilt to saturate the bus, theoretically.
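Putting rough numbers on it (nominal line rates; real-world SATA payload rates are lower, so the margin is actually a bit bigger):

    pcie3_x4 = 4 * 8e9 * (128 / 130) / 8 / 1e6   # ~3938 MB/s usable on an x4 PCIe 3.0 link
    sata3    = 6e9 / 8 / 1e6                     # 750 MB/s raw SATA III line rate

    print(f"SATA drives at full line rate to fill the link: ~{pcie3_x4 / sata3:.1f}")  # ~5.3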
 
Oh ok, I got it now. Well, I just ordered one from Art of the Server on eBay, the exact model you suggested with 4i4e (glad to have it split for future expansion!). I guess I'll just return the other unless return shipping makes it pointless to do so. That one was only $49; this one I just ordered was $67.
 
I installed the 9207 tonight, and now I'm getting this before the MSI BIOS logo. Am I stuck waiting on this to go away every time I reboot?

[Attached image: IMG_20200620_003014.jpg]
 
Looks like you can configure it for OS-only mode, which would disable the built-in BIOS initialization. That would speed up boot, but you would need to install your OS on a drive connected directly to your motherboard, as the drives on the card would be inaccessible until the OS initializes the controller and drives.

I don't know if there are other implications to the configuration change, so I'd strongly advise reading the documentation before you proceed.
 
As it happens, I also installed a 970 Pro NVMe drive in place of my 860 Evo boot drive, so I'm about as on-the-board as I can get ha ha. In fact, I don't even have a single drive connected to the card at the moment.

Read what documentation? This thing came with absolutely nothing lol. It's an eBay card. Guess I'll try looking up a manual tomorrow or Sunday, unless someone else knows what to do.
 
It is typical for an HBA to slow down your boot due to its BIOS. Based on the description you have given of this machine, I can't imagine you boot too often. Every card is different; my boot only changed by a few seconds.
 
Yeah, I mean it's not the end of the world. I just thought it might have meant there was something wrong or something needed to be configured. If it's normal, it's kind of annoying but not that big of a deal really.
 
yep, very normal
 
Just got the drives installed tonight. I'll tell you... those SAS to SATA cables seem flimsy as shit! I bought the Cable Matters brand because I thought they were well regarded, but the cables are like thin strips of tin foil... virtually zero insulation. I'm paranoid about breaking them. They are working though; all three drives showed up right away in Windows.

Just gotta figure out what I'm going to do about redundancy. I keep going back and forth between traditional RAID 5 vs. DrivePool + SnapRAID vs. DrivePool duplication. I really don't want to run straight duplication considering these 3 drives ran me $1,100. But I also don't want to risk losing data using RAID 5 or forgetting to take snapshots in SnapRAID. Why can't there be more support for Windows :-/
 
If you're complaining about a few seconds waiting on the HBA to initialize... my 9008 takes seven to ten minutes, for reasons unknown. Firmware is from 2012. Also, that one is so old that Red Hat dropped support for it with RHEL 8; I had to go find drivers for it. Ubuntu is still good to go, though.
 
Jesus, these things should come with gigantic disclaimers on them.
 
They're cheap as shit, and they work. I'm getting >700MB/s reads across a 10Gbit network (both machines on the same Ethernet switch), with four IronWolfs in a striped mirror set run by ZFS.

Also, stop dicking around with Windows. Use ZFS. If you need data integrity and you're not worried about speed, use RAIDZ1, the ZFS single-parity (RAID5-equivalent) configuration. You can pass your drives through a hypervisor like Hyper-V (you set them offline in the host Windows OS) and run FreeNAS in a VM to manage them with almost no overhead, getting full speed from the drives and all the integrity benefits of ZFS.

And all that said, you don't need these SAS cards for ZFS. You just need the drives hooked up through some HBA, or half a dozen different cheap-ass HBAs; it doesn't matter as long as they're stable. The main reason to use these LSI cards is simplicity, or if you're hooking up to enclosures or actual SAS drives.
 
I don't have a choice with the OS. This machine is also running my surveillance software, which is Windows-only, and I've been using it as a workstation since the pandemic started. No idea what implications switching would have for my company's software that I need for remoting in; I imagine they do not have anything configured for Linux. Which leads me to my last point, which is that I know nothing about Linux OSs or virtualization. It would take me a bit of time to learn everything, and I cannot afford that downtime. I realize Windows isn't ideal for this type of situation, but there's nothing I can really do at this point.

The whole reason I'm using this SAS card is because everyone here said it's the best option when you're out of sata ports on the board like I am. They said it's far superior to cheap sata cards.
 
Which leads me to my last point, which is that I know nothing about Linux OSs or virtualization. It would take me a bit of time to learn everything, and I cannot afford that downtime. I realize Windows isn't ideal for this type of situation, but there's nothing I can really do at this point.
That I understand -- mine is a personal machine, so it may go six months or six hours between operating systems. The main reason I mention something like FreeNAS in a VM is because it can be done without changing the OS; it's not very hard, but there is a slight learning curve of course. Main reason I mention ZFS is that it's the gold standard with respect to data integrity. Maybe they'll get it running on Windows someday, since Microsoft seems to have abandoned serious development of ReFS, which would be their solution.

However, I also understand that 'best' isn't always 'right' for right now. I'd honestly recommend Storage Spaces, using mirrored drives to provide data integrity.
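If you go that route, the setup is only a few PowerShell commands (or the Storage Spaces control panel). A rough sketch, driven from Python so it can be scripted; the pool and volume names are placeholders I made up, so double-check the cmdlets against Microsoft's docs before running anything, and note you'd still have to initialize and format the resulting volume:

    import subprocess

    # Pool every disk Windows considers poolable, then carve out a 2-way mirror.
    ps_script = '''
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "PlexPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "PlexPool" -FriendlyName "PlexMirror" -ResiliencySettingName Mirror -UseMaximumSize
    '''
    subprocess.run(["powershell", "-NoProfile", "-Command", ps_script], check=True)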

The whole reason I'm using this SAS card is because everyone here said it's the best option when you're out of sata ports on the board like I am. They said it's far superior to cheap sata cards.
I'd say that that's a matter of perspective. It's mostly true, all things considered, but in this case where the card is simply being used to provide more ports, there's essentially no difference in terms of result. I think the main thing is that they're the cheapest option to provide >4 ports, but if you just need four more ports, then the four-port HBAs available do just fine.
 
Oh, I misunderstood what you were saying. I thought you meant to run FreeNAS as the base OS and run Windows in a VM. If it's the other way around, maybe it can work. I literally know nothing about virtualization though, so I couldn't even say for sure if it would work in my situation.

I only really need to run my WD Golds in a redundant array for Plex. I don't need any other drives in parity arrays. I will likely just back up my SSDs, OS NVMe, and anything else on my WD Blue using folder mirroring in DrivePool or Drive Bender. So as long as I could configure a VM to run the drives in the background while managing Plex and file transfers in Windows, I would think it would be fine. Again, I know nothing about virtualization, including the fundamentals of how it works, so what I just said might not even make any sense lol

Although I haven't ruled out Storage Spaces, I am pretty concerned about all the issues with it, especially in the 2004 build. The last thing I want is to lose my data while trying to protect my data from loss. Speed isn't that critical considering Plex ran relatively fine on a 5400rpm drive and these WD Golds are 7200rpm. From my understanding Storage Spaces isn't the best for speed, but hopefully that would be a non-issue in my case. The possibility of having actual RAID 5, in particular the additional storage capacity over mirroring, is really the only thing that's attractive about Storage Spaces. SnapRAID seems to be the only other option for RAID 5, but I really hate the fact that it doesn't do real-time syncing. Knowing myself, I'll forget to sync for like a month or more and then lose a bunch of data.
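For a concrete sense of that capacity difference (hypothetical drive sizes, just to illustrate the trade-off):

    # Usable space from N equal drives of size T, ignoring filesystem overhead
    n, t = 3, 10  # e.g. three 10 TB drives (made-up sizes)
    print("Single parity (RAID 5 / one SnapRAID parity drive):", (n - 1) * t, "TB usable")  # 20 TB
    print("2x duplication / mirroring:", n * t // 2, "TB usable")                           # 15 TB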

Do you know what happens to Storage Spaces arrays when you, say, rebuild Windows? Is that data lost? What if it's an emergency rebuild, like in the case of a bad virus or ransomware attack? Presumably you cannot simply plug the drives into a fresh build and read the files directly like in DrivePool, but can you at least reconfigure the array without wiping the data? I guess I can't add drives at any time either, which sucks.

Given the price of the SAS cards, it's hard not to justify buying one over a SATA card. The boot delay really isn't that big of a deal. I've got 3 drives on it now and have room for 1 more internal drive, and then I can further expand with the external SAS port. My case is pretty freaking full at the moment, so I am glad I have the external port on this card.
 
So as long as I could configure a VM to run the drives in the background while managing Plex and file transfers in Windows, I would think it would be fine.
Well, I did just that -- FreeNAS in a Hyper-V VM. You take the drives you want to use offline in Disk Management in Windows, then add them to the VM settings, and FreeNAS sees them directly. FreeNAS gets its own IP, you set up SMB/Samba (same thing, different names) on FreeNAS, and then you can map the shares as network drives in Windows.
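For what it's worth, the disk handoff can also be scripted. A minimal sketch using Hyper-V's PowerShell cmdlets, assuming the VM is already created and named "FreeNAS" and that disk number 2 is one of the data drives (both are placeholders; check Disk Management or Get-Disk for your real disk numbers):

    import subprocess

    def ps(cmd: str) -> None:
        # Run a PowerShell command from Python (needs an elevated session).
        subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)

    # 1. Take the physical disk offline in the host, same as right-clicking it
    #    in Disk Management and choosing Offline.
    ps("Set-Disk -Number 2 -IsOffline $true")

    # 2. Attach the raw disk to the VM's SCSI controller so the guest sees it directly.
    ps('Add-VMHardDiskDrive -VMName "FreeNAS" -ControllerType SCSI -DiskNumber 2')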

The possibility of having actual raid 5, in particular having the additional storage capacity over mirroring, is really the only thing that's attractive about storage spaces. Snapraid seems to be the only other option for raid 5, but I really hate the fact that it doesn't do real time syncing. Knowing myself, I'll forget to sync for like a month or more then lose a bunch of data.
ZFS does this the best, I'd say, outside of a relatively expensive enterprise SAN solution. RAID5 is single-parity, meaning that you dedicate one drive worth of space to parity. For ZFS, this is called RAIDZ1. FreeNAS will do all the work for you if you pass the drives to it.

Bonus: OS shits the bed, FreeNAS VM shits the bed... your data, and all the data needed to reconstruct it in the event of a failure, is still on your drives and accessible by ZFS. Just need an OS (could be a USB booted image!) with the same version of ZFS or later and your data is all there.

As for performance... Plex needs basically none. Unless you're streaming to like 20 clients, then yeah, you're going to need more, but two or three? Not a big deal bandwidth wise.

Do you know what happens with storage spaces arrays when you say, rebuild Windows? Is that data lost? What if it's an emergency rebuild like in the case of a bad virus or ransomware attack? Presumably you cannot simply plug in the drives to a fresh build and read the files directly like in drivepool, but can you at least reconfigure the raid array without wiping the data? I guess i can't add drives at any time either, which sucks.
Storage Spaces isn't terrible, it's just not received a whole lot of attention from Microsoft, and its feature list (including planned features) reads like the feature list for ZFS a decade ago. It does work, and it's even somewhat portable; it's just not something I'd want to rely on for important data.

Given the price of the SAS cards, it's hard not to justify buying one over a sata card. The boot delay really isn't that big of deal. I've got 3 drives on it now and have room for 1 more internal drive. Then i can further expand with the external sas port. My case is pretty freaking full at the moment, so I am glad i have the external port on this card.
Price -- especially for used server pulls -- is the real attraction. There are plenty of downsides that just don't matter in many cases; a prominent one is that they all want 8 PCIe lanes, and all of the affordable ones are PCIe 2.0. That means that using a GPU in such a system will be limited if you don't already have extra lanes, say in an HEDT setup or one of the newest desktop chipsets.
 
I'll try looking into the FreeNAS-in-a-VM option you mentioned, but I kind of suspect it's going to take me a while to learn everything necessary to get something like that set up. Like, what is SMB/Samba? No idea what that even means lol. Hyper-V VM -- is that different from a "regular" VM? I just built my first computer with the help of my brother-in-law, so that's the level of skill I'm at now ha ha

The whole point of running a RAID array or any other mirroring/duplication is data integrity, so if Windows Storage Spaces cannot be relied upon for data integrity then it's something I should probably avoid.

The SAS HBA I got is in fact PCIe 3.0 (LSI 9207-4i4e). I actually initially ordered a 2.0 card per my earlier posts, but then decided on a 3.0 card per comments from UhClem and Nobu a few posts up. I still don't fully understand the concept of PCIe lanes, but UhClem had indicated this 9207 card could run on 4 lanes with the same bandwidth as the 9211 on 8 lanes. I do know that each version of the PCIe specification has doubled in speed, so the math makes sense. But what I don't know is whether that means I'm capped at using only 1 of the 2 SAS ports on the card when running on only 4 lanes, or if it's load-dependent. My 2060 KO is taking 16 lanes, and the only other PCIe card I have is a Rosewill USB 3.0 x1 card. My 3rd x16 slot is empty. If my understanding of my board's specs is correct, I should be able to keep this HBA in the bottom x16 slot to force it onto 4 lanes. If I moved it to the middle x16 slot, it would/could run on 8 lanes, which would limit my GPU to 8 lanes. This is all pretty new to me, so I'm not sure I really get the lane/bandwidth allocation aspect of PCIe yet.
 
You can set SnapRAID to run via Task Scheduler. It will run each day if that is what you want.
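For example, the scheduled task could just point at a little wrapper script like this (the paths are assumptions for illustration; adjust them to wherever you installed SnapRAID):

    import subprocess
    from datetime import datetime

    SNAPRAID = r"C:\snapraid\snapraid.exe"   # assumed install location
    LOG = r"C:\snapraid\sync.log"            # assumed log location

    with open(LOG, "a") as log:
        log.write(f"--- sync started {datetime.now()} ---\n")
        # 'snapraid sync' brings the parity up to date with the current data drives.
        subprocess.run([SNAPRAID, "sync"], stdout=log, stderr=log)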
 
For real-time protection of data (it's not a backup):

You have ZFS (e.g. FreeNAS, which is soon to be TrueNAS), but that is more ideal for a dedicated system than a main PC, as it's meant for NAS use.

If you want to stay with Windows on only one PC but want real-time redundancy, then get a SAS RAID card (with a battery if you want write-back enabled) instead of a SAS HBA card. You can get them quite cheap, and that will allow you to use RAID 6 with the card doing all the hard work. Use the LSI MegaRAID software to manage it, so you can set up times for it to run patrol reads and consistency checks (a common suggestion is weekly for patrol read and once a month for a consistency check). If you do go with a RAID card, make sure you buy 2 of them (just in case, in the rare event one card fails).

(Your GPU will be fine at x8 speed; it really does not need x16.)
 
You can set SnapRAID to run via Task Scheduler. It will run each day if that is what you want.
Is there a limit to how often it can be run? Can I set it to run hourly, or every 3-4 hours, or would that be detrimental to performance?
 