NAS/SAN/Plex server redesign

I use four 8TB USB pulls in RAIDZ1 (RAID5) to back up four 6TB IronWolfs in RAID10 (striped mirrors), still on ZFS...

Still getting ~750MB/s sequential across 10GbE. I had planned to add a few drives to that just to get performance up to 1GB/s, but I simply have no use for the space (yet).
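
In zpool terms the layout looks roughly like this (device names are placeholders, and send/recv is just one way to handle the backup side):

zpool create tank mirror sda sdb mirror sdc sdd    # striped mirrors (ZFS's take on RAID10) on the 6TB IronWolfs
zpool create backup raidz1 sde sdf sdg sdh         # single RAIDZ1 vdev on the four 8TB USB pulls
zfs snapshot -r tank@nightly                       # snapshot the whole pool...
zfs send -R tank@nightly | zfs recv -F backup/tank # ...and replicate it onto the backup pool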

I don't think hardware controllers would be helpful in my case, though at work we use them for two-drive mirrors... ...because...
I'm jealous; I've been scoping 10GbE, but honestly can't justify the cost for my house. I'm hoping it starts coming down in price, because it's my biggest bottleneck at the moment. I can get 1GB/s from my drives, but 1Gb throughout my network only nets me like 100MB/s. I do have quad Ethernet ports but only a 4-port router, so I can't bond all of them, and even if I did I'd still only have 1Gb to a single endpoint.

750MB/s for 4 drives is good. I have 6 drives and was going to do 8, but 2 were from different manufacturers and slightly slower. Sadly my server's onboard is SATA II, so my SSD only gets like 230MB/s, but it has great random I/O. I put things like my Plex transcoding on the SSD, large movies/files on the RAID, etc. I have backups of my programming stuff, but for all my movies I still have the original discs, so none of it is critical in any fashion. Like I said, it was mostly just to play and learn, and... why not? I have a dual Xeon with 96GB of RAM. The least I can do is use a fast RAID.
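
Back on the network point: a quick iperf3 run between two boxes shows the ceiling (hostname here is just a placeholder):

iperf3 -s                   # on the server
iperf3 -c nas.local -P 4    # on the client, four parallel streams

A single 1GbE link tops out around 940Mb/s of goodput, call it 110-115MB/s, no matter how many streams you throw at it, which lines up with what I'm seeing.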
It doesn't sound like you'd gain much from hardware RAID. Me, I'm looking into a larger rack mount that can hold a bunch of drives, and switching to an external RAID card. It wouldn't make sense to do all that and then run to each disk individually just to use software RAID. The upgraded card I'm scoping also has battery backup, faster I/O, and 1GB of onboard RAM. Seems a waste not to use any of it :).
 
Well, I decided to go with two 24-port RAID cards with battery backup and VMware, running on a 1950X with some ECC RAM. Hardware incoming from every which way.
Sounds awesome. Which RAID card and what kind of bay did you end up with? My server can hold eight 2.5" drives, but I've been looking into external RAID with something like a 12-bay 3.5" rack. Still not completely settled, so any info/feedback would be great.
 
Sounds awesome. Which RAID card and what kind of bay did you end up with? My server can hold eight 2.5" drives, but I've been looking into external RAID with something like a 12-bay 3.5" rack. Still not completely settled, so any info/feedback would be great.

ASR-72405 cards.

I actually prefer Supermicro chassis. I have 12-, 16-, and 24-bay chassis, all with 3.5" hot-swap SAS backplanes.

Can't give feedback on the cards yet, but I expect they'll work just fine.

Edit: I used the 12-bay at first. It worked very well, but I was using a dual Xeon setup. Power hungry. Then I moved up to the 16-bay and switched to a 3570K-based consumer-grade system. By comparison, very energy efficient, but not as resilient as the other rig.

Now, moving to a Threadripper, because it has a boatload of lanes and can transcode.

I will say, before this I was running FreeBSD/FreeNAS. Putting a Plex server and storage on the same machine without them being virtualized was a mistake. When it worked, it was great. When it messed up, it was a real pain in the ass.
 
I'm jealous; I've been scoping 10GbE, but honestly can't justify the cost for my house.

Yup - there are cheaper/used SFP+ switches, but then you get to run fiber; the 10GBase-T switches all start around US$600, which is where I got on board.

Doing it again, given the limited scope of applications, I'd probably just do fiber.

Now, moving to a Threadripper, because it has a boatload of lanes and can transcode.

The lanes I get, and it can transcode, but why not use a GPU for that? Just asking :)

I will say, before this I was running FreeBSD/FreeNAS. Putting a Plex server and storage on the same machine without them being virtualized was a mistake. When it worked, it was great. When it messed up, it was a real pain in the ass.

I'd tried it both ways; with my simpler setup (eight spinners for storage, an SSD for the OS, and a 7600K-based system), running everything on CentOS 7 (we use it at work) gets along fairly well. If I were to start streaming externally, or increase storage more, I'd probably work out more logical, if not physical, separation.

That said, going back to ZFS: I was able to go between four or five OS options, including running FreeBSD in Hyper-V on Server 2016, without breaking the two pools.
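
For anyone following along, hopping a pool between OSes is basically just export and import (pool name is a placeholder):

zpool export tank    # on the outgoing OS
zpool import         # on the new OS: lists the pools it can see
zpool import tank

The thing to watch is feature flags; a pool upgraded on a newer ZFS may refuse to import on an older implementation, so it pays to hold off on zpool upgrade while you're still experimenting.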

As I'm also doing this to learn, with ZFS not being something used at work (well, I assume it was used, we tossed some Sun hardware last year), that's been invaluable.

[as a further aside, I'm kind of annoyed at the licensing Sun used for ZFS; that's precluded it being mainlined into the Linux kernel, and now we have the likes of Red Hat hacking some other solution together...]
 
The lanes I get, and it can transcode, but why not use a GPU for that? Just asking :)



I'd tried it both ways; with my simpler setup (eight spinners for storage, an SSD for the OS, and a 7600K-based system), running everything on CentOS 7 (we use it at work) gets along fairly well. If I were to start streaming externally, or increase storage more, I'd probably work out more logical, if not physical, separation.

Without opening a can of debate worms, I'll say that I perceive an image quality difference between CPU and hardware-accelerated transcoding. I have GPUs not in use, but it's just not my preference at this time.
 
Without opening a can of debate worms, I'll say that I perceive an image quality difference between CPU and hardware-accelerated transcoding. I have GPUs not in use, but it's just not my preference at this time.

Honestly, to my knowledge that's the only 'good' reason :).
 
ASR-72405 cards.

I actually prefer Supermicro chassis. I have 12-, 16-, and 24-bay chassis, all with 3.5" hot-swap SAS backplanes.

Can't give feedback on the cards yet, but I expect they'll work just fine.

Edit: I used the 12-bay at first. It worked very well, but I was using a dual Xeon setup. Power hungry. Then I moved up to the 16-bay and switched to a 3570K-based consumer-grade system. By comparison, very energy efficient, but not as resilient as the other rig.

Now, moving to a Threadripper, because it has a boatload of lanes and can transcode.

I will say, before this I was running FreeBSD/FreeNAS. Putting a Plex server and storage on the same machine without them being virtualized was a mistake. When it worked, it was great. When it messed up, it was a real pain in the ass.
I've been running everything virtualized from the start. I'm currently using Docker, having switched from Proxmox about a year ago. Mine's an older dual Xeon box that's pretty power hungry (Dell R710), but it works well for me. I'll eventually upgrade, but haven't really felt compelled. Thanks for the info on the hardware; I'll look into some of those chassis.
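
For the curious, Plex under Docker is basically a one-liner. This is the general shape using the official plexinc/pms-docker image (paths, timezone, and claim token are placeholders; check the image docs for the exact mounts):

docker run -d --name plex --network=host \
  -e TZ=America/New_York -e PLEX_CLAIM=claim-XXXX \
  -v /srv/plex/config:/config \
  -v /srv/plex/transcode:/transcode \
  -v /srv/media:/data \
  plexinc/pms-docker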
 
Without opening a can of debate worms, I'll say that I perceive an image quality difference between CPU and hardware-accelerated transcoding. I have GPUs not in use, but it's just not my preference at this time.
Yeah, as far as I know that's not really a debate; it's pretty well known that CPU transcoding has better quality. That being said, newer hardware is getting closer and the perceived difference may be more or less acceptable to some, so whatever you prefer. I run CPU transcodes because I have old crappy Xeons that don't have hardware encoding, and most of my transcodes are 720p anyway, so it's not too hard on the server.
 
Proxmox is a hypervisor first with Docker capabilities, so yes, a Linux install with Docker added has some benefit there. My use case is the opposite: I like having the separation of real VMs and only minimal (currently zero) use of Docker.

ZFS vs. HW RAID is the same. My move from HW RAID with battery backup was simply because ZFS is cheaper to implement when you want more than about 8 drives. With HW RAID you start needing to look at large HW RAID cards or expanders, with the headaches included. With ZFS it's pretty minimal to just toss another 8-drive HBA into the server, add more drives to the pool, and sail on.
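
To give a sense of scale, growing a pool after tossing in another HBA is about this much work (vdev layout and device names are just an example, not a recommendation):

zpool add tank raidz2 sdi sdj sdk sdl sdm sdn sdo sdp   # new 8-disk vdev gets striped into the existing pool
zpool status tank                                       # confirm the new vdev is online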

One isn't just 'better'; the trade-offs lean toward what you want out of the install, whether that's ZFS vs. HW RAID or hypervisor vs. Docker.
 
Without opening a can of debate worms, I'll say that I perceive an image quality difference between CPU and hardware-accelerated transcoding. I have GPUs not in use, but it's just not my preference at this time.

I do a lot of ffmpeg work on the side. I see differences with CPU vs. hardware transcoding. For me, it's like anti-aliasing isn't working the same or something. Hard to explain. The only thing I can think of (and I haven't messed with it much) is that some of the hardware-specific tunings are generic enough to work on all hardware encoders, but need a bit of tweaking to take full advantage of my graphics card. I just gotta read this closer: https://trac.ffmpeg.org/wiki/HWAccelIntro
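
A side-by-side is easy enough to generate if anyone wants to pixel-peep; something like this, with the exact flags depending on your ffmpeg build and card:

ffmpeg -i input.mkv -c:v libx264 -preset medium -crf 20 -c:a copy out_cpu.mkv    # software encode on the CPU
ffmpeg -i input.mkv -c:v h264_nvenc -preset slow -cq 20 -c:a copy out_gpu.mkv    # hardware encode on an NVIDIA card via NVENC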
 
So, by way of update:


As I took the server board and so on out of my 24-bay Supermicro chassis, I discovered that unlike the backplanes in the 16- and 12-bay chassis, this one only has three connectors on the backplane for all 24 bays. The others all had one connector for every 4 drives. This is a very interesting development, and hopefully the backplane is like an expander card itself. Otherwise, I've got some rework to do.

Man.

Any insight as to how this backplane functions would be appreciated. CSE-846 chassis.
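
Side note: once it's cabled to a SAS HBA under Linux, an expander backplane should also show up in lsscsi as an extra enclosure entry next to the disks, assuming its SES management is working:

lsscsi    # look for an 'enclosu' device alongside the drives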
 
Three multilane ports means you do indeed have a SAS expander backplane. If you aren't familiar with SAS expanders, think of them like USB hubs: connect 4 or 8 lanes (i.e., 1 or 2 cables) from your HBA and all 24 drives wind up attached. Many 24-port cards are internally just 8 ports with a SAS expander on the same PCB.

Having the SAS expander backplane means you don't have to buy an expensive 24-port HBA. It also makes cabling a whole lot easier. I think most people prefer it when they're not running all SSDs. Just remember that you have to use a SAS HBA; a SATA controller will not work (you can still use SATA drives, however).
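
To put rough numbers on the all-SSD caveat: a SAS2 expander fed by two 4-lane cables has 8 x 6Gb/s of uplink, which after encoding overhead is about 600MB/s per lane, so roughly 4.8GB/s shared across all 24 bays. That's fine for spinners, but two dozen SATA SSDs at ~500MB/s each would oversubscribe it badly. (That assumes a SAS2 backplane; the older SAS1 versions halve those numbers.)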
 
Three multilane ports means you do indeed have a SAS expander backplane. If you aren't familiar with SAS expanders, think of them like USB hubs: connect 4 or 8 lanes (i.e., 1 or 2 cables) from your HBA and all 24 drives wind up attached. Many 24-port cards are internally just 8 ports with a SAS expander on the same PCB.

Having the SAS expander backplane means you don't have to buy an expensive 24-port HBA. It also makes cabling a whole lot easier. I think most people prefer it when they're not running all SSDs. Just remember that you have to use a SAS HBA; a SATA controller will not work (you can still use SATA drives, however).
I figured this to be the case. I am familiar with expanders, though I haven't used them.

I already bought 24-port controllers. Oh well, I should've looked first. I can run the extra ports to a... whatever the external SAS cable is, run them down to my other box, and use the 12-drive array there. I've gotta figure out if the backplane there needs some kind of initiation signal to power on, and if so, how to satisfy it.
 
I figured this to be the case. I am familiar with expanders, though I haven't used them.

I already bought 24-port controllers. Oh well, I should've looked first. I can run the extra ports to a... whatever the external SAS cable is, run them down to my other box, and use the 12-drive array there. I've gotta figure out if the backplane there needs some kind of initiation signal to power on, and if so, how to satisfy it.
The idea is that you use the 3rd port on the backplane to connect another backplane (assuming they're all expanders), though running it from your HBA will work too.

Supermicro makes a power board for disk-only chassis. Look up the CSE-PTJBOD-CB1 and newer revisions.
 
The idea is that you use the 3rd port on the backplane to connect another backplane (assuming they're all expanders), though running it from your HBA will work too.

Supermicro makes a power board for disk-only chassis. Look up the CSE-PTJBOD-CB1 and newer revisions.

That actually saved me a ton of research. Thanks for that.

So, after further research on the backplane, I think I will need to replace it. It appears to be a SAS836EL1, which I guess has a drive size limit. That's not going to work for me.

BPN-SAS-846TQ has individual passthrough for each drive.
 
The 846EL1 does not have any issues with drive size; that was only the original 846E1. It's pretty easy to tell the difference: the newer one has only solid-state capacitors, and the older one has electrolytic.
 
Ah, sorry, slight confusion on my part between the chassis naming scheme and the backplane naming. If you have a sticker that says SAS2, or all solid-state caps, then you do have the newer version. Just look for any electrolytic caps; that's the easiest way to tell.
 
You'll want a SAS2 or A-type backplane then. Fortunately, these days they're a lot cheaper than they used to be.
 