DEAD WD Easystore 14TB $189.99

gawkgawk

Limp Gawd
Joined
May 29, 2015
Messages
246
I shoot film on a cinema camera.
1 hour of ProRes RAW is 1 TB in Cinema 4K at 23.98 FPS. If I shoot 6K 23.98 FPS it's about 40 minutes per TB, and Cinema 4K 59.94 FPS is around 50 minutes per TB.
ProRes 422 (the standard, minimally compressed lossy flavor, not LT or HQ) gets about 5 hours per TB for Cinema 4K downsampled from 6K at 23.98 FPS, and about 2.5 hours per TB for Cinema 4K 59.94 FPS.

Shooting anything in RAW takes boatloads of data. Hollywood films are literally hundreds of terabytes to film. Even ProRes 422 or HQ shreds drives.

To say that I am going to need a personal server in short order is just an obvious truth. It's more or less near the top of my list of things I need to buy. Hence my frustration that there aren't 20TB drives or higher in terms of density. Even buying and building a 14-drive RAID 10 array out of theoretical 20TB drives would only net 140 TB of usable space, which again would only be 140 hours of ProRes RAW, not including any other data. And frankly that isn't a lot of recording time.
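The capacity math in that paragraph can be sketched in a few lines. This is a minimal back-of-the-envelope script; the ~1 TB/hour rate is the poster's ProRes RAW figure, and the 50% usable fraction is simply how RAID 10 mirroring works:

```python
def raid10_usable_tb(drive_count, drive_tb):
    """RAID 10 mirrors every drive, so usable space is half the raw total."""
    return drive_count * drive_tb / 2

def recording_hours(usable_tb, tb_per_hour=1.0):
    """Hours of footage that fit, at a given data rate (TB per hour)."""
    return usable_tb / tb_per_hour

usable = raid10_usable_tb(14, 20)    # 14 hypothetical 20 TB drives
print(usable)                        # 140.0 TB usable
print(recording_hours(usable))       # 140.0 hours of ProRes RAW at 1 TB/h
```

Swapping in the 6K rate (~1.5 TB/hour, i.e. 40 minutes per TB) shrinks that to roughly 93 hours, which is the crux of the complaint.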
I'd be frustrated too. When you do anything on a pro level like cinema quality recording you definitely have to PAY TO PLAY.....
 

nwrtarget

Gawd
Joined
Aug 10, 2010
Messages
893
I'd be frustrated too. When you do anything on a pro level like cinema quality recording you definitely have to PAY TO PLAY.....
UnknownSouljer
I would be curious to understand why you want RAID 10 for this application? For large writes like this, parity calculations aren't a huge deal with modern RAID controllers. RAID 6 would be a reasonable choice, but I am sure you have thought about this and I am curious to understand. RAID 10 shines when you are doing small writes and updates to existing data, like you would in a really badly designed database server. It doesn't take very many bytes of writes per second to bring a RAID 6 to a crawl if they are updates scattered all over the array. RAID 10 doesn't really have that issue; it can handle tons of small updates constantly.

RAID 6/5 is just like RAID 0 for read performance. Writes are either RAID 0 fast or limited by the controller's ability to calculate parity, assuming the disks are fast enough to exceed the controller's parity calculation rate. Considering controllers now are built to support RAID 6 arrays of SSDs at SSD write speeds, I haven't seen a platter-based array that can outrun the parity calculations on a decent controller. RAID 6 parity calculations can write at a gigabyte a second on a reasonable but older controller. I haven't tested the latest crop, but I am guessing they are even more capable. It sounds like your max write rate is about 450 megabytes a second, so you should be able to write to a big RAID 6 faster than real time if you set it up right (LOTS of disks). Supermicro makes chassis with 24+ 3.5-inch bays. Stacked full, you could store a couple of hundred TBs: two RAID 6 arrays, each with 12 drives, one to read in from and one to write out to during post. There are plenty of other options.
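The ceiling described there can be estimated with a small sketch. The ~200 MB/s per-platter rate and ~1 GB/s parity-engine cap are my assumed round numbers (the post only states the 1 GB/s controller figure), so treat the output as illustrative:

```python
def raid6_seq_write_mbps(n_drives, per_drive_mbps=200, controller_cap_mbps=1000):
    """Sequential-write ceiling for RAID 6: (n - 2) data spindles stripe
    in parallel, but the controller's parity engine caps the total."""
    striped = (n_drives - 2) * per_drive_mbps
    return min(striped, controller_cap_mbps)

# 12-drive RAID 6 with ~200 MB/s platters and a ~1 GB/s parity engine:
print(raid6_seq_write_mbps(12))   # 1000 -> controller-limited, not disk-limited
```

Even the controller-limited 1 GB/s is comfortably above the ~450 MB/s camera write rate mentioned above, which is the point being made.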
 

smoothmove

Limp Gawd
Joined
Jul 11, 2004
Messages
226
I have 5 of these in a Synology currently. The white-label WDs aren't bulletproof by any means. Out of the original 12 bays of 12TB drives, four have died in the past 14 months. They drop out of the array and won't pass a SMART test. These are CMR drives. Just FYI.
 

UnknownSouljer

Supreme [H]ardness
Joined
Sep 24, 2001
Messages
6,892
nwrtarget I think it's obvious. I need the performance. This array has to make random and sequential writes constantly. Every time you change anything inside of an edit, those are more writes that get stored as part of the project document as a cache, re-rendering the sequence. That cached data is necessary for real-time playback of edits.

Ideally this array won't just be RAID 10, but will also have NVMe and SATA SSDs as part of the pool.

For more on this, look into Blackmagic Design's Disk Speed Test. As the bit rate of movie clips increases, so do the demands on the read and write speeds of the drive or array in order to edit with smooth playback.
 
Last edited:

UnknownSouljer

Supreme [H]ardness
Joined
Sep 24, 2001
Messages
6,892
Just more reasons not to use Newegg, I guess. Although it will help other downstream customers if they resell these items again.
 

nwrtarget

Gawd
Joined
Aug 10, 2010
Messages
893
nwrtarget I think it's obvious. I need the performance. This array has to make random and sequential writes constantly. Every time you change anything inside of an edit, those are more writes that get stored as part of the project document as a cache, re-rendering the sequence. That cached data is necessary for real-time playback of edits.

Ideally this array won't just be RAID 10, but will also have NVMe and SATA SSDs as part of the pool.

For more on this, look into Blackmagic Design's Disk Speed Test. As the bit rate of movie clips increases, so do the demands on the read and write speeds of the drive or array in order to edit with smooth playback.
OK, I thought the cache/edit-type stuff would be on a separate flash-based array.

On my system I use RAID 6 for movie storage and a smaller RAID 10 for all the small files and database-type stuff. I didn't realize that when you edit you are actually changing the master; to me that sounds crazy! Anyway, sorry, I don't know much about high-end video editing systems.
 

UnknownSouljer

Supreme [H]ardness
Joined
Sep 24, 2001
Messages
6,892
OK, I thought the cache/edit-type stuff would be on a separate flash-based array.
The point in building a server would be to have everything accessible at one time so I can edit directly off of it. If I have to move things around just to create edits, that defeats the purpose and the utility of creating a server in the first place. At that point I might as well just buy 2TB-4TB NVMe USB-C drives en masse instead, because again the purpose is that I have to get edits done and I also need the space in order to do it.

As it stands, with my space requirements rotational drives will still be the most cost-effective way; as mentioned before, SSDs still aren't cheap if you need really big drives. With this new camera there will be projects that easily exceed 4TB as well, compounding that particular problem. And the only way to make rotational drives fast enough is to have a large number of them in the pool using a RAID format that actually increases speed, along with SSDs that can dynamically change what's cached on them to make the pool faster.

To be clear, I'm not a big server admin guy. I'm basically looking at a solution from QNAP that should hopefully make this process easy, which is what it needs to be, albeit at a higher cost. Which of course is worth it when you're dealing with someone who wants to manage nothing but at the same time is holding mission-critical data. RAID 10 only gives single redundancy, but as long as I have spares on hand it should be more than reasonably safe. Although I will likely have to start looking into remote storage as a secondary backup solution, which likely means I'll need to build a second server and rent rack space somewhere, as a more cost-effective alternative to simply paying for server space eternally.

This is more info than necessary to answer your question, but hopefully it will give insight into my thinking process. RAID 5 is terrible. RAID 6 is "pretty safe" but doesn't offer speed increases to writes (only reads). RAID 10 is basically the only solution that offers some redundancy and speed. I suppose in theory a RAID 60 would/could also make sense, but that requires an enormous pool of drives that frankly I can't afford. Stretching to a 12-16 drive pool plus SSDs is already a massive investment for a small fry like me; something like this will likely cost $5,000+ (for a no-hassle QNAP solution). I need to maximize space and speed somehow, and 10 is basically it.
On my system I use RAID 6 for movie storage and a smaller RAID 10 for all the small files and database-type stuff. I didn't realize that when you edit you are actually changing the master; to me that sounds crazy! Anyway, sorry, I don't know much about high-end video editing systems.
No, NLEs don't alter the original movie files. It's similar to what happens when you edit RAW photos, except that the overhead of needing to play it back in real time (whatever your project frame rate is: 23.98, 24, 29.97, 30, 59.94, 60 fps or whatever) essentially requires an already-rendered version of that file. You could say it literally renders a second copy of your file with the adjustments on top of it, and this process happens over and over again as you continue to add more changes. It's not uncommon to have Libraries that end up being TBs in size because of all the caching. After you're done with your edit you can simply delete the cache to get your space back. If you need to open up a previous library and get smooth playback again, you simply have the NLE re-cache everything. (NLEs don't use this language; they call it "Timeline Rendering." But this process of rendering files and storing them on your drives is essentially caching, or "short-term" processed storage.)

After you're done with your edit, you export it into a "delivery format," which of course at that point will have all your alterations baked in, and you keep your project files in case you ever need to make another edit or change anything.
 

Luke M

Limp Gawd
Joined
Apr 20, 2016
Messages
441
I've read about big budget movies being edited on laptops. The key is the editing is done with a compressed version of the video. The edits are later applied (automatically) to the uncompressed video stored elsewhere.
 

nwrtarget

Gawd
Joined
Aug 10, 2010
Messages
893
The point in building a server would be to have everything accessible at one time so I can edit directly off of it. If I have to move things around just to create edits, that defeats the purpose and the utility of creating a server in the first place. At that point I might as well just buy 2TB-4TB NVMe USB-C drives en masse instead, because again the purpose is that I have to get edits done and I also need the space in order to do it.

As it stands, with my space requirements rotational drives will still be the most cost-effective way; as mentioned before, SSDs still aren't cheap if you need really big drives. With this new camera there will be projects that easily exceed 4TB as well, compounding that particular problem. And the only way to make rotational drives fast enough is to have a large number of them in the pool using a RAID format that actually increases speed, along with SSDs that can dynamically change what's cached on them to make the pool faster.

To be clear, I'm not a big server admin guy. I'm basically looking at a solution from QNAP that should hopefully make this process easy, which is what it needs to be, albeit at a higher cost. Which of course is worth it when you're dealing with someone who wants to manage nothing but at the same time is holding mission-critical data. RAID 10 only gives single redundancy, but as long as I have spares on hand it should be more than reasonably safe. Although I will likely have to start looking into remote storage as a secondary backup solution, which likely means I'll need to build a second server and rent rack space somewhere, as a more cost-effective alternative to simply paying for server space eternally.

This is more info than necessary to answer your question, but hopefully it will give insight into my thinking process. RAID 5 is terrible. RAID 6 is "pretty safe" but doesn't offer speed increases to writes (only reads). RAID 10 is basically the only solution that offers some redundancy and speed. I suppose in theory a RAID 60 would/could also make sense, but that requires an enormous pool of drives that frankly I can't afford. Stretching to a 12-16 drive pool plus SSDs is already a massive investment for a small fry like me; something like this will likely cost $5,000+ (for a no-hassle QNAP solution). I need to maximize space and speed somehow, and 10 is basically it.

No, NLEs don't alter the original movie files. It's similar to what happens when you edit RAW photos, except that the overhead of needing to play it back in real time (whatever your project frame rate is: 23.98, 24, 29.97, 30, 59.94, 60 fps or whatever) essentially requires an already-rendered version of that file. You could say it literally renders a second copy of your file with the adjustments on top of it, and this process happens over and over again as you continue to add more changes. It's not uncommon to have Libraries that end up being TBs in size because of all the caching. After you're done with your edit you can simply delete the cache to get your space back. If you need to open up a previous library and get smooth playback again, you simply have the NLE re-cache everything. (NLEs don't use this language; they call it "Timeline Rendering." But this process of rendering files and storing them on your drives is essentially caching, or "short-term" processed storage.)

After you're done with your edit, you export it into a "delivery format," which of course at that point will have all your alterations baked in, and you keep your project files in case you ever need to make another edit or change anything.
This workflow is foreign to me so I was approaching it like I would any sort of server system where I ideally break up the workloads across the storage types that make the most sense.

Your statement about RAID 6 not offering speed increases to writes is interesting, though. I find that many people misunderstand RAID array performance by trying to apply very simple rules to all workloads. Understanding the workload is the critical step, and I don't fully understand yours.

The common incorrect guidance I hear is that RAID 6 performs the same as a single drive. That isn't remotely the case for sequential writes. However, if you are updating blocks all over the array and you don't have a good amount of cache that can defer those writes, then you can get performance far worse than a single drive. The trick with RAID 6 is caching those small writes.

Looking at your RAID 10 setup with large sequential writes: at 12 drives we know write performance is 50%, the equivalent of 6 disks. That is roughly 1.2 gigabytes per second in large sequential writes. If you increase it to 16 disks, then we have a write speed of about 1.6 gigabytes per second.

I did try RAID 60 across two controllers, and based on my limited testing it had potential to be faster, but it frequently underperformed, likely due to timing issues between the two controllers. If you are doing RAID 60 on a single controller, you are actually halving your performance, as you are calculating four parity streams instead of two.

I won't claim to know the QNAP line, but I am guessing the reasonably priced ones don't have high-powered RAID controllers that can calculate parity at write rates approaching your needs, so your desire for RAID 10 makes sense. To further that thought process: a good RAID 10 controller can read your video stream from one half of the array and the edits off of the other half, making performance quite good during playback. I don't know if the QNAP does this properly.

Your setup should be pretty awesome, and I hope it serves your needs well for a really long time.
 

Ducman69

[H]F Junkie
Joined
Jul 12, 2007
Messages
10,542
Still mad that 20TB+ drives don't exist at this price. Magnetic media is expanding at a snail's pace. Actually, HD tech in general.
I see people getting excited about 10TB drives for $150, which, per my receipts, is the exact same drive I bought more than a year ago for the same price. Feels bad mang.

They do have 18TB drives now at least, which is nice, but the per-TB price isn't shrinking fast.
 

UnknownSouljer

Supreme [H]ardness
Joined
Sep 24, 2001
Messages
6,892
I've read about big budget movies being edited on laptops. The key is the editing is done with a compressed version of the video. The edits are later applied (automatically) to the uncompressed video stored elsewhere.
This workflow is different. Basically, they render what the industry calls "proxies," which are much lower-resolution versions of the files. Then when they actually export, they switch back to the original files. It's effective if you have a big team or need to work on slower hardware.

I'm a single operator, so I basically have to do everything myself. I don't really get a speed gain from separating tasks, and generally it's not worth the time to make proxies. Because I cut, correct, and grade all in the same app, it's more beneficial to see the full-resolution file that I'm working with so I can make all of the creative decisions at the same time. ProRes is also a very optimized, lightly compressed codec, so generally it doesn't take a lot of system resources to play back or work with (comparatively speaking). I may have certain files that are encoded in H.265, and those can be a pain to work with on slow computers or computers without hardware that can decode it quickly. However, in that case I would likely just re-encode them and make optimized media instead.

In Hollywood, where you have a big editing team, the editor likely doesn't need to see anything other than proxies. The colorist will use the full-resolution files in an entirely different app (likely Resolve), any effects will be made by a third person in a third app, and audio engineering by a fourth person in a fourth app. In theory this server would let everyone work from it at the same time. Again, I'm just me. But perhaps that's a future goal.
 

Ducman69

[H]F Junkie
Joined
Jul 12, 2007
Messages
10,542
You guys have a very informative conversation going on here. Honestly.

But every time I see this thread pop back to the top I think I have a chance at getting some 14TB drives... and it's tearing me apart :D
I think we have to flag them for mods to put [DEAD]. Done.
 

nilepez

[H]F Junkie
Joined
Jan 21, 2005
Messages
11,720
I have 5 of these in a Synology currently. The white-label WDs aren't bulletproof by any means. Out of the original 12 bays of 12TB drives, four have died in the past 14 months. They drop out of the array and won't pass a SMART test. These are CMR drives. Just FYI.
Did you contact WD? They have a 2-year warranty.
 

ND40oz

[H]F Junkie
Joined
Jul 31, 2005
Messages
12,004
I won't claim to know the QNAP line, but I am guessing the reasonably priced ones don't have high-powered RAID controllers that can calculate parity at write rates approaching your needs, so your desire for RAID 10 makes sense. To further that thought process: a good RAID 10 controller can read your video stream from one half of the array and the edits off of the other half, making performance quite good during playback. I don't know if the QNAP does this properly.
QNAPs don't have RAID controllers; most NASes don't, because they're not needed. Any modern CPU can handle the load a specialized controller was needed for 20 years ago. My Ryzen-based QNAP will expand a RAID 6 array full of 10TB WD Whites faster than my LSI 9265-8i will expand a RAID 5 array full of 8TB WD Reds. The QNAP can do it in 2 days; the LSI took almost 3 weeks.

If you're going to roll your own now, don't do it with a RAID controller; do it with TrueNAS and compatible hardware.
 

Hashiriya415

Weaksauce
Joined
Mar 17, 2019
Messages
127
This came by FedEx with bubble wrap. Took out the drive and it's got something broken inside the case which is moving around. Should I not even bother trying to test it? I was going to use Sentinel. What is the best way to exchange this, if anyone knows how Best Buy deals with stuff like this? I received this about 30 days ago and decided to open it today. It would take over an hour of my time to drive to Best Buy and back.

IMG_20201222_195617.jpg
 

jmilcher

Supreme [H]ardness
Joined
Feb 3, 2008
Messages
4,792
This came by FedEx with bubble wrap. Took out the drive and it's got something broken inside the case which is moving around. Should I not even bother trying to test it? I was going to use Sentinel. What is the best way to exchange this, if anyone knows how Best Buy deals with stuff like this? I received this about 30 days ago and decided to open it today. It would take over an hour of my time to drive to Best Buy and back.

View attachment 311856
They have extended returns through the holidays, until mid January. I’d make the drive and exchange it.
 

kirbyrj

Fully [H]
Joined
Feb 1, 2005
Messages
27,456
This came by FedEx with bubble wrap. Took out the drive and it's got something broken inside the case which is moving around. Should I not even bother trying to test it? I was going to use Sentinel. What is the best way to exchange this, if anyone knows how Best Buy deals with stuff like this? I received this about 30 days ago and decided to open it today. It would take over an hour of my time to drive to Best Buy and back.

View attachment 311856

Are you going to use it as is or shuck it? I'd do a complete test on the drive and if it passes just shuck it. Probably one of the mounts inside came loose.
 

Hashiriya415

Weaksauce
Joined
Mar 17, 2019
Messages
127
It would take an hour of my time. That is why I was thinking shipping is easier, unless I could ask Best Buy for some way to reimburse me for my time. I don't think I want to deal with trying this drive, even though I do plan to shuck it. I imagine the force that broke something probably damaged the drive in some way.
 

jmilcher

Supreme [H]ardness
Joined
Feb 3, 2008
Messages
4,792
It would take an hour of my time. That is why I was thinking shipping is easier. Unless I could ask BestBuy for some way to reimburse me for my work. I don't think I want to deal with trying this drive, even though I do plan to shuck it. I can imagine the force that broke something must have probably damaged the drive in some way.
Oh, they won't be doing any reimbursing. They could not care less about products damaged in shipping. They will return or exchange it, but that's about it.
 
Top