4K Video editing RAID 0 configuration… What to do?

NeilAndrew

Shooting films in 4K and in RAW is a beautiful thing, but managing the data has proven a trying task.

I apologize if my questions/quandaries are quite basic or trivial.

I have researched, googled, made phone calls, talked to tech support agents, etc.
Lo and behold, all the information I gathered is either contradictory
or outdated when it comes to the nitty-gritty aspects of RAID: which motherboards are supported, which drives will work reliably,
which controller cards will run reliably given their newer tech (PCIe 2.0 or 3.0), and the conflicts with UEFI BIOS.
Many RAID controller reviews are 2+ years old, and many of the newer RAID cards lack reviews or feedback.

I could not find current information on:
• 1) SATA vs SAS drives for RAID 0 in regards to read speed and reliability – never used SAS drives before…
• 2) Number of drives & capacity vs read speed – would 8x1TB drives have higher read speeds than 2x4TB drives in RAID 0?
• 3) Current UEFI stability with newer LSI controllers… Is this a thing of the past? Should I worry about installing a current LSI RAID controller into a mobo with X79 chipset?

System specs:
ASUS Sabertooth X79
i7-3930k @ 4.6 GHz
64GB RAM
GTX 690
CASE: COSMOS with 7 empty drive bays

Disk 1: SSD 240GB – scratch disk
Disk 2: SSD 240GB – scratch disk
Disk 3: SSD 240GB – OS and software
Disk 4: 4TB Current location for footage for editing
Disk 5: 4TB Current location for footage for editing
Disk 6: 4TB Current location for photo editing
Disk 7: 4TB Editing Effects/Sound Effects/Soundtracks

WORKFLOW:
A) Data is moved to separate single 4TB hard drives for dual redundancy.
B) Data is copied to desktop for editing. (to drives 4, 5, or 6)
C) Final edits are moved to dual redundant backups.
D) Desktop footage on drives 4, 5, or 6 deleted, editing space restored.
E) New project = new data – Start from step A.


4TB drives installed and used for backup are all Seagate ST4000DM000-1F2168
Should I think about transitioning to HGST or WD?

Optimistically,
my ideal configuration would replace Disks 4, 5, and 6 in my current workflow and allow me a RAID 0 setup of 8TB or more of usable space with a read speed higher than 240MB/s.
I just can’t figure out which card, which interface type, and how many of which drives.

Here's a quick list of data rates to put the workflow into perspective (a rough conversion to MB/s follows the list)...
4K ProRes = 6 GB/min
2.5K RAW = 7 GB/min
1080p ProRes = 1.4 GB/min
5D3 Full-Frame 1080p RAW = 5 GB/min
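For a rough sense of what those figures mean in sustained throughput, here's a quick back-of-the-envelope sketch. The decimal GB-to-MB conversion and the five-layer worst case (which comes up later in the thread) are just assumptions for illustration:

```python
# Rough conversion of per-stream data rates from GB/min to MB/s (decimal units assumed).
RATES_GB_PER_MIN = {
    "4K ProRes": 6.0,
    "2.5K RAW": 7.0,
    "1080p ProRes": 1.4,
    "5D3 Full-Frame 1080p RAW": 5.0,
}

def mb_per_sec(gb_per_min: float) -> float:
    return gb_per_min * 1000 / 60

for fmt, rate in RATES_GB_PER_MIN.items():
    single = mb_per_sec(rate)
    # Five simultaneous layers is the worst case described later in the thread.
    print(f"{fmt}: ~{single:.0f} MB/s per stream, ~{5 * single:.0f} MB/s for 5 layers")
```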


I hope this thread will help others who are also looking into video-production workflow optimization and/or current RAID 0 hardware configurations/applications.
 
I can't help directly, but I will ask three questions:

1. Are the new 6 TB drives faster than the 4 TB ones?

2. Isn't RAID 0 a bit dangerous for non-scratch disks? Especially with so many drives? Wouldn't RAID 0+1 or 5 or 6 be superior for reading while retaining some measure of resilience?

3. You're obviously doing this professionally, and time is money, so why not bite the bullet and go pure SSD? You can get 1 TB Samsung SSDs for US$420. Ten of those will set you back ~$5K, add in a RAID 6 controller and a few 2-in-1 or 4-in-1 enclosures and you'll be set.
 
Thanks for your reply.

I can't help directly, but I will ask three questions:

1. Are the new 6 TB drives faster than the 4 TB ones?

2. Isn't RAID 0 a bit dangerous for non-scratch disks? Especially with so many drives? Wouldn't RAID 0+1 or 5 or 6 be superior for reading while retaining some measure of resilience?

3. You're obviously doing this professionally, and time is money, so why not bite the bullet and go pure SSD? You can get 1 TB Samsung SSDs for US$420. Ten of those will set you back ~$5K, add in a RAID 6 controller and a few 2-in-1 or 4-in-1 enclosures and you'll be set.

1. They might be, but I'm not sure if they're faster than the 4TB WD Blacks. With 4 x 4TB drives in RAID 0, I'd be surprised if the read speed couldn't keep up with my workflow. I'm just not sure if I should keep the budget low for the first build and buy Seagate 4TB or WD Black - the only difference I'd really notice would be reliability, I assume, even though the WD Black is faster in a single-drive configuration.

2. Good point and good question, but essentially any and all data on the footage drives is deletable. If data were lost, it would just be transferred back at about 130 MB/s.
How long does it usually take to build or rebuild a RAID 0 array?
130 MB/s is about 0.127 GB/s
0.127 GB/s x 60 s = 7.62 GB/min
6144 GB / 7.62 GB/min = 806 minutes / 60 = 13.4 hours to transfer 6TB (6144 GB)
My guess is that restoring the RAID 0 after a failure, combined with a 6TB data transfer, is a 2-day ordeal minimum?
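Here's that same arithmetic as a tiny Python sketch, in case anyone wants to plug in their own numbers (the 130 MB/s copy speed and the 6144 GB figure are just the values from above):

```python
# Reproduce the transfer-time estimate above.
transfer_mb_per_s = 130      # single-drive copy speed from the backup drives
data_gb = 6144               # ~6 TB of footage, in binary GB

gb_per_min = transfer_mb_per_s / 1024 * 60   # ~7.62 GB/min
minutes = data_gb / gb_per_min               # ~806 minutes
print(f"{gb_per_min:.2f} GB/min -> {minutes:.0f} min -> {minutes / 60:.1f} hours")
```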

3. If I had the funds lying around, I would quickly spring for pure SSD, but it would be overkill for sure given the number of drives needed to accumulate 8+TB of editing space. Even RAID 0+1 or RAID 5/6 would require many more SSD drives, increasing the controller costs and cost per GB beyond what my data-rate requirements call for.
Could you provide an example of a 2 in 1 or 4 in 1 box?


- So it's now quite clear to me that adding drives to a RAID 0 array increases the read speed roughly in proportion to the drive count - giving much more reason for 4x4TB.
Is a 16TB RAID 0 configuration ill-advised?
The chance for it going down is increased of course each time a drive is added to the array - but I'm quite used to routine maintenance and the data doesn't matter - just the read speed and capacity.
- I dug up quite a bit of documentation and enterprise benchmarks on the different RAID controllers. It seems that although LSI - and other manufacturers - do not technically support their RAID controllers in desktop/consumer motherboards, they might still function just fine, and reliably. I also found the same to be the case for the majority of hard drives - too bad it was so time-consuming to find such seemingly well-known information XD

At a data rate of 7 GB/min, 8TB of editing space allows for nearly 20 hours of RAW footage, which is really quite reasonable.
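That works out roughly like this (using the 2.5K RAW rate from the list above and a binary TB-to-GB conversion, both just assumptions for the estimate):

```python
# How many hours of 2.5K RAW (~7 GB/min) fit in 8 TB of editing space?
capacity_gb = 8 * 1024        # 8 TB expressed in binary GB
rate_gb_per_min = 7
minutes = capacity_gb / rate_gb_per_min
print(f"~{minutes:.0f} minutes, or ~{minutes / 60:.1f} hours of footage")   # ~1170 min, ~19.5 h
```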
Does anyone have any RAID configurations they can share read speeds for and which hard drives are in the array?
Anyone with a RAID controller on an X79 board?

I'm surprised this forum doesn't have an "end-all-be-all" sticky about RAID, listing the finer points of hardware configuration practice and known reliable environments. From all the threads I've read over the years, this has to be one of the most knowledgeable and healthily active forums around.

Does anyone have one of these LSI 12Gbps RAID Controller cards?
 
Your post is too long... so I'm going to focus on what you seem to have highlighted and go from there....

I could not find current information on:
• 1) SATA vs SAS drives for RAID 0 in regards to read speed and reliability – never used SAS drives before…
• 2) Number of drives & capacity vs read speed – would 8x1TB drives have higher read speeds than 2x4TB drives in RAID 0?
• 3) Current UEFI stability with newer LSI controllers… Is this a thing of the past? Should I worry about installing a current LSI RAID controller into a mobo with X79 chipset?

1) Enterprise SAS drives are typically faster with lower latency, so they would be better for a performance RAID configuration.
2) More spindles = more performance, so 8x1 is better than 2x4 (rough numbers in the sketch below).
3) Are you booting from the RAID controller? No? Then it doesn't matter.
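To put rough numbers on point 2 - this is only a naive model, and the per-drive speeds and scaling efficiency below are assumptions, not benchmarks:

```python
# Naive RAID 0 sequential-read model: aggregate throughput scales roughly linearly
# with drive count. Per-drive speeds and the efficiency factor are assumptions.
def raid0_read_mb_s(drives: int, per_drive_mb_s: float, efficiency: float = 0.9) -> float:
    return drives * per_drive_mb_s * efficiency

print("8 x 1TB:", raid0_read_mb_s(8, 120), "MB/s")   # smaller drives, ~120 MB/s each -> ~864 MB/s
print("2 x 4TB:", raid0_read_mb_s(2, 160), "MB/s")   # denser drives, ~160 MB/s each -> ~288 MB/s
```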


4TB drives installed and used for backup are all Seagate ST4000DM000-1F2168
Should I think about transitioning to HGST or WD?

Are the Seagates dead or throwing errors? If not then why replace them?

I just can’t figure out which card, which interface type, and how many of which drives.

We're talking about RAID'ing rotating rust here, so any half-decent controller will keep up. Also, we're just talking RAID 0, so you don't need a high-end RAID-on-Chip design. Something like an LSI 9211-8i, Dell Perc H310, or IBM M1015 will do just fine.



Also to address the 12Gbps RAID controller in your 2nd post... why? Only if you're using SSD would that help. HDD can't saturate 6Gbps controllers so a 12Gbps would just be a waste of money.
 
Other than the fact that I'd never recommend a card without any cache onboard, techmattr is correct.

I would grab another matching Seagate 4TB(assuming you are happy with the drives) and either:
LSI SAS2208-based Controller w/ BBU. Dell Perc H710p, IBM M5110
LSI SAS2008-based Controller w/ BBU. Dell Perc H700, IBM M5015
And run your set in either Raid10 for a measure of redundancy with enhanced performance (8TB usable), or if you have to for capacity, Raid 0 (full 16TB) with no redundancy.

I'm not familiar with the demands of video editing, but depending on the need for random performance, all the cards I listed are potentially compatible with LSI's Cachecade technology which allows an SSD to cache reads and/or writes (depending on the card and version of Cachecade, 1 vs 2).

Side note, What kind of budget do you have to work with?
 
--If you don't already have the drives, get 5 or 6TB models (if the cost isn't objectionable) and swap everything below from 4TB to 5/6TB - at least for the 3 video drives. 4TB will be fine for photo/audio--


*edit* I misread your initial post, thinking 5 drives, not 4. You're going to need at least 1 more 4/5/6TB drive. There's no good way to do what you'd like using 4 disks w/o some major reconsideration of how you want to organize things.


Here are some of your options:

Stick with fewer disks. 8 drives in tandem will be faster than 2, but I've never had a RAID 0 setup with more than 2 disks that was reliable enough to be worth the hassle of getting everything ironed out. Just not worth it. I hate when people say that, but this sounds like a production machine, not a goof-off-fun-times-experiment machine.

Get a 4 or 5 (5 ideally) disk raid enclosure (esata) that will handle raid 0/1/5/etc/jbod for you.

1 4TB for your photo
1 4TB for your audio

and one of these

3 4TB raid 0+1. You'd get raid 0 sequential performance you'd need for 2/4K w/ redundancy. Not really a backup, but good enough

2 4TB in raid 0. Keep the 3rd 4TB as a backup drive. You can either run your backups manually or automate the process. I'd treat the backup as a clone rather than a running backup. NLEs won't work well with anything other than a clone.

Either one of those options will limit your workable space to 4TB because ultimately that's all you can reliably backup.

or

3 4TB drives in raid 5, but I'd only do this with a $$$ hardware raid card, and I wouldn't necessarily recommend this route. If at all possible, let Windows create your raid sets for you rather than doing it through hardware. With hardware raid, you're limited to only that hardware being used with your raid set. In the event of catastrophe, the only way you can resurrect your data is with the same hardware. This is also one of the reasons raid hardware doesn't change that much. You mentioned you haven't seen a lot of recent reviews, but that's mainly because the technology is much more stable and doesn't chase the bleeding edge regularly. Something well reviewed from 2 years ago will work just fine now.

or (this is dangerous, but if you have no further budget and the speed still isn't there)

3 4TB raid 0 and rely on your tape/flash/capture device as a backup. If all falls apart you can at least start from scratch :)

Realistically tho, I don't know your workflow or whether you have multiple projects going at the same time (assuming indie-style single project rather than commercial editing), but let's assume you have a shooting ratio of about 8:1. For a 90-minute feature you're only dealing with about 4.5TB of data, and that's basically everything straight from the camera, no cutting-room-floor crap; if you do a bit of cleaning while dumping all of your footage, you'll save even more space.
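Back-of-the-envelope version of that estimate (the shooting ratio and the per-minute rate are assumptions; I used the 4K ProRes figure from the first post):

```python
# Footage estimate for a feature at an assumed 8:1 shooting ratio.
runtime_min = 90
shooting_ratio = 8
rate_gb_per_min = 6          # 4K ProRes figure from the first post
total_gb = runtime_min * shooting_ratio * rate_gb_per_min
print(f"~{total_gb} GB, i.e. roughly {total_gb / 1000:.1f} TB of camera originals")   # ~4.3 TB
```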



Here's what I would do:

2 4TB drives in raid 0, limited to 4TB. Use a 3rd 4TB as a snapshot of the day's work. If you can spring for another drive (or more), use those for additional snapshots so you can round-robin back a few days. There's no way you'll be able to keep a running backup from day 1 with every single bit of data touched (NLEs don't work well with incremental backups), so this way you can feel secure about whatever you've been most recently working on not poofing on you.

1 4TB drive for your photo and audio. You won't need raid 0 speed for these. Also, unless you're absolutely sure your assets combined are going to be greater than 4TB, I'd keep them all on one 4TB disk and use the other for the backup (this one you can incrementally back up, no snapshot necessary). If a separate drive was designated for each media type simply for organization purposes, you can always partition it as two drives.





One last thing.. if you haven't purchased any of this yet, you can get 1TB SSDs now for around $400 per. Get 4 or 5 to give you your editing space and a single 4/5/6TB backup drive. No raid to worry about, no external enclosures, etc. You can JBOD them in windows to get one large contiguous space. Probably ~ $2k.
 
Other than the fact that I'd never recommend a card without any cache onboard, techmattr is correct.

I would grab another matching Seagate 4TB(assuming you are happy with the drives) and either:
LSI SAS2208-based Controller w/ BBU. Dell Perc H710p, IBM M5110
LSI SAS2008-based Controller w/ BBU. Dell Perc H700, IBM M5015
And run your set in either Raid10 for a measure of redundancy with enhanced performance (8TB usable), or if you have to for capacity, Raid 0 (full 16TB) with no redundancy.

I'm not familiar with the demands of video editing, but depending on the need for random performance, all the cards I listed are potentially compatible with LSI's Cachecade technology which allows an SSD to cache reads and/or writes (depending on the card and version of Cachecade, 1 vs 2).

Side note, What kind of budget do you have to work with?

Cachecade licenses are ~$300.00. So you'd have to factor that into the additional cost of a card that supports it. Crossflashing the variants is hit or miss as to whether the licensing will actually work, so purchasing a cheaper Dell variant, for example, doesn't always get you functioning Cachecade. Here's a list of the variants: http://www.servethehome.com/lsi-sas-2208-raid-controller-information-listing/

Additionally I would think the editing he'd be doing would be read heavy and not benefit from the cache anyway. So it might be a 4x ~ 6x cost for not much benefit.
 
Agreed, hence the "depending on the need" and "potentially" phrases... I'd like to hear more from the OP about his budget before getting too deep into a discussion about theoretical paths.
 
2. Good point and good question, but essentially any and all data on the footage drives is deletable. If data were lost, it would just be transferred back at about 130 MB/s.
How long does it usually take to build or rebuild a RAID 0 array?
130 MB/s is about 0.127 GB/s
0.127 GB/s x 60 s = 7.62 GB/min
6144 GB / 7.62 GB/min = 806 minutes / 60 = 13.4 hours to transfer 6TB (6144 GB)
My guess is that restoring the RAID 0 after a failure, combined with a 6TB data transfer, is a 2-day ordeal minimum?

How much would that time cost you? And don't forget to add in the time required to redo the work lost.

If you're doing this for money, you must look at it from a business perspective as well as a technical one.
 
Are the Seagates dead or throwing errors? If not then why replace them?

Also to address the 12Gbps RAID controller in your 2nd post... why? Only if you're using SSD would that help. HDD can't saturate 6Gbps controllers so a 12Gbps would just be a waste of money.
- The Seagates have performed with great speed and great reliability - I just keep thinking about that Backblaze drive-lifespan post XD
- As for the 12Gbps controller - it would be difficult to saturate the full bandwidth. However, from what I understand, the higher speed is built on architecture/software/hardware that is more stable in a non-server configuration. Perhaps I am entirely foolish in believing that, but given my lack of RAID experience I feel more confident in newer products.

And run your set in either Raid10 for a measure of redundancy with enhanced performance (8TB usable), or if you have to for capacity, Raid 0 (full 16TB) with no redundancy.

I'm not familiar with the demands of video editing, but depending on the need for random performance, all the cards I listed are potentially compatible with LSI's Cachecade technology which allows an SSD to cache reads and/or writes (depending on the card and version of Cachecade, 1 vs 2).

Side note, What kind of budget do you have to work with?
- No redundancy of any kind is required on this editing workstation.
- In my own situation - due to RAW and 4K, video editing requires multiple streams of high datarate files. So if 5 different files are layered on top of each other within the editing software (Premiere Pro, After Effects), the 5 files will be read at the same time, at whatever bitrate/datarate they may be, thus creating a huge bottleneck in single drive configurations.
- Budget - for an initial setup, I'm trying to keep as cost effective as possible.

Stick with fewer disks. 8 drives in tandem will be faster than 2, but I've never had a RAID 0 setup with more than 2 disks that was reliable enough to be worth the hassle of getting everything ironed out. Just not worth it. I hate when people say that, but this sounds like a production machine, not a goof-off-fun-times-experiment machine.
Why is RAID 0 a hassle to set up with more than 2 disks?

Get a 4 or 5 (5 ideally) disk raid enclosure (esata) that will handle raid 0/1/5/etc/jbod for you.
You can either run your backups manually or automate the process. I'd treat the backup as a clone rather than a running backup. NLEs won't work well with anything other than a clone.
Either one of those options will limit your workable space to 4TB because ultimately that's all you can reliably backup.
- Could you link to an enclosure?
- The content on the RAID 0 array would not need to be cloned.
RAID 0 failure would not risk daily production efforts. It would consume time to rebuild, but does not have the ability to jeopardize editing progress in my given configuration.
Realistically tho, I don't know your workflow or whether you have multiple projects going at the same time (assuming indie-style single project rather than commercial editing), but let's assume you have a shooting ratio of about 8:1. For a 90-minute feature you're only dealing with about 4.5TB of data, and that's basically everything straight from the camera, no cutting-room-floor crap; if you do a bit of cleaning while dumping all of your footage, you'll save even more space.
- Yes - this makes sense for space. What about read speed? My concern really isn't creating a volume just to edit a bunch of data on - my objective is to be able to pull multiple streams of high-bitrate footage from a single volume, which, given the data rate, would have to be a somewhat big volume.

Here's what I would do:
2 4TB drives in raid 0, limited to 4TB. Use a 3rd 4TB as a snapshot of the day's work. If you can spring for another drive (or more), use those for additional snapshots so you can round-robin back a few days. There's no way you'll be able to keep a running backup from day 1 with every single bit of data touched (NLEs don't work well with incremental backups), so this way you can feel secure about whatever you've been most recently working on not poofing on you.
- Again I wouldn't consider RAID 0 if I was concerned about backup or data loss.
- 4 TB working space isn't realistically enough unfortunately =/

One last thing.. if you haven't purchased any of this yet, you can get 1TB SSDs now for around $400 per. Get 4 or 5 to give you your editing space and a single 4/5/6TB backup drive. No raid to worry about, no external enclosures, etc. You can JBOD them in windows to get one large contiguous space. Probably ~ $2k.
- Yes, someone else had mentioned affordable 1TB SSDs, which is quite enticing. But how would you ensure a sustained read speed of more than, let's say, 400 MB/s on the SSDs?
- It is normal for my average filesize to be about 10GB, maxing out around 50GB. That's a single 4K video file, or "take". If I've shot 4 takes of one scene, I've just shot 4 x 50GB for a single scene.
- What do you mean by 4/5/6TB backup drive? What would this drive be backing up?

Cachecade licenses are ~$300.00. So you'd have to factor that into the additional cost of a card that supports it. Crossflashing the variants is hit or miss as to whether the licensing will actually work, so purchasing a cheaper Dell variant, for example, doesn't always get you functioning Cachecade. Here's a list of the variants: http://www.servethehome.com/lsi-sas-2208-raid-controller-information-listing/

Additionally I would think the editing he'd be doing would be read heavy and not benefit from the cache anyway. So it might be a 4x ~ 6x cost for not much benefit.
- Not sure what Cachecade is, but 4x-6x does not sound ideal XD

Agreed, hence the "depending on the need" and "potentially" phrases... I'd like to hear more from the OP about his budget before getting too deep into a discussion about theoretical paths.
Precisely. Budget is less than $3,000 for this.
Ideally the most cost effective is best. This application isn't for a client, it is for me to deliver work to clients. The more cash I keep in my pocket the better :D

How much would that time cost you? And don't forget to add in the time required to redo the work lost.

If you're doing this for money, you must look at it from a business perspective as well as a technical one.
- There would be no redoing work, because it is not possible to lose progress on the project even if the RAID array burst into flames and melted my tower. For real. The work that gets put into the project is secured by means of networked project-file backups in real time, cloud-based backup, and auto-save every minute, which updates local + networked + cloud versions in real time.
- Good point in regards to the business perspective. That's really the only reason why, if RAID 0 didn't work out, I'd go for RAID 5, I guess?
 
I would think that smooth and consistent I/O is the most important thing.
I have tried 3TB drives in RAID 0 and found them choppy, i.e. the performance varied a lot across the array.
I would use 2x RAID 0 arrays with 6-8 15k RPM 2.5" 600GB SAS drives each.
 
Maybe I missed it, but how big are the files?

I've had an LSI 9265-8i running in an X79 board (actually C606) with a 3930k for 2 years, works great. Use it for ripping and re-encoding, 4 15k SAS drives in RAID 10. If I were building it today, I'd be using SSDs. Unless you need more space, 500GB or 1TB SSDs in RAID 10 will be your best bet, unless you really need a lot of space because of file size. Your other option is PCIe SSDs, plenty of lanes with 2011 processors, so you might as well use them.
 
Maybe I missed it, but how big are the files?

I've had an LSI 9265-8i running in an X79 board (actually C606) with a 3930k for 2 years, works great. Use it for ripping and re-encoding, 4 15k SAS drives in RAID 10. If I were building it today, I'd be using SSDs. Unless you need more space, 500GB or 1TB SSDs in RAID 10 will be your best bet, unless you really need a lot of space because of file size. Your other option is PCIe SSDs, plenty of lanes with 2011 processors, so you might as well use them.

Individual files are between 2GB and 50GB.
Average individual file size is about 10GB.
Total volume size needs to be more than 4TB per RAID 0 volume.

I think two RAID 0 arrays might be the best option.

Would the Seagate 4TB drives work well in RAID 0?
 
50 GB? Is that all? Then I don't understand your problem. My clients were dealing with flight trials data in that range over a decade ago with nothing particularly fancy. I thought your files were much larger. Rather than faffing about with disk arrays, just get yourself a motherboard with 128 GB+ RAM (you may want 256 GB or even 512 GB) and work on the clips entirely in RAM. And make sure your software is properly 64 bit.
 
Average file size is 10GB and you need 4TB of space, so that means you're actively working with 400 files at a time, is that correct? Just trying to determine the space requirements.
 
Tom Lowe created TimeScapes using a 12 terabyte RAID 5. He had about 20TB of 5.6K and 4K footage (Canon and RED). Not sure what controller he was using, but he talks about his setup on Vimeo. He was also using Adobe for the project, which had a $300k budget.

I mainly shoot HD but have done some 2.5K (DNG to ProRes 4444) and those files were massive. I edited them on SSD for speed and quickly moved them off to reclaim the space. I mainly edit off of a 2x2TB Seagate RAID 0 (300MB/s read/write), which is wincloned (hackintosh) to a 4TB internal Seagate drive and also backed up externally to a 4TB Hitachi Touro.
 
For stuff like video editing I would go with SSDs without question. I used to have very expensive 13000rpm HDDs in RAID 0, and a single 500GB EVO drive was noticeably faster for editing in Premiere Pro.
Two 1TB SSDs in RAID 0 will be very fast and still a lot of storage. I work mostly with 1080p footage but have edited with 2.5K footage and never filled up the single 500GB drive (even with multiple projects running). Are you editing a documentary such that you need 20 hours of footage on your editing drive?
Have you been editing RAW 4K files - as in, doing the editing with the RAW 4K files themselves? Or do you just render in 4K at the end and work with proxies?

Also, when you have 20TB of footage from a film, the amount that actually reaches the editing drive is more like 5:1 depending on how it was shot.
 
Hi.

I'm a video editor for a TV channel and institutions in Europe.

If I could advise you one thing: put your storage in an external case, connected with 8Gb or 16Gb Fibre Channel, or 10GbE if you have more than one workstation managing media.

I have the same needs: I work with a 4U 24-bay hot-swap chassis, E3-1230, 32GB RAM, 3x M1015 flashed to LSI firmware, 10GbE... under ZFS, in a pool filled with 6x vdevs of 3-disk RAID-Z1 on Western Digital 2TB SE drives.
It rocks!!

Cheers.
 
Individual files are between 2GB and 50GB.
Average individual file size is about 10GB.
Total volume size needs to be more than 4TB per RAID 0 volume.

I think two RAID 0 arrays might be the best option.

Would the Seagate 4TB drives work well in RAID 0?
So, what have you done so far?
 
EDIT: oops, this is old and was revived. I'll leave my post below anyway.


May I recommend you do what I am doing?

I am building a Norco 4224 case with a 1650 v3, 64-128GB of RAM, and 10GbE as a crossover link between my main PC and the NAS/server.

This will give me 500-1200 MB/s transfers (depending on drives and cache), and the server will be set up with SnapRAID so that I will never lose data due to failed drives, UREs, or bit rot. Bit rot when moving that large an amount of files is a huge issue, and you should consider a file system that can handle it.

BTW, this solution that I have and ST3F has covers long-term storage, fast transfers, bit rot, and data redundancy.
 
So it's been nearly 8 months since the last post, and 2 years since the original post. Has anyone been able to post their results with RAID 0? About 6 months ago I embarked on a journey to build a file server that would host a RAID 6 array for backups and a RAID 0 array as a working drive for video editing. My video editing comprises anything from simple 1080p DSLR footage to 4K RAW footage, and even higher-resolution timelapse footage.

First my workstation and file server information:
Server:
Rosewill 12-bay 4U case
750W power supply
EVGA GTX 750Ti
x79 chipset motherboard (consumer)
i7 3930k cpu (six core)
32GB RAM
LSI MegaRAID 9266-8i
4 6TB WD Black (RAID 6)
4 2TB WD Black (RAID 0)
Intel X540-T1 10GbE network card
1TB Samsung SSD for OS

Workstation
Corsair 750D Case
1000W power supply
EVGA GTX 580 Superclocked
x79 sabertooth motherboard
i7 3930k cpu (six core)
Intel X540-T1 10GbE network card
120 GB SSD for Adobe Cache
500 GB Samsung SSD for OS

I work from my workstation in Adobe CC products. My experience with file transfers has been great: with both my RAID 6 and RAID 0 I get about 500-650MB/s. I work directly off the RAID 0 for all my projects, and I clone them to the RAID 6 daily and whenever I have a footage ingest.

My experience working with RAID 0 on regular hard drives is that it's great for single-stream footage on a timeline. I still don't get great performance with CDNG files in Premiere, although DaVinci Resolve plays them without a hitch (I'm assuming this is a software issue). When I have multiple streams (so files on top of each other) I get laggy performance. In fact, I had very limited performance recently working on a 1080p RAW project. I created lossless proxies, which was probably a mistake, and had 2 video layers stacked up. There were times I would wait close to 20-30 seconds for my timeline to play. This is obviously a huge bottleneck, not only in my hardware but also in my editing workflow.

I'm curious whether anyone has had similar workflows stacking multiple high-demand footage layers on top of each other with SSDs in RAID 0. I would like to know if they perform better in this scenario. I'm looking to expand my RAID sets, separating the two with an additional LSI 9266-4i card to allow for a larger RAID 6 backup array. Right now I have an 8TB RAID 0 set, and I can go as low as 4TB for a working drive with SSDs. So if the performance goes way up, then I may consider it.

On a cheaper note, I would like to know more about Cachecade with LSI. I haven't been able to find much about it, and I would like to know how the hard drives have to be configured. Do I lose SATA connections if I connect 2 SSDs for caching, thus reducing my possible RAID to 6 drives plus 2 SSDs for caching? Also, any performance information would be greatly appreciated.
 
Most modern single drive spinning rust will sustain the transfer rates you need. But it won't be smooth changing between files if that happens a lot.
So that's when SSD sourced files comes to play.

That said, for shorter editing workflows I prefer to use RAM drives. No waiting whatsoever, fluid timeline movement; it makes SSD seem archaic. You have 64GB - maybe use it for main clips or smaller projects, or go to more RAM on a professional system if you're really worried about responsiveness.

The new PCIe SSD thingamajigs are pretty wicked too.
 
Adobe Premiere CC 2015: 4K H.264 = a disaster to play, even with dual Xeon 2670 v2, 64 GB, GTX 970 4GB
-> converting to Cineform: works flawlessly! :)

For two big projects, each spanning 2 years of editing, one on Avid Media Composer v8, the other with Adobe Premiere CC 2015:
-> first I assembled a ZFS storage system on a 10GbE network with 24x 2 TB WD SE set up as 8x RAID-Z of 3 HDDs, on OmniOS and napp-it -> OK, but slow due to SMB 1
-> then I modified this setup to try ZFS Guru -> OK, but share names limited to 8 characters !!!
-> then I modified it again to install Ubuntu 14.xx + ZFS on Linux with 2x 12 TB (striped mirrors) -> better performance than the others + SMB 3

Today I'm looking to build two 24-bay hot-swap servers, 10GbE:
-> MAIN: Xeon 26xx + 32 GB + LSI SAS 9271-8i (bulk): 11x 2 TB RAID 10 + 2x hot spare, Windows 8.1 / 2012 R2 to have SMB 3.1 + RDMA
-> BACKUP: ZFS with 4x RAID-Z2 of 6x 4 TB

OR

-> ZFS with OmniOS + napp-it, 4x RAID-Z of 4x 4 TB (real 512b, HGST 7K4000) on a Xeon 2670 + 128 GB + NVMe SSD for ZIL & log + IB ----->> IB + Xeon 2670 + 64 GB on Win 2012 R2 + StarWind Virtual SAN + SSD for cache + 10 GbE
..... to share, with SMB 3.1 & RDMA, a secure storage pool which could grow with projects
 

OR

-> ZFS with OmniOS + napp-it, 4x RAID-Z of 4x 4 TB (real 512b, HGST 7K4000) on a Xeon 2670 + 128 GB + NVMe SSD for ZIL & log + IB ----->> IB + Xeon 2670 + 64 GB on Win 2012 R2 + StarWind Virtual SAN + SSD for cache + 10 GbE

For 4K video editing you can use either local storage or a ZFS NAS/SAN appliance over FC/iSCSI or SMB.
The ZFS option will give you much better data security, with checksums on a crash-resistant filesystem, versioning with snaps,
backup with zfs send (which can even back up open files in their last on-disk state), and superior RAM-based cache options.

For 4K video editing with >500 MB/s throughput needs on a ZFS appliance with Solaris/OmniOS you need:
- a server-class mainboard with 10G Ethernet and an LSI HBA
- CPU is less relevant than RAM; prefer higher frequency over number of cores
- a lot of ECC RAM as read cache (between 16 and 128GB)
- a fast pool; best is an SSD-only pool, and prefer SSDs with power-loss protection.
In your case I would use a simple mirror of Samsung SM/PM863 from 960 GB up (high-performance pool) for current data. NVMe is an option if you need multi-user-capable editing.

Then add a larger pool for other data. Use a multi-RAID-10 setup, e.g. 2, 4, 6, 8 or more disks;
e.g. a pool of HGST Ultrastar He8 drives gives 8TB per mirror. If you instead double the number of disks/mirrors using 4TB HGST Ultrastar disks, this will double the IOPS capability.

A ZIL is not necessary, and an NVMe L2ARC helps only in some use cases with very large pools, not in your case.
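As a very rough way to compare mirror layouts, here is a small sketch; the per-disk throughput and IOPS figures are assumptions, and in practice the ARC read cache changes the picture a lot:

```python
# Rough capacity/performance model for a ZFS pool built from two-way mirrors.
# Per-disk figures are assumptions; real read performance depends heavily on the ARC.
def mirror_pool(n_mirrors: int, disk_tb: float, disk_mb_s: float = 180, disk_iops: int = 150):
    usable_tb = n_mirrors * disk_tb          # one disk's worth of space per mirror
    read_mb_s = 2 * n_mirrors * disk_mb_s    # reads can be served from both sides of a mirror
    write_mb_s = n_mirrors * disk_mb_s       # writes go to both disks of each mirror
    read_iops = 2 * n_mirrors * disk_iops
    return usable_tb, read_mb_s, write_mb_s, read_iops

# e.g. one mirror of 8TB disks vs. two or four mirrors of 4TB disks
for n, size in [(1, 8), (2, 4), (4, 4)]:
    print(f"{n} x mirror of {size}TB:", mirror_pool(n, size))
```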

Setup: http://www.napp-it.org/doc/downloads/napp-it.pdf
Build examples: http://www.napp-it.org/doc/downloads/napp-it_build_examples.pdf
Performance and tunings on OmniOS and OSX/Windows: http://napp-it.org/doc/downloads/performance_smb2.pdf
 
A very belated post - dunno, out of my league, but some newb thoughts:

a/ An important question is whether your GPU works just as happily with 8 lanes as with 16. If so, you can free up some valuable and usually scarce I/O lanes.

b/ No one suggests software RAID? I hear it's good.

c/ I have seen cheap and cheerful 4-lane PCIe Gen 2 cards with 4x SATA ports for ~$65, which offer what Intel seems to be making such a fuss about with Optane.

https://www.amazon.com/StarTech-com-Express-Controller-HyperDuo-Tiering/dp/B00BUC3N74

The card allows an SSD to "cache" an HDD, or it also allows (AFAIK) the SSD to "cache" the other 3 ports if they are used for a RAID 0 HDD array.

Maybe something similar exists in a more professional product.



"The PEXSAT34RH 4-Port PCI Express 2.0 SATA Controller Card with HyperDuo adds 4 AHCI SATA III ports to a computer through a PCIe slot (x2), delivering multiple internal 6Gbps connections for high-performance hard drives and Solid State Drives (SSDs).

https://www.newegg.com/Product/Product.aspx?Item=N82E16815158365

"Featuring HyperDuo technology, the SATA card offers SSD auto-tiering which lets you balance the performance advantages of SSD storage with the cost-effectiveness and large capacity of standard hard drives. By combining SSD and HDD drives into a single volume (up to 3 SSD + 1 HDD), HyperDuo discreetly works in the background to identify and move frequently accessed files to the faster SSD drive(s) for improved data throughput – up to 80% of SSD performance! (Note: The HyperDuo automatic storage tiering feature is compatible with Windows® XP, Vista, 7 and 8 only)

The PCIe SATA controller card supports Port Multiplier (PM), enabling multiple SATA drives to be connected to one port over a single cable, for a total of 7 drives (Up to 4 drives through PM on one port, and a single drive to the remaining 3 ports)."

I realise cache has its limits in this mainly sequential r/w application, but any non-sequential r/w would destroy performance - relative access times are off the scale for HDD vs SSD.
 