XSAN - OpenFiler

Matt897 (Weaksauce, joined May 4, 2010, 87 messages)
Not to double post, but I don't think I posted properly in the other section - and nobody seemed to have an answer.

I work with some editors as their IT guy. Currently they shuffle external drives between one another for editing projects in Final Cut 7. I'd like to simplify the process by getting them onto a SAN-like system where they can store the master footage and essentially edit off the server. The only problem is that, from what I've seen, things can get costly very quickly. I've heard that an ISIS 5000 server runs in the $20,000 range, which is ridiculous considering it's basically a 32 TB machine that runs Windows Server and some proprietary Avid software. Cost is a big issue, and I think we're limited to about $4,000. Please keep in mind that our floor is not wired for Fibre Channel; I can easily do multiple gigabit connections, and possibly 10 gigabit.

While sticking with Final Cut 7, I was imagining running an Openfiler server (or similar) in a virtual machine on a Mac Pro.

That would eliminate the cost of Fibre Channel adapters and would hopefully allow Xsan to work - anyone have experience with this?

Also, does anyone have experience running Openfiler in a virtual machine with good enough performance that people can edit off it? I'd consider an SSD boot drive too, with WD Red NAS drives for storage.

Other suggestions are welcome! I'd like to stick with OS X because of the Xsan element. If that wouldn't work out, I'm open to ideas.


Thank you for the input!
 
You could likely throw (8) 3 TB drives into a Drobo B800i for around $4k. It's iSCSI, so it should work fine with your Xsan environment.

Do you already have a machine running a hypervisor with spare overhead to allocate to running Openfiler? Because having to buy a server, and then VMware licenses, is going to eat through $4k in the blink of an eye, and that's before you even get to storage.
 
What does he need VMware licenses for? The free version of vSphere 5 should work fine, no?
 
OP, the big question is how much disk I/O the users will need. The most common usage for your case will be access via an SMB share, and unless you'll be ponying up for 10GbE hardware, you'll be limited to a theoretical 125 MB/s (real-world 100-110 MB/s) sustained read/write speed via Gigabit Ethernet. If this is acceptable, I'd look at a good NAS solution for them: QNAP (my preferred) or Synology. You should be able to build or buy an 8-bay unit (rack or tower) filled with 2 TB+ enterprise SATA drives for around $4k.
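As a quick sanity check on that gigabit ceiling (a sketch; the 1000 Mb/s line rate ignores Ethernet/TCP/SMB overhead, which is why real-world numbers land closer to 100-110 MB/s):

```shell
# Theoretical line rate of a single gigabit link, in MB/s:
# 1 Gb/s = 1000 Mb/s, divided by 8 bits per byte.
gbe_mbit=1000
gbe_mbyte=$((gbe_mbit / 8))
echo "${gbe_mbyte} MB/s theoretical per GbE link"
```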
 
I don't edit videos myself, but I was under the impression people built crazy RAID 0 arrays for that work, so I doubt any reasonable (or even not-so-reasonable) network setup will really keep up. Now, if they only copy the raw video from the NAS and then work on it locally on an SSD, that could work (they'd go get a coffee during the initial copy). I'm not sure it's really an improvement over the external drives, though.
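A copy-locally workflow like that could be scripted with rsync; the mount points below are hypothetical, just to illustrate the pull/edit/push cycle:

```shell
# Hypothetical mount points: adjust to your NAS share and local scratch SSD.
SRC="/Volumes/footage/ProjectX"   # master footage on the NAS
DST="/Volumes/scratch/ProjectX"   # local SSD working copy

# Pull the raw footage down (archive mode preserves timestamps/permissions):
rsync -a --progress "$SRC/" "$DST/"

# ... edit locally in Final Cut 7 ...

# Push the finished project back to the NAS:
rsync -a --progress "$DST/" "$SRC/"
```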

Now I'm wondering how backups and file integrity are managed.
 
Maybe crazy RAID 0 for home use...

A typical small(er) system for video editing looks like this:

nearline - 66 TB RAID 6, 7.2k NL-SAS
realtime - 13 TB RAID 6, 10k SAS

which gets you:
Code:
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/testfs1             79T  432G   79T   1% /testfs1
from the realtime pool (20 spindles):
Code:
# time dd if=testfile.local of=/dev/null bs=2M
220757+0 records in
220757+0 records out
462960984064 bytes (463 GB) copied, 381.05 s, 1.2 GB/s
real    6m21.053s
2K playback (24 fps) comes in at just under 300 MB/s; double that for editing. Of course, you would need multiple bonded 1-gigabit links, or start looking at 10GbE or Fibre Channel, to service that.
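Working that math out (a rough sketch, using the real-world ~110 MB/s per-link figure from earlier in the thread rather than the theoretical 125):

```shell
playback=300                 # MB/s, 2K playback at 24 fps
editing=$((playback * 2))    # rule of thumb above: double it for editing
per_link=110                 # real-world MB/s per gigabit link

# Round up: bonded GbE links needed to sustain the editing rate.
links=$(( (editing + per_link - 1) / per_link ))
echo "${links} bonded GbE links for ${editing} MB/s"
```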


Nate's advice is solid and should fit within your budget. It's going to cost a *lot* more for something faster, and then you have to consider the cost of beefing the network up to match.
 
I would

- Skip Xsan and Openfiler
- Use a SAN based on ZFS (web appliance based on Solaris, FreeBSD, or Linux via CLI - check in this order)

Use as much RAM as possible (32-512 GB)
Prefer 10 GbE (Mac Pro), or at least 10 GbE or trunking to a switch and dedicated 1 Gb lines to your Macs

Use fast disks in multiple mirrors or multiple RAID-Z[1-3] vdevs (similar to multiple striped RAID 50/60)
Keep the fill rate below 60% (performance is always a function of fill rate)

Use
file-based access: AFP or NFS (SMB is really slow on Macs, thanks Apple)
block-based access: iSCSI (COMSTAR as target and globalSAN as initiator)

Software: free; prefer web-based appliance solutions (Solaris/OI/OmniOS-based: napp-it, or FreeBSD-based: NAS4Free, FreeNAS, ZFSguru)
Hardware: prefer SuperMicro cases and mainboards, LSI HBAs, and Intel NICs;
use at least a 19" case with 24 bays

Minimal costs:
Server + 24-slot SAS/SATA backplane without disks: $3,000 + disks (single Xeon, 32-64 GB ECC RAM, dual 1 GbE Intel, LSI HBAs)
Throughput: depends mostly on disks, RAID config, fill rate, NICs, and whether the RAM is large enough to deliver most reads from cache (buy RAM)

Others
Buy a second system for backups (maybe a smaller one with a single RAID-Z[1-3] vdev = slower but fewer disks;
in the extreme, an HP MicroServer with 4 x 4 TB disks = 12 TB capacity in RAID-Z1 = 200 euros without disks)
Replicate data between them asynchronously (transfers changed data blocks only), and use ZFS snapshots for versioning your data.
ZFS snapshots work without delay or initial space consumption, in contrast to Time Machine.
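The snapshot-plus-replication scheme above could look like this on the ZFS side (a sketch only; `tank`, `backup`, the dataset names, and the `backupbox` host are hypothetical, and the commands assume root on a ZFS-capable OS):

```shell
# Take a cheap, instant snapshot of the footage dataset:
zfs snapshot tank/footage@monday

# First full replication to the backup box:
zfs send tank/footage@monday | ssh backupbox zfs recv backup/footage

# Later: a second snapshot, then send only the blocks changed since monday:
zfs snapshot tank/footage@tuesday
zfs send -i tank/footage@monday tank/footage@tuesday | \
    ssh backupbox zfs recv backup/footage
```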
 
Wow, thank you guys for the huge response!

Yes I've taken a look at Drobo - it looks like an awesome option. I'm just concerned it wouldn't have the bandwidth to support 3 or 4 people editing off of it at once. Has anyone tried something like this?
I feel like a single gigabit connection wouldn't be fast enough - I'd like to avoid doing 10Gig because of the cost.

By skipping Openfiler, do people get good enough read/write speeds with FreeNAS? If I bonded maybe 6 NICs, that should bring my theoretical speed up to about 600 MB/s, right?
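For reference, bonding NICs on a Linux-based filer generally means the kernel `bonding` driver; a minimal sketch (interface names and the IP address are hypothetical, and note that with most bonding modes a single client stream still tops out at one link's speed - the aggregate only helps with multiple concurrent editors):

```shell
# Load the bonding driver and create bond0 in 802.3ad (LACP) mode.
# Requires a switch that supports LACP; run as root.
modprobe bonding
ip link add bond0 type bond mode 802.3ad

# Slaves must be down before being enslaved:
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0

ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```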

I might be able to get my hands on a PowerEdge 2900 that I'm looking to 'retire'. It should be good to load up with 8 drives, and I can theoretically fill it with 3 or 4 TB drives. I'll need to double-check, but it should be able to take up to 48 GB of RAM!

I've thought about the whole workflow of keeping raw files on the server, copying locally, working, then re-uploading to the server. Although, no, it doesn't really change much compared with external drives, it would give me the ability to centrally back up everything. Currently, when a final project is finished, it's sent offsite; however, I'd like to add the ability to back up the project files as well, since having to re-edit something in the event of a bad drive is ridiculous. I recently started here, and I'm only now getting into the nitty-gritty details of their lack of safe backups. Something needs to change.

Now that I'm writing it out, I'm starting to think that a realtime editing server might be a little out of reach considering the bandwidth requirements; it would probably be easier to have people copy locally, work, and upload when finished. I can set up either the PowerEdge or a self-built machine that will essentially be there for archival use. I can then back up to CrashPlan offsite and use the LTO-3 autoloader we have on the 'new' server for critical files.
 