Best way to connect a video editing station to a SAN

lithiumion

Hello All,

I'm building an OpenIndiana SAN to host around 20TB of video files and I want to connect this SAN to my Mac Pro machine so I can edit directly from/on the SAN. Any suggestions on the link I should use between the Mac Pro and the SAN?

A 1 Gigabit Ethernet link is not sufficient at all, and I need a much faster connection between the two.

Also, the distance between the two machines is 15 meters.


Thanks
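For a rough sense of why a single 1 Gb link runs out so quickly for editing, here is a back-of-the-envelope sketch in Python. The per-stream bitrates and the overhead factor are illustrative assumptions, not measured figures; check the data rates of your actual codec and resolution.

# Rough link-budget sketch: how many simultaneous streams fit on a link.
# The per-stream bitrates and the overhead factor are illustrative
# assumptions, not measured figures -- check your actual codec data rates.

ASSUMED_STREAM_MBPS = {
    "ProRes 422 HQ 1080p (approx.)": 220,
    "uncompressed 10-bit 1080p (approx.)": 1300,
}
LINKS_MBPS = {"1 GbE": 1000, "10 GbE": 10000}
PROTOCOL_OVERHEAD = 0.70  # assume ~70% of line rate usable for NFS/SMB payload

for link, line_rate in LINKS_MBPS.items():
    usable = line_rate * PROTOCOL_OVERHEAD
    for codec, mbps in ASSUMED_STREAM_MBPS.items():
        print(f"{link}: ~{int(usable // mbps)} stream(s) of {codec} at {mbps} Mb/s")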
 
InfiniBand or fiber would probably be the best bet for affordable >1Gbps speeds. Do you have a budget and/or a desired throughput in mind?
 
I would try:

On the Mac side: a 10 GbE adapter, e.g. from ATTO, or a Sonnet Thunderbolt-to-10 GbE converter.
On the Solarish side (I would use OmniOS): a 10 GbE adapter like an Intel X520 or X540-T1 (twisted pair).
Add a Netgear switch like the XS708, see http://www.netgear.com/landing/10gigabit/

or connect them directly without a switch.
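If you go the switchless route, the OmniOS side just needs a static address on the 10 GbE interface (and, optionally, jumbo frames). A minimal sketch, assuming the Intel card attaches as ixgbe0 and an arbitrary point-to-point subnet; adjust both to your setup.

# Minimal sketch of configuring the OmniOS/illumos end of a direct
# (switchless) 10 GbE link. The interface name and addresses are
# assumptions; Intel X520/X540 cards normally attach as ixgbe0 on illumos.
import subprocess

IFACE = "ixgbe0"         # assumed interface name
ADDR = "10.10.10.1/24"   # assumed point-to-point subnet

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Jumbo frames are optional but help large sequential transfers; both ends
# must agree on the MTU, and the link must not be in use when changing it.
run(["dladm", "set-linkprop", "-p", "mtu=9000", IFACE])

# Static address on the point-to-point link (the IP interface may need to be
# created first with ipadm create-ip / create-if, depending on the release).
run(["ipadm", "create-addr", "-T", "static", "-a", ADDR, f"{IFACE}/v4"])

The Mac end then gets a matching static address in the same subnet.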
 
Hello Jesse,

I want 8 Gbit of throughput. I have a friend who has a 4 Gbit link between his Mac Pro and a Promise VTrak and he is really suffering. I know it could be a misconfiguration issue on his end, but I want the assurance of the higher throughput.

As for the budget: I plan on spending $5K on the build, but I'm flexible on that too if spending more money means better performance/reliability.

Thanks
 
What version Mac Pro is it, and what PCIe slots do you have available on it? You may just be able to do a direct 10GbE-to-10GbE connection between the two. Most of the cards are x8, so you'll want to use an x16 PCIe slot on the Mac Pro, not one of the x4 slots.
 
Gea,

Very interesting idea with the Sonnet Thunderbolt-to-10 GbE converter. I will research that.

Any hints on why you would prefer OmniOS to OpenIndiana?

Thanks
 
What PCIe slots do you have open? You don't have thunderbolt, so you'll need a PCIe 10GbE card.

All are free except for one where the GPU is installed. Default setup.

I do have Thunderbolt on the MacBook Pro though, which I also use for editing.
 
All are free except for one where the GPU is installed. Default setup.

I do have Thunderbolt on the MacBook Pro though, which I also use for editing.

So you have a free PCIe 2.0 x16 slot: pop a 10GbE card in there, Thunderbolt-to-10GbE for your MacBook Pro, 10GbE on the SAN, throw the Netgear switch in the mix, and call it a day.
 
Any hints on why you would prefer OmniOS to OpenIndiana?
Thanks

OI server and OmniOS are extremely similar.

But with OmniOS you get:
- the newest ZFS features from Illumos, like LZ4 compression (see the sketch below)
- a stable edition (now on the third stable release, with a new stable every 6 months)
- biweekly updates
- optional commercial support

OI is a pure community project with currently very minimal development:
no stable release in sight, no updates, no bugfixes.
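As an aside on the LZ4 point: enabling it is a one-liner per dataset, and it is cheap enough to leave on even for video (already-compressed footage will simply show a ratio near 1.00x). A minimal sketch with a placeholder pool/dataset name:

# Sketch: enable LZ4 on a dataset and read back the achieved ratio.
# The dataset name is a placeholder.
import subprocess

DATASET = "tank/video"  # assumed pool/dataset name

subprocess.run(["zfs", "set", "compression=lz4", DATASET], check=True)
ratio = subprocess.run(
    ["zfs", "get", "-H", "-o", "value", "compressratio", DATASET],
    check=True, capture_output=True, text=True,
).stdout.strip()
print(f"{DATASET} compressratio: {ratio}")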
 
So, here is my SAN build list:

SUPERMICRO CSE-847E26-R1400LPB

Gigabyte GA-Z77X-UP5 TH ATX LGA1155 Motherboard (Has Two Thunderbolt Ports)

Intel Core i7-3770K 3.5GHz Quad-Core Processor

Corsair Vengeance 64GB DDR3 1600MHz RAM

Intel 520 Series Cherryville SSDSC2CW120A3K5 2.5" 120GB (OS SSD)

Intel 330 Series Maple Crest SSDSC2CT060A3K5 2.5" 60GB (Two SSDs as L2ARC)

LSI SAS 9211-8i RAID Controller

Intel E10G41AT2 AT2 Server Adapter 10Gbps PCI Express (Two of them)

The NETGEAR XS708E switch that Gea suggested

There is also the NETGEAR ProSafe GSM7328FS-100NAS managed switch, which looks good for the job.

I still need suggestions for the SAN drives, please. I want something reliable yet affordable. I need 12TB of storage that I can easily expand as needs grow (a rough capacity sketch follows below).

Also, what is the best cabling to use with this switch/setup?

Thanks
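For the drive question, a rough capacity sketch may help frame the choices. The drive size and layouts below are just examples, and the math ignores ZFS metadata overhead, TB-vs-TiB differences and free-space headroom, so treat the figures as optimistic:

# Back-of-the-envelope usable capacity for a few example vdev layouts.
# Drive size and layouts are assumptions; the math ignores ZFS metadata
# overhead, TB-vs-TiB differences and free-space headroom, so the figures
# are optimistic.

DRIVE_TB = 2  # assumed drive size; rerun with 3 or 4 as needed

LAYOUTS = {
    "1 x 8-disk raidz2":  {"vdevs": 1, "disks_per_vdev": 8, "parity": 2},
    "2 x 6-disk raidz2":  {"vdevs": 2, "disks_per_vdev": 6, "parity": 2},
    "6 x 2-disk mirrors": {"vdevs": 6, "disks_per_vdev": 2, "parity": 1},
}

for name, l in LAYOUTS.items():
    usable = l["vdevs"] * (l["disks_per_vdev"] - l["parity"]) * DRIVE_TB
    disks = l["vdevs"] * l["disks_per_vdev"]
    print(f"{name}: {disks} x {DRIVE_TB} TB disks, ~{usable} TB usable (raw)")

Note that a pool grows by adding whole new vdevs (a full raidz group, or a mirror pair at a time), which matters for the "easily expand" requirement.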
 
From what I'm aware, you can't simply drop any 10GbE card into a Mac Pro.
Small Tree offer some which are compatible with OS X (http://www.small-tree.com/10GbE_Cards_s/4.htm).
Supposedly their drivers will not work with regular cards...

Are you going hardware RAID or software? (Sorry if that's something built into OpenIndiana; I've never heard of it before.)

Any reason for the workstation board over something like the Supermicro X9SCM?
 
Yes, this card from Small Tree looks OK to me, and it uses twisted-pair cabling, which is really good.

Sorry, I forgot to list the SAS card of choice. It is the LSI SAS 9211-8i.

I have opted for the Gigabyte mainboard as it already has two Thunderbolt ports and two x16 PCI Express slots. Better future upgradeability.
 
So, here is my SAN build list:

SUPERMICRO CSE-847E26-R1400LPB

Gigabyte GA-Z77X-UP5 TH ATX LGA1155 Motherboard (Has Two Thunderbolt Ports)

Intel Core i7-3770K 3.5GHz Quad-Core Processor

Corsair Vengeance 32GB DDR3 1600MHz RAM

Intel 520 Series Cherryville SSDSC2CW120A3K5 2.5" 120GB (OS SSD)

LSI SAS 9211-8i RAID Controller

Intel E10G41AT2 AT2 Server Adapter 10Gbps PCI Express (Two of them)

The NETGEAR XS708E switch that Gea suggested

There is also the NETGEAR ProSafe GSM7328FS-100NAS managed switch, which looks good for the job.

I still need suggestions for the SAN drives, please. I want something reliable yet affordable. I need 12TB of storage that I can easily expand as needs grow.

Also, what is the best cabling to use with this switch/setup?

Thanks


There are no Thunderbolt drivers in Solaris.

You are better off with a real server-class mainboard.
Prefer Supermicro; think about a socket 2011 mainboard that supports up to 256 GB of ECC quad-channel RAM, pair it with a quad-core Xeon, and use 64GB+ of RAM.

For example, the X9SRH-7T with 10 GbE and an LSI 2308 onboard; add a 16-channel LSI card like the LSI 9201 and an 8-channel 9211.
Instead of the LSI 9211 you can use an IBM M1015; they are essentially the same after reflashing.

For the Mac, buy a card that includes a Mac driver.
For the disks, Seagate Constellations, Hitachi or WD Red should do the job.
But with SATA disks, I would avoid expanders.
 
I know Solaris doesn't support Thunderbolt so far, but if I decide later to turn that board into a Hackintosh it would be useful, no?

The LSI card I opted for is already an 8-channel one, as indicated by the "-8i" portion of its name.

Awesome suggestion for the X9SRH-7T! Thanks. I have found the X9SRH-7TF here for $600.

With two 10Gbit ports onboard I can save a bit.
 
I know Solaris doesn't support Thunderbolt so far, but if I decide later to turn that board into a Hackintosh it would be useful, no?

The LSI card I opted for is already an 8-channel one, as indicated by the "-8i" portion of its name.

Awesome suggestion for the X9SRH-7T! Thanks. I have found the X9SRH-7TF here for $600.

With two 10Gbit ports onboard I can save a bit.

Your SuperMicro case has 36 bays, so you would need four of these 9211s unless you use an expander with SAS disks (the mainboard has 3 PCIe slots).
The mainboard has 8 SAS ports onboard, so you need to add ports for the remaining bays, e.g. one 16-channel plus one or two 8-channel HBAs.
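A quick arithmetic check on the port counts being discussed, using the chassis and cards already mentioned (adjust the constants if the bay count or onboard controller differs):

import math

BAYS = 36            # SC847E26: 24 front + 12 rear bays
ONBOARD_SAS = 8      # LSI 2308 on the X9SRH-7T(F)
PORTS_PER_HBA = 8    # LSI 9211-8i / IBM M1015

extra_ports = BAYS - ONBOARD_SAS
extra_hbas = math.ceil(extra_ports / PORTS_PER_HBA)
print(f"{BAYS} bays - {ONBOARD_SAS} onboard ports = {extra_ports} ports still needed")
print(f"That is {extra_hbas} x 8-port HBAs, or one 16-port card plus "
      f"{math.ceil((extra_ports - 16) / PORTS_PER_HBA)} x 8-port card(s)")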
 
Your SuperMicro case has 36 bays, so you would need four of these 9211s unless you use an expander with SAS disks (the mainboard has 3 PCIe slots).
The mainboard has 8 SAS ports onboard, so you need to add ports for the remaining bays, e.g. one 16-channel plus one or two 8-channel HBAs.

This SuperMicro has built-in expanders in the backplane, as stated on SuperMicro's site here, and it was used in the ZFSBuild deployment here, so I should not need an additional expander. They also used the same LSI card.

This feature is what justifies the high price of this chassis for me.
 
This SuperMicro has built-in expanders in the backplane, as stated on SuperMicro's site here, and it was used in the ZFSBuild deployment here, so I should not need an additional expander? They also used the same LSI card.


With an expander, I would use SAS disks.
But in general, a solution with several HBAs is faster and more trouble-free, especially with SATA disks. (I even use a 50-bay Chenbro case with 6 x IBM M1015 adapters.)

The model without an expander seems to be:
http://www.supermicro.nl/products/chassis/4U/847/SC847A-R1400LP.cfm
 
With an expander, I would use SAS disks.
But in general, a solution with several HBAs is faster and more trouble-free, especially with SATA disks. (I even use a 50-bay Chenbro case with 6 x IBM M1015 adapters.)

Yes, but this will exceed my budget once we add the cost of the server-class mainboard, the drives, the extra 32GB of server RAM and the two L2ARC SSDs.

Besides, if I go SAS I should go proper SAS, not NL-SAS, and that is expensive.
 
Just noticed the notification about Amazon and its support of the forum. I shall find Amazon alternative links for my parts soon.
 
Yes, but this will exceed my budget once we add the cost of the server-class mainboard, the drives, the extra 32GB of server RAM and the two L2ARC SSDs.

Besides, if I go SAS I should go proper SAS, not NL-SAS, and that is expensive.

The server mainboard is only a little more than a dedicated 10 GbE adapter plus an LSI HBA (both are included).
If you want to go quite cheap, you can add 3 x IBM M1015 (= LSI 9211, each about 120 Euro/$) for your 36-bay case
(with 4 TB disks a 24-bay case may be enough; think more about a second, smaller box for backups).

You also do not need an L2ARC SSD; invest in RAM.
With pure HBAs, you can use regular, cheap 4 TB SATA disks instead of SAS without unexpected problems.

On the Mac side, an Intel X540-T1 (single port) may be enough.
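The "invest in RAM, skip L2ARC" advice can be sanity-checked later, once the box is under a real editing workload, by watching the ARC hit rate. A rough sketch for illumos/OmniOS using kstat; the zfs:0:arcstats statistic names used below are the standard ones, but treat this as a sketch rather than a monitoring tool:

# Rough ARC hit-rate check on illumos/OmniOS via kstat.
# If the hit rate stays high under a real editing workload, extra RAM or
# L2ARC won't buy much; if it stays low, RAM is the first thing to grow.
import subprocess

def arcstat(name):
    out = subprocess.run(
        ["kstat", "-p", f"zfs:0:arcstats:{name}"],
        check=True, capture_output=True, text=True,
    ).stdout
    # kstat -p prints "module:instance:name:statistic<TAB>value"
    return int(out.split()[-1])

hits, misses = arcstat("hits"), arcstat("misses")
print(f"ARC size: {arcstat('size') / 2**30:.1f} GiB")
if hits + misses:
    print(f"ARC hit rate since boot: {100.0 * hits / (hits + misses):.1f}%")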
 
Just out of curiosity, why don't you build a DAS instead? It will be much cheaper and less hassle to put together and configure.

24-bay drive case
PSU
HP expander (power board required)
RAID controller
HDDs
SAS cables
Done

If you are going with an LSI RAID controller, then get some SSDs and buy the CacheCade option. I have the first generation of this card, the MegaRAID SAS 9285CV-8e, and it works well for what I use it for (storage). I built a 10-disk RAID 6 array in a 20-bay case, which is at 85% capacity, so I'll be buying another 10 disks, building a second RAID 6 array, and using some kind of pooling software to make them look like one volume while keeping both arrays' redundancy.

Just a thought....
 
Just out of curiosity, why don't you build a DAS instead? It will be much cheaper and less hassle to put together and configure.

I suppose you have never used ZFS or similar.
With direct attached storage plus HFS (or ext or NTFS) you have:

- no copy-on-write (so no always-consistent file system, no skipping fsck, and no snapshots without delay or initial space consumption)
- no checksums, no data security
- no online scrubbing to fix silent errors, no self-healing filesystem
- no controller-independent pools built from software raids
- no RAM caching
- no SSD caching
- no multiuser access

Welcome to the world of horses just as someone has discovered the steam engine.
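Two of the items above, snapshots and scrubbing, come down to one-liners in practice. A minimal sketch with placeholder pool/dataset names:

# Sketch of two of the features above: an instant snapshot before a risky
# edit session, and a scrub to catch silent errors. Names are placeholders.
import subprocess
from datetime import datetime

POOL = "tank"            # assumed pool name
DATASET = "tank/video"   # assumed dataset name

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Snapshots are near-instant and consume no space until blocks diverge.
run(["zfs", "snapshot", f"{DATASET}@{datetime.now():%Y-%m-%d-%H%M}"])

# A scrub walks every block, verifies checksums and repairs from redundancy.
run(["zpool", "scrub", POOL])
run(["zpool", "status", POOL])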
 
Just chucking in my potentially misguided 2c worth...

With SAS expanders, doesn't the (usually) x8 slot the HBA sits in become a limiting factor when connecting to a large drive chassis, as opposed to using multiple HBAs to increase overall bandwidth? I'm hoping the answer is that it would be a factor for SSDs but not for spinning disks, because they are limited anyway.
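A rough numbers check for that question, with all per-device figures being approximate assumptions rather than measurements:

# Rough bandwidth check for the expander-vs-multiple-HBAs question.
# All per-device figures are approximate assumptions.

PCIE2_X8_GBS = 4.0        # ~usable PCIe 2.0 x8 bandwidth, per direction
SAS2_X4_UPLINK_GBS = 2.2  # ~one 4-lane 6 Gb/s SAS wide port after overhead
HDD_SEQ_GBS = 0.15        # ~150 MB/s sequential per spinning disk
SSD_SEQ_GBS = 0.5         # ~500 MB/s per SATA SSD

for n in (12, 24, 36):
    print(f"{n} HDDs: ~{n * HDD_SEQ_GBS:.1f} GB/s aggregate "
          f"(x8 slot ~{PCIE2_X8_GBS} GB/s, one expander uplink ~{SAS2_X4_UPLINK_GBS} GB/s)")

print(f"8 SSDs: ~{8 * SSD_SEQ_GBS:.1f} GB/s -- already past a single expander uplink")

So with spinning disks a single expander uplink tends to saturate well before the x8 slot does, while even a handful of SSDs can run into both limits.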
 
I suppose you have never used ZFS or similar.
With direct attached storage plus HFS (or ext or NTFS) you have:

- no copy-on-write (so no always-consistent file system, no skipping fsck, and no snapshots without delay or initial space consumption)
- no checksums, no data security
- no online scrubbing to fix silent errors, no self-healing filesystem
- no controller-independent pools built from software raids
- no RAM caching
- no SSD caching
- no multiuser access

Welcome to the world of horses just as someone has discovered the steam engine.

Considering my RAID card does some of that, the OP needed speed, and as far as I know only one workstation would access the storage. Dedup is useless on media files, and that's what the OP wants to edit.

Before I started building my storage, I tried most of the software I could get my hands on to see if I could build something that would fit my needs. After trying everything, I decided to just build a DAS, as I just wanted storage; I could share out the drive and my HTPC would simply stream the data. So far it has worked well, and since I set it up I haven't even bothered to check on anything. No drives have failed, and the system has never failed me. Guess that's what you get when you pay 1k for a stupid RAID card... ;)
 
There is one advantage to a solution built on local storage, and that is reduced complexity.
But the OP asked about SAN connectivity.

So the questions are:
Is a SAN with 10 GbE and multi-terabyte RAM for caching faster: yes, a lot.
Is data security, overall and in the case of a disk failure, better: yes, a lot.
Are there features that are essential, like ZFS snapshots without delay versus never-ending Time Machine sessions: yes.

Even if you pay 1k for a RAID card, you get none of the features of a Solaris ZFS SAN,
whereas you could put that 1k into a server mainboard with HBA, 10 GbE and a small Xeon.
 
Thanks, dhluke, for your input.

As Gea has already said, I do need a SAN, not a DAS.
I have been working with ZFS for a while and it is impressive in every aspect. Besides, I need connectivity and expandability. If I add another machine to my LAN and need it to access my DAS storage volume, I'll have to share that volume through the machine it is attached to. Not ideal for me.
 
If you want to save some money I would go AMD for the ZFS build. There's no reason to spend that kind of money on a single socket.

Is a SAN with 10 GbE and multi-terabyte RAM for caching faster: yes, a lot.
Be careful not to oversell this point. In the case of RAM caching, yes, ZFS will be faster... as long as you can keep the data set in memory and as long as you have the money to do so. If you can't, it will be no faster than other distributed-parity systems.

Video files in particular can be much larger than what most people deal with, so there is a greater chance that things fall outside of what RAM caching can handle. It also only kicks in on the second read; the first read receives little benefit. The OP could also go with a SAS switch, which would be cheaper (and easier) than 10GbE plus a ZFS SAN/NAS, and you would still be able to share storage. It's the switch (10GbE/SAS) that's really the most important factor here.

I'm not saying one is definitely better than the other, but it's not all roses, and there are things to think about with regard to where ZFS's strengths lie. Large media files are among the most resistant to corrupted data and the most likely not to fit in RAM completely, and that's just one thing to be mindful of. It doesn't discount ZFS; it's just something to think about.
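To put a rough number on the caching caveat, here is a small sketch comparing a plausible ARC size against an assumed active-project size and the ~20 TB library from the first post. The RAM size, ARC fraction and project size are illustrative assumptions.

# Quick feel for the caching caveat: how much of an active project fits in
# ARC. RAM size, ARC fraction and project size are illustrative assumptions.

RAM_GB = 64
ARC_FRACTION = 0.8          # rough share of RAM ZFS will use for ARC
PROJECT_FOOTAGE_GB = 1500   # assumed size of the footage in active use
TOTAL_LIBRARY_GB = 20000    # ~20 TB library from the first post

arc_gb = RAM_GB * ARC_FRACTION
print(f"ARC of ~{arc_gb:.0f} GB covers about "
      f"{100 * arc_gb / PROJECT_FOOTAGE_GB:.1f}% of the active project and "
      f"{100 * arc_gb / TOTAL_LIBRARY_GB:.2f}% of the whole library")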
 
I have lately been entertaining the idea of ZFS direct-attached storage via SAS to the servers.
I will definitely have to test how stable that driver is. It's also limited to 3 Gbit SAS speeds.
 
Just get some 10GbE cards and call it a day. Great performance, low(er) cost, and since you only have one client, no need for a switch.

And you are building a NAS, not a SAN.
 
Greetings

Sorry for the late reply, after you appear to have already made a decision, but I thought I would put forward my suggestion for consideration, as it may be an alternative scenario that is still available to you.

What version Mac Pro is it, and what PCIe slots do you have available on it? You may just be able to do a direct 10GbE-to-10GbE connection between the two. Most of the cards are x8, so you'll want to use an x16 PCIe slot on the Mac Pro, not one of the x4 slots.

If this is the case, then get two cheap 10GbE cards, but I'm not sure what cable would suit given the 15-meter length required; I presume you would have to use a Mellanox active fiber cable, which would be the best option provided it is sufficient.

I want 8 Gbit of throughput. I have a friend who has a 4 Gbit link between his Mac Pro and a Promise VTrak and he is really suffering. I know it could be a misconfiguration issue on his end, but I want the assurance of the higher throughput.

Well, in that case, have I got a solution for "higher throughput" for you! However, you'd better sit down while you're reading this, as it's not going to be cheap.

Firstly, you're going to need two FDR-rate InfiniBand cards and a 15-meter cable. If these cards can be configured point-to-point without a switch, then this is the best way to go, as you can do 10, 40 and 56 Gb InfiniBand and also 10 and 40 Gb Ethernet.

If you do require a switch, you will have to get a QDR one, which will restrict you to 40 Gb InfiniBand, as the cheapest 56 Gb InfiniBand switch with the smallest number of ports available is way too expensive.

As this is expensive equipment, you will want additional 4-year warranties to bring the cards, cables and switch up to 5 years of coverage in total.

Total cost without a switch will be $1,969.00 and with a switch $3,917.00, so amortised over the 5-year period we are talking about roughly $400.00 p.a. and $800.00 p.a. respectively, which, while expensive, is quite reasonable when you look at what a Promise VTrak or Facilis TerraBlock setup would cost on its own. Gigabit Ethernet came out what, 10 years ago, and we are all still waiting for cheap 10GbE gear (adapters and switches), which probably won't appear for another couple of years, whereas this gets you at least 40 Gb InfiniBand/Ethernet, which probably won't be seen at the consumer level for another 5-10 years, so your investment won't depreciate quickly. You should also have most of the 5 GB/s of bandwidth available if your NAS/SAN has as many spindles as possible; from what I understand, video workloads are mostly sequential reads/writes, so even individual consumer drives can deliver 100-200 MB/s each.

Since you're going the ZFS route, you don't need SAS drives and can use ordinary SATA drives, as you don't need the TLER feature. I also believe ZFS can read ahead on sequential reads, but I don't know whether that capability can be tuned upwards cache-wise. Do you really need the L2ARC SSDs, given you have 32GB of RAM in your system? How many simultaneous video streams will you have going at the same time?

If your NAS/SAN is fast enough, you would probably want a fairly large SSD in the host PC/Mac. Alternatively, given these are very high-end cards with full TCP/IP offload and RDMA, plus remote boot capability over iSCSI, InfiniBand and Ethernet, you might be better off getting rid of the local storage on your PC/Mac altogether, turning it into a diskless workstation, and keeping all the storage on your NAS/SAN.

Cheers
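For what it's worth, the amortisation and spindle arithmetic above is easy to check; a small sketch using the quoted prices and an assumed mid-range per-drive throughput:

# Numbers check for the InfiniBand suggestion: cost per year over the
# warranty period, and how many spindles it takes to feed the link.
# Per-drive throughput is an assumed mid-range figure.

COST_NO_SWITCH = 1969.00
COST_WITH_SWITCH = 3917.00
YEARS = 5

TARGET_GBS = 5.0   # ~payload ceiling quoted above
DRIVE_MBS = 150    # assumed sequential MB/s per consumer drive

for label, cost in (("no switch", COST_NO_SWITCH), ("with switch", COST_WITH_SWITCH)):
    print(f"{label}: ${cost:,.2f} -> about ${cost / YEARS:,.0f} per year over {YEARS} years")

print(f"~{TARGET_GBS * 1000 / DRIVE_MBS:.0f} drives at {DRIVE_MBS} MB/s "
      f"to saturate ~{TARGET_GBS:.0f} GB/s")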
 
HobartTas

Thanks for your input, but your suggested parts and prices are way beyond my budget and needs. Actually, I do not want to use fiber connections at all; this was the main reason behind posting my initial question here. I believe deploying fiber in such a small network is like consciously walking into a financial death trap.

I'm still going with what Gea suggested. It is the best solution considering the network size and the amount of money I'm willing to invest. I guess his way of thinking is very similar to mine too :)

Will keep you guys posted about how this project develops.
 