Hello Forum,
I'm working for a film post-production company. At peak times we have 20 Windows 7 workstations and 16 render nodes accessing our Fedora/Samba 3 RAID system via 1 GbE. The server hits an I/O limit, mainly because the network is saturated. Apart from these deadline problems, the system has been running stably for some years.
We are now planning our next storage system and love the simplicity of our current solution. But we're aiming at 1 PB by the end of 2013 and would like to add hardware on demand, starting with perhaps 0.25 PB. I understand that this will give us a different set of problems to solve than we had before.
Looking at the market, we see systems like Isilon with virtualised storage and massive I/O. They look impressive but are a bit over our budget, so we want to develop our system ourselves.
Software:
We would like to use ZFS with RAID-Z3, preferably on CentOS (if it becomes stable there soon). We like ZFS's caching concept with RAM (ARC), ZIL, and L2ARC. File sharing would be over CIFS. We have a separate AD controller (Windows Server 2008) for user management.
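To make the software side concrete, here is a minimal sketch of the pool layout we have in mind. Device names, vdev widths, and SSD roles are placeholders and assumptions, not a tested configuration:

```shell
# Pool of two 12-disk RAID-Z3 vdevs (placeholder device names);
# each vdev tolerates the loss of any 3 disks.
zpool create tank \
    raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl \
    raidz3 sdm sdn sdo sdp sdq sdr sds sdt sdu sdv sdw sdx

# Mirrored SSD pair as a dedicated ZIL (SLOG) for synchronous writes.
zpool add tank log mirror ssd0 ssd1

# Larger SSD as L2ARC read cache.
zpool add tank cache ssd2

# Export over CIFS via the ZFS sharesmb property
# (alternatively, manage smb.conf by hand with Samba/CTDB).
zfs set sharesmb=on tank
```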
Hardware:
Supermicro
Xeon, whatever is needed
RAM, plenty
48 SAS bays (software RAID)
SSD cache
10 GbE Ethernet (or FC)
InfiniBand interconnect (if we need to cluster)
The plan we have right now is to start with 2 or 3 servers and add whatever is needed, when it's needed.
So my questions:
1. Is a clustered filesystem necessary?
2. What is your experience in "virtualising" storage?
3. How would you try to achieve this - I would really like to hear your opinion.
4. Performance is critical - how can we avoid bottlenecks?
5. Any hints on OS, clusters, etc. are much appreciated.
6. How does Samba scale? Is CTDB a must? Which underlying filesystem works well?
7. How are you managing your solution?
I can provide more information if necessary (failover, IOPS, ...). Thank you for your time.
Johannes