Building an Openfiler SAN, need some input.

The story for this build is that I work for an IT managed services provider. We work mainly with companies with under 200 employees. I love it because I get to use a great mix of hardware and software and can tailor solutions to individual needs rather than to a one-size-fits-all company template. We are in the process of getting a quote together for a client who is going to be doing a P2V conversion, going from HP DL360 G5s to the same servers just running Server 2008 R2 Enterprise + Hyper-V. This client is very budget conscious; it's not that they don't have the money for really high-end gear, they just don't want to spend it if they don't have to. Originally my colleague and I were going to use a DroboElite with 8 2TB Western Digital RE4 drives. For the $7,000 it was going to cost, it was going to do everything we wanted much cheaper than HP, Dell, EMC, Sun, or Promise could offer. Then today I got a call from my colleague asking if I had ever heard of Openfiler. I said I had, but had never used it myself and was excited to. After about 6 hours of going over configurations, changing this and tweaking that, we finally settled on one we think is going to suit our needs perfectly.

The overall goal was to get a 10TB RAID 5 for as close to $3000 as we could.

1x Supermicro CSE-836TQ-R800B 3U Rackmount Chassis $899.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16811152090

1x ASUS P5Q Pro Turbo LGA 775 Motherboard $124.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131377

1x Intel Pentium E5300 Wolfdale 2.6Ghz LGA 775 Processor $69.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16819116074

2x Corsair XMS3 4GB (2x2GB) DDR3-1600 (8GB) RAM $195.98
http://www.newegg.com/Product/Product.aspx?Item=N82E16820145260

1x EVGA GeForce FX5200 PCI Video Card $34.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16814130188

1x Intel Pro/1000 PT Quad RJ-45 Server NIC $406.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16833106019

1x HighPoint RocketRAID 2340 PCI-E x8 RAID Card $479.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16816115031

1x Western Digital Caviar Blue WD800AAJS 80GB 7200RPM SATA II (OS Drive) $36.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16822136195

12x Western Digital Caviar Black WD1501FASS 1.5TB 7200RPM SATA II (RAID) $1319.88
http://www.newegg.com/Product/Product.aspx?Item=N82E16822136592

Total, including 2 years of hardware warranty offered by Newegg: $3,610.76


So here are the big questions:
1: Are we going to have any unexpected surprises with Openfiler and this config?
2: What parts would you change and why?
 
I'd change the motherboard to a SuperMicro X8SIL-F with a Xeon X3430 and ECC RAM. Remote control and ECC RAM are worth it. It also has 2 onboard Intel NICs, so you don't need a Quad NIC.

Change the RAID card to an Adaptec or an Areca. Both make 16 port cards.

Change the drives to Hitachi - non RE series WDs don't do well in RAID. Don't bother with an OS drive. Just carve a LUN.

Has OpenFiler been updated from 2.3? If not, then it's still using the ancient IET iSCSI target, which will not work with Windows 2008 clustering, let alone Hyper-V. Unless you're willing to pay, you'll have to figure out the latest RC of SCST on Linux or maybe istgt on FreeBSD.
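
For reference, an IET target definition in Openfiler 2.3 boils down to an ietd.conf block roughly like this (a sketch only; the IQN, volume path, and CHAP credentials below are made-up placeholders, and Openfiler normally generates this for you from the web GUI):

    Target iqn.2010-07.com.example:vg0.hyperv-lun0
        Lun 0 Path=/dev/vg0/hyperv_lun0,Type=blockio
        IncomingUser chapuser chapsecret12345
        MaxConnections 1

No amount of tweaking in there gets you the SCSI-3 persistent reservations that 2008 failover clustering expects, which, as I understand it, is the core of the problem.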
 
You're going for a $900 Supermicro case but skimping on the rest? Doesn't really make all that much sense. I'm going to agree with the post above me except for the bit about Adaptec.
 
I have to agree on the RAID card. I have been in IT for 13 years and there are NO companies selling good servers with HighPoint RAID cards in them. Switch to a good card. I really like the Dell PERC cards (LSI based), but I don't know if any of them go to 12 drives without expanders, which I have personally not used. I have my PERC 5/i maxed with 8 drives, so I could be looking at expanders in the near future.

As for the motherboard you cannot go wrong with SuperMicro.

You will most likely need more than 2 Gig NICs. I have my setup such that I have 2 Gig NICs dedicated for iSCSI traffic only. Then I have 1 Gig NIC for management and SMB/NFS traffic.

I have personally had no problems with Western Digital non-RE drives being used in a RAID array. I have 4 WD 6400AAKS 7200 RPM drives in RAID 10 and they work great! I know many folks on here talk about time out problems, guess I am lucky. I would go either Hitachi or Western Digital. Both seem to have the best drives right now.
 
I'm still interested in knowing why people keep on buying 3+ year old NICs. You're far better off getting a real server board and an Intel ET dual:

http://www.newegg.com/Product/Product.aspx?Item=N82E16813182235
http://www.newegg.com/Product/Product.aspx?Item=N82E16833106034

Edit: The newegg description of the ET dual is totally off.

Use the onboard 82574Ls (which are superior to the Pro/1000 PT) for management, and use the ET duals for your VMs (they have some VT offloads supported by Hyper-V). Failing that, if you insist on getting a quad adapter, get the ET Quad (pretty much the same price as the PT quad):

http://www.amazon.com/Intel-Quad-Port-Server-Adapter/dp/B0025KXUJQ
 
I'd change the motherboard to a SuperMicro X8SIL-F with a Xeon X3430 and ECC RAM. Remote control and ECC RAM are worth it.

The problem is this adds about $390 to the price, not to mention the extra spent on the ILO card. Bringing in the X3430 and ECC RAM would be useful in a media server where you were encoding video for live streaming; however, all this server is doing is I/O, and thus there is a bit too much horsepower there for what we need.

It also has 2 onboard Intel NICs, so you don't need a Quad NIC.

True, however the plan was to team the NICs and thus have 2x 2Gbps connections for multipath.
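
(If we end up doing proper MPIO on the Hyper-V hosts rather than relying on the team alone, the 2008 R2 side is roughly one command once the MPIO feature is installed; a sketch only, and I believe this is the stock device string for the Microsoft iSCSI bus:

    mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

after which the iSCSI initiator lets you add multiple sessions and pick round robin / failover policies per LUN.)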

Change the RAID card to an Adaptec or an Areca. Both make 16 port cards.

I would probably agree; the HighPoint was chosen 1) for cost, and 2) because it comes with the cables. I know where to find SAS breakout cables, but if you have suggestions for other specific cards I am open.

Change the drives to Hitachi - non RE series WDs don't do well in RAID. Don't bother with an OS drive. Just carve a LUN.

I was looking at the Hitachi 7K2000s but was shaky because they are "Deathstar" drives. I was also looking at the Seagate ST31500341AS; I liked the price, but the WDs have a bigger cache and I have always had good luck with WD. As far as the OS drive goes, we wanted a drive that would be separate so that if the OS gets corrupted we can reinstall / reformat without damaging the data array. And in reverse, if the RAID controller card bites the dust, we have the OS drive separate so that we just need to replace the card, let the array rebuild / repair, and we are back in business.

Has OpenFiler been updated from 2.3? If not, then it's still using the ancient IET iSCSI target, which will not work with Windows 2008 clustering, let alone Hyper-V. Unless you're willing to pay, you'll have to figure out the latest RC of SCST on Linux or maybe istgt on FreeBSD.

Openfiler has been updated and the current version for download on their server is 2.3. As far as paid versions go, since this is for a business environment, paid support is not an issue. If we have to buy a license to get extra functionality, then so be it.
 
You're going for a $900 Supermicro case but skimping on the rest? Doesn't really make all that much sense. I'm going to agree with the post above me except for the bit about Adaptec.

I personally would prefer a Norco 3116 / 3216; however, the Supermicro comes with redundant 800W PSUs, which is the only reason we are using it.
 
I have to agree on the RAID card. I have been in IT for 13 years and there are NO companies selling good servers with HighPoint RAID cards in them. Switch to a good card. I really like the Dell PERC cards (LSI based), but I don't know if any of them go to 12 drives without expanders, which I have personally not used. I have my PERC 5/i maxed with 8 drives, so I could be looking at expanders in the near future.

Personally I am a big fan of the Dell PERC 6/i or the HP P410 because of their cache options and BBWC. However as stated a $600 adapter + a SAS expander card goes outside the budget.

You will most likely need more than 2 Gig NICs. I have my setup such that I have 2 Gig NICs dedicated for iSCSI traffic only. Then I have 1 Gig NIC for management and SMB/NFS traffic.

That's the plan: 1 NIC (the onboard) will be there as the management interface, and the quad port will be there for the iSCSI traffic.

I have personally had no problems with Western Digital non-RE drives being used in a RAID array. I have 4 WD 6400AAKS 7200 RPM drives in RAID 10 and they work great! I know many folks on here talk about time out problems, guess I am lucky. I would go either Hitachi or Western Digital. Both seem to have the best drives right now.

I have similar experiences with the WD Caviar Blue series, which is why I wasn't that worried, but again I was also looking into Hitachi and Seagate.
 
Openfiler is free and works pretty well for iSCSI use. But personally I've had better luck with Open-E. The software is not that expensive, and it works REALLY well. Plus it's updated often. However, for the price (i.e. free) you can't beat Openfiler.
 
The problem is this adds about $390 to the price, not to mention the extra spent on the ILO card. Bringing in the X3430 and ECC RAM would be useful in a media server where you were encoding video for live streaming; however, all this server is doing is I/O, and thus there is a bit too much horsepower there for what we need.

If you are going to be using this in any type of server environment, then the cost savings of not using ECC are simply not worth it. If you don't need the processing speed, get a G6950, which works in the SIL board and supports ECC in that board. You can pick them up for <$100.

Also switch out the case for either the A or E16 case. The A case features mini-iPass connectors (aka 4x) and the E16 has an integrated SAS expander in the backplane with a mini-iPass connector. Get something like an LSI 9260-4i with the E16, and all you need to connect is 1 iPass connector and you are done. The 9260-4i is ~$300 and the E16 is only ~$150 more than the TQ, which means for effectively the same cost as the HighPoint card you have a much better system. A single iPass connector provides 24 Gb/s of bandwidth, which is plenty for file serving over Ethernet. Even a 10GbE connection would become the bottleneck before the disks.
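
To put rough numbers on that (ballpark figures): a mini-iPass (SFF-8087) connector carries 4 lanes, so at SAS 2.0 speeds that is 4 x 6 Gb/s = 24 Gb/s, or roughly 3 GB/s. Twelve 7200rpm drives at ~100-130 MB/s sequential each top out around 1.2-1.5 GB/s combined, and a single 10GbE link is about 1.25 GB/s, so the one connector is never the limiting factor.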


True, however the plan was to team the NICs and thus have 2x 2Gbps connections for multipath.

Go with the SIL board, team the two NICs on board, and add a card with 2/4 NICs if you need it.



I would probably agree; the HighPoint was chosen 1) for cost, and 2) because it comes with the cables. I know where to find SAS breakout cables, but if you have suggestions for other specific cards I am open.

As I previously stated, using an LSI 9260 and the E16 would cost the same, require 1 cable, give better performance, and be easier to set up.



I was looking at the Hitachi 7K2000s but was shaky because they are "Deathstar" drives. I was also looking at the Seagate ST31500341AS; I liked the price, but the WDs have a bigger cache and I have always had good luck with WD. As far as the OS drive goes, we wanted a drive that would be separate so that if the OS gets corrupted we can reinstall / reformat without damaging the data array. And in reverse, if the RAID controller card bites the dust, we have the OS drive separate so that we just need to replace the card, let the array rebuild / repair, and we are back in business.

Um, the "deathstars" were years ago. By all reports the 7k2000 and A7k2000 (effectively the enterprise version of the 7k2000) are some of the most reliable drives on the market at the moment.

For an OS drive for OF, I would look at USB/SATA DOMs, etc., in all honesty. They'll be more reliable than any mechanical drive. Even a fast SLC USB drive plugged into the onboard USB header of the SIL is fine. That, or a cheap, reliable 2.5" laptop drive or SSD.
 
Personally I am a big fan of the Dell PERC 6/i or the HP P410 because of their cache options and BBWC. However as stated a $600 adapter + a SAS expander card goes outside the budget.

Meh, you should go with the E16 version of the case and an LSI 9260-4i, which is a better card than either the P410 or 6/i, only requires 1 iPass cable, and has full integrated management functionality with the drive backplane (SuperMicro uses LSI SAS expander chips). You can get the 9260-4i for ~$300 retail currently; it's actually an excellent value. The E16 versions of the Supermicro cases are only ~$150 more than the TQs. This costs the same as the HighPoint card you were going to buy; add another $100 to get the battery module for the 9260 and you're set.

That's the plan: 1 NIC (the onboard) will be there as the management interface, and the quad port will be there for the iSCSI traffic.

The SIL contains 3 NICs: 2 Intel GbE NICs with LACP support and a 100 Mbit management NIC with integrated IPMI and support for remote mounting of USB/ISO/etc. You can literally set it up so that you can remotely log into the IPMI interface from the other side of the world and set up the BIOS, etc. The integrated IPMI is the best thing ever.
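
As a quick example of what the IPMI gets you: once the dedicated port has an IP, you can drive it from ipmitool on any Linux box, something like the following (sketch only; the IP is a placeholder and ADMIN/ADMIN is just the usual Supermicro factory default):

    ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN chassis power status
    ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN sol activate

and the web interface adds full KVM-over-IP and virtual media on top of that.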
 
Aaronspink were you talking about a supermicro case like this? http://www.supermicro.com/products/chassis/3U/936/SC936E16-R1200.cfm

Also what "SIL" motherboard were you recommending?

As for the rest, a new possible configuration is this:

1x SuperMicro MBD-X8SIA-F-O LGA 1156
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182235

1x Intel Core i3-530 2.93Ghz Clarkdale
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115222

2x Kingston 8GB (2x 4GB) DDR3-1333 ECC Registered Memory Kit
http://www.newegg.com/Product/Product.aspx?Item=N82E16820134978

2x Intel Pro/1000 ET Dual Port RJ-45
http://www.newegg.com/Product/Product.aspx?Item=N82E16833106034

Drives, RAID card, and chassis are still up in the air. The RAID controller will depend on the chassis, and the drives will depend on us agreeing on those Hitachis. Does anyone have anything against the Seagates I posted? I got the idea for them off the Backblaze posting on their 67TB storage pods and thus figured that they would be fine in RAID.
 
Aaronspink were you talking about a supermicro case like this? http://www.supermicro.com/products/chassis/3U/936/SC936E16-R1200.cfm

Also what "SIL" motherboard were you recommending?

For the case I was thinking: http://www.supermicro.com/products/chassis/3U/836/SC836E16-R1200.cfm which is the same as your original, just with the SAS 2.0 backplane. SIL is the X8SIL-F. Just a smaller version of the X8SIA-F really.

1x Intel Core i3-530 2.93Ghz Clarkdale
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115222

2x Kingston 8GB (2x 4GB) DDR3-1333 ECC Registered Memory Kit
http://www.newegg.com/Product/Product.aspx?Item=N82E16820134978

Only the X34xx and L34xx series support registered memory; the i3/G6950 only support unbuffered. So you either need to switch the i3 for an X3430/X3440 or switch the memory to http://www.ec.kingston.com/ecom/con.../www.kingston.com&ktcpartno=KVR1333D3E9SK2/8G.
 
Some of my remarks:

Do you really need a PCI video card? Can't you go onboard? It's 35 dollars wasted that could be spent on more RAM, a better CPU, etc.

You may need TLER-capable "RAID edition" disks if you want hardware RAID 5; or, if you want to save on hardware cost, use non-TLER disks with Linux/BSD software RAID, which works great.
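
If you go the software RAID route, the core of it is a single mdadm command, roughly like this (a sketch only; the device names are placeholders and will differ depending on your controller):

    mdadm --create /dev/md0 --level=5 --raid-devices=12 /dev/sd[b-m]
    mdadm --detail /dev/md0    # check health / watch the initial resync

and Openfiler should then be able to carve LVM volumes and iSCSI LUNs out of /dev/md0 like any other block device.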

Also keep in mind 12 x 7200rpm disks will get rather hot; 5400rpm disks would be easier on cooling, unless you expect a lot of random I/O on your server, which is likely with so many clients. You should focus on a lot of RAM and caching the most accessed data.

Are you going to do link aggregation with your quad Intel NIC, or just separate gigabit uplinks?
 
Thank you all for the excellent input, and for pointing out those things we missed. It's always good to have extra sets of eyes look at the config just to make sure everything is going to work the way we want it to.

We re-evaluated the client's needs and decided that really they could get away with having 2TB of disk for their Hyper-V environment and therefore the 10TB goal was not as important as the $3000 goal was.
 
Case - Norco RPC-2008 $194.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16811219024

PSU - Corsair HX Series 750W $149.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16817139010

Motherboard - Supermicro MBD-H8SGL-F-O Socket G34 $264.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182230

CPU - AMD Opteron 6128 Magny-Cours 2Ghz 8 Core Socket G34 $280.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16819105266

CPU Cooling - Dynatron A6 $34.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16835114113

RAM - Kingston 8GB (2x4GB) DDR3-1333 ECC Buffered $277.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16820134978

RAID Controller - LSI MegaRAID 8i $269.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16816118127

SAS connector cable - 2x Adaptec Breakout $59.90
http://www.newegg.com/Product/Product.aspx?Item=N82E16816103197

NIC - 2x Intel Pro/1000 ET Dual port $278.98
http://www.newegg.com/Product/Product.aspx?Item=N82E16833106020

OS Drive - WD Scorpio Blue 160GB 7200RPM SATA II $49.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16822136278

RAID Drives - Seagate Constellation ES 1TB $1279.92
http://www.newegg.com/Product/Product.aspx?Item=N82E16822148590

Total $3142.89

Even though, formatted in RAID 5, we will only get 5.3TB out of this solution, we feel that it will be perfect, and if we need to go to 2TB drives because the client wants more space we can swap the Seagates for RE4s. Also, as we have no way of knowing yet what the loads are truly going to be like, we figured 8GB of RAM would be a good base, and it could be expanded in the future if needed.

As far as the NICs go, each NIC is going to be teamed (so port 1 on NIC 1 and port 1 on NIC 2 in a team), roughly as sketched below. This is going to be for 2 reasons:
1: Failover, so if we lose a NIC we don't lose the connection.
2: Speed, a bigger pipe means the network is no longer the bottleneck, the disks are, and that is what we would prefer.
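
On the Openfiler box itself, a team like that comes out as standard Linux bonding, something along these lines (a sketch only; the interface names, bonding mode, and IP are placeholders, Openfiler's web GUI writes its own equivalent, and 802.3ad mode needs LACP configured on the switch):

    # modprobe.conf
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.10.10
    NETMASK=255.255.255.0
    ONBOOT=yes

    # ifcfg-eth2 (and the same for eth3)
    DEVICE=eth2
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes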

We will use the LSI to do hardware RAID 5, and then attach the OS disk to the motherboard SATA controller.
 