My ZFS / Nexenta HA cluster build

stevebaynet

I have decided to start this thread and document my build because I have seen others do it and have gotten a wealth of information from them. I once read an article with the quote "the reason open source doesn't work is because 99% take and only 1% give back". I may not be able to contribute code to ZFS or illumos, but I can at least document my trials and tribulations and put them somewhere other people can see them, benefit from them, and even debate the merits.

The end result will be a Nexenta-based HA cluster. I really like what I see with OI+napp-it, but I need HA and, most importantly for me, I need someone to call at 2am when something goes wrong, lol (this is for work).

Like most shops, we have (and have had) various legacy and proprietary solutions in our racks. The frustration for me has mainly centered around price vs. the usable lifetime of the product.

A colleague suggested Nexenta after the last VMworld, where Nexenta participated in the hands-on lab. This was actually my first exposure to ZFS and I was highly impressed with the features I saw. Namely:

- commodity hardware
- no vendor lock-in
- a copy-on-write file system with snapshots that don't drag down your performance

The articles and data that came out of the VMware HOL at VMworld showed me that ZFS could scale and really keep up with the legacy systems. I also really liked the hybrid storage model teaming SSDs with spinning disks.

At the time, there were (for me) three standout options:

- OpenSolaris/OpenIndiana + Napp-IT
- TrueNAS (commercial version of FreeNAS)
- Nexenta

I liked the free OI+Napp-IT idea, but for commercial production use it made me a little nervous (we do intend to use it for one of our backup SANs, though).

I quoted out solutions based on both TrueNAS and Nexenta to see how things would come out price-wise. But mainly I did my due diligence, looking up reviews, builds, feedback, problems... basically anything I could find.

For TrueNAS I pretty much came up with nothing, which scared me. I did, however, read a lot about FreeNAS and its implementation of ZFS. I assume TrueNAS was new at the time, so this is not a knock on them; plus, I think FreeNAS is a great piece of software. Ultimately, the lack of a track record with VMware-based setups led me elsewhere. For home or SOHO use I would definitely give it a second look, and I assume that with BSD's improving implementation of ZFS they will warrant another look down the road. (Love BSD.)

For Nexenta, I liked that they were a certified VMware partner, and I liked that I was able to google a good amount of information about them and their products. I did have some concerns though, mainly:

- Nexenta is based on OpenSolaris, which is effectively dead thanks to Oracle.
- Although Nexenta is at release 3, reviews and blog posts suggested that some real issues still persisted, which concerned me.
- The approved list of hardware seemed limited.
- Since the company provides a free community option alongside the commercial one, I looked at the community forums to see what kind of support was there, and it looked very limited to me.

Ultimately, after lots of thought, as well as replying to nearly every blog or post I could find out there about ZFS/Nexenta, we decided to go the Nexenta route.

My logic, based on the above concerns:

- Nexenta version 4 will be based on illumos (well, illumian really, but you get it).

It wasn't until I watched a video linked on this forum that I really understood the history of Solaris and why illumos is the future for ZFS.

http://www.youtube.com/watch?v=-zRN7XLCRhc

(If you have an hour to spare and you're interested in this kind of stuff, I recommend watching it.)

- Nexenta stability at version 3: My main concern was the problems I had seen online while doing my searches, mainly poor performance when a drive is starting to die (but hasn't died yet) and not being alerted to this by the Nexenta software (while being able to see the problem in the OS). I found some OS-level ways around this for alerting, and while I ultimately don't like having to do that, the only way to see it fixed is to join in, suggest workarounds, and file bug reports (and pester if it isn't fixed in the next update). I also noticed that a lot of community issues had to do with hacking hardware to work with the OS. That concerned me less, since I would be going the route of using only hardware tested with this system.

- The Nexenta HSL (hardware supported list): It is indeed limited, but I saw some flexibility here when comparing my self-built quoted solutions against pre-built vendor ones (which had items similar to the HSL, but not always exact matches). Bottom line: work with a Nexenta sales engineer and they can help you work out a solution that will hit the target goal (and, in most cases, price).

- Community support: The nexentastor.org forum is built on horrid forum software (I hear they are replacing it) and at first I did not see much activity there. Over the past few months I have seen it improve, with more help coming from community members as well as more Nexenta employees starting to show up on the boards. This was encouraging for me. It also led me to join myself, and while I am certainly no Nexenta or ZFS guru, having a second pair of eyes looking at logical problems seems to help regardless. I see fewer posts going unanswered now.

Holy crap, this is turning into a novel. Next up: why we decided to go the self-built route vs. a Nexenta/ZFS hardware partner.

(and after that, pictures of the build and the install)

I'd like future posts to include the testing phases while we put this thing through its paces. I'll have lots of questions on this forum, I assume, as I have never used IOMeter or Bonnie. lol. So bear with me.
 
I should also add that the intended usage of this SAN appliance will be to host ESXi 4.1 datastores. We plan to upgrade to ESXi 5, but not until it is a bit more mature.

This appliance will serve up either NFS or iSCSI (or both) via a dedicated 10GbE storage network.

(I assume the NFS vs. iSCSI question will play out while we test things further down in this thread.)
 
Part 2: Self-built vs. pre-built

The great thing about ZFS from any vendor is that it's based on commodity hardware. We initially looked at the following vendors:

http://www.pogostorage.com/
http://www.areasys.com/

While both provided great solutions that were cost-effective (and would be great for people/companies wanting an "in the can" finished solution), I struggled with the following:

- If something happens with the hardware, the VAR will fix it
- If something happens with the software, Nexenta will fix it

While the expertise of a Nexenta VAR would be hard to duplicate, and it would indeed be handy to have, I couldn't ignore the alternative: have our existing Supermicro VAR get us all the components directly and piece it together ourselves.

It allows us to get the gear at the lowest cost, and we still get a very similar level of service. If a hardware issue comes up, our SMC VAR takes care of it. If an OS issue comes up, we call/email Nexenta. Seems win-win to me, plus we get to have more fun putting all this stuff together.

Plus, this unit may be intended for production, but it's for future expansion, so there is no rush or immediate gap to fill.

We ended up with the following hardware:

For each node (x2):

Supermicro 825TQ-R720LPB 2U case
Supermicro X8DTH-6F main board
6 x 8GB DDR3-1333 1.5V 2Rx4 LP ECC REG RAM
2 x Intel Xeon E5620 (quad-core Westmere)
AOC-STG-I2 (Intel) dual-port 10GbE NIC
LSI 9205-8e HBA
2 x Seagate ST9300605SS 2.5" 300GB SAS 2.0 10K RPM

JBOD:

Supermicro 847E26-RJBOD1

Disks:

ZIL: STEC ZeusRAM 8GB SSD
Pool disks: 17 x Seagate ST31000424SS Constellation ES 1TB 6Gb/s SAS

Next up, installing all this stuff in the rack.
 
Part 3: Racking things up

Pardon the cable mess, lol, but here goes:

Racked up the SC847 JBOD; this thing was a beast, but 45 hot-swap drives in 4U for the price couldn't be beat.

Front:



Back:



It came with rails at least, so that helped.

Although it comes with the 4-port SAS bracket you can see on the back, you have to plug it into the corresponding ports on the inside depending on how you plan to run it.



We got the E26 version, which basically means dual expanders on each backplane. We also plan to connect each JBOD to an HBA directly instead of cascading, so with that in mind we connected the primary/secondary expanders for each backplane (front and back) via the internal cables, which maps them to the 4 ports at the back of the unit. (You have to take out the fans in the middle to do it, but at least they are quick-release.)

Next, we started populating this thing with drives. First up was the ZIL, which is a STEC ZeusRAM... the most expensive SSD I have ever bought!! lol. But I am told it's worth it. I suppose we'll see in the stats/tests.



Now, on to the pool drives. Our normal drive reseller was quoting absurd prices (and wasn't even sure he could get enough) thanks to the crazy hard drive market right now. So I took to Google, and I'm glad I did. I must have called 50 different online stores before I finally managed to get 17 of the 1TB Constellations for a surprisingly good price.



Since this isn't going right into production and there is no rush, we'll add more drives later.

Next, we racked up the two head nodes.



Now comes the fun of cabling everything together and starting the OS install process. We went with the 2U form factor on the head nodes for expandability; each board has room for 7 expansion cards. We are also leaving room under the JBOD to add more JBODs as needed.
 
I am doing the exact same thing: trying to build out a system for Nexenta HA. The problem is I was told by a Nexenta engineer that they will not sell the HA module unless you are using one of their preferred partners' solutions.

See my post for more information:
http://hardforum.com/showthread.php?t=1670015
 

It could have to do with some of your parts not being on the Nexenta HSL. Maybe try discussing it with someone else at Nexenta and see if there is any flexibility. I know they have some sort of certification tool that their VARs use to certify their systems for use/sale; perhaps something could be worked out. I would press on it further.

Since we are using essentially all items from the HSL, we had no problem purchasing the HA cluster plugin.
 

I went through and read that thread again, and I see your issue now. My suggestion would be to engage Nexenta again and consider using the Dells for your head units with a Supermicro JBOD (like the SC847 I have above) for the disk shelves. Then install Nexenta, and as long as it passes their check, you're good to go (and if it doesn't, just return the SMC JBOD).

Just my .02
 
stevebaynet,

I love the thread, since I'm in a similar position.

I'm just starting to read about and look at this thing called ZFS.

I'll be reading all your stuff
 
Thanks for the thread.

I'm also beginning to build out a similar setup for our SAN. We've used OI+Napp-IT here at work for a while on a very large backup set storage server with great success (amazing cost/performance and stable). We have a 10Gb Infortrend ESVA unit that we were using as our VMware storage SAN, but we have had a horrible experience with it, and most importantly with their support. That hardware has caused us more downtime and sleepless nights than everything else in our network combined, and our "24/7" support is a bunch of voicemail boxes that only gets you Tier 3 support after 6-7pm PST (when the engineers get to work in Thailand). I'm over it and want hardware I'm capable of troubleshooting and fixing, along with parts that I can stock myself.

In any case, I've chosen the same route: our SuperMicro VAR and most likely Nexenta with all HCL components. If you don't mind, I'd really like to know where you got a hold of the SAS drives at a decent price...
 
obrith,

I was just going to send stevebaynet a PM about his HDD supplier, then I saw your post.
 
Thanks for posting this!
I'm planning something similar for work, so it's very interesting to get more opinions and ideas.

While quickly browsing through the manual for the JBOD, it doesn't seem very redundant to me. Isn't the CSE-PTJBOD-CB2 a SPOF, or am I missing something?

I'm probably going down the route of two SC846s instead, since I was planning to combine node and storage space. Is that a valid option, or should I go for a setup similar to yours? What pros/cons did you weigh in that decision? I'm thinking 2 cases = nothing shared and no SPOF.

Nice hardware though, would love to get my hands on that STEC ZeusRAM :D
 
obrith,

I was just going to send stevebaynet a PM about his HDD supplier, then I saw your post.

lol... no secret really, I just hit up Google Shopping and called every one of the 50 places it listed. heh.

PM sent, but I don't want that to hijack this thread, so 1x only. YMMV.
 
Great contribution to the ZFS community. I'll be following this.

Thanks, I'm excited. We could have sunk more money into legacy storage, but after all the great things I've read about ZFS, and wanting to get my feet wet, I figured it's time to put my money where my mouth is.
 
Thanks for posting this!
I'm planning something similar for work, so it's very interesting to get more opinions and ideas.

While quickly browsing through the manual for the JBOD, it doesn't seem very redundant to me. Isn't the CSE-PTJBOD-CB2 a SPOF, or am I missing something?

I'm probably going down the route of two SC846s instead, since I was planning to combine node and storage space. Is that a valid option, or should I go for a setup similar to yours? What pros/cons did you weigh in that decision? I'm thinking 2 cases = nothing shared and no SPOF.

Nice hardware though, would love to get my hands on that STEC ZeusRAM :D

Good catch. Yeah, that would be a single point of failure, but all it does is distribute power in the absence of an actual mainboard. The rest of it is pretty redundant: hot-swap PSUs, separate backplanes/expanders. And since I'm not cascading the backplanes together, each one will have a separate cable/uplink to the HBA in each head node.

For now I'm fine with it as-is, since I'm just testing. But once we get a bit further down the road I will be adding 2-3 more of the same JBODs underneath in that same rack and will spread pools out across all the JBODs, so I should be able to survive losing an entire JBOD and still be up (critical shit will be on mirrored vdevs, not RAID-Zx).
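
Just to illustrate what I mean by spreading mirrors across shelves, the pool would be laid out roughly like this (a sketch only, with made-up device names; c1t*d0 = JBOD 1, c2t*d0 = JBOD 2). Each mirror pairs a disk from one JBOD with a disk from another, so a whole shelf can drop without taking the pool down.

zpool create tank \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0
zpool status tank   # sanity-check the layout before putting data on it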

So, about your point of having two separate servers with no shared storage: that was actually my original idea too. I figured I would have two storage servers and they'd just replicate to each other. That idea got shot down when people started filling me in on the overhead/lag of real-time sync between two units, especially when you're pushing a serious amount of data through them.

It really depends on your particular situation. For me, replicating large VMDKs from ESX across two boxes in near real-time just wasn't the right fit, but YMMV.

If you want a truly redundant JBOD and wanna drop some coin, check out http://www.dataonstorage.com/

They have fully redundant options on their JBODs and everything is field-replaceable, which is sweet. But you pay for it.
 
Here are brand new benchmarks, NetApp servers vs Oracle ZFS servers:
https://blogs.oracle.com/7000tips/entry/great_new_7320_benchmark
https://blogs.oracle.com/si/entry/oracle_posts_spec_sfs_benchmark
Maybe you could look at the ZFS servers and get some ideas. Apparently, ZFS servers can rival NetApp servers.

Tip about clustering
https://blogs.oracle.com/7000tips/entry/tip_setting_up_a_new


And don't forget to rebalance your zpool after adding new disks. All the data currently sits on the 17 existing disks; if you add more disks, the old data stays on those 17 and only new writes spread across the new vdevs. You need to rebalance so the data is spread among the old and new disks. One way is to move the old data to another zpool and then move it back. Or you could just create a new ZFS filesystem on the same zpool and copy all the data into it, which means you don't need an additional zpool. Google "rebalancing zpool".
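
A rough sketch of that in-pool shuffle, assuming a hypothetical pool "tank" with the data in tank/vms and enough free space (untested; you'd want shares/VMs quiesced while the copy runs):

zfs snapshot tank/vms@move
zfs send tank/vms@move | zfs receive tank/vms-new   # the rewrite stripes across all vdevs, old and new
zfs destroy -r tank/vms
zfs rename tank/vms-new tank/vms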
 
So, about your point of having two separate servers with no shared storage: that was actually my original idea too. I figured I would have two storage servers and they'd just replicate to each other. That idea got shot down when people started filling me in on the overhead/lag of real-time sync between two units, especially when you're pushing a serious amount of data through them.

It really depends on your particular situation. For me, replicating large VMDKs from ESX across two boxes in near real-time just wasn't the right fit, but YMMV.
I think a dedicated 10Gbit link should be enough for me for the replication. I'll have to do some testing on that, but worst case I can just add more cards.

How is your replication going to work? Will it be only internal to the JBOD? I have no experience with dual-port SAS or that type of configuration, so that's why I'm asking...

If you want a truly redundant JBOD and wanna drop some coin, check out http://www.dataonstorage.com/

They have fully redundant options on their JBODs and everything is field-replaceable, which is sweet. But you pay for it.
It seems overpriced at first glance. What's the difference between their DNS-1600D and the SuperMicro 846E26-R1200? I could get three of the latter for one DNS-1600D...
 
stevebaynet,

Did you research any other devices for ZIL?

The STEC Zeus is a 3.5" drive; I'm looking for 2.5" alternatives.
 
Here are brand new benchmarks, NetApp servers vs Oracle ZFS servers:
https://blogs.oracle.com/7000tips/entry/great_new_7320_benchmark
https://blogs.oracle.com/si/entry/oracle_posts_spec_sfs_benchmark
Maybe you could look at the ZFS servers and get some ideas. Apparently, ZFS servers can rival NetApp servers.

Tip about clustering
https://blogs.oracle.com/7000tips/entry/tip_setting_up_a_new


And don't forget to rebalance your zpool after adding new disks. All the data currently sits on the 17 existing disks; if you add more disks, the old data stays on those 17 and only new writes spread across the new vdevs. You need to rebalance so the data is spread among the old and new disks. One way is to move the old data to another zpool and then move it back. Or you could just create a new ZFS filesystem on the same zpool and copy all the data into it, which means you don't need an additional zpool. Google "rebalancing zpool".

Wow, those are some sweet numbers in those links.

And a good thought regarding rebalancing. I will add that to the list of things I plan to test and document in this thread. (I'm also planning to test how failures work (ZIL, data drives, L2ARC...) as well as how long it takes to resilver drives in different configs, etc.)
 
I think a dedicated 10Gbit link should be enough for me for the replication. I'll have to do some testing on that, but worst case I can just add more cards.

How is your replication going to work? Will it be only internal to the JBOD? I have no experience with dual-port SAS or that type of configuration, so that's why I'm asking...

Yeah, I would think 10G would be plenty of bandwidth. For my setup there will be no replication between the two heads, since they will be accessing shared storage. Because I am using dual expanders and SAS drives (which means dual ports), if one head dies, the other head will just assume control of that pool/disks (and its IP).

Later, before this goes into real production, I will probably add a second appliance that receives snapshot updates at a regular interval for disaster recovery purposes (and that appliance will serve other purposes too, so it's not a waste of hardware).
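
When I get to that point, the DR replication will basically be periodic incremental snapshot streams. A sketch with made-up pool/host names (I believe Nexenta's auto-sync service wraps something like this up in the GUI, but this is the idea underneath):

# first pass: full send of a snapshot to the DR box
zfs snapshot tank/vmstore@sun
zfs send tank/vmstore@sun | ssh dr-node zfs receive backup/vmstore
# after that: incrementals between the last two snapshots
zfs snapshot tank/vmstore@mon
zfs send -i tank/vmstore@sun tank/vmstore@mon | ssh dr-node zfs receive -F backup/vmstore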

It seems overpriced at first glance. What's the difference between their DNS-1600D and the SuperMicro 846E26-R1200? I could get three of the latter for one DNS-1600D...

Yup, exactly, that's why I went with the Supermicro JBOD. The main difference is field-replaceable parts: nearly every part that could fail on the DataOn JBOD is hot-swappable, whereas on the Supermicro JBOD, if something like the SAS expander on a backplane dies, you have to take the unit out of service, open it up, and replace it. But like you said, you can get three for the price of one, which gives you more options. It's really personal preference and whatever goals/budget you have. For me, the Supermicro JBOD made more sense.
 
stevebaynet,

Did you research any other devices for ZIL?

The STEC Zeus is a 3.5" drive; I'm looking for 2.5" alternatives.

Yes, I did (if only to try to find a cheaper option, heh).

I have been told by Nexenta support that two mirrored OCZ Talos C series SSDs would also work well, but that doesn't help you much either, as those are 3.5" too.

I have also read on this forum and elsewhere that Intel X25-Es are acceptable, but those are end-of-life and I'm not sure what the latest equivalent would be.
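
Whichever device wins out, attaching it is the easy part. With made-up device names, a mirrored log would just be something like:

zpool add tank log mirror c3t0d0 c3t1d0
zpool status tank   # the pair shows up under "logs"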
 
stevebaynet,

I see you have some comments over at the iphouse blog: http://blogs.iphouse.net/mike/2011/11/storage-cluster-a-year-in-review/

Did Mike's comments cause you any concerns about going with Nexenta?

It did. That was part of the due diligence I mentioned at the beginning of this thread. When I was searching for ZFS and Nexenta on Google, one of the first links that came up was a post on his site about his Nexenta HA build. Since it was very similar to what I was doing, I figured I would hit him up and see if I could get him to revisit that post and talk about how his experience has gone since.

As you can see, it wasn't very positive. But I tried to focus just on the items related to the Nexenta software. My main concern was the dying drive that was slowing things down without being reported by the Nexenta management console.

I dove in and did some of my own research, and what I believe it boiled down to was a drive that was failing but had not yet failed. With hardware RAID on an LSI card I believe it's called "Predictive Failure": basically, "this thing is gonna die soon", heh. The Nexenta management service apparently waits until the drive has actually failed before reporting it. Obviously, this is a problem.

Then I looked at how I could get around this. In Solaris you can view problem, failing (and failed) hardware by running:

fmadm faulty

You could also set up an SNMP trap and have it notify you, etc. Would I want to do this for software I just paid for, which should reasonably handle this itself? No. But I weighed the pros and cons, and the pros won. It also seems like a simple issue which I will nag about until it's fixed (unless it's fixed already).
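
For anyone curious, the workaround I have in mind is just a dumb cron job wrapped around fmadm. A sketch only (untested, and it assumes fmadm faulty prints nothing when the system is clean and that mailx can send mail from the box):

#!/bin/bash
# /root/check_faults.sh - run hourly from cron: 0 * * * * /root/check_faults.sh
FAULTS=`/usr/sbin/fmadm faulty`
if [ -n "$FAULTS" ]; then
  echo "$FAULTS" | mailx -s "fmadm faulty on `hostname`" storage-alerts@example.com
fi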

The other thing I struggled with from Mike's feedback on his blog was ease of use. Yes, it would be nice if I never needed to hit the command line for anything. But at the same time, if you have a SAN appliance serving up storage over an iSCSI or NFS data network to multiple ESX servers, that environment is complicated by nature, and I don't believe it will ever be "set it and forget it".

So my logic again comes down to pros and cons, and for me the pros far outweigh what I found while doing my research. I'm also a realist and know there is no cookie-cutter setup: everyone is looking for different features, and what is fine for me might be a deal-breaker for someone else.
 
Also, just to reply to my own comment above: my motivation for doing this is wanting a scalable ZFS-based appliance at the lowest TCO that is still efficient and reliable. For me, this Nexenta-based build is the way I decided to go. And if that means I use it, report bugs, and contribute via the community, I'm hoping that leads to a better ZFS experience for all distros and growth for what has been coined "Open Storage".
 
I've exchanged emails with a few Nexenta folks regarding drive failure notification.

Here are some of the replies:

Adding to Theron's comment. You can manage drives in an external JBOD via NMV which will have a graphic of the JBOD and obtain useful information such as fan speed and voltage. We will notify you of failures and you will have the ability to select a specific disk drive and have its LED flash.

Here is the list of supported JBODs:

SuperMicro SC-216E16
SuperMicro SC-826E26
SuperMicro SC-847E26
Dell MD1200
LSI DE1600 (Now a NetApp product)
Xyratex SP2212S
Xyratex SP2424S
DataOn - All models on the HCL

Good to hear from you. As we discussed on the phone, the drive failure notification issues that Mike raised could be attributed to any number of issues. We support drive failure notification with hardware on our HSL. If it isn't on our HSL, then it's hard to say what will happen. We certainly don't for every disk in every chassis. Here's an example integrator with drive notifications.
http://www.dataonstorage.com/images/PDF/Solutions/NX2260-S2%20Unifiled%20Storage%20System.pdf
 

Yeah, I hadn't considered lack of HSL status as a contributing factor, but that makes sense. I assume it won't be a problem for me then, since my controller, JBOD, and disks are all on the HSL. That seems to be the big key so far: using supported hardware.
 
PROJECT UPDATE:

Everything is wired up; the JBOD is connected to each head node by two mini-SAS cables, and both servers and the JBOD powered up fine.

Before I install the software, I have one little issue to get around that shouldn't be a big deal.

The mainboard in each head node has an onboard LSI 2008 RAID controller. I want to use this controller to serve up the mirrored hot-swap OS drives. I have read that you can either:

A: Create a hardware RAID mirror via the LSI chip and present that to Nexenta/ZFS

or

B: Flash the LSI firmware to IT (initiator target) mode and pass the drives up to Nexenta/ZFS to mirror

I have been told in the past that Nexenta prefers option B. Plus, I'd prefer to have it at the OS level anyway (I hate having to reboot boxes just to make RAID changes).

With that said, we're armed with the firmware links from Supermicro:

ftp://ftp.supermicro.com/driver/SAS/LSI

and the KB article from LSI regarding flash updates:

http://kb.lsi.com/KnowledgebaseArticle16266.aspx

I also found this link to be the easiest guide for creating a bootable DOS USB stick for flash updates:

http://www.bay-wolf.com/usbmemstick.htm

Hope to get this done in the next 24 hours.
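
For reference, the rough sequence I'm expecting to run from the DOS stick is below. This is only a sketch pieced together from the LSI KB article and the guides I've read (not something I've run yet); the firmware/BIOS file names are whatever ships in the package you download, and you want the controller's original SAS address (from the sticker or from -listall) written down before erasing anything:

rem list controllers and note the SAS address of the onboard 2008
sas2flsh -listall
rem wipe the existing IR firmware (do NOT power off between this and the re-flash)
sas2flsh -o -e 6
rem flash the IT firmware and, optionally, the boot BIOS
sas2flsh -o -f 2118it.bin -b mptsas2.rom
rem re-program the original SAS address recorded earlier
sas2flsh -o -sasadd 500605bXXXXXXXXX
rem (add -c <n> to each command if more than one LSI controller is present)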
 
Option B, certainly. That would be ideal. Also, unlike some other ZFS distros, Nexenta lets you create the OS/boot mirror right at install time, which saves you some configuration later too.
 

That's actually the guide I first read (and where I got the LSI KB link from).

However, I assumed SMC would have the updated firmware, rather than having to go to LSI and force the firmware version update due to the PN difference.

Also, the SMC firmware update package comes with a batch file that runs the commands for you (though if you have multiple LSI cards/chips, you need to make sure you add the switch to select the card, which is why I included the LSI KB article link). The LSI firmware may come packaged with the same kind of batch file; I didn't look. But I figured it would be easier to start with the vendor-branded FW version first.
 
Also, in that how-to the OP has a Windows OS installed on the box and is able to just run it from a command prompt. In my case there is no OS yet, so I would either need to install one or go the DOS USB stick route (which is what we are doing).

Yep, you're right about the Windows OS. Completely forgot about that.

Let us know how it goes; I'll probably be doing the same thing on Thursday.

My m/b is a SM H8DG6-F (AMD).

This is my demo rig; I'll use it to get a taste of ZFS/Nexenta.
 
On the HA: at Nexenta we give you two choices, either get a pre-certified kit or buy some PS (professional services) to have it installed and certified. In addition, we do require a Gold support contract.
 

In our case, our SMC VAR worked with us and a Nexenta SE directly to ensure all components are supported, in order to get the Gold license and the HA plugin. While I think the idea of a pre-certified or pre-installed appliance is excellent and important, since these systems are based on commodity hardware I liked the option of going with my existing VAR and customizing the components myself (it also saved us some cash), but YMMV. I assume everyone will have different needs and differing opinions when it comes to this.
 
What brand is your RAM - 6 x 8GB DDR3-1333 1.5V 2RX4 LP ECC REG RAM?
 

I'm pretty sure it's Hynix. When we were making the purchase, the only memory listed on the Nexenta HSL was Netlist (not sure if this is still the case). I like Netlist but couldn't justify the price for this particular build, so we went with tested/approved RAM DIMMs listed on the Supermicro site for our particular mainboard.
 
That makes sense.

Not sure why Netlist is the only "approved" RAM vendor on the Nexenta HSL.
 
stevebaynet,

How did the IT flash of the onboard SAS controller go?
 

Got swamped with other projects, unfortunately; we should be doing the flash later this afternoon/evening.

Will post the results here.
 