Need iSCSI suggestion

farscapesg1

So, I built a new ESXi (4.1) server and now I'm thinking about using some older equipment to create an iSCSI storage server to run the VMs from, in case I put together a second host and make use of additional VMware features (HA, vMotion, etc.). This is for my home lab (which will also be running my media server as a VM).

I'm looking at FreeNAS, Openfiler, and NexentaStor (Community Edition) as my options (because they are free). I also have access to TechNet, so Microsoft is an option.

The hardware I plan to use is:

Intel Core2Duo E6400
ASUS P5B-Deluxe
Dell Perc 5i
4+ GB of DDR2

It looks like NexentaStor recommends ECC RAM? Because it is running ZFS? I've also heard of some issues with Openfiler when used as an iSCSI target for ESXi.

I don't need anything real fancy right now. Basically I just want a RAID1 OS, 4 drives in RAID10 for the VMs, 2 drives in RAID0 for an encoding target drive, and then 4 more drives in a RAID5 for media storage passed to the media server. I think I can do all that with the 6 onboard SATA and the Perc5i card... right? RAID0 and RAID1 on the onboard with the RAID10 and RAID5 on the Perc? Or possibly RAID1 and RAID10 onboard, and RAID0 and RAID5 on the Perc (allowing me to increase the RAID5 up to 6 drives).
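Quick sanity check on the port math, as a rough sketch (the Perc 5/i having 8 ports is an assumption on my part; the 6 onboard SATA ports are what I mentioned above):

# Rough port-math check for the two layouts I'm considering.
# Assumption (not confirmed above): the Perc 5/i exposes 8 ports.
ONBOARD_PORTS = 6
PERC_PORTS = 8

layouts = {
    "Option A": {"onboard": {"RAID1 OS": 2, "RAID0 scratch": 2},
                 "perc":    {"RAID10 VMs": 4, "RAID5 media": 4}},
    "Option B": {"onboard": {"RAID1 OS": 2, "RAID10 VMs": 4},
                 "perc":    {"RAID0 scratch": 2, "RAID5 media": 6}},
}

for name, layout in layouts.items():
    onboard = sum(layout["onboard"].values())
    perc = sum(layout["perc"].values())
    fits = onboard <= ONBOARD_PORTS and perc <= PERC_PORTS
    print(f"{name}: onboard {onboard}/{ONBOARD_PORTS}, Perc {perc}/{PERC_PORTS}"
          f" -> {'fits' if fits else 'too many drives'}")

So either option fits on paper, as long as the Perc really does have 8 ports to work with.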

Or am I being too optimistic with the older hardware? Would it make more sense to sell it off and use the funds to build an i3/i5 system for this? Especially since a new build would probably use less power? Recommendations in this case?
 
I would probably look at FreeNAS or MS Storage Server since you have TechNet. I'm planning to take a look at both of them in the next few weeks myself, as I have a new test box available to me.
 
Hmm, why FreeNAS over the other two? I thought about Storage Server... just not familiar with it enough to know what the pros/cons are. It looks like it has some dedup features, but they are file-based and not block-based. It looks like NexentaStor also has dedup (block level).
 
Honestly, I just don't know anything about the others. I may check them out at some point.

Given that I haven't yet had a chance to play with any of them, I'm expecting that Storage Server will likely be a lot easier to deal with, since my experience is with Windows rather than Linux.

I'm also not sure that block level dedup buys much, if anything, over file level dedup. From what I have read, block level takes a bit more overhead, and you often lose some of the potential space savings to checksum/index overhead compared to file level. The biggest benefit of block level appears to be that it's more independent of the underlying file system.
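To make that concrete, here's a toy sketch (made-up data and an arbitrary 4 KiB block size; not how any particular product implements it):

# Toy comparison of file-level vs block-level dedup.
# Hypothetical data; the 4 KiB block size is just an example.
import hashlib

def file_level_unique(files):
    # One hash per file: only byte-identical files dedup.
    return {hashlib.sha256(data).hexdigest() for data in files.values()}

def block_level_unique(files, block=4096):
    # One hash per fixed-size block: shared blocks dedup across files,
    # at the cost of a much larger hash index to track.
    hashes = set()
    for data in files.values():
        for i in range(0, len(data), block):
            hashes.add(hashlib.sha256(data[i:i + block]).hexdigest())
    return hashes

files = {
    "vm1.vhd": b"A" * 8192 + b"B" * 4096,
    "vm2.vhd": b"A" * 8192 + b"C" * 4096,   # shares its first 8 KiB with vm1
}
print("file-level unique objects:", len(file_level_unique(files)))   # 2 (no savings)
print("block-level unique blocks:", len(block_level_unique(files)))  # 3 of 6 stored

Block level wins on nearly-identical files like VM disks, but that per-block index is exactly the extra overhead I mean.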

Ultimately, I'm looking at something I can use as an iSCSI target for my server. In the end I might go with a dedicated NAS that supports it, but the cheapest I have seen is around $500 and I have a nice machine with a 5-bay RAID cage I can use already, so I'm not sure I want to spend the money.
 
WSS 2008 R2 is geared toward OEMs, so it's not a user-friendly install. For starters, it's not a standalone installation DVD you install to a new drive and you're done. Instead it's a set of installer files you run on a preexisting Windows Server 2008 R2 install. It's essentially a set of storage "powertoys" laid into W2K8R2. There's even an executable which just rebrands the Windows edition to "Windows Storage Server 2008 R2" - it mostly changes a few bitmaps and registry values. To me the whole approach they took feels kind of cheap, but again it's geared toward OEMs, which want that flexibility to add features modularly so they can customize it.

To install the iSCSI target software, it's a matter of running a small MSI installer file. I will say that MS's iSCSI target is easy to set up and manage, but I am having trouble booting iSCSI-capable mobos off of it at the moment, so YMMV.

Biggest disappointment of WSS2008R2 is that Microsoft still hasn't seen fit to implement any drive pooling tech or any improved block-based softraid tech - on a STORAGE SERVER platform - it's still stuck in the softraid5 stone age.

You might also try playing with NexentaCore + the napp-it GUI. I did, and had a very easy time setting up COMSTAR (iSCSI) on there.
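For reference, what the GUI is doing under the hood boils down to a handful of commands. This is a rough sketch from memory (the pool/volume name "tank/esxi-lun0" is made up, and syntax can differ between builds), wrapped in Python just so the steps carry comments:

# Rough sketch of publishing a ZFS zvol as a COMSTAR iSCSI LUN on
# NexentaCore/OpenSolaris. Names are hypothetical; double-check the
# commands against the man pages on your build before running as root.
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

run("zfs create -V 100G tank/esxi-lun0")               # backing zvol (made-up name/size)
run("svcadm enable -r svc:/system/stmf:default")       # STMF framework
run("svcadm enable -r svc:/network/iscsi/target:default")
run("sbdadm create-lu /dev/zvol/rdsk/tank/esxi-lun0")  # register the zvol as a logical unit
run("itadm create-target")                             # create the iSCSI target (default IQN)
# sbdadm prints the LU GUID; expose it to initiators with:
#   stmfadm add-view <GUID>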
 
@OP

NexentaStor most likely recommends ECC RAM because ZFS uses what is essentially a form of software RAID; having bad data in RAM is a no-no, which is why hardware RAID controllers have ECC RAM on board as well.

With regard to your hardware, be aware that the PERC 5/i has issues with Intel chipsets prior to X58; it's related to the SMBus signal lines. The fix (the pin-mod) involves covering one of the pins on the PERC - search the forum for more details and tips. The rest of the hardware is fine.

Or possibly RAID1 and RAID10 onboard, and RAID0 and RAID5 (allowing me to increase the RAID5 up to 6 drives).

This would be my suggested configuration. RAID 5 performs best with an odd total number of drives, so for the RAID 5 array I would go 5 drives for storage, plus a hot spare. If you're not that worried about performance, considering the fact that you'll be running iSCSI over GigE (I assume), then you'll be better off running six drives and maximising your capacity.

However, the fact that you'll be running over GigE says to me that performance isn't that much of a consideration, so why bother with a RAID 0 scratch drive? You could dedicate an area on the RAID 5 for scratch duty, and add the two RAID 0 drives to RAID 5 instead. Of course if you want more performance, you could invest in channel bonding...
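The capacity side is easy to put numbers on (quick sketch; the 2 TB figure is just a placeholder for whatever drives you end up using):

# Usable-capacity comparison for the RAID 5 options above.
# Assumes equal-sized drives; size_tb is a placeholder value.
def raid5_usable_tb(total_drives, size_tb, hot_spares=0):
    data_drives = total_drives - hot_spares - 1   # one drive's worth of parity
    return data_drives * size_tb

size_tb = 2.0   # placeholder drive size
print("5 data drives + hot spare:", raid5_usable_tb(6, size_tb, hot_spares=1), "TB usable")  # 8.0
print("6 data drives, no spare  :", raid5_usable_tb(6, size_tb), "TB usable")                # 10.0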

As to the OSes, best thing to do is use VirtualBox to have a play with them before you commit to one of them.
 
Thanks for the replies. In response to the PERC... I already tested it in the motherboard and it works fine without the pin-mod.

RAID5 - I'm planning on using WD20EADS drives (before WD changed them to no longer enable TLER). Technically I have 5 of them and was planning 4 with a hotspare, but I guess I can use all 5 and try to find a 6th for a hotspare.

RAID0 - Yeah.. never mind :) Not sure what I was thinking. Encoding MPEG-2 video to H.264 doesn't even begin to scratch a standard disk's speed. Guess that means more available ports on the PERC card ;) Hmm, I do have another three 1.5TB Samsung HD154UI drives I could add as a second RAID5 (no hot spare)....

As for performance, like I said it is really a home test lab. The only VM that I'll really be worried about is my WHSv1 system (running SageTV for TV recording, Airvideo for watching videos on my iPhone, and hosting some private message boards). That is, if I stick with WHSv1 for now (currently running on a separate box, planned to move to a VM over Christmas) or just suck it up and move to Server 2008 R2.

I had planned on getting a little more familiar with the options in VMs first, but was more curious if anyone had first-hand experience why one would be better than another. I can get used to almost any interface, but a lot of times it is hard to really compare performance/feature differences when running in a virtual environment.
 
I would recommend Starwind, but the free version seems to have been discontinued, although it may be coming back again. I would expect it to be capacity-limited though. You can get a 30-day trial of Starwind 5.5, and if you like it and are MS or VMware certified, you may be able to get an NFR license for free: http://www.starwindsoftware.com/news/31 - not sure if the offer is still valid.

Anyway, even the low end versions of Starwind support using system ram as cache (either WB or WT) and with 10GbE I can get wire speed out of it. For a home network, 1GbE will be the limiting performance factor, not which iSCSI target you choose, or your CPU etc.
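Rough numbers on why the NIC is the ceiling (back-of-the-envelope; the ~10% protocol overhead figure is just an estimate, not a measurement):

# Back-of-the-envelope iSCSI throughput ceiling per link speed.
# The 10% allowance for TCP/IP + iSCSI framing is an assumption.
def iscsi_ceiling_mb_s(link_gbit, overhead=0.10):
    raw_mb_s = link_gbit * 1000 / 8          # line rate in MB/s
    return raw_mb_s * (1 - overhead)

print("1 GbE :", round(iscsi_ceiling_mb_s(1)), "MB/s")    # ~112 MB/s
print("10 GbE:", round(iscsi_ceiling_mb_s(10)), "MB/s")   # ~1125 MB/s

Even a modest RAID array can saturate ~112 MB/s sequentially, which is why the target software barely matters on a 1GbE home network.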

The high end versions of Starwind support snapshots, mirroring and high availability (sync. replication between iSCSI servers with failover/failback).

It's incredibly easy to set up - you just need Windows 2008 or 2008 R2 and then it's a five-minute install.
 
<snip> If I plan for 500GB of storage (3x250 in RAID 5), I almost always wind up putting in another 500 - 1024GB.

You can use the onboard RAID to start and later upgrade to a PCI-X RAID controller as the need presents itself

did you just teleport in from 2004? :)
 
While your configuration would work, I think you have starved it for memory, especially if you will have any number of connections accessing storage simultaneously.

No, unless you are using system RAM as cache, 4GB is more than enough for a software iSCSI target. In fact, I've been running Starwind 4.2 for over a year inside a 1GB virtual machine on a 6GB Hyper-V box (which has at times been running other VMs, including Starwind 5.5 with 2GB while I tested it), supporting a production Hyper-V cluster with a lot of virtual machines. With 4.2 (which can do RAM disks but not caching) there's no real benefit to >1GB RAM; performance is limited by NICs, RAID + HD, and CPU. With 5.5 and caching, the more RAM the better, but it's not going to help much with the bottleneck of 1GbE.

My impression is that iSCSI is actually less work for a server than file serving.
 
I've been using OpenFiler for my iSCSI SAN for about 2 years. I have 3 ESXi 4.0 and now 4.1 hosts connected to it. It's able to do some advanced things like NIC bonding and will support most types of hardware (c'mon, it's Linux!).

I'm actually in the process of moving away from OpenFiler and onto ZFS+Solaris for 2 reasons:
1) The project appears inactive; not dead, just inactive. They have not released any updates since 2008.
2) I like the ease of administration that ZFS provides, as well as the data integrity and validation it offers. I've had issues with data corruption on OpenFiler in the past after power outages.

If you're just starting out, play around with OpenFiler, as I suspect you'll get the best hardware support from it. If you plan to store stuff long term on this device (or a future device), take a look at ZFS. They call it groundbreaking for a reason.

Also, I'd steer clear of anything proprietary (even if it's free). I've had issues in the past with hardware RAID controllers failing and then being unable to restore the volumes. Thank god for backups! OpenFiler and ZFS are based around open source software, so this gives you a greater chance of fixing things if they go south at any point.
 
FreeNAS is great with NFS. You should use NFS if you plan to use it with VMware.
 
Been playing around with the options, and so far Openfiler and NexentaStor have been the easiest for me to set up a software RAID5 array and share via NFS to VMware. I'm also wondering if I wasted money buying the Perc5i instead of just using software RAID. I can always sell it and pick up a 1068e-based card of course...

I'm still interested in ZFS... but it seems like it requires ECC memory, which would require a whole new system build (right?). Of course, I'm not against that if anyone can suggest a cheap, low-power build :)
 
about ecc-ram:
no os - neither nexenta* nor any other - needs ecc, not with hardware-raid and not with software-raid.

but for enterprise use, it is always strongly recommended to use ecc-ram in any case, because ram errors are one of the problems that can result in data-loss.

so if you have the money for an ecc-capable mainboard, buy one with ecc-memory.
if you prefer some ultra low power (but low speed) solution, go with intel atom.
for recommended zfs-hardware solutions, see nexenta.org wiki.

but in any case, if you want performance + maximal data-security, go with a modern zfs-os (best with a xeon or i3/i5 on an intel server-chipset) from nexentastor.org (commercial, but free up to 18 TB with the ce edition), Solaris Express 11 (the most up-to-date zfs-os with encryption, free for non-commercial and demo use), or the unlimited and free openindiana or nexentacore from nexenta.org.

for solaris express 11 and the free options, you can use my napp-it web-gui for management.
i suggest using nfs instead of iscsi. it's as fast and simpler, and you could also share it via smb for easy copy/move/backup and access to zfs snapshots from a windows pc.
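as a rough example (the dataset name "tank/nfs" is just a placeholder), the nfs + smb sharing side is only a couple of zfs properties - sketched here in python, but you can of course type the commands directly:

# example only: share one zfs filesystem over nfs (for esxi) and smb
# (for a windows pc); the dataset name is made up.
import subprocess

for cmd in (
    "zfs create tank/nfs",
    "zfs set sharenfs=on tank/nfs",   # esxi mounts this as an nfs datastore
    "zfs set sharesmb=on tank/nfs",   # windows access for copy/move/backup and snapshots
):
    subprocess.run(cmd, shell=True, check=True)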

gea
 
Welcome back gea, glad you re-registered your account here. Check your PM.
 