ZFS Build Questions

jaw4322

n00b
Joined
May 26, 2010
Messages
58
So I got all the parts ordered. However, after doing some more investigation I have a few questions. The first and most important: should I have gotten SAS drives for this build? :( From all the posts it seems yes, since I do want the option to expand via an external SAS expander. Has anyone successfully used SATA drives in this kind of setup and expanded? The second question is OpenSolaris (b134) or Nexenta Core? Is there any disadvantage to going with Nexenta Core? I understand OpenSolaris is all but dead, but the latest build seems to be stable according to most posts. Also, this will mainly be a CIFS share for 2-3 Windows PCs that will be moving large video files to and from it. Not constantly, more sporadically throughout the day/night.

Basic Build Parts, which should start arriving today.

2x Western Digital Caviar Black WD6402AAEX 640GB 7200 SATA 6.0Gb/s (Mirror Boot)
9x HITACHI Deskstar 7K2000 HDS722020ALA330 2TB 7200 SATA 3.0Gb/s (Data)
2x LSI Internal SATA/SAS 9211-8i 6Gb/s PCI-Express 2.0
NORCO RPC-4220 4U Rackmount Server Chassis
Supermicro X8ST3-F Server Board
2x Crucial 6GB (3 x 2GB) DDR3 1333 ECC Unbuffered Triple Channel Server Kit
Intel Xeon E5520 Nehalem 2.26GHz

Appreciate the help.

Update pics:

http://img842.imageshack.us/i/imag0030o.jpg/
http://img827.imageshack.us/i/imag0027oi.jpg/
http://img180.imageshack.us/i/imag0028z.jpg/
http://img205.imageshack.us/i/imag0029m.jpg/
http://img801.imageshack.us/i/imag0031r.jpg/
http://img818.imageshack.us/i/imag0032p.jpg/

FYI...Oh yeah, if you get the Norco 4220 chassis, make sure you get silent fans. The thing is loud, and that's an understatement. I got 4 silent fans for the hard drive bays, but forgot about the back 2 fans. They are super loud, and I'll need to replace those too.

Update Stats:
C:\>iperf.exe --client 192.168.0.47
------------------------------------------------------------
Client connecting to 192.168.0.47, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[136] local 192.168.0.171 port 36769 connected with 192.168.0.47 port 5001
[ ID] Interval Transfer Bandwidth
[136] 0.0-10.0 sec 367 MBytes 307 Mbits/sec

C:\>iperf.exe --client 192.168.0.47 --parallel 5
------------------------------------------------------------
Client connecting to 192.168.0.47, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[168] local 192.168.0.171 port 36778 connected with 192.168.0.47 port 5001
[152] local 192.168.0.171 port 36776 connected with 192.168.0.47 port 5001
[136] local 192.168.0.171 port 36774 connected with 192.168.0.47 port 5001
[160] local 192.168.0.171 port 36777 connected with 192.168.0.47 port 5001
[144] local 192.168.0.171 port 36775 connected with 192.168.0.47 port 5001
[ ID] Interval Transfer Bandwidth
[168] 0.0-10.0 sec 217 MBytes 182 Mbits/sec
[152] 0.0-10.0 sec 217 MBytes 182 Mbits/sec
[136] 0.0-10.0 sec 217 MBytes 182 Mbits/sec
[160] 0.0-10.0 sec 202 MBytes 170 Mbits/sec
[144] 0.0-10.0 sec 202 MBytes 169 Mbits/sec
[SUM] 0.0-10.0 sec 1.03 GBytes 884 Mbits/sec
 
Last edited:
When most people expand their ZFS pool they use something like this, because you want a plain AHCI/SAS HBA without RAID. I believe those RAID cards are fine as long as they're used in HBA mode instead of RAID mode.
 
I think you are going waaaay overboard on the CPU for a file server. I plan on doing the same thing but am going with an i3. Also, why are you using 2 hard drives for booting? The board has a built-in USB slot; just stick a USB drive in there and run the OS from that (what I plan to do as well). And last, why are you going with a Xeon and not even using ECC memory? There's not much point in getting the server CPU if you aren't using it.
 
That was fast. ;) I love this forum.
When I say expand, I mean to another JBOD chassis. Looking at this post, it scares me that I'm using SATA: http://hardforum.com/showthread.php?t=1548145
I may have overdone it with the CPU, but I wanted to use a server motherboard, and the few extra bucks for the E5520 isn't that big a deal. As for the mirrored boot: I know you can make all drives bootable in ZFS, but I wanted to keep it separate and available if a drive dies. I just pop out the bad one, pop in a new one, and rebuild the rpool. I am also using ECC memory??
 
I think it's scarier that you use expanders, or want to. For some it works well, but it is a potential source of headaches.

You have an expensive ZFS build, so why not pick multiple HBAs instead of expanders?

Also, the disks use a lot of power; I would have preferred 5400rpm disks for half the power, and gotten random I/O performance by adding an SSD instead. This is what makes ZFS so cool: you can combine the high sequential I/O of HDDs with the high random I/O of SSDs, and get the best of both. One small SSD can accelerate your multi-TB array.

If this sounds appealing, I recommend Intel, SandForce, or Micron (Crucial C300) for this task. None of these are suitable as a SLOG device, but they would make excellent cache devices, aka "L2ARC". Multiple Intel X25-Vs might offer the most value; higher random IOps per dollar, I believe, especially since you can RAID them without trouble.

Also, if you ever want to switch to the BSD world, you can have the system installed to your pool directly, no longer requiring the two disks as a mirror; that also saves the 16W of idle power dedicated to the OS alone.

Would love to see some benches of your build when you get it running!
 
AFAIK the only place where the SATA+expander issue described in that thread (only LSI-based expanders?) shows up is in a Solaris-based environment (excluding enterprise Solaris?) with an mpt-driver-based SAS HBA (only the 1068E?)...
 
Ahhh... OK, I feel stupid. I think I was lumping HBAs and expanders into the same category.

Mesa, I really gave the 5400rpm drives a look, but was worried about the 666GB platters/4K sectors and thought I might run into a config issue. I actually read a lot of your posts and thought you had to put those zpools together in a certain way. I will invest in an L2ARC SSD soon; I wanted to get the system running first and then start tweaking it. The only reason I'm not going with FreeBSD is that CIFS is built into the kernel of OpenSolaris, which they say is better. I would have loved to use your interface.

So I guess you've answered my questions about using SATA drives. I am relieved, and can concentrate on deciding which OS to use. I really appreciate the help, guys.
 
If going OpenSolaris, consider using/checking out OpenIndiana (b147).

I'm running b134 in preparation for upgrading; I'll let that distro settle before upgrading.

I have the same CPU... overboard for just ZFS; I have 3 VMs loaded in OpenSolaris and it's still probably overkill. If you can swing it, why not.

I've been using the same Hitachi drives, which have been behaving better than my RE4-GP based vdev. The only recent issue was what I think was a controller locking up (LSI 1068E based; it's a USAS-L8i, and I have 3 of them... I was expanding my zpool, so I started with 1 and it just snowballed), causing my zpool to be unresponsive. A firmware upgrade on the controller, and I haven't seen the problem since. It was doing it daily, especially since I was scrubbing the zpool after each reboot.

If you don't mind modding the bracket, the AOC-USAS2-L8i is based on the same chipset as the 9211-8i (?) and looks to be about half the cost. Looks like the LSI SAS2008 uses a different driver; I honestly never paid attention, since my onboard SAS uses the LSI SAS2008.

I haven't been paying attention to 2TB 5400rpm drives for OpenSolaris; I know people say the 4K advanced-format WDs suffer performance issues. Report back if you decide to go with 5400rpm drives.

Don't mean to thread-jack, sub.mesa, but are there any affordable SSDs suitable for a SLOG? I'm definitely heavier on random reads than random writes, but just curious. I want some SSDs for L2ARC, but I have no disk room; on my Norco 4220, I think I'll just mount it inside and tape it :)
 
jonny- Good info... thanks for that, and good to hear the Hitachis are working well. Do you know of any detailed setup docs for OpenSolaris (setting up the rpool (mirror), the zpool, and expanding the zpool)? I have a good idea, but I'd like to double-check against how others did it. Also, do you have a build post somewhere?
 
Mesa, I really gave the 5400rpm drives a look, but was worried about the 666GB platters/4K sectors and thought I might run into a config issue.
I think you did well to avoid the newer 666GB-platter/4K disks for now; there are 500GB-platter 5400rpm disks with 512-byte sectors though, like the WD 2TB EADS and the Samsung F2/F3 EG. The 7200rpm disks are quite good at seeking though, so I'm not saying you made a bad choice!

But adding one or more good SSDs for L2ARC can reduce the need for high-rpm disks in the pool for random read requests. The SLOG accelerates writes instead.

Because you already have quite an expensive build, I think using an SSD is only logical with this kind of hardware cost; you want ZFS to rock and roll, right? :D

I will invest in an L2ARC SSD soon; I wanted to get the system running first and then start tweaking it. The only reason I'm not going with FreeBSD is that CIFS is built into the kernel of OpenSolaris, which they say is better. I would have loved to use your interface.
Well, perhaps one day you can. You chose LSI SAS2008 controllers, which are good and now work in OpenSolaris. The driver for this chip has recently been integrated into FreeBSD 9-CURRENT (the development branch), but is still very experimental. Hopefully this will improve soon and a stable driver will flow down to 8.x and 9.0-RELEASE; then your build should be fully compatible with FreeBSD. You could run the LiveCD, import the pool, and set up Samba shares with a few mouse clicks, and you might not even notice it is running on FreeBSD instead. :D
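For reference, moving a pool between operating systems really is just an export/import. A minimal sketch, assuming a pool named "tank" (the pool name is a placeholder, not from this build):

```shell
# On the old system, before pulling the disks or rebooting into the new OS:
zpool export tank

# On the new system (e.g. a FreeBSD LiveCD): scan for importable pools,
# then import the one you want by name:
zpool import
zpool import tank
```

The export step is optional (you can force-import with -f), but cleanly exporting avoids the "pool was in use by another system" warning.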

But you can use OpenSolaris for now; I think you made the right choice!

You might want to consider leaving the ZFS pool at version 14 or 15, unless you really need the newer features. Do note that deduplication in particular is not yet stable, so going above pool version 19 is not recommended as of yet. If the OS runs a higher pool version, you can still create a pool with a lower version by using zpool create -o version=14. This allows compatibility with any ZFS system supporting at least version 14. FreeBSD's stable ZFS implementation is currently limited to version 15, the same as Solaris, I think. A newer Solaris release has a higher version (above 20), but deduplication will be disabled in that release and marked as "reserved" instead.
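As a sketch of that version pinning (pool and disk names here are examples, not the ones from this build):

```shell
# List the pool versions this ZFS implementation knows about:
zpool upgrade -v

# Create the pool pinned at version 14 for cross-OS compatibility:
zpool create -o version=14 tank raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0

# Confirm the pool did not pick up the OS's newer default version:
zpool get version tank
```

A pool created this way can later be raised with zpool upgrade, but never lowered, so pin it at create time if you want the portability.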
 
You might want to consider leaving the ZFS pool version at version 14 or 15 instead, unless you really need the higher features. ....

I have also bought an LSI 9211-8i and am going to test your build today. I guess I have to wait now. :(
 
Last edited:
I should release 0.1.6 really soon now; you may wish to wait for that. I also have a private 0.1.6 test .iso up, based on older code. The 0.1.6 version features installing to ZFS and booting from ZFS directly, which is very cool! Drop me a PM for the URL.

Since your controller may not be supported, you would only be able to use the onboard ports. But I will likely release an experimental build based on FreeBSD 9-CURRENT, which should have a driver for the LSI SAS2008 chip and thus your controller. It may not be stable yet, but it would still be nice to try in a test setup, sure!

Cheers.
 
jonny- Good info... thanks for that, and good to hear the Hitachis are working well. Do you know of any detailed setup docs for OpenSolaris (setting up the rpool (mirror), the zpool, and expanding the zpool)? I have a good idea, but I'd like to double-check against how others did it. Also, do you have a build post somewhere?
Warning: this is going to be one fubar'd, long-winded, maybe incomprehensible post.

For ZFS:
http://docs.sun.com/app/docs/doc/819-5461
http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ

CIFS:
http://wiki.genunix.org/wiki/index.php/OpenSolaris_CIFS_Service
I have other links for CIFS, depending on which mode you're going with. I'm in Domain Mode since I run a Windows 2008 R2 domain at home. The official Sun CIFS Administration Guide is good too.

Live USB Creator:
http://devzone.sites.pid0.org/OpenSolaris/opensolaris-liveusb-creator (genunix.org has a mirror, and the USB based download)

This helped me with Static IPs:
http://malsserver.blogspot.com/2008/08/setting-up-static-network-configuration.html
Though I'm a dumbass and have both network ports hooked up (not aggregated, see why later)... whatever it still works lol.

What I use for automatic scrubbing and alerting:
http://www.morph3ous.net/2009/09/05...ing-of-zpool-problems-and-weekly-zpool-scrub/
I didn't bother with smarthost, but the above really helped to explain how to implement the script below for a total newbie
http://www.sun.com/bigadmin/scripts/submittedScripts/zpadmin.txt
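If you skip the full script, the heart of automatic scrubbing is just a cron entry; a minimal sketch (the pool name "tank" is a placeholder for yours):

```shell
# Edit root's crontab (pfexec crontab -e root) and add a line like this
# to scrub every Sunday at 03:00:
#   0 3 * * 0 /usr/sbin/zpool scrub tank

# Check progress and results afterwards with:
zpool status tank
```

The linked scripts above add the useful part on top of this: parsing zpool status and mailing you when something is degraded.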

This helped setting up Smartmontools:
http://cafenate.wordpress.com/2009/02/22/setting-up-smartmontools-on-opensolaris/
I couldn't pull the serial #s of my disks with it, and I didn't want to physically pull the drives.

IMHO, Oracle/Sun's site is still a good reference point for OpenSolaris, and certainly for ZFS.

If there's something specific, let us know; I'm sure we all wouldn't mind helping. zpool creation is really simple (format to get the drive names, then zpool create). Have you decided on what type of zpool you'll be creating? Mirroring, raidz(2, 3, whatever)?
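As a sketch of those create commands (disk names below are placeholders; list your real ones with format first):

```shell
# A simple two-way mirror:
zpool create tank mirror c4t1d0 c4t2d0

# Or an 8-disk raidz2 (survives two failed disks in the vdev):
zpool create tank raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0 \
                         c4t5d0 c4t6d0 c4t7d0 c4t8d0

# Verify the layout:
zpool status tank
```

The pool is mounted at /tank immediately; no newfs/mount step like traditional filesystems.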

For my rpool, I actually hardware-mirror them (onboard); they're just some simple 2.5" laptop drives. I didn't feel like going through the hassle of a ZFS-redundant rpool at the time; it was the first Unix-based OS I'd touched and I wasn't confident. Honestly, I was running into problems with the install on my hardware (using 2009.4 or whatever the stable build was), even though the driver checks were fine; I got it running with b128. I'll redo it when the config is completely supported. I don't keep anything of importance on the rpool. If it went down, it would just be a matter of reconfiguring everything, which is now... a little more complicated than the original plan. I really should document my network/CIFS configuration, because that's probably the biggest hassle (I like copy-paste). Comstar would come in second, though supposedly you can back that config up.

I don't believe I have a build post, but it basically started with this.

Hardware:
Norco 4220
X8DTH-6F + E5520 (total overkill... PS IPMI does come in handy, especially if you're like me and the server is in like a closet, literally a closet, sitting on the floor... though it will soon be in a full size cabinet, once I make room in the doorway so I can shove it through)
1xMEM 2Gx3|PATRIOT PSD36G1333ERK RT - Retail
8xWestern Digital RE4-GP (carry over from hardware Raid6 array... worst decision I ever made)
1xAOC-USAS-L8i
2x160GB Scorpio Blues (rpool, "hardware" mirrored, onboard SAS)
$50 LCD, cheap keyboard, old mouse :)
2xD-Link DGS-2205, cheap solution, works pretty well. I didn't get anything fancy because the "backbone" runs across my house to my client PCs, and there's only 1 ethernet cable for that so it seemed pointless. I can pull pretty consistently 80-90MB per sec over the network. The Server isn't doing any link aggregation since those switches don't support it.

Original OpenSolaris/Environment Config
OpenSolaris b128, shortly upgraded to b130
RE4-GPs went into two raidz vdevs in my pool (let's call it spool1). I did this because I needed to transfer the data that was in my hardware RAID6 array (6 disks). So I took a risk: pulled two disks, bought two disks, created 1x raidz, transferred everything over, broke up the hardware RAID6, and added those disks as the second raidz vdev to spool1.
One Domain Controller Virtualized on OS with VirtualBox
One Win7 VM virtualized on OS with Virtualbox, this is used as an always available client.. mostly for downloading.
CIFS in Domain Mode.
iSCSI, non-Comstar. Changed to Comstar shortly after b130, since the other way was going to be defunct.
Multiple ZFS File systems, setup for whatever particular purpose it serves.

Now:
3xAOC-USAS-L8i (and an Intel SASUC8i I wanted to test out because I was having issues). I'm going to start splitting my vdevs across controllers. I guess it doesn't really matter, since a controller going down would still offline my pool... I think this is still a good card (well, not for expanders, apparently); I think the latest firmware (1.30) has fixed my recent issues.
1xFujitsu 15kRPM 74GB SAS drive used as l2arc... hey it was cheap and it couldn't hurt.
8xHitachi 7k2000 in another raidz2 vdev that is also part of the original storage pool (or rather, I built this, migrated spool1 over to spool2, then destroyed spool1 and added it to spool2 as another 8 disk raidz2).
1xRE4-GP as another spare (... I can't believe I bought another one, I think I got this on eBay for less than $150...). This was before I got the Hitachis.
2xHitachi 7k2000 as additional Spares (case is now maxed out).
Another 1xMEM 2Gx3|PATRIOT PSD36G1333ERK RT - Retail, so 12GB.

Current Config:
Added another virtualized Domain Controller, since it soon became responsible for my network infrastructure (DNS/DHCP/AD). A DD-WRT based Internet router is a backup DNS server; I just have it configured to forward any internal domain/subnet resolves to the DCs. Maybe not the most elegant solution, but it works.
I'm now using Comstar for iSCSI. The DCs back up to it (100GB volumes). Not really necessary, but I was just playing around and wanted to learn more about iSCSI.
A couple more VMs, but they're not always running. All VMs but one DC have their data, including additional virtual disks, on my spool.

Future Plans:
Get another Norco case, expand out to it...
Probably get another HBA since apparently Expanders don't like the ones I have :)
Who knows what drives I'll use. I hope WD comes out with something good (or 4K drives aren't an issue), because their RMA process rocks.
Upgrade network infrastructure, probably go with a few Dell Switches (5x24), but thats after I run redundant network cabling back to where my server/network equipment is stored. Pointless otherwise.

I would say if large, frequent transfers are going to happen, get a good network solution going. Although I'm ok if it drops to 60MB/sec, 90MB/sec is a big difference, especially since I keep copies of some (most) of my data on my Workstation (just single disks, no RAID, Vista). I also have family come over and backup/transfer data to and from. Local throughput and I/O don't mean squat if most of your data is going to and from the NIC.

Hopefully this stream-of-consciousness of a post helped :) I wanted to illustrate how keeping up with the need for storage, especially if you're a pack-rat like me, can spin out of control... lol. I still have quite a bit of space left (7TB). I hope it lasts for a while.

PS Utilize Time Slider (automatic snapshots) when you can; it makes those accidental deletes less worrisome (and Windows does recognize them as Previous Versions). The default settings are sufficient for me.
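Under the hood, Time Slider's schedules are SMF services, so you can also drive them from the shell. A sketch (the instance names below are the ones OpenSolaris b134 shipped with; verify on your build with svcs):

```shell
# See which auto-snapshot schedules exist and their current state:
svcs -a | grep auto-snapshot

# Enable, e.g., the daily schedule:
pfexec svcadm enable svc:/system/filesystem/zfs/auto-snapshot:daily

# The snapshots then show up like any other:
zfs list -t snapshot
```

Those snapshots are what Windows surfaces in the Previous Versions tab over CIFS.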
 
Last edited:
I read through the ZFS admin guide and am still not clear on disk failure.

Let's say I have a raidz pool with 6 drives configured as follows:

rzpool ONLINE 0 0 0
raidz-0 ONLINE 0 0 0
disk1 ONLINE 0 0 0
disk2 ONLINE 0 0 0
disk3 ONLINE 0 0 0
raidz-1 ONLINE 0 0 0
disk4 ONLINE 0 0 0
disk5 ONLINE 0 0 0
disk6 ONLINE 0 0 0

With raidz, each vdev can survive no more than 1 drive failure. What I am not clear on is what happens when there are multiple vdev disk groups. Can the pool survive if disk1 and disk4 both fail?
 
Yes, as that would still mean both RAID-Z vdevs are in state ONLINE (well, DEGRADED); both remain accessible, so the pool can survive a maximum of two drive failures in that layout (one per vdev). RAID-Z only guarantees surviving one, though.
 
Jonny- Thanks for the detailed info. Very helpful, and nice build. I'm sure I'll run into a few questions. I received everything but the motherboard yesterday. ;( I'll definitely update this post as i go. Keep the questions going as it will help others and myself.
 
Yes, as that would still mean both RAID-Z vdevs are in state ONLINE (well, DEGRADED); both remain accessible, so the pool can survive a maximum of two drive failures in that layout (one per vdev). RAID-Z only guarantees surviving one, though.
What he said.

This is a good explanation of how to think about the relationship between devices, zpools, and vdevs:

http://constantin.glez.de/blog/2010/06/closer-look-zfs-vdevs-and-performance#singledisks

Basically, the Zpool can stay up as long as the individual vdevs aren't offline/faulted.

One thing to keep in mind is that adding another vdev isn't... "striping". As I understand it, new data will be balanced between the vdevs, but if vdev1 is 95% full and you then add vdev2, ZFS isn't going to rebalance the old data.

I'm satisfied with the read/write performance of 8 device raidz2.
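The expansion being described is a single zpool add; a sketch with placeholder pool/disk names:

```shell
# Add a second raidz vdev to an existing pool. ZFS starts spreading NEW
# writes across both vdevs, but data already on vdev1 stays where it is:
zpool add rzpool raidz c1t4d0 c1t5d0 c1t6d0

# Capacity grows immediately:
zpool list rzpool
```

Rewriting old data (copying files, or zfs send/recv into a fresh dataset) is the usual way to get it spread back across all vdevs.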
 
... As I understand it, new data will be balanced between the vdevs, but if vdev1 is 95% full and you then add vdev2, ZFS isn't going to rebalance the old data.

That does make sense, because there is no room for ZFS to stripe the data across the old vdev. On the other hand, if you add 2 new vdevs, then new data should be striped across both new vdevs.

I guess it pays to monitor pool usage and plan for capacity increases before the pool is completely full.
 
Yeah, it definitely writes to both vdevs; I just don't know what the ratio is when they're so out of whack balance-wise.

Like I said, 8-disk-wide performance is fine for me, so I don't mind adding one vdev at a time. 4-disk raidz was pretty good too.

I'll take the instant storage pool expansion.
 
So I'm at the point of installing OpenSolaris, but it doesn't recognize the MegaRAID SAS 8208ELP; I get an UNK driver error. I read that I have to re-flash the firmware to "IT" mode, but how do I do that? I have downloaded the firmware from LSI, but it's an .exe that needs to be run in DOS. I'm guessing I need some kind of USB-bootable OS and then do it from there? Any suggestions on the easiest way to do this?
 
FreeDOS; you can use a USB stick or even a CD. I haven't used DOS in a very long time, but I think FreeDOS is easy because you won't have to use a floppy.
 
If you have a hard time doing it via a DOS boot disk (I did it via a Win98/USB boot and SASflash was locking up), you can do it from an operating system (Windows, Linux, and Solaris... I don't know if it supports OpenSolaris, though). LSI provides the programs to do it.

I flashed my LSI-based cards by popping them in my Windows' box. Just made it easier :)

OOPS, I was confusing it with the 9211... wait... I thought you were doing the 9211-8i?

*EDIT* ... never mind, it's the onboard, right?
 
Last edited:
Yeah, it picks up the two 9211-8i cards, just not the onboard SAS controller. Which is weird, as I thought it would be the opposite. Oh well, I'll give it a try... thanks.
 
Yeah, I would think the onboard would be supported. My onboard was recognized, and it's LSI SAS2008-based. Weird.

Also I briefly mentioned it, but keep this handy. www.openindiana.org

Looks like our best hope for OpenSolaris continuation.
 
Is OpenIndiana a stable release? If so, is there any reason I shouldn't give it a try?
 
OK... so I guess I'm an idiot. I can't figure out this SAS reflashing thing. I can't figure out how to use FreeDOS, and I've tried using a BartPE WinXP boot. I know it's probably really easy, but it's blowing my mind. Can someone point me to instructions, or give me instructions? thx
 
There will be a stable release at some point in the future, but for now it's all as beta as OpenSolaris was.
Consider Nexenta Core, or NexentaStor.
 
OK... so I guess I'm an idiot. I can't figure out this SAS reflashing thing. I can't figure out how to use FreeDOS, and I've tried using a BartPE WinXP boot. I know it's probably really easy, but it's blowing my mind. Can someone point me to instructions, or give me instructions? thx
Are you doing a USB boot? Can you boot at all, or is it flaking out?

I could never get FreeDOS to work properly when I was trying to figure it out on my board.

http://thepcspy.com/read/bootable_usb_flash_drive/ is a good breakdown for USB booting using Win98.

As for OpenIndiana... sure, why not? Try it out; if you don't like it, pop b134 on your system. Maybe it even has the driver you need... though assuming you want to do rpool mirroring, I think you would still need to flash to IT mode. It may lack some tools, but it hits all the major ones for a home file server, as far as I know: CIFS, Comstar, Time Slider(?).

Get the USB image from http://openindiana.org/download/ and read the wiki. I did the same install method (USB) for OpenSolaris. USB CD-ROM boot was flaky too when I originally tried installing OpenSolaris.

I've been meaning to try OpenIndiana in a VM...

You may just have to install Windows on the SATA ports and flash :) Assuming you can use the sasflash from this package, or from another LSI firmware package like the ones for the 9211-8i. Make sure to elevate the command prompt if using Vista+ with UAC.
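For what it's worth, once you're booted into DOS or an elevated Windows prompt, the flash itself is usually a couple of commands. A rough sketch only; the tool name and the firmware/BIOS file names depend on the package you downloaded from LSI, so treat everything below as a placeholder and check the README in the package before running anything:

```shell
# List the controllers the flasher can see, to find the controller number:
sasflash -listall

# Flash IT-mode firmware (and optionally the boot BIOS) to controller 0:
sasflash -o -c 0 -f it_firmware.bin -b bios.rom
```

Flashing the wrong image can brick the card, so match the firmware file to the exact chip (1068E vs SAS2008) before proceeding.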
 
OK, so I finally got the onboard SAS flashed, installed OpenSolaris, and was able to boot from the drive. Now my question is: how do I mirror that drive? I have the doc on how to mirror, but how do I know what the other drive is named? The main drive is c4t0d0s0, so my guess is the second drive is c4t1d0s0. However, when I run zpool attach rpool c4t0d0s0 c4t1d0s0, I get "cannot open ....... permission denied". What step am I forgetting? Also, when I do a format it gives me "No permissions or disks found"?
 
Last edited:
Nevermind... I figured out that putting pfexec in front of the command lets me do it. However, I now get: cannot open '/dev/rdsk/c4t1d0s0': I/O error.
 
Last edited:
Nevermind... I figured out that putting pfexec in front of the command lets me do it. However, I now get: cannot open '/dev/rdsk/c4t1d0s0': I/O error.
This may help: http://opensolaris.org/jive/thread.jspa?messageID=379508&tstart=0 The long and short of it: it sounds like the second disk isn't sized right or has the wrong (EFI) label.

I think this is one of the reasons I didn't do rpool mirroring, lol. I was still trying to wrap my brain around all the other concepts and just wanted to get up and running.

Btw, you don't need to guess. Assuming you aren't running as root:
pfexec format
will show you your disks.
iostat -En
will too, along with drive manufacturer information and the disks' short names (c#t#d#).
cfgadm -a
will show your devices as well.
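Putting the pieces together, a hedged sketch of the whole rpool-mirror procedure. The device names match the ones discussed above but should be verified with format; the label and installgrub steps are standard OpenSolaris practice for ZFS root (a root pool disk needs an SMI label, not EFI), and they assume the second disk is at least as large as the first:

```shell
# Write a single Solaris fdisk partition spanning the second disk:
pfexec fdisk -B /dev/rdsk/c4t1d0p0

# Copy the first disk's VTOC (slice layout) onto the second disk:
pfexec prtvtoc /dev/rdsk/c4t0d0s2 | pfexec fmthard -s - /dev/rdsk/c4t1d0s2

# Attach the new slice to the root pool; resilvering starts automatically:
pfexec zpool attach rpool c4t0d0s0 c4t1d0s0

# Install GRUB on the new mirror half so it is bootable on its own:
pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0

# Wait for the resilver to finish before rebooting or pulling a disk:
zpool status rpool
```

If the attach still complains about the label, running format on the disk and relabeling it SMI from there is the interactive alternative.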
 
Last edited:
Thanks... that post worked great. I now have mirrored boot drives. Next step is setting up the first 8-drive raidz2, then CIFS, and some testing. Probably not as easy as I thought it was going to be, but I guess it's my first time with Solaris, so that's to be expected. Thanks again.
 
Another question: I've been thinking about L2ARC and ZIL. In my setup there aren't going to be a lot of files that are constantly read from the array; we'll basically use this as an archive for large video files. So is L2ARC really a good investment? Also, since the array is going to be used for large HD video files (1GB to 12GB per file), will a 32GB SLC SSD help? Once the ZIL drive is full, is the data then written directly to the data pool? What will probably happen is every 2-3 days we'll dump a whole lot of large files to the array at once; then every so often we'll pull some of those clips back to the main SAN. Let me know how you guys would tackle this. Thanks.
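For context, cache and log devices are added per pool; a sketch with placeholder device names. Two points worth knowing for this workload: a SLOG only accelerates synchronous writes, and bulk CIFS copies of big video files are mostly asynchronous, so a SLOG may buy little here. Also, the ZIL is not a write-back tier that "fills up" and then spills over; all writes land in the pool within seconds regardless, and the log exists only so sync writes can be acknowledged quickly and replayed after a crash.

```shell
# Add an SSD as L2ARC (read cache); cache devices can be removed later:
zpool add tank cache c5t0d0

# Add an SSD as a dedicated log device (SLOG); helps sync writes only:
zpool add tank log c5t1d0

# Verify the new devices show under "cache" and "logs":
zpool status tank
```

For a sporadically-read video archive, L2ARC is also of limited value, since the cache only pays off when the same blocks are read repeatedly.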
 
Last edited:
jonny- I'm also stuck adding the server to our Windows 2008 domain. I've read those docs you sent, but when trying to add it I get "failed to find any domain .....". I've tried:
1. Creating the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\AllowLegacySrvCall (Type: DWORD (32-bit), Value data: 1)
2. I downloaded the hotfix from
http://support.microsoft.com/kb/957441/

Still no luck....Any idea what I'm missing?
 
Overview:
http://docs.sun.com/app/docs/doc/820-2429/configuredomainmodetask
See this also:
http://wiki.genunix.org/wiki/index.php/CIFS_Service_Troubleshooting

Pay attention to nsswitch and resolv.conf

Have you tried pinging the DCs? At first thought I'd suspect resolv.conf is wrong (DNS servers not configured and the domain and search lines missing). nsswitch could also not be using dns.

I did set this on my OpenSolaris server, I'm thinking NTLMv2 isn't properly supported, I can't quite recall:
sharectl set -p lmauth_level=2 smb

This is a good walkthrough: http://blogs.sun.com/timthomas/entry/configuring_the_opensolaris_cifs_server I think I got the sharectl fix from the last comment on that blog entry. That or the OpenSolaris forums; it's been a while.

I don't think I made any other changes; I checked the registry on the DCs and I don't see it set (plus it doesn't apply to R2).

http://wiki.genunix.org/wiki/index.php/CIFS_Service_Troubleshooting

PS Make sure your clock is right too (pfexec ntpdate -u yourtimeserver), otherwise you'll fail the join (a Kerberos error, I believe). http://wiki.genunix.org/wiki/index.php/Getting_Started_With_the_Solaris_CIFS_Service does have a script for some checks. I know Solaris/OpenSolaris had one big Kerberos config script too, but I can't recall where I found it and I forgot to note it down.
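For reference, once DNS, nsswitch, and krb5.conf are right, the join itself boils down to a few commands on the OpenSolaris side; a sketch (host, user, and domain names are placeholders):

```shell
# Kerberos is sensitive to clock skew, so sync first:
pfexec ntpdate -u dc1.example.com

# Make sure the CIFS service (and its dependencies) are running:
pfexec svcadm enable -r smb/server

# Join the AD domain with an account allowed to add machines:
pfexec smbadm join -u Administrator example.com
```

If the join fails with authentication errors, the lmauth_level workaround mentioned above (pfexec sharectl set -p lmauth_level=2 smb) is worth trying before digging deeper.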
PPS I redacted my domain name(s):
/etc/resolv.conf
Code:
domain redacted.com
nameserver 192.168.1.210
nameserver 192.168.1.212
nameserver 192.168.1.211
search redacted.com REDACTED

/etc/nsswitch.conf (only the relevant parts)
Code:
# DNS service expects that an instance of svc:/network/dns/client be
# enabled and online.

passwd:     files ad
group:      files ad

# You must also set up the /etc/resolv.conf file for DNS name
# server lookup.  See resolv.conf(4). For lookup via mdns  
# svc:/network/dns/multicast:default must also be enabled. See mdnsd(1M)
hosts:      files dns mdns

# Note that IPv4 addresses are searched for in all of the ipnodes databases
# before searching the hosts databases.
ipnodes:   files dns mdns
/etc/krb5/krb5.conf (obviously only put in the servers you have), only the relevant parts... I don't care if you know my server names. I can't remember why I have a DC in [domain_realm]... I'm sure someone who understands krb5 config would know more; it really should be .redacted.com = REDACTED.COM... maybe redacted.com = REDACTED.COM too. I think I made a mistake, and just noticed it:
Code:
[libdefaults]
default_realm = REDACTED.COM

[realms]
REDACTED.COM = {
kdc = svw2k8dc001.redacted.com
kdc = svw2k8dc003.redacted.com
kdc = svw2k8dc002.redacted.com
admin_server = svw2k8dc001.redacted.com
kpasswd_server = svw2k8dc001.redacted.com
kpasswd_protocol = SET_CHANGE
}

[domain_realm]
.redacted.com = REDACTED.COM
svw2k8dc001.redacted.com = REDACTED.COM

...For some reason I did configure /etc/hosts... I can't recall... long day. I think I did this so I could RDP from OpenSolaris (rdesktop) by name if my DNS servers were down. The first line just stops SSH from being a slow lil b***** when authenticating. In my env, the DCs that usually run are virtualized on my OpenSolaris server (yeah... sloppy, but it's a home network). I'll bring up dc002 (a VM on my desktop) when I know something is going down. At least you know the IPs in resolv.conf just point to my DCs.
Code:
127.0.0.1 svosolaris001.local localhost loghost svosolaris001.redacted.com
192.168.1.210 svw2k8dc001.redacted.com svw2k8dc001
192.168.1.211 svw2k8dc002.redacted.com svw2k8dc002
192.168.1.212 svw2k8dc003.redacted.com svw2k8dc003
 
Last edited:
Thanks for all the info, jonny... OK, so I couldn't figure out OpenSolaris. Being frustrated, I'm currently trying Nexenta Core. After getting it installed and set up, I ran into the same problem. I then installed the free web GUI add-on "napp-it", and bam, I was able to join my Windows 2008 R2 domain from its web interface. Why, I don't know yet.

With that being said, I'm able to transfer files from my PC (Windows 7) to the NAS no problem, getting about 85-90MB/s. However, if I'm on another PC on the network, I can't transfer at all; it starts, but says it's transferring at 5KB/s and doesn't really do anything. My PC and the NAS are connected to the same gig switch, and the other PCs are down the chain. Any ideas on what's causing this?

UPDATE: This problem went away after a reboot. ;) I'm only getting 11MB/s for the PCs not directly connected to this switch. The network is full gig, but there's probably a network limit issue somewhere. All in all I'm happy so far. The napp-it web interface is actually pretty nice.

These are the benchmarks from Napp-it web gui.
NAME: MC-NAS
SIZE: 14.5T
Bonnie date (y.m.d): 2010.09.29
File size: 24G
Seq-Out (Char):    251 MB/s (97% CPU)
Seq-Out (Block):   652 MB/s (68% CPU)
Seq-Out (Rewrite): 279 MB/s (43% CPU)
Seq-In (Char):     221 MB/s (98% CPU)
Seq-In (Block):    651 MB/s (36% CPU)
Rnd Seeks:         928.0/s (3% CPU)
Files: 16
Seq-Create: +++++/s
Rnd-Create: +++++/s
 
Last edited:
Odd; IIRC Nexenta is still based on OpenSolaris b134, so it makes sense that you would still run into the issue.

I completely forgot about napp-it; I never bothered with it since I learned what I needed before I even heard of it. I wonder if it has better auto-snapshot and monitoring capabilities than what I currently have (Time Slider + a cron script).

I think once OpenIndiana b148 is out, and assuming napp-it gets an installer out (hopefully we get the other tools in the repository too), I'll load it in a VM and see.

It definitely looks like it makes Comstar iSCSI much easier. That's sort of a PITA.
 