OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Can someone help me with a small issue? I just installed Nexenta CE 3.0.5, which looks like it still has a bug: when I try to change the MTU to enable jumbo frames, I get this error:

Code:
failed to configure bge0 with ip 192.168.1.100 netmask 255.255.255.0 mtu 4088 broadcast + up: ifconfig: setifmtu: SIOCSLIFMTU: bge0: Invalid argument

Here is a link to how people are able to fix it: Fix Here

Basically, people say you need to edit bnx0.conf, which I am assuming is the network driver conf, but I have no idea how to even find this file or what commands to run. I am a complete noob when it comes to Solaris. I will continue to look for a guide on basic commands, and any advice is welcome.
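
For reference, here is what I have pieced together so far about where that driver conf might live and how the MTU could be set (completely untested on my box, and assuming the bge driver from the error above, so please correct me if this is wrong):

Code:
dladm show-link                      # list links and their current MTU
ls /kernel/drv/*.conf                # driver config files normally live here
vi /kernel/drv/bge.conf              # reported fix: add a line like default_mtu=9000; then reboot
dladm set-linkprop -p mtu=9000 bge0  # on newer builds the MTU can supposedly be set as a link property instead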
 
Nova, you need to add the port number afterwards, like smtp.gmail.com:465

That may be on the initial setup page in a drop-down menu, but it doesn't show up on the config page in Nexenta now. Basically you follow the IMAP/POP directions for setting up an email client (forget which offhand).
 
I'm interested in using Nexenta and I'm looking at the Intel SASUC8I SAS HBA and the Intel RES2SV240 (SAS expander) for use in my NORCO 4220.

Should these components work for Nexenta and Solaris? Would anybody recommend anything different?

Also, my file server is a low-power AMD system (2.7 GHz, single-core, 2 GB RAM). I'm looking to do a RAID-Z2, 14x1TB setup. Would this be sufficient?

Lastly, is it possible to replace drives with larger ones (ie. 1 TB --> 2 TB) in the future without rebuilding the entire array?

EDIT: Hmmm, I'm now reading that the SASUC8I isn't SAS2 based. Is that important? How about the LSI SAS 9211-8i? It's about $100 more though... ugh.
 
I'm interested in using Nexenta and I'm looking at the Intel SASUC8I SAS HBA and the Intel RES2SV240 (SAS expander) for use in my NORCO 4220.

SASUC8I is a great card, I have 3 myself. They are SATAII, not SATAIII as you said, but consider the mechanical drives you're attaching to them; I use 5900rpm Barracuda LPs that won't saturate 3Gbps, not to mention 6Gbps. No point in getting 6Gbps if SSDs aren't involved, IMHO.

Question about Sandy Bridge support...have a failing L2 cache on my E6600, and I have a P8P67 Pro + i5-2500K combo ready to tap in (just waiting on RAM from the 'egg). Anyone tried running Solaris 11 Express on 6-series chipsets yet?
 
Any idea if the Intel RES2SV240 expander works well with ZFS/Solaris?

In the OP it says to avoid expanders...?
 
Any idea if the Intel RES2SV240 expander works well with ZFS/Solaris?

In the OP it says to avoid expanders...?

I'm interested in this as well.

@Gea
Also, is there a way to create a pool through the GUI with ashift=12? With most high-density consumer HDDs being 4k-sectored right now, it's only going to continue... and at a rapid pace. I'd like to create my initial pool aligned for 4k, so that I can upgrade vdevs appropriately as the time comes.
 
I'm interested in this as well.

@Gea
Also, is there a way to create a pool through the GUI with ashift=12? With most high-density consumer HDDs being 4k-sectored right now, it's only going to continue... and at a rapid pace. I'd like to create my initial pool aligned for 4k, so that I can upgrade vdevs appropriately as the time comes.


about the Intel expander
I would use it together with an LSI 2008-based controller like an LSI 9211. I expect it to work
with Solaris/OI, but I would avoid expanders in general when they are not needed.

A second or even a third SAS controller is cheaper and faster, and you avoid possible problems.
Expanders are fine, if you have
- not enough slots
- really a lot of disks
- disks in external cases


about 4k disks and ashift=12

you currently have these options:

-Option 1 (reported to work by a lot of people)
use ZFSGuru to create the pool

-Option 2 (untested, but someone has to test it or it never will be)
replace the zpool command file and create your pool (not sure if you can use the normal version afterwards); see the sketch below
http://digitaldj.net/2010/11/03/zfs-zpool-v28-openindiana-b147-4k-drives-and-you/

-Option 3
accept the lower performance of ashift=9 with 4k disks;
according to a benchmark on the Option 2 website,
you get nearly the same read performance and about 10% lower write performance

or
avoid 4k disks;
use Hitachi 7K3000 disks if you are looking for a low-power and cheap disk
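
a rough sketch of Option 2 (untested here; the zpool-12 name follows the linked guide, and the pool/disk names are only examples):

Code:
# create the pool with the patched binary so the vdev gets ashift=12
./zpool-12 create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

# verify the result (12 = 4k sectors, 9 = 512 byte sectors)
zdb -C tank | grep ashift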

Gea
 
Option 2 (untested, but someone has to test it or it never will be)
replace the zpool command file and create your pool (optionally use the normal version afterwards)
http://digitaldj.net/2010/11/03/zfs-zpool-v28-openindiana-b147-4k-drives-and-you/

5 months and going strong with that option here. Totally network-bandwidth limited, performance is great. 12 2TB Barracuda LPs in two 6-drive RAID-Z2s, with a third one coming "soon", when funds are ready. (Either 6 more Barracudas or 6 5K3000s). The hardest thing is remembering to use zpool-12 instead of zpool when doing pool operations, or you're fucked and have to start over.

On the 6-Series note, looks like the integrated MAC on the chipset isn't supported, which is unfortunate. I'll just continue using my PCIe x1 adapter in the interim, not a huge deal.
 
about the Intel expander
I would use it together with an LSI 2008-based controller like an LSI 9211. I expect it to work
with Solaris/OI, but I would avoid expanders in general when they are not needed.

A second or even a third SASUC8I is cheaper and faster, and you avoid possible problems.
Expanders are fine, if you have
- not enough slots
- really a lot of disks
- disks in external cases
Thanks so much for the response!

Are there any other reasons to avoid expanders aside from possible problems? (i.e. are they slower, more prone to disk failure, do they cause more disk problems, etc.)

Three SASUC8I cards are cheaper, but require too many slots on the motherboard. (three x8 slots?)

The LSI 9211-8i + expander is a bit more expensive ($500) than the three SASUC8I ($450), but the expander also allows me to move to hardware RAID in the future, etc.

Anyways - I greatly value your opinion so I'd love to hear a bit more on why you advise against the expander (unless the conditions above are met).

EDIT: Also - what $50 - $60 1TB drives are generally the best? This ZFS build is just for home use and I don't need enterprise-level performance. Right now I have a RAID-6 (on a PERC 6i) with a mismatch of WD Green drives and such. However, since I'm buying six more 1TB drives I'd like some advice on whether there are any differences between them. The WD Blue drives are $60/each right now...
 
Are there any other reasons to avoid expanders aside from possible problems? (i.e. are they slower, more prone to disk failure, do they cause more disk problems, etc.)

no

EDIT: Also - what $50 - $60 1TB drives are generally the best? This ZFS build is just for home use and I don't need enterprise-level performance. Right now I have a RAID-6 (on a PERC 6i) with a mismatch of WD Green drives and such. However, since I'm buying six more 1TB drives I'd like some advice on whether there are any differences between them. The WD Blue drives are $60/each right now...

I would avoid 4k disks; for example, you can use the new Hitachi 7K3000 disks.
They are 512-byte sector versions.

I would also use or move to 2 TB versions instead of 1 TB
Hitachi HDS723020BLA642 2 TB (SATA 600, Deskstar 7K3000, 24/7)

(fewer disks, less power, fewer failures)

Gea
 
Are there any other reasons to avoid expanders aside from possible problems? (i.e. are they slower, more prone to disk failure, do they cause more disk problems, etc.)

There are a lot of minor issues that can crop up with expanders, more so than if you just have the requisite number of controller cards. Lots of people in this forum have spent a lot of money on various cards and have found working expander configurations that you'll find here. Personally, I don't really have the time and effort (and $$$) to expend chasing down bugs/quirks; I just want a working server.

Three SASUC8I cards are cheaper, but require too many slots on the motherboard. (three x8 slots?)
Correct, they are x8 electrical. An X58-based board should provide sufficient PCIe lane bandwidth for the configuration you need; quite a few are sold used in the FS/FT forum.

Supermicro also manufactures server boards based on the X58 that include an LSI 1068E (the same controller used on the SASUC8I) on the motherboard, but these are relatively pricey.

EDIT: Also - what $50 - $60 1TB drives are generally the best? This ZFS build is just for home use and I don't need enterprise-level performance. Right now I have a RAID-6 (on a PERC 6i) with a mismatch of WD Green drives and such. However, since I'm buying six more 1TB drives I'd like some advice on whether there are any differences between them. The WD Blue drives are $60/each right now...
Have you considered storage density? A brief Google Shopping search shows 2TB Hitachi 5K3000s for ~$80 apiece. They might be 4K, I believe (so you have to do ashift=12, which is not a big deal), but if you have a limited number of bays, density is something to keep in mind.
 
Well - I already have eight 1TB drives and can't afford to upgrade them yet. Since I'm going to be doing RAID-Z2, all drives need to be the same size, right?

How are the Hitachi Deskstar 7K1000.C drives? (they are in the price range of $50 - $60/drive)

Other drives in that price range seem to be the "eco" drives (Samsung EcoGreen, WD Caviar Green, etc) and the WD Caviar Blue drives.

(Sorry for asking so many questions!)
 
Well - I already have eight 1TB drives and can't afford to upgrade them yet. Since I'm going to be doing RAID-Z2, all drives need to be the same size, right?

You can expand your pool with a second vdev built from 2 TB drives,
or you can use the current pool as a backup pool (you do backups!?)
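
A minimal sketch of the add-a-vdev route, and of the disk-by-disk replace route asked about earlier (pool and disk names are only examples, untested here):

Code:
# route 1: add a second raidz2 vdev built from the 2 TB drives
zpool add tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# route 2: replace the 1 TB disks with 2 TB ones, one at a time;
# the extra space only shows up after every disk in the vdev has been replaced
zpool set autoexpand=on tank
zpool replace tank c0t0d0 c2t0d0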

How are the Hitachi Deskstar 7K1000.C drives? (they are in the price range of $50 - $60/drive)

Other drives in that price range seem to be the "eco" drives (Samsung EcoGreen, WD Caviar Green, etc) and the WD Caviar Blue drives.

as said, avoid 4k drives;
most eco drives are 4k.

If you go 4k, I would use the Samsung F4 (firmware update needed!!)

Gea
 
Thanks again so much for the advice.

I'm really going to have to think about what I want to do I guess.

I've been reading more about the SAS expander problems (reset storms) and now it has me worried. I would just get 2x or 3x SAS controllers and be done with it, but my current motherboard only has one x8 slot (and one x1 slot). Since this is strictly a home fileserver performance isn't really that important so I might just get the expander and hope for the best (since I can use my current hardware).

Another option is to use one SAS controller for 8 ports and then the 6 onboard (but it's an AMD chipset) for 14 drives...

I guess I'm still leaning towards the SAS expander because I WANT it to work since it's such an easy solution and can work with all 20 drives but I'm skeptical about problems.

Anyways - I'm rambling. Thanks again so very much for the advice!
 
I would go with the SAS HBA... You can use the SAS HBA + onboard SATA ports for the moment and, when you get some extra money, buy a new mobo+CPU. I run my system on a SAS HBA + 4 onboard SATA ports and it works great.

You can always try without the expander and then buy it if you find out that onboard SATA are too slow...

On the other hand, a SAS expander costs as much as a mobo+CPU+memory, and you can get an Intel mobo with a good chipset and enough SATA ports to suit you for the moment, then add a SAS HBA when you need it.

Matej
 
Have you considered storage density? A brief Google Shopping search shows 2TB Hitachi 5K3000s for ~$80 apiece. They might be 4K, I believe (so you have to do ashift=12, which is not a big deal), but if you have a limited number of bays, density is something to keep in mind.

5k3000s are not 4k drives. http://www.hitachigst.com/tech/techlib.nsf/techdocs/02D9197756A273D0862577D50024EC1D/$file/DS5K3000_ds.pdf

I am looking at SAS cards right now too, and was curious if anyone has used the AOC-USAS2-L8I? It is $100 cheaper than the other 2008 cards I looked at, which is worth having to rig up a replacement back plate (assuming I am correct that it can be done).
 
OK, going to build a new file server that uses RAID-Z2 on a virtualized OpenIndiana and around 16 TB worth of HDDs, so how much RAM do I need to run that kind of array?
 
5k3000s are not 4k drives. http://www.hitachigst.com/tech/techlib.nsf/techdocs/02D9197756A273D0862577D50024EC1D/$file/DS5K3000_ds.pdf
Thanks! Are these pretty much the best "cheap" 2 TB drives then? ($80/piece)

What is the general opinion on the Spinpoint F4s versus the Hitachi 5K3000? (they are both $80/drive but the Spinpoints are 4k drives...)

EDIT: Actually, the Spinpoint F4's are $5 cheaper (NewEgg promotion)
 
Also, my file server is a low-power AMD system (2.7 GHz, single-core, 2 GB RAM). I'm looking to do a RAID-Z2, 14x1TB setup. Would this be sufficient?
Not sufficient! The minimum for Solaris is really 4 GB RAM; for 14x1TB I would not go below 8 GB RAM (~32 GB if you want to use deduplication). Any extra RAM is automatically used as cache.
The CPU is fine as long as you don't plan to encrypt your drives.

TLB
 
I've decided on 10x 2TB drives (Hitachi 5k3000) instead of 14x 1TB.

Would 4 GB of RAM be sufficient for the lower number of drives?

My file server motherboard only has two DIMM slots, so that's why I'm using 2x 2 GB of ECC RAM. For some reason 4 GB DDR2 ECC RAM costs a fortune. I might upgrade the motherboard in the future, but I was really hoping that 4 GB would hold me over. What kind of performance difference would that make?

(I understand that I won't have much caching in the memory but that's OK for my setup)
 
I've decided on 10x 2TB drives (Hitachi 5k3000) instead of 14x 1TB.

Would 4 GB of RAM be sufficient for the lower number of drives?

My file server motherboard only has two DIMM slots, so that's why I'm using 2x 2 GB of ECC RAM. For some reason 4 GB DDR2 ECC RAM costs a fortune. I might upgrade the motherboard in the future, but I was really hoping that 4 GB would hold me over. What kind of performance difference would that make?

(I understand that I won't have much caching in the memory but that's OK for my setup)

No problem with 4GB and 14 disks.
A Solaris OS needs about 1 GB for itself.
The rest is mainly used for performance and caching data.

Gea
 
With 10 drives, he'd better up his RAM to at least 8GB. I run only 4 drives and I am almost constantly out of RAM. ZFS can use all the RAM you can throw at it.
 
Not sufficient! The minimum for Solaris is really 4 GB RAM; for 14x1TB I would not go below 8 GB RAM (~32 GB if you want to use deduplication). Any extra RAM is automatically used as cache.
The CPU is fine as long as you don't plan to encrypt your drives.

TLB

Is the 32GB a blanket statement for using dedupe? I heard 1GB per TB of deduped data, so I was planning on only enabling dedupe on one folder to reduce my overall RAM consumption on my new build. I figured with just Windows vmdks, I would probably only have 2TB of deduped data that way. Also, it wasn't clear to me, is that estimate based on how much original data, or how much actual stored data? In other words, if I have 10 1TB files that are identical, does that count as 1TB or 10TB when calculating ram consumption?
 
With 10 drives, he'd better up his RAM to at least 8GB. I run only 4 drives and I am almost constantly out of RAM. ZFS can use all the RAM you can throw at it.

You are always out of RAM with ZFS.
If there is unused RAM, ZFS uses all of it, regardless of whether you have 2 GB
or 200 GB of RAM.

The more you have, the faster the pool is, especially with random reads by a lot of users
or virtual machines. So use as much as possible, but if you use it for example as a media
server, used by only a few persons, 4GB is really OK independent of the pool size
(but do not use dedup).
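
If you want to see what the cache is really using, a quick check (Solaris/OI, values in bytes):

Code:
kstat -p zfs:0:arcstats:size     # current ARC size
kstat -p zfs:0:arcstats:c_max    # maximum the ARC is allowed to grow to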

Gea
 
Is the 32GB a blanket statement for using dedupe? I heard 1GB per TB of deduped data, so I was planning on only enabling dedupe on one folder to reduce my overall RAM consumption on my new build. I figured with just Windows vmdks, I would probably only have 2TB of deduped data that way. Also, it wasn't clear to me, is that estimate based on how much original data, or how much actual stored data? In other words, if I have 10 1TB files that are identical, does that count as 1TB or 10TB when calculating ram consumption?

It depends on your data.
The problem is that each block is identified by a hash value, and reads and writes have to check this hash table. If the hash table is larger than your RAM, it is stored on disk, with the result that, for example, a deletion of a snap can take hours or days.

If you have lots of different blocks, this table can grow a lot.
I do not use dedup. But mostly it is recommended to have 2.5 GB - 3 GB of RAM per TB of pool capacity plus a fast read SSD cache drive.

Disks are so cheap. I would use dedup only if really needed.
Example: a lot of nearly identical VMs in a desktop virtualisation environment.
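
If you do try it: dedup is a per-filesystem property, so you can limit it to the VM dataset only, and zdb can simulate the dedup table on existing data before you commit. A sketch (dataset name is only an example):

Code:
# enable dedup only for the dataset that holds the VMs
zfs set dedup=on tank/vm

# simulate dedup over existing data to see the expected ratio and table size
# (can take a long time on a large pool)
zdb -S tank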

Gea
 
Could anyone please let me know what I'm missing? I've a new install of OpenIndiana (live) and cannot get gmail to work through napp-it.

From the napp-it - setup panel I have:

smtp mailserver: smtp.gmail.com:465
smtp username: [email protected]
smtp password: my.gmail.password
mailto: [email protected]

From the napp-it - jobs -email - smtp-test:

I press 'submit' (test email) and napp-it works away at it for a while. I can see the packet cross my firewall, but I end up with the "oops... could not connect smtp-server smtp.gmail.com:465" error message.

As a secondary test I set up Thunderbird on the desktop and successfully sent/received a message using the same credentials.

Thx
 
If I want a simple [?] ZFS/CIFS setup with CIFS shares for "downloads" and "videos", is it better to create a nested filesystem (ie. "downloads" is under "storage") or should I not use nesting like that? (what's the difference/benefits?)
 
Disks are so cheap. I would use dedup only if really needed.
Example: a lot of nearly identical VMs in a desktop virtualisation environment.

Gea

You really don't even need/want dedupe for this anymore. Both ESXi and Hyper-V now support thin provisioning that solves this 'nearly identical VM' problem much more gracefully and with almost zero processing or memory overhead...
 
You really don't even need/want dedupe for this anymore. Both ESXi and Hyper-V now support thin provisioning that solves this 'nearly identical VM' problem much more gracefully and with almost zero processing or memory overhead...

That's a completely different approach to the problem.
Thin provisioning does not reserve the allocated capacity, only the space really used,

with two problems from what I know:
- if you delete files, the "used" capacity mostly does not shrink
- if you store the same information x times (like 50 x Windows 7 + apps), you will
use 50 x the space of each installation, say 50 x 20 GB = 1 TB


If you have allocated a 200 GB disk to each system, you will not use 50 x 200 GB = 10 TB,
but if you use deduplication, you will use only about 20 GB.

Summary:
50 x Win7, used space = 20 GB, virtual local disk = 200 GB: hard provisioned = 10 TB
50 x Win7, used space = 20 GB, virtual local disk = 200 GB: thin provisioned = 1 TB
50 x Win7, used space = 20 GB, virtual local disk = 200 GB: with dedup = 20 GB
If you have enough RAM to cache, dedup is also potentially the fastest.

But I would also prefer thin provisioning until dedup is really needed.
Block-based, real-time dedup (like in ZFS) is also very new; give it some more time.
Mostly, thin provisioning is currently the better solution in my opinion.

Gea
 
Hello,
For some reason I cannot set user permissions in Samba. I can connect to the share as root, but when I go to Security for the shared folder in Windows 7 and then Advanced, and Find, none of the users I set up in Samba through napp-it show up. I'm certain it's a simple step I'm missing, but I can't get it to work. Shouldn't I see root and myself???
 
Hi,

I am planning to run a 24GB or 32GB RAM OpenIndiana with ZFS server at oi_148b or perhaps oi_151, loaded right on the bare hardware (no ESXi or vSphere).

Do we know if hardware encryption for ZFS works natively with the new Xeon E3 processors? (i.e. through AES-NI)
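
(For what it is worth, I gather that once the box is up one can at least check whether the OS sees the AES instructions; a sketch only, untested by me:)

Code:
isainfo -v    # on an AES-NI capable CPU the extension list should include "aes"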

WRT the C202/204/206 chipsets, has anyone had success with OpenIndiana (b148b or above) or Solaris (SE11) running on any motherboard? (I am not interested in running on top of ESXi or vSphere.) ref: http://hardforum.com/showpost.php?p=1037186247&postcount=46

Sorry if this is kind of a duplicate post; however "unclerunkle", http://hardforum.com/member.php?u=222332 , did seem to indicate that he would do some testing to this effect. ref: http://hardforum.com/showpost.php?p=1037115252&postcount=34 - yet I never quite saw a definitive answer in this forum.

If things look good for the Xeon E3's and OpenIndiana with ZFS I plan to use a ASUS system as follows:

ASUS RS300-E7-PS4 with a P8B-E/4L(61-MSVDF0-01) motherboard with a C204 (Sandy Bridge) Chipset.
http://www.asus.com/Server_Workstation/Servers/RS300E7PS4/

If it appears that Xeon E3 systems (like the ASUS) DO NOT install/boot OpenIndiana with ZFS nicely my fallback is a Supermicro system as follows:

SuperServer 6016T-6F with a X8DT6-F (MBD-X8DT6-F -O) motherboard with a 5520 (Tylersburg) Chipset.
http://www.supermicro.com/products/system/1U/6016/SYS-6016T-6F.cfm

FYI, my application doesn't really need 6Gbps SATA, but I do need high CPU performance - so an alternative SuperServer 6016T-URF (X8DTU-F mobo, 5520 chipset) might also work and avoid the annoying issue where the LSI 2008 will report disk-related WWNs instead of controller-based IDs.

Since I only need one CPU, the ASUS (UP) option seems like it would be slightly less expensive and also a bit better in speed (spec.org CPU results specifically for 1 CPU and 4 cores) - not sure about the cost difference WRT CPUs for the two mobos, however. Of course, if the ASUS E3 system cannot load the oi_148b or oi_151 OS then I imagine it will be the Supermicro system.


Thanks in Advance.

Jon Strabala
 
That's a completely different approach to the problem.
Thin provisioning does not reserve the allocated capacity, only the space really used,

with two problems from what I know:
- if you delete files, the "used" capacity mostly does not shrink
- if you store the same information x times (like 50 x Windows 7 + apps), you will
use 50 x the space of each installation, say 50 x 20 GB = 1 TB


If you have allocated a 200 GB disk to each system, you will not use 50 x 200 GB = 10 TB,
but if you use deduplication, you will use only about 20 GB.

Summary:
50 x Win7, used space = 20 GB, virtual local disk = 200 GB: hard provisioned = 10 TB
50 x Win7, used space = 20 GB, virtual local disk = 200 GB: thin provisioned = 1 TB
50 x Win7, used space = 20 GB, virtual local disk = 200 GB: with dedup = 20 GB
If you have enough RAM to cache, dedup is also potentially the fastest.

But I would also prefer thin provisioning until dedup is really needed.
Block-based, real-time dedup (like in ZFS) is also very new; give it some more time.
Mostly, thin provisioning is currently the better solution in my opinion.

Gea

Not quite the version of thin provisioning I was referring to...

You are describing starting with an empty disk, thin provisioned.

I am referring to starting with a completely configured "baseline" VM and then using thin provisioning such that only the changes to that model VM are actually stored on your virtual hard drive. This is supported in both ESXi 4.1 and Hyper-V R2 SP1. In this model, it is only the deltas from the fully provisioned "baseline" (OS+apps, in your example above) that take up space. For your "50 VM" example the total disk space used is limited to blocks that contain local changes and is just marginally more than using dedupe, but the processing/memory overhead to achieve it is almost zero (compared to a significant overhead with dedupe).

Even better, in Hyper-V and combined with dynamic memory, the actual in-memory footprint of these 50VMs is also limited to just the dirty pages (the locally unique data for each VM). Dedupe can't get you that...
 
Anyone here run NexentaStor CE 3.0.4/5 and can get their drives to spin down after X mins?
 
My 10 drives arrived today... (Hitachi 5k3000, interesting that half have a different revision number) and I'm going to be setting up a RAID-Z2 volume.

I may have an extra 1 TB (WD Blue) drive. Would it make a big difference in performance if I used it as a ZIL drive, or can I use it for anything else that would make a difference?

(My system drive is a very old WD 74 GB Raptor [from 2004] which I've thought about replacing, but 1 TB for a system drive is beyond overkill...)

If there is no benefit though, I'll probably just sell the drive to recoup some of the money I spent on the new Hitachi drives...
 
Even better, in Hyper-V and combined with dynamic memory, the actual in-memory footprint of these 50VMs is also limited to just the dirty pages (the locally unique data for each VM). Dedupe can't get you that...

Are you sure about that? I don't think you'd want to use page sharing with large memory pages, and Hyper-V didn't, at least they didn't the last time I read through the docs...neither does VMWare if I recall correctly.
 
Hello,
For some reason I cannot set user permissions in Samba. I can connect to the share as root, but when I go to Security for the shared folder in Windows 7 and then Advanced, and Find, none of the users I set up in Samba through napp-it show up. I'm certain it's a simple step I'm missing, but I can't get it to work. Shouldn't I see root and myself???

NM,
I didn't read that I needed to add the SMB user to the admin group and then use that user to add the permissions.

Now how would I do NFS permissions?
I have one user named movies that I want to give only read privileges through NFS.
 
Could anyone please let me know what I'm missing? I've a new install of OpenIndiana (live) and cannot get gmail to work through napp-it.

From the napp-it - setup panel I have:

smtp mailserver: smtp.gmail.com:465
smtp username: [email protected]
smtp password: my.gmail.password
mailto: [email protected]

From the napp-it - jobs -email - smtp-test:

I press 'submit' (test email) and napp-it works away at it for a while. I can see the packet cross my firewall, but I end up with the "oops... could not connect smtp-server smtp.gmail.com:465" error message.

As a secondary test I set up Thunderbird on the desktop and successfully sent/received a message using the same credentials.

Based on no replies, should I understand there is no way to get this to work? I currently have 3 or 4 other VMs working with gmail for notifications, including NexentaStor. I was hoping to use Time Slider and the napp-it interface, but without email notifications there is no use in using OI and napp-it for me.

Options?

P.S. As a follow-up I tried using telnet to connect to gmail, which was also successful. The connection is there but something (authentication protocol?) is not getting through.
 
Based on no replies, should I understand there is no way to get this to work? I currently have 3 or 4 other VMs working with gmail for notifications, including NexentaStor. I was hoping to use Time Slider and the napp-it interface, but without email notifications there is no use in using OI and napp-it for me.

Options?

P.S. As a follow-up I tried using telnet to connect to gmail, which was also successful. The connection is there but something (authentication protocol?) is not getting through.

edited parameter:

you may try the following:
edit the smtp mailtest menu script

look for the line:
$smtp = Net::SMTP->new($server) || &mess("could not connect smtp-server $server");

and change it to:

$smtp = Net::SMTP->new($server,Port => 465) || &mess("could not connect smtp-server $server");

see info about Perl Net::SMTP module
http://linoleum.leapster.org/archives/48-Using-Perls-NetSMTP-module.html
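
Note that port 465 expects SSL from the first byte, so a plain telnet connect only proves TCP reachability. To check the SSL side itself you may try (assuming OpenSSL is installed):

Code:
# should end with a gmail SMTP banner (220 ...) if the SSL layer works
openssl s_client -connect smtp.gmail.com:465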

Gea
 
Hello!

I am new to the ZFS world and I've decided to make a file server under Nexenta Core.

config :

µATX H61 or H67 motherboard
Core i3 2100
8 GB RAM
LSI 1068E-based controller (probably a BR10i)
8 to 10 Samsung F2 EcoGreen 1.5 TB

I've got a few questions though :

1) Would the BR10i work fine in an H61 or H67 PCIe x16 slot?

2) I intend to do a raidz2 or raidz3. I guess the raidz3 will consume more processor power (a Core i3 2100 is enough, I think?), but how does this translate into speed, say an 8-disk raidz2 vs. an 8-disk raidz3? Would the raidz2 be a lot faster than the raidz3, or will there be no visible difference if the processor is powerful enough?

3) The best option would be to go with a 10 x 1.5 TB raidz3, but since the LSI 1068E only has 8 ports, should I put 8 disks on the LSI 1068E controller and 2 disks on the H61/H67 onboard SATA (or 4/4?), bearing in mind that the OS disk (maybe mirrored) will be there too? Or is it a bad idea to mix controllers? (I read that the LSI 1068E is fully supported on OpenSolaris but didn't find much about H61/H67, so I don't know if there could be a problem there.)
And if mixing controllers works, could it be bad for performance?

Thanks in advance,
Nolhian
 