OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

You may want to rethink your folder structure. Instead of a single ZFS folder with plain directories under it, change those directories to be ZFS folders themselves. You should then be able to share them out and set permissions on each of them.
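In case it helps, a rough sketch of what that looks like from the shell (the pool name 'tank' and the folder names are just examples; napp-it's ZFS folder menu does the same thing):

# create real ZFS folders (filesystems) instead of plain directories
zfs create tank/media
zfs create tank/backup
# share each folder via SMB so permissions can be set per folder
zfs set sharesmb=on tank/media
zfs set sharesmb=on tank/backup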

I just tried that. The new ZFS folder I made shows 11 TB free at 100% free; the other shows 11 TB free but only 23% free. Will this cause problems with space? If I move data around, will the new folder I made be limited to the 11 TB?
 

All ZFS folders share the same pool capacity.
If you have not set reservations, you only need to care about the pool value itself.
 

Your image does not show the pool capacity, but if you have a pool of about 50 TB
and your ZFS 'NAS' folder uses about 38 TB with 11 TB free, then you have about 20% free.

Your ZFS 'Media Server' folder uses 100K with 11 TB free, so you have 100% free in this folder
(the percentage is counted against this ZFS folder, not against the pool).

Due to reservations, all values depend on each other.
For example, if you set a reservation of 11 TB on Media Server, your ZFS 'NAS' folder has 0% free.
Use the percentage value only to discover space problems early enough.

If you need the overall value, always look at the pool percentage.
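As an illustration only (pool and folder names plus the sizes are made up), this is how a reservation takes space away from every other folder sharing the pool:

# reserve 11 TB for 'mediaserver'; every other folder's AVAIL drops accordingly
zfs set reservation=11T tank/mediaserver
# compare the available space of all folders against the pool
zfs list -o name,used,avail,reservation -r tank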
 
I have a different problem now.

I made a ZFS folder 'data' and shared it over SMB with the guest-ok option.
That worked fine.

Then I deleted the ZFS folder 'data'
and created a new ZFS folder 'data' again.

And now I cannot access it as a guest anymore (nor any other ZFS folder I create) :S

I even tried destroying the pool and recreating everything, but guest logins just aren't possible anymore.
 
Your image does not show the pool capacity, but if you have a pool of about 50 TB
and your ZFS 'NAS' folder uses about 38 TB with 11 TB free, then you have about 20% free.

Your ZFS 'Media Server' folder uses 100K with 11 TB free, so you have 100% free in this folder
(the percentage is counted against this ZFS folder, not against the pool).

Due to reservations, all values depend on each other.
For example, if you set a reservation of 11 TB on Media Server, your ZFS 'NAS' folder has 0% free.
Use the percentage value only to discover space problems early enough.

If you need the overall value, always look at the pool percentage.

Try reloading the picture.

But what you are saying seems right. Can I move, for example, 20 TB of data from the NAS folder to the Media Server folder even though it shows only 11 TB free?
 

You cannot copy, but you can move: a copy would temporarily need space for both the old and the new data in the same pool, whereas a move frees space from the source folder as it fills the target.
 
Gea,

I added a Dell 5i HBA card yesterday. After a reboot I was able to add disks on the HBA card,
but after another reboot it only detects the HBA card and no longer detects the OS disk on my motherboard SATA. Any idea how I could configure it so that it boots from my onboard SATA instead of the HBA?
 
Is it possible to access previous versions of files (i.e. snapshots) via OS X, either with NFS, SMB or AFP?

It's really nice in Windows to be able to directly access snapshots; can this be done in OS X? Preferably without just going in with SSH and manually moving stuff.

Thanks!
 
Gea,

I added a Dell 5i HBA card yesterday. After a reboot I was able to add disks on the HBA card,
but after another reboot it only detects the HBA card and no longer detects the OS disk on my motherboard SATA. Any idea how I could configure it so that it boots from my onboard SATA instead of the HBA?

I suppose your mainboard has changed the boot order after adding disks.
Add the disks and check the BIOS.
 
Is it possible to access previous versions of files (i.e. snapshots) via OS X, either with NFS, SMB or AFP?

It's really nice in Windows to be able to directly access snapshots; can this be done in OS X? Preferably without just going in with SSH and manually moving stuff.

Thanks!

No, OSX does not have such a mechanism.
You may use the desktop sharing feature of OpenIndiana and access snaps
via TimeSlider.
 

Nope. If the files are local and you have a Time Machine backup over AFP or iSCSI then you can use that, but if they are on the server then your best bet is to do as Gea suggested and use the Time Slider GUI (just VNC in).
 
I suppose your mainboard has changed the boot order after adding disks.
Add the disks and check the BIOS.

My OS drive got corrupted :(

I think the drive was about to die... now it can't even be detected in the BIOS. The rest of the drives are OK.

Any suggestions for an OS drive? All the new drives out there are way too big; 1 TB is overkill.

Is an SSD or CompactFlash a good idea?
 

I'd say a cheap 16GB or 30GB SSD as boot-drive.
They're probably not the fastest SSDs around, but still faster than a big harddisk and shouldn't be too expensive either.
 

I would go for an Intel SSD 311 drive (20 GB), also known as "Larson Creek". It is intended as a cache drive for Z68 motherboards but makes a very good small boot drive, since it is based on SLC NAND rather than MLC NAND and thus has a longer lifetime. It's also very cheap, around $100.

SSD 311 info:
http://www.hardocp.com/article/2011/05/11/intel_smart_response_technology_srt/6
http://www.anandtech.com/show/4329/intel-z68-chipset-smart-response-technology-ssd-caching-review/3
 
Solaris runs well with AMD. Most performance or stability problems are due to
unsupported or poorly supported NICs or disk controllers (Realtek is a well-known candidate).

On the other hand, you must define your use case, e.g. a home NAS for video and backup;
then a fairly slow machine is fine, even with an Atom or similar CPU. I also use a backup
machine at home based on an older AMD board.

But
for me, I always look for multi-purpose machines, and virtualisation is a must-have for me.
In my case there is no way around an Intel-based mainboard with server chipsets to get hardware
virtualisation via VT-d. For me, this extra is worth the 50-70 Euro premium or the use of a Xeon,
even if it's only a cheap dual-core. You can also use AMD with IOMMU, but they are similar in price.

I am using an AMD Phenom II X6 1055T (hexcore) in my ESXi all-in-one server with a Gigabyte GA-890FXA-UD5; it was by far the lowest price point in the UK for IOMMU when I got it last year. Fortunately I was able to score a 2-port Intel NIC from work for free, and although most of my LAN traffic is internal to ESXi, the Realtek NICs are not supported natively by ESXi anyway. The new revision of this board (3.1) will support the new AM3+ Zambezi/Bulldozer CPUs.
 
I would go for an Intel SSD 311 drive (20 GB), also known as "Larson Creek". It is intended as a cache drive for Z68 motherboards but makes a very good small boot drive, since it is based on SLC NAND rather than MLC NAND and thus has a longer lifetime. It's also very cheap, around $100.

SSD 311 info:
http://www.hardocp.com/article/2011/05/11/intel_smart_response_technology_srt/6
http://www.anandtech.com/show/4329/intel-z68-chipset-smart-response-technology-ssd-caching-review/3

Can we please stop this nonsense about longer lifespans on SLC drives compared to MLC? Have ANY of you EVER heard of an MLC drive dying due to wear? I've searched around and have yet to find examples. In fact, if you really want to make sure your drive doesn't die from wearing out, buy a much bigger one and make a small partition on it, since the firmware will use the free space to spread out writes and thus give the drive a theoretically longer lifespan than an SLC SSD at the same price point.

In reality though, SSDs die from bad firmware and the usual failing components, and very rarely (as I said, I have yet to find an example) from worn-out cells...
 

Indeed, Intel themselves have stated that the choice of SLC for the 311 Larson Creek drive has nothing to do with write endurance but was rather made for speed.
 

What does Intel mean with longer endurance then?

http://download.intel.com/design/flash/nand/325502.pdf
"The Intel SSD 311 Series utilizes Intel 34nm Single Level Cell (SLC) NAND to offer high performance and longer endurance over Multi-Level Cell (MLC) NAND"
 
Gea,

If the OS drive is corrupted, is there any way to repair it? The machine goes into an endless reboot loop after selecting OpenIndiana at boot.

Would adding an HBA card mess up the OS drive installation?
 

Why not just replace the disk, reinstall the OS and import the pool?
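For reference, a minimal sketch of that last step after the reinstall (pool name is just an example):

# show pools that can be imported from the attached disks, then import by name
zpool import
zpool import tank
# use -f only if the pool was not cleanly exported before the old OS died
zpool import -f tank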
 
First beta of napp-it 0.6 (only for testing)

This is a major update, especially for job management / the replication extension:
- job handling is completely done by background agents, for much better job and error management
compared to the CGI + cron management in napp-it 0.5
- realtime job control
- Web-GUI is much faster in job handling
- ZFS and disk infos are buffered to improve Web-GUI performance with many disks or ZFS folders
- pools are always mounted to /pool on import (to allow importing pools with other mountpoints)
- reset email trigger for disk errors
- join a domain with a settable NTLM auth level for Server 2008 R2
- import pool with missing cache drives
- bugfix: create ZFS always as UTF8

This is the very first beta, only available if you use or evaluate extensions.
If you want to try, request an evaluation key from [email protected].
 
Is it possible to access previous versions of files (i.e. snapshots) via OS X, either with NFS, SMB or AFP?

It's really nice in Windows to be able to directly access snapshots; can this be done in OS X? Preferably without just going in with SSH and manually moving stuff.

Thanks!

You can via the terminal. For example, if you have a share/filesystem called TVShows you can (after mounting the share):
cd /Volumes/TVShows/.zfs/snapshot
and you'll be in the snapshot folder. Works well enough if you're comfortable with the command line.

[edit] doh... reading comprehension failure on my part... the terminal is not quite SSH, but the usability is about the same.
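A quick sketch of pulling a file back out that way (share, snapshot and file names are made up):

# list the snapshots exposed under the hidden .zfs directory of the mounted share
ls /Volumes/TVShows/.zfs/snapshot
# copy a file out of one of those read-only snapshots back to the live filesystem
cp /Volumes/TVShows/.zfs/snapshot/daily-2011-08-01/episode.mkv ~/Desktop/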
 
What does Intel mean with longer endurance then?

http://download.intel.com/design/flash/nand/325502.pdf
"The Intel SSD 311 Series utilizes Intel 34nm Single Level Cell (SLC) NAND to offer high performance and longer endurance over Multi-Level Cell (MLC) NAND"

They mean exactly that: longer endurance. But if endurance is not the limiting factor of the drive's lifespan, then the endurance is a non-issue.
Besides, don't be fooled by PR. Making consumers think they're getting a bargain by buying a product with a higher profit margin is what the PR division gets paid for.
 
Before I embark on what has now become an ESXi adventure (previously I planned a dedicated OI box):

SuperMicro's X8ST3-F seems to be the best choice, with an onboard 8-port LSI SAS controller and all (although more expensive than its siblings).
I take it that the onboard LSI controller supports PCI passthrough?
Seems like Gea confirmed it here.

Also, I'm quite new to ESXi; does it have proper network aggregation support?
Does ESXi support aggregation of some sort, or should I buy and pass an extra NIC to my OI guest?
The best config, imo, would be to aggregate 2-4 Gbit ports within ESXi and balance load from there. Would this work?

If all the pieces come together, this would really be the ultimate config for an 'all-in-one' machine.

I'm most curious as to how well the network would perform, though!
And how is the performance in general?
 

If you have a lot of concurrent users, port aggregation may help a little;
otherwise it just complicates things without any benefit.

If you really need more than 1 Gb, use an All-In-One with a 10 Gb NIC (about 350 Euro)
connected to a VLAN-capable 10 Gb switch (like an HP ProCurve 2910, about 1500 Euro), and you have an external speed of 10 Gb.
The All-In-One internal speed between the SAN and the VMs over ESXi virtual switches is always high (3-10 Gb/s, depending on the vNIC driver and general machine performance).

If you need availability, use All-In-Ones in pairs.
 

Your best bet would be to skip link aggregation and utilize MPIO with NFS or iSCSI on ESXi 4+.
 
Can anyone help me get the latest version of transmission-daemon on a text-only Solaris 11 Express?

Last time I checked, the repo was back at 1.93. I managed to get 2.1 installed from an oi-sfe publisher, but I can't make that happen anymore.

cheers
Paul
 

@Paul,

This is what I did for Transmission 2.32; I haven't tried 2.33 but I assume it would work too. Download the source code.

You'll need to first grab the developer/gnome/gettext and text/gnu-gettext packages from the Solaris repo. Then download libevent (http://monkey.org/~provos/libevent/).
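On Solaris 11 Express that first step would presumably look something like this (package names taken from above; the exact FMRIs may differ):

sudo pkg install developer/gnome/gettext text/gnu-gettext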

Build libevent from source:

./configure --prefix=/usr
make
sudo make install

Then build transmission:

./configure --prefix=/var --disable-gtk --enable-daemon XGETTEXT=/usr/gnu/bin/xgettext MSGFMT=/usr/gnu/bin/msgfmt
make
sudo make install

I think I made an SMF manifest for the daemon, so if you want it I can send it to you; otherwise I think this one might work:

http://www.4amlunch.net/SMF/transmission-daemon/

Let me know if that works for you. Alternatively, OpenCSW has a relatively new version that you can grab with their pkgutil tool.
 
@Gea & Maximus825
Yes, I suppose both solutions would be perfect in a business-oriented environment.

This is however, a home-server for everything from firewalling/dhcp to filesharing, backup, webserver, proxy, ++++.
But I don`t need that kind of redundancy -- only throughput.

Also, I`m totally new to the All-on-one concept, and was originally going to build a dedicated machine running OpenIndiana, with a few VMs running on top.
However, after reading on forum and Gea`s guides, I`ve decided on a combined
This is really ingenous, as I`ll be able to combine my current four servers into a single box.

I`m planning on booting several VM`s and physical machines, by using iSCSI-shared ZFS-based vols. And I need to maximize throughput from my 2-4 Gbit ports.

Can I utilize my NICs from ESXi by bonding, aggregation, IPMP etc --- or pass a NIC to my oi-guest?


I`ve decided on a Xeon W3565 to go with my X8ST3-F. Please comment on this!

Current shopping list includes:
SuperMicro X8ST3-F
Xeon W3565
3 x 4GB of [insert brand] DDR3 1066 ECC memory.
 
@Gea & Maximus825
Yes, I suppose both solutions would be perfect in a business-oriented environment.

This is however, a home-server for everything from firewalling/dhcp to filesharing, backup, webserver, proxy, ++++.
But I don`t need that kind of redundancy -- only throughput.

Also, I`m totally new to the All-on-one concept, and was originally going to build a dedicated machine running OpenIndiana, with a few VMs running on top.
However, after reading on forum and Gea`s guides, I`ve decided on a combined
This is really ingenous, as I`ll be able to combine my current four servers into a single box.

I`m planning on booting several VM`s and physical machines, by using iSCSI-shared ZFS-based vols. And I need to maximize throughput from my 2-4 Gbit ports.

Can I utilize my NICs from ESXi by bonding, aggregation, IPMP etc --- or pass a NIC to my oi-guest?


I`ve decided on a Xeon W3565 to go with my X8ST3-F. Please comment on this!

Current shopping list includes:
SuperMicro X8ST3-F
Xeon W3565
3 x 4GB of [insert brand] DDR3 1066 ECC memory.

Link aggregation on an All-In-One between ESXi and your VMs is not needed (do not!),
because internal software links are high speed (3-10 Gb/s) with a single vNIC.

External transfers from your ESXi's virtual switch to your hardware switch may benefit from
link aggregation, especially if you have a lot of concurrent users.

My opinion:
There is little sense in trunking today even in a business environment,
so why expect anything besides trouble at home?
 

It is primarily the external traffic I'm concerned about.
As I'm new to concepts like network aggregation and the niches of hardware in the server market, I'm reading and learning day by day. I basically just want to make sure I get the right hardware, and also try to eliminate as many potential startup problems as possible. That way I can go straight to setting things up and start testing and benchmarking.

Doesn't seem like LACP is supported by ESXi, according to this article.
Suppose I'll just have to try things out as I get the hardware.
Any remarks on the hardware pick, btw?
Also, I'll be using the Samsung F4EGs, as I'm very pleased with my vdevs consisting of these.
 