You may want to rethink your folder structure. Instead of a single ZFS folder with plain directories under it, change those directories into ZFS folders of their own. You should then be able to share them out and set permissions on each of them.
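As a sketch of what that could look like (the pool and filesystem names here are examples, not from the thread): each directory becomes its own ZFS filesystem, which can then be shared and given its own permissions, quota and reservation.

```shell
# Create real ZFS filesystems instead of plain directories
# ("tank" is a hypothetical pool name):
zfs create tank/nas
zfs create tank/media

# Each filesystem can now be shared and managed individually,
# e.g. via the Solaris kernel SMB server:
zfs set sharesmb=name=nas tank/nas
zfs set sharesmb=name=media tank/media
```

These commands need a live ZFS system and root privileges, so treat them as an outline rather than a tested recipe.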
I just tried that. The new ZFS folder I made shows 11 TB free with 100% free; the other shows 11 TB free but 23% free. Will this cause problems with space? If I move data around, will the new folder I made be limited to the 11 TB?
So does this look right? http://dl.dropbox.com/u/4118803/folders 2.png
Your image is missing the pool capacity, but if you have a pool of about 50 TB and your ZFS 'NAS' uses about 38 TB with 11 TB free, then you have about 20% free.
Your ZFS 'Media Server' uses 100K with 11 TB free, so you have 100% free in that folder.
(The percent value is counted against that ZFS folder, not against the pool.
Due to reservations, all the values depend on each other;
e.g. if you set a reservation of 11 TB on Media Server, your ZFS 'NAS' has 0% free.
Use the percentage value only to discover space problems early enough.
If you need the overall value, always look at the pool percentage.)
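The percentages above can be reproduced with a quick calculation (the numbers are the approximate values from the post):

```shell
# Pool of ~50 TB: 'NAS' uses ~38 TB, 11 TB are free in the pool,
# 'Media Server' uses ~100K (effectively 0 at TB scale).
pool_free=11
nas_used=38
media_used=0

# Percent free is counted per ZFS filesystem: free / (used + free)
nas_pct=$(( 100 * pool_free / (nas_used + pool_free) ))
media_pct=$(( 100 * pool_free / (media_used + pool_free) ))
echo "NAS: ${nas_pct}% free"            # ~22% free
echo "Media Server: ${media_pct}% free" # 100% free
```

Both filesystems see the same 11 TB of free pool space; only the "used" part differs, which is why one reports roughly 20% free and the other 100%.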
Try reloading the picture.
But what you are saying seems right. Can I move, e.g., 20 TB of data from the NAS folder to the Media Server folder even though it only shows 11 TB free?
You cannot copy (a copy would need the extra space on top of the original), but you can move, since a move frees the source space as it goes.
I added a user. When I am logged in as root I can see the user in the Security tab, but when I remove the remembered password for root, it takes a long time trying to connect to the share or \\nas, and then it says the password is wrong.
http://dl.dropbox.com/u/4118803/acl settings.png
gea
I added a Dell 5i HBA card yesterday. After a reboot, I was able to add disks on the HBA card,
but after another reboot it only detects the HBA card and no longer detects the OS disk on my motherboard SATA. Any idea how I could configure it so that it boots from my onboard SATA instead of the HBA?
Is it possible to access previous versions of files (i.e. snapshots) via OS X, either with NFS, SMB or AFP?
It's really nice in Windows to be able to access snapshots directly; can this be done in OS X? Preferably without just going in with SSH and manually moving stuff.
Thanks!
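For reference, a sketch of one way this can work over NFS (the filesystem name `tank/nas` and the snapshot name are examples, not from the thread): ZFS exposes its snapshots under a hidden `.zfs` directory at the root of each filesystem, which an OS X client can browse like any other directory once it is made visible.

```shell
# On the server: make the hidden .zfs directory visible to clients
# ("tank/nas" is a hypothetical filesystem name):
zfs set snapdir=visible tank/nas

# On the Mac, after mounting the NFS share:
ls /Volumes/nas/.zfs/snapshot/
# one directory per snapshot, e.g. daily-2011-06-01/
cp /Volumes/nas/.zfs/snapshot/daily-2011-06-01/file.txt ~/restored.txt
```

This is read-only browsing of the snapshot tree; it assumes an NFS mount and a live ZFS server, so take it as an outline rather than a tested recipe.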
Any thoughts on this one? Am I missing something?
I suppose your mainboard has changed the boot order after adding the disks.
Add the disks and check the BIOS.
My OS drive got corrupted.
I think the drive was about to die; now it can't even be detected in the BIOS. The rest of the drives are OK.
Any suggestions for an OS drive? All the new drives out there are way too big; 1 TB is overkill.
Would an SSD or CompactFlash be a good idea?
Solaris runs well with AMD. Most performance or stability problems are due to
unsupported or poorly supported NICs or disk controllers (Realtek is a well-known candidate).
On the other hand, you must define your use case. For a home NAS for video and backup,
a fairly slow machine is fine, even with an Atom or similar CPU. I also use a backup
machine at home based on an older AMD board.
But
for myself, I always look for multi-purpose machines, and virtualisation is a must-have for me.
In my case there is no way around an Intel-based mainboard with server chipsets to get hardware
virtualisation via VT-d. For me, this extra is worth the 50-70 Euro premium or the use of a Xeon,
even if it's only a cheap dual-core. You can also use AMD with IOMMU, but they are similar in price.
I would go for an Intel SSD 311 drive (20 GB), also known as "Larsen Creek". It is intended as a cache for Z68 motherboards, but it is also a very good small boot drive, since it's based on SLC NAND rather than MLC NAND and thus has a longer lifetime. It's also very cheap, around $100.
SSD 311 info:
http://www.hardocp.com/article/2011/05/11/intel_smart_response_technology_srt/6
http://www.anandtech.com/show/4329/intel-z68-chipset-smart-response-technology-ssd-caching-review/3
Can we please stop this nonsense about longer lifespans on SLC drives compared to MLC? Has ANY of you EVER heard of an MLC drive dying due to wear? I've searched around and have yet to find examples. In fact, if you really want to make sure your drive doesn't die from wearing out, buy a much bigger one and make a small partition on it; the firmware will use the free space to spread out writes and thus give the drive a theoretically longer lifespan than an SLC SSD at the same price point.
In reality, though, SSDs die from buggy firmware and the usual failing components, and very rarely (as I said, I have yet to find an example) from worn-out cells...
gea
If the OS drive is corrupted, is there any way to repair it? The machine goes into an endless reboot loop after selecting OpenIndiana at the boot menu.
Would adding an HBA card mess up the OS drive installation?
Why not just replace the disk, reinstall the OS and import the pool?
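A rough outline of that route (the pool name `tank` is an example): the pool's metadata lives on the data disks themselves, not on the OS disk, so a fresh install can pick the pool up with a plain import.

```shell
# After booting the freshly installed OS with the data disks attached:
zpool import          # scans attached disks and lists importable pools
zpool import tank     # import the pool by name
zpool import -f tank  # force the import if the pool was never cleanly exported
zpool status tank     # verify all vdevs are ONLINE
```

Shares and user accounts live in the OS, so those would need to be set up again after the reinstall.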
What does Intel mean by longer endurance, then?
http://download.intel.com/design/flash/nand/325502.pdf
"The Intel SSD 311 Series utilizes Intel 34nm Single Level Cell (SLC) NAND to offer high performance and longer endurance over Multi-Level Cell (MLC) NAND"
Before I embark on what has now become an ESXi adventure (I previously planned a dedicated OI box):
SuperMicro's X8ST3-F seems to be the best choice, with an onboard LSI 8-port SAS controller and all (although it's more expensive than its siblings).
I take it that the onboard LSI controller supports PCI passthrough?
Gea seems to have confirmed it here.
Also, I'm quite new to ESXi; does it have proper network aggregation support?
Does ESXi support aggregation of some sort, or should I buy and pass an extra NIC to my OI guest?
The best config, IMO, would be to aggregate 2-4 Gbit ports within ESXi and balance the load from there. Would this work?
If all the pieces come together, this would really be the ultimate config for an 'all-in-one' machine.
I'm most curious as to how well the network would perform, though!
And how is the performance in general?
Can anyone help me get the latest version of transmission-daemon on a text-only Solaris 11 Express?
Last time I checked, the repo was back at 1.93. I managed to get 2.1 installed from an oi-sfe publisher, but I can't make that work anymore.
cheers
Paul
@Gea & Maximus825
Yes, I suppose both solutions would be perfect in a business-oriented environment.
This is, however, a home server for everything from firewalling/DHCP to file sharing, backup, web server, proxy, and more.
But I don't need that kind of redundancy -- only throughput.
Also, I'm totally new to the all-in-one concept, and was originally going to build a dedicated machine running OpenIndiana, with a few VMs running on top.
However, after reading the forum and Gea's guides, I've decided on a combined all-in-one setup.
This is really ingenious, as I'll be able to combine my current four servers into a single box.
I'm planning on booting several VMs and physical machines using iSCSI-shared ZFS-based volumes, and I need to maximize throughput from my 2-4 Gbit ports.
Can I utilize my NICs from ESXi by bonding, aggregation, IPMP etc., or should I pass a NIC to my OI guest?
I've decided on a Xeon W3565 to go with my X8ST3-F. Please comment on this!
Current shopping list includes:
SuperMicro X8ST3-F
Xeon W3565
3 x 4GB of [insert brand] DDR3 1066 ECC memory.
Link aggregation on an all-in-one between ESXi and your VMs is not needed (don't do it!),
because the internal software links are high speed (3-10 GB/s) with a single vnic.
External transfer from your ESXi virtual switch to your hardware switch may benefit from
link aggregation, especially if you have a lot of concurrent users.
My opinion:
there is little sense in trunking today even in a business environment,
so why expect anything besides trouble at home?
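If you want to verify the internal virtual-switch speed yourself, a simple check could look like this (assuming iperf is installed in both guests; the IP address is a placeholder):

```shell
# In the OpenIndiana storage VM (server side):
iperf -s

# In another VM on the same ESXi host (client side), where
# 192.168.1.10 stands in for the storage VM's address:
iperf -c 192.168.1.10 -t 10
```

If both VMs sit on the same virtual switch, the reported bandwidth reflects the internal software path rather than any physical NIC.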