Thanks, this worked; I had to redo my network config from scratch, but now it's holding the settings.

If you are using a static IP address, you have to disable the network manager and use the default network service.
http://forums.oracle.com/forums/thread.jspa?threadID=2139833 is what I followed.
svcadm disable network/physical:nwam
svcadm enable network/physical:default
Or you can try making the changes to /etc/default/dhcpagent per that page.
And from there set up your IP information manually, using directions from http://wiki.sun-rays.org/index.php/SRS_5.1_on_Solaris_11_Express (not the packages, just the legacy network part)
Basically, even if you have a static IP address set, in some cases Solaris 11 still does stupid things with DHCP and overwrites some files (like resolv.conf).
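Putting the steps above together, a sketch of the legacy static-IP setup on Solaris 11 Express (the interface name e1000g0 and the addresses are examples only; substitute your own):

```shell
# Switch from NWAM to the default network service:
svcadm disable network/physical:nwam
svcadm enable network/physical:default

# Static address via the legacy config files
# (e1000g0 and 192.168.1.x are placeholders):
echo "192.168.1.10 netmask 255.255.255.0" > /etc/hostname.e1000g0
echo "192.168.1.1" > /etc/defaultrouter

# DNS, so nothing depends on what dhcpagent wrote earlier:
printf "nameserver 192.168.1.1\n" > /etc/resolv.conf

# Restart the service (or reboot) to apply:
svcadm restart network/physical:default
```

If DHCP keeps clobbering resolv.conf, the /etc/default/dhcpagent edit mentioned in the linked thread is the place to stop it.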
hello dropbear

Can you provide us with upgrade instructions please, for those of us who have older versions of napp-it?
I would build one pool from one raidz1 or raidz2 raidset (similar to RAID-5 or RAID-6, but without the write-hole problem) with the five 2 TB drives (capacity optimized).

I just jumped in, very new to Unix/Linux. I tried doing the install on ESXi, but I don't think my motherboard supports passthrough of my BR10i controller.
I've been running my 8TB media server inside of Win7. Just using SyncBack to keep daily backups of the Folders to equal sized int/usb drives.
I've installed OpenIndiana on my WD Raptor and am adding 2x Samsung, 2x Seagate, and 1x WD Green (all 2 TB) drives to the server. I also have a 60 GB SSD I might like to use as a cache drive. Does OI support that, and what would be the best way to set this up?
Also, since I have 2x internal and 2x USB 1.5 TB drives, plus 3x internal and 3x USB 1 TB drives: I haven't seen anyone writing about using USB drives in a pool. Any opinions?
Also, is there a way to mount a disk with data on it to copy over to a pool?
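On the SSD cache question: OI's ZFS does support an SSD read cache (L2ARC) attached to a pool. A minimal sketch, assuming the five 2 TB drives go into one raidz vdev; the device names (c1t0d0 etc.) are placeholders, so check yours with `format` first:

```shell
# Create one raidz1 pool from the five 2 TB drives
# (tank and all cXtYd0 names are example placeholders):
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0

# Attach the 60 GB SSD as an L2ARC read cache:
zpool add tank cache c2t0d0

# Verify the layout:
zpool status tank
```

A cache device can be removed again with `zpool remove`, so trying the SSD out is low-risk.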
If I went with S11, would I be able to mount disks to copy data over? It looks like I'm going to set up a pool, copy all the data off the 2 TB drives (that I moved over from the USB drives earlier), then create the 2 TB disk pool and copy it back. Over the network might still be faster than USB.

mount ntfs-disks
It may be possible by adding extra software, but I suggest always using the simplest way:
connect the NTFS drive to a Windows or Mac machine and copy the files to an SMB-shared folder on your pool.
Nope. I wouldn't risk important data with it for at least another year. A storage server should generally only be used for storage, so even if you only know Linux and not the *solaris varieties, I would 100% recommend NexentaCore instead, as you basically get a Linux environment with proper ZFS.
Hi _Gea

Hello LBJ
You may check whether you already have the newest BIOS.
If so, you have to shut down completely.
The zfs property nbmand=off fixed the afp problem!

About napp-it and netatalk:
I'm working on a problem with napp-it + netatalk (2.1.5 and 2.2 beta) and Solaris*,
similar to http://permalink.gmane.org/gmane.network.netatalk.user/20763

I can connect as a Solaris user via AFP from OS X 10.5 and 10.6.
I can create files and folders; I can delete empty folders and I can rename
files. If I try to delete a file, I get "permission denied".
If I use File Information on OS X, I see the user as owner;
if I reuse File Information, the owner is unknown.

I have tried to compile netatalk with and without PAM support and tried
the volume options acls and upriv with perm 0777, without success.
I am not very experienced with netatalk problems,
so maybe someone has an idea.
Install Nexenta, OpenIndiana or SE11.
Install napp-it via: wget -O - www.napp-it.org/nappit04 | perl
Install AFP via: wget -O - www.napp-it.org/afp | perl
(The installer is downloaded to your $HOME folder, if
you want to check the installer settings.)
Share a folder for AFP. For settings, see the services-afp menu.
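The nbmand fix mentioned earlier in the thread is a per-filesystem ZFS property. A sketch, with tank/afp-share as a hypothetical dataset name:

```shell
# nbmand (non-blocking mandatory locking) can interfere with
# netatalk's delete/rename handling; turn it off for the share.
# The dataset name tank/afp-share is a placeholder.
zfs set nbmand=off tank/afp-share

# Confirm the setting:
zfs get nbmand tank/afp-share
```

Note that nbmand only takes effect when the filesystem is (re)mounted, so remount the share or reboot after changing it.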
Currently I use the Perl-included module Net::SMTP,
which does not support authentication / I could not get it working.

I would try this one: Net::SMTP_auth

I had tried that last summer, but I could not get it working on Nexenta.
*Solaris* is SPARC and Intel only.

Are there any available versions of the OSes for the PPC architecture? (not SPARC)
Even older versions would work, thanks!
I would say an All-In-One server (ESXi + embedded NAS) should be doable with this config. The Dell contains a quite old 5500 chipset, but this was the first Intel server chipset that supports VT-d, needed for pass-through. Broadcom NICs are sometimes reported to have problems with Solaris, so be sure to add an Intel NIC if possible (slot available).

I plan on doing this with a Dell PowerEdge R510; here are the specs:
Dual Xeon 5620s
8 GB (4x 2 GB) unbuffered RAM
12-bay hot-swap chassis + a 2.5" cage inside that holds 2 drives
PERC H200 HBA card (LSI 2008)
Onboard dual Broadcom gigabit NICs
750 W power supply
No hard drives; I'll order them when I order the server, from somewhere like Newegg
Flash the HBA card with the LSI IT firmware.
Install ESXi on two 2.5" drives (RAID 1). (I was planning on cheap laptop drives; would it be advisable to use 2.5" SSDs or 2.5" SAS drives instead?)
Follow the instructions for passing through the HBA card and setting up Solaris (haven't chosen a version yet)
Create a zpool with 2 TB or 3 TB drives; I haven't decided which yet. I don't have enough money to fill up all the drive bays, so I'm leaning towards getting a few 3 TB drives to get me started and expanding as prices go down.
Anyone have any recommendations on either hardware or software configuration? Any ideas on the best utilization of the disks? Should I set up a 4-drive raidz2 and over time expand it to 11 + hot spare, 11 + cache drive, or even just a 12-drive raidz2? I assume the LSI 2008 card uses an expander; would it be better to use two of these cards? If I used two cards, what is the best way to set up the vdevs?
I'm going to be streaming Blu-rays/DVDs/music/etc., running a few VMs (one Linux webserver and a few Windows boxes), backing up desktops, and running an FTP server.
Let me know what you guys think. Thanks.
Thanks for your reply and thanks for all your work on napp-it.
If Solaris has issues with Broadcoms, wouldn't that not matter, since they would be virtualized and Solaris would see them as e1000s?
The internal cage supports 2 2.5" drives and is wired to the mainboard that has raid1 capabilities.
As for ram, the motherboard has 18 slots, so even at 4 x 2gb there is still a lot of room to expand.
As for the second HBA card, how should I set that up? As a simple example, let's say I have a RAID 1 mirror: would it be best to have one drive on one HBA and its mirror on the second HBA, or have them both on one HBA?
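On the mirror-across-HBAs question: splitting each mirror pair across the two controllers means the pool survives the failure of a whole HBA, not just a disk. A sketch, assuming devices on HBA 1 show up as c1* and HBA 2 as c2* (device names are placeholders):

```shell
# Each mirror vdev pairs one disk from HBA 1 with one from HBA 2,
# so losing either controller degrades the pool but keeps it online:
zpool create tank \
    mirror c1t0d0 c2t0d0 \
    mirror c1t1d0 c2t1d0
```

With both halves on one HBA, a controller failure takes the whole pool offline, so the split layout is generally preferable when you have two cards anyway.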
I chose Dell because I work with hundreds of R-series servers at work. I am familiar with them, I like the support, and I have a lot of contacts at Dell who can help me out. I like their refined look and how well and logically they seem to be put together. I realize I am probably paying a premium for that, but that doesn't bother me.
Hmm, I didn't consider that ESXi would have a problem with the onboard RAID 1. The system has internal USB ports; would it make more sense to boot ESXi from, say, a 64 GB USB stick instead of the drives? If so, would you recommend also installing the OpenSolaris VM on that stick? Obviously there would be no redundancy, but I could make backups of the stick periodically.
I guess another option would be, if I had the two HBA cards, to use 2 of the 16 ports to run the 2.5" drives and skip the onboard controller.
I'll have to double check the chipset, this is from the R510 tech guide from Dell:
Introduction of the new Intel Xeon processor 5600 series includes a stepping revision of the Intel 5520 and 5500 chipset, which is required to enable the full 5600 series feature set. Dell servers shipped with the new chipset revision have the symbol II in the System Revision Field visible through OpenManage Server Administrator (OMSA) and the iDRAC GUI. They are physically marked with a 12 x 6mm rectangular label containing the symbol II. The memory interface is optimized for 800/1066/1333 MHz DDR3 SDRAM memory with ECC when running with Intel Xeon processor 5600 series.