Well, I can afford to max out my server with 32GB, but I'd rather not if it's going to be wasted.
As much as possible, because ZFS uses RAM as cache memory. But I think anything above 2GB will work.
How much RAM is recommended if not using deduplication?
The DD tables can be mostly cached in L2ARC though, right?
ZFS deduplication is realtime/online, which means that every read/write must be deduplicated.
You can activate it with low RAM, but then you quite soon reach a situation where the dedup tables are too large for RAM and must be processed from disk.
A simple "delete a snap" can last a week under this condition.
The common suggestion is about 2-3 GB RAM per TB of data to be deduplicated, plus what the storage server itself needs to be fast.
This is the reason that there are only very few use cases, for example with dedup rates > 20, where dedup makes sense at all.
Activating compression and buying another disk is mostly the better way.
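If you want an idea of whether dedup would pay off before enabling it, zdb can simulate it against an existing pool; a sketch, assuming the pool is named tank:

Code:
# read-only simulation; prints a DDT histogram and, on the last
# line, the dedup ratio the pool would achieve
zdb -S tank
# each in-core DDT entry costs roughly 320 bytes, which is where
# the ~2-3 GB RAM per TB rule of thumb comes from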
Thanks for the reply, Gea. How much RAM is recommended if not using deduplication?
Without dedup, you need about 1-2 GB for the OS itself, but then it's slow. Performance is mostly a factor of RAM, because every read or write is cached in RAM. If you have a single-user home/media server, 2-4 GB may be enough. If you want a fast multiuser server with a lot of different file reads and writes, RAM up to the amount of disk space (100% guaranteed cache rate) may be useful.
Mostly it depends on needs and money.
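A practical way to tell whether more RAM would actually help is to watch the ARC under a real workload; on OpenIndiana/Solaris, for example:

Code:
# current ARC size plus hit/miss counters; a low hit rate
# under your normal workload suggests more RAM would help
kstat -m zfs -n arcstats | egrep 'size|hits|misses'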
I have a similar question in determining how much to allocate to OpenIndiana when it is running as an all-in-one box with ESXi. I have an M1015 card set up in pass-through mode, so ESXi makes you reserve all the memory you allocate to the OpenIndiana VM. Since there will be other VMs running on ESXi, I can't give OpenIndiana all the memory, but I want to make sure it gets enough to keep up with the SMB/NFS requests. I'm not really sure how to systematically determine how much to use.
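There's no exact formula, but one approach is to give the VM a fixed reservation, cap the ARC so the guest never starves, and grow the reservation until the ARC hit rate stops improving. On a Solaris-family guest the cap goes in /etc/system (the 6 GB value below is just an assumption for an 8 GB VM; a reboot is needed):

Code:
# /etc/system -- cap the ARC at 6 GB, leaving ~2 GB for the OS
# and services (6 * 1024^3 = 6442450944)
set zfs:zfs_arc_max = 6442450944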
I don't think XenServer has a RAM limit. Does it work well on Xen?
Gea and some others are working on the Pacemaker thing, and from what I understand it works, but it isn't a bare-metal solution; IIRC it requires the all-in-one VM approach. I could be wrong. I'm not a fan of running a hypervisor just so I can run storage clustering.
I created an auto snap job yesterday, and today I got:
---------
Job Alert
---------
end snap: 23.05.2012, 06:00 13 s too many arguments
usage:
snapshot [-r] [-o property=value] ... <filesystem@snapname|volume@snapname>
name=usersauto snaps
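That "too many arguments" usage error is what zfs snapshot prints when the snapshot name reaches it as more than one shell argument, which typically means there is a space somewhere in the job or snapshot name. A sketch with made-up names:

Code:
# a space in the name is split into two arguments and fails:
zfs snapshot tank/users@auto snaps
# -> too many arguments
# a name without spaces works:
zfs snapshot tank/users@auto_snaps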
I wanted to move my OI install to another disk as part of the move to a new case and expansion of my OI server. I was going to move it from the 36GB disk it's on now to a 16GB SSD I had lying around. Is this something that can be done, or is this going to require a new install?
I figured Acronis/GParted would be able to resize the partition, but it doesn't appear that it can.
This is why:
https://www.dropbox.com/s/roavorq5ijxsvm9/wd36.jpg
What options do I have to use a new disk as the boot/system disk?
So just install to the SSD, then import the drives and it'll know about the structure of the vdev/pools?
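That's the usual route: the pool configuration lives on the data disks themselves, so a fresh install can pick it up. A sketch, assuming the pool is named tank:

Code:
# on the old install, before pulling the disks (optional but clean):
zpool export tank
# on the fresh install on the SSD:
zpool import          # lists pools found on attached disks
zpool import tank     # imports the pool with all vdevs intact
# add -f if the pool wasn't exported cleanly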
System has been hanging about 3 min after getting a logon screen. I think it's the OS drive, but when trying to install OI to the SSD I received an error and it hangs:

Code:
syncing file systems...
panic[cpu2]/thread=fffff00077d4c40: BAD TRAP: type=d (#gp General protection) rp=ffffffffffbc796d0 addr=0
dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
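If the dump completes, savecore should leave a crash dump you can inspect to see what actually faulted; a sketch for OpenIndiana, assuming the default /var/crash location:

Code:
cd /var/crash/$(hostname)
# if only a compressed vmdump.0 is present, expand it first:
savecore -f vmdump.0
mdb unix.0 vmcore.0
# inside mdb:
::status     # panic string and dump summary
$C           # kernel stack backtrace at the time of the panic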
Removed the LSI card, testing install again. If it fails, I'll stress test the machine. Maybe it's the new Seasonic 620w PS.
I wouldn't mix sector sizes within the same pool.
Is there a good resource with the current best practices for choosing raidz vdev sizes with 4K disks or mixed 512/4K? I've read that the wiki is out of date, and on the forums there's no consistent message.
I'm trying to decide between a 3- and 4-drive pool for my backup server, with at least some of the disks being 4K. But I've seen conflicting benchmarks and I'm not sure what the most current info is. I don't want to waste a port, but I also don't want to throw away performance. Is it better to stick with 3 and figure something else out in the future with another controller or larger disks?
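The usual rule of thumb is that raidz stripes each record across the data disks, so a power-of-two number of data disks divides the default 128K recordsize evenly on 4K sectors: a 3-disk raidz1 (2 data) writes 64K per disk, while a 4-disk raidz1 (3 data) leaves an uneven 128K/3 split and some padding overhead. You can check which sector size a pool was built with:

Code:
# ashift=9 means 512-byte sectors, ashift=12 means 4K
zdb | grep ashift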
Does Solaris Express/ZFS support SMB2?
No, it does not.
This is not completely surprising. My experience (over a 10GbE network) was that SMB transfers from Solaris to Windows stall at just over 100MB/s, while NFS can be pushed to well over 700MB/s. Assuming the client side is a Windows-based system, you need an optimized NFS client to reach those speeds (i.e., Hummingbird). The built-in Windows NFS client is just crap.
Also, I tried doing a dd of a 10 GB file, testing the difference between SMB and NFS:
NFS:
10240000000 bytes (10 GB) copied, 45.4108 s, 225 MB/s
SMB/CIFS:
10240000000 bytes (10 GB) copied, 120.952 s, 84.7 MB/s
Does that seem right?! I've run the test several times just to make sure but sure enough NFS seems MUCH, much faster.
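For reference, a run like that would look roughly as follows (the mount points are hypothetical, and the block size is inferred from the 10240000000-byte total). Note that /dev/zero compresses extremely well, so with ZFS compression enabled this measures the protocol path more than the disks:

Code:
# write ~10 GB through each mount and compare throughput
dd if=/dev/zero of=/mnt/nfs/test.bin bs=1024000 count=10000
dd if=/dev/zero of=/mnt/smb/test.bin bs=1024000 count=10000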