My home rack is already pretty quiet. Most of the noise comes from the SC846 and its fans. I already have SQ PSUs, but I'd like to replace the three fans behind the backplane. I'm using the chassis as a JBOD, so I think I can safely remove the back fans.
I noticed that the back fans can be replaced...
Thanks for the answers. I'll need the drive to saturate a 10Gbps link so I can move stuff between my server and desktop fast.
http://www.tomshardware.com/reviews/intel-750-series-400gb-versus-samsung-sm951-512gb,4143.html
According to this, the 400GB seems to perform better than the 1.2TB, at least in some...
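Quick back-of-envelope on that, just interface math (the ~5% protocol overhead is a guess):

```python
# What saturating 10GbE actually asks of the SSD, roughly.
line_rate = 10_000 / 8        # 10 Gbit/s -> 1250 MB/s on the wire
usable = line_rate * 0.95     # ballpark after TCP/framing overhead
print(f"10GbE: ~{line_rate:.0f} MB/s line rate, ~{usable:.0f} MB/s usable")
```

So the drive only has to sustain roughly 1.2 GB/s sequential in the direction I'm copying; that's the bar I'm comparing the models against.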
Just found out that there is an 800GB model of this SSD. Has anyone got their hands on one yet? I would like to see some real-world test results for each model. Is the 1200GB model really worth it, or should I go for the 400GB / 800GB if the capacity meets my needs?
Not sure which is the right subforum to post this, so let's start here:
I wanted to install Debian on ZFS root with UEFI boot (rough disk-layout sketch after the steps below).
1) I downloaded the Ubuntu 15.04 desktop image because it has UEFI support (Debian Live does not)
2) Mounted the .iso via Supermicro IPMI
3) BIOS shows two devices...
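Roughly the disk layout I have in mind, as a scripted sketch (device path, partition sizes and pool name are placeholders, not what's actually in the box):

```python
# Sketch: small EFI System Partition for UEFI/GRUB, rest of the disk for the ZFS root pool.
import subprocess

DISK = "/dev/disk/by-id/ata-EXAMPLE"   # placeholder device

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("sgdisk", "--zap-all", DISK)
run("sgdisk", "-n1:1M:+512M", "-t1:EF00", DISK)   # ESP
run("sgdisk", "-n2:0:0", "-t2:BF00", DISK)        # ZFS partition
run("zpool", "create", "-o", "ashift=12",
    "-O", "compression=lz4", "-O", "mountpoint=none",
    "rpool", DISK + "-part2")
```

The Ubuntu live image is just there to give me a UEFI environment to run this from.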
Depends on what you count as "reasonable". Fun hobbies are never cheap :).
I've been running a "10G network" at home for about a year now:
Juniper EX3300-24T - $1000, new
Intel X520-DA2 NICs in each server/desktop - $100 each, new from China
MM OM3/4 cables - maybe $50 total, new
Optics...
I think this setup is still doable even with 32GB of RAM. Also, that's the max for this board/CPU.
Upgrading to 2011 now would be overkill. Xeon D is the optimal storage platform, but I'm never going to touch any 10GBASE-T stuff. I wonder if Intel will release a Skylake Xeon SoC?
I'll just list everything, since other people might wanna know as well. It's currently running on bare metal.
Xeon E3-1230v3
32GB DDR3 ECC
Supermicro X10SLH-F
SC846BA-R920B
SAS3-846EL1 backplane
M1015
2x 40GB Intel 320 SSD for OmniOS
Intel X520 dual-port 10G NIC
24x various 4TB HDDs, mix of WD...
Backups are there, no worries.
Think I'm going for 6 disks per vdev. 8x 6/8/10TB drives in a single raidz2 sounds hazardous. Thanks for all the suggestions!
Yes, 3x8 would give me the best capacity, and probably good-enough performance (need 1GB/s seq read/write), but are 8-disk-wide raidz2 vdevs a good idea with 6TB, or even 8TB, drives? I'm currently using only 4TB drives, but I'd like to make the pool future-proof for bigger disks as well.
It will be mostly media streamed around the house, but also an iSCSI LUN to my Windows desktop; that's why I need the performance.
I tried to avoid largeblocks because it's not supported in ZoL and I might have to switch to ZoL in the future.
Well, scrubbing speed doesn't tell that much about performance, since OpenZFS (I think you're not on Solaris? :P) does not do sequential resilver; also, raidz3 resilvering is the most stressful for the disks.
Currently running 8x 4TB drives in a single raidz2 vdev, going to add 16 more disks. Random performance is going to suck anyway, so I'm going to max out sequential to saturate 10G.
Some options (?), with rough capacity math sketched after the list:
1) 4x 6-disk raidz2
If I had to pick now without planning, I'd go for this one because:
6disk...
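Raw capacity math behind those options (no ZFS overhead or slop space counted; the 6TB/8TB rows are just the future-proofing what-ifs):

```python
# Usable vs. raw capacity for 24-disk raidz2 layouts (very rough, no ZFS overhead).
def usable_tb(vdevs, width, parity, disk_tb):
    return vdevs * (width - parity) * disk_tb

for name, (vdevs, width) in {"4x 6-disk raidz2": (4, 6),
                             "3x 8-disk raidz2": (3, 8)}.items():
    for disk_tb in (4, 6, 8):
        raw = vdevs * width * disk_tb
        print(f"{name}, {disk_tb}TB drives: "
              f"{usable_tb(vdevs, width, 2, disk_tb)} TB usable / {raw} TB raw")
```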
Thanks, makes sense. In MOST cases 16GB is more than enough, but I've managed to run OOM when testing with multiple VMs and such. I do have pretty powerful home server(s), but every now and then I find myself struggling with VMs on my desktop as well.
IIRC, Skylake (1151) supports dual...
I've been planning an upgrade to Skylake. Still need to decide which RAM kit to buy. I won't be overclocking, but would like to get RAM that allows me to overclock if I change my mind in the future.
HyperX Predator looks best performance-wise; there are 4 kits with different clocks/timings...
I think some boards support both modes, so you could swap from the BIOS. In SATA mode all data would go through the SATA controller, and one of the normal SATA ports would be disabled when the M.2 slot is used; correct me if I'm wrong.
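To put some rough numbers on the bandwidth side (theoretical interface ceilings only; assuming the M.2 slot is wired as PCI-e 3.0 x4, which not every board gives you):

```python
# Theoretical interface ceilings for the two M.2 modes.
sata3 = 6e9 * (8 / 10) / 8 / 1e6            # SATA III, 8b/10b encoding -> ~600 MB/s
pcie3_lane = 8e9 * (128 / 130) / 8 / 1e6    # PCI-e 3.0, 128b/130b -> ~985 MB/s per lane
print(f"M.2 SATA mode:    ~{sata3:.0f} MB/s")
print(f"M.2 PCI-e 3.0 x4: ~{4 * pcie3_lane:.0f} MB/s")
```

Latency is more about the protocol than the lanes: SATA mode means AHCI, while PCI-e mode usually means NVMe, which has much lower command overhead and deeper queues.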
I'm still a bit lost with M.2 "tech". Could someone explain a few things about M.2 to me:
- PCI-e mode vs. SATA mode: what's the difference? SATA mode uses SATA lanes; does it have worse bandwidth and/or latency than PCI-e mode?
- Comparing an M.2 NVMe SSD to a "pure" PCI-e NVMe SSD, which one has...
The VIII Hero was the first board I was looking at, but what does it offer that the Ranger does not? There's a killer deal for the Ranger + i7-6700k + 16GB of HyperX here in Finland.
It's finally time for me to upgrade from Sandy Bridge. Here's the setup I'm going for:
- <Insert MoBo here>
- i7-6700k
- 32GB of Kingston DDR4 (Z170 boards support 64GB, but I could not find any 16GB non-ECC DIMMs?)
- GTX 980
- Intel 750 1.2TB NVMe SSD
I'm (most likely) not going to...
What's the major difference between the Signature and ROG series?
What are the differences between the Hero and the Ranger? The only thing I found is the ASMedia SATA controller + two additional SATA ports on the Hero.
The backplane uses an LSI 3x36 or 3x24 expander, and I'll be using an LSI 9300-series HBA. If all the ports work the same way, why are there 4 ports instead of 3 like on the SAS2 model? Can you connect another backplane via dual cables to achieve double bandwidth in that situation as well?
Just bought a Supermicro BPN-SAS3-846EL1 SAS3 backplane. I was reading the manual and noticed that the backplane has 4x SFF8643 connectors. The original SAS and SAS2 versions had 3x SFF8087: one for output to the HBA and two for cascading.
I contacted Supermicro and they said that on 12Gbps backplanes two...
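Rough math on why dual-linking would matter (assumes 12Gb/s SAS lanes with 8b/10b encoding and ~200 MB/s sequential per spinner, both ballpark figures):

```python
# Aggregate bandwidth: single x4 vs. dual x8 link from the HBA to the SAS3 expander.
lane = 12e9 * (8 / 10) / 8 / 1e6   # ~1200 MB/s per SAS3 lane
print(f"single SFF8643 (x4): ~{4 * lane:.0f} MB/s")
print(f"dual SFF8643 (x8):   ~{8 * lane:.0f} MB/s")
print(f"24 HDDs sequential:  ~{24 * 200} MB/s")
```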
I used ESXi for a couple of years but decided to switch to KVM+libvirt. Everything just works, and it performs much better than ESXi (gotta love virtio <3). If you want to cluster multiple nodes, I'd suggest you take a look at OpenNebula instead of OpenStack. OpenStack is nice when you start to have...
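If you go the KVM+libvirt route and want to confirm your guests really are on virtio (that's where most of the performance win comes from), a quick check with the libvirt Python bindings looks something like this (the domain name is just a placeholder):

```python
# List disk/NIC device models for a domain; "virtio" is what you want to see.
import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("nas")               # placeholder domain name
root = ET.fromstring(dom.XMLDesc(0))

for disk in root.findall("./devices/disk/target"):
    print("disk", disk.get("dev"), "bus:", disk.get("bus"))
for nic in root.findall("./devices/interface/model"):
    print("nic model:", nic.get("type"))
conn.close()
```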