I have a 10+ year old FreeNAS box that has served me very well, but it's old hardware and I can't upgrade it anymore; plus I now own a rack and thus want something rack-mountable.
I've been searching high and low, and it's really hard to find a short-depth chassis that supports 4x 3.5-inch HDDs or...
OK, so that didn't work. I googled a bit and my memory was refreshed.
I had to stop DFSR on both sides, then go into E:\System Volume Information (after adding permissions) and delete the DFSR folder.
I restarted DFSR and created one test replication group, and it seems to be working so far.
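For reference, the steps above can be sketched as a script. This is a hedged sketch, not a vetted procedure: it assumes the standard DFSR service name and the E: volume path from the post, uses takeown to stand in for "adding permissions" on System Volume Information, and is destructive (it deletes the per-volume DFSR database so the service rebuilds it). Verify the path on your own system first.

```python
import platform
import subprocess

# Per-volume DFSR database location, as described in the post.
DFSR_DB = r"E:\System Volume Information\DFSR"

def reset_commands(db_path):
    """Stop DFSR, take ownership of and remove its database, then restart it."""
    return [
        ["sc", "stop", "DFSR"],
        # Grant access to System Volume Information before deleting.
        ["takeown", "/f", db_path, "/r", "/d", "Y"],
        ["cmd", "/c", "rd", "/s", "/q", db_path],
        ["sc", "start", "DFSR"],
    ]

# Guarded so the sketch is inert anywhere but a Windows box.
if platform.system() == "Windows":
    for cmd in reset_commands(DFSR_DB):
        subprocess.run(cmd, check=False)
```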
So I did what you said: with DFSR reinstalled on the 2nd server I added one replication group (out of 7), and it's not replicating, even though of course the DFS GUI says everything is fine and can replicate through the topology.
This popped up again on the source server but I don't want to run...
I killed all DFSR groups and even deleted all the DfsrPrivate folders and data; users have no idea what happened (which is great).
Good call on removing it on the dead server, if that works you're a hero.
TL;DR: my datacenter lost power, yay! Most things came up OK, but DFS-R was borked and I didn't get an alert because my email server was down as well. I have two identical Dell R510 servers with about 3-4TB total on them.
So replication stopped for a month and one of my servers is way behind now...
I forgot to answer this, 512GB is more than enough. I only use half of that currently and half of what I use I can delete at any time. That M.2 drive is as fast as it gets, amazing tech for the price.
Regarding the video card, yeah, it's looking like either I go full bore @ 980 Ti (for high res...
Honestly nothing at the moment, because I've never had the option. I mostly play Dota 2 right now, but if I could play other games in 4K I might be torn away.
I more or less want the ability to play 4K if I so choose but I'd prefer not to spend $1k on a video card.
If a 980 non ti can't cut...
I definitely want water cooling for the noise reduction. Thanks for the Cooler Master suggestion; it looks like it has good reviews and will save me some $$.
For the motherboard, good question. I want a high-quality board with dual M.2, but beyond that I'm not sure what I actually need beyond...
It's on high everything; I've tried every BIOS setting on and off.
...still 45MB/s
At this point I've determined it's a bandwidth cap per host, but I have no idea where it's coming from and no solid leads.
So I loaded an Ubuntu live CD on the box that has FreeNAS, then connected to another Ubuntu machine and did 100MB/s+ with a single stream, something FreeNAS has not been able to do under any circumstance.
Moreover, I've found that what is happening is FreeNAS will max out streams from a...
- LZ4 is enabled on the main pool; however, I have tried multiple pools without compression, which net the same speed
- I've monitored Samba during transfers and CPU usage is quite low; NFS/FTP are quite slow too, so I doubt it's CPU related.
- I've tried multiple clients, I've tried it from...
I know for a fact there are people running this version of FreeNAS without issue, getting 100+ MB/s, so there is a logical root cause; I just need to find it. It's pretty frustrating to me because I am a huge FreeNAS fan and we spent a lot of money putting this system together.
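A single-stream raw TCP test (the same thing iperf measures) is a good way to separate the network from the SMB/ZFS layers. A minimal sketch in Python, run against localhost here; HOST, PORT, and the transfer size are placeholders, so point HOST at the NAS for a real measurement:

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5301   # placeholders; set HOST to the NAS for a real test
TOTAL = 64 * 1024 * 1024         # bytes to push (small, demo-sized)
CHUNK = bytes(64 * 1024)

def receiver(ready):
    srv = socket.socket()
    srv.bind((HOST, PORT))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    while conn.recv(1 << 16):    # drain until the sender closes
        pass
    conn.close()
    srv.close()

ready = threading.Event()
t = threading.Thread(target=receiver, args=(ready,))
t.start()
ready.wait()                     # don't connect before the listener is up

sock = socket.create_connection((HOST, PORT))
start = time.perf_counter()
sent = 0
while sent < TOTAL:
    sock.sendall(CHUNK)
    sent += len(CHUNK)
sock.close()
t.join()
elapsed = time.perf_counter() - start
print(f"{sent / elapsed / 1e6:.0f} MB/s single-stream")
```

If this runs at wire speed while SMB stays capped, the problem is above the network layer.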
Zero impact...
Network adapters are set up for failover on FreeNAS, and each individual link goes to a separate switch. I have decoupled them and run single links, and the speed remains the same.
I just ran an iperf for you; it shows 1.41 GBytes @ 1.21 Gbits/sec.
For a real-world test I've been copying a 4.7GB ISO...
The 10 gig link is showing no errors on the switch side so I hooked up the 1 gig link.
1GbE ~ 39MB/s
10GbE ~ 47MB/s
Once again I tried hitting the 10 gig link with multiple streams; they all hit that limit in parallel. So if I send four streams from four servers, it ends up at 47MB/s x 4.
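The arithmetic here is worth spelling out, since it's what points at a per-stream/per-host cap rather than a saturated link (a quick sanity check, using the rough 10GbE line rate):

```python
# Four parallel streams still only move ~188 MB/s total, far below
# what 10GbE can carry, so the link itself is not the bottleneck.
per_stream_mb = 47                 # observed ceiling per stream (MB/s)
streams = 4
aggregate = per_stream_mb * streams
link_capacity_mb = 10_000 / 8      # 10 Gbit/s is roughly 1250 MB/s on the wire
print(aggregate, "MB/s aggregate vs", link_capacity_mb, "MB/s link capacity")
```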
Not sure...
I'm currently set up in a failover configuration, although I tried individual 10 gig links; each link goes to a different switch and I tried both. I'll get someone to check the port(s) to rule them out, but I'm pretty certain that is not the cause.
I have a Dell R510 (dual CPU / 6 core) with 130GB of RAM. My pool consists of three 12-disk raidz3 vdevs plus a mirrored SSD ZIL and a four-disk SSD L2ARC. It's also on a fully 10 gig network (and the config reflects that). The HBA is a beastly LSI 9300-16e.
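For context, rough usable-capacity math for a layout like that (raw numbers only, before ZFS metadata and slop overhead; the 4TB drive size is an assumption, since the post doesn't state it):

```python
# Three 12-disk raidz3 vdevs lose 3 disks of parity per vdev.
vdevs, disks_per_vdev, parity = 3, 12, 3
drive_tb = 4                                   # assumed drive size
data_disks = vdevs * (disks_per_vdev - parity)
print(data_disks, "data disks,", data_disks * drive_tb, "TB raw usable")
```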
Nothing I can do is making this thing...
RAIDZ3 is going to hit me pretty hard with the 3-disk parity; plus I'm using regular 7200RPM SATA disks and passing them through HW RAID0 before ZFS gets hold of them.
I'm using spare hardware to test FreeNAS; if I like what I see, we'll buy custom boxes full of 6TB drives, 10GbE, etc.
Yes, FreeNAS/ZFS uses SSH for snapshot replication, and it does appear to be dog slow unfortunately.
By hammering I mean I took four of my production servers on 10 gig links and used RichCopy to do a multithreaded transfer to each of the FreeNAS servers while replication was running. Each server had...
Hey Guys,
I'm playing with two FreeNAS servers (using ZFS), both big boxes (lots of CPU/RAM). Each has an MD1000 w/ 15 disks, one of which I'm using to boot the OS and the rest in a raidz3 (two 7-disk vdevs).
Anyway, my question is: I enabled ZFS replication from freenas01...
Yep, all latest, but get this: after reboot I started transferring data; about 10 minutes later I come back and OpenManage shows all the disks as "removed", then it blue-screened with 0xc000021a.
Now on reboot I log in to the controller and all disks are labelled as foreign.
Yeah, I could try another PCI slot or an entirely different server; perhaps I'll do that next.
So I started a manual patrol read and left it alone (didn't clear any bad blocks).
I come in today and one virtual disk is lit up green, no errors, but it's doing a background init? Why would it do that...
Power supplies on the MD1000s? All three are kicking drives out; could one PSU cause all three to have issues? I don't have a full backup of the array, but there is nothing super critical on there. I would have to get the data off, and it's going to take at least a week.
The drives getting...
Out of warranty, "uncertified" drives and bought from another vendor.
We've tried the cables; also, there are redundant paths, so both would have to be bunk, no?
I have a storage server with two RAID60 volumes, and both are having random drives disappear, turn foreign, or fail completely. If I reboot, the disks pop back in and rebuild.
The failed drives will throw alerts "A block on the physical disk has been punctured by the controller: Physical Disk...
ReFS by itself, if you don't mind sacrificing some write speed, is a step in the right direction. Theoretically ReFS is far superior to NTFS when it comes to resiliency (if that is your top priority); that said, I haven't tested it enough to validate it.
Also if you are like me and have...
Well it just so happens I've spent a few days testing such things.
My underlying hardware was a Dell PERC H810 in a RAID60 (7/7/7) connected to three MD1000s. Drives are regular 4TB SATA 7200RPM. I made a 56 gig volume.
NTFS @ 64KB = 900-1200MB/s read, 500-550MB/s write
ReFS @ 64KB =...
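Those numbers came from sequential transfers; a minimal sketch of the same kind of measurement, using the 64KB block size from the results above (demo-sized file here; a real run needs a file larger than RAM, written to the actual volume under test):

```python
import os
import tempfile
import time

BLOCK = bytes(64 * 1024)        # 64 KB writes, matching the block size above
BLOCKS = 1024                   # 64 MB total -- small, demo-sized

fd, path = tempfile.mkstemp()
start = time.perf_counter()
with os.fdopen(fd, "wb") as f:
    for _ in range(BLOCKS):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())        # force data to disk before stopping the clock
elapsed = time.perf_counter() - start
size_mb = len(BLOCK) * BLOCKS / 1e6
print(f"{size_mb / elapsed:.0f} MB/s sequential write")
os.remove(path)
```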
OK, spent a day on this. The problem is all my file servers are direct-attached storage (MD1000s), and Storage Spaces / clustering requires iSCSI / FC. So none of this fancy new technology can help my situation.
Also, here are my disk latencies transferring some giant SQL backup files over...
I'm aware Storage Spaces doesn't do replication, someone else mentioned that I think earlier.
I actually am currently using NTFS/DFS for our company file share between FS01/02; however, I hate DFS and was looking for an easier solution. It's implemented exactly how you've laid it out.
I...
I'm not looking to buy SSDs, would probably do a mirror between the two servers.
Do you have to actually setup a replication? I figured it would do it automatically.
I've done testing and I know the write performance sucks; with NTFS RAID60 I was doing 1200MB/s read and 550MB/s write. With...
I happen to have a 2012 R2 server attached to a couple of MD1000 arrays. I dumped 24 x 4TB drives in there with the intention of creating a large file server for my work (we already have a few of these running RAID60 / NTFS volumes).
I figured now that we are on R2 I would try Storage Spaces...
The Perc 6 is a RAID card, not an HBA; I intend on using FreeNAS/ZFS (software RAID). The Dell equivalent is a SAS 6/E, which may or may not see 4TB drives.
I've heard people recommend IBM ServeRAID; I'll look into them as well.