I built a mini-ITX setup for my girlfriend and used a Gigabyte 1660 Ti, this one specifically: https://www.newegg.com/gigabyte-geforce-gtx-1660-ti-gv-n166tixoc-6gd/p/N82E16814932132?Item=N82E16814932132 I know it's out of stock, but this might be the form factor that works for you.
You'd be hard...
Totally missed all that. Didn't get that from your posts. Now I get the direction and your thought process. I guess it depends on people's priorities and budget. Depends on the hobby and the goal. Nothing wrong with a 10k gaming rig or a 10k home lab. Neither is right or wrong. Both are...
What exactly are you or were you doing with your home lab? You've never really elaborated on what you're doing, much less your goals with it. Your posts are vague. If you want to see what I'm doing, click the link in my sig.. you can see my home hobby running my production code on...
I think you need to reset your expectations on what you can do with the budgets you have in mind, and you also need to reset your expectations on how folks acquire gear from their company. $300 for 200TB of storage is fantasy. I'm on org number 4 in 20 years. The only "free" thing I ever got...
You can't say you're planning on building a multinode cluster but then can't elaborate on what exactly that means and can't articulate the use case. That's why I asked what exactly that means to you. Hadoop, Elasticsearch, or maybe some Kafka? Or even VM / container orchestration? Maybe...
It's 2020.. there is absolutely nothing sexy about running your own server rack in your home these days. I mean.. if you got a bunch of cheap used gear that you're proud to show off to your non-technical friends to give them the impression you're your own version of Neo? Cool and all.. But...
1. is doable.
2. While "doable", it's using a bunch of fleabay crap that no longer has support.
What exactly do you mean by multinode clusters? What does that mean to you, and what's the use case?
I do a lot of ffmpeg work on the side. I see differences between CPU and hardware-encoded transcoding. For me, it's like anti-aliasing isn't working the same or something. Hard to explain. The only thing I can think of (and I haven't messed with it much) is that some of the hardware-specific tunings are...
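If you want to compare the two paths yourself, something like this is a reasonable starting point — just a sketch, assuming an NVIDIA card with NVENC support; `input.mp4`, the presets, and the quality targets are all placeholders to tune for your own material:

```shell
# Software encode (libx264): quality controlled by CRF, slower but more tunable.
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 20 -c:a copy out_sw.mp4

# Hardware encode (NVENC): quality controlled by CQ, fast but fewer psy tunings.
ffmpeg -i input.mp4 -c:v h264_nvenc -preset slow -cq 20 -c:a copy out_hw.mp4
```

Note that CRF 20 and CQ 20 are not directly comparable numbers — encode a short clip with each and compare by eye (or with a metric like SSIM) before settling on values.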
No, no, no digs at all. We're both saying a lot of the same things, just worded differently. Agreed, software RAID isn't always better. Both have a place for the use case. The biggest driver usually comes down to budget.
Totally fair.
Totally, totally fair. Professionally, I would laugh ZFS out of the building. I have zero love for it, due to the sector I'm in. It just wouldn't cut it for the criticality of my data and workloads. At home? I have 3 monster machines kicking around for my development environment. I do...
Pretty obvious.. You're trying to start a machine the executable knows nothing about. You didn't update your path statements like I suggested. You're still starting the executable with YOUR environment variables set. You need to set the same ones for the root user. Reread what I wrote about...
Lots of good notes here. Really, at the end of the day.. no single RAID type is the best RAID type. You have to consider your workload, the hardware you have access to, manageability, and finally.. budget. Arguing over which RAID type is better ranks up there with Windows vs. Apple debates. Both...
That's personal opinion, based on your experiences. Not wrong or bad or anything, but that is not my experience. I've had nothing but great luck with hardware RAID on my home stuff. I'm supporting over 8PB of storage during the day, and absolutely none of those platforms use ZFS.
Damn, good point. I didn't even think about the SD card failing. If your project is mostly memory- and CPU-bound, that might be a good fit for your RPis. Maybe use a NAS to drag the data to those RPis? Maybe even remote boot? Outside of that? Physical may be your direction. This sounds crazy...
I back up PBs of space nightly.. it's called dedupe and compression; I use Avamar, Data Domain, and Isilon. At my previous gig, I backed up 65TB+ Oracle DBs on the regular over to Isilon or a monster tape robot.
Edit.. one more bit.. storage expanded from RAID to pools. You build RAID, then build pools...
I personally manage an 8PB FC environment right now that does over a million IOPS. No RAID type is better than the others, and no RAID type is more complicated than the others when you actually know how to set them up and configure them. It depends on the workload, drive type, storage platform and...
Personal preference. I would have the motherboard controller do it for me because it's built for it, but your limiting factor will be the controller running it. Most (but not all) consumer motherboards don't have the controllers to take full advantage of the SSDs you're rocking. If you are...
Totally right. However, from the OP's post, it looks like this will be a mixed-use environment. A bunch of streaming, and a bunch of transcoding. Most likely used for other "stuff". Depending on the workload and user connection count, that could easily murder one giant RAID 6 volume. Especially if the workload is...
RAID 6 is not efficient at all. What most people miss with RAID 6 is that you take a heavy write penalty. You may get the redundancy, but you badly cut into the performance of the card running your RAID 6 array. If you are transcoding, you must be using ffmpeg. If you are serious about...
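The back-of-envelope math behind that penalty, with made-up illustrative numbers: a small random write on RAID 6 turns into six backend I/Os (read the data block and both parity blocks, then write all three back), versus two for a plain mirror.

```shell
#!/bin/sh
# Rough write-penalty arithmetic (illustrative, not a benchmark).
raw_iops=12000        # hypothetical aggregate IOPS of the backing disks
raid6_penalty=6       # read data + P + Q, then write data + P + Q
raid10_penalty=2      # every write lands on two disks
echo "RAID 6  random-write IOPS: $((raw_iops / raid6_penalty))"
echo "RAID 10 random-write IOPS: $((raw_iops / raid10_penalty))"
```

Sequential streaming hides most of this (full-stripe writes skip the read-modify-write), which is why the workload mix matters so much.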
There is no more data. It's gone. =( You can't recover from that.
If you are serious about recovering anything, you'll stop touching it and reach out to a professional service to do it.
Ouch, that does not sound like fun.. but at the end of the day, you figured it out, it didn't kill you, you didn't give up, and now you have another experience to put into your toolbag of troubleshooting skills.
I would say my first real gfx card was a 9800 GTX. That thing was basically a rebranded 8800 GTX. When it kicked on playing Left 4 Dead, it sounded like a hair dryer, and it put out just as much heat. Least memorable was the GTX 550 I had.
Mostly a CentOS guy here.. Usually when I have these issues, it's because the network stack hasn't finished starting before Samba starts. If that doesn't work, I'll disable SELinux and try again. You may have to rip that band-aid off and quit using your old commands / configs knowing that they've...
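For the ordering problem specifically, on a systemd distro like CentOS 7 one low-effort fix is a drop-in that makes Samba wait for the network to actually be up — a sketch assuming the unit is named smb.service on your box:

```ini
# /etc/systemd/system/smb.service.d/wait-for-network.conf
# Delay Samba until the network is actually online, not just configured.
[Unit]
Wants=network-online.target
After=network-online.target
```

Run `systemctl daemon-reload` afterward; note that network-online.target only does anything useful if the wait-online service for your network manager (e.g. NetworkManager-wait-online) is enabled.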
A lot of really great responses in here. A lot of it comes down to personal preference, budget, and how much you want to tinker with it. There is definitely something sexy (in my mind anyway) about having a few RPis set up and doing simple low-and-slow services. Let them handle those work...
I don't have much experience with Pacemaker, but I do have a lot of (painful) experience dealing with SCSI-3 locking issues, mostly with Windows clustering. The drives themselves don't handle the locking specifically, but the storage controller / storage subsystem you are using is supposed to handle...
Are you starting your service from the CLI as root or as a non-root user? I know it sounds stupid, but there is a big difference between su and su - root. Using just su takes in the current environment profile of the user running the su command. Doing an su - means the process...
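You can see the effect without touching root at all — a quick sketch where `env -i` plays the role of the clean login environment that `su -` gives you (MY_APP_HOME is a made-up variable standing in for whatever your service expects to find):

```shell
#!/bin/sh
# Plain `su` keeps the calling user's environment; `su - root` starts a
# fresh login shell, much like `env -i` wipes the environment below.
export MY_APP_HOME=/opt/myapp   # hypothetical variable your service needs

kept=$(sh -c 'echo "${MY_APP_HOME:-unset}"')          # like plain `su`: inherited
wiped=$(env -i sh -c 'echo "${MY_APP_HOME:-unset}"')  # like `su - root`: gone

echo "plain su sees: $kept"     # -> /opt/myapp
echo "su - sees:     $wiped"    # -> unset
```

Which is exactly why a service that works fine from your shell can fall over when started as root or by an init script: the variables simply aren't there anymore.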
Actually.. not a bad idea. Pretty damn simple, and you have the ability to remove any mount point you want at any point in time. There is no redundancy or safety net, but you do get the FS layout you want. Not a hack at all. The best part: if he does acquire a larger drive to consolidate, this would...
Personal preference, and you'll get a bunch of responses. I jumped head first into it in the fall of 2016, getting a Vive and a 1080 at the same time. I was all about it for about 3 months; after about 6, it basically sat on the shelf. I took it out when friends came over. What killed it for me at...
That's a hell of an upgrade. I picked up my 1080 when I got my Vive, and it did VR things really well for about 2 years. From a 970 to a 1080? That's a sweet jump. You'll be very happy with that for at least another 2 or 3 years.
Damn, I'm sorry, and I apologize for my tone. I spent a little bit of time searching around looking for an alternative solution to your issue but just couldn't find anything. Unfortunately, NTFS wasn't built to smash together multiple different volumes into one common volume while retaining all the...
I went from a 970 to a 1660 Ti. It was a monster upgrade for me. My 970 was a great card and lived in a bunch of machines, and I definitely got my money's worth out of it. I feel the 1660 Ti will be in the same boat and will easily last as long as my 970 lasted me. I use it more than my 1080 and 1080 Ti.
Most storage subsystems don't do this, and no filesystem actually does this natively. Maybe some epic hacky unreliable setups. NTFS definitely does not do this, and there's no way LVM supports what the OP is asking at all. Maybe some ZFS or Hadoop something-or-other solution, but it would just be stupidly complicated...
No right or wrong answer with this one. 100% personal preference, and the gear you have jammed into your machine dictates direction. I've been rocking an SSD and a WD 4TB SSHD for a few years now. It may be great for me.. other people may want to throw it off a bridge. Outside of bragging rights...
Unpopular opinion here.. Don't do it. Not worth it today. In theory, and on paper, it sounds amazing and doable. While it is, it's a headache. You will spend more time tinkering, tuning, and tweaking it. I mean, if that is something you like doing, then totally go for it. But you are better off dual...
Your virtualization engine shouldn't give a crap about the underlying storage, and it shouldn't give a crap about the FS your guest OS is running. If it does.. you're running the wrong hypervisor. The only thing it should care about is whether it can create datastores out of it, and whether it meets your...
Thin provisioning goes right out the window with guest-level encryption, as far as the hypervisor is concerned. Suckage. The only way around it is to have a disk subsystem that does the encryption for you, so by the time it's presented to your hypervisor, the hypervisor doesn't know about it and doesn't care...
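Easy to demonstrate why: ciphertext is indistinguishable from random noise, so downstream compression, dedupe, and zero-detection get nothing to work with. A quick sketch using /dev/urandom as a stand-in for encrypted blocks:

```shell
#!/bin/sh
# Compare how well "plaintext-like" and "ciphertext-like" data squeeze down.
head -c 1048576 /dev/zero    > sparse-friendly.img   # 1 MiB of zeros
head -c 1048576 /dev/urandom > ciphertext-like.img   # 1 MiB of noise

gzip -kf sparse-friendly.img ciphertext-like.img     # -k keeps originals

# Zeros collapse to ~1 KiB; the noise comes out slightly LARGER than 1 MiB.
ls -l sparse-friendly.img.gz ciphertext-like.img.gz
```

The same thing happens to your storage array's inline compression and to thin-provisioned datastores once the guest writes encrypted blocks everywhere.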
Got mine partially set up last night. Out of the box it seems really feature-rich and polished. Out of the box, it set up btrfs without giving me the option to do RAID, as btrfs does it on its own. It does look like it's configurable, but I couldn't un-configure it while it was initializing its...