Very good job indeed Nvidia. My stock shares continue to rise and make money. I hope they don’t change a god damn thing.
You don’t want to sign the NDA? Then stop bitching. Buy the card and do the reviews on your own when they become available. No NDA required.
Cable cost is the same for me vs PSVue, so I stick with cable. The convenience of shit "just working" with no buffering/freezes/blackouts and actually being able to use a remote with a number pad still trumps the streaming services out there.
Also never have to worry about data caps.
As long as their stock continues to go up and make me money, who the fuck cares.
Get off your GPP mountain already and move on. I’ve been coming to this site since the 90s or so and it’s gotten away from news and gone pure editorial in every single post you guys link as “news”. Everything...
This is all well and good but you are still going to be told by the vendor that, because you broke the seal, they won't fix it. If you want to fight them in court to fix a $100 PSU, then I'll let you die on that mountain alone.
More site problems:
Maybe it's time to move away from Cloudflare. It is a weekly occurrence of problems with these guys.
Edit: Ha! I love my new tag!! First time posting since railing on you!
Lets not forget that this *is* an insurance company posting this video. Their goal isn't to educate you on the breakability of the phone but to convince you that you need phone insurance...
Christ what a bunch of babies. I did this regularly during the late 90's playing Q2 CTF with my clan on dial-up. It was the norm... You had to take into account lag if you wanted to play competitively then as well.
From my original post you highlighted my sentence (in red mind you) that said:
You then asked:
So, my entire explanation was based on parity RAID which is RAID 5 and RAID 6. If you didn't want a lesson on RAID 5 you should have been more specific and asked about RAID 6 instead. The RAID 10...
RAID 6 incurs a higher write penalty than RAID 5 (six I/Os per random write versus four, thanks to the second parity block). So on write intensive servers this is a large problem.
Yes, 6 is better than 5. But if you are going to have to purchase 4 drives anyway, why not move to RAID 10 and dump parity altogether? Unless you just really need the space that...
^ Why would you choose RAID 6 over RAID 10?? You need a minimum of 4 drives with RAID 6. You need a minimum of 4 drives with RAID 10. RAID 10 has better read and write speed than RAID 6. In a 4 drive array, RAID 10 gets 4x the read speed and 2x the write speed of a single drive. RAID 6 will incur twice the...
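To put rough numbers on the write-penalty argument above, here's a back-of-the-envelope sketch using the textbook penalties (RAID 10 = 2 I/Os per random write, RAID 5 = 4, RAID 6 = 6). The per-drive IOPS figure is an assumed value for a 7200 RPM spindle, not from any post here.

```python
# Blended random IOPS for a 4-drive array at a 50/50 read/write mix,
# using standard RAID write penalties. DRIVE_IOPS is an assumption.

DRIVE_IOPS = 75   # assumed random IOPS for one 7200 RPM mechanical drive
DRIVES = 4

def effective_iops(n_drives, write_penalty, read_fraction):
    """IOPS the array can deliver once the write penalty is factored in."""
    raw = n_drives * DRIVE_IOPS
    write_fraction = 1 - read_fraction
    return raw / (read_fraction + write_fraction * write_penalty)

for name, penalty in [("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)]:
    print(name, round(effective_iops(DRIVES, penalty, read_fraction=0.5)))
# → RAID 10 200, RAID 5 120, RAID 6 86
```

Same four spindles, but RAID 10 delivers well over double the blended IOPS of RAID 6 under this (assumed) workload mix.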
Just about every source nowadays says that you shouldn't be using parity RAID any longer for mechanical drives at large array sizes. There is a reason that Dell has officially taken the stance that RAID 5 should not be used for any business critical information...
The OS needs to support it for sure. For example, VMware vSphere does not at all, which is absurd in this day. If using a controller, then it should support them as well or you're asking for problems. But I have used 4K drives on RAID controllers that don't say they support them but also don't say you...
RAID 5 on spinning rust with a 6 TB array is almost asking for your data to be destroyed at some point. Resilvering an array that size will almost certainly encounter a URE, and then your data is hosed.
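The "almost certainly encounter a URE" claim is easy to sanity-check. A sketch, assuming the common consumer-drive spec of one unrecoverable read error per 10^14 bits read (enterprise drives are usually rated 10^15, which changes the picture a lot):

```python
# Odds of hitting at least one URE while re-reading 6 TB during a rebuild.
# URE_RATE is the assumed consumer-drive spec, not a measured figure.

URE_RATE = 1e-14        # unrecoverable errors per bit read (assumed spec)
array_bytes = 6e12      # 6 TB that must be re-read to rebuild the array

bits_read = array_bytes * 8
p_clean_rebuild = (1 - URE_RATE) ** bits_read
print(f"chance of at least one URE: {1 - p_clean_rebuild:.0%}")
# → roughly 38%
```

Not quite "almost certain" at 6 TB with that spec, but a better-than-one-in-three chance of a failed rebuild is still a terrible bet, and it only gets worse as the array grows.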
You should not be using any parity RAID type on mechanical drives of this array...
These are most likely OEM drives. New, but OEM. So HGST will refer you to the business you purchased them from for warranty support.
But as you said, since they are so cheap it's worth stocking a few extras on hand anyway.
Bingo.
I think the issue he is having is with performance, but I feel that is a software issue and has nothing to do with NFS. It's probably that fake RAID/mdraid garbage that people try to use. If he moved to an enterprise-level RAID controller, I bet his issues would be gone. No well in any...
Oh yes, there are plenty of RAID controllers that support well in excess of 24 drives. Hell, many support 16 drives on just a single port. And you very much can resize/add/delete drives/arrays live with 'real' RAID cards without rebooting and going into their config BIOS. You just have to...
I can't help you with a SAN. I have no need for it as I find it too limiting when compared with local storage for me and my production usage. Hopefully others familiar with running a SAN can help.
But if you are going to drop this amount of coin on building out a SAN then get off...
Pretty much what Grentz said. A SAN complicates things greatly and adds cost/complexity while subtracting speed compared to local storage. If you need HA, then you could replicate to another host with Veeam/Unitrends/Nakivo OR you could use a hyperconverged solution like StarWind VSAN...
You need to be looking at XenServer with Xen Orchestra. It can do everything vSphere can do, only free.
Why a SAN? Why not consolidate everything inside that 24 bay case and run the VMs from local storage? It's a hell of a lot easier (and much cheaper) to set up. The cheapest 10GbE switch is...