I took my kid to an amusement park today (drove there), so it wasn't a bad day for me at all.
Post on difficulty in getting the systems back up
[attached image]
https://news.ycombinator.com/item?id=41003390
"Some clouds let you get a console on the VMs... and there's IPMI/Network serial/kvm for bare metal. Although you often need to configure it with your OS beforehand, a serial console is real handy for these situations."
Azure does, I've had to use it more than once.
Yeah. I've been disaster-proofing many of my clients' offices lately for insane incidents like this. It saves them money and makes less work for me, because the downtime after an incident would be minimal. I fixed all the computers at my wife's dental office today. Heard nothing but trash about their actual IT team... and then, while I was walking out the door, fixed a computer that was physically broken that the team hadn't figured out yet. It had a bad stick of RAM.
I am beyond pissed off right now; in fact, I'm furious.
WHY DID CROWDSTRIKE NOT TEST THIS UPDATE?
I'm going on hour 13 of trying to rip this sys file off a few thousand servers. Since Windows will not boot, we are having to mount a Windows ISO, boot from that, and remediate through the cmd prompt.
So far: several thousand Windows servers down. Many have lost their assigned drive letter, so I am having to reassign those manually. On some, the system drive is locked and I cannot even see the volume (rarer). Running chkdsk, sfc, etc. does not work; it shows the drive is locked. In those cases we are having to do restores. Even migrating the VMDKs to a new VM does not fix the issue.
This is an enormous problem that would have EASILY been found through testing. When I say easily, I mean easily. Over 80% of our Windows servers have BSOD'd due to the CrowdStrike sys file. How does something with this massive an impact not get caught during testing? And this is only our servers; the scope on our endpoints is massive as well, but luckily that's a desktop problem.
Lastly, if this issue did not cause Windows to BSOD and the machines would actually boot into Windows, I could automate this. I could easily script and deploy the fix. Most of our environment is VMs (~4k), so I can console in to fix those... but we do have physical servers all over the state. We are unable to iLO into some of the HPE ProLiants to resolve the issue through a console, so those will require on-site visits.
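For what it's worth, the script itself would be trivial if the OS could get far enough to run anything: the publicly circulated remediation is just deleting the faulty channel file. A rough sketch of what I'd push out, assuming the default Falcon driver path and the C-00000291* pattern from the remediation notes (the script and names are illustrative, not CrowdStrike tooling):

```python
# Rough sketch only: delete the faulty CrowdStrike channel file(s) so the
# Falcon sensor stops crash-looping the machine on boot. Assumes the default
# driver directory and the C-00000291* pattern from the public remediation notes.
import glob
import os

DRIVER_DIR = r"C:\Windows\System32\drivers\CrowdStrike"
BAD_PATTERN = "C-00000291*.sys"

def remove_bad_channel_files(driver_dir: str = DRIVER_DIR) -> list[str]:
    """Delete matching channel files and return the paths that were removed."""
    removed = []
    for path in glob.glob(os.path.join(driver_dir, BAD_PATTERN)):
        os.remove(path)
        removed.append(path)
    return removed

if __name__ == "__main__":
    for path in remove_bad_channel_files():
        print("removed", path)
```

Trivial to deploy with any endpoint management tool; the problem is that nothing can run it when the box blue-screens before it even finishes booting.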
Our team will spend tens of thousands of dollars in overtime, not to mention lost productivity. My org alone will easily lose $200k. And for what? Some ransomware or other incident? NO. Because CrowdStrike cannot even use its test environment properly and rolled out an update that literally breaks Windows. Unbelievable.
I'm sure I will calm down in a week or so once we are done fixing everything, but man, I will never trust CrowdStrike again. We literally just migrated to it in the last few months. I'm back at it at 7am and will work all weekend. Hopefully tomorrow I can strategize an easier way to do this, but so far, manual intervention on each server is needed. Varying symptoms/problems also make it complicated.
For the rest of you dealing with this- Good luck!
*end rant.
Funny, but that image is fake.
"I have to admire that the solution to zero-day threats is to create a zero-day threat."
Can't hack the system if the system is down.
"the gibson was running..."
Crowdstrike, actually.
"Costs from the global outage could top $1 billion – but who pays the bill is … probably tax payers"
https://www.cnn.com/2024/07/21/business/crowdstrike-outage-cost/index.html
Why taxpayers?
"Why taxpayers?"
The buck always seems to be passed on to the taxpayers.
"Is it safe to install and update Windows 11 on a new build?"
Yes.
"Yes."
Is the title better now?
"Is it safe to install and update Windows 11 on a new build?"
That has nothing to do with this thread. It's mis-titled.
"I hear Windows 3.11 is pretty good!"
Yup.
"I think the simplest solution would be to stop using ClowdStrike and find an alternative."
Alternatives all work the same way and all have the same access; that access is required for them to be effective against user-level malware.
"Alternatives all work the same way and all have the same access; that access is required for them to be effective against user-level malware."
Yes, but those didn't shut down half the planet.
But they could one day....
"But they could one day..."
Which is a completely meaningless thought process.
https://www.macrumors.com/2024/07/22/microsoft-blames-european-commission-for-outage/
"On Windows machines, CrowdStrike's Falcon security software is a kernel module, which gives the software full access to a PC. The kernel manages memory, processes, files, and devices, and it's basically the heart of the operating system. Much of the software on a PC is typically limited to user mode, where bad code can't cause harm, but software with kernel-mode access can cause catastrophic total machine failures, like what was encountered last week.
The Falcon software was not able to wreak similar havoc on Macs because Apple does not give software makers kernel access. In macOS Catalina, which came out in 2019, Apple deprecated kernel extensions and transitioned to system extensions that run in user space instead of at the kernel level. The change made Macs more stable and more secure, adding protection against unstable software updates like the one CrowdStrike pushed out. It is not possible for Macs to have a similar failure because of the change that Apple made."
Is it true that this type of software only has kernel access because of legislation? And would it be as effective if it ran at user level?
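As a toy illustration of the kernel-mode vs. user-mode difference the article describes (this is not Falcon's actual code, and Python is just a stand-in for any user-space program): a fault in an ordinary user-space process kills only that process, while the same class of fault in a kernel-mode driver takes the whole machine down, which is why this bad update meant BSODs rather than one crashed service.

```python
# Toy demo: a null-pointer dereference inside a user-space child process
# crashes only that process; the parent process (and the OS) keep running.
# A comparable fault in a kernel-mode driver would instead BSOD the machine.
import subprocess
import sys

# Deliberately broken "agent": read memory at address 0 via ctypes.
crashing_agent = "import ctypes; ctypes.string_at(0)"

result = subprocess.run([sys.executable, "-c", crashing_agent])
print(f"agent process died with code {result.returncode}; everything else is fine")
```

The flip side, as the earlier posts note, is visibility: a sensor confined to user space cannot watch the system as completely as a kernel driver can.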
"Which is a completely meaningless thought process."
Sure, no one thought CrowdStrike would do this either, and yet they did; twice now, actually, with Debian/Rocky Linux a few months back and now Windows.