This isn’t a gloat post. In fact, I was completely oblivious to this massive outage until I tried to check my bank balance and it wouldn’t log in.
Apparently Visa Paywave, banks, some TV networks, EFTPOS, etc. have gone down. Flights have had to be cancelled as some airlines’ systems have also gone down. Gas stations and public transport systems are inoperable, and numerous Windows systems and Microsoft services are affected too. (At least according to one of my local MSM outlets.)
Seems insane to me that one company’s messed up update could cause so much global disruption and so many systems gone down :/ This is exactly why centralisation of services and large corporations gobbling up smaller companies and becoming behemoth services is so dangerous.
Am on holiday this week - called in to help deal with this shit show :(
Don’t worry, George Kurtz (crowdstrike CEO) is unavailable today. He’s got racing to do #04 https://www.gt-world-challenge-america.com/event/95/virginia-international-raceway
i hope you get overtime!
It’s proving that POSIX architecture is necessary even if it requires additional computer literacy on the part of users and admins.
The risk of hacking a monolithic system like Windows (which is essentially what Crowdstrike does to get so deeply embedded and be so effective at endpoint protection) is that if you screw up, the whole thing comes tumbling down.
It happens on Linux too: https://access.redhat.com/solutions/7068083
deleted by creator
It was affecting RHEL 9.4 users within the last two months.
deleted by creator
This specific issue is different than the other specific issue, correct.
The point is, “this could only happen on windows” is wrong.
Agreed.
I’ve heard not all Windows versions are affected by Crowdstrike, depending on whether they were recently updated or not. It’s not clear which versions are affected. One other thing: I thought Windows had a microkernel, and Linux is monolithic.
NT is a hybrid kernel, with bits of both.
deleted by creator
Even fucked up my hospital meals. Fuck windows.
I mean yeah but they had literally nothing to do with this lol
I would be too, except Firefox just started crashing on Wayland all morning D:
New Nvidia driver?
Yes but I upgraded to 555 at least a week or two ago and it started crashing a couple of days ago, I think there’s an issue with explicit sync
explicit sync is used, but no acquire point is set
If you Google this you’ll find various bug reports
US and UK flights are grounded because of the issue, banks, media and some businesses not fully functioning. Likely we’ll see more effects as the day goes on.
Same here. I was totally busy writing software in a new language and a new framework, and had a gazillion tabs on Google and stackexchange open. I didn’t notice any network issues until I was on my way home, and the windows f-up was the one big thing in the radio news. Looks like Windows admins will have a busy weekend.
Only if they manage Crowdstrike systems, thankfully.
While I don’t totally disagree with you, this has mostly nothing to do with Windows and everything to do with a piece of corporate spyware garbage that some IT Manager decided to install. If tools like that existed for Linux, doing what they do to the OS, trust me, we would be seeing kernel panics as well.
I wouldn’t call Crowdstrike corporate spyware garbage. I work as a Red Teamer in cybersecurity, and EDRs are the bane of my existence - they are useful, and pretty good at what they do. In the last few years, I’m struggling more and more with the engagements we do, because EDRs just get in the way and catch a lot of what would have passed undetected a month ago. Staying on top of them with our tooling is getting more and more difficult, and I would call that a good thing.
I’ve recently tested a company without EDR, and boy was it a treat. Not defending Crowdstrike - to call this a major fuckup is a great understatement - but calling it “corporate spyware garbage” feels a little unfair. EDRs do make a difference, and this wasn’t an issue with the product in itself, but with the irresponsibility of their patch management.
Fair enough.
Still, this fiasco proved once again that the biggest threat to IT is sometimes on the inside. At the end of the day a bunch of people decided to buy Crowdstrike and got screwed over. Some of them actually had good reason to use a product like that; for others it was just paranoia and FOMO.
How is it not a window problem?
Why should it be? A faulty software update from a 3rd party crashes the operating system. The exact same thing could happen to Linux hosts as well, with how much access those endpoint security programs usually get.
But that patch is for windows, not Linux. Not a hypothetical, this is happening.
You’re fixated on the wrong part of the story. “Synchronized supply chain update takes out global infrastructure” isn’t a Windows problem; this happens on Linux too!
Just because a drunk driver crashes their BMW into a school doesn’t mean drunk driving is only a BMW vehicle problem.
I love how quickly everyone has forgotten about that xz attack.
I use and love Linux and have for over two decades now, but I’m not going to sit here and claim that something similar to the current Windows issue can’t happen to Linux.
xz attack
That has nothing to do with this. That was a security vulnerability, solved in record time, blame where it was due, and patched in hours.
You’re missing the point. That compromised xz made it into some production distributions. The point here is that shit can happen to Linux, too.
If BMW makes a car that has square wheels and everyone has to install round wheels so the fucking thing works, you can’t blame the company making the wheels.
It’s a Microsoft problem through and through.
Your counter to the BMW drunk driver example didn’t address drunk driving in Volvos, Toyotas, Fords… you just introduced a variable that you’re upset with. BMWs having weird wheels has nothing to do with drunk driving incidents.
Again, you’re focused on the wrong thing; this story is a warning about supply chain issues.
You’re just memeing on the hate for Windows.
Have you never seen a DNS outage, an Ansible outage, a Terraform outage, a RADIUS outage, a database schema change outage, a router firmware update outage?
Again, you’re talking about something I am not. I am talking about THIS problem, right here, that is categorically a windows problem, in that it’s not on the linux kernel stack, or mac. How is this NOT a windows problem??
It is in the sense that Windows admins are the ones that like to buy this kind of shit and use it. It’s not in the sense that Windows was broken somehow.
The fault seems to be 90/10 CS, MS.
MS allegedly pushed a bad update. Ok, it happens. Crowdstrike’s initial statement seems to be blaming that.
CS software csagent.sys took exception to this and royally shit the bed, disabling the entire computer. I don’t think it should EVER do that, so the weight of blame must lie with them.
The really problematic part is, of course, the need to manually remediate these machines. I’ve just spent the morning of my day off doing just that. Thanks, Crowdstrike.
EDIT: Turns out it was 100% Crowdstrike, and the update was theirs. The initial press release from CS seemed to be blaming Microsoft for an update, but that now looks to be misleading.
Hate to break it to you, but CrowdStrike falcon is used on Linux too…
And Macs, we have it on all three OSs. But only Windows was affected by this.
And if it was a kernel-level driver that failed, Linux machines would fail to boot too. The amount of people seeing this and saying “MS Bad,” (which is true, but has nothing to do with this) instead of “how does an 83 billion dollar IT security firm push an update this fucked” is hilarious
You’re asking the wrong question: why does a security nightmare need a 90 billion dollar company to unfuck it?
What’s your solution to cyberattacks?
Linux in the hands of professionals. There’s a reason IIS isn’t used anymore.
That doesn’t solve anything. Linux is also subject to cyberattacks.
Falcon uses eBPF on Linux nowadays. It’s still an irritating piece of software, but it won’t make your boxen fail to boot.
edit: well, this is a bad take. I should avoid commenting on shit when I’m sleep deprived and filled with meeting dread.
It was panicking RHEL 9.4 boxes a month ago.
Were you using the kernel module? We’re using Flatcar which doesn’t support their .ko, and we haven’t been getting panics on any of our machines (of which there are many).
Nah it was specifically related to their usage of BPF with the Red Hat kernel, since fixed by Red Hat. Symptom was, you update your system and then it panics. Still usable if you selected a previous kernel at boot though.
deleted by creator
Hate to break it to you, but most IT Managers don’t care about crowdstrike: they’re forced to choose some kind of EDR to complete audits. But yes things like crowdstrike, huntress, sentinelone, even Microsoft Defender all run on Linux too.
Yeah, you’re right.
I’ve just spent the past 6 hours booting into safe mode and deleting crowd strike files on servers.
Feel you there. 4 hours here. All of them cloud instances where getting access to the actual console isn’t as easy as it should be, and trying to hit F8 to get the menu to get into safe mode can take a very long time.
Ha! Yes. Same issue. Clicking Reset in vSphere and then quickly switching tabs to hold down F8 has been a ball ache to say the least!
Just go into settings and add a boot delay, then set it back when you’re done.
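If you’d rather not click through the settings UI for every VM, the same thing can be done (on VMware at least) with a one-line edit to the VM’s .vmx file — `bios.bootDelay` is VMware’s standard option for this, and the value is in milliseconds:

```
bios.bootDelay = "10000"
```

Set it back to 0 (or delete the line) once you’re done remediating.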
What I usually do is set next boot to BIOS so I have time to get into the console and do whatever.
Also instead of using a browser, I prefer to connect vmware Workstation to vCenter so all the consoles insta open in their own tabs in the workspace.
Can’t you automate it?
Sadly not. Windows doesn’t boot. You can boot it into safe mode with networking, at which point maybe with Ansible we could log in to delete the file, but since it’s still manual work to get Windows into safe mode there’s not much point.
It is theoretically automatable, but on bare metal it requires having hardware that’s not normally just sitting in every data centre, so it would still require someone to go and plug something into each machine.
On VMs it’s more feasible, but on those VMs most people are probably just mounting the disk images and deleting the bad file to begin with.
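For the VM case, a rough sketch of that offline fix from a Linux host — the disk and mount paths here are hypothetical, `guestmount`/`guestunmount` come from libguestfs, and the `C-00000291*.sys` glob is the channel-file pattern from Crowdstrike’s published remediation guidance:

```shell
# Hypothetical paths: adjust DISK to your VM's actual disk image.
MNT=/mnt/winvm
DISK=/var/lib/libvirt/images/winvm.qcow2
mkdir -p "$MNT"
guestmount -a "$DISK" -i "$MNT"   # mount the Windows disk offline (libguestfs)
rm -f "$MNT"/Windows/System32/drivers/CrowdStrike/C-00000291*.sys
guestunmount "$MNT"
```

The VM never has to limp into safe mode at all, which is the whole appeal.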
I guess it depends on numbers too. We had 200 to work on. If you’re talking hundreds more, then looking at automation would be a better solution. In our scenario it was just easier to throw engineers at it. I honestly thought at first this was my weekend gone, but we got through them easily in the end.
The real problem with VM setups is that the host system might have crashed too
Since it has to happen in windows safe mode it seems to be very hard to automate the process. I haven’t seen a solution yet.
It’s also reported in Danish news now: https://www.dr.dk/nyheder/udland/store-it-problemer-flere-steder-i-verden
Dutch media are reporting the same thing: https://nos.nl/l/2529468 (liveblog) https://nos.nl/l/2529464 (Normal article)
I just saw it on the Swedish national broadcaster’s website:
https://www.svt.se/nyheter/snabbkollen/it-storningar-varlden-over-e1l936
The annoying aspect, from somebody with decades of IT experience, is this: what should happen is that Crowdstrike gets sued into oblivion, and the people responsible for buying that shit have an epiphany and properly look at how they are doing their infra.
But what will happen is that they’ll just buy a new Crowdstrike product that promises to mitigate the fallout of them fucking up again.
decades of IT experience
Do any changes - especially upgrades - on local test environments before applying them in production?
The scary bit is what most in the industry already know: critical systems are held together with duct tape and maintained by juniors ’cos they’re the cheapest Big Money can find. And even if not, “There’s no time.” or “It’s too expensive.” are probably the most common answers a PowerPoint manager will give to a serious technical issue being raised.
The Earth will keep turning.
Unfortunately Falcon self-updates, and it will not work properly if you don’t let it.
Also add “customer has rejected the maintenance window” to your list.
Turns out it doesn’t work properly if you do let it
Well, “don’t have self-upgrading shit on your production environment” also applies.
As in “if you brought something like this, there’s a problem with you”.
Some years back I was the ‘Head’ of systems stuff at a national telco that provided the national telco infra. Part of my job was to manage the national systems upgrades. I had the stop/go decision to deploy, and indeed pushed the ‘enter’ button to do it. I was a complete PowerPoint Manager and had no clue what I was doing; it was total Accidental Empires, and I should not have been there. Luckily I got away with it for a few years. It was horrifically stressful and not the way to mitigate national risk. I feel for the CrowdStrike engineers. I wonder if the latest embargo on Russian oil sales is in any way connected?
I wonder if the latest embargo on Russian oil sales is in any way connected?
Doubt it, but it’s ironic that this happens shortly after Kaspersky gets banned.
Not OP, but that is how it used to be done. The issue is the attacks we have seen over the years, i.e. ransomware attacks etc. They have made corps feel they need to patch and update instantly to avoid attacks, so they depend on the corp they pay for the software to test the rollout.
Auto-update is a double-edged sword. Without it, attackers will take advantage of delays. With it… well, today.
I’d wager most ransomware relies on old vulnerabilities. Yes, keep your software updated but you don’t need the latest and greatest delivered right to production without any kind of test first.
Very much so. But the vulnerabilities tend not to be discovered (by developers) until an attack happens, and auto-updates are generally how the spread of attacks is limited.
Open source can help slightly, since both good and bad actors unrelated to development can see the code, so it is more common for alerts to land before attacks. But it’s far from a fix-all.
Generally, the time between discovery and fix is a worry for big corps, which is why auto-updates have been accepted with less manual intervention than was common in the past.
I would add that a lot of attacks are done after a fix has been released - ie compare the previous release with the patch and bingo - there’s the vulnerability.
But agreed, patching should happen regularly, just with a few days’ delay after the supplier releases it.
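That few-days delay can even be enforced mechanically if you mirror vendor updates yourself. A minimal sketch, with hypothetical demo paths: hold packages in a staging directory and only promote the ones that have soaked past the window without incident.

```shell
# Soak window before promoting updates to production (demo paths only).
STAGING=/tmp/updates/staging
PROD=/tmp/updates/production
SOAK_DAYS=3
mkdir -p "$STAGING" "$PROD"
# Demo files: one old enough to promote, one still fresh.
touch -d "10 days ago" "$STAGING/soaked.pkg"
touch "$STAGING/fresh.pkg"
# Promote only files that have sat in staging longer than the soak window:
find "$STAGING" -type f -mtime +"$SOAK_DAYS" -exec mv {} "$PROD"/ \;
```

In real setups the promotion step would be a cron job against your own update mirror, so clients never pull straight from the vendor on release day.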
I get the sentiment, but defense in depth is a methodology to live by in IT, and auto-updating via the Internet is not a good risk to take in general. For example, should Crowdstrike just disappear one day, your entire infrastructure shouldn’t be at enormous risk, nor should critical services.

Even if it’s your anti-virus, a virus or ransomware shouldn’t be able to easily propagate through the enterprise. If it did, it is doubtful something like Crowdstrike is going to be able to update and suddenly reverse course. If it can, then you’re just lucky the ransomware that made it through didn’t do anything in defense of itself (disconnecting from the network, blocking CIDRs like Crowdstrike’s update servers, blocking processes, whatever). And frankly, you can still update those clients anyway from your own AV update server - a product you’d be using if you weren’t allowing updates from the Internet, in order to roll them out in dev first, with phasing and/or schedules from your own infrastructure.
Crowdstrike is just another lesson in that.
Me too. Additionally, I use guix so if a system update ever broke my machine I can just rollback to a prior system version (either via the command line or grub menu).
That’s assuming grub doesn’t get broken in the update…
True, then I’d be screwed. But, because my system config is declared in a single file (plus a file for channels) i could re-install my system and be back in business relatively quickly. There’s also guix home but I haven’t had a chance to try that.
I would definitely recommend using guix home, because having a separate config for your more user-facing stuff is so convenient (plus no need for root access to install a package declaratively). (Side note: take this with a grain of salt, because I don’t use GNU Guix, I use NixOS.)
Immutable systems sound like something desperately needed, tbh. It’s just such an obvious solution and I’m surprised that it’s been invented so late
It really seems like the future or some variation of it.
This is exactly why centralisation of services and large corporations gobbling up smaller companies and becoming behemoth services is so dangerous.
It’s true, but the other side of the same coin is that with too much solo implementation you lose the benefits of economies of scale.
But indeed, the world seems like a village today.
you lose benefits of economy of scale.
I think you mean - the shareholders enjoy the profits of scale.
When a company scales up, prices are rarely reduced. Users do get increased community support through common experiences especially when official channels are congested through events like today, but that’s about the only benefit the consumer sees.
For reference, this was the article I first read about this on: https://www.nzherald.co.nz/nz/bank-problems-reports-bnz-asb-kiwibank-anz-visa-paywave-services-down/R2EY42QKQBALXNF33G5PA6U3TQ/
What?! No, it must be Kaspersky!
/s
deleted by creator
It’s not a Microsoft problem
It’s a world-depending-on-a-few-large-companies problem