
Crowdstrike Outage a/k/a Everything's Fucked


RPM


Posted (edited)
14 minutes ago, Blotto said:

I have not heard this. Source? 

 

My interpretation of the following Wired article. (Edit: also per Ed Zitron, quoted below.)

Quote

 

The problem here is systemic — that there is a company that the majority of people affected by this outage had no idea existed until today that Microsoft trusted to the extent that they were able to push an update that broke the back of a huge chunk of the world's digital infrastructure. 

Microsoft, as a company, instead of building the kind of rigorous security protocols that would, say, rigorously test something that connects to what seems to be a huge proportion of Windows computers. 

 

If it was a kernel update, Microsoft is supposed to test it. If it wasn't a kernel update, Microsoft still should have tested it, but opts not to as a shortcut, even though non-kernel updates have caused widespread BSOD crashes in the past. Microsoft, of course, blames CrowdStrike. But I have a hard time believing Microsoft's claim that its own update to the Azure cloud platform, which crashed systems Thursday night, was unrelated to CrowdStrike's.

How One Bad CrowdStrike Update Crashed the World’s Computers | WIRED

Microsoft requires developers to get its approval for kernel driver updates, which entails the company’s own careful inspection process. But Microsoft wouldn’t necessarily require any such approval for a configuration file. A Microsoft spokesperson told WIRED that the “CrowdStrike update was responsible for bringing down a number of IT systems globally,” and added that “Microsoft does not have oversight into updates that CrowdStrike makes in its systems.”

...

The ability of one update to trigger such massive disruption still puzzles Raiu. According to Gartner, a market research firm, CrowdStrike accounts for 14 percent of the security software market by revenue, meaning its software is on a wide array of systems. Raiu suggests that the Falcon update must have triggered crashes in other parts of web infrastructure, which could have multiplied the disaster. “CrowdStrike is big, but it can’t be this big,” Raiu says. “Airports, critical infrastructure, hospitals. It cannot be just CrowdStrike everywhere. I suspect we’re seeing a combination of factors, a cascading effect, a chain reaction.”

...

“People may now demand changes in this operating model,” says Jake Williams, vice president of research and development at the cybersecurity consultancy Hunter Strategy. “For better or worse, CrowdStrike has just shown why pushing updates without IT intervention is unsustainable.”

Edited by Chopper

Just now, Rimbo said:

dafuq?

See, everything you need to learn Linux is free. You just go read the fucking manual.

And in my career for the past 15 years or so, every dev has been working on a Mac, which is ALSO a variant of UNIX, so they already know almost all the shit they need to.

The only reason any engineer, data scientist, or what-have-you under the age of 40 even bothered with Windows is so they can play games on Steam.

Office products are also significantly better and more optimized for Windows. 


3 hours ago, immamac said:

I think this highlights the problem with an undereducated population relying on technology they have no fucking idea about. It's why people think AI is magic or real. 

This isn't a dig on you, but there needs to be basic computer literacy if we are going to be this dependent on computers as a society. 

How much work can you do on your vehicles?  There’s a difference between changing oil and overhauling the engine. 

  • Hook 'Em 1

31 minutes ago, immamac said:

Office products are also significantly better and more optimized for Windows. 

Eh, Office for Mac is fine, and Office365 is all in the cloud now, anyhow. I don't make my accounting staff run Linux. But this guy's talking about Airflow. Which means somewhere, out there, he's running Windows as a database server. I can't think of a time in my life, much less my career, when that was a good idea.

 

7 minutes ago, Pato del Muerto said:

How much work can you do on your vehicles?  There’s a difference between changing oil and overhauling the engine. 

Far less now that I own a hybrid, but I still understand the basics. I know to check fluid levels every time I fill up, check the tires, etc.


Posted (edited)
42 minutes ago, Rimbo said:

dafuq?

See, everything you need to learn Linux is free. You just go read the fucking manual.

And in my career for the past 15 years or so, every dev has been working on a Mac, which is ALSO a variant of UNIX, so they already know almost all the shit they need to.

The only reason any engineer, data scientist, or what-have-you under the age of 40 even bothered with Windows is so they can play games on Steam.

For many years, Nvidia's CUDA tools were far better on Windows than they were on Linux. Things have largely leveled out, but only relatively recently, so you'll definitely have plenty of people who used Windows who worked in CC. Scientists working in big-data visualization would definitely have worked with Windows within the last 10 or so years.

Edited by WelfareBuysMyWeed

25 minutes ago, Pato del Muerto said:

How much work can you do on your vehicles?  There’s a difference between changing oil and overhauling the engine. 

Lmao. Comparing deleting a file in a specific directory in Safe Mode to overhauling an engine is hilarious.

This is the equivalent of the car being locked (BitLocker) or unlocked and needing a jump. It's actually that simple: car won't start, battery's dead, need to jump it. Imagine everyone's car that used a corporate card to pay for gas just had its battery drain the next time they filled it up. That's what happened. 

There are all kinds of ways to jump a car: with a battery charger/jumper, with another battery, with another car, etc. 

The complexity here is that everyone could jump their own car, but instead they all called wreckers or the dealership because they can't figure that out. 
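For reference, the published workaround really is just "boot into Safe Mode or the recovery environment and delete one file." Here it is scripted out purely for illustration (Python instead of what you'd actually type by hand; the directory and the C-00000291* filename pattern are from CrowdStrike's public guidance, and digging up the BitLocker recovery key is on you):

# Rough sketch of the manual CrowdStrike fix, for illustration only. Assumes
# you can get to Safe Mode / a recovery prompt and, if the drive is
# BitLocker-encrypted, that you have the recovery key.
from pathlib import Path

CS_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

def delete_bad_channel_files(dry_run=True):
    """Find (and optionally delete) the C-00000291* channel files."""
    if not CS_DIR.exists():
        print(f"{CS_DIR} not found; nothing to do.")
        return []
    bad_files = sorted(CS_DIR.glob("C-00000291*.sys"))
    for f in bad_files:
        print(("Would delete: " if dry_run else "Deleting: ") + str(f))
        if not dry_run:
            f.unlink()
    return bad_files

if __name__ == "__main__":
    delete_bad_channel_files(dry_run=True)  # flip to False to actually delete

Reboot clean and you're done. That's the whole "engine overhaul."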

  • Like 1

4 minutes ago, immamac said:

Lmao. Comparing deleting a file in a specific directory in Safe Mode to overhauling an engine is hilarious.

This is the equivalent of the car being locked (BitLocker) or unlocked and needing a jump. It's actually that simple: car won't start, battery's dead, need to jump it. Imagine everyone's car that used a corporate card to pay for gas just had its battery drain the next time they filled it up. That's what happened. 

There are all kinds of ways to jump a car: with a battery charger/jumper, with another battery, with another car, etc. 

The complexity here is that everyone could jump their own car, but instead they all called wreckers or the dealership because they can't figure that out. 

No, the complexity is that people aren't allowed to pop the hood to even poke around in the engine bay, because their employer doesn't trust them not to drain the oil and drive away.  Makes it tough to jump the battery if you're locked out of getting to the battery. 

  • Hook 'Em 6
  • Like 1

Just now, Pato del Muerto said:

No, the complexity is that people aren't allowed to pop the hood to even poke around in the engine bay, because their employer doesn't trust them not to drain the oil and drive away.  Makes it tough to jump the battery if you're locked out of getting to the battery. 

Ok cool whatever I don't give a shit enough to argue it. 


Posted (edited)
1 hour ago, Chopper said:

 

My interpretation of the following Wired article. (Edit: also per Ed Zitron.)

If it was a kernel update, Microsoft is supposed to test it. If it wasn't a kernel update, Microsoft still should have tested it, but opts not to as a shortcut, even though non-kernel updates have caused widespread BSOD crashes in the past. Microsoft, of course, blames CrowdStrike. But I have a hard time believing Microsoft's claim that its own update to the Azure cloud platform, which crashed systems Thursday night, was unrelated to CrowdStrike's.

How One Bad CrowdStrike Update Crashed the World’s Computers | WIRED

Microsoft requires developers to get its approval for kernel driver updates, which entails the company’s own careful inspection process. But Microsoft wouldn’t necessarily require any such approval for a configuration file. A Microsoft spokesperson told WIRED that the “CrowdStrike update was responsible for bringing down a number of IT systems globally,” and added that “Microsoft does not have oversight into updates that CrowdStrike makes in its systems.”

...

The ability of one update to trigger such massive disruption still puzzles Raiu. According to Gartner, a market research firm, CrowdStrike accounts for 14 percent of the security software market by revenue, meaning its software is on a wide array of systems. Raiu suggests that the Falcon update must have triggered crashes in other parts of web infrastructure, which could have multiplied the disaster. “CrowdStrike is big, but it can’t be this big,” Raiu says. “Airports, critical infrastructure, hospitals. It cannot be just CrowdStrike everywhere. I suspect we’re seeing a combination of factors, a cascading effect, a chain reaction.”

...

“People may now demand changes in this operating model,” says Jake Williams, vice president of research and development at the cybersecurity consultancy Hunter Strategy. “For better or worse, CrowdStrike has just shown why pushing updates without IT intervention is unsustainable.”

On the other hand, I wouldn't trust Wired to get anything technical right ever. 

Edited by Dahobbs
  • Hook 'Em 2

1 hour ago, Chopper said:

 

My interpretation of the following Wired article. (Edit: also per Ed Zitron.)

If it was a kernel update, Microsoft is supposed to test it. If it wasn't a kernel update, Microsoft still should have tested it, but opts not to as a shortcut, even though non-kernel updates have caused widespread BSOD crashes in the past. Microsoft, of course, blames CrowdStrike. But I have a hard time believing Microsoft's claim that its own update to the Azure cloud platform, which crashed systems Thursday night, was unrelated to CrowdStrike's.

How One Bad CrowdStrike Update Crashed the World’s Computers | WIRED

Microsoft requires developers to get its approval for kernel driver updates, which entails the company’s own careful inspection process. But Microsoft wouldn’t necessarily require any such approval for a configuration file. A Microsoft spokesperson told WIRED that the “CrowdStrike update was responsible for bringing down a number of IT systems globally,” and added that “Microsoft does not have oversight into updates that CrowdStrike makes in its systems.”

...

The ability of one update to trigger such massive disruption still puzzles Raiu. According to Gartner, a market research firm, CrowdStrike accounts for 14 percent of the security software market by revenue, meaning its software is on a wide array of systems. Raiu suggests that the Falcon update must have triggered crashes in other parts of web infrastructure, which could have multiplied the disaster. “CrowdStrike is big, but it can’t be this big,” Raiu says. “Airports, critical infrastructure, hospitals. It cannot be just CrowdStrike everywhere. I suspect we’re seeing a combination of factors, a cascading effect, a chain reaction.”

...

“People may now demand changes in this operating model,” says Jake Williams, vice president of research and development at the cybersecurity consultancy Hunter Strategy. “For better or worse, CrowdStrike has just shown why pushing updates without IT intervention is unsustainable.”

There was no kernel update, nor any update from Microsoft, related to the CrowdStrike outage. Falcon runs in kernel space, and CrowdStrike frequently pushes updates via its channel files, multiple times a day.

This isn't even unique to Windows; the same risk exists on Linux. While Falcon can run in user space, the CrowdStrike modules don't necessarily run in eBPF mode, so Falcon defaulting to kernel mode on Linux is common.
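If you want to see which way a given Linux box ended up, a quick-and-dirty check looks something like this; "falcon" as a match string is a guess on my part, not an official module or program name:

# Rough check of how a sensor loaded on Linux: as a kernel module (visible in
# lsmod) or as eBPF programs (visible to bpftool, which usually needs root).
import shutil
import subprocess

def grep_output(cmd, needle):
    """Run cmd and return output lines containing needle (case-insensitive)."""
    out = subprocess.run(cmd, capture_output=True, text=True, check=False).stdout
    return [line for line in out.splitlines() if needle.lower() in line.lower()]

if __name__ == "__main__":
    print("kernel modules:", grep_output(["lsmod"], "falcon") or "none matched")
    if shutil.which("bpftool"):
        print("eBPF programs:",
              grep_output(["bpftool", "prog", "show"], "falcon") or "none matched")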

  • Hook 'Em 3

12 minutes ago, Dahobbs said:

On the other hand, I wouldn't trust Wired to get anything technical right ever. 

That article was written by someone who needed somebody else to explain it to them, and the writer clearly didn't understand the details.

  • Hook 'Em 3

1 hour ago, Rimbo said:

Which means somewhere, out there, he's running Windows as a database server. I can't think of a time in my life, much less my career, when that was a good idea.

You must not be an MS sales engineer, then.

  • Haha 1

6 hours ago, immamac said:

I think this highlights the problem with an undereducated population relying on technology they have no fucking idea about. It's why people think AI is magic or real. 

This isn't a dig on you, but there needs to be basic computer literacy if we are going to be this dependent on computers as a society. 

Ughhhh, it was a joke.  I know AI is magic.

  • Hook 'Em 1

39 minutes ago, Armybrat said:

 

The best part about this channel is how technically accurate he always is, with the proper caveats for oversimplification, while also being hilarious and keeping it accessible to the layman. 


Speaking of AI: about six years ago, I participated in a proof-of-concept project using AI in DFIR exercises, where the AI acted as an L1/L2 SOC analyst. I remember being blown away by how quickly it was able to assess anomalous behavior and generate relevant security incidents with summarized write-ups on each one.

So when CrowdStrike released their version of an AI SOC analyst in Falcon, I was curious to see how far the technology had come and what CrowdStrike's AI offered.

This sums up my experience with CrowdStrike AI.

"Charlotte, show me all hosts with vulnerable versions of SSH that also have public IPs and are listening on TCP 22"

CrowdStrike's AI, Charlotte, responded with:

[reaction GIF]

 

  • Like 1
  • Haha 4

6 hours ago, Pato del Muerto said:

No, the complexity is that people aren't allowed to pop the hood to even poke around in the engine bay, because their employer doesn't trust them not to drain the oil and drive away.  Makes it tough to jump the battery if you're locked out of getting to the battery. 

 

6 hours ago, immamac said:

Ok cool whatever I don't give a shit enough to argue it. 

 

6 hours ago, Pato del Muerto said:

I accept your concession. 

maybe yall just need to reboot this discussion? is the shit even plugged in? 


Lmao. Comparing deleting a file in a specific directory in Safe Mode to overhauling an engine is hilarious.
This is the equivalent of the car being locked (BitLocker) or unlocked and needing a jump. It's actually that simple: car won't start, battery's dead, need to jump it. Imagine everyone's car that used a corporate card to pay for gas just had its battery drain the next time they filled it up. That's what happened. 
There are all kinds of ways to jump a car: with a battery charger/jumper, with another battery, with another car, etc. 
The complexity here is that everyone could jump their own car, but instead they all called wreckers or the dealership because they can't figure that out. 

This was our experience trying to fix this shitshow. Took individual action by our IT crew on every. Single. Machine. How about an automatic rollback to the last successful boot up as a default action?
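Something like this is what I'm picturing at the endpoint level; purely a back-of-the-napkin sketch, not anything Windows or CrowdStrike actually ships, and every path, name, and threshold here is invented:

# Hypothetical sketch: count consecutive failed boots and quarantine the most
# recently installed content update if the machine keeps crash-looping.
# None of these paths or names are real products; illustration only.
import json
import shutil
from pathlib import Path

STATE = Path(r"C:\ProgramData\ExampleAgent\boot_state.json")  # made up
UPDATES = Path(r"C:\ProgramData\ExampleAgent\content")        # made up
QUARANTINE = UPDATES / "quarantine"
MAX_FAILED_BOOTS = 3

def on_boot_start():
    """Record a boot attempt; treat it as failed until on_boot_healthy() runs."""
    state = json.loads(STATE.read_text()) if STATE.exists() else {"failed": 0}
    state["failed"] += 1
    STATE.parent.mkdir(parents=True, exist_ok=True)
    STATE.write_text(json.dumps(state))
    if state["failed"] >= MAX_FAILED_BOOTS:
        rollback_latest_update()

def on_boot_healthy():
    """Called once the system stays up long enough; reset the failure counter."""
    STATE.write_text(json.dumps({"failed": 0}))

def rollback_latest_update():
    """Move the newest content file out of the load path instead of loading it."""
    QUARANTINE.mkdir(parents=True, exist_ok=True)
    files = sorted(UPDATES.glob("*.bin"), key=lambda p: p.stat().st_mtime)
    if files:
        shutil.move(str(files[-1]), str(QUARANTINE / files[-1].name))

Crash three times, stop loading the thing you just got pushed. That's all I'm asking.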

27 minutes ago, Godzillatron said:


This was our experience trying to fix this shitshow. Took individual action by our IT crew on every. Single. Machine. How about an automatic rollback to the last successful boot up as a default action?

[Laughs in ITsec]


As with most anything (especially something that requires a little bit of training, knowledge, and patience), shit’s really easy if you know how to do it.  

An advantage of the compute evolution and revolution over the past 40 years is that it has hidden much of the underlying detail from the end user.  And it has made those end users more innately trusting.

The tire-change analogy applies for those of us who understand the basics and are comfortable using terminal mode and moving files around (shit - tty- or DOS-only interfaces were not THAT long ago).  For others, they do not want to further fuck up what is already fucked up.

Actually, the tire-change analogy may fall short.  The process for changing a tire has not materially changed in 50+ years.  The process to un-fuck a computer changes with each fuckery.

  • Hook 'Em 1

18 minutes ago, cactusflinthead said:

Seen this a couple of times. [image]

My bank is doing "updates" until tomorrow morning.

Yeah, I bet you are.

Narrator: They all dress like this.

  • Hook 'Em 3
  • Haha 1

Posted (edited)
17 hours ago, Pato del Muerto said:

No, the complexity is that people aren't allowed to pop the hood to even poke around in the engine bay, because their employer doesn't trust them not to drain the oil and drive away.  Makes it tough to jump the battery if you're locked out of getting to the battery. 

I can't even update Teams... 

Edited by JMFP
Premature epostification :(

I hope it continues to be part of the root cause analysis that the CS QA teams were fired just a little bit before they pushed out a config file filled with zeros. A fuck-up like that takes time to pull off; you don't just wake up with zero internal governance or quality controls and let a null bomb out the door to take airlines and hospitals and 911 offline. 

It's such a pure expression of shareholder capitalism to chase privatized profit while dangling half a planet's worth of social risk just to save some money on QA costs. Absolutely incredible work, everyone. You'll get 'em next quarter.
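Even the dumbest possible pre-ship gate catches a null bomb. Something on the order of this, as a sketch only; the real channel file format and whatever CrowdStrike validates internally aren't public, and the size floor is arbitrary:

# Toy pre-release sanity gate: refuse to ship a content file that is empty,
# all null bytes, or suspiciously tiny. Purely illustrative.
import sys
from pathlib import Path

MIN_SIZE_BYTES = 1024  # arbitrary floor for this sketch

def sanity_check(path):
    """Return a list of reasons the file should not ship (empty list = pass)."""
    problems = []
    data = Path(path).read_bytes()
    if len(data) == 0:
        problems.append("file is empty")
    elif len(data) < MIN_SIZE_BYTES:
        problems.append(f"file is only {len(data)} bytes")
    if data and data.count(0) == len(data):
        problems.append("file is nothing but null bytes")
    return problems

if __name__ == "__main__":
    issues = sanity_check(sys.argv[1])
    if issues:
        print("DO NOT SHIP:", "; ".join(issues))
        sys.exit(1)
    print("passed the bare-minimum checks")

And that's before you even get to loading it on a test box and seeing whether the test box stays up.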

  • Hook 'Em 3
  • Like 3

On 7/19/2024 at 10:22 AM, utee94 said:

I had a personal system that got bricked by BitLocker when MS pushed an update to it, and it triggered BitLocker recovery.  Only problem was, I had no idea what BitLocker was, and had never actually activated it, so no key had ever been generated. 

Happened to my father-in-law, too.  He had never enabled BitLocker in the first place because he didn't know what it was, and he couldn't get back in.


As with most anything (especially something that requires a little bit of training, knowledge, and patience), shit’s really easy if you know how to do it.  
An advantage of the compute evolution and revolution over the past 40 years is that it has hidden much of the underlying detail from the end user.  And it has made those end users more innately trusting.
The tire-change analogy applies for those of us who understand the basics and are comfortable using terminal mode and moving files around (shit - tty- or DOS-only interfaces were not THAT long ago).  For others, they do not want to further fuck up what is already fucked up.
Actually, the tire-change analogy may fall short.  The process for changing a tire has not materially changed in 50+ years.  The process to un-fuck a computer changes with each fuckery.

Fair, but boot to safe mode and delete a bad OS file has been a standard Windows play for thirty years. The fact that there is no better way to fix it by now is amazing.

4 hours ago, Captainant said:

It's such a pure expression of shareholder capitalism to chase privatized profit while dangling half a planet's worth of social risk just to save some money on QA costs. Absolutely incredible work, everyone. You'll get 'em next quarter.

Catch 22 - we should get a better solution out of it for corporate cybersecurity and probably a lot of improved IT best-practices for small businesses.

Pretty sure every major IT software provider is now ensuring that this can't happen or won't happen to them. 


8 minutes ago, ztejas said:

Pretty sure every major IT software provider is now ensuring that this can't happen or won't happen to them. 

Basic fucking governance and give-a-shit would have prevented this. It's not a hard thing to prevent lol. It's more a function of not letting quality slip in order to meet cost demands, not a matter of technical ability

  • Hook 'Em 2
  • Like 1

1 minute ago, Captainant said:

It's more a function of not letting quality slip in order to meet cost demands

Right - but do you think crowdstrike is the only one doing this?

I mean this happens allll the time in app/game development. It can happen with security software/solutions, too.

Maybe it's the cynic in me wondering what other companies have dogshit QA and lack the protocols to prevent shit like this from happening - even if it might be on a smaller scale. 

  • Hook 'Em 1

I think part of it is QA, and the other part is devs are a bunch of whiny-ass bitches.  Anytime a new process gets put in place to prevent stuff like this from happening, they complain about the burden it puts on them and how their boss doesn't know shit and how they should be the ones approving the code anyway.  


1 hour ago, ztejas said:

Right - but do you think crowdstrike is the only one doing this?

I mean this happens allll the time in app/game development. It can happen with security software/solutions, too.

Maybe it's the cynic in me wondering what other companies have dogshit QA and lack the protocols to prevent shit like this from happening - even if it might be on a smaller scale. 

It does happen, and it's fairly common. Hell, a few years ago I was dealing with a competitor of CrowdStrike's when their product gradually went to complete shit. I ran into someone who used to work at that company, and they said the problems started with cutbacks, RIFs, etc., until the remaining staff was overworked and all the remaining talent bailed. Their QA was virtually non-existent due to a mass exodus. The various dev groups were filled with new hires unfamiliar with the product. Just an absolute shitshow.

  • Rage+1 1

19 hours ago, FartingMonk said:

I think part of it is QA, and the other part is devs are a bunch of whiny-ass bitches.  Anytime a new process gets put in place to prevent stuff like this from happening, they complain about the burden it puts on them and how their boss doesn't know shit and how they should be the ones approving the code anyway.  

My experience with this: 
QA are a bunch of morons who can follow a script to test the bug fix or functionality of a given piece of software, but they don't necessarily have the ability to find what's now broken after a code release.  DEV gives two shits about the side effects, and the code review process is headed off by management, who's been out of the DEV game long enough that they don't really understand to the T what's in front of them.  They're rubber stamping dozens of these code fixes in a given 30 minute review, and with the current agile approach, this shit breaks downstream due to QA not knowing a bag of dicks from a fallen stack of logs.  Someone gets the build pipeline to successfully deploy a few times out of 100 tries, and calls it a success, and pushes DEV to Prod and runs out the door for a long weekend (and puts no comments on the commit, so it's another task to find the responsible party).  
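And none of that even touches staged rollout, which is the part that would have contained this. Conceptually it's just the below; a sketch with made-up ring sizes, thresholds, and a stand-in deploy/health-check interface, not anyone's real tooling:

# Conceptual staged-rollout gate: push to a small canary ring first, watch the
# health signal, and halt before the blast radius becomes "everything."
# deploy_to() and is_healthy() are stand-ins for whatever your tooling does.
import time

def staged_rollout(hosts, deploy_to, is_healthy,
                   ring_sizes=(10, 100, 1000),
                   max_failure_rate=0.01,
                   soak_seconds=600):
    """Deploy ring by ring; stop if any ring's failure rate exceeds the limit."""
    done = 0
    for size in list(ring_sizes) + [len(hosts)]:
        ring = hosts[done:size]
        if not ring:
            continue
        for host in ring:
            deploy_to(host)
        time.sleep(soak_seconds)  # let the ring soak before judging it
        failures = sum(1 for host in ring if not is_healthy(host))
        if failures / len(ring) > max_failure_rate:
            print(f"Halting rollout: {failures}/{len(ring)} unhealthy in this ring")
            return False
        done = size
    return True

Nobody gets to skip the soak just because it's Friday and they want to go home.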

  • Hook 'Em 1

1 hour ago, PvilleStang said:

My experience with this: 
QA are a bunch of morons who can follow a script to test the bug fix or functionality of a given piece of software, but they don't necessarily have the ability to find what's now broken after a code release.  DEV gives two shits about the side effects, and the code review process is headed off by management, who's been out of the DEV game long enough that they don't really understand to the T what's in front of them.  They're rubber stamping dozens of these code fixes in a given 30 minute review, and with the current agile approach, this shit breaks downstream due to QA not knowing a bag of dicks from a fallen stack of logs.  Someone gets the build pipeline to successfully deploy a few times out of 100 tries, and calls it a success, and pushes DEV to Prod and runs out the door for a long weekend (and puts no comments on the commit, so it's another task to find the responsible party).  

Yeah.  We get told if you can't finish it before Friday, push it to Monday.  


Posted (edited)
2 hours ago, FartingMonk said:

Yeah.  We get told if you can't finish it before Friday, push it to Monday.  

Yeah, nobody I work with has change windows that run into Friday - the last planned window ends at like 6 AM Thursday morning, so they'll have two working days to fix any prod impact. 

Edited by Captainant
  • Hook 'Em 2

3 minutes ago, Hornius Emeritus said:

 

 

 

I know of multiple people who flat-out got stranded with no way to even think of booking a replacement flight on Delta (two different groups out of Boston, one out of Phoenix).  Delta seems to remain a shitshow, days after the event.  Wow.


5 minutes ago, AUS-97HORN said:

jesus... I know Crowdstrike fucked the football here, but how the fuck is Delta still that down 4 full fucking days after the resolution was named and published?

 

Seems that their IT team is just this guy....with this gear....

[image]


12 minutes ago, AUS-97HORN said:

jesus... I know Crowdstrike fucked the football here, but how the fuck is Delta still that down 4 full fucking days after the resolution was named and published?

 

They outsourced crew scheduling/positioning to Southwest.


1 hour ago, AUS-97HORN said:

jesus... I know Crowdstrike fucked the football here, but how the fuck is Delta still that down 4 full fucking days after the resolution was named and published?

 

Delta has probably outsourced most of its IT staff, so it's probably still in the discovery phase of unfucking things. Laying off all that tribal knowledge exposes your org to significant operational risk if things aren't sufficiently documented. 

 

And things are never sufficiently documented 

  • Hook 'Em 1
