Lessons learned from CrowdStrike outages on releasing software updates


Endpoint detection software from CrowdStrike made headlines for causing global outages on Windows machines last Friday, leading to over 45,000 flight delays and more than 5,000 cancellations, along with disruptions to a number of other services, including payment systems, healthcare providers, and 911 operations.

The cause? An update pushed by CrowdStrike to Windows machines triggered a logic error that caused devices to crash with the Blue Screen of Death (BSOD). Even though CrowdStrike pulled the update fairly quickly, affected computers had to be fixed individually by IT teams, leading to a lengthy recovery process.

While we don’t know specifically what CrowdStrike’s testing process looked like, there are a number of basic steps that companies releasing software should be taking, explained Dr. Justin Cappos, professor of computer science and engineering at NYU. “I’m not gonna say they didn’t do any testing, because I don’t know … Fundamentally, while we have to wait for a little more detail to see what controls existed and why they weren’t effective, it’s clear that somehow they had massive problems here,” said Cappos.

He says that one thing companies should be doing is rolling out major updates gradually. Paul Davis, field CISO at JFrog, agrees, noting that whenever he’s led security for companies, any major update to the software would be deployed slowly and its impact carefully monitored.

Davis said that issues were first reported in Australia, and in his past experience, his teams would keep a particularly close eye on users in that country after an update, because Australia’s workday starts so much earlier than in the rest of the world. If there was a problem there, the rollout would be stopped immediately, before it had the chance to impact other countries later in the day.

“In CrowdStrike’s situation, they would have been able to reduce the impact if they had time to block the distribution of the errant file if they had seen it earlier, but until we see the timeline, we can only guess,” he said. 
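As a rough illustration of that kind of staged rollout, the sketch below deploys an update one wave of regions at a time, watches crash telemetry, and halts before later waves ever receive it. The wave ordering, thresholds, and telemetry functions are hypothetical placeholders, not CrowdStrike’s or JFrog’s actual tooling.

    # Hypothetical staged rollout: ship to a small early-adopter wave first,
    # watch crash telemetry, and halt before the update reaches anyone else.
    # All region names, thresholds, and service calls are illustrative.

    import time

    ROLLOUT_WAVES = [
        ["australia"],                      # early workday, small blast radius
        ["japan", "singapore"],
        ["europe"],
        ["north-america", "south-america"],
    ]
    ERROR_RATE_THRESHOLD = 0.01             # halt if >1% of updated hosts crash
    SOAK_MINUTES = 60                       # observation window per wave


    def deploy_update(update_id: str, regions: list[str]) -> None:
        """Placeholder for pushing the update to hosts in the given regions."""
        print(f"Deploying {update_id} to {regions}")


    def crash_rate(update_id: str, regions: list[str]) -> float:
        """Placeholder for querying crash/BSOD telemetry from updated hosts."""
        return 0.0


    def halt_rollout(update_id: str, reason: str) -> None:
        """Placeholder for blocking further distribution of the errant file."""
        print(f"HALTED {update_id}: {reason}")


    def staged_rollout(update_id: str) -> bool:
        for wave in ROLLOUT_WAVES:
            deploy_update(update_id, wave)
            time.sleep(SOAK_MINUTES * 60)   # let the wave soak before widening
            rate = crash_rate(update_id, wave)
            if rate > ERROR_RATE_THRESHOLD:
                halt_rollout(update_id, f"crash rate {rate:.1%} in {wave}")
                return False                # later waves never get the update
        return True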

Cappos said that all software development teams also need a way to roll systems back to a previously known good state when issues are discovered.

“And whether that’s something that every vendor should have to figure out for themselves or Microsoft should provide a common good platform, we can maybe debate that, but it’s clear there was a huge failure here,” he said. 
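One common way to provide that capability, sketched below purely as an assumption (the paths, symlink layout, and health check are hypothetical, not Microsoft’s or any vendor’s actual mechanism), is to keep the last known good version on disk and switch back to it automatically when a new version fails a health check.

    # Minimal sketch of keeping a known good version on disk and rolling back
    # to it automatically; the layout and health check are illustrative only.

    from pathlib import Path

    RELEASES = Path("/opt/agent/releases")   # every deployed version kept on disk
    CURRENT = Path("/opt/agent/current")     # symlink pointing at the active version


    def activate(version: str) -> None:
        """Atomically repoint the 'current' symlink at the requested version."""
        tmp = CURRENT.with_suffix(".tmp")
        tmp.unlink(missing_ok=True)
        tmp.symlink_to(RELEASES / version)
        tmp.replace(CURRENT)                 # atomic rename on POSIX filesystems


    def healthy() -> bool:
        """Placeholder: did the agent start, load its content, and stay up?"""
        return True


    def deploy_with_rollback(new_version: str, last_good_version: str) -> None:
        activate(new_version)
        if not healthy():
            # Roll back to the previously good state instead of leaving the
            # machine stuck in a crash loop that needs hands-on IT recovery.
            activate(last_good_version)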

Claire Vo, chief product officer at LaunchDarkly, agrees, adding: “Your ability to contain, identify, and remediate software issues is what makes the difference between a minor mishap and a major, brand-impacting event.” She believes that software bugs are inevitable and that everyone should operate under the assumption that they will happen.

She recommends that software development teams decouple deployments from releases, do progressive rollouts, use feature flags that can power runtime fixes, and automate monitoring so that they can “contain the blast radius of any issues.”
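In code, decoupling deployment from release often comes down to wrapping new behavior in a flag that can be flipped at runtime without shipping anything. The sketch below uses a stand-in flag lookup rather than LaunchDarkly’s SDK or any particular product; the function and flag names are assumptions.

    # Illustrative sketch of a feature flag acting as a runtime kill switch.
    # The flag client, flag key, and parser functions are all hypothetical.


    def flag_enabled(flag_key: str, default: bool = False) -> bool:
        """Placeholder for a flag lookup that can be flipped without redeploying."""
        return default


    def parse_content_update(raw: bytes) -> None:
        """New parsing logic shipped in the latest deploy."""


    def parse_content_update_legacy(raw: bytes) -> None:
        """Previous, known good code path."""


    def handle_update(raw: bytes) -> None:
        # The new code is deployed everywhere but only released when the flag
        # is turned on; flipping it back off is the runtime fix that contains
        # the blast radius if monitoring reports crashes.
        if flag_enabled("new-content-parser", default=False):
            parse_content_update(raw)
        else:
            parse_content_update_legacy(raw)

Turning such a flag off is an operation on the flag service rather than a redeploy, which is what makes it fast enough to act as an emergency brake.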

Marcus Merrell, principal test strategist at Sauce Labs, also believes that companies need to assess the potential risk of any software release they’re planning. 

“The equation is simple: what is the risk of not shipping code versus the risk of shutting down the world,” he said. “The vulnerabilities fixed in this update were pretty minor by comparison to ‘planes don’t work anymore’, and will likely have the knock-on effect of people not trusting auto-updates or security firms full stop, at least for a while.”

Despite what went wrong last week, Cappos says this isn’t a reason to stop regularly updating software, as software updates are crucial to keeping systems secure.

“Software updates themselves are essential,” he said. “This is not a cautionary tale against software updates … Do take this as a cautionary tale about vendors needing to do better software supply chain QA. There are tons of things out there, many are free and open source, many are used widely within industry. This is not a problem that no one knows how to solve. This is just an issue where an organization has taken inadequate steps to handle this and brought a lot of attention to a really important issue that I hope gets fixed in a good way.”

