• Delta Air Lines CEO Ed Bastian said the massive IT outage earlier this month that stranded thousands of customers will cost it $500 million.
  • The airline canceled more than 4,000 flights in the wake of the outage, which was caused by a botched CrowdStrike software update and took thousands of Microsoft systems around the world offline.
  • Bastian, speaking from Paris, told CNBC’s “Squawk Box” on Wednesday that the carrier would seek damages from the disruptions, adding, “We have no choice.”
  • Poem_for_your_sprog@lemmy.world · 4 months ago

    Why do news outlets keep calling it a Microsoft outage? It’s only a CrowdStrike issue, right? Microsoft doesn’t have anything to do with it?

    • jmcs@discuss.tchncs.de · 3 months ago

      Because Microsoft could have prevented it by introducing proper APIs in the kernel, like Linux did after CrowdStrike caused a similar crash with its Linux product?

    • cheddar@programming.dev · 4 months ago

      The answer is simple: they have no idea what they are talking about. And that is true for almost every topic they are reporting about.

    • Echo Dot@feddit.uk · 4 months ago

      It’s sort of 90% one and 10% the other. Mostly it’s a CrowdStrike problem, but Microsoft really should make sure their operating system doesn’t continuously boot-loop when a driver is failing. It should be able to detect that and shut down the affected driver. Of course, equally, the driver shouldn’t be crashing just because it doesn’t understand some code it’s being fed.

      Also there is an argument to be made that Microsoft should have pushed back harder on allowing CrowdStrike to effectively bypass its kernel testing policies, since that obviously negates the whole point of the tests.

      Of course, both of these issues also exist on Linux, so it’s not as if this is a uniquely Microsoft problem.
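A boot-loop mitigation like the one described above could, in very rough terms, look like this. This is a minimal hypothetical sketch; the threshold, function names, and crash-log format are all invented for illustration and are not how Windows actually works:

```python
# Hypothetical sketch: if the same driver is blamed for several failed
# boots in a row, quarantine it so the next boot skips loading it.

CRASH_THRESHOLD = 3  # consecutive failed boots before quarantining a driver

def next_boot_plan(crash_log, quarantined):
    """crash_log: driver names blamed for each failed boot, most recent last.
    Returns the set of drivers to skip on the next boot."""
    quarantined = set(quarantined)
    if crash_log:
        last = crash_log[-1]
        # Count the trailing run of crashes blamed on the same driver.
        run = 0
        for driver in reversed(crash_log):
            if driver != last:
                break
            run += 1
        if run >= CRASH_THRESHOLD:
            quarantined.add(last)
    return quarantined
```

After three consecutive crashes blamed on the same driver, `next_boot_plan(["bad.sys"] * 3, set())` returns `{"bad.sys"}`, and the machine could boot without that driver instead of looping.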

      • smeenz@lemmy.nz · 3 months ago

        The CrowdStrike driver has the boot_critical flag set, which prevents exactly what you describe from happening.

        • Echo Dot@feddit.uk · 3 months ago

          Yeah, I know, but booting into safe mode disables the flag, so you can still boot even if a driver is marked boot-critical. The flag only applies during normal operation.

    • rekorse@lemmy.world · 3 months ago

      It’s sort of like calling the terrorist attacks of 9/11 “the day the towers fell.”

      Although in my opinion Microsoft does share some blame here, not for this individual outage, but for Windows being a shit system and for tricking people into relying on it.

    • skuzz@discuss.tchncs.de · 4 months ago

      Honestly, with how badly Windows 11 has degraded over the last 8 or 9 months, it’s probably good to turn up the heat on MS even if it isn’t completely deserved. They’re pissing away their operating system goodwill so fast.

      There have been some discussions on other Lemmy threads, the tl;dr is basically:

      • Microsoft has a driver certification process called WHQL.
      • This would have caught the CrowdStrike glitch before it ever went to production, as the process runs an extreme set of tests and validations.
      • AV companies get to circumvent this process, even though other driver vendors have to use it.
      • The part of CrowdStrike that broke Windows, however, likely wouldn’t have been part of the WHQL certification anyways.
      • Some could argue software like this shouldn’t be kernel drivers, maybe they should be treated like graphics drivers and shunted away from the kernel.
      • These tech companies are all running too fast and loose with software and it really needs to stop, but they’re all too blinded by the cocaine dreams of AI to care.
      • corsicanguppy@lemmy.ca · 4 months ago

        They’re pissing away their operating system goodwill so fast.

        They pissed it away {checks DoJ v. Microsoft} 25 years ago.

        • skuzz@discuss.tchncs.de · 3 months ago

          Windows 7 and especially 10 started changing the tune. 10: Linux and Android apps running integrated to the OS, huge support for very old PC hardware, support for Android phone integration, stability improvements like moving video drivers out of the kernel, maintaining backwards compatibility with very old apps (1998 Unreal runs fine on it!) by containerizing some to maintain stability while still allowing old code to run. For a commercial OS, it was trending towards something worth paying for.

    • Rekhyt@lemmy.world · 4 months ago

      It was a Crowdstrike-triggered issue that only affected Microsoft Windows machines. Crowdstrike on Linux didn’t have issues and Windows without Crowdstrike didn’t have issues. It’s appropriate to refer to it as a Microsoft-Crowdstrike outage.

      • Poem_for_your_sprog@lemmy.world · 4 months ago

        I guess “Microsoft-CrowdStrike” is fair, since the OS doesn’t have any kind of protection against a shitty antivirus destroying it.

        I keep seeing articles that just say “Microsoft outage”, even on major outlets like CNN.

        • Dran@lemmy.world · 4 months ago

          To be clear, an operating system in an enterprise environment should have mechanisms to access and modify core system functions. Guard-railing anything that could cause an outage like this would make Microsoft a monopoly provider in any service category that requires this kind of access to work (antivirus, auditing, etc). That is arguably worse than incompetent IT departments hiring incompetent vendors to install malware across their fleets resulting in mass-downtime.

          The key takeaway here isn’t that Microsoft should change windows to prevent this, it’s that Delta could have spent any number smaller than $500,000,000 on competent IT staffing and prevented this at a lower cost than letting it happen.

          • Echo Dot@feddit.uk · 4 months ago

            Delta could have spent any number smaller than $500,000,000 on competent IT staffing and prevented this at a lower cost than letting it happen.

            I guarantee someone in their IT department raised the point of not just blindly downloading updates. I can guarantee they advised testing them first, because any borderline competent IT professional knows this stuff. I can also guarantee they were ignored.

            • ricecake@sh.itjust.works · 4 months ago

              Also, part of the issue is that the update rolled out in a way that bypassed deployments having auto updates disabled.

              You did not have the ability to disable this type of update or control how it rolled out.

              https://www.crowdstrike.com/blog/falcon-content-update-preliminary-post-incident-report/

              Their fix for the issue includes “slow rolling their updates”, “monitoring the updates”, “letting customers decide if they want to receive updates”, and “telling customers about the updates”.

              Delta could have done everything by the book regarding staggered updates and testing before deployment, and it wouldn’t have made any difference at all. (They’re an airline, so they probably didn’t, but it wouldn’t have helped if they had.)
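For what it’s worth, the fixes CrowdStrike lists in that report amount to a ring-based staggered rollout. A rough sketch of the idea, with every name and number invented for illustration:

```python
import zlib

def ring_for(host_id, num_rings=4):
    # Deterministically assign each host to a rollout ring (0..num_rings-1),
    # so the same host always lands in the same ring.
    return zlib.crc32(host_id.encode()) % num_rings

def update_allowed(ring, hours_since_release, soak_hours=24):
    # Ring 0 updates immediately; each later ring waits one more soak
    # period, so a bad update hits only a fraction of the fleet at first.
    return hours_since_release >= ring * soak_hours
```

The point of the thread above is that the broken channel files ignored any such gating: they went to every host at once, regardless of policy.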

              • corsicanguppy@lemmy.ca · 4 months ago

                Delta could have done everything by the book

                Except pretty much every paragraph in ISO27002.

                That book?

                Highlights include:

                • ops procedures and responsibilities
                • change management (ohh. That’s a good one)
                • environmental segregation for safety (ie don’t test in prod)
                • controls against malware
                • INSTALLATION OF SOFTWARE ON OPERATIONAL SYSTEMS
                • restrictions on software installation (ie don’t have random fuckwits updating stuff)

                …etc. like, it’s all in there. And I get it’s super-fetch to do the cool stuff that looks great on a resume, but maybe, just fucking maybe, we should be operating like we don’t want to use that resume every 3 months.

                External people controlling your software rollout by virtue of locking you into some cloud bullshit for security software, when everyone knows they don’t give a shit about your apps security nor your SLA?

                Glad Skippy’s got a good looking resume.

                • ricecake@sh.itjust.works · 4 months ago

                  Yes, that book. Because the software indicated to end users that they had disabled or otherwise asserted appropriate controls on the system updating itself and its update process.

                  That’s sorta the point of why so many people are so shocked and angry about what went wrong, and why I said “could have done everything by the book”.

                  As far as the software communicated to anyone managing it, it should not have been doing updates, and CrowdStrike didn’t advertise that it updated certain definition files outside of the exposed settings, nor did it communicate that those changes were happening.

                  Pretend you’ve got a nice little fleet of servers. Let’s pretend they’re running some vaguely responsible Linux distro, like CentOS or Ubuntu.
                  Pretend that nothing updates without your permission, so everything is properly by the book. You host local repositories that all your servers pull from so you can verify every package change.
                  Now pretend that, unbeknownst to you, Canonical or Red Hat had added a little thing to dnf or apt to let it install really important updates really fast, and it didn’t pay any attention to any of your configuration files, not even the setting that says “do not under any circumstances install anything without my express direction”.
                  Now pretend they use this to push out a kernel update that patches your kernel into a bowl of lukewarm oatmeal and reboots your entire fleet into the abyss.
                  Is it fair to say that the admin of this fleet is a total fuckup for using a vendor that, up until this moment, was generally well regarded, commonly used, and presented no real reason for doubt? Even though they used software that connected to the Internet, and maybe even paid for it?

                  People use tools that other people build. When the tool does something totally insane that they specifically configured it not to, it’s weird to just keep blaming them for not doing everything in-house. Because what sort of asshole airline doesn’t write their own antivirus?
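The “verify every package change” step in the analogy above could be as simple as gating installs on a manifest of vetted digests. This is a hypothetical sketch; the function and manifest format are invented for illustration:

```python
import hashlib

def approved(name, data, manifest):
    """Allow an install only if this exact build was vetted: the package
    appears in the manifest and its bytes hash to the recorded digest."""
    expected = manifest.get(name)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected
```

An out-of-band update channel defeats a gate like this entirely, because the new bytes never pass through it; that is exactly the complaint in the comment above.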

                  • rekorse@lemmy.world · 3 months ago

                    General practices aside, shouldn’t they have planned some backup system, though? CrowdStrike didn’t cause $500 million in damages to Delta; Delta’s disaster recovery response did.

                    Where we draw the line, though, I’m not sure. If you set my house on fire but the fire department just stands outside and watches it burn for no reason, who should I be upset with?