While the exact details of this vulnerability are still being investigated (see here if you want to catch up on the topic), I wanted to share some of the thoughts I had about what this incident means for the wider open source ecosystem.

TL;DR: These are the main points I found remarkable in this entire development:

  • A backdoor was snuck relatively openly into an open source project
  • It was done by a somewhat trusted maintainer
  • The target was not even xz itself, but rather sshd through an obscure chain of dependencies
  • Luckily, it was discovered within a few weeks, before the backdoored version could be widely adopted

Obviously, there are many examples of security vulnerabilities occurring in open source software. But these are usually due to oversights or mistakes by well-meaning developers that end up enabling critical exploits. The xz backdoor, however, was clearly constructed with malicious intent and considerable effort, aimed at a precise target. Does anybody know of another vulnerability in a high-profile open source project that is similar in that sense?

This was only possible because the malicious actor, operating under the pseudonym Jia Tan, had direct write access to the xz repository as a maintainer. I don’t think it is unreasonable to assume that, with enough time and effort, anyone can gain maintainer access to openly developed projects like xz; that is part of the beauty of the democratic process in open source. But what this incident shows is that for projects as widely used as xz, even changes coming from seemingly trusted maintainers should be properly reviewed. I don’t mean to say that the original maintainer Lasse Collin is at fault here, or that he should have prevented it; that is too much of a burden to expect from a single person. Instead, I think the large tech corporations should put more resources into vetting the kinds of open source projects that so much of their infrastructure relies on (in fact, this backdoor seems to mainly target servers).

Even just looking at the source code, the backdoor was very cleverly hidden in binary test files for the compression algorithm. These things are always easy to say in hindsight, but I do believe that a closer review of the build system shenanigans used to install the backdoor would have at least raised some questions. There was simply too much luck involved in the discovery of the backdoor, which came down to someone noticing ssh access taking 0.5 seconds longer than usual.

This isn’t really news, but this incident again shows that just as a chain is only as strong as its weakest link, a program is only as strong as its weakest dependency. The fact that the backdoor simply hooks into the dynamic library loading process and completely hijacks authorization functions of sshd from inside xz is pretty scary. Maybe this will encourage developers to be more careful and sparing with adding dependencies. However, to be honest, until recently I would have blindly trusted xz as a very safe dependency due to its popularity and relatively simple use case.
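
To give a rough idea of the mechanism involved: glibc’s GNU IFUNC feature lets a library run a “resolver” function while the dynamic loader is still resolving symbols, before main() even starts. Below is a minimal, harmless sketch of that pattern (GCC/glibc-specific; the names and checksum logic are made up). The actual backdoor abused this same early-execution stage in a far more elaborate way to redirect functions inside sshd, which on the affected distributions loads liblzma indirectly through libsystemd.

/* Minimal IFUNC sketch (GCC/glibc-specific, hypothetical names).
 * The resolver runs inside the dynamic loader, before main(), and
 * decides which implementation the symbol "checksum" binds to.
 * A legitimate resolver picks a CPU-optimized variant; a malicious
 * one gets a quiet, very early place to run arbitrary code. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t checksum_generic(const uint8_t *p, size_t n)
{
    uint32_t sum = 0;
    while (n--)
        sum += *p++;
    return sum;
}

static uint32_t checksum_fast(const uint8_t *p, size_t n)
{
    /* Pretend this is a SIMD-optimized variant. */
    return checksum_generic(p, n);
}

/* Resolver: executed during relocation processing at load time. */
static uint32_t (*resolve_checksum(void))(const uint8_t *, size_t)
{
    return checksum_fast;
}

uint32_t checksum(const uint8_t *p, size_t n)
    __attribute__((ifunc("resolve_checksum")));

int main(void)
{
    uint8_t data[] = { 1, 2, 3 };
    printf("%u\n", checksum(data, sizeof data));
    return 0;
}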

Because it opens a backdoor into ssh servers, this is a very critical issue, and there was clearly a lot of time and effort put into making it seem innocuous and hard to detect. I’m very glad that it was found and patched when it was, but it does leave me wondering what else is out there. It would be illusory to think that such attack vectors always get found out eventually.

  • kbal@fedia.io

    Too many people are acting as if this type of attack is somehow peculiar to open source projects. Very similar methods can be — and by now probably have been — used against closed-source non-free software. The level of defence against it probably varies even more in the rest of the software world, outside that part of it which is free software. How many of the vulnerabilities that have been discovered in Cisco routers were deliberately planted by outside adversaries? I don’t suppose there’s any way to know for sure. The main thing the open source nature of the project did here was make it easier to detect and impossible to cover up afterwards.

    • catloaf

      Right. If this was closed-source, it would probably never have been noticed. If he’d gotten a job somewhere and quietly worked in an exploit, how would anyone have known? As we’ve seen, the co-maintainer didn’t notice anything. Nor did anyone else looking at the project. It was because of a weird change in execution times, and because the person who noticed could investigate. If it had been closed-source, they would have had to open a support case to get more info, with someone who neither knew nor cared about what they were talking about.

    • Aniki 🌱🌿

      A thousand times this. Open source was the reason this wasn’t worse, and this should really call closed-source software into question more than anything. Didn’t the Snowden docs specifically show that exact thing happening at Cisco? NSA goons working at senior developer levels in commercial companies, sneaking in back-doors?

  • chameleon@kbin.social

    These things are always easy to say in hindsight, but I do believe that a closer review of the build system shenanigans used to install the backdoor would have at least raised some questions.

    Nobody noticed it because nobody is reviewing autotools spaghetti and especially not autotools spaghetti that only exists as shipped in a tarball. Minor differences in those files are perfectly normal as the contents of them are copied in from the shared autoconf-archive project, but every distro ships a different version of that, so what any given thing looks like will depend on the maintainer’s computer. And hardly anybody has a good understanding of what any given line in a .m4 file will ultimately cause to be executed anyway, so why bother investigating any differences? The maintainer of Meson has a good take on this.

    Shipping tarballs without any form of generated files and having a process to validate release tarballs against the repo would be a good step, but that is much easier said than done for a variety of reasons. The same can be said for shipping without any form of binary files in the repo: there’s quite high value in integration tests, and xz’s README for the test blobs has correctly included this paragraph for 16 years:

    Many of the files have been created by hand with a hex editor, thus there is no better “source code” than the files themselves.

    • Gobbel2000@programming.dev (OP)

      I (luckily) haven’t had much experience using autotools, but I do suppose it was no coincidence that the injection was initiated there. I really like the comparison made in the post by the Meson maintainer you linked:

      Several “undefeatable” fortresses have been taken over by attackers entering via sewage pipes.

    • Corngood@lemmy.ml

      Many of the files have been created by hand with a hex editor, thus there is no better “source code” than the files themselves.

      I don’t buy that. There would have been some rationale behind the contents that could be automated, like “compressed file with bytes 3-7 in the header zeroed”.
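
      For instance, a hypothetical generator for such a fixture (file names made up for illustration) could replace an opaque committed blob with a rule anyone can review:

      /* Hypothetical: build "bad-header.xz" from a known-good stream by
       * zeroing bytes 3-7, instead of committing an unreviewable blob. */
      #include <stdio.h>
      #include <stdlib.h>

      int main(void)
      {
          FILE *in = fopen("good.xz", "rb");
          FILE *out = fopen("bad-header.xz", "wb");
          if (!in || !out) {
              perror("fopen");
              return EXIT_FAILURE;
          }

          long pos = 0;
          int c;
          while ((c = fgetc(in)) != EOF) {
              /* Zero exactly the bytes named in the rule. */
              fputc((pos >= 3 && pos <= 7) ? 0 : c, out);
              pos++;
          }

          fclose(in);
          fclose(out);
          return 0;
      }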

      You also probably don’t need these test files to be available in the environment where the library itself is built. There are various ways you could avoid that.

      I do agree about the autotools stuff though.

      Minor differences in those files are perfectly normal as the contents of them are copied in from the shared autoconf-archive project, but every distro ships a different version of that, so what any given thing looks like will depend on the maintainer’s computer.

      This seems avoidable. We shouldn’t be copying code around like that.

      • chameleon@kbin.social

        Test files often encode states that can’t be represented through the library proper. Things like “a tree where node A is a child of B and node B is a child of A”, “the previous instruction repeated x times” where x was never set or there was no previous instruction, or weird combinations of mutually exclusive effects. More often than not, you can’t really generate those using the library itself, as libraries tend to be written to reject those kinds of invalid states (there’s only so much you can do in C, but in functional programming land, “make invalid states unrepresentable” is a straight-up mantra).
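
        As a contrived illustration of such a state (hypothetical struct, not taken from xz), a hand-built fixture can encode a parent/child cycle that the library’s own constructors would reject:

        /* Hypothetical: two nodes that are each other's parent and
         * child -- exactly the kind of invalid state a decoder still
         * has to survive when it appears in crafted input. */
        #include <stddef.h>

        struct node {
            struct node *parent;
            struct node *child;
        };

        int main(void)
        {
            struct node a = { NULL, NULL };
            struct node b = { NULL, NULL };

            /* A is B's child and B is A's child. */
            a.parent = &b;
            b.child = &a;
            b.parent = &a;
            a.child = &b;

            return (a.parent == &b && b.parent == &a) ? 0 : 1;
        }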

        Even if you did manage to do that, using the system under test to generate test data for the system under test is generally not very useful by itself; you’d need some kind of extra protection on top to make sure the actual test files continue to be identical between revisions (like hashing them). Otherwise, a major incompatibility could easily be overlooked. But that also makes it hard to make any kind of valid changes to the library at all. Worse yet, some libraries don’t implement everything needed to generate the test files: even xz is missing pieces; for example, there’s an lzip decompressor but not a compressor.
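
        A minimal sketch of that kind of pinning (hypothetical file name and digest; FNV-1a is used only to keep the example dependency-free, a real setup would want a cryptographic hash):

        /* Hypothetical fixture check: refuse to run the suite if a
         * committed test file no longer matches its pinned digest. */
        #include <stdint.h>
        #include <stdio.h>

        static uint64_t fnv1a64_file(const char *path)
        {
            FILE *f = fopen(path, "rb");
            if (!f)
                return 0;
            uint64_t h = 0xcbf29ce484222325ULL;  /* FNV-1a offset basis */
            int c;
            while ((c = fgetc(f)) != EOF) {
                h ^= (uint64_t)(unsigned char)c;
                h *= 0x100000001b3ULL;           /* FNV-1a prime */
            }
            fclose(f);
            return h;
        }

        int main(void)
        {
            /* Made-up fixture name and digest for illustration. */
            const char *fixture = "tests/files/bad-header.xz";
            const uint64_t pinned = 0x1234567890abcdefULL;

            if (fnv1a64_file(fixture) != pinned) {
                fprintf(stderr, "fixture changed: %s\n", fixture);
                return 1;
            }
            return 0;
        }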

        There are some arguments to be made for separating the test system from the main distribution, but the end result would likely be that nobody runs the test suite at all. It’s difficult enough to get distros to do it in the first place.

        • Corngood@lemmy.ml

          Yeah, that’s fair. If you want to test that you can still decompress something compressed with some random old version, you either need to keep the old algorithm around, or the data.

  • Jumuta@sh.itjust.works

    Companies that profit off of FOSS works should definitely contribute to and monitor those projects, but I’m not sure how you could force them to.

      • unalivejoy

        “My funds are getting low. I guess it’s time to hold the open source world for ransom.”