In a wide-ranging conversation with Verizon open-source officer Dirk Hohndel, ‘plodding engineer’ Linus Torvalds discussed where Linux is today and where it may go tomorrow.

As for release numbers, Torvalds reminded everyone yet again that they mean nothing. Hohndel said, “You typically change the major number around 19 or 20, because you get bored.” No, replied Torvalds, it’s because, “when I can’t count on my fingers and toes anymore it’s time for another ‘major’ release.”

So, what should you do about the constant weekly flow of Linux security bug fixes? Greg Kroah-Hartman, the maintainer of the Linux stable kernel, thinks you should constantly update to the newest, most secure stable Linux kernel. Torvalds agrees but can see the case for sticking with older kernels and relying on less frequent security patch backports.

Switching to a more modern topic, the introduction of the Rust language into Linux, Torvalds is disappointed that its adoption isn’t going faster. “I was expecting updates to be faster, but part of the problem is that old-time kernel developers are used to C and don’t know Rust. They’re not exactly excited about having to learn a new language that is, in some respects, very different. So there’s been some pushback on Rust.”
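For readers who haven't seen what that shift looks like, below is a rough sketch of a minimal Rust kernel module, modeled loosely on the kernel's in-tree rust_minimal sample. The module! macro fields, the kernel::Module trait, and the pr_info! call follow that sample, but exact names and signatures have changed between kernel versions, so treat this as illustrative rather than definitive.

    // A minimal Rust kernel module sketch, loosely following the in-tree
    // samples/rust/rust_minimal.rs. Field names and trait signatures are
    // assumptions that vary between kernel versions.
    use kernel::prelude::*;

    module! {
        type: HelloRust,
        name: "hello_rust",                               // hypothetical module name
        author: "Example Author",                         // placeholder
        description: "Minimal Rust kernel module sketch",
        license: "GPL",
    }

    struct HelloRust;

    impl kernel::Module for HelloRust {
        fn init(_module: &'static ThisModule) -> Result<Self> {
            // Runs on module load; pr_info! writes to the kernel log.
            pr_info!("hello_rust: loaded\n");
            Ok(HelloRust)
        }
    }

    impl Drop for HelloRust {
        fn drop(&mut self) {
            // Runs on module unload.
            pr_info!("hello_rust: unloaded\n");
        }
    }

Even in this toy form, the macro-driven boilerplate and Result-based error handling read very differently from the C that long-time kernel developers are used to.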

The pair then moved on to the hottest of modern tech topics: AI. While Torvalds is skeptical about the current AI hype, he is hopeful that AI tools could eventually aid in code review and bug detection.

In the meantime, though, Torvalds is happy about AI’s side effects. For example, he said, “When AI came in, it was wonderful, because Nvidia got much more involved in the kernel. Nvidia went from being on my list of companies who are not good to my list of companies who are doing really good work.”

  • Churbleyimyam · 20 days ago

    Yeah I guess Nvidia have had to grow up pretty fast recently.

    • CameronDev@programming.dev · 20 days ago

      Nvidia have been big kernel contributors for a long time, even before the “fuck you nvidia” thing. They hold their graphics driver close to their chest, but have done a lot of other work for the kernel.

      • teawrecks@sopuli.xyz · 20 days ago

        What’s an example? I would have thought, back then especially, their driver (and maybe nvapi) was most of the software they shipped.

        • CameronDev@programming.dev · 20 days ago

          My memory is fuzzy, but they have had their Tegra SoC since the 2000s, and somewhat more recently they have been a big player in data center networking.

          And ever since CUDA became a thing they have been a big name in HPC and supercomputers, which are usually Linux-based.

          So they have done a lot of behind-the-scenes Linux work (and possibly BSD?).

          • teawrecks@sopuli.xyz · 19 days ago

            Yeah, afaik the Tegra was only used for embedded, closed-source devices though, no? Did they submit any non-proprietary Tegra support upstream?

            And afaik CUDA has also always been proprietary bins. Maybe you mean they had to submit upstream fixes here and there to get their closed-source stuff working properly?

            • CameronDev@programming.dev · 19 days ago

              Tegra was used in Android tablets; I had a couple. Not sure what the licence status was, but it was supported in CyanogenMod, so they must have had to make some changes to the kernel for that?

              Certainly some of the stuff they upstreamed was to support their drivers, but they would have also been working on other, more general things to support their supercomputers and other HPC stuff.

              They also had a chipset for Intel motherboards (which I can’t find anything about), which may have required some work?

              I don’t really know the exact scope of all the work, but they have been in the top 20 companies for kernel development for a long time, and I assume it can’t just be supporting their own drivers.

              It’s hard to find the stats, but from here: https://bootlin.com/community/contributions/kernel-contributions/ you can click through and get breakdowns per kernel release: https://web.archive.org/web/20160803012713/remword.com/kps_result/3.8_whole.html