• aard@kyu.de
    8 months ago

    Short version: a bunch of shitty companies have a business model of repackaging open vulnerability databases and selling them to other companies for tracking security vulnerabilities - at pretty much zero effort to themselves. So they’ve been bugging the kernel folks to start issuing CVEs and doing impact analysis so they’d have more to sell - and the kernel folks just went “it’s the kernel, everything is critical”

    tl;dr: this is pretty much an elaborate “go fuck yourself” towards shady ‘security’ companies.

    • Cosmic Cleric@lemmy.world
      8 months ago

      and the kernel folks just went “it is the kernel, everything is critical”

      tl;dr: this is pretty much an elaborate “go fuck yourself” towards shady ‘security’ companies.

      Apologies for my ignorance, but could you elaborate?

      I’m sincerely not seeing how declaring everything critical amounts to a “go fuck yourself” towards those companies.

      Is it a ‘death by quantity’ thing?

      Anti Commercial-AI license (CC BY-NC-SA 4.0)

      • aard@kyu.de
        7 months ago

        Is it a ‘death by quantity’ thing?

        Pretty much that - those companies rely on open projects to do the sorting for them: they scrape open databases and sell the good data they pull from there. That’s why they were complaining about the kernel - the required info was already there, you just needed to put in the effort, so they kept asking for CVEs. Now they’ve got their CVEs - but to profit from them they’d still need to put in the same effort as they would have without CVEs in place.

        • taladar@sh.itjust.works
          7 months ago

          the info required was there already, just you needed to put effort in

          Not really - that’s mostly what this is all about. The companies insist that open source projects do security impact analysis in addition to fixing the bugs, whenever some “security researcher” runs a low-effort fuzzing or static analysis tool that produces large numbers of bug reports and assigns CVEs to them without the project’s consent. The problem is that such impact analysis is significant effort (often orders of magnitude more than the fix itself), requires people with deep knowledge of the code base, and is only really useful to the customers of those companies who want to update selectively instead of just applying all the latest fixes.

  • TheFool@infosec.pub
    8 months ago

    What’s happened?

    The Linux kernel project has become its own CVE Numbering Authority (CNA) with two very notable features:

    • CVE identifiers will only be assigned after a fix is already available and in a release; and
    • the project will err on the side of caution, and assign CVEs to all fixes.

    This means each new kernel release will contain a lot of CVE fixes. 

    So what?

    This could contribute to a significant change in behaviour for commercial software vendors.

    The kernel project has long advocated updating to the latest stable release in order to benefit from fixes, including security patches. They’re not the only ones: Google has analysed this topic and Codethink talks extensively about creating software with Long Term Maintainability baked in.

    But alas, a general shift to this mentality appears to elude us: the prevalent attitude amongst the majority of commercial software vendors is still very much “ship and forget”.

    Consider the typical pattern: SoC vendors base their BSP on an old and stable Linux distribution. Bespoke development occurs on top of this, and some time later, a product is released to market. By this point, the Linux version is out of date, quite likely unsupported and almost certainly vulnerable from a security perspective.
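    The lag described above can be made concrete. Here is a minimal, illustrative sketch (not real tooling; the version numbers and series list are made up for the example - in practice you would compare against kernel.org release data) that counts how many stable series a shipped BSP kernel is behind:

```python
# Illustrative sketch: quantify how far a shipped BSP kernel lags behind
# the current stable series. All version data here is hypothetical.

def parse_version(v: str) -> tuple:
    """Turn a kernel version string like '5.10.120' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def series_behind(shipped: str, current: str, known_series: list) -> int:
    """Count stable series released after the shipped kernel, up to current."""
    shipped_series = parse_version(shipped)[:2]
    current_series = parse_version(current)[:2]
    series = [parse_version(s)[:2] for s in known_series]
    return len([s for s in series if shipped_series < s <= current_series])

# Hypothetical example: a product shipped on a 5.10-based BSP while
# mainline has moved on through these stable series.
series_list = ["5.10", "5.15", "6.1", "6.6"]
print(series_behind("5.10.120", "6.6.15", series_list))  # → 3
```

    With every kernel release now carrying CVE assignments, a gap like this stops being an abstract "we're a few versions behind" and becomes a countable pile of unaddressed CVE fixes.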

    Now, fair enough, upgrading your kernel is non-trivial: it needs to be carefully thought through, requires extensive testing, and often demands careful planning to ensure collaboration between different parties, especially if you have dependencies on vendor blobs or other proprietary components. Clearly, this kind of thing needs to be considered from day one of a new project. Sadly, in practice, upgrading often simply isn’t planned for at all.

    And now?

    With the Linux kernel project becoming a CNA, we’ll now have a situation where every new kernel release highlights the scale of how far behind mainline these products are, and by implication how exposed to security vulnerabilities the software is. 

    The result should be increased pressure on vendors to upgrade.

    With this, plus the recent surge in regulations around keeping software up to date (see the CRA, UNECE R155 and R156), we may start to see a genuine movement towards software being designed to be properly maintained and updated, i.e., “ship and remember” or Long Term Maintainability. Let’s hope so.

    What else?

    Well, the Linux kernel is just one project. There are countless other FOSS projects which are depended on by almost all commercial projects, and they may also be interested in becoming their own CNA. 

    This would further increase the visibility of the problem, and apply a renewed focus on the criticality of releasing software products with plans to upgrade built in from the start.

    If you would like to learn more about CNAs or Codethink’s Long Term Maintainability approach, reach out via .